Feature Request: Please also support Ollama models #17
Thanks for the suggestion! I agree this would be useful. However, support for new LLMs is not a priority at the moment, because there's still a lot of core work to be done (leveraging current LLM endpoints), and we are optimizing the system to work as well as possible with OpenAI models. But I'll leave the issue here as a feature request to keep it on our radar for the time being.
How are we going to experiment with this when we have to pay just to tinker? Feels like this should be a priority.
It should be easy to implement with python-dotenv:

`OPENAI_BASE_URL=http://localhost:11434/v1`

Ollama provides an OpenAI-compatible API on the local computer, so the calls are simply directed there.

Best regards,
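A minimal sketch of what that could look like, assuming a `.env` file next to the script (the file location and the key value are placeholders; Ollama itself ignores the API key, but the SDK requires one):

```python
# .env (example contents):
#   OPENAI_API_KEY=ollama
#   OPENAI_BASE_URL=http://localhost:11434/v1

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # loads OPENAI_API_KEY and OPENAI_BASE_URL into the environment

# The OpenAI client reads both variables automatically, so nothing else needs
# to change as long as the .env file is loaded before the client is created.
client = OpenAI()
```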
It's a quick fix you can edit. I did the following: added an endpoint setting to the config file, and updated openai_utils.py (in site-packages, or in the original file before the install) with

`self.client = OpenAI(api_key=config["OpenAI"]["API_KEY"], base_url=config["OpenAI"]["END_POINT"])  # previously os.getenv("OPENAI_API_KEY")`

So far this has worked for the base examples; I am still testing a more complex example that uses all the features.

Edit: [terminal output and LM Studio server screenshots not captured]
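The config addition itself isn't captured above; a hedged reconstruction of the whole change, assuming an INI-style config read with `configparser` and an `END_POINT` key matching the line quoted above (the file name, port, and key value are assumptions; LM Studio defaults to port 1234, Ollama to 11434):

```python
# Assumed addition to the config file (e.g. config.ini):
#
#   [OpenAI]
#   API_KEY = not-needed-for-local-servers
#   END_POINT = http://localhost:1234/v1   (LM Studio default; Ollama uses http://localhost:11434/v1)
#
# Corresponding client construction in openai_utils.py:
import configparser
from openai import OpenAI

config = configparser.ConfigParser()
config.read("config.ini")  # path is an assumption

client = OpenAI(
    api_key=config["OpenAI"]["API_KEY"],
    base_url=config["OpenAI"]["END_POINT"],
)
```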
Thx @Katzukum. It seems to me that, when working with local LLMs, you might want to comment out … Also, if you're using a resource-intensive model, consider setting …
Update: we actually have an ongoing PR proposal that would enable this: #47. As these are complex changes, it might take some time to review, but the fact that someone already did a lot of heavy lifting here helps, and I'll try to review it with care.
See #99. TL;DR:

```python
import os

os.environ["OPENAI_API_KEY"] = "ollama"
os.environ["OPENAI_BASE_URL"] = "http://localhost:11434/v1"
```
Great to hear, thank you, I'll try it out.
I'd like to run it locally. Thanks in advance.