Local LLM models #11
I definitely will be exploring this idea. It might take a bit of time, but expect it to be a feature in the near future. Thank you!
So, good news: the GPT4All client has added an API server mode, so you can use any LLM (including GPT-3.5/4) for the communication. Their API mode is OpenAI-compatible, so all you would have to do is allow pointing the plugin at localhost on port 4891. Ignore the previous API I mentioned, as it is no longer needed. On the open-source LLM front, progress is moving at lightning speed.
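To illustrate, here is a minimal sketch of talking to an OpenAI-compatible local server. The base URL `http://localhost:4891/v1` is GPT4All's reported default from the comment above; the model name `gpt4all-j` is a placeholder assumption.

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str):
    """Return (url, body) for a POST to the server's chat completions route."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, body

url, body = build_chat_request("http://localhost:4891/v1", "gpt4all-j", "Hello")
# POST `body` to `url` with Content-Type: application/json; an
# OpenAI-compatible server answers in the same shape as /v1/chat/completions.
```

Because the request and response shapes match OpenAI's, the only change the plugin would need is making the base URL configurable.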
@blu3knight Can we specify the host and port using this plugin?
Well, it depends on what you are referring to. For GPT4All, it is no longer a plug-in but actually part of the code. For this plug-in, I took a look: the provider settings are part of the code, in a separate file for each supported provider.
@blu3knight I am referring to this project. So it looks like extracting these settings to a file and exposing them in the config UI? Maybe duplicating the OpenAI file to a "local" LLM file first?
I am not the author of the project; I just read the code. It looks to me like changes need to be made to the config UI, and then adding a config for GPT4All would get this working, but I did not dive deeply into the code to figure everything out.
@rizerphe Great new development on the local API front: early next week it will be able to ingest and answer questions on Markdown, PDFs, and other data just by adding the directory in the GUI. So all you would need to do is ask questions about the local files. Implementation is super simple: download the Windows, Mac, or Ubuntu Linux front end, install it, add a directory (it asks some questions about it), and then, using the OpenAI API you already use, you can interact directly with the data. Based on my understanding of the current plugin, in the config you would want to expose the following for people to change: Local LLM URL, pre-defaulted to http://localhost:4891/v1 or left blank. I can help test and help write the appropriate docs, etc., if you would like.
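As a sketch, the proposed setting might look like this in the plugin's settings file (the key name is hypothetical; only the default URL comes from the comment above):

```json
{
  "localLlmUrl": "http://localhost:4891/v1"
}
```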
Reopening this because adding one provider just isn't enough |
I think that by using the OpenAI API but making the host / API key / model changeable, you can serve more than one provider. Example:
These are just three easy ones, but I think there are others that use the OpenAI API.
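The idea above can be sketched as a provider registry: one OpenAI-style client can serve several backends if the base URL (and key requirement) is configurable. The entries below are assumptions; the localhost ports are the defaults GPT4All and LocalAI are reported to use, not values from this plugin.

```python
# Hypothetical provider registry keyed by provider name.
PROVIDERS = {
    "openai":  {"base_url": "https://api.openai.com/v1", "needs_key": True},
    "gpt4all": {"base_url": "http://localhost:4891/v1",  "needs_key": False},
    "localai": {"base_url": "http://localhost:8080/v1",  "needs_key": False},
}

def endpoint(provider: str, path: str) -> str:
    """Join a provider's base URL with an API path such as 'chat/completions'."""
    return PROVIDERS[provider]["base_url"].rstrip("/") + "/" + path.lstrip("/")
```

Everything else (request and response handling) stays shared, since the providers all speak the OpenAI wire format.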
@blu3knight For that I'd also need to properly process the list of models, and OpenAI complicates this a lot. I currently have the models hard-coded. The reason I can't just fetch them is that I have to somehow differentiate completion, transcription, and chat models, and the API doesn't provide an easy way to do so. I will see what I can do, but it's more complicated than just exposing the
Hoping we get an Azure version of ChatGPT!
I'm the maintainer of https://github.com/BerriAI/litellm/, happy to make a PR to help integrate local LLM models + Azure while calling them in the ChatGPT input/output format.
Is there local-LLM support? Based on the discussion and the following snippet from the README.md, no?
Please also add LM Studio.
Would you consider supporting a local LLM model that is compatible with the OpenAI GPT API but needs a config option to be used locally?
For information, here is an API that can be used with a lot of models:
https://github.com/go-skynet/LocalAI
Embeddings support is new (mudler/LocalAI#70).