
How to use different LLMs and host everything locally. #271

Answered by dartpain
shawnZhang-3 asked this question in Q&A

Also please note that the default embeddings run locally, so there is no need to set EMBEDDINGS_KEY=http://xxx.xxx.xxx.xxx:5000/embed.

Finally, I recently added a swappable base_url for the openai client, so if you configure DocsGPT with LLM_NAME=openai
you can run any model you want locally behind an OpenAI-compatible server, for example vLLM
or Ollama (see the launch sketch below).
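
A rough sketch of launching either server locally; the model names here are only placeholders, not anything specific to this thread:

```bash
# Ollama: exposes an OpenAI-compatible API at http://localhost:11434/v1
ollama pull llama3      # example model; pull whichever model you actually want to use
ollama serve            # on some installs Ollama already runs as a background service

# vLLM: its OpenAI-compatible server listens on http://localhost:8000/v1 by default
python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2
```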

But please make sure you put something in api_key, like not-key.
Also make sure you specify the correct MODEL_NAME for the model you launched via the methods above (or any other).
You will also need to specify OPENAI_BASE_URL; if you are running it locally, it is probably http://localhost:11434/v1 for Ollama. Also be careful when running DocsGPT inside a container, as you will need to po…
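
Putting that together, a minimal sketch of the relevant settings, assuming the local Ollama endpoint above; the variable names follow the settings mentioned in this answer, so check your DocsGPT .env template for the exact spelling and swap in the model you actually launched:

```env
LLM_NAME=openai
API_KEY=not-key                            # any placeholder value; the client just needs something set
MODEL_NAME=llama3                          # must match the model you launched (example name)
OPENAI_BASE_URL=http://localhost:11434/v1  # Ollama default; vLLM defaults to http://localhost:8000/v1
```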
