How to use different LLMs and host everything locally. #271
-
I tried to host everything locally and followed the instructions:
1. First, I ran the "Use Manifest to host the API locally" step ("just run in a separate shell; you will need 80 GB of extra space on your disk"): python3 -m manifest.api.app
3. I set the .env file.
4. Then I ran docker-compose build && docker-compose up. It seemed to work, but I got stuck: I don't know what the problem is and don't know how to do the next step.
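To be concrete, the commands from those steps (copied from the instructions) were:
# step 1: host the Manifest API locally, in a separate shell (needs ~80 GB of free disk space)
python3 -m manifest.api.app
# steps 3-4: after filling in the .env file, build and start the containers
docker-compose build && docker-compose up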
-
I also have this issue. Were you able to find the solution?
-
Also please note that the default embeddings run locally, so there is no need to set:
EMBEDDINGS_KEY=http://xxx.xxx.xxx.xxx:5000/embed
Finally, I recently added a swappable base_url for the OpenAI client, so if you configure DocsGPT with LLM_NAME=openai
you can run any model you want locally behind an OpenAI-compatible server, for example vLLM or Ollama (see the sketch below).
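For example, a minimal sketch of launching such a server (the model names are only placeholders; serve whatever model you actually want):
# Ollama: by default exposes an OpenAI-compatible API at http://localhost:11434/v1
ollama pull llama3
ollama serve
# vLLM: by default exposes an OpenAI-compatible API at http://localhost:8000/v1
python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct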
But please make sure you put something in api_key, e.g.:
not-key
Also make sure you specify the correct MODEL_NAME for the model you launched via one of the methods above (or any other).
Also, you will need to specify OPENAI_BASE_URL; if you are running it locally, it is probably
http://localhost:11434/v1
for Ollama. Also be careful when running DocsGPT inside a container, as you will need to po…
I will need to update our documentation to show a simple process.
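Putting the above together, a minimal sketch of the relevant .env entries for the Ollama case (the casing of the api key variable and the MODEL_NAME value are assumptions; adjust them to whatever model you actually launched):
LLM_NAME=openai
API_KEY=not-key
MODEL_NAME=llama3
OPENAI_BASE_URL=http://localhost:11434/v1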