# Searxng and Perplexica are not working properly #60
Hi, thanks for the detailed report! There are a few items; I'll try to cover them one by one:
That's perfectly fine, but I'd recommend removing the built-in Ollama from Harbor's defaults:

```bash
# see current defaults
harbor defaults

# remove built-in ollama
harbor defaults rm ollama
```

Otherwise, all Harbor services will try to use the internal Ollama instance. It shares its model cache with the host instance, which is why you might be seeing the same models available. Note that removing it from the defaults means that the services talking to the built-in Ollama will no longer be configured to do so (you'll have to configure them manually). An alternative that avoids manual reconfiguration is to replace Harbor's internal Ollama URL with yours, as described in this issue:

```bash
# 172.17.0.1 is the IP of your host within the container
# 11434 is the port for Ollama on the host
harbor config set ollama.internal_url http://172.17.0.1:11434
```
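If you're not sure what the host's address looks like from inside the containers, you can check Docker's default bridge gateway and probe the host Ollama from there. This is a generic sketch, not part of Harbor itself; it assumes the default `bridge` network and Ollama listening on its standard port:

```bash
# Gateway of Docker's default bridge network — typically the host's
# address as seen from inside containers (often 172.17.0.1)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

# Probe the host Ollama on that address; /api/version should return
# a small JSON payload if it's reachable
curl http://172.17.0.1:11434/api/version
```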
This looks like one of two possible things:

1. Web search is enabled after the initial generation of the message. To solve, ensure that you're turning on "web search" before the first generation in the conversation; otherwise the RAG template might not be applied to the message upon the first generation (and the model won't see anything). Certain models are also very overfit to reply that they don't have access to "current" information on such requests, even though they do via RAG (Qwen 2.5 shouldn't be one of them, though I only tested up to 14B).
2. The embedding model is missing. By default, Open WebUI is configured to use `mxbai-embed-large`, so make sure it's pulled:

```bash
harbor ollama pull mxbai-embed-large:latest
```

If these don't help, I'd take a look at the verbose logs of Open WebUI itself to understand the specific prompts and content sent to the Ollama API.
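To quickly confirm the embedding model is pulled and responding, you can hit Ollama's API directly. A minimal check, assuming Ollama is reachable on its default port (the endpoints below are standard Ollama API routes):

```bash
# List pulled models — mxbai-embed-large should appear in the output
curl http://localhost:11434/api/tags

# Request a test embedding; a JSON vector in the response means the
# model can actually serve embeddings for RAG
curl http://localhost:11434/api/embeddings \
  -d '{"model": "mxbai-embed-large:latest", "prompt": "test"}'
```

For the verbose logs, `harbor logs webui` should tail the Open WebUI container output (assuming Harbor's standard `logs` subcommand).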
Unfortunately, some of the settings (the Embeddings config) can't be pre-configured, so they have to be set via the Perplexica UI. This is mentioned in the Service Wiki; I should make it more prominent there. When working with the Perplexica + Ollama combo there are a few things to keep in mind:

In your instance specifically, it'll be very slow due to a combination of factors:
> All configs are default; nothing was changed after the `git clone` and Harbor installation steps.
> I am using an Ollama instance running outside of Docker (not the Harbor Ollama), which can be accessed by the webui when used for chat.
> `harbor info`
> Searxng seems to return some results, but they don't get passed to any LLM I try.
> searxng log
> Perplexica keeps looking for an answer and never finishes.
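On the SearXNG symptom specifically, one way to confirm that SearXNG itself returns machine-readable results (which is what the downstream services consume) is to query its JSON endpoint directly. A rough sketch: it assumes the `json` format is enabled in SearXNG's settings and uses Harbor's `url` helper to resolve the service address:

```bash
# Resolve the SearXNG URL from Harbor and run a test query;
# a JSON body with a "results" array means SearXNG is fine and the
# problem is in the downstream wiring (Open WebUI / Perplexica)
curl "$(harbor url searxng)/search?q=test&format=json"
```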