Enumerate Ollama models on the server pointed to by :host directive #447
Comments
This has come up before, see #394. Fetching the list is easy enough; the question is when it should be done. We don't want gptel to be making network requests at the time that …
I understand. Perhaps in the Transient menu there could be an option under the "models" submenu to populate the Ollama list?
Do you add and remove models from Ollama often enough that this is a concern? I'm curious. I can't run Ollama right now, but back when I had an Ollama-capable PC, I changed models maybe twice in three months.
There is no "models" submenu.
True. I just don't know where to put this code in the usual course of using gptel. There is one additional concern: how should conflicts be resolved between the two sources of model metadata?

Models explicitly defined for gptel:

```elisp
:models
'(model1
  (model2
   :capabilities (media nosystem)
   :description "description2"
   :mime-types ("image/jpeg" "image/png"))
  model3)
```

Models returned from the API call, after processing:

```elisp
:models
'((model1
   :description "description1"
   :context-window 32)
  (model2
   :capabilities (media)
   :description "description2_alternate"
   :mime-types ("image/png" "image/heic")))
```

It's clear how to update …
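One way to resolve such conflicts is a non-destructive merge in which user-supplied keys always win and API-supplied keys only fill gaps. The helper below is a hypothetical sketch (the function name `my/merge-model-entry` is not part of gptel), applied to the `model2` entries from the example above:

```elisp
;; Hypothetical sketch, not gptel's actual code: merge two model
;; entries of the form (NAME . PLIST), with keys already present in
;; the user's entry taking priority over API-returned values.
(defun my/merge-model-entry (user-entry api-entry)
  "Combine USER-ENTRY and API-ENTRY non-destructively.
Plist keys present in USER-ENTRY win; missing keys are filled in
from API-ENTRY."
  (let ((merged (copy-sequence (cdr user-entry)))
        (api-plist (cdr api-entry)))
    (while api-plist
      (unless (plist-member merged (car api-plist))
        (setq merged (plist-put merged (car api-plist) (cadr api-plist))))
      (setq api-plist (cddr api-plist)))
    (cons (car user-entry) merged)))

;; The conflicting model2 entries from the thread: the user's
;; :capabilities and :description are kept, and only the missing
;; :mime-types key is taken from the API result.
(my/merge-model-entry
 '(model2 :capabilities (media nosystem) :description "description2")
 '(model2 :capabilities (media) :description "description2_alternate"
          :mime-types ("image/png" "image/heic")))
;; => (model2 :capabilities (media nosystem) :description "description2"
;;            :mime-types ("image/png" "image/heic"))
```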
In the branch:

```elisp
(gptel--ollama-fetch-models)              ;; Update active backend if it's of type Ollama
(gptel--ollama-fetch-models "Ollama")     ;; Update backend named "Ollama"
;; OR
(gptel--ollama-fetch-models some-backend) ;; Update backend `some-backend'
```

You can test it out. The merge strategy is non-destructive, and when there is a conflict it prioritizes provided metadata over metadata returned by Ollama. The problem, as discussed, is where/when this code should run.
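One possible answer to the where/when question, sketched as an assumption rather than anything gptel ships: refresh the model list lazily, once per session, just before the transient menu opens. The variable and function names below (`my/…`) are hypothetical wiring around the branch's `gptel--ollama-fetch-models`:

```elisp
;; Hypothetical wiring, not part of gptel: fetch the Ollama model
;; list the first time the transient menu is opened, so startup and
;; request paths never block on the network.
(defvar my/ollama-models-fetched nil
  "Non-nil once the Ollama model list has been fetched this session.")

(defun my/maybe-fetch-ollama-models (&rest _)
  "Refresh Ollama models once, before `gptel-menu' opens."
  (unless my/ollama-models-fetched
    (setq my/ollama-models-fetched t)
    ;; Swallow network errors so a down server doesn't break the menu.
    (ignore-errors (gptel--ollama-fetch-models))))

(advice-add 'gptel-menu :before #'my/maybe-fetch-ollama-models)
```

This trades a one-time delay on the first menu invocation for never fetching at load time; removing the advice with `advice-remove` restores the current behavior.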
Presently, I have to add and remove all Ollama models manually by editing the `:models` list.
Would it be possible for gptel to enumerate the existing models on the fly when the transient menu opens, just as is done for the Anthropic, Gemini and OpenAI models?
Thanks for considering my request, and apologies if I am missing something.
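For reference, Ollama does expose the installed models over HTTP at the `GET /api/tags` endpoint, so the enumeration itself is straightforward. A minimal sketch (the function name `my/ollama-model-names` is hypothetical, and the default host/port is assumed) might look like:

```elisp
;; Minimal sketch: list installed Ollama model names by querying the
;; server's /api/tags endpoint (assumes Ollama's default address).
(require 'url)
(require 'json)

(defun my/ollama-model-names (&optional host)
  "Return model names from the Ollama server at HOST.
HOST defaults to \"http://localhost:11434\"."
  (let ((url (concat (or host "http://localhost:11434") "/api/tags")))
    (with-current-buffer (url-retrieve-synchronously url t t 5)
      (goto-char url-http-end-of-headers)
      ;; The response is JSON of the form {"models": [{"name": …}, …]}.
      (let ((data (json-read)))
        (mapcar (lambda (m) (alist-get 'name m))
                (alist-get 'models data))))))
```

The remaining design work, as the discussion above notes, is not the fetch itself but deciding when to run it and how to merge the result with user-provided metadata.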