Question: How to ensure my Intel Arc A770 GPU is used? #1083
Comments
Hi, fabric does not host any LLM on its own, but uses different AI vendors.
I agree and understand that Fabric itself is not the LLM, and I apologize for my naivete up front, but I guess my problem is tying Ollama to fabric and ensuring they work well together. I recently got Ollama to work with my Intel Arc A770 GPU on a side project using miniforge etc., but it's not consistent. I know that is not the concern of this project, but I love the project and the possibilities it has for everyone who can successfully run it, preferably on a GPU... HA! So, long story short: if you (or anyone) have any great suggestions or articles to try/read on my issue, I would greatly appreciate it.
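For what it's worth, a common route for Intel Arc cards is Intel's ipex-llm build of Ollama, which runs models through the SYCL backend. A minimal sketch of the environment setup, assuming an ipex-llm-enabled Ollama binary (the variable names below come from the oneAPI runtime and Intel's ipex-llm quickstart, not from fabric, so treat them as assumptions to verify against your install):

```shell
# Sketch: steering an ipex-llm Ollama build onto an Intel Arc GPU.
# Assumption: you are using Intel's ipex-llm-enabled Ollama binary.

# Pin the oneAPI runtime to the first Level Zero device (the Arc GPU)
# so it does not fall back to a CPU device.
export ONEAPI_DEVICE_SELECTOR=level_zero:0

# Ask Ollama to offload all model layers to the GPU.
export OLLAMA_NUM_GPU=999

# Persist compiled SYCL kernels between runs for more consistent startup.
export SYCL_CACHE_PERSISTENT=1

# Then start the server with these variables in its environment:
#   ./ollama serve
```

Inconsistent GPU selection is often just the runtime silently falling back to CPU, which is what pinning the device selector is meant to rule out.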
It should also be noted that Ollama has a bad habit of unloading models if they haven't been actively used in the past 5 minutes. There's an environment variable you can set to have Ollama keep the model loaded longer: set OLLAMA_KEEP_ALIVE to the delay you want before a model is unloaded.
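Concretely, that looks like the following (the 30m value is just an example; Ollama also documents -1 as "keep loaded indefinitely"):

```shell
# Sketch: keep models resident longer than Ollama's default 5 minutes.
# OLLAMA_KEEP_ALIVE is read by the server process, so set it before
# (re)starting `ollama serve`.
export OLLAMA_KEEP_ALIVE=30m   # e.g. 30 minutes; -1 = never unload

# Restart the server so it picks up the new value:
#   ollama serve
```

This matters on a GPU because every unload/reload cycle re-pays the cost of copying the model back into VRAM.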
What is your question?
After reading the documentation, I am still not clear how to get my Intel Arc A770 GPU selected to work over my CPU. Any suggestions would be appreciated. Thanks in advance.
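One way to check which device actually served a request is Ollama's own process listing, and from there you can point fabric at the same local model. A sketch, where the model and pattern names are placeholders and the fabric flags should be checked against `fabric --help` on your version:

```shell
# Sketch: confirm where Ollama ran a model, then reuse it from fabric.
# `ollama ps` lists loaded models with a PROCESSOR column such as
# "100% GPU" or "100% CPU".

# 1. Load a model once and inspect where it is resident:
#      ollama run llama3 "hello" >/dev/null
#      ollama ps
#
# 2. Configure the Ollama vendor in fabric once, then select the
#    local model per run (model/pattern names are placeholders):
#      fabric --setup
#      fabric --model llama3 --pattern summarize < article.txt
```

If `ollama ps` reports "100% CPU" even after the GPU setup above, the problem is on the Ollama side rather than in fabric.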