
Running Local LLM using FastAPI and Ollama #21183

Open
sbenhoff007 opened this issue Oct 21, 2024 · 0 comments

Comments

@sbenhoff007
Collaborator

FastAPI provides a high-performance API framework for exposing LLM capabilities as a service. Ollama offers an efficient way to download and run LLM models locally. By combining the strengths of FastAPI, Ollama, and Docker, users can deploy a local LLM on their own infrastructure seamlessly.
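As a rough illustration of the idea, here is a minimal sketch of a FastAPI service that forwards prompts to a locally running Ollama server over its HTTP API. The model name (`llama3`) and the `/generate` route are assumptions chosen for the example, not something specified in this issue; Ollama's default port 11434 is assumed as well.

```python
# Minimal sketch: a FastAPI app that proxies prompts to a local Ollama server.
# Assumes Ollama is running on localhost:11434 and the model below has
# already been pulled (e.g. `ollama pull llama3`) -- adjust to your setup.
import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama generate endpoint
MODEL_NAME = "llama3"  # assumed model name for illustration

app = FastAPI(title="Local LLM service")


class PromptRequest(BaseModel):
    prompt: str


@app.post("/generate")
async def generate(req: PromptRequest):
    # Forward the prompt to Ollama and return the non-streamed completion.
    payload = {"model": MODEL_NAME, "prompt": req.prompt, "stream": False}
    async with httpx.AsyncClient(timeout=120.0) as client:
        try:
            resp = await client.post(OLLAMA_URL, json=payload)
            resp.raise_for_status()
        except httpx.HTTPError as exc:
            raise HTTPException(status_code=502, detail=f"Ollama request failed: {exc}")
    return {"response": resp.json().get("response", "")}
```

With something like this, the service can be started with `uvicorn main:app --host 0.0.0.0 --port 8000` and queried via `POST /generate`; both the FastAPI app and Ollama could then be packaged as Docker containers so the whole stack runs on local infrastructure.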
