This Q&A application uses the Llama 3.2 (1B) model, served locally via Ollama, to generate responses to user queries.
- LangChain
- OllamaLLM
- Streamlit
Download and install Ollama, then pull and run the required model:

```
ollama run llama3.2:1b
```
Ollama must be running locally for this app to work.
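Under the hood, the OllamaLLM integration talks to the local Ollama server's REST API (by default at `http://localhost:11434`). A minimal standard-library sketch of that exchange — the helper names `build_payload` and `ask_ollama` are illustrative, not part of this project:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,    # e.g. "llama3.2:1b"
        "prompt": prompt,  # the user's question
        "stream": False,   # return one JSON response instead of a token stream
    }

def ask_ollama(prompt: str, model: str = "llama3.2:1b") -> str:
    """Send a prompt to the locally running Ollama server and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

If the server is not running, the request fails with a connection error — hence the requirement above.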
Generate a LangSmith API key to get interactive tracing of your LLM requests made via LangChain.
Refer to the sample `.env` file in the project's root directory.
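A hypothetical `.env` along these lines — the variable names below are the standard LangSmith ones, but check the project's own sample file for the exact set it expects:

```
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=<your-langsmith-api-key>
LANGCHAIN_PROJECT=<your-project-name>
```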
```
pip install -r requirements.txt
streamlit run app.py
```