Llama-3 Chatbot

Welcome to the Llama-3 Chatbot project! This chatbot allows you to interact with the Llama-3 model via a simple command-line interface on your local system. Type your messages, and receive responses from Llama-3.

Table of Contents

  • Installation
  • Usage
  • Examples
  • Resources
  • License

Installation

Prerequisites

Llama-3 setup

  • Open a terminal and change to the directory where Ollama is installed (on Windows this is typically the C: drive, i.e. the OS drive)
  • Download the llama3 model to the local system:
    ollama pull llama3
  • List all the models downloaded on the local system (llama3 is about 4.7 GB):
    ollama list
  • Run the model:
    ollama run llama3:latest
  • Serve Llama locally:
    ollama serve
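
Once ollama serve is running, the server exposes an HTTP API on localhost:11434 by default. A minimal check that it is up (not part of this repo, just a quick sanity test) is:

    import urllib.request

    # Ollama's HTTP API listens on http://localhost:11434 by default.
    # The root path returns a short status string if the server is up.
    with urllib.request.urlopen("http://localhost:11434") as resp:
        print(resp.read().decode())  # expected output: "Ollama is running"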

Steps

  1. Clone the repository:

    git clone https://github.com/anushkaspatil/llama3-chatbot.git
    cd llama3-chatbot
  2. Create a virtual environment (optional but recommended):

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  3. Install the required Python packages:

    pip install ollama
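
As a quick smoke test (not part of the repository), you can verify that the ollama package is installed and can reach the local server:

    import ollama

    # Lists the models available on the local Ollama server; this should
    # include llama3 if the pull step above succeeded.
    print(ollama.list())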

Usage

You can choose between two methods to run the chatbot: synchronous and asynchronous.

Synchronous Method

The synchronous method is straightforward but slower due to blocking calls. To start the chat with the synchronous method, install and import the dependencies and run the file chat.py (a minimal sketch follows the steps below).

Features

  • Real-time streaming of responses from the Llama model
  • Infinite loop for continuous interaction
  1. Run the chatbot:

    python chat.py
  2. Interact with the chatbot:

    • Enter your messages when prompted.
    • The chatbot will provide responses from the Llama model.
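
The core loop of chat.py follows the pattern below (a minimal sketch using the ollama client's streaming chat API; the exact code in the repo may differ):

    import ollama

    # Conversation history; each turn is appended so the model keeps context.
    messages = []

    while True:
        user_input = input("You: ")
        messages.append({"role": "user", "content": user_input})

        # stream=True yields the reply chunk by chunk instead of blocking
        # until the full response is ready.
        stream = ollama.chat(model="llama3", messages=messages, stream=True)

        reply = ""
        for chunk in stream:
            piece = chunk["message"]["content"]
            print(piece, end="", flush=True)
            reply += piece
        print()

        messages.append({"role": "assistant", "content": reply})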

Asynchronous Method

This method streams responses from the Llama model using Ollama's AsyncClient, which avoids the blocking calls of the synchronous version. The chatbot continuously prompts the user for input and streams the responses in real time (a minimal sketch follows the steps below).

Features

  • Asynchronous operation using asyncio
  • Real-time streaming of responses from the Llama model
  • Infinite loop for continuous interaction
  1. Run the chatbot:

    python updated_chat.py
  2. Interact with the chatbot:

    • Enter your messages when prompted.
    • The chatbot will stream responses from the Llama model.
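
updated_chat.py can be sketched as follows (illustrative only; it mirrors the streaming pattern of the ollama-python AsyncClient, and the repo's actual code may differ):

    import asyncio
    from ollama import AsyncClient

    async def main():
        client = AsyncClient()
        messages = []
        while True:
            user_input = input("You: ")
            messages.append({"role": "user", "content": user_input})

            reply = ""
            # With stream=True, awaiting chat() returns an async generator
            # of response chunks.
            async for chunk in await client.chat(
                model="llama3", messages=messages, stream=True
            ):
                piece = chunk["message"]["content"]
                print(piece, end="", flush=True)
                reply += piece
            print()

            messages.append({"role": "assistant", "content": reply})

    asyncio.run(main())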

By following these steps and guidelines, you'll be well-equipped to embark on your LLM journey using Llama-3. Happy coding!

Examples

A few screenshots of the working chatbot are attached below for reference.

Screenshot (312)

Screenshot (311)

Resources

  1. Official Documentation - Meta Llama
  2. Reference Article - Beginner Friendly
  3. YouTube video - 1st half

License

This project is licensed under the MIT License.
