
Personal-chatbot

Personal chatbot using LLMs, currently supporting the Hugging Face API and the NVIDIA API, with LightRAG for retrieval-augmented generation (RAG) and Stable Diffusion for image generation, using LoRA for image enhancement.

The chatbot code uses an object-oriented design, which makes the implementation flexible enough to work with any LLM API as well as locally running models (demo planned as future work).
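The object-oriented design described above could look roughly like the sketch below. All class and method names here are illustrative assumptions (only `set_api_key` and the `nvapi-`/`hf_` key prefixes appear in this README), not the actual `chatbot.py` API.

```python
# Hypothetical sketch of the OOP abstraction: a common interface that lets
# the chatbot swap between LLM providers. Names are illustrative only.
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Common interface so the chatbot can switch LLM providers freely."""

    def __init__(self):
        self.api_key = None

    def set_api_key(self, key: str) -> None:
        self.api_key = key

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Send a prompt to the provider and return the completion."""


class NvidiaNIMBackend(LLMBackend):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the NVIDIA NIM API here.
        return f"[nvidia completion for: {prompt}]"


class HuggingFaceBackend(LLMBackend):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the Hugging Face Inference API here.
        return f"[hf completion for: {prompt}]"


def make_backend(key: str) -> LLMBackend:
    # Pick a backend from the key prefix, mirroring the nvapi-/hf_ convention.
    backend = NvidiaNIMBackend() if key.startswith("nvapi-") else HuggingFaceBackend()
    backend.set_api_key(key)
    return backend
```

With this shape, adding a locally running model would only mean writing one more `LLMBackend` subclass; the rest of the chatbot stays unchanged.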

For an implementation example, please refer to chabot.py

LightRAG documentation: click here

Supported NVIDIA models: click here

LoRA and SD checkpoints: click here

Limitations:

  • Analyzing large files greatly increases the input token count and can raise an error.
  • A long chat history also increases the input token count.
  • Because every task is performed by the LLM, the quality of analysis and RAG depends heavily on the LLM you use and on the text-embedding model used for retrieval in RAG.
  • By default, this code uses Llama 70B, whose analysis is less detailed than a human's.
  • The RAG implementation uses LightRAG.
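The first two limitations are both about prompt growth. One common mitigation (not implemented in this repo, just a sketch) is to keep only the most recent turns under a rough token budget; the 4-characters-per-token heuristic below is an approximation, not an exact count.

```python
# Illustrative mitigation for the token-growth limitations above: drop the
# oldest chat turns so the prompt stays under a rough token budget.
def trim_history(history: list[str], max_tokens: int = 4096) -> list[str]:
    kept, used = [], 0
    for turn in reversed(history):        # walk from the newest turn back
        cost = max(1, len(turn) // 4)     # crude chars-to-tokens estimate
        if used + cost > max_tokens:
            break                         # older turns no longer fit
        kept.append(turn)
        used += cost
    return list(reversed(kept))           # restore chronological order
```

A real deployment would use the tokenizer of the actual model for counting, but the structure is the same.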

How to use the demo:

  • Install dependencies: pip install -r requirements.txt
  • Make sure you've cloned the latest LightRAG submodule used in this repo, using: git clone https://github.com/HKUDS/LightRAG.git
  • Put the LoRA and SD checkpoint files into the corresponding folders under stable-diffusion/models. The locations are customizable, but you'll have to modify the streamlit_app.py code accordingly.
  • Insert your API key via bot.set_api_key("nvapi-xxxx") to use NVIDIA NIM, or bot.set_api_key("hf_xxxx") to use a Hugging Face model.
  • To run the Streamlit GUI, run this from your environment's command line or prompt: streamlit run streamlit_app.py
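The steps above can be collected into a short shell sketch. The clone URL and folder layout are taken from this README; adjust paths to your environment.

```shell
# Setup sketch for the demo, following the steps listed above.
pip install -r requirements.txt
git clone https://github.com/HKUDS/LightRAG.git

# Place your LoRA and Stable Diffusion checkpoints under the corresponding
# folders in stable-diffusion/models/ before launching.

# Start the Streamlit GUI:
streamlit run streamlit_app.py
```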

Starting guide on demo

  • Send the /relearn command to the chatbot to re-learn every file inside the data folder.
  • Use the command /rag what-to-do to query the chatbot with RAG.
  • To perform analysis, simply upload a file to the chatbot and send /analyze filename.extension what-to-do, or /analyze filename.extension for a general-information analysis.
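The command syntax above could be dispatched by a small parser like the sketch below. This is an illustrative assumption about how such commands might be routed, not the repository's actual implementation.

```python
# Hypothetical parser for the chat commands described above
# (/relearn, /rag, /analyze); field names are illustrative only.
def parse_command(message: str) -> dict:
    parts = message.strip().split(maxsplit=2)
    if parts[0] == "/relearn":
        return {"action": "relearn"}
    if parts[0] == "/rag":
        # Everything after "/rag" is the what-to-do query.
        return {"action": "rag", "query": message.split(maxsplit=1)[1]}
    if parts[0] == "/analyze":
        return {
            "action": "analyze",
            "file": parts[1],
            # No instruction means a general-information analysis.
            "task": parts[2] if len(parts) > 2 else "general information",
        }
    # Anything else falls through to normal chat.
    return {"action": "chat", "text": message}
```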

Screenshot: the button for uploading a file into the chatbot.

  • To save an uploaded file into the /data folder for RAG, simply use the /save command; the chatbot will automatically re-learn without needing the /relearn command.
  • To perform image generation, use English phrasing such as "generate me an image of..." or "make me an image...". Currently, the algorithm works by detecting these patterns in English user prompts.
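The English-phrase detection described above could be approximated with a regular expression like the one below; the exact patterns the repository matches are not documented here, so this is an assumption-based sketch.

```python
import re

# Illustrative detector for image-generation requests, mirroring the
# English-pattern detection described above. The verb list is a guess;
# the repository's actual patterns may differ.
IMAGE_REQUEST = re.compile(
    r"\b(generate|make|draw|create)\b.*\bimage\b", re.IGNORECASE
)

def is_image_request(prompt: str) -> bool:
    """Return True when the prompt looks like an image-generation request."""
    return bool(IMAGE_REQUEST.search(prompt))
```

When the detector fires, the prompt would be routed to the Stable Diffusion pipeline instead of the LLM; otherwise it is handled as a normal chat message.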

Screenshots of Nvidia API NIM Demo

Document Analysis Example

Finding Page

Analyzing CSV File

Note: Performance depends on the LLM used. The cut-off part of the chat reflects the limitations of the LLM shown in the screenshot.

Analyzing PDF File

Image Generation Examples

Mouse Image

House Image
