GPTAggregator is a Python-based application that provides a unified interface for interacting with various large language models (LLMs) via their respective APIs. The project is designed to be user-friendly and easily extensible, making it a powerful tool for developers, researchers and anyone interested in exploring the capabilities of large language models. GPTAggregator makes it possible to switch seamlessly from one model to another within the same conversation, centralise conversation storage, automatically optimise messages, and much more.
- Supported LLM Providers: Connect to multiple providers (e.g., OpenAI, Anthropic, MistralAI) through a single interface.
- Seamless Model Switching: Switch between different LLMs mid-conversation, leveraging the strengths of each to enhance the chat experience.
- Secure Conversation Storage: Store and retrieve conversations for later reference or analysis. Conversations stay on your local machine and are not shared with any third parties; note that messages are still sent to the relevant provider's API unless you use a local model.
- Automatic Prompt Optimization: Utilize advanced prompt engineering techniques to improve model responses and user interaction over time.
- Image Input Support: Upload a picture and ask GPT-4 Turbo a question about it.
- Retrieval-Augmented Generation (RAG) using LlamaIndex: Use LlamaIndex to retrieve relevant documents for a given query and generate responses grounded in the retrieved documents. This feature is available for all models.
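To illustrate the RAG flow at a high level: retrieve the documents most relevant to a query, then prepend them as context to the prompt sent to the model. GPTAggregator delegates retrieval to LlamaIndex; the word-overlap scoring below is a simplified, hypothetical stand-in used only to sketch the idea.

```python
# Toy sketch of the RAG flow used conceptually by the app:
# retrieve relevant documents, then build an augmented prompt.
# The scoring here is a deliberately simple stand-in for
# LlamaIndex's real retrieval pipeline.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "GPTAggregator stores conversations locally as JSON files.",
    "Streamlit provides the web interface for the app.",
]
print(build_rag_prompt("Where are conversations stored?", docs))
```

The augmented prompt is then passed to whichever model is currently selected, which is why the feature works across all providers.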
Clone the repository:

```shell
git clone https://github.com/AlexisBalayre/GPTAggregator.git
cd GPTAggregator
```

Set up a virtual environment:

```shell
python3 -m venv venv
source venv/bin/activate
```

Install the required packages:

```shell
python -m pip install -r requirements.txt
```
Set the necessary environment variables for the LLM providers you want to use (e.g., OpenAI, Anthropic, MistralAI):

```shell
cp .env.example .env
```
Update the `.env` file with your API keys.
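The exact variable names are defined in the repository's `.env.example`; a typical set for the providers mentioned above might look like this (the names below are illustrative assumptions, and the values are placeholders):

```
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
MISTRAL_API_KEY=your-mistral-key
```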
You can also configure the available models in the `models.json` file.
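The authoritative schema is whatever ships in the repository's `models.json`; a hypothetical entry, just to show the kind of information such a file typically carries, might look like:

```json
{
  "models": [
    {
      "name": "gpt-4-turbo",
      "provider": "openai",
      "supports_images": true
    }
  ]
}
```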
Run the Streamlit app:

```shell
streamlit run app.py
```
After launching the application, use the web interface provided by Streamlit to interact with the models. You can select different models from the sidebar, view and manage past conversations, and configure chat parameters to tailor the interaction to your preferences.
The project is structured as follows:
- `app.py`: The main entry point to start the application.
- `Chat.py`: Defines the Chat class responsible for managing chat interactions.
- `LLMConnector.py`: Handles connections to various LLM APIs.
- `models.json`: Configuration file for available models.
- `requirements.txt`: Lists all Python library dependencies.
- `llm_conversations/`: Default directory where conversation histories are stored.
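To give a feel for how these pieces fit together, the sketch below shows one plausible way a connector could index the model definitions from `models.json` and resolve which provider serves a given model. The schema and function names are assumptions for illustration; the real logic lives in `LLMConnector.py` and may differ.

```python
# Hypothetical sketch: index model configs by name and look up
# the provider for a requested model. The JSON schema here is
# assumed, not taken from the repository.
import json

def load_models(raw_json: str) -> dict[str, dict]:
    """Parse a models.json payload and index entries by model name."""
    data = json.loads(raw_json)
    return {model["name"]: model for model in data["models"]}

def get_provider(models: dict[str, dict], model_name: str) -> str:
    """Return the provider that serves the given model."""
    try:
        return models[model_name]["provider"]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name}")
```

A dispatcher built this way lets the app route each message to the right API client based purely on configuration, which is what makes adding a new model a matter of editing `models.json` rather than code.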
If you would like to contribute to the GPTAggregator project, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and ensure the code passes all tests.
- Submit a pull request with a detailed description of your changes.
Your contributions are greatly appreciated!
This project is licensed under the MIT License.
For any questions or inquiries, please reach out to the project maintainers at alexis@balayre.com.