- Turn your laptop into a webserver of nested, data-capable agents!
A data-capable, AGI-style agent builder of agents that creates swarms, runs commands, and securely processes and creates datasets, databases, visualizations, and analyses.
- DataTonic solves simple tasks that require complex data processing
- it's perfect for data analytics and business intelligence
You can use image or audio input in your native language: DataTonic is AGI for all.
- your request is first processed according to a statement of work
- additional data is retrieved and stored to enrich it
- multiple agents are created based on your specific use case
- Multiple Multi-Agent Environments produce files and folders according to your use case.
DataTonic produces fixed business intelligence assets based on autonomous multimedia data processing.
- Sales Profiles
- Adaptive Summaries
- Dataset Analytics
- Research Reports
- Business Automation Applications
Based on these, it can produce:
- Strategies
- Applications
- Analyses
- rich business intelligence
DataTonic provides junior executives with an extremely effective solution for basic but time-consuming data processing, document creation, and business intelligence tasks.
Now anyone can :
- get a rapid client profile and sales strategy, including design assets for execution, with a single request
- create a functional web application
- create entire databases of business intelligence that can be used by enterprise systems.
Do not wait for accounting, legal, or business intelligence reporting with uncertain quality and long review cycles. DataTonic accelerates the slowest parts of analysis: data processing and project planning.
DataTonic is unique for many reasons :
- local and secure application threads.
- compatible with Microsoft enterprise environments.
- based on a rigorous and reproducible evaluation method.
- developer friendly: easily plug in new functionality and integrations.
- Is DataTonic accessible?
Yes, DataTonic accepts both audio and image input.
- Can I use it to make beautiful graphs and statistical analyses with little or no starting data?
Yes. DataTonic will look for the data it needs, but you can also add your own .db files or any other file types.
- Can I write a book or a long report?
Yes. DataTonic produces rich, full-length content.
- Can I make an app?
Yes. DataTonic is tailored to business intelligence, but it can produce functioning applications inside generated repositories.
- Can it do my job?
Yes, DataTonic can automate many junior positions, and it will include more enterprise connectors soon!
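If you do bring your own .db file, any SQLite database works. Here is a minimal sketch of building one with Python's standard sqlite3 module; the table and column names are hypothetical examples, not a schema DataTonic requires:

```python
import sqlite3

# Build a small example .db file that could be handed to DataTonic.
# The "sales" table here is purely illustrative.
conn = sqlite3.connect(":memory:")  # use "sales.db" to write a real file
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.5), ("APAC", 98.0), ("AMER", 150.25)],
)
conn.commit()

# Quick sanity query before handing the file over.
total = conn.execute("SELECT SUM(revenue) FROM sales").fetchone()[0]
print(total)  # 368.75
conn.close()
```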
You can use DataTonic however you want; here's how we're using it:
- Add case books to your folder for embedding: now DataTonic always presents its results as a case study!
- Add medical textbooks to your folder for embedding: now DataTonic helps you through med school!
- Add your entire company's business information: DataTonic is now your strategic advisor!
- Ask DataTonic to create targeted sales strategies: now DataTonic is your sales assistant!
DataTonic is the first multi-nested agent-builder-of-agents!
The DataTonic team started by evaluating multiple models against the new google/gemini models, testing all functions. Based on our evaluation results, we optimized the default prompts and created new prompts and prompt pipeline configurations.
Learn more about using TruLens and our scientific method in the evaluation folder. We share our results in the evaluation/results folder.
You can also replicate our evaluation by following the instructions in #Easy Deploy.
DataTonic is the first application to use a doubly nested, multi-environment, multi-agent builder-of-agents configuration. Here's how it works!
DataTonic uses a novel combination of three orchestration libraries.
- Each library creates its own multi-agent environment.
- Each of these environments includes a code execution and code generation capability.
- Each of these stores data and embeddings in its own datalake.
- Autogen sits at the interface with the user; it orchestrates the Semantic Kernel hub and uses TaskWeaver for data processing tasks.
- Semantic Kernel is a hub that includes internet browsing capabilities; it is specifically designed to use TaskWeaver for data storage and retrieval and to produce fixed intelligence assets for Autogen.
- TaskWeaver is used as a plugin in Semantic Kernel for data storage and retrieval, and also in Autogen, but it remains an autonomous agent that can execute complex tasks in its multi-environment execution system.
- Gemini is used in various configurations: for text via the Autogen connector, and for multimodal/image information processing.
- Autogen uses a Semantic Kernel function-calling agent to access the internet via the Google API; Semantic Kernel then processes the new information and stores it inside a SQL database orchestrated by TaskWeaver.
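The nesting described above can be sketched with plain Python classes. Every name below is a hypothetical stand-in for illustration only, not the real Autogen, Semantic Kernel, or TaskWeaver APIs:

```python
# Hypothetical sketch of the doubly nested orchestration: Autogen at the
# user interface, Semantic Kernel as a hub, TaskWeaver as the data layer.
class TaskWeaverEnv:
    """Innermost environment: data storage/retrieval and code execution."""
    def __init__(self):
        self.datalake = {}

    def store(self, key, value):
        self.datalake[key] = value

    def retrieve(self, key):
        return self.datalake.get(key)


class SemanticKernelHub:
    """Middle layer: browsing and fixed-asset production, backed by TaskWeaver."""
    def __init__(self, weaver):
        self.weaver = weaver

    def enrich(self, query):
        # In the real system this step would browse the web via the Google
        # API; here we just fabricate a placeholder result and store it.
        result = f"enriched({query})"
        self.weaver.store(query, result)
        return result


class AutogenOrchestrator:
    """Outermost layer: talks to the user and drives the other environments."""
    def __init__(self):
        self.weaver = TaskWeaverEnv()
        self.hub = SemanticKernelHub(self.weaver)

    def handle(self, request):
        self.hub.enrich(request)
        return self.weaver.retrieve(request)


agent = AutogenOrchestrator()
print(agent.handle("client profile"))  # enriched(client profile)
```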
Please try the methods below to use and deploy DataTonic.
The easiest way to use DataTonic is to deploy it on GitHub Codespaces and use the notebooks in the evaluation/results folder.
Click here for easy_deploy [COMING SOON!]
In the meantime, please follow the instructions below:
- Star, then fork this repository
- Use these instructions to deploy a Codespace for DataTonic
- Configure DataTonic according to the instructions below, or
- Navigate to evaluation/results to use our evaluation methods.
Please follow the instructions in this readme exactly.
- Step 1 : Star this repository
- Step 2 : Fork this repository
Please use a command line with administrator privileges for the steps below.
- run the following command to install the Vertex AI Python SDK:
```shell
pip install google-cloud-aiplatform
```
- navigate to this url :
https://console.cloud.google.com/vertex-ai
- Create a new project and add a payment method.
- Click on 'Multimodal' on the left, then 'My Prompts' at the top.
- Click on 'Create Prompt', then 'GET CODE' on the top right in the next screen.
- Then click on 'curl' on the top right to find your endpoint, project ID, and other information, e.g.:
```shell
cat << EOF > request.json
{
  "contents": [
    {
      "role": "user",
      "parts": []
    }
  ],
  "generation_config": {
    "maxOutputTokens": 2048,
    "temperature": 0.4,
    "topP": 1,
    "topK": 32
  }
}
EOF
API_ENDPOINT="us-central1-aiplatform.googleapis.com"
PROJECT_ID="focused-album-408018"
MODEL_ID="gemini-pro-vision"
LOCATION_ID="us-central1"
curl \
  -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${API_ENDPOINT}/v1/projects/${PROJECT_ID}/locations/${LOCATION_ID}/publishers/google/models/${MODEL_ID}:streamGenerateContent" -d '@request.json'
```
Run the following command to print your access token after following the instructions above:
```shell
gcloud auth print-access-token
```
IMPORTANT: this token expires in less than 30 minutes, so please refresh it regularly.
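Because the token is so short-lived, a small cache that re-fetches it once it gets stale can save manual refreshing. This helper is an illustration, not part of DataTonic; the fetcher is injectable so the example runs without gcloud installed:

```python
import subprocess
import time

class TokenCache:
    """Re-fetch a short-lived access token once it is older than max_age_s."""
    def __init__(self, fetch=None, max_age_s=25 * 60):
        # Default fetcher shells out to gcloud; pass your own for testing.
        self.fetch = fetch or (lambda: subprocess.run(
            ["gcloud", "auth", "print-access-token"],
            capture_output=True, text=True, check=True,
        ).stdout.strip())
        self.max_age_s = max_age_s
        self._token = None
        self._fetched_at = 0.0

    def token(self):
        stale = time.time() - self._fetched_at > self.max_age_s
        if self._token is None or stale:
            self._token = self.fetch()
            self._fetched_at = time.time()
        return self._token


# Usage with a stub fetcher (so the example works without gcloud):
cache = TokenCache(fetch=lambda: "fake-token")
print(cache.token())  # fake-token
```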
- Navigate to OpenAI and create a new key:
https://platform.openai.com/api-keys
- use the Azure OAI portal by navigating to this page :
https://oai.azure.com/portal
- Go to the Playground.
- Make a note of your endpoint, API key, and model name to use later.
- Visit this URL and download and install the required packages:
https://www.sqlite.org/download.html
For Windows: you need the SQLite source files, including the sqlite3.h header file, for the pysqlite3 installation.
- Go to the SQLite Download Page.
- Download the sqlite-amalgamation-*.zip file under the "Source Code" section.
- Extract the contents of this zip file to a known directory (e.g., C:\sqlite).
Set Environment Variables:
You need to ensure that the directory where you extracted the SQLite source files is included in your system's PATH environment variable.
- Right-click on 'This PC' or 'My Computer' and select 'Properties'.
- Click on 'Advanced system settings' and then 'Environment Variables'.
- Under 'System Variables', find and select the 'Path' variable, then click 'Edit'.
- Add the path to the directory where you extracted the SQLite source files (e.g., C:\sqlite).
- Click 'OK' to close all dialog boxes.
Add your path:
```shell
setx SQLITE_INC "C:\sqlite"
```
Then proceed with the rest of the setup below.
TaskWeaver requires Python >= 3.10. It can be installed by running the following commands:
```shell
# [optional] create and activate a conda environment
# conda create -n taskweaver python=3.10
# conda activate taskweaver

# clone the repository
git clone https://github.com/microsoft/TaskWeaver.git
cd TaskWeaver

# install the requirements
pip install -r requirements.txt
```
Command Prompt: download and install WSL (note: WSL is a Windows feature, not a pip package):
```shell
wsl --install
```
then run
```shell
wsl
```
then, inside WSL, install SQLite:
```shell
sudo apt-get update
sudo apt-get install libsqlite3-dev   # or: sqlite-devel
sudo pip install pysqlite3
```
This section provides instructions on setting up the project. Please turn off your firewall and use administrator privileges on the command line.
Clone this repository using the command line:
```shell
git clone https://github.com/Tonic-AI/DataTonic.git
```
- Add relevant files one by one (no folders) to the folder 'src/autogen/add_your_files_here'.
- Supported file types: ".pdf", ".html", ".eml", and ".xlsx".
- You'll need the keys you created above for the following steps.
- Use a text editor, an IDE, or the command line to edit the following documents.
- Edit, then save the files.
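A quick way to check which of your files DataTonic will accept is to filter on the supported extensions. This helper is a hypothetical convenience, not part of the repository; only the extension list comes from the README:

```python
from pathlib import Path

# Extensions listed above; the helper itself is illustrative.
SUPPORTED = {".pdf", ".html", ".eml", ".xlsx"}

def supported_files(paths):
    """Return only the paths whose extension DataTonic can ingest."""
    return [p for p in paths if Path(p).suffix.lower() in SUPPORTED]

files = ["report.PDF", "notes.txt", "mail.eml", "deck.pptx", "book.xlsx"]
print(supported_files(files))  # ['report.PDF', 'mail.eml', 'book.xlsx']
```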
Edit 'OAI_CONFIG_LIST':
"api_key": "your OpenAI key goes here",
and
"api_key": "your Google GenAI key goes here",
1. modify Line 135 in autogen_module.py
```python
os.environ['OPENAI_API_KEY'] = 'Your key here'
```
2. modify .env.example
```
OPENAI_API_KEY = "your_key_here"
```
Save as '.env' (this should create a new file)
**or**
rename the existing file to '.env'.
3. modify src\tonicweaver\taskweaver_config.json
```json
{
"llm.api_base": "https://api.openai.com/v1",
"llm.api_key": "",
"llm.model": "gpt-4-1106-preview"
}
```
4. edit ./src/semantic_kernel/semantic_kernel_module.py
line 64: semantic_kernel_data_module = SemanticKernelDataModule('<google_api_key>', '<google_search_engine_id>')
and
line 158: semantic_kernel_data_module = SemanticKernelDataModule('<google_api_key>', '<google_search_engine_id>')
with your Google API key and Search Engine ID, created above.
See also src/semantic_kernel/googleconnector.py.
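Before launching, you can sanity-check that the edits above actually landed, e.g. that taskweaver_config.json no longer has an empty llm.api_key. The checker below is a hypothetical convenience, not part of DataTonic; only the file's key names come from the snippet above:

```python
import json
import os
import tempfile
from pathlib import Path

# Keys expected in src/tonicweaver/taskweaver_config.json (see above).
REQUIRED_KEYS = ("llm.api_base", "llm.api_key", "llm.model")

def check_taskweaver_config(path):
    """Return a list of missing or empty required keys (empty list = OK)."""
    cfg = json.loads(Path(path).read_text())
    return [k for k in REQUIRED_KEYS if not cfg.get(k)]

# Example with an in-memory stand-in for the real config file:
sample = {"llm.api_base": "https://api.openai.com/v1",
          "llm.api_key": "", "llm.model": "gpt-4-1106-preview"}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
print(check_taskweaver_config(f.name))  # ['llm.api_key'] (still empty)
os.unlink(f.name)
```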
From the project directory:
```shell
cd ./src/tonicweaver
git clone https://github.com/microsoft/TaskWeaver.git
cd TaskWeaver
# install the requirements
pip install -r requirements.txt
```
Then, from the project directory:
```shell
pip install -r requirements.txt
python app.py
```
We welcome contributions from the community! Whether you're opening a bug report, suggesting a new feature, or submitting a pull request, every contribution is valuable to us. Please follow these guidelines to contribute to DataTonic.
Before you begin, ensure you have the latest version of the main branch:
```shell
git checkout main
git pull origin main
```
Then, create a new branch for your contribution:
```shell
git checkout -b <your-branch-name>
```
If you encounter any bugs, please file an issue on our GitHub repository. Include as much detail as possible:
- A clear and concise description of the bug
- Steps to reproduce the behavior
- Expected behavior vs actual behavior
- Screenshots if applicable
- Any additional context or logs
We are always looking for suggestions to improve DataTonic. If you have an idea, please open an issue with the tag 'enhancement'. Provide:
- A clear and concise description of the proposed feature
- Any relevant examples or mockups
- A description of the benefits to DataTonic users
If you'd like to contribute code, please follow these steps:
Follow the setup instructions in the README to get DataTonic running on your local machine.
Ensure that your changes adhere to the existing code structure and standards. Add or update tests as necessary.
Write clear and meaningful commit messages. This helps to understand the purpose of your changes and speed up the review process.
```shell
git commit -m "A brief description of the commit"
```
Push your changes to your remote branch:
```shell
git push origin <your-branch-name>
```
Go to the repository on GitHub and open a new pull request against the main branch. Provide a clear description of the problem you're solving. Link any relevant issues.
Maintainers will review your pull request. Be responsive to feedback to ensure a smooth process.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project, you agree to abide by its terms.
By contributing to DataTonic, you agree that your contributions will be licensed under its LICENSE.
Thank you for contributing to DataTonic!🚀