Full Documentation · CLI · Use Cases · Join the Discord · Read the Blog · YouTube
The TrustGraph Engine provides all the tools, services, Graph Stores, and VectorDBs needed to deploy reliable, scalable, and accurate AI agents. The engine includes:
- Bulk document ingestion
- Automated Knowledge Graph Building
- Automated Vectorization
- Model Agnostic LLM Integration
- RAG combining both Knowledge Graphs and VectorDBs
- Enterprise Grade Reliability, Scalability, and Modularity
- Data Privacy through local LLM deployments with Ollama and Llamafile
Ingest your sensitive data in batches and build reusable, enhanced knowledge cores that transform general-purpose LLMs into knowledge specialists. The observability dashboard allows you to monitor LLM latency, resource management, and token throughput in real time. Visualize your enhanced data with Neo4j.
There are two primary ways of interacting with TrustGraph:
- TrustGraph CLI
- Dev Configuration UI
The TrustGraph CLI installs the commands for interacting with TrustGraph while it is running. The Configuration UI enables customization of TrustGraph deployments prior to launching.
```
pip3 install trustgraph-cli==0.13.2
```
**Note**: The TrustGraph CLI version must match the desired TrustGraph release version.
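A quick way to confirm the match (standard pip, nothing TrustGraph-specific):

```
# The reported version should equal the TrustGraph release you
# intend to deploy (0.13.2 for the current stable release)
pip3 show trustgraph-cli | grep -i version
```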
While TrustGraph is endlessly customizable, the configuration editor can build a custom configuration in seconds with Docker.
Launch the Developer Config UI
(Demo video: TrustGraph-Deploy.mp4)
Launch Steps:
- For the selected `Model Deployment`, follow the instructions in the `Model credentials` section to configure any required environment variables or paths
- Fill in the desired LLM name in the `Model Name` field that corresponds to your selected `Model Deployment`
- Set all desired `Model Parameters`
- Click `GENERATE` under the `Deployment configuration` section
- Follow the instructions under `Launch`
Once `deploy.zip` has been unzipped, launching TrustGraph is as simple as navigating to the `deploy` directory and running:

```
docker compose up -d
```
When finished, shutting down TrustGraph is as simple as:

```
docker compose down -v
```
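After launching, a couple of standard Docker Compose commands are useful for verifying the deployment; service names come from your generated configuration, so `<service-name>` below is a placeholder:

```
# List TrustGraph containers and their current status
docker compose ps

# Follow the logs of a single service; substitute a name
# from the ps output above
docker compose logs -f <service-name>
```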
TrustGraph releases are available here. Download `deploy.zip` for the desired release version.
| Release Type | Release Version |
| --- | --- |
| Latest | 0.14.4 |
| Stable | 0.13.2 |
TrustGraph is fully containerized and is launched with a `YAML` configuration file. Unzipping `deploy.zip` will add the `deploy` directory with the following subdirectories:
- `docker-compose`
- `minikube-k8s`
- `gcp-k8s`
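For example, unpacking the bundle and listing the Docker Compose launch files looks like:

```
# Unpack the release bundle
unzip deploy.zip

# Inspect the pre-built Docker Compose launch files
ls deploy/docker-compose
```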
Each directory contains the pre-built `YAML` configuration files needed to launch TrustGraph:
| Model Deployment | Graph Store | Launch File |
| --- | --- | --- |
| AWS Bedrock API | Cassandra | `tg-bedrock-cassandra.yaml` |
| AWS Bedrock API | Neo4j | `tg-bedrock-neo4j.yaml` |
| AzureAI API | Cassandra | `tg-azure-cassandra.yaml` |
| AzureAI API | Neo4j | `tg-azure-neo4j.yaml` |
| AzureOpenAI API | Cassandra | `tg-azure-openai-cassandra.yaml` |
| AzureOpenAI API | Neo4j | `tg-azure-openai-neo4j.yaml` |
| Anthropic API | Cassandra | `tg-claude-cassandra.yaml` |
| Anthropic API | Neo4j | `tg-claude-neo4j.yaml` |
| Cohere API | Cassandra | `tg-cohere-cassandra.yaml` |
| Cohere API | Neo4j | `tg-cohere-neo4j.yaml` |
| Google AI Studio API | Cassandra | `tg-googleaistudio-cassandra.yaml` |
| Google AI Studio API | Neo4j | `tg-googleaistudio-neo4j.yaml` |
| Llamafile API | Cassandra | `tg-llamafile-cassandra.yaml` |
| Llamafile API | Neo4j | `tg-llamafile-neo4j.yaml` |
| Ollama API | Cassandra | `tg-ollama-cassandra.yaml` |
| Ollama API | Neo4j | `tg-ollama-neo4j.yaml` |
| OpenAI API | Cassandra | `tg-openai-cassandra.yaml` |
| OpenAI API | Neo4j | `tg-openai-neo4j.yaml` |
| VertexAI API | Cassandra | `tg-vertexai-cassandra.yaml` |
| VertexAI API | Neo4j | `tg-vertexai-neo4j.yaml` |
Once a configuration launch file has been selected, deploy TrustGraph with:

Docker:

```
docker compose -f <launch-file.yaml> up -d
```

Kubernetes:

```
kubectl apply -f <launch-file.yaml>
```
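For the Kubernetes variants, the usual `kubectl` checks apply; the namespace depends on the launch file, so the default namespace is assumed here:

```
# Watch the TrustGraph pods come up
kubectl get pods -w

# Tear the deployment back down with the same launch file
kubectl delete -f <launch-file.yaml>
```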
- PDF decoding
- Text chunking
- On-Device SLM inference with Ollama or Llamafile
- Cloud LLM inference: `AWS Bedrock`, `AzureAI`, `Anthropic`, `Cohere`, `OpenAI`, and `VertexAI`
- Chunk-mapped vector embeddings with HuggingFace models
- RDF Knowledge Extraction Agents
- Apache Cassandra or Neo4j as the graph store
- Qdrant as the VectorDB
- Build and load Knowledge Cores
- RAG query service using both the Graph Store and VectorDB
- Grafana telemetry dashboard
- Module integration with Apache Pulsar
- Container orchestration with `Docker`, `Podman`, or `Minikube`
TrustGraph is designed to be modular to support as many Language Models and environments as possible. A natural fit for a modular architecture is to decompose functions into a set of modules connected through a pub/sub backbone. Apache Pulsar serves as this pub/sub backbone. Pulsar acts as the data broker, managing data processing queues connected to processing modules.
- For processing flows, Pulsar accepts the output of a processing module and queues it for input to the next subscribed module.
- For services such as LLMs and embeddings, Pulsar provides a client/server model. A Pulsar queue is used as the input to the service. When processed, the output is then delivered to a separate queue where a client subscriber can request that output.
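As a rough sketch of this client/server pattern (not a TrustGraph command, and the topic names here are hypothetical), Pulsar's stock `pulsar-client` tool can publish a request onto one queue and consume the response from another:

```
# Publish a request onto a hypothetical service input topic
pulsar-client produce persistent://public/default/llm-input \
  -m '{"prompt": "example"}'

# Wait for the response on a separate output topic;
# -s names the subscription, -n 1 exits after one message
pulsar-client consume persistent://public/default/llm-output \
  -s debug-subscription -n 1
```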
TrustGraph extracts knowledge from a text corpus (PDF or text) into an ultra-dense knowledge graph using 3 autonomous knowledge agents. These agents focus on the individual elements needed to build the RDF knowledge graph. The agents are:
- Topic Extraction Agent
- Entity Extraction Agent
- Node Connection Agent
The agent prompts are built through templates, enabling customized extraction agents for a specific use case. The extraction agents are launched automatically with the loader commands.
PDF file:

```
tg-load-pdf <document.pdf>
```

Text or Markdown file:

```
tg-load-text <document.txt>
```
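Because ingestion is designed for bulk loads, a simple shell loop over the documented loader command covers a whole corpus (the directory name is illustrative):

```
# Bulk-ingest every PDF in a local corpus directory
for doc in ./corpus/*.pdf; do
  tg-load-pdf "$doc"
done
```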
Once the knowledge graph and embeddings have been built or a knowledge core has been loaded, RAG queries are launched with a single line:

```
tg-query-graph-rag -q "Write a blog post about the 5 key takeaways from SB1047 and how they will impact AI development."
```