Note: If you want to use Vespper for your team or organization, please reach out to us. This open-source project is suited for individual use; any advanced investigation features will live under vespper-ee.
Vespper is an AI-powered on-call engineer. It can automatically jump into incidents & alerts with you and provide useful, contextual insights and root-cause analysis (RCA) in real time.
Most people don't like on-call shifts. They require engineers to be swift and solve problems quickly, and reaching the root cause of a problem takes time. That's why we developed Vespper: we believe generative AI can help on-call developers solve issues faster.
- Overview
- Why
- Key Features
- Demo
- Getting started
- Support and feedback
- Contributing to Vespper
- Troubleshooting
- Telemetry
- License
- Learn more
- Contributors
- Automatic RCA: Vespper listens to production incidents/alerts and automatically investigates them for you.
- Slack integration: Vespper lives inside your Slack. Simply connect it and enjoy an on-call engineer that never sleeps.
- Integrations: Vespper integrates with popular observability/incident management tools such as Datadog, Coralogix, Opsgenie and PagerDuty. It also integrates with tools such as GitHub, Notion, Jira and Confluence to gain insights on incidents.
- Intuitive UX: Vespper offers a familiar experience. You can talk to it and ask follow-up questions.
- Secure: Self-host Vespper and own your data. Always.
- Open Source: We love open-source. Self-host Vespper and use it for free.
Check out our demo video to see Vespper in action.
To run Vespper, clone the repo and start the app using Docker Compose.
Ensure you have the following installed:
- Docker & Docker Compose - The app runs as Docker containers. To run it, you need Docker Desktop, which ships with the Docker CLI, Docker Engine and Docker Compose.
You can find the installation video here.
1. Clone the repository:

   ```shell
   git clone git@github.com:vespper/vespper.git && cd vespper
   ```
2. Configure LiteLLM Proxy Server:

   We use LiteLLM Proxy Server to interact with 100+ LLMs through a unified (OpenAI-compatible) interface.
   Copy the example files:

   ```shell
   cp config/litellm/.env.example config/litellm/.env
   cp config/litellm/config.example.yaml config/litellm/config.yaml
   ```
   Define your OpenAI key and place it inside `config/litellm/.env` as `OPENAI_API_KEY`. You can get your API key here. Rest assured, you won't be charged unless you use the API. For more details on pricing, check here.
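For reference, a minimal `config/litellm/.env` might look like the following. The key value is a made-up placeholder, not a real credential:

```
# config/litellm/.env — hypothetical example; replace with your own OpenAI key
OPENAI_API_KEY=sk-your-openai-key-here
```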
3. Copy the `.env.example` file:

   ```shell
   cp .env.example .env
   ```
4. Open the `.env` file in your favorite editor:

   ```shell
   vim .env  # or emacs or vscode or nano
   ```
5. Update these variables:

   - `SLACK_BOT_TOKEN`, `SLACK_APP_TOKEN` and `SLACK_SIGNING_SECRET` - Needed in order to talk to Vespper on Slack. Please follow this guide to create a new Slack app in your organization.
   - (Optional) `SMTP_CONNECTION_URL` - Needed in order to invite new members to your Vespper organization via email and allow them to use the bot. It's not mandatory if you just want to test Vespper and play with it. If you do want to send invites to your team members, you can use a service like SendGrid or Mailgun. The value should follow this pattern: `smtp://username:password@domain:port`.
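As a quick sanity check for the `SMTP_CONNECTION_URL` format, here is a small Python sketch that builds and validates a URL of that shape. The credentials and host below are made-up placeholders:

```python
# Sketch: build and sanity-check an SMTP connection URL of the form
# smtp://username:password@domain:port (placeholder credentials, not real ones).
from urllib.parse import urlsplit, quote

def build_smtp_url(user: str, password: str, host: str, port: int) -> str:
    # Percent-encode credentials so characters like '@' or ':' don't break parsing
    return f"smtp://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

def is_valid_smtp_url(url: str) -> bool:
    parts = urlsplit(url)
    return (
        parts.scheme == "smtp"
        and parts.hostname is not None
        and parts.port is not None
        and parts.username is not None
    )

url = build_smtp_url("apikey", "s3cret", "smtp.sendgrid.net", 587)
print(url)                    # smtp://apikey:s3cret@smtp.sendgrid.net:587
print(is_valid_smtp_url(url)) # True
```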
6. Launch the project:

   ```shell
   docker compose up -d
   ```
That's it. You should be able to visit Vespper's dashboard at http://localhost:5173. Simply create a user (with the same e-mail as your Slack user) and start configuring your organization. If something doesn't work for you, please check out our troubleshooting guide or reach out to us via our support channels.
The next steps are to configure your organization a bit more (connect incident management tools, build a knowledge base, etc.). Head over to the connect & configure section in our docs for more information.
If you prefer, you can pull our Docker images from DockerHub instead of cloning the repo and building from scratch. To do that, follow these steps:
1. Download the configuration files:

   ```shell
   curl https://raw.githubusercontent.com/vespper/vespper/main/tools/scripts/download_env_files.sh | sh
   ```
2. Follow steps 2 and 5 above to configure LiteLLM Proxy and your `.env` file, respectively. Namely, you'd need to configure your OpenAI key at `config/litellm/.env` and your Slack credentials in the root `.env`.
Spin up the environment using docker compose:
curl https://raw.githubusercontent.com/vespper/vespper/main/tools/scripts/start.sh | sh
That's it! You should be able to visit Vespper's dashboard at http://localhost:5173.
1. Pull the latest changes:

   ```shell
   git pull
   ```

2. Rebuild the images:

   ```shell
   docker compose up --build -d
   ```
Visit our example guides to learn how to deploy Vespper to your cloud.
We use ChromaDB as our vector DB. We also use vector admin to inspect the ingested documents. To use vector admin, simply run this command:

```shell
docker compose up vector-admin -d
```

This command starts vector-admin at port 3001. Head over to http://localhost:3001 and configure your local ChromaDB. Note: since vector-admin runs inside a Docker container, make sure to insert `http://host.docker.internal:8000` in the "host" field instead of `http://localhost:8000` - "localhost" doesn't refer to the host machine from inside the container itself.
Moreover, in the "API Header & Key" section, put `X-Chroma-Token` as the header and the value of `CHROMA_SERVER_AUTHN_CREDENTIALS` from your `.env` as the key.
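To sketch what this configuration amounts to, here is a small Python example. The header name matches the note above; the `chromadb.HttpClient` call is an assumption about the Chroma client API and is left commented out because it requires a running server:

```python
# Sketch of the auth header that vector-admin (and any API client) must send
# to the self-hosted ChromaDB. Assumes CHROMA_SERVER_AUTHN_CREDENTIALS holds
# the value from your .env; "test-token" is a placeholder fallback.
import os

def chroma_auth_headers(token: str) -> dict:
    """Build the X-Chroma-Token header described above."""
    return {"X-Chroma-Token": token}

headers = chroma_auth_headers(
    os.environ.get("CHROMA_SERVER_AUTHN_CREDENTIALS", "test-token")
)

# With a running server (assumption: the chromadb Python client is installed;
# from inside another container, use host.docker.internal instead of localhost):
# import chromadb
# client = chromadb.HttpClient(host="localhost", port=8000, headers=headers)
# print(client.heartbeat())
```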
To learn how to use VectorAdmin, visit the docs.
In order of preference, the best ways to communicate with us are:
- GitHub Discussions: contribute ideas, ask for support and report bugs (preferred, as discussions are a static & permanent record for other community members).
- Slack: community support. Click here to join.
- Privately: contact us at support@vespper.com.
If you're interested in contributing to Vespper, check out our CONTRIBUTING.md file.
If you encounter any problems/errors/issues with Vespper, check out our troubleshooting guide. We try to update it regularly and fix the most urgent problems there as soon as possible.
You're also welcome to reach out to us via our support channels.
By default, Vespper automatically sends basic usage statistics from self-hosted instances to our server via PostHog.
This allows us to:
- Understand how Vespper is used so we can improve it.
- Track overall usage for internal purposes and external reporting, such as for fundraising.
Rest assured, the data collected is not shared with third parties and does not include any sensitive information. We aim to be transparent, and you can review the specific data we collect here.
If you prefer not to participate, you can easily opt out by setting `TELEMETRY_ENABLED=false` inside your `.env`.
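For example, the relevant line in your root `.env` would look like this:

```
# Disable anonymous usage telemetry for this self-hosted instance
TELEMETRY_ENABLED=false
```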
This project is licensed under the Apache 2.0 license - see the LICENSE file for details.
Check out the official website at https://vespper.com for more information.
Built with β€οΈ by Dudu & Topaz