k8sgpt
is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.
It has SRE experience codified into its analyzers and helps extract the most relevant information and enrich it with AI.
Brew-based installation (macOS/Linux)
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt
RPM-based installation (RedHat/CentOS/Fedora)
32 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_386.rpm
sudo rpm -ivh k8sgpt_386.rpm
64 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_amd64.rpm
sudo rpm -ivh k8sgpt_amd64.rpm
DEB-based installation (Ubuntu/Debian)
32 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_386.deb
sudo dpkg -i k8sgpt_386.deb
64 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb
APK-based installation (Alpine)
32 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_386.apk
apk add k8sgpt_386.apk
64 bit:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.4/k8sgpt_amd64.apk
apk add k8sgpt_amd64.apk
Failing Installation on WSL or Linux (missing gcc)
When installing k8sgpt with Homebrew on WSL or Linux, you may encounter the following error:
==> Installing k8sgpt from k8sgpt-ai/k8sgpt
Error: The following formula cannot be installed from a bottle and must be built from source.
k8sgpt
Install Clang or run brew install gcc.
If you install gcc as suggested, the problem will persist. Instead, you need to install the build-essential package:
sudo apt-get update
sudo apt-get install build-essential
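With build-essential in place, re-running the Homebrew install should succeed:
brew install k8sgpt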
- Download the latest Windows binaries of k8sgpt from the Release tab based on your system architecture.
- Extract the downloaded package to your desired location and add the binary's location to your system path variable.
- Run k8sgpt version
- Currently the default AI provider is OpenAI, so you will need to generate an API key from OpenAI
- You can do this by running k8sgpt generate to open a browser link to generate it
- Run k8sgpt auth to set it in k8sgpt.
- You can provide the password directly using the --password flag.
- Run k8sgpt filters to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
- Run k8sgpt analyze to run a scan.
- Use k8sgpt analyze --explain to get a more detailed explanation of the issues (see the example session below).
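A minimal end-to-end session might look like the following; the OPENAI_API_KEY environment variable is only an illustrative placeholder for wherever you keep the key generated in the previous step:
# generate an API key in the browser, then store it for k8sgpt
k8sgpt generate
k8sgpt auth --password "$OPENAI_API_KEY"
# check which filters will run, then scan the cluster with explanations
k8sgpt filters list
k8sgpt analyze --explain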
K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you will also be able to write your own.
- podAnalyzer
- pvcAnalyzer
- rsAnalyzer
- serviceAnalyzer
- eventAnalyzer
- ingressAnalyzer
- statefulSetAnalyzer
- deploymentAnalyzer
- cronJobAnalyzer
- nodeAnalyzer
- hpaAnalyzer
- pdbAnalyzer
- networkPolicyAnalyzer
Kubernetes debugging powered by AI
Usage:
k8sgpt [command]
Available Commands:
analyze This command will find problems within your Kubernetes cluster
auth Authenticate with your chosen backend
completion Generate the autocompletion script for the specified shell
filters Manage filters for analyzing Kubernetes resources
generate Generate Key for your chosen backend (opens browser)
help Help about any command
integration Integrate another tool into K8sGPT
serve Runs k8sgpt as a server
version Print the version number of k8sgpt
Flags:
--config string config file (default is $HOME/.k8sgpt.yaml)
-h, --help help for k8sgpt
--kubeconfig string Path to a kubeconfig. Only required if out-of-cluster. (default "$HOME/.kube/config")
--kubecontext string Kubernetes context to use. Only required if out-of-cluster.
Use "k8sgpt [command] --help" for more information about a command.
Manage filters
List filters
k8sgpt filters list
Add default filters
k8sgpt filters add [filter(s)]
- Simple filter:
k8sgpt filters add Service
- Multiple filters:
k8sgpt filters add Ingress,Pod
Remove default filters
k8sgpt filters remove [filter(s)]
- Simple filter:
k8sgpt filters remove Service
- Multiple filters:
k8sgpt filters remove Ingress,Pod
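For example, to narrow a scan to networking resources, the filter commands above can be combined with the analyze flags shown below (the Ingress and Service names assume those filters are available in your filters list):
k8sgpt filters add Ingress,Service
k8sgpt filters list
k8sgpt analyze --explain --filter=Ingress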
Run a scan with the default analyzers
k8sgpt generate
k8sgpt auth
k8sgpt analyze --explain
Filter on resource
k8sgpt analyze --explain --filter=Service
Filter by namespace
k8sgpt analyze --explain --filter=Pod --namespace=default
Output to JSON
k8sgpt analyze --explain --filter=Service --output=json
Anonymize during explain
k8sgpt analyze --explain --filter=Service --output=json --anonymize
With this option, the data is anonymized before being sent to the AI Backend. During the analysis execution, k8sgpt
retrieves sensitive data (Kubernetes object names, labels, etc.). This data is masked when sent to the AI backend and replaced by a key that can be used to de-anonymize the data when the solution is returned to the user.
- Error reported during analysis:
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
- Payload sent to the AI backend:
Error: HorizontalPodAutoscaler uses StatefulSet/tGLcCRcHa1Ce5Rs as ScaleTargetRef which does not exist.
- Payload returned by the AI:
The Kubernetes system is trying to scale a StatefulSet named tGLcCRcHa1Ce5Rs using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.
- Payload returned to the user:
The Kubernetes system is trying to scale a StatefulSet named fake-deployment using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.
Anonymization does not currently apply to events.
Manage integrations
List integrations
k8sgpt integrations list
Activate integrations
k8sgpt integrations activate [integration(s)]
Use integration
k8sgpt analyze --filter=[integration(s)]
Deactivate integrations
k8sgpt integrations deactivate [integration(s)]
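As an illustrative example, assuming a Trivy integration is available and that activating it registers a VulnerabilityReport filter (both names are assumptions here; check k8sgpt integrations list for what your version actually provides), the flow would be:
k8sgpt integrations activate trivy
k8sgpt analyze --filter=VulnerabilityReport --explain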
Serve mode
k8sgpt serve
Analysis with serve mode
curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
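For example, start the server in one terminal and query it from another; the namespace and explain query parameters mirror those in the request above:
k8sgpt serve
# in a second terminal
curl -X GET "http://localhost:8080/analyze?namespace=default&explain=true"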
k8sgpt stores config data in the $XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml file. The data is stored in plain text, including your OpenAI key.
Config file locations:
| OS | Path |
|---|---|
| MacOS | ~/Library/Application Support/k8sgpt/k8sgpt.yaml |
| Linux | ~/.config/k8sgpt/k8sgpt.yaml |
| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml |
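If the file lives elsewhere, the global --config flag from the help output above lets you point k8sgpt at it; the path below is only an example:
k8sgpt analyze --config /path/to/k8sgpt.yaml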
Please read our contributing guide.
Find us on Slack