fish-ai adds AI functionality to Fish. It has been tested on macOS and Linux, but should run on any system with a supported version of Python and Git installed.

Originally based on Tom Dörr's fish.codex repository, but with some additional functionality.

It can be hooked up to OpenAI, Azure OpenAI, Google, Hugging Face, Mistral, Anthropic, GitHub or a self-hosted LLM behind any OpenAI-compatible API.

If you like it, please add a ⭐. If you don't like it, create a PR. 😆

🎥 Demo

Demo

👨‍🔧 How to install

Install fish-ai

Install the plugin. You can install it using fisher.

fisher install realiserad/fish-ai

Create a configuration

Create a configuration file ~/.config/fish-ai.ini.

If you use a self-hosted LLM:

[fish-ai]
configuration = self-hosted

[self-hosted]
provider = self-hosted
server = https://<your server>:<port>/v1
model = <your model>
api_key = <your API key>

If you are self-hosting, my recommendation is to use Ollama with Llama 3.1 70B. An out-of-the-box configuration running on localhost could then look something like this:

[fish-ai]
configuration = local-llama

[local-llama]
provider = self-hosted
model = llama3.1
server = http://localhost:11434/v1
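If you want to verify that the file is well formed before restarting your shell, you can parse it with Python's `configparser` (the configuration is standard INI; this helper is only an illustration and is not part of fish-ai):

```shell
python3 - <<'EOF'
import configparser, os

path = os.path.expanduser("~/.config/fish-ai.ini")
cfg = configparser.ConfigParser()
cfg.read(path)  # yields an empty config if the file is missing

if "fish-ai" in cfg and "configuration" in cfg["fish-ai"]:
    active = cfg["fish-ai"]["configuration"]
    status = "OK" if active in cfg else "MISSING SECTION"
    print("active configuration: [%s] %s" % (active, status))
else:
    print("no [fish-ai] section with a configuration key found in", path)
EOF
```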

If you use OpenAI:

[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>
organization = <your organization>

If you use Azure OpenAI:

[fish-ai]
configuration = azure

[azure]
provider = azure
server = https://<your instance>.openai.azure.com
model = <your deployment name>
api_key = <your API key>

If you use Gemini:

[fish-ai]
configuration = gemini

[gemini]
provider = google
api_key = <your API key>

If you use Hugging Face:

[fish-ai]
configuration = huggingface

[huggingface]
provider = huggingface
email = <your email>
password = <your password>
model = meta-llama/Meta-Llama-3.1-70B-Instruct

Available models are listed here. Note that 2FA must be disabled on the account.

If you use Mistral:

[fish-ai]
configuration = mistral

[mistral]
provider = mistral
api_key = <your API key>

If you use GitHub Models:

[fish-ai]
configuration = github

[github]
provider = self-hosted
server = https://models.inference.ai.azure.com
api_key = <paste GitHub PAT here>
model = gpt-4o-mini

You can create a personal access token (PAT) here. The PAT does not require any permissions.

If you use Anthropic:

[fish-ai]
configuration = anthropic

[anthropic]
provider = anthropic
api_key = <your API key>

🙉 How to use

Transform comments into commands and vice versa

Type a comment (anything starting with #), and press Ctrl + P to turn it into a shell command!

You can also run it in reverse. Type a command and press Ctrl + P to turn it into a comment explaining what the command does.
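For example, typing the comment below and pressing Ctrl + P could yield a command such as the one underneath it (the exact suggestion depends on the model you use):

```shell
# find all PDF files modified during the last week
find . -name '*.pdf' -mtime -7
```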

Autocomplete commands

Begin typing your command and press Ctrl + Space to display a list of completions in fzf (it is bundled with the plugin, no need to install it separately). Completions load in the background and show up as they become available.

Suggest fixes

If a command fails, you can immediately press Ctrl + Space at the command prompt to let fish-ai suggest a fix!

🤸 Additional options

You can tweak the behaviour of fish-ai by putting additional options in your fish-ai.ini configuration file.

Explain in a different language

To explain shell commands in a different language, set the language option to the name of the language. For example:

[fish-ai]
language = Swedish

This will only work well if the LLM you are using has been trained on a dataset with the chosen language.

Change the temperature

Temperature is a decimal number between 0 and 1 controlling the randomness of the output. Higher values make the LLM more creative, but may impact accuracy. The default value is 0.2.

Here is an example of how to increase the temperature to 0.5.

[fish-ai]
temperature = 0.5

This option is not supported when using the huggingface provider.

Number of completions

To change the number of completions suggested by the LLM when pressing Ctrl + Space, set the completions option. The default value is 5.

Here is an example of how you can increase the number of completions to 10:

[fish-ai]
completions = 10

Personalise completions using commandline history

You can personalise completions suggested by the LLM by sending an excerpt of your commandline history.

To enable it, specify the maximum number of commands from the history to send to the LLM using the history_size option. The default value is 0 (do not send any commandline history).

[fish-ai]
history_size = 5

If you enable this option, consider the use of sponge to automatically remove broken commands from your commandline history.

Disable the status emoji

By default, a status emoji is shown in the right prompt. If you already use your right prompt for something else, or just don't like emojis, you can disable them:

[fish-ai]
status_emoji = False

Restart any open terminal windows for the change to take effect.

Preview pipes

To send the output of a pipe to the LLM when completing a command, use the preview_pipe option.

[fish-ai]
preview_pipe = True

This will send the output of the longest consecutive pipe after the last unterminated parenthesis before the cursor. For example, if you autocomplete az vm list | jq, the output from az vm list will be sent to the LLM.

This behaviour is disabled by default, as it may slow down the completion process and lead to commands being executed twice.
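Putting it together, a configuration file combining several of the options above (reusing the local Ollama setup from the installation section) might look like this:

```ini
[fish-ai]
configuration = local-llama
language = Swedish
temperature = 0.5
completions = 10
history_size = 5
status_emoji = False
preview_pipe = True

[local-llama]
provider = self-hosted
model = llama3.1
server = http://localhost:11434/v1
```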

🎭 Switch between contexts

You can switch between different sections in the configuration using the fish_ai_switch_context command.

🐾 Data privacy

When using the plugin, fish-ai submits the name of your OS and the commandline buffer to the LLM.

When you codify or complete a command, it also sends the contents of any files you mention (as long as the file is readable), and when you explain or complete a command, the output from <command> --help is provided to the LLM for reference.

fish-ai can also send an excerpt of your commandline history when completing a command. This is disabled by default.

Finally, when fixing the previous command, the previous commandline buffer is sent to the LLM along with any terminal output and the corresponding exit code.

If you are concerned with data privacy, you should use a self-hosted LLM. When hosted locally, no data ever leaves your machine.

Redaction of sensitive information

The plugin attempts to redact sensitive information from the prompt before submitting it to the LLM. Sensitive information is replaced by the <REDACTED> placeholder.

The following information is redacted:

  • Passwords and API keys supplied on the commandline.
  • Base64 encoded data in single or double quotes.
  • PEM-encoded private keys.
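As a rough illustration of the effect, a password supplied on the commandline is replaced before the prompt leaves your machine. The sed one-liner below is a simplified sketch of the idea, not the plugin's actual implementation:

```shell
echo 'mysql -u root --password=hunter2' \
  | sed -E 's/(--password=)[^ ]+/\1<REDACTED>/'
# mysql -u root --password=<REDACTED>
```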

🔨 Development

If you want to contribute, I recommend reading ARCHITECTURE.md first.

This repository ships with a devcontainer.json which can be used with GitHub Codespaces or Visual Studio Code with the Dev Containers extension.

To install fish-ai from a local copy, use fisher:

fisher install .

Enable debug logging

Enable debug logging by putting debug = True in your fish-ai.ini. Logging is done to syslog by default (if available). You can also enable logging to file using log = <path to file>, for example:

[fish-ai]
debug = True
log = ~/.fish-ai/log.txt

Run the tests

The installation tests are packaged into containers and can be executed locally with e.g. docker.

docker build -f tests/ubuntu/Dockerfile .
docker build -f tests/fedora/Dockerfile .
docker build -f tests/archlinux/Dockerfile .

The Python modules containing most of the business logic can be tested using pytest.

Create a release

A release is created by GitHub Actions when a new tag is pushed.

set tag (grep '^version =' pyproject.toml | \
    cut -d '=' -f2- | \
    string replace -ra '[ "]' '')
git tag -a "v$tag" -m "🚀 v$tag"
git push origin "v$tag"