# local-ai-code-completion README

Enables AI-assisted code completion, similar to GitHub Copilot, completely locally. No code leaves your machine. This has two major benefits:

- Cost. This extension is completely free to use.
- Privacy. No data is shared with third parties; everything stays on your computer.

## Features

AI-assisted code completion:

- You trigger code completion by pressing `Ctrl+Alt+C`.
- You accept a completion by pressing `Tab`.
- You cancel an ongoing completion by pressing `Escape`.
- You delete a generated but not yet accepted completion by pressing `Escape`.

*Usage example GIF (the recording is sped up).*
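If the default `Ctrl+Alt+C` chord clashes with another extension, VS Code lets you rebind it in `keybindings.json`. A minimal sketch, assuming a command ID that is purely illustrative (look up the real one under the extension's Feature Contributions tab in VS Code):

```jsonc
// keybindings.json — the command ID below is hypothetical, for illustration only
[
  {
    "key": "ctrl+alt+space",                                  // your preferred chord
    "command": "local-ai-code-completion.triggerCompletion",  // assumed command ID
    "when": "editorTextFocus"                                 // only while editing text
  }
]
```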

The extension uses CodeLlama 7B under the hood, which supports many languages including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.

According to Meta's evaluation results, CodeLlama 7B is almost on par with Codex, the model used by GitHub Copilot.

## Requirements

This extension requires an Ollama installation to run the language model locally. Ollama does not currently support Windows, so this extension is not compatible with Windows either.
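A minimal setup sketch for Linux or macOS, assuming the official install script and the `codellama:7b` model tag (check the Ollama documentation and this extension's settings for the exact names):

```sh
# Install Ollama (Linux; on macOS you can instead download the app from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the CodeLlama 7B weights; the exact tag the extension expects is an assumption here
ollama pull codellama:7b
```

By default Ollama serves its API on `http://localhost:11434`, which is where the extension connects unless configured otherwise (see the `baseUrl` option under Release Notes below).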

## Known Issues

- Time to start generating can be very long. This is inherent to running the model locally on your computer.
- Inference is slow. Also a consequence of running the model locally; speed depends on your system.

## Release Notes

### 1.2.0

#### Added

- Config option for generation timeout
- Config option for the `baseUrl` of the Ollama API (enables use of the extension with a remote or local Ollama server; see the sketch below)
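For illustration, a sketch of how these options (together with the model, temperature, and top_p options from 1.1.0 below) might look in `settings.json`. The setting keys and values are assumptions, not the extension's documented names; check the extension's settings page in VS Code for the real ones:

```jsonc
// settings.json — hypothetical setting keys and values, for illustration only
{
  "localAiCodeCompletion.baseUrl": "http://localhost:11434", // Ollama's default port
  "localAiCodeCompletion.generationTimeout": 30000,          // assumed to be milliseconds
  "localAiCodeCompletion.model": "codellama:7b",
  "localAiCodeCompletion.temperature": 0.2,
  "localAiCodeCompletion.topP": 0.95
}
```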

#### Changed

- Improved logging

#### Fixed

- Bug where aborting generation would not work

Thanks to @johnnyasantoss for making these changes.


### 1.1.0

Added options for changing the model, temperature, and top_p parameters.


### 1.0.0

Initial release.