
Sublime-LSP Local Code-Assistant Support #2520

Open
fruffy opened this issue Sep 23, 2024 · 7 comments
Labels
enhancement, protocol updates (Issues related to changes in the LSP protocol. Usually because we're out-of-date.)

Comments

@fruffy

fruffy commented Sep 23, 2024

There is currently no first-class local code-assistant plugin for Sublime-LSP. GitHub Copilot and Codeium plugins exist, but they require a consistent internet connection and use proprietary models.

It would be nice to have good support for a local model. There seems to be some interest in integrating tabby with Sublime-LSP: TabbyML/tabby#219

Unfortunately, I do not know how to build such a plugin. How easy would it be to add such support or to integrate Sublime-LSP with a code assistant?

@predragnikolic
Member

predragnikolic commented Sep 23, 2024

Is your request to have an LSP-tabby package?
Does tabby support the LSP spec?

Not so related, but have you maybe seen https://github.com/yaroslavyaroslav/OpenAI-sublime-text ? I guess the OpenAI package doesn't have ghost text completion items. But it does support having a local model.

@fruffy
Author

fruffy commented Sep 23, 2024

> Is your request to have an LSP-tabby package?

Yes, tabby could be a good target. It looks like they have LSP support: https://github.com/TabbyML/tabby/tree/main/clients/example-vscode-lsp

> Not so related, but have you maybe seen https://github.com/yaroslavyaroslav/OpenAI-sublime-text ? I guess the OpenAI package doesn't have ghost text completion items. But it does support having a local model.

Yes, I have looked at this repository and I think it is neat. But as you said, it doesn't offer ghost text completion, which IMHO is a huge value-add.

@jwortmann
Member

jwortmann commented Oct 3, 2024

It seems like the LSP server is implemented via tabby-agent (https://www.npmjs.com/package/tabby-agent) and it is a server (the first one?) with support for textDocument/inlineCompletion 🚀

This client currently doesn't support inline completions ("ghost text completion"), but it would be interesting to see how they could be implemented and how the UI for them could work (regular textDocument/completion is apparently supported by the server too).
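
For context, a client would advertise support for this via its capabilities in the initialize request; per the proposed 3.18 spec that part would look roughly like this (sketched here as a Python dict; the shape may still change while the feature is a proposal):

# Relevant fragment of ClientCapabilities for the proposed
# textDocument/inlineCompletion request (LSP 3.18, proposed).
client_capabilities = {
    "textDocument": {
        "inlineCompletion": {
            "dynamicRegistration": False,
        },
    },
}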

So I tried to set up the server on a Windows PC for testing:

  1. From https://github.com/TabbyML/tabby/releases downloaded and extracted tabby_x86_64-windows-msvc.zip
  2. Downloaded model: .\tabby.exe download --model StarCoder-1B (1.3 GB, also see https://tabby.tabbyml.com/docs/models/)
  3. Installed tabby-agent: npm install -g tabby-agent
  4. Added config in LSP.sublime-settings, e.g.
    {
        "clients": {
            "tabby": {
                "enabled": true,
                "command": ["tabby-agent", "--stdio"],
                "selector": "source.python",
                "disabled_capabilities": {
                    "completionProvider": true
                }
            },
        }
    }
  5. Manually started the background server: .\tabby.exe serve (it will initiate another download of 140 MB)
  6. Opened http://localhost:8080 in a browser
  7. Now this page asks me to create an admin account, including name, email, password. This seems to be mandatory (https://tabby.tabbyml.com/docs/quick-start/register-account/) and this is where I quit.
  8. Then you probably need to paste an authorization token into a ~/.tabby-client/agent/config.toml file, which is read by tabby-agent (https://tabby.tabbyml.com/docs/extensions/configurations/).

Maybe I will try again when there is a downloadable server / local model which doesn't require you to sign up for an account. Perhaps these AI models (still?) require a lot of computing power at the moment, so it might be expected that you set up the server on a separate machine or so (just guessing). If anyone else has already set up this server or wants to, feel free to share your experiences and whether it works. As said before, inline completions are not yet supported by this client, but apparently it should/might work with regular completions too.
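
For reference, the request and response shapes for textDocument/inlineCompletion per the proposed 3.18 spec look roughly like this (sketched as Python dicts; the field values are made up for illustration, and servers may only fill in a subset of the optional fields):

# textDocument/inlineCompletion params (LSP 3.18, proposed):
params = {
    "textDocument": {"uri": "file:///path/to/example.py"},
    "position": {"line": 10, "character": 4},  # zero-based caret position
    "context": {
        # InlineCompletionTriggerKind: 1 = Invoked (explicit user gesture),
        # 2 = Automatic (while typing), per the proposed spec.
        "triggerKind": 1,
    },
}

# A successful response is an InlineCompletionList (or a plain item array):
response = {
    "items": [
        {
            "insertText": "for i in range(10):\n    print(i)",
            # Optional: the range of existing text to replace with insertText.
            "range": {
                "start": {"line": 10, "character": 0},
                "end": {"line": 10, "character": 4},
            },
        },
    ],
}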

@fruffy
Copy link
Author

fruffy commented Oct 4, 2024

Thanks! This is a very useful investigation.

jwortmann added the enhancement and protocol updates labels on Oct 12, 2024
@jwortmann
Member

Update: it seems that running tabby without an authorization token is already supported. The trick is to start tabby with the --no-webserver argument. Besides that, one needs to specify the model explicitly when starting tabby, otherwise it doesn't work. For example:

tabby.exe serve --model StarCoder-1B --no-webserver

After testing this, unfortunately I got a timeout after 30 seconds and a response with no results for the textDocument/inlineCompletion request. This was with the downloaded CPU version of tabby, which apparently is too slow even for the smallest model.

So now I downloaded tabby_x86_64-windows-msvc-cuda122.zip from the releases page on GitHub. This also requires the CUDA toolkit, which is another 3 GB download: https://developer.nvidia.com/cuda-downloads
After installing the latest version (12.6 Update 2) of the toolkit, tabby-agent & tabby with the GPU seem to work fine now (for now I have a local implementation which just sends the inline completion request, but doesn't render anything yet).

@fruffy
Author

fruffy commented Oct 25, 2024

Thanks, I actually was able to get this to run using your setup! But I had to remove

"disabled_capabilities": {
    "completionProvider": true
}

and the generated code is not well formatted yet; something is still off with the output.

I am guessing this is the only kind of completion currently supported? Is ghost text completion the same kind of completion the Codeium plugin uses?

@jwortmann
Member

jwortmann commented Oct 26, 2024

I intentionally added the "disabled_capabilities" to the config above because the AI/LLM completions are relatively expensive to compute, and the completion popup only appears once the responses from all active language servers are ready. So I think it would be annoying to have a significant delay for the completion popup every time.

Inline completions could be handled independently, so they would not affect the regular completion popup. My initial idea, though, would be to request inline completions only on demand, when you explicitly trigger auto-complete via Ctrl + Space. But we probably need to experiment with that to see what kind of behavior works best for them. Or maybe provide a user setting for that.
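
To illustrate the on-demand idea, a minimal sketch of such a command in Sublime Text could look like the following. Everything here is hypothetical: the command name is made up, and request_inline_completion is just a placeholder for the request machinery this client would have to provide.

import sublime
import sublime_plugin


def request_inline_completion(view: sublime.View, point: int) -> None:
    # Hypothetical placeholder: a real implementation would look up the
    # language server session for this view, send a
    # textDocument/inlineCompletion request, and pass the response on to a
    # ghost-text renderer.
    pass


class RequestInlineCompletionCommand(sublime_plugin.TextCommand):
    """Hypothetical command that could be bound to Ctrl + Space alongside the
    built-in auto_complete command, so that inline completions are only
    requested on an explicit user gesture."""

    def run(self, edit: sublime.Edit) -> None:
        point = self.view.sel()[0].b  # caret of the first selection
        request_inline_completion(self.view, point)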

The wrong formatting for regular completions is a bug in tabby-agent, I believe. I reported an issue for that at TabbyML/tabby#3330


> Is ghost text completion the same kind of completion the Codeium plugin uses?

I haven't used that plugin, but I would guess that it is. I also like how an implementation in IntelliJ works by only presenting a single line instead of multi-line ghost text. They call it "full line completion" and I found this GIF in the docs:

[GIF: full line code completion in IntelliJ]

This has the advantage that the code doesn't temporarily shift to the bottom when a block phantom is added under the current line. But I'm not sure if this would be possible or easy to do with what is provided in the LSP response.
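
Just to sketch the idea: single-line ghost text could probably be rendered with an inline phantom, along these lines (a rough sketch, not an actual implementation; the styling and the truncation to the first line are my own assumptions):

import html

import sublime

PHANTOM_KEY = "inline_completion_ghost_text"  # arbitrary key for this sketch


def show_ghost_text(view: sublime.View, point: int, insert_text: str) -> None:
    # Keep only the first line of the completion, similar to IntelliJ's
    # "full line completion", so no block phantom is needed and the code
    # below the current line doesn't shift down.
    first_line = insert_text.split("\n", 1)[0]
    content = (
        '<span style="color: color(var(--foreground) alpha(0.4)); '
        'font-style: italic;">{}</span>'.format(html.escape(first_line))
    )
    # LAYOUT_INLINE renders the phantom in line with the surrounding text,
    # instead of in a separate block below the line.
    view.add_phantom(PHANTOM_KEY, sublime.Region(point), content,
                     sublime.LAYOUT_INLINE)


def hide_ghost_text(view: sublime.View) -> None:
    view.erase_phantoms(PHANTOM_KEY)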
