
Code completion broken (nonsense output) #338

Open
vrgimael opened this issue Oct 2, 2024 · 3 comments

vrgimael commented Oct 2, 2024

Describe the bug
The autocomplete suggestions returned are always nonsensical.

To Reproduce
I tried the VS Code extension with Ollama, using the model proposed in the tutorial.

Expected behavior
I expected output that would at the very least make syntactic sense, but it is always broken, as if the prompt were malformed (although I looked at the template code and the logged info, and it seems fine).

Screenshots
Screenshot 2024-10-02 at 21 11 17

Screenshot 2024-10-02 at 21 11 33

API Provider
Ollama

Chat or Auto Complete?
Autocomplete (I haven't really tested the chat)

Model Name
codellama:7b-code, qwen2.5-coder:7b-base, stable-code:3b-code, deepseek-coder:6.7b-base

Desktop (please complete the following information):

  • OS: macOS
  • Version: 15.0

Additional context
I also tried the config with several different templates, with similarly broken (but different) results.

Please let me know if any further information is needed.

@rjmacarthy
Collaborator

Your settings look correct. Please could you share the debug output? Many thanks.


atljoseph commented Oct 22, 2024

I ran into the same issue. Every code completion is absolutely, completely, positively off-point, to the degree that I immediately turned it off and said "NOPE". It was worse than Continue's completion.

It does the same on my Mac and on Ubuntu.

In styles.css:

nav ul li {
    display: inline;
    ```css    <----- The suggestion
    margin-right: 10px;
}

Debug

[Extension Host] [twinny] Twinny Stream Debug
Streaming response from 192.168.50.44:11434.
Request body:
{
  "model": "qwen2.5-coder:1.5b",
  "prompt": "/**/ \n\n/* Language: CSS (css) */\n/* File uri: file:///home/joseph/ai/my-vscode-extension/webview/src/styles.css (css) */\nbody {\n    font-family: Arial, sans-serif;\n    margin: 0;\n    padding: 0;\n    background-color: #333;\n}\n\nnav {\n    background-color: #6b654a;\n    color: rgb(226, 227, 194);\n    padding: 10px;\n}\n\nnav ul {\n    list-style-type: none;\n    padding: 0;\n}\n\nnav ul li {\n    display: inline;\n    mar  \n    margin-right: 10px;\n}\n\nnav ul li a {\n    color: rgb(226, 227, 194);\n    font-weight: bolder;\n    /* text-decoration: none; */\n}\n\n#content {\n    padding: 20px;\n}\n ",
  "stream": true,
  "keep_alive": "5m",
  "options": {
    "temperature": 0.2,
    "num_predict": 512
  }
}

Request options:
{
  "hostname": "123.123.123.123",
  "port": 11434,
  "path": "/api/generate",
  "protocol": "http",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": ""
  }
}
[Extension Host] [twinny] Streaming response end due to multiline not required 22
Completion: ```

Collaborator

rjmacarthy commented Oct 22, 2024

Hey @atljoseph, thanks for the report. The reason you are getting bad output is that you're using an instruct model for FIM completions. You should use a base model instead.

Based on your debug output you are using qwen 1.5b, so I'd recommend https://ollama.com/library/qwen2.5-coder:1.5b-base for you. FYI: I have not tested this model myself, so I cannot guarantee its accuracy; however, the 7b works perfectly for me.

Please refer to the documentation for more supported models using FIM:

https://twinnydotdev.github.io/twinny-docs/general/supported-models/

Many thanks,
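For anyone hitting the same wall, a quick way to sanity-check FIM output outside the extension is to send a hand-built FIM prompt straight to Ollama's /api/generate endpoint and inspect the raw completion. The sketch below is only illustrative: it assumes a local Ollama on the default port, the 1.5b base tag recommended above, and Qwen2.5-Coder's FIM tokens (<|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>); adjust the host and model to match your setup.

```typescript
// Sanity-check FIM completions against Ollama directly (Node 18+, global fetch).
// Assumed values: a local Ollama host and the base model tag suggested above.
const host = "http://localhost:11434";
const model = "qwen2.5-coder:1.5b-base";

// Mirror the styles.css example above: the cursor sits between prefix and suffix.
const prefix = "nav ul li {\n    display: inline;\n    mar";
const suffix = "\n    margin-right: 10px;\n}\n";
const prompt = `<|fim_prefix|>${prefix}<|fim_suffix|>${suffix}<|fim_middle|>`;

async function main(): Promise<void> {
  const res = await fetch(`${host}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt,
      raw: true,       // bypass Ollama's prompt template so the FIM tokens reach the model verbatim
      stream: false,
      options: { temperature: 0.2, num_predict: 64 },
    }),
  });
  const data = await res.json();
  // A base model should produce a plausible CSS fragment here; an instruct model
  // tends to echo markdown fences like the ```css seen above.
  console.log(JSON.stringify(data.response));
}

main().catch(console.error);
```

If the base model returns sensible CSS here but the extension still inserts fenced markdown, the problem is more likely in the template or configuration than in the model itself.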
