Code completion broken (nonsense output) #338
Comments
Your settings look correct. Could you please share the debug output? Many thanks.
I ran into the same issue. Every code completion is absolutely, completely, positively off-point, to the degree that I immediately turned it off and said "NOPE". It was worse than Continue's completion. It does the same on my Mac and on Ubuntu. In styles.css, completing after `nav ul li {`:

Debug:

[Extension Host] [twinny] Twinny Stream Debug
 /**/

/* Language: CSS (css) */
/* File uri: file:///home/joseph/ai/my-vscode-extension/webview/src/styles.css (css) */
body {
 font-family: Arial, sans-serif;
 margin: 0;
 padding: 0;
 background-color: #333;
}

nav {
 background-color: #6b654a;
 color: rgb(226, 227, 194);
 padding: 10px;
}

nav ul {
 list-style-type: none;
 padding: 0;
}

nav ul li {
 display: inline;
 mar 
 margin-right: 10px;
}

nav ul li a {
 color: rgb(226, 227, 194);
 font-weight: bolder;
 /* text-decoration: none; */
}

#content {
 padding: 20px;
}
Hey @atljoseph, thanks for the report. The reason you are getting bad output is that you're using an instruct model for FIM completions. You should use a base model instead. Based on your debug output you are using qwen 1.5b, so I'd recommend https://ollama.com/library/qwen2.5-coder:1.5b-base for you. FYI: I have not tested this model myself so I cannot guarantee its accuracy; however, the 7b works perfectly for me. Please refer to the documentation for more supported models using FIM: https://twinnydotdev.github.io/twinny-docs/general/supported-models/ Many thanks,
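For context: FIM (fill-in-the-middle) completion sends the model a raw prompt with special delimiter tokens around the cursor position, and only base models are trained to continue such prompts. Instruct models expect a chat template and tend to echo or rewrite the whole file instead, which matches the debug output above. Here is a minimal sketch of a raw FIM request against Ollama's /api/generate endpoint, assuming the default localhost port and Qwen2.5-Coder's published FIM tokens; this is illustrative only, not twinny's actual request code:

```python
import json
import urllib.request

# Qwen2.5-Coder base models are trained with these FIM delimiter tokens;
# an instruct model has no reliable behaviour for them.
prefix = "nav ul li {\n    display: inline;\n    "
suffix = "\n}\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # default Ollama endpoint
    data=json.dumps({
        "model": "qwen2.5-coder:1.5b-base",
        "prompt": prompt,
        "raw": True,     # bypass Ollama's chat template so the FIM tokens reach the model verbatim
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # A base model should return only the missing span,
    # e.g. "margin-right: 10px;"
    print(json.loads(resp.read())["response"])
```

Given the same prompt, an instruct model typically produces the kind of full-file rewrite shown in the debug output above.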
Describe the bug
The autocomplete suggestions returned are always more or less nonsense.
To Reproduce
I tried the VS Code extension with Ollama, using the model proposed in the tutorial.
Expected behavior
I expected output that would at the very least be syntactically valid, but it is always broken, as if the prompt were incorrect (however, I looked at the template code and the logged info, and both seem fine).
Screenshots
API Provider
Ollama
Chat or Auto Complete?
Autocomplete (I haven't really tested the chat)
Model Name
codellama:7b-code, qwen2.5-coder:7b-base, stable-code:3b-code, deepseek-coder:6.7b-base
Desktop (please complete the following information):
Additional context
I also tried the config with several different templates, with similarly broken (though not identical) results.
Please let me know if any further information is needed.
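On the templates point: each FIM-capable model family defines its own delimiter tokens, and a prompt built with the wrong family's tokens (or sent to an instruct model) produces exactly this kind of broken output. An illustrative sketch using the published infill formats for two of the models listed above; the helper function is hypothetical, not twinny's actual template code:

```python
# Illustrative only: each FIM-capable model family defines its own delimiters.
# Sending one family's tokens to another model yields garbled completions.
FIM_TEMPLATES = {
    # CodeLlama infill format
    "codellama:7b-code": "<PRE> {prefix} <SUF>{suffix} <MID>",
    # DeepSeek-Coder base infill format
    "deepseek-coder:6.7b-base": "<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>",
}

def build_fim_prompt(model: str, prefix: str, suffix: str) -> str:
    """Render the raw FIM prompt a given base model expects (hypothetical helper)."""
    return FIM_TEMPLATES[model].format(prefix=prefix, suffix=suffix)

print(build_fim_prompt("codellama:7b-code", "nav ul li {\n    ", "\n}"))
```

If a template renders the wrong tokens for the configured model, the model treats them as ordinary text, which would explain similar breakage across several templates.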