Describe the bug
I tried using twinny with deepseek-coder-v2:16b via ollama, but neither chat nor FIM seems to work. The following errors appear in the ollama logs:
[GIN] 2024/09/13 - 13:06:38 | 200 | 14.255666264s | 172.17.0.1 | POST "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/09/13 - 13:06:41 | 200 | 2.408644743s | 172.17.0.1 | POST "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/09/13 - 13:07:09 | 200 | 8.941356956s | 172.17.0.1 | POST "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/09/13 - 13:07:45 | 200 | 10.676653407s | 172.17.0.1 | POST "/api/generate"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/09/13 - 13:07:48 | 200 | 2.450888118s | 172.17.0.1 | POST "/api/generate"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
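The warning comes from llama.cpp, not from twinny itself, so it should be reproducible by hitting the endpoint directly. A minimal sketch, assuming ollama is listening on its default port 11434:

curl http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-coder-v2:16b",
    "messages": [{"role": "user", "content": "write a hello world in python"}],
    "stream": false
  }'
# Watch the ollama server log while this runs: check_double_bos_eos fires
# when the rendered prompt already starts with a BOS token and the server
# prepends another one.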
To Reproduce
Install twinny
Install ollama
Pull deepseek-coder-v2:16b
Configure providers in twinny (see screenshot below)
Try using chat or FIM (a quick curl sanity check of the ollama setup is sketched after this list)
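Before pointing twinny at the server, it can help to confirm the ollama side works on its own. A sketch with hypothetical defaults; adjust host/port to your setup:

ollama pull deepseek-coder-v2:16b
curl http://localhost:11434/api/tags   # deepseek-coder-v2:16b should appear in the model list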
Expected behavior
Both chat and FIM should return completions, without the double-BOS warnings in the ollama log.
Screenshots
If applicable, add screenshots to help explain your problem.
Logging
Enable logging in the extension settings if not already enabled (you may need to restart VSCode if you don't see logs). Provide the log with the report.
API Provider
ollama
Chat or Auto Complete?
Both chat and FIM
Model Name
deepseek-coder-v2:16b
Desktop (please complete the following information):
OS: Windows 11, WSL2
Additional context
Add any other context about the problem here.
Hello, I think this was my mistake: I updated the default chat path to /api/chat when it should be /v1/chat/completions. I just fixed it in the most recent version. For FIM, the path /api/generate should be working.
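For anyone verifying the fix by hand, both corrected endpoints can be exercised directly with curl. A sketch, not twinny's exact payloads — in particular, the FIM prompt twinny builds is model-specific, so a plain prompt stands in for it here:

# Chat via the OpenAI-compatible path the fix switches back to:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-coder-v2:16b", "messages": [{"role": "user", "content": "hello"}]}'

# FIM still goes through /api/generate:
curl http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-coder-v2:16b", "prompt": "def add(a, b):", "stream": false}'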