Describe the bug
No OpenAI o1 model works.
To Reproduce
Add the OpenAI provider using LiteLLM, and specify o1-mini or o1-preview.
Expected behavior
The model answers my questions.
Screenshots
N/A
Logging
Request body:
{
  "model": "o1-mini",
  "stream": true,
  "max_tokens": 16000,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful, respectful and honest coding assistant.\nAlways reply using markdown.\nBe clear and concise, prioritizing brevity in your responses.\nFor code refactoring, use markdown with appropriate code formatting."
    },
    {
      "role": "user",
      "content": "How are you?"
    },
    {
      "role": "assistant",
      "content": "I'm just a program, but I'm here and ready to help you! How can I assist you today?"
    },
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "temperature": 0.2
}
Request options:
{
  "hostname": "api.openai.com",
  "port": 443,
  "path": "/v1/chat/completions",
  "protocol": "https",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer <MY_API_KEY>"
  }
}
Number of characters in all messages = 318
workbench.desktop.main.js:671 ERR [Extension Host] Fetch error: Error: Server responded with status code: 400
at streamResponse (/home/impulse/.vscode-oss/extensions/rjmacarthy.twinny-3.17.4-linux-x64/out/index.js:73783:13)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
workbench.desktop.main.js:146 [Extension Host] Fetch error: Error: Server responded with status code: 400
at streamResponse (/home/impulse/.vscode-oss/extensions/rjmacarthy.twinny-3.17.4-linux-x64/out/index.js:73783:13)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
API Provider
LiteLLM
Chat or Auto Complete?
Chat
Model Name
o1-mini and o1-preview
Desktop (please complete the following information):
OS: Arch Linux
Browser: Zen (a Firefox fork)
Version: 3.17.5
Additional context
I think it has to do with the fact that these models do not support streaming.
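For reference, here is a minimal sketch of a non-streaming request that I would expect these models to accept. The key change is "stream": false; swapping max_tokens for max_completion_tokens and dropping the system message are assumptions on my part about the o1 beta limitations, not something I have verified through LiteLLM:

```typescript
// Minimal sketch (Node 18+, built-in fetch) of a non-streaming chat request.
// Assumptions: the o1 models reject "stream": true, expect
// "max_completion_tokens" rather than "max_tokens", and do not accept a
// system message — none of this is confirmed against the extension's code.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function completeWithoutStreaming(prompt: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "o1-mini",
      stream: false, // streaming appears to be what triggers the 400
      max_completion_tokens: 16000, // assumed o1 replacement for max_tokens
      messages: [{ role: "user", content: prompt }], // no system message
    }),
  });
  if (!response.ok) {
    throw new Error(`Server responded with status code: ${response.status}`);
  }
  const data: any = await response.json();
  return data.choices[0].message.content;
}
```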
Resolution
Let the user decide whether streaming is turned on or off in the provider's LiteLLM settings.
Thanks for the report. Currently only streaming is supported, but it should be easy enough to enable XHR (i.e. disable streaming) as you suggest. Many thanks.
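For anyone picking this up, a rough sketch of what the toggle could look like on the extension side. `streamResponse` is the existing helper named in the stack trace above (its signature here is assumed); `fetch`-based fallback, the `disableStreaming` setting, and `requestCompletion` are hypothetical names for illustration only:

```typescript
// The real streamResponse lives in the extension; this signature is assumed
// purely so the sketch compiles.
declare function streamResponse(
  body: Record<string, unknown>,
  onToken: (token: string) => void
): Promise<void>;

interface ProviderSettings {
  disableStreaming?: boolean; // hypothetical per-provider option
  apiKey: string;
}

async function requestCompletion(
  body: Record<string, unknown>,
  settings: ProviderSettings,
  onToken: (token: string) => void
): Promise<void> {
  if (settings.disableStreaming) {
    // Non-streaming path: one request, wait for the full completion,
    // then hand it to the UI in a single chunk.
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${settings.apiKey}`,
      },
      body: JSON.stringify({ ...body, stream: false }),
    });
    if (!response.ok) {
      throw new Error(`Server responded with status code: ${response.status}`);
    }
    const data: any = await response.json();
    onToken(data.choices[0].message.content);
  } else {
    // Existing SSE path, unchanged.
    await streamResponse({ ...body, stream: true }, onToken);
  }
}
```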