Fix issue #4604: '[Bug]: Disable function calling for DeepSeek' #4616
Conversation
""" | ||
# Set function calling support for DeepSeek models | ||
if 'deepseek' in self.model.lower(): | ||
self.supports_function_calling = True |
The changes to this file are not necessary. You can revert this file to the main branch version. Make sure to revert only this file, and commit.
```diff
@@ -80,17 +80,32 @@ def serialize_model(self) -> dict:
             elif isinstance(item, ImageContent) and self.vision_enabled:
                 content.extend(d)

+        # For DeepSeek, we need to ensure content is a string when using tool calls
```
For DeepSeek we need to ensure content is a string. But this change in message.py applies to all LLMs supported by liteLLM! That is a serious problem. We need to make the content into a single string ONLY for some LLMs:
- those that don't support lists of dicts in content, but require a string instead.

This is important! The current function calling implementation uses the format of the Anthropic Claude variants. Think again about what we need to do to keep it working for Anthropic, and implement it differently for DeepSeek, while NOT breaking everything else.

This can help:
- verify how we define SUPPORT_PROMPT_CACHE_MODELS in llm.py
- verify how we use such module globals in llm.py
- DeepSeek might work with function calling if it gets the right format, with a single string for content, at least for the system message. Make a special case this time, so that you don't break everything else.
- there are LLMs that don't work with prompt caching, nor with vision, nor with function calling. For those, make a second serialization algorithm that turns their contents into one content string per message (not a list of contents); see the sketch after this list.
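A minimal sketch of that second serialization path. The names here (SINGLE_STRING_CONTENT_MODELS, model_needs_string_content, flatten_content) are illustrative assumptions, mirroring how SUPPORT_PROMPT_CACHE_MODELS is defined and consumed in llm.py:

```python
# Sketch only: all names below are hypothetical, following the
# SUPPORT_PROMPT_CACHE_MODELS pattern already used in llm.py.

# Module-level list of model-name substrings whose messages must carry
# a plain string as content instead of a list of content blocks.
SINGLE_STRING_CONTENT_MODELS = [
    'deepseek',
]


def model_needs_string_content(model: str) -> bool:
    """Match the configured model name against the list, case-insensitively."""
    return any(name in model.lower() for name in SINGLE_STRING_CONTENT_MODELS)


def flatten_content(content) -> str:
    """Second serialization path: collapse a list of content dicts into one
    string per message, dropping non-text blocks (no vision, no caching)."""
    if isinstance(content, str):
        return content
    parts = [
        item.get('text', '')
        for item in content
        if isinstance(item, dict) and item.get('type') == 'text'
    ]
    return '\n'.join(parts)
```

Keeping the model list as a module global makes the special case easy to extend to other string-only models later, without touching the serializer again.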
```diff
+                        'arguments': tc.function.arguments
+                    }
+                } for tc in self.tool_calls
+            ]
```
This change also breaks everything else in the attempt to make it work for DeepSeek. Use a special case or a dedicated attribute for DeepSeek rather than changing the shared format; see the sketch below.
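One possible shape for that special case, as a sketch: force_string is a hypothetical flag the LLM layer would derive from the model name, and flatten_content is the helper sketched above. The default branch keeps today's Anthropic-style list-of-blocks serialization untouched:

```python
def serialize_content(message_role: str, content, force_string: bool) -> dict:
    """Choose between the two serialization paths (sketch, hypothetical names)."""
    if force_string:
        # DeepSeek path: one plain string per message; tool_calls stay unchanged.
        return {'role': message_role, 'content': flatten_content(content)}
    # Default path: existing behavior (list of content blocks) for all other models.
    return {'role': message_role, 'content': content}
```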
```diff
-            self.model_info is not None
-            and self.model_info.get('supports_function_calling', False)
+            (self.model_info is not None and self.model_info.get('supports_function_calling', False))
+            or 'deepseek' in self.config.model.lower()  # DeepSeek models support function calling
```
You don't need to check whether DeepSeek supports function calling. Let's assume model_info already returns True (it does); the reverted check is sketched below for reference.
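A sketch of the reverted check, relying on model_info alone (the helper name is illustrative; in llm.py this check lives on the LLM wrapper):

```python
from typing import Optional


def function_calling_active(model_info: Optional[dict]) -> bool:
    # litellm's model_info already reports supports_function_calling=True
    # for DeepSeek models, so no extra model-name check is needed.
    return model_info is not None and model_info.get('supports_function_calling', False)
```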
**End-user friendly description of the problem this fixes or functionality that this introduces**

**Give a summary of what the PR does, explaining any non-trivial design decisions**

This PR attempts to fix DeepSeek not working with the expected function calling format.

**Link of any specific issues this addresses**

#4604