Feat: add support for Deepseek model #113
Closed
zeroaddresss started this conversation in Ideas
-
I wasn't aware of this. Very interesting, and thanks for sharing. I'd absolutely be open to a PR on this. I've written a guide on how to create an adapter if you'd like to make a PR for this.
-
You don't need to implement a new adapter, because DeepSeek's API is fully compatible with OpenAI's. You just need to extend the built-in OpenAI adapter with a configuration like this:

```lua
require("codecompanion").setup({
  adapters = {
    deepseek = require("codecompanion.adapters").extend("openai", {
      env = {
        api_key = "cmd:gpg --decrypt ~/.deepseek-api-key.gpg 2>/dev/null",
      },
      url = os.getenv("DEEPSEEK_API_BASE") .. "/chat/completions",
      schema = {
        model = {
          default = "deepseek-chat",
          choices = {
            "deepseek-coder",
            "deepseek-chat",
          },
        },
        max_tokens = {
          default = 8192,
        },
        temperature = {
          default = 1,
        },
      },
    }),
  },
})
```
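Two notes on the snippet: the `cmd:` prefix in `env.api_key` tells codecompanion to fetch the key at runtime by running the given shell command, so the key never sits in your config in plain text. And since `os.getenv` returns `nil` for an unset variable (which would make the `..` concatenation error out on startup), make sure `DEEPSEEK_API_BASE` is exported before Neovim launches; DeepSeek's documented base URL is `https://api.deepseek.com`.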
-
Ollama offers the possibility to use Deepseek-coder-v2 model (one of the cheapest and best performing models for coding), but the 236B model is really heavy to be run locally. However, the deepseek API allows to use it, and it has great results as a code assistant.
It would be awesome if it could be integrated within the codecompanion plugin.
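For anyone who wants the smaller local build in the meantime, the same `extend` pattern should work with codecompanion's built-in `ollama` adapter. A minimal, untested sketch, assuming the model has already been pulled with `ollama pull deepseek-coder-v2`:

```lua
-- Sketch only: reuses the extend() pattern from the comment above,
-- pointing codecompanion's "ollama" adapter at a locally pulled model.
require("codecompanion").setup({
  adapters = {
    ollama = require("codecompanion.adapters").extend("ollama", {
      schema = {
        model = {
          -- the 16B "lite" build from the Ollama library; the 236B
          -- variant is the one that's too heavy to run locally
          default = "deepseek-coder-v2",
        },
      },
    }),
  },
})
```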