Replies: 4 comments
-
I've modified my configuration with this hook to override the form_messages function:
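Roughly along these lines — a sketch rather than my exact config; the adapter name and URL are placeholders, and the extend call and handler signature follow the plugin's adapters documentation, so check them against your version:

```lua
require("codecompanion").setup({
  adapters = {
    -- "my_llm" is a placeholder name for the in-house endpoint.
    my_llm = function()
      return require("codecompanion.adapters").extend("openai_compatible", {
        env = {
          url = "http://localhost:8080", -- placeholder; point this at your own server
        },
        handlers = {
          -- Rebuild each message with only the OpenAI-style fields so the
          -- endpoint never sees the plugin's internal bookkeeping fields.
          form_messages = function(self, messages)
            local cleaned = {}
            for _, msg in ipairs(messages) do
              table.insert(cleaned, { role = msg.role, content = msg.content })
            end
            return { messages = cleaned }
          end,
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = "my_llm" },
  },
})
```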
This now works with the specific LLM that I have selected, but it doesn't seem like the correct way to use this plugin. Am I missing something? On another note, I can't use my current configuration with the other OpenAI-compatible models available to me, because those models reject the many additional parameters in the schema. Is there a prescribed way to remove these?
-
Update: In order to make the openai_compatible adapter work with my local Meta-Llama-3.1 LLM interface, I needed to remove all optional definitions from the schema defined in the openai_compatible adapter. There must be a better way! Please give me a hint if anyone has recommendations. I'm not a Lua programmer and I'm pretty sure I shouldn't be in there!
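For context, the kind of pruning I mean looks roughly like the sketch below, done from the config side rather than by editing the plugin source. The field names, and the assumption that extend() returns an adapter table exposing schema, need verifying against your plugin version:

```lua
local function my_llm()
  -- Extend the stock adapter, then prune the optional schema entries that the
  -- local endpoint rejects.
  local adapter = require("codecompanion.adapters").extend("openai_compatible", {
    env = { url = "http://localhost:8080" }, -- placeholder endpoint
  })

  -- Keep only the parameters the local endpoint accepts; drop the rest.
  local keep = { model = true, temperature = true, max_tokens = true }
  for key in pairs(adapter.schema) do
    if not keep[key] then
      adapter.schema[key] = nil
    end
  end

  return adapter
end
```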
-
The adapters in the plugin map directly to the LLMs that they serve. I've created them based on the documentation available on each provider's own site, so I'd advise doing the same for yours. Have you read the adapters documentation in the plugin? It walks through how to create an adapter and will give much better insight into what you may need to do to customize yours. As an aside, I've used Llama 3.1 with Ollama; maybe that adapter and its default settings could be a useful starting point.
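For example, something along these lines could be a starting point — just a sketch; the adapter key, model tag and schema field are placeholders to check against the adapters documentation:

```lua
require("codecompanion").setup({
  adapters = {
    -- Placeholder name; extends the built-in Ollama adapter for a local Llama 3.1.
    llama31 = function()
      return require("codecompanion.adapters").extend("ollama", {
        schema = {
          model = {
            default = "llama3.1:8b", -- whichever tag you've pulled locally
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = "llama31" },
  },
})
```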
-
Thank you for the information! Yes, I've read the adapters documentation a couple of times now. I guess I just wasn't sure what the appropriate workflow would be for using the adapters. It sounds like it would be normal to create a custom adapter for each LLM that I'm trying to interface with, so I'll go ahead and do it this way. Also, thanks for the pointer on using the Ollama adapter with Llama 3.1; I'll give this a shot. I'm not very experienced with using LLMs in this way, so I wasn't certain whether Ollama and Llama were similar in name only. Thank you for the awesome plugin! It's increased my productivity twofold at least.
-
Hello, I've configured the plugin to use the openai_compatible adapter for an in-house LLM. When I send a request to the LLM, I get an error message. I've changed the system prompt below so the messages are easier to read for debugging purposes. The output below is from the codecompanion.log file.
Here is the test message that I'm sending using the chat strategy:
And here is the error response:
It appears that my LLM does not like receiving the id and opts fields. Is this a valid observation? If so, is there an easy way to remove these from the generated message?
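To illustrate what I mean, each message appears to look roughly like the first table below, while I suspect the endpoint only wants the second — the exact shape of the extra fields is a guess on my part from reading the log:

```lua
-- What the chat strategy appears to be sending (simplified, illustrative only):
local message_as_sent = {
  role = "user",
  content = "Hello LLM",
  id = 123456,               -- internal bookkeeping the endpoint rejects
  opts = { visible = true }, -- internal bookkeeping the endpoint rejects
}

-- What a strict OpenAI-compatible endpoint typically expects:
local message_expected = {
  role = "user",
  content = "Hello LLM",
}
```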
Thanks for any insight!