To restore the behavior where the LLM processes the tool's response and composes a creative answer in LangGraph, as it did with AgentExecutor, you need to ensure that the graph routes the tool's output back to the agent node, so that the conversation history, including the tool's response, is sent to the LLM again before anything is returned to the user. Additionally, you can leverage LangChain's standard interface methods such as …
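For illustration, here is a minimal, dependency-free sketch of the loop being described (a stub model and a hypothetical `get_prices` tool stand in for real LangChain/LangGraph APIs): the tool's result is appended to the message history and the model is invoked again, so the final reply is composed by the LLM rather than being the raw tool return. In LangGraph terms, this corresponds to an edge from the tool node back to the agent node.

```python
# Sketch of the agent loop that restores the AgentExecutor behavior:
# after a tool runs, its result is fed back to the model as a message,
# and the model is called AGAIN to compose the user-facing answer.

def get_prices(_query):
    # Hypothetical tool: returns raw structured data, not prose.
    return {"apple": 1.20, "banana": 0.50, "cherry": 3.00}

def stub_llm(messages):
    # Stand-in for a chat model. If the last message is a tool result,
    # compose a natural-language answer; otherwise request the tool.
    last = messages[-1]
    if last["role"] == "tool":
        lines = [f"{name} costs ${price:.2f}" for name, price in last["content"].items()]
        return {"role": "assistant", "content": "; ".join(lines), "tool_call": None}
    return {"role": "assistant", "content": "", "tool_call": "get_prices"}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = stub_llm(messages)
        messages.append(reply)
        if reply["tool_call"] is None:       # no tool requested -> done
            return reply["content"]
        result = get_prices(user_input)      # execute the requested tool
        # Key step: append the tool output for the model to process,
        # instead of returning it verbatim to the user.
        messages.append({"role": "tool", "content": result})

print(run_agent("What do apples, bananas and cherries cost?"))
```

If the graph instead ends after the tool node (no edge back to the agent), the user receives the dict unchanged, which matches the symptom described below.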
Description
Hello @dosu,
I have a question: I migrated my multi-agent project, which uses various tools, from AgentExecutor to LangGraph for a basic initial version, and it is working. However, in the AgentExecutor version, when a tool returned its response, the LLM would process that return and compose a creative answer. For example, I could return a JSON object with multiple fields, and the LLM would use that information to craft its reply.
Now, in LangGraph, I am not getting the same effect: the text returned by the tool is passed along to the user exactly as it is.
For example, I have a tool that fetches the prices of three products. It returns a dict with each product's name and price, and in the AgentExecutor version the LLM would format a nice response from it. In the LangGraph version, this no longer happens.
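As a concrete stand-in for the tool described (the product names and prices here are invented for illustration), such a tool returns structured data rather than prose; if the graph does not feed this dict back through the model, the user sees it verbatim:

```python
# Hypothetical price-lookup tool: it returns a dict of product -> price,
# i.e. raw data with no natural-language formatting. Some later LLM call
# has to turn this into a readable answer.
def get_product_prices(products):
    """Return current prices for the given product names (stub data)."""
    catalog = {"notebook": 4.99, "pen": 1.25, "backpack": 29.90}
    return {name: catalog[name] for name in products if name in catalog}

print(get_product_prices(["notebook", "pen", "backpack"]))
# -> {'notebook': 4.99, 'pen': 1.25, 'backpack': 29.9}
```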
What do I need to adjust in my migrated code to restore this behavior?
Thank you.
System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-qdrant==0.1.0
langchain-text-splitters==0.2.1