Replies: 4 comments 8 replies
-
🤖 Hey there, @Xakim1c! Great to see you back with another interesting challenge. Hope you've been doing well since our last chat! To incorporate user input into the Customer Creator agent before it proceeds to the Supervisor in the LangChain framework, you can modify the `create_conversational_retrieval_agent` function. Here's how:

```python
def create_conversational_retrieval_agent(
    llm: BaseLanguageModel,
    tools: List[BaseTool],
    user_input: str,  # new parameter for user input
    remember_intermediate_steps: bool = True,
    memory_key: str = "chat_history",
    system_message: Optional[SystemMessage] = None,
    verbose: bool = False,
    max_token_limit: int = 2000,
    **kwargs: Any,
) -> AgentExecutor:
    """A convenience method for creating a conversational retrieval agent.

    Args:
        llm: The language model to use; should be ChatOpenAI.
        tools: A list of tools the agent has access to.
        user_input: The user input to incorporate into the agent.
        remember_intermediate_steps: Whether the agent should remember
            intermediate steps or not. Intermediate steps refer to prior
            action/observation pairs from previous questions. The benefit of
            remembering these is that if there is relevant information in
            there, the agent can use it to answer follow-up questions. The
            downside is it will take up more tokens.
        memory_key: The name of the memory key in the prompt.
        system_message: The system message to use. By default, a basic one
            will be used.
        verbose: Whether or not the final AgentExecutor should be verbose,
            defaults to False.
        max_token_limit: The max number of tokens to keep around in memory.
            Defaults to 2000.

    Returns:
        An agent executor initialized appropriately.
    """
    if remember_intermediate_steps:
        memory: BaseMemory = AgentTokenBufferMemory(
            memory_key=memory_key, llm=llm, max_token_limit=max_token_limit
        )
    else:
        memory = ConversationTokenBufferMemory(
            memory_key=memory_key,
            return_messages=True,
            output_key="output",
            llm=llm,
            max_token_limit=max_token_limit,
        )
    _system_message = system_message or _get_default_system_message()
    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=_system_message,
        extra_prompt_messages=[
            MessagesPlaceholder(variable_name=memory_key),
            # Wrap the raw string as a message; extra_prompt_messages does
            # not accept a bare str. Requires importing HumanMessage
            # alongside SystemMessage.
            HumanMessage(content=user_input),
        ],
    )
    agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
    return AgentExecutor(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=verbose,
        return_intermediate_steps=remember_intermediate_steps,
        **kwargs,
    )
```

You can find the original function in the libs/langchain/langchain/agents/agent_toolkits/conversational_retrieval/openai_functions.py file in the LangChain repository. Please note that this modification assumes the user input is available at the time of creating the agent. If the user input is not available at that time, you might need to modify the workflow to include a step for collecting user input before creating the agent.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
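The key detail in the modification above is that `extra_prompt_messages` expects message objects, not raw strings. Here is a dependency-free sketch of that shape; the `HumanMessage` dataclass below is a stand-in for illustration, not the LangChain class:

```python
from dataclasses import dataclass


@dataclass
class HumanMessage:
    """Stand-in for LangChain's HumanMessage, just to show the shape."""
    content: str


def build_extra_prompt_messages(memory_placeholder, user_input):
    # Wrap the raw user string as a message before it joins the prompt list.
    return [memory_placeholder, HumanMessage(content=user_input)]


msgs = build_extra_prompt_messages("<chat_history placeholder>", "Jane Doe, 555-0100")
print(type(msgs[1]).__name__)  # HumanMessage
```

Passing the bare string instead of a message object is what makes the original snippet fail when the prompt is assembled.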
-
🤖 Based on your question, it seems like you want to incorporate user input into the Customer Creator agent before it proceeds to the Supervisor. To achieve this, you can modify the `create_conversational_retrieval_agent` function:

```python
def create_conversational_retrieval_agent(
    llm: BaseLanguageModel,
    tools: List[BaseTool],
    user_input: str,  # new parameter for user input
    remember_intermediate_steps: bool = True,
    memory_key: str = "chat_history",
    system_message: Optional[SystemMessage] = None,
    verbose: bool = False,
    max_token_limit: int = 2000,
    **kwargs: Any,
) -> AgentExecutor:
    """A convenience method for creating a conversational retrieval agent."""
    if remember_intermediate_steps:
        memory: BaseMemory = AgentTokenBufferMemory(
            memory_key=memory_key, llm=llm, max_token_limit=max_token_limit
        )
    else:
        memory = ConversationTokenBufferMemory(
            memory_key=memory_key,
            return_messages=True,
            output_key="output",
            llm=llm,
            max_token_limit=max_token_limit,
        )
    _system_message = system_message or _get_default_system_message()
    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=_system_message,
        extra_prompt_messages=[
            MessagesPlaceholder(variable_name=memory_key),
            HumanMessage(content=user_input),  # wrap the raw string as a message
        ],
    )
    agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
    return AgentExecutor(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=verbose,
        return_intermediate_steps=remember_intermediate_steps,
        **kwargs,
    )
```

You can find more details about this function in the LangChain repository. Please note that this modification does not override any private methods or add new methods to existing classes within the repository; it simply extends the functionality of the existing `create_conversational_retrieval_agent` function. I hope this helps! If you have any further questions, feel free to ask.
-
@Xakim1c Even I wanted to do something similar, but I couldn't find anything, so here is how I did it (modifying your code):

```python
def user_node(state):
    user_input = input("Enter your response: ")
    state.messages.append(HumanMessage(content=user_input))
    return state


workflow = StateGraph(AgentState)
workflow.add_node("Items Creater", items_node)
workflow.add_node("Customer Creator", customer_node)
workflow.add_node("User Input", user_node)
workflow.add_node("supervisor", supervisor_chain)
workflow.add_edge("Customer Creator", "User Input")
workflow.add_edge("User Input", "supervisor")
```

I might have made a mistake with the state, but I'm uncertain because I'm not using the langgraph stateful graph myself; I had very specific memory management requirements. However, I believe you get the idea. Another approach could involve using a user-input tool with your agent, but unfortunately that didn't align with my needs. I'm also not sure you'd be able to develop a frontend for this now, especially if your models are running on the backend; I don't believe we can achieve the request-response cycle when using graphs (let me know if you find something on this). @hwchase17 @hinthornw This might not be the optimal solution for the requirement. Is there a better approach?
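To make that wiring concrete without pulling in langgraph, here is a plain-Python sketch of the same routing. The node functions and the dict-based state are hypothetical stand-ins for the real ones, and the user's reply is injected as a parameter so the sketch stays runnable without a terminal:

```python
def customer_node(state):
    state["messages"].append(
        ("ai", "Please share your first name, last name, and phone number.")
    )
    return state


def user_node(state, user_reply):
    # The real node would call input(); the reply is injected for testability.
    state["messages"].append(("human", user_reply))
    return state


def supervisor_node(state):
    # The supervisor routes based on the most recent message's role.
    latest_role = state["messages"][-1][0]
    state["next"] = "FINISH" if latest_role == "human" else "Customer Creator"
    return state


# Simulate Customer Creator -> User Input -> supervisor, in that edge order.
state = {"messages": [], "next": None}
state = customer_node(state)
state = user_node(state, "Jane Doe, 555-0100")
state = supervisor_node(state)
print(state["next"])  # FINISH
```

The point of the extra node is simply that the human reply lands in the shared state before the supervisor runs, so the supervisor's routing decision can see it.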
-
This is the graph I made. It has conditions for each agent:

```python
decision = lambda state: (
    "next" if isinstance(state["agent_outcome"], AgentFinish) else "continue"
)

workflow = StateGraph(State)
workflow.add_node("developer", developer_node)
workflow.set_entry_point("developer")
```

But when I go to the executor, I get the same error you faced (`InvalidUpdateError: Invalid update for channel input: LastValue can only receive one value per step.`) when coming from the developer agent. Is there a way around this?
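For context on that error: a LastValue channel keeps exactly one value and therefore rejects two writers in the same superstep, which happens when two nodes (or an unconditional edge plus a conditional one) both feed the same channel in one step. The usual fix is to make the fan-in conditional so only one node writes per step. A minimal toy model of the behavior (not langgraph's actual implementation):

```python
class InvalidUpdateError(Exception):
    pass


class LastValue:
    """Toy model of a channel that stores only the latest value."""

    def __init__(self):
        self.value = None

    def update(self, writes):
        # Two writes in the same step are ambiguous for a last-value channel.
        if len(writes) > 1:
            raise InvalidUpdateError(
                "LastValue can only receive one value per step."
            )
        if writes:
            self.value = writes[0]


channel = LastValue()
channel.update(["from developer"])  # one writer this step: fine
try:
    channel.update(["from developer", "from reviewer"])  # two writers: error
except InvalidUpdateError as exc:
    print(exc)
```

If the developer node is reached both by `set_entry_point` and by another edge that fires in the same step, both paths write to the `input` channel at once, which matches the error message quoted above.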
-
Checked other resources
Commit to Help
Example Code
Description
I'm using example of creating Agent Supervisor: https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb
The problem here is that for the Customer Creater agent I need to ask the user's first name, last name, and phone number. But I don't know how to put user input between the Customer Creater agent and the Supervisor, or how to make an agent that can wait for the user's answer before going to the Supervisor.

As a solution I tried to modify the last node of each agent (before it was the supervisor; I changed it to END), and I also tried to add memory to the agents:

```python
for member in members:
    # We want our workers to ALWAYS "report back" to the supervisor when done
    workflow.add_edge(member, END)
```

but this solution isn't reliable.
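One way to get the request-response cycle this asks for is to let the graph stop at END whenever user input is needed, collect the input outside the graph, and invoke the graph again with the reply appended to the message history. A dependency-free sketch of that outer loop, where `run_graph_once` is an illustrative stand-in for invoking the compiled graph:

```python
def run_graph_once(messages):
    """Stand-in for one graph invocation; returns (messages, done)."""
    if not any(role == "human" for role, _ in messages):
        # No user details yet: the graph ends, asking for input.
        return messages + [("ai", "NEED_INPUT: first name, last name, phone")], False
    # Details present: the Customer Creater can finish its job.
    return messages + [("ai", "Customer created.")], True


def drive(replies):
    """Re-invoke the graph, feeding one user reply per pause."""
    messages, done = [], False
    replies = iter(replies)
    while not done:
        messages, done = run_graph_once(messages)
        if not done:
            # Graph hit END asking for input; collect it and re-invoke.
            messages.append(("human", next(replies)))
    return messages


final = drive(["Jane Doe, 555-0100"])
print(final[-1])  # ('ai', 'Customer created.')
```

Because each invocation carries the accumulated message history, the supervisor sees the user's answer on the next pass, which is the reliable version of the END rewiring attempted above.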
System Info
langchain==0.1.0
langchain-community==0.0.11
langchain-core==0.1.9
langchain-google-genai==0.0.5
langchain-openai==0.0.2
langchainhub==0.1.14
langchainplus-sdk==0.0.20