
What is the default request retry logic of model AzureChatOpenAI()? #7987

Answered by dosubot bot
axiangcoding asked this question in Q&A

Answer generated by a 🤖

Answer

You're asking about the default request retry logic of the AzureChatOpenAI() model in the LangChain framework, and whether that logic can be customized.

The default retry logic is encapsulated in the _create_retry_decorator function, which uses the tenacity library to manage retries. By default, a request is retried when one of the following exceptions is raised: openai.error.Timeout, openai.error.APIError, openai.error.APIConnectionError, openai.error.RateLimitError, or openai.error.ServiceUnavailableError. The number of retry attempts is controlled by the model's max_retries attribute.

The wait time between retries starts at 1 second and increases exponentially on each subsequent attempt (tenacity's exponential backoff), up to a configured maximum.
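Concretely, a tenacity-style exponential backoff roughly doubles the wait on each attempt and clamps it between a floor and a cap. The 1-second floor and 60-second cap below, and the exact formula, are assumptions for illustration; LangChain's actual constants may differ.

```python
def backoff_waits(attempts, multiplier=1.0, min_wait=1.0, max_wait=60.0):
    """Compute an illustrative exponential backoff schedule:
    wait = clamp(multiplier * 2**attempt, min_wait, max_wait)."""
    waits = []
    for attempt in range(1, attempts + 1):
        raw = multiplier * (2 ** attempt)  # grows exponentially per attempt
        waits.append(max(min_wait, min(raw, max_wait)))
    return waits


schedule = backoff_waits(7)
print(schedule)  # [2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

The cap matters in practice: without it, a handful of consecutive rate-limit errors would push the wait into minutes.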

Replies: 1 comment, 1 reply

1 reply from @raf-madrigal
Answer selected by axiangcoding