What is the default request retry logic of the `AzureChatOpenAI()` model?
#7987
Answer generated by a 🤖

**Answer**

I understand that you're inquiring about the default request retry logic of the `AzureChatOpenAI()` model in the LangChain framework and whether it's possible to customize this logic.

The default retry logic is encapsulated in the `_create_retry_decorator` function, which uses the `tenacity` library to manage retries. By default, it retries when exceptions such as `openai.error.Timeout`, `openai.error.APIError`, `openai.error.APIConnectionError`, `openai.error.RateLimitError`, or `openai.error.ServiceUnavailableError` occur. The number of retry attempts is determined by the `self.max_retries` attribute.

The wait time between retries starts at 1 second and increases exponentially (2^x * 1 second) up to a maximum of 60 seconds. A warning message is logged before each sleep period.

If you wish to customize this retry logic, you can modify the `_create_retry_decorator` function.

Please note that this is one potential solution based on the current implementation of the LangChain framework. If you find this solution helpful and believe it could benefit other users, we encourage you to make a pull request to update the documentation, so that others who encounter similar issues can benefit from your contribution.

**Sources**

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.