OpenAI HTTP request is failing a lot

The most common cause is latency. A reasoning model on OpenAI can take 10-20 seconds to return a response, so if your agent isn't expecting that delay it may time out or call the API again too soon. LLM calls are never instant and add significant latency, especially for complex, non-real-time prompts, so set your HTTP timeouts accordingly and retry with a backoff rather than immediately.
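One way to handle this is to wrap the API call in a retry helper that waits with exponential backoff instead of firing again right away. The sketch below is illustrative, not OpenAI's API: `call_with_retry` and its defaults are hypothetical names, and the flaky function stands in for the real HTTP call. (The official Python SDK also exposes client-level `timeout` and `max_retries` options you can tune instead.)

```python
import random
import time

def call_with_retry(fn, max_retries=3, base_delay=2.0):
    """Call fn(), retrying with exponential backoff on failure.

    base_delay and max_retries are illustrative defaults; size them
    generously, since reasoning models can take 10-20 s per response.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential backoff with jitter so retries don't fire too soon.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Example: a flaky stand-in for the real API call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient HTTP error")
    return "ok"

result = call_with_retry(flaky, base_delay=0.01)
```

In a real agent, `fn` would be the OpenAI request itself, and the backoff gives the slow reasoning response time to complete before the next attempt.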