harsh-harlequin•5mo ago
AI Provider Latency Mysteriously Increased
The latency for all AI providers has increased and affected my agents. Please see the attachment. The listed 1000ms is not reflected in the breakdown, which shows 2000ms. My agents have gone from 1200ms to 2600ms without any changes on my end.

10 Replies
Check Network Routing Configuration
Your latency increase might be due to traffic not being routed through the nearest data center. Contact Vapi support to verify your traffic is using the optimal regional endpoint for your location, as this can significantly impact response times.
Sources:
- Discord discussion about routing issues
- Discord thread about Claude model issues
⤵️ Mention @Vapi Support Bot for follow-ups 🐥
Based on the information available, there are known issues with Anthropic Claude models in Vapi. Try these troubleshooting steps:
1. Check network connectivity between your system and API endpoints
2. Review your Vapi configuration settings, especially anything changed recently
3. Contact Vapi support with your organization ID and detailed error logs if the issue persists
Sources:
- Discord Channel
- Discord Channel
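As a quick sanity check on step 1, you can time the round trip to the provider endpoint yourself and compare it against the dashboard's numbers. A minimal sketch; the timed callable below is a stand-in for your actual API request:

```python
import time

def measure_latency_ms(fn, *args, **kwargs):
    """Time a single call and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Example: time a stand-in for an API request (replace the lambda
# with your real request function to measure actual provider latency).
result, ms = measure_latency_ms(lambda: time.sleep(0.05))
print(f"round trip: {ms:.0f} ms")
```

Running this a few times against your real endpoint gives you an independent baseline to include in a support ticket.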
harsh-harlequinOP•5mo ago
The latency in the dropdown is 1000ms, while the breakdown shows it as 2000ms.

harsh-harlequinOP•5mo ago
@Vapi Team I'm experiencing increased latency with claude-3-5-sonnet-20241022 (2600ms vs previous 1200ms). Other models and providers affected as well.
Could you check if my traffic is routing through the closest data center? Org ID: 0153f067-55e2-429a-8088-03a4f01ff2bf
Can @Shubham Bajaj or @Kings_big💫 look into this?
correct-apricot•5mo ago
I've run into your issue as well.
But to clarify: the latency of claude-3.5 is 2000ms.
harsh-harlequinOP•5mo ago
Claude 3.5 used to be 1,000ms. It jumped to 2,000ms overnight in the breakdown without me making any changes, and it still shows 1,000ms in the dropdown menu.
dependent-tan•5mo ago
@LongBeard can you share your assistant ID?
harsh-harlequinOP•5mo ago
20bd47a4-e7ef-4357-b0b5-9f4832979b2b
Altho it's all my assistants
fascinating-indigo•5mo ago
The discrepancy you're experiencing (1000ms in the dropdown vs 2000ms in the breakdown) is likely due to:
1. The dropdown menu may be displaying the default Anthropic latency (1000ms) instead of the specific model latency.
2. The breakdown view is correctly using the specific model latency (2000ms).
This is a UI inconsistency, but the actual latency value in the codebase for Claude 3.5 Sonnet (Beta Computer Use) is correctly set to 2000ms.
The increase from 500ms to 2000ms was a recent change, which appears to reflect actual performance measurements of the model.
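A hypothetical sketch of how such a default-vs-model mismatch can arise: one UI element reads a provider-wide default while the other reads the per-model value. All names, values, and lookup logic here are assumptions for illustration, not Vapi's actual code:

```python
# Provider-wide default latency (what the dropdown displays)
DEFAULT_PROVIDER_LATENCY_MS = 1000

# Per-model latency values (what the breakdown displays)
MODEL_LATENCY_MS = {
    "claude-3-5-sonnet-20241022": 2000,
}

def dropdown_latency(model: str) -> int:
    # Bug: ignores the model and always returns the provider default
    return DEFAULT_PROVIDER_LATENCY_MS

def breakdown_latency(model: str) -> int:
    # Correct: looks up the per-model value, falling back to the default
    return MODEL_LATENCY_MS.get(model, DEFAULT_PROVIDER_LATENCY_MS)

model = "claude-3-5-sonnet-20241022"
print(dropdown_latency(model), breakdown_latency(model))  # prints: 1000 2000
```

Under this assumption the breakdown value is the authoritative one, and the dropdown is simply stale.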