pipeline-error-openai-llm-failed using Vapi-hosted key – quota exceeded

Hi team,
We're consistently getting the error pipeline-error-openai-llm-failed with the following OpenAI message:
"You exceeded your current quota. Please check your plan and billing details."
This is happening with the Vapi-hosted OpenAI key; we are not using BYOK (bring your own key).

Since this started, we've been unable to run any assistants that use OpenAI models (GPT-4o, GPT-4, or GPT-3.5).
No prompts are processed; every model request fails immediately.
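
For reference, our assistants are configured roughly like this (a minimal sketch against the POST /assistant endpoint; the assistant name and environment variable are placeholders, not our real values):

```python
import os

import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]  # placeholder env var for our private Vapi key

# Minimal sketch of how our assistants are set up. Every assistant whose
# model block points at an OpenAI model hits the quota error at call time.
resp = requests.post(
    "https://api.vapi.ai/assistant",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={
        "name": "quota-repro",  # hypothetical name, just for this report
        "model": {
            "provider": "openai",
            "model": "gpt-4o",  # same failure with gpt-4 and gpt-3.5-turbo
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."}
            ],
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])
```

Creating the assistant succeeds; the failure only shows up once a call reaches the model.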

Call ID: 33430449-f2ce-4b58-908c-b35b2900c16a
Timestamp: Jul 28, 2025, 11:38 UTC
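
If it helps, this is how we confirmed the failure on our side (a sketch assuming the GET /call/{id} endpoint and the endedReason field on the call record):

```python
import os

import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]  # placeholder env var
CALL_ID = "33430449-f2ce-4b58-908c-b35b2900c16a"  # the failing call above

# Fetch the call record so the ended reason is visible verbatim.
resp = requests.get(
    f"https://api.vapi.ai/call/{CALL_ID}",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("endedReason"))  # prints: pipeline-error-openai-llm-failed
```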

Expected: Assistant would respond normally using GPT-4o.

Actual: Model request fails immediately due to quota.

We’d appreciate your help verifying whether there’s a quota cap or other restriction on our account.
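
In the meantime, if attaching our own OpenAI key is the recommended workaround, is something like this the right shape? (A sketch only; we're assuming the credential endpoint accepts an OpenAI provider key this way.)

```python
import os

import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]      # placeholder env var
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # our own key, if we switch to BYOK

# Assumed shape: register a BYOK OpenAI credential so our assistants stop
# depending on the Vapi-hosted key's quota.
resp = requests.post(
    "https://api.vapi.ai/credential",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={"provider": "openai", "apiKey": OPENAI_API_KEY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```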
Thanks.