
Voice is powerful, fast, expressive, and human. It’s where Vapi started, and where our assistants shine.
Now with the new Chat API, that same intelligence extends to text. Your Vapi agents can run across both voice and chat using the same config, tools, and memory.
Chat gives that intelligence a new surface. Your Vapi assistants can now operate over messaging platforms, web UIs, and support widgets with zero extra setup.
Text conversations support both non-streaming and streaming modes. You can preserve context via sessions or link messages directly, just like you do in voice.
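Here’s a minimal sketch of the non-streaming flow in TypeScript. The endpoint and the field names (assistantId, input, previousChatId) follow the Chat API docs, but treat the exact request and response shapes as assumptions to confirm there; the streaming mode uses the same request with a stream option (not shown here).

```typescript
// Minimal sketch: one chat turn, then a follow-up linked to it for context.
// Field names are assumptions drawn from the Chat API docs; confirm exact
// shapes there before relying on this.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "your-assistant-id"; // the same assistant you use for voice

async function sendChat(input: string, previousChatId?: string) {
  const res = await fetch("https://api.vapi.ai/chat", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ assistantId: ASSISTANT_ID, input, previousChatId }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}

// First turn, then a follow-up that reuses the chat id to preserve context.
const first = await sendChat("What are your support hours?");
const followUp = await sendChat("And on weekends?", first.id);
console.log(followUp);
```

If you’d rather manage context with sessions, pass a session id on each request instead of chaining previousChatId; the docs cover the session endpoints.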
And you can now build assistants that communicate over both voice and text, backed by the same config, model orchestration, and context control you’ve already set up.
The Chat API is OpenAI-compatible. If you’ve already built with the OpenAI SDK, you don’t need to rewrite anything. Just swap in Vapi’s endpoint, add your assistantId, and you’re done.
That means you can keep the client code you already have. It’s a smooth path from prompt → agent.
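If your code already speaks the OpenAI SDK, the switch looks roughly like the sketch below. The base URL shown and the extra assistantId field are assumptions drawn from this announcement; check the Chat API docs for the exact endpoint and parameter names.

```typescript
import OpenAI from "openai";

// Point the OpenAI SDK at Vapi instead of OpenAI. The base URL here is an
// assumption; use the endpoint from the Chat API docs.
const client = new OpenAI({
  apiKey: process.env.VAPI_API_KEY, // your Vapi API key, not an OpenAI key
  baseURL: "https://api.vapi.ai/chat",
});

const completion = await client.chat.completions.create({
  model: "gpt-4o", // required by the SDK's types
  messages: [{ role: "user", content: "Hi! What can you help me with?" }],
  // Vapi-specific parameter: the OpenAI SDK forwards unknown body fields,
  // but TypeScript needs a nudge since it isn't part of OpenAI's types.
  // @ts-expect-error assistantId is a Vapi extension
  assistantId: "your-assistant-id",
});

console.log(completion.choices[0].message.content);
```

Same assistant, same tools and memory, now reachable from any OpenAI-compatible client.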
This is a new modality for deploying assistants: text and voice, side by side, backed by a single config.
You can now build once and deploy across phone calls, messaging apps, and browser chat. Maintain context across formats. Let users shift channels without losing state. And do it all without duplicating your logic or models.
Start where you are. If you’ve got a Vapi assistant, it’s already chat-ready.
Read more in the Chat API documentation.
Let us know what you're building, and what you'd like to see next.