Response From OpenAI custom LLM
I've developed a custom LLM using OpenAI and successfully configured Vapi to send requests to my endpoint. When my endpoint returns a response to Vapi, is there a specific structure the response object should follow to ensure compatibility with Vapi's request handling?
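For reference, my endpoint currently returns a body in the standard OpenAI chat-completions shape. A minimal Python sketch of that structure (field names follow OpenAI's documented `chat.completion` schema; the id, timestamp, and model name are placeholders):

```python
import json

def build_chat_completion_response(content: str) -> dict:
    """Build a minimal OpenAI-style chat.completion response body.

    Field names follow the OpenAI chat-completions schema;
    the specific values below are placeholders.
    """
    return {
        "id": "chatcmpl-example",       # placeholder completion id
        "object": "chat.completion",
        "created": 1700000000,          # unix timestamp (placeholder)
        "model": "my-custom-llm",       # hypothetical model name
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "usage": {
            "prompt_tokens": 0,
            "completion_tokens": 0,
            "total_tokens": 0,
        },
    }

# Serialize the body exactly as my endpoint would send it.
print(json.dumps(build_chat_completion_response("Hello from my custom LLM"), indent=2))
```

Is this shape sufficient, or does Vapi additionally require the response to be streamed (e.g. as `chat.completion.chunk` events over server-sent events)?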