Unnecessary responses from Vapi
I'm seeing overzealous responses from the platform - responses that quite often interrupt the other party.
The thing is, the base LLM understands when to reply and when not to. We've used prompt instructions like "respond with '-' when no response is needed", and the model does output '-' appropriately (rough setup sketched below).
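For context, here's roughly how we wire that instruction into the assistant. This is a minimal sketch assuming the standard Vapi assistant-creation endpoint and the `model.messages` system-prompt field; field names are from memory, so treat them as approximate.

```typescript
// Sketch of the assistant setup (Node 18+ for global fetch).
// The system prompt carries the "respond with '-'" convention.
const assistant = {
  model: {
    provider: "openai",
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a silent participant on this call. Only speak when the other " +
          "party explicitly asks you for information. If no response is needed, " +
          "reply with '-' and nothing else.",
      },
    ],
  },
};

async function createAssistant(): Promise<void> {
  const res = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(assistant),
  });
  console.log(await res.json());
}

createAssistant();
```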
However, that '-' output gets misinterpreted by the platform and produces unpredictable behavior (e.g., random DTMF tones being sent). We only want to respond when explicitly asked for information.
Is there some way to have the platform simply remain silent / do nothing unless explicitly asked for information? Or perhaps some specific model output (like '-') that Vapi would understand as a no-op?