Critical Bug: Assistant Ignores All Successful Tool Responses (tool_outputs) and Stalls
Hello Vapi Team,
I'm reporting a critical, reproducible bug where our assistant ignores valid, successful tool responses, making our tools unusable.
The Bug Flow:
Our assistant correctly calls a tool (e.g., to check for calendar availability).
Our n8n webhook responds instantly (<1s) with a statusCode 200 and a correctly formatted tool_outputs JSON body, including the matching tool_call_id and output (an example body follows this list).
The assistant receives this valid response, goes completely silent, and then tells the user that the tool failed.
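For reference, this is the shape of the body our webhook returns. The tool_call_id below is a placeholder, and the output value is just one of the variants we tested:

    {
      "tool_outputs": [
        {
          "tool_call_id": "call_abc123",
          "output": "true"
        }
      ]
    }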
This is not a user-side configuration error. We have already ruled out:
Timeouts: The bug persists even with an instant (<1s) response.
Response Format: Our JSON follows the structure described in your docs.
Data Types / Architecture: We have tested sending the tool output as a pure boolean (true), as a string ("TRUE"), and even as the full, final sentence for the assistant to read verbatim (illustrative examples after this list). All variants fail in the same way.
Stale Config: The bug is fully reproducible on a brand new assistant with a minimal prompt and a different AI model (gpt-4o).
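To illustrate the data-type tests above, these are the output variants we tried (the tool_call_id is a placeholder and the last sentence is representative wording, not our exact text):

    { "tool_call_id": "call_abc123", "output": true }
    { "tool_call_id": "call_abc123", "output": "TRUE" }
    { "tool_call_id": "call_abc123", "output": "The requested slot on Friday at 3 PM is available." }

All three were returned with statusCode 200 and produced the same failure.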
Conclusion:
This evidence strongly suggests an internal Vapi bug in handling the state after a successful tool call. The system receives the correct data, but the LLM fails to process it and follow the prompt's instructions.
This is a blocking issue for our project. Could you please investigate?
Call ID from the last test: ef445672-ecd8-4033-a68b-cc7b054ce45f
Organization ID: 723d277b-7485-44d5-9a86-9b167e796522
I can provide any logs, screenshots, or further details needed. Thank you.