generous-apricot•4mo ago
Set up an MCP with a custom LLM assistant returned from the Server Assistant callback
We have a custom LLM that we use to proxy to multiple LLMs with specific context. Since this is an enterprise solution with many clients, we did not want to store the assistant configuration in VAPI. Instead, we use the Server Message Assistant Request to return the assistant configuration needed. However, we can't find a way to specify an MCP server from the API to define tool calls dynamically. Is this possible?
Additionally, if it is, can we specify context per call so we can resolve the set of tools dynamically for each call?
3 Replies
It should be possible, as long as the variable is defined and passed through before the tool is executed, and it is nested correctly in the server object as the url parameter, as documented here: https://docs.vapi.ai/tools/mcp#mcp-tool
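A minimal sketch of what the assistant-request response might look like, with the MCP server URL resolved per call. The URLs, the `build_assistant_response` helper, and the exact nesting of the tool inside the assistant config are illustrative assumptions; confirm the schema against the linked docs.

```python
import json

def build_assistant_response(client_id: str) -> dict:
    """Illustrative assistant-request handler body: returns a transient
    assistant config whose MCP tool URL is resolved for this call."""
    # Hypothetical per-client MCP endpoint; in practice this would come
    # from your own tenant/config lookup.
    mcp_url = f"https://mcp.example.com/{client_id}"
    return {
        "assistant": {
            "model": {
                "provider": "custom-llm",
                # Illustrative proxy endpoint for the custom LLM.
                "url": "https://llm-proxy.example.com/chat",
                "model": "proxy",
                "tools": [
                    {
                        "type": "mcp",
                        # The dynamic URL nested in the server object,
                        # per the MCP tool docs.
                        "server": {"url": mcp_url},
                    }
                ],
            }
        }
    }

print(json.dumps(build_assistant_response("acme"), indent=2))
```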
generous-apricotOP•4mo ago
Can we pass header variables in that tool?
Yes, as long as the value is stringified for valid JSON formatting, it should be accepted. Run some tests and reach back out to us if you run into any issues.
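A sketch of per-call headers on the MCP tool, with non-string context stringified as suggested above. The header names and the `headers` field placement inside the server object are assumptions to verify against the docs.

```python
import json

def mcp_tool_with_headers(client_id: str, call_context: dict) -> dict:
    """Illustrative MCP tool definition carrying per-call context in headers.
    Header values must be strings, so structured context is JSON-stringified."""
    return {
        "type": "mcp",
        "server": {
            "url": f"https://mcp.example.com/{client_id}",  # illustrative URL
            "headers": {
                "x-client-id": client_id,  # hypothetical header name
                # Structured per-call context, stringified so the JSON stays valid.
                "x-call-context": json.dumps(call_context),
            },
        },
    }

tool = mcp_tool_with_headers("acme", {"tenant": "acme", "tools": ["crm", "billing"]})
print(tool["server"]["headers"]["x-call-context"])
```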