Description
AI services are starting to support server-side use of MCP servers. For example, Anthropic's service:
https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector
and OpenAI's Responses API:
https://platform.openai.com/docs/guides/tools-remote-mcp
With the ModelContextProtocol library, it's easy to use any MCP server locally, including stdio ones, treating every tool as an AIFunction. There's not currently an abstraction, however, for the hosted MCP server case, where the service uses the server directly.
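For context, here's roughly what the local path looks like today, as a minimal sketch. It assumes the current ModelContextProtocol client surface (`McpClientFactory`, `StdioClientTransport`, `McpClientTool`); the "everything" sample server and the provider `IChatClient` are placeholders.

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Connect to a stdio MCP server (the sample "everything" server here).
IMcpClient mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new StdioClientTransportOptions
    {
        Name = "everything",
        Command = "npx",
        Arguments = ["-y", "@modelcontextprotocol/server-everything"],
    }));

// Every MCP tool comes back as an McpClientTool, which is an AIFunction.
IList<McpClientTool> tools = await mcpClient.ListToolsAsync();

// Placeholder: any provider's IChatClient goes here.
IChatClient providerClient = GetProviderChatClient();

// FunctionInvokingChatClient then handles the tool calls automatically.
IChatClient client = new ChatClientBuilder(providerClient)
    .UseFunctionInvocation()
    .Build();

ChatResponse response = await client.GetResponseAsync(
    "Summarize what this server can do.",
    new ChatOptions { Tools = [.. tools] });
Console.WriteLine(response.Text);
```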
Rough sketch of how we could handle this well:
- Add a `HostedMcpServer : AITool`. This would be configurable with all the common stuff: server url, optional list of allowed tool names, indication of whether to allow auto invocation, etc.
- Add a `HostedMcpServerToolCall : AIContent` to represent the call the service makes to the server (including tool name and arguments), and similarly a `HostedMcpServerToolResult` to represent the result of the operation.
- The MCP spec recommends human-in-the-loop approval on tool calls, and the OpenAI design defaults to not automatically invoking tools; interestingly, Anthropic's doesn't, but I suspect that's coming. We might also need an `AIContent` to represent approval/denial for the service invoking a server-side tool. (A rough shape for these types is sketched after this list.)
- For `IChatClient`s that don't have this capability, we can enable it via an `McpServerChatClient : IChatClient` that itself uses MCP clients. It would translate a `HostedMcpServer` tool into creating an `McpClient` and replace the tool in the tool collection with the appropriate `McpClientTool` instances. With a `FunctionInvokingChatClient` in the pipeline, that would then enable similar automatic handling of MCP server interactions. (See the sketch after the type shapes below.)
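To make the shape concrete, something like the following, purely as illustration: the names and members are placeholders from the list above, not a final design, and the call/result pair mirrors the existing `FunctionCallContent`/`FunctionResultContent` pattern.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.AI;

// Describes a hosted MCP server for the AI service to connect to directly.
public class HostedMcpServer(string serverName, Uri serverUri) : AITool
{
    public string ServerName { get; } = serverName;
    public Uri ServerUri { get; } = serverUri;

    // Optional allow-list of tool names the service may call on the server.
    public IList<string>? AllowedTools { get; set; }

    // Whether tool calls require an approval round-trip before the service invokes them.
    public bool RequireApproval { get; set; } = true;
}

// The call the service makes to a tool on the hosted server.
public class HostedMcpServerToolCall(string callId, string serverName, string toolName) : AIContent
{
    public string CallId { get; } = callId;
    public string ServerName { get; } = serverName;
    public string ToolName { get; } = toolName;
    public IDictionary<string, object?>? Arguments { get; set; }
}

// The result the hosted server returned for a prior call.
public class HostedMcpServerToolResult(string callId) : AIContent
{
    public string CallId { get; } = callId;
    public object? Result { get; set; }
}

// Possible content for the human-in-the-loop round-trip: the service surfaces the
// pending HostedMcpServerToolCall, and the caller responds with approval/denial.
public class HostedMcpServerToolApprovalResponse(string callId, bool approved) : AIContent
{
    public string CallId { get; } = callId;
    public bool Approved { get; } = approved;
}
```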
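And a very rough sketch of the fallback path from the last bullet: a delegating `IChatClient` that, for providers without hosted MCP support, expands each `HostedMcpServer` tool into the `McpClientTool` instances that server exposes, so a downstream `FunctionInvokingChatClient` can invoke them. This assumes the current `DelegatingChatClient`/`ChatOptions` surface and the ModelContextProtocol client APIs; caching and disposal of the created MCP clients, the streaming overload, approval handling, and error handling are all omitted.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

public sealed class McpServerChatClient(IChatClient innerClient) : DelegatingChatClient(innerClient)
{
    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages, ChatOptions? options = null, CancellationToken cancellationToken = default)
    {
        options = await ExpandHostedServersAsync(options, cancellationToken);
        return await base.GetResponseAsync(messages, options, cancellationToken);
    }

    // Replaces each HostedMcpServer tool with the McpClientTools that server exposes,
    // leaving every other tool untouched. (GetStreamingResponseAsync would need the same.)
    private static async Task<ChatOptions?> ExpandHostedServersAsync(ChatOptions? options, CancellationToken cancellationToken)
    {
        if (options?.Tools is not { Count: > 0 } tools || !tools.Any(t => t is HostedMcpServer))
        {
            return options;
        }

        List<AITool> expanded = [];
        foreach (AITool tool in tools)
        {
            if (tool is HostedMcpServer server)
            {
                IMcpClient mcpClient = await McpClientFactory.CreateAsync(
                    new SseClientTransport(new SseClientTransportOptions { Endpoint = server.ServerUri }),
                    cancellationToken: cancellationToken);
                expanded.AddRange(await mcpClient.ListToolsAsync(cancellationToken: cancellationToken));
            }
            else
            {
                expanded.Add(tool);
            }
        }

        options = options.Clone();
        options.Tools = expanded;
        return options;
    }
}
```

Registered ahead of `UseFunctionInvocation()` in a `ChatClientBuilder` pipeline, the expanded tools would then be invoked automatically, approximating on the client what the hosted path does service-side.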