pydantic_ai.mcp
MCPError
Bases: RuntimeError
Raised when an MCP server returns an error response.
This exception wraps error responses from MCP servers, following the ErrorData schema from the MCP specification.
message
The error message.
Type: str
code
The error code returned by the server.
Type: int
data
Additional information about the error, if provided by the server.
Type: dict[str, Any] | None Default: None
@classmethod
def from_mcp_sdk(cls, error: mcp_exceptions.McpError) -> MCPError
Create an MCPError from an MCP SDK McpError.
MCPError
error : mcp_exceptions.McpError
An McpError from the MCP SDK.
ResourceAnnotations
Additional properties describing MCP entities.
See the resource annotations in the MCP specification.
audience
Intended audience for this entity.
Type: list[mcp_types.Role] | None Default: None
priority
Priority level for this entity, ranging from 0.0 to 1.0.
Type: Annotated[float, Field(ge=0.0, le=1.0)] | None Default: None
@classmethod
def from_mcp_sdk(cls, mcp_annotations: mcp_types.Annotations) -> ResourceAnnotations
Convert from MCP SDK Annotations to ResourceAnnotations.
ResourceAnnotations
mcp_annotations : mcp_types.Annotations
The MCP SDK annotations object.
BaseResource
Bases: ABC
Base class for MCP resources.
The programmatic name of the resource.
Type: str
Human-readable title for UI contexts.
Type: str | None Default: None
A description of what this resource represents.
Type: str | None Default: None
The MIME type of the resource, if known.
Type: str | None Default: None
Optional annotations for the resource.
Type: ResourceAnnotations | None Default: None
Optional metadata for the resource.
Type: dict[str, Any] | None Default: None
Resource
Bases: BaseResource
A resource that can be read from an MCP server.
See the resources in the MCP specification.
The URI of the resource.
Type: str
The size of the raw resource content in bytes (before base64 encoding), if known.
Type: int | None Default: None
@classmethod
def from_mcp_sdk(cls, mcp_resource: mcp_types.Resource) -> Resource
Convert from MCP SDK Resource to PydanticAI Resource.
Resource
mcp_resource : mcp_types.Resource
The MCP SDK Resource object.
ResourceTemplate
Bases: BaseResource
A template for parameterized resources on an MCP server.
See the resource templates in the MCP specification.
URI template (RFC 6570) for constructing resource URIs.
Type: str
@classmethod
def from_mcp_sdk(cls, mcp_template: mcp_types.ResourceTemplate) -> ResourceTemplate
Convert from MCP SDK ResourceTemplate to PydanticAI ResourceTemplate.
ResourceTemplate
mcp_template : mcp_types.ResourceTemplate
The MCP SDK ResourceTemplate object.
ServerCapabilities
Capabilities that an MCP server supports.
Experimental, non-standard capabilities that the server supports.
Type: list[str] | None Default: None
Whether the server supports sending log messages to the client.
Type: bool Default: False
Whether the server offers any prompt templates.
Type: bool Default: False
Whether the server will emit notifications when the list of prompts changes.
Type: bool Default: False
Whether the server offers any resources to read.
Type: bool Default: False
Whether the server will emit notifications when the list of resources changes.
Type: bool Default: False
Whether the server offers any tools to call.
Type: bool Default: False
Whether the server will emit notifications when the list of tools changes.
Type: bool Default: False
Whether the server offers autocompletion suggestions for prompts and resources.
Type: bool Default: False
@classmethod
def from_mcp_sdk(
cls,
mcp_capabilities: mcp_types.ServerCapabilities,
) -> ServerCapabilities
Convert from MCP SDK ServerCapabilities to PydanticAI ServerCapabilities.
ServerCapabilities
mcp_capabilities : mcp_types.ServerCapabilities
The MCP SDK ServerCapabilities object.
MCPServer
Bases: AbstractToolset[Any], ABC
Base class for attaching agents to MCP servers.
See https://modelcontextprotocol.io for more information.
tool_prefix
A prefix to add to all tools that are registered with the server.
If not empty, the prefix will include a trailing underscore (_): e.g. if tool_prefix='foo', a tool named bar will be registered as foo_bar.
Type: str | None Default: None
log_level
The log level to set when connecting to the server, if any.
See https://modelcontextprotocol.io/specification/2025-03-26/server/utilities/logging#logging for more details.
If None, no log level will be set.
Type: mcp_types.LoggingLevel | None Default: None
log_handler
A handler for logging messages from the server.
Type: LoggingFnT | None Default: None
timeout
The timeout in seconds to wait for the client to initialize.
Type: float Default: 5
read_timeout
Maximum time in seconds to wait for new messages before timing out.
This timeout applies to the long-lived connection after it’s established. If no new messages are received within this time, the connection will be considered stale and may be closed. Defaults to 5 minutes (300 seconds).
Type: float Default: 5 * 60
process_tool_call
Hook to customize tool calling and optionally pass extra metadata.
Type: ProcessToolCallback | None Default: None
allow_sampling
Whether to allow MCP sampling through this client.
Type: bool Default: True
sampling_model
The model to use for sampling.
Type: models.Model | None Default: None
max_retries
The maximum number of times to retry a tool call.
Type: int Default: 1
elicitation_callback
Callback function to handle elicitation requests from the server.
Type: ElicitationFnT | None Default: None
cache_tools
Whether to cache the list of tools.
When enabled (default), tools are fetched once and cached until either:
- The server sends a notifications/tools/list_changed notification
- MCPServer.__aexit__ is called (when the last context exits)
Set to False for servers that change tools dynamically without sending notifications.
Note: When using durable execution (Temporal, DBOS), tool definitions are additionally cached at the wrapper level across activities/steps, to avoid redundant MCP connections. This wrapper-level cache is not invalidated by tools/list_changed notifications. Set to False to disable all caching if tools may change during a workflow.
Type: bool Default: True
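The fetch-once-then-invalidate behavior described above can be sketched as a small cache (a hypothetical helper, not the library's implementation):

```python
import asyncio


class ToolListCache:
    """Sketch of cache_tools semantics: fetch once, reuse until invalidated."""

    def __init__(self, fetch):
        self._fetch = fetch  # coroutine function that lists tools from the server
        self._cached = None
        self.fetch_count = 0

    async def list_tools(self):
        if self._cached is None:
            self._cached = await self._fetch()
            self.fetch_count += 1
        return self._cached

    def on_list_changed(self):
        # Invalidate on a notifications/tools/list_changed notification.
        self._cached = None


tools = ['add', 'subtract']


async def fetch_tools():
    return list(tools)


cache = ToolListCache(fetch_tools)
asyncio.run(cache.list_tools())
asyncio.run(cache.list_tools())  # served from the cache, no second fetch
cache.on_list_changed()          # a tools/list_changed notification arrives
tools.append('multiply')         # the server's tool list changed
result = asyncio.run(cache.list_tools())
print(cache.fetch_count, result)  # 2 ['add', 'subtract', 'multiply']
```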
cache_resources
Whether to cache the list of resources.
When enabled (default), resources are fetched once and cached until either:
- The server sends a notifications/resources/list_changed notification
- MCPServer.__aexit__ is called (when the last context exits)
Set to False for servers that change resources dynamically without sending notifications.
Type: bool Default: True
include_instructions
Whether to include the server’s instructions in the agent’s instructions.
Defaults to False for backward compatibility.
Type: bool Default: False
Access the information sent by the MCP server during initialization.
Type: mcp_types.Implementation
Access the capabilities advertised by the MCP server during initialization.
Type: ServerCapabilities
Access the instructions sent by the MCP server during initialization.
Check if the MCP server is running.
Type: bool
@abstractmethod
async def client_streams(
) -> AsyncIterator[tuple[MemoryObjectReceiveStream[SessionMessage | Exception], MemoryObjectSendStream[SessionMessage]]]
Create the streams for the MCP server.
AsyncIterator[tuple[MemoryObjectReceiveStream[SessionMessage | Exception], MemoryObjectSendStream[SessionMessage]]]
async def get_instructions(ctx: RunContext[Any]) -> messages.InstructionPart | None
Return the MCP server’s instructions for how to use its tools.
If include_instructions is True, returns the instructions sent by the MCP server during initialization. Otherwise, returns None.
Instructions from external servers are marked as dynamic since they may change between connections.
messages.InstructionPart | None — An InstructionPart with the server’s instructions if include_instructions is enabled, otherwise None.
ctx : RunContext[Any]
The run context for this agent run.
async def list_tools() -> list[mcp_types.Tool]
Retrieve tools that are currently active on the server.
Tools are cached by default, with cache invalidation on:
- notifications/tools/list_changed notifications from the server
- __aexit__ when the last context exits
Set cache_tools=False for servers that change tools without sending notifications.
list[mcp_types.Tool]
async def direct_call_tool(
    name: str,
    args: dict[str, Any],
    metadata: dict[str, Any] | None = None,
) -> ToolResult
Call a tool on the server.
ToolResult — The result of the tool call.
name : str
The name of the tool to call.
args : dict[str, Any]
The arguments to pass to the tool.
metadata : dict[str, Any] | None Default: None
Request-level metadata (optional).
ModelRetry — If the tool call fails.
async def list_resources() -> list[Resource]
Retrieve resources that are currently present on the server.
Resources are cached by default, with cache invalidation on:
- notifications/resources/list_changed notifications from the server
- __aexit__ when the last context exits
Set cache_resources=False for servers that change resources without sending notifications.
list[Resource]
MCPError — If the server returns an error.
async def list_resource_templates() -> list[ResourceTemplate]
Retrieve resource templates that are currently present on the server.
list[ResourceTemplate]
MCPError — If the server returns an error.
async def read_resource(
    uri: str | Resource,
) -> str | messages.BinaryContent | list[str | messages.BinaryContent]
Read the contents of a specific resource by URI.
str | messages.BinaryContent | list[str | messages.BinaryContent] — The resource contents. A single content item is returned directly; multiple content items are returned as a list.
uri : str | Resource
The URI of the resource to read, or a Resource object.
MCPError — If the server returns an error.
async def __aenter__() -> Self
Enter the MCP server context.
This will initialize the connection to the server.
If this server is an MCPServerStdio, the server will first be started as a subprocess.
This is a no-op if the MCP server has already been entered.
MCPServerStdio
Bases: MCPServer
Runs an MCP server in a subprocess and communicates with it over stdin/stdout.
This class implements the stdio transport from the MCP specification. See https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio for more information.
Example:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
server = MCPServerStdio(
    'uv', args=['run', 'mcp-run-python', 'stdio'], timeout=10
)
agent = Agent('openai:gpt-5.2', toolsets=[server])
See MCP Run Python for more information.
command
The command to run.
Type: str
args
The arguments to pass to the command.
Type: Sequence[str]
env
The environment variables the CLI server will have access to.
By default the subprocess will not inherit any environment variables from the parent process.
If you want to inherit the environment variables from the parent process, use env=os.environ.
Type: dict[str, str] | None Default: None
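This mirrors how subprocess environments work in general: the child sees exactly what is passed, and nothing else. A minimal demonstration using the standard library (not the MCP transport itself):

```python
import subprocess
import sys

CHILD = "import os; print(os.environ.get('DEMO_VAR'))"

# With an explicit env, the child sees exactly the variables passed in.
explicit = subprocess.run(
    [sys.executable, '-c', CHILD],
    env={'DEMO_VAR': 'from-parent'}, capture_output=True, text=True,
)

# With an empty env, nothing is inherited: DEMO_VAR is undefined in the child.
empty = subprocess.run(
    [sys.executable, '-c', CHILD],
    env={}, capture_output=True, text=True,
)

print(explicit.stdout.strip())  # from-parent
print(empty.stdout.strip())     # None
```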
cwd
The working directory to use when spawning the process.
Type: str | Path | None Default: None
def __init__(
command: str,
args: Sequence[str],
env: dict[str, str] | None = None,
cwd: str | Path | None = None,
tool_prefix: str | None = None,
log_level: mcp_types.LoggingLevel | None = None,
log_handler: LoggingFnT | None = None,
timeout: float = 5,
read_timeout: float = 5 * 60,
process_tool_call: ProcessToolCallback | None = None,
allow_sampling: bool = True,
sampling_model: models.Model | None = None,
max_retries: int = 1,
elicitation_callback: ElicitationFnT | None = None,
cache_tools: bool = True,
cache_resources: bool = True,
include_instructions: bool = False,
id: str | None = None,
client_info: mcp_types.Implementation | None = None,
)
Build a new MCP server.
command : str
The command to run.
args : Sequence[str]
The arguments to pass to the command.
env : dict[str, str] | None Default: None
The environment variables to set in the subprocess.
cwd : str | Path | None Default: None
The working directory to use when spawning the process.
tool_prefix : str | None Default: None
A prefix to add to all tools that are registered with the server.
log_level : mcp_types.LoggingLevel | None Default: None
The log level to set when connecting to the server, if any.
log_handler : LoggingFnT | None Default: None
A handler for logging messages from the server.
timeout : float Default: 5
The timeout in seconds to wait for the client to initialize.
read_timeout : float Default: 5 * 60
Maximum time in seconds to wait for new messages before timing out.
process_tool_call : ProcessToolCallback | None Default: None
Hook to customize tool calling and optionally pass extra metadata.
allow_sampling : bool Default: True
Whether to allow MCP sampling through this client.
sampling_model : models.Model | None Default: None
The model to use for sampling.
max_retries : int Default: 1
The maximum number of times to retry a tool call.
elicitation_callback : ElicitationFnT | None Default: None
Callback function to handle elicitation requests from the server.
cache_tools : bool Default: True
Whether to cache the list of tools.
See MCPServer.cache_tools.
cache_resources : bool Default: True
Whether to cache the list of resources.
See MCPServer.cache_resources.
include_instructions : bool Default: False
Whether to include the server’s instructions in the agent’s instructions.
See MCPServer.include_instructions.
id : str | None Default: None
An optional unique ID for the MCP server. An MCP server needs to have an ID in order to be used in a durable execution environment like Temporal, in which case the ID will be used to identify the server’s activities within the workflow.
client_info : mcp_types.Implementation | None Default: None
Information describing the MCP client implementation.
MCPServerSSE
Bases: _MCPServerHTTP
An MCP server that connects over HTTP using the SSE transport.
This class implements the SSE transport from the MCP specification. See https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse for more information.
Example:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE
server = MCPServerSSE('http://localhost:3001/sse')
agent = Agent('openai:gpt-5.2', toolsets=[server])
MCPServerHTTP
Bases: MCPServerSSE
An MCP server that connects over HTTP using the old SSE transport.
This class implements the SSE transport from the MCP specification. See https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse for more information.
Example:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP
server = MCPServerHTTP('http://localhost:3001/sse')
agent = Agent('openai:gpt-5.2', toolsets=[server])
MCPServerStreamableHTTP
Bases: _MCPServerHTTP
An MCP server that connects over HTTP using the Streamable HTTP transport.
This class implements the Streamable HTTP transport from the MCP specification. See https://modelcontextprotocol.io/introduction#streamable-http for more information.
Example:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP
server = MCPServerStreamableHTTP('http://localhost:8000/mcp')
agent = Agent('openai:gpt-5.2', toolsets=[server])
Bases: BaseModel
Configuration for MCP servers.
def load_mcp_servers(
config_path: str | Path,
) -> list[MCPServerStdio | MCPServerStreamableHTTP | MCPServerSSE]
Load MCP servers from a configuration file.
Environment variables can be referenced in the configuration file using:
- ${VAR_NAME} syntax: expands to the value of VAR_NAME; raises an error if not defined
- ${VAR_NAME:-default} syntax: expands to VAR_NAME if set, otherwise uses the default value
list[MCPServerStdio | MCPServerStreamableHTTP | MCPServerSSE] — A list of MCP servers.
config_path : str | Path
The path to the configuration file.
FileNotFoundError — If the configuration file does not exist.
ValidationError — If the configuration file does not match the schema.
ValueError — If an environment variable referenced in the configuration is not defined and no default value is provided.
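The two expansion rules can be sketched with a small substitution function (a hypothetical re-implementation for illustration, not the library's internals):

```python
import re

_PATTERN = re.compile(r'\$\{(\w+)(?::-([^}]*))?\}')


def expand(text: str, env: dict[str, str]) -> str:
    """Sketch of the ${VAR} / ${VAR:-default} expansion rules described above."""

    def repl(m: re.Match) -> str:
        name, default = m.group(1), m.group(2)
        if name in env:
            return env[name]
        if default is not None:
            return default
        # ${VAR} with no default and no value set: an error, as documented.
        raise ValueError(f'environment variable {name!r} is not defined')

    return _PATTERN.sub(repl, text)


env = {'API_KEY': 'secret'}
print(expand('key=${API_KEY}', env))           # key=secret
print(expand('host=${HOST:-localhost}', env))  # host=localhost
```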
The result type of an MCP tool call.
Default: str | messages.BinaryContent | dict[str, Any] | list[Any] | Sequence[str | messages.BinaryContent | dict[str, Any] | list[Any]]
A function type that represents a tool call.
Default: Callable[[str, dict[str, Any], dict[str, Any] | None], Awaitable[ToolResult]]
A process tool callback.
It accepts a run context, the original tool call function, a tool name, and arguments.
Allows wrapping an MCP server tool call to customize it, including adding extra request metadata.
Default: Callable[[RunContext[Any], CallToolFunc, str, dict[str, Any]], Awaitable[ToolResult]]