pydantic_ai.builtin_tools
Bases: ABC
A builtin tool that can be used by an agent.
This class is abstract and cannot be instantiated directly.
The builtin tools are passed to the model as part of the ModelRequestParameters.
Built-in tool identifier; this should be available on all built-in tools as a discriminator.
Type: str Default: 'unknown_builtin_tool'
A unique identifier for the builtin tool.
If multiple instances of the same builtin tool can be passed to the model, subclasses should override this property to allow them to be distinguished.
Type: str
Human-readable label for UI display.
Subclasses should override this to provide a meaningful label.
Type: str
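The discriminator-plus-registry pattern described above can be sketched as follows. This is an illustrative stand-in, not the real pydantic_ai implementation; all names ending in `Sketch` and the `BUILTIN_TOOL_REGISTRY` dict are hypothetical.

```python
from abc import ABC
from dataclasses import dataclass

# Hypothetical registry mirroring BUILTIN_TOOL_TYPES: kind string -> tool class.
BUILTIN_TOOL_REGISTRY: dict[str, type] = {}


@dataclass
class AbstractBuiltinToolSketch(ABC):
    """Illustrative stand-in for AbstractBuiltinTool."""

    kind: str = 'unknown_builtin_tool'

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Each concrete subclass registers itself under its `kind` discriminator.
        BUILTIN_TOOL_REGISTRY[cls.kind] = cls

    @property
    def unique_id(self) -> str:
        # Subclasses override this when several instances of the same tool
        # must be distinguished.
        return self.kind


@dataclass
class WebSearchToolSketch(AbstractBuiltinToolSketch):
    kind: str = 'web_search'
```

Because `__init_subclass__` only fires for subclasses, the abstract base itself never appears in the registry, which matches the note that it cannot be instantiated directly.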
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to search the web for information.
The parameters that PydanticAI passes depend on the model, as some parameters may not be supported by certain models.
Supported by:
- Anthropic
- OpenAI Responses
- Groq
- xAI
- OpenRouter
The search_context_size parameter controls how much context is retrieved from the web to help the tool formulate a response.
Supported by:
- OpenAI Responses
- OpenRouter
Type: Literal['low', 'medium', 'high'] Default: 'medium'
The user_location parameter allows you to localize search results based on a user’s location.
Supported by:
- Anthropic
- OpenAI Responses
Type: WebSearchUserLocation | None Default: None
If provided, these domains will never appear in results.
With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.
Supported by:
- Anthropic, see https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool#domain-filtering
- Groq, see https://console.groq.com/docs/agentic-tooling#search-settings
- xAI, see https://docs.x.ai/docs/guides/tools/search-tools#web-search-parameters
Type: list[str] | None Default: None
If provided, only these domains will be included in results.
With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.
Supported by:
- Anthropic, see https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool#domain-filtering
- Groq, see https://console.groq.com/docs/agentic-tooling#search-settings
- OpenAI Responses, see https://platform.openai.com/docs/guides/tools-web-search
- xAI, see https://docs.x.ai/docs/guides/tools/search-tools#web-search-parameters
Type: list[str] | None Default: None
If provided, the tool will stop searching the web after the given number of uses.
Supported by:
- Anthropic
Type: int | None Default: None
The kind of tool.
Type: str Default: 'web_search'
Bases: TypedDict
Allows you to localize search results based on a user’s location.
Supported by:
- Anthropic
- OpenAI Responses
The city where the user is located.
Type: str
The country where the user is located. For OpenAI, this must be a 2-letter country code (e.g., 'US', 'GB').
Type: str
The region or state where the user is located.
Type: str
The timezone of the user’s location.
Type: str
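A minimal sketch of the TypedDict described above. The real class lives in pydantic_ai.builtin_tools; marking every key optional (`total=False`) is an assumption of this sketch, as is the `Sketch` name.

```python
from typing import TypedDict


class UserLocationSketch(TypedDict, total=False):
    """Illustrative mirror of WebSearchUserLocation."""

    city: str
    country: str  # for OpenAI, a 2-letter code such as 'GB'
    region: str
    timezone: str


location: UserLocationSketch = {
    'city': 'London',
    'country': 'GB',
    'timezone': 'Europe/London',
}
```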
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to execute code.
Supported by:
- Anthropic
- OpenAI Responses
- Bedrock (Nova 2.0)
- xAI
The kind of tool.
Type: str Default: 'code_execution'
Bases: AbstractBuiltinTool
Allows your agent to access contents from URLs.
The parameters that PydanticAI passes depend on the model, as some parameters may not be supported by certain models.
Supported by:
- Anthropic
If provided, the tool will stop fetching URLs after the given number of uses.
Supported by:
- Anthropic
Type: int | None Default: None
If provided, only these domains will be fetched.
With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.
Supported by:
- Anthropic, see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-fetch-tool#domain-filtering
Type: list[str] | None Default: None
If provided, these domains will never be fetched.
With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.
Supported by:
- Anthropic, see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-fetch-tool#domain-filtering
Type: list[str] | None Default: None
If True, enables citations for fetched content.
Supported by:
- Anthropic
Type: bool Default: False
Maximum content length in tokens for fetched content.
Supported by:
- Anthropic
Type: int | None Default: None
The kind of tool.
Type: str Default: 'web_fetch'
Bases: WebFetchTool
Deprecated alias for WebFetchTool. Use WebFetchTool instead.
Overrides kind to 'url_context' so old serialized payloads with {"kind": "url_context", ...} can be deserialized to UrlContextTool for backward compatibility.
The kind of tool (deprecated value for backward compatibility).
Type: str Default: 'url_context'
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to generate images.
Supported by:
- OpenAI Responses
Background type for the generated image.
Supported by:
- OpenAI Responses. 'transparent' is only supported for 'png' and 'webp' output formats.
Type: Literal['transparent', 'opaque', 'auto'] Default: 'auto'
Controls how much effort the model will exert to match the style and features, especially facial features, of input images.
Supported by:
- OpenAI Responses. Default: 'low'.
Type: Literal['high', 'low'] | None Default: None
Moderation level for the generated image.
Supported by:
- OpenAI Responses
Type: Literal['auto', 'low'] Default: 'auto'
Compression level for the output image.
Supported by:
- OpenAI Responses. Only supported for 'jpeg' and 'webp' output formats. Default: 100.
- Google (Vertex AI only). Only supported for 'jpeg' output format. Default: 75.
Setting this will default output_format to 'jpeg' if not specified.
Type: int | None Default: None
The output format of the generated image.
Supported by:
- OpenAI Responses. Default: 'png'.
- Google (Vertex AI only). Default: 'png', or 'jpeg' if output_compression is set.
Type: Literal['png', 'webp', 'jpeg'] | None Default: None
Number of partial images to generate in streaming mode.
Supported by:
- OpenAI Responses. Supports 0 to 3.
Type: int Default: 0
The quality of the generated image.
Supported by:
- OpenAI Responses
Type: Literal['low', 'medium', 'high', 'auto'] Default: 'auto'
The size of the generated image.
- OpenAI Responses: 'auto' (default: model selects the size based on the prompt), '1024x1024', '1024x1536', '1536x1024'
- Google (Gemini 3 Pro Image and later): '512' (Gemini 3.1 Flash Image only), '1K' (default), '2K', '4K'
Type: Literal['auto', '1024x1024', '1024x1536', '1536x1024', '512', '1K', '2K', '4K'] | None Default: None
The aspect ratio to use for generated images.
Supported by:
- Google image-generation models (Gemini)
- OpenAI Responses (maps '1:1', '2:3', and '3:2' to supported sizes)
Type: ImageAspectRatio | None Default: None
The kind of tool.
Type: str Default: 'image_generation'
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to use memory.
Supported by:
- Anthropic
The kind of tool.
Type: str Default: 'memory'
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to use MCP servers.
Supported by:
- OpenAI Responses
- Anthropic
- xAI
A unique identifier for the MCP server.
Type: str
The URL of the MCP server to use.
For OpenAI Responses, it is possible to use connector_id by providing it as x-openai-connector:<connector_id>.
Type: str
Authorization header to use when making requests to the MCP server.
Supported by:
- OpenAI Responses
- Anthropic
- xAI
Type: str | None Default: None
A description of the MCP server.
Supported by:
- OpenAI Responses
- xAI
Type: str | None Default: None
A list of tools that the MCP server can use.
Supported by:
- OpenAI Responses
- Anthropic
- xAI
Type: list[str] | None Default: None
Optional HTTP headers to send to the MCP server.
Use for authentication or other purposes.
Supported by:
- OpenAI Responses
- xAI
Type: dict[str, str] | None Default: None
Bases: AbstractBuiltinTool
A builtin tool that allows your agent to search through uploaded files using vector search.
This tool provides a fully managed Retrieval-Augmented Generation (RAG) system that handles file storage, chunking, embedding generation, and context injection into prompts.
Supported by:
- OpenAI Responses
- Google (Gemini)
The file store IDs to search through.
For OpenAI, these are the IDs of vector stores created via the OpenAI API. For Google, these are file search store names that have been uploaded and processed via the Gemini Files API.
The kind of tool.
Type: str Default: 'file_search'
Registry of all builtin tool types, keyed by their kind string.
This dict is populated automatically via __init_subclass__ when tool classes are defined.
Type: dict[str, type[AbstractBuiltinTool]] Default: {}
Supported aspect ratios for image generation tools.
Default: Literal['21:9', '16:9', '4:3', '3:2', '1:1', '9:16', '3:4', '2:3', '5:4', '4:5']
Set of deprecated builtin tool classes that should not be offered in new UIs.
Type: frozenset[type[AbstractBuiltinTool]] Default: frozenset({UrlContextTool})
Get the set of all builtin tool types (excluding deprecated tools).
Default: frozenset(cls for cls in (BUILTIN_TOOL_TYPES.values()) if cls not in DEPRECATED_BUILTIN_TOOLS)
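The default expression above can be sketched in isolation. String stand-ins replace the actual tool classes so the sketch stays self-contained; the real sets hold class objects.

```python
# String stand-ins for tool classes keep this sketch self-contained.
BUILTIN_TOOL_TYPES = {
    'web_search': 'WebSearchTool',
    'web_fetch': 'WebFetchTool',
    'url_context': 'UrlContextTool',  # deprecated alias for WebFetchTool
}
DEPRECATED_BUILTIN_TOOLS = frozenset({'UrlContextTool'})


def tool_types() -> frozenset:
    """All registered tool types, minus the deprecated ones."""
    return frozenset(
        cls for cls in BUILTIN_TOOL_TYPES.values()
        if cls not in DEPRECATED_BUILTIN_TOOLS
    )
```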