pydantic_ai.builtin_tools

AbstractBuiltinTool

Bases: ABC

A builtin tool that can be used by an agent.

This class is abstract and cannot be instantiated directly.

The builtin tools are passed to the model as part of the ModelRequestParameters.

Attributes

kind

Built-in tool identifier; this should be available on all built-in tools as a discriminator.

Type: str Default: 'unknown_builtin_tool'

unique_id

A unique identifier for the builtin tool.

If multiple instances of the same builtin tool can be passed to the model, subclasses should override this property to allow them to be distinguished.

Type: str

label

Human-readable label for UI display.

Subclasses should override this to provide a meaningful label.

Type: str

WebSearchTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to search the web for information.

The parameters that Pydantic AI passes depend on the model, as some models do not support every parameter.

Supported by:

  • Anthropic
  • OpenAI Responses
  • Groq
  • Google
  • xAI
  • OpenRouter

Attributes

search_context_size

The search_context_size parameter controls how much context is retrieved from the web to help the tool formulate a response.

Supported by:

  • OpenAI Responses
  • OpenRouter

Type: Literal['low', 'medium', 'high'] Default: 'medium'

user_location

The user_location parameter allows you to localize search results based on a user’s location.

Supported by:

  • Anthropic
  • OpenAI Responses

Type: WebSearchUserLocation | None Default: None

blocked_domains

If provided, these domains will never appear in results.

With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.

Supported by:

  • Anthropic

Type: list[str] | None Default: None

allowed_domains

If provided, only these domains will be included in results.

With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.

Supported by:

  • Anthropic

Type: list[str] | None Default: None

max_uses

If provided, the tool will stop searching the web after the given number of uses.

Supported by:

  • Anthropic

Type: int | None Default: None

kind

The kind of tool.

Type: str Default: 'web_search'

WebSearchUserLocation

Bases: TypedDict

Allows you to localize search results based on a user’s location.

Supported by:

  • Anthropic
  • OpenAI Responses

Attributes

city

The city where the user is located.

Type: str

country

The country where the user is located. For OpenAI, this must be a 2-letter country code (e.g., 'US', 'GB').

Type: str

region

The region or state where the user is located.

Type: str

timezone

The timezone of the user’s location.

Type: str

CodeExecutionTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to execute code.

Supported by:

  • Anthropic
  • OpenAI Responses
  • Google
  • Bedrock (Nova 2.0)
  • xAI

Attributes

kind

The kind of tool.

Type: str Default: 'code_execution'

WebFetchTool

Bases: AbstractBuiltinTool

Allows your agent to access contents from URLs.

The parameters that Pydantic AI passes depend on the model, as some models do not support every parameter.

Supported by:

  • Anthropic
  • Google

Attributes

max_uses

If provided, the tool will stop fetching URLs after the given number of uses.

Supported by:

  • Anthropic

Type: int | None Default: None

allowed_domains

If provided, only these domains will be fetched.

With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.

Supported by:

  • Anthropic

Type: list[str] | None Default: None

blocked_domains

If provided, these domains will never be fetched.

With Anthropic, you can only use one of blocked_domains or allowed_domains, not both.

Supported by:

  • Anthropic

Type: list[str] | None Default: None

enable_citations

If True, enables citations for fetched content.

Supported by:

  • Anthropic

Type: bool Default: False

max_content_tokens

Maximum content length in tokens for fetched content.

Supported by:

  • Anthropic

Type: int | None Default: None

kind

The kind of tool.

Type: str Default: 'web_fetch'

UrlContextTool

Bases: WebFetchTool

Deprecated alias for WebFetchTool. Use WebFetchTool instead.

Overrides kind to 'url_context' so old serialized payloads with {"kind": "url_context", ...} can be deserialized to UrlContextTool for backward compatibility.

Attributes

kind

The kind of tool (deprecated value for backward compatibility).

Type: str Default: 'url_context'

ImageGenerationTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to generate images.

Supported by:

  • OpenAI Responses
  • Google

Attributes

background

Background type for the generated image.

Supported by:

  • OpenAI Responses. 'transparent' is only supported for 'png' and 'webp' output formats.

Type: Literal['transparent', 'opaque', 'auto'] Default: 'auto'

input_fidelity

Control how much effort the model will exert to match the style and features, especially facial features, of input images.

Supported by:

  • OpenAI Responses. Default: 'low'.

Type: Literal['high', 'low'] | None Default: None

moderation

Moderation level for the generated image.

Supported by:

  • OpenAI Responses

Type: Literal['auto', 'low'] Default: 'auto'

output_compression

Compression level for the output image.

Supported by:

  • OpenAI Responses. Only supported for 'jpeg' and 'webp' output formats. Default: 100.
  • Google (Vertex AI only). Only supported for 'jpeg' output format. Default: 75. Setting this will default output_format to 'jpeg' if not specified.

Type: int | None Default: None

output_format

The output format of the generated image.

Supported by:

  • OpenAI Responses. Default: 'png'.
  • Google (Vertex AI only). Default: 'png', or 'jpeg' if output_compression is set.

Type: Literal['png', 'webp', 'jpeg'] | None Default: None

partial_images

Number of partial images to generate in streaming mode.

Supported by:

  • OpenAI Responses. Supports 0 to 3.

Type: int Default: 0

quality

The quality of the generated image.

Supported by:

  • OpenAI Responses

Type: Literal['low', 'medium', 'high', 'auto'] Default: 'auto'

size

The size of the generated image.

  • OpenAI Responses: 'auto' (default: model selects the size based on the prompt), '1024x1024', '1024x1536', '1536x1024'
  • Google (Gemini 3 Pro Image and later): '512' (Gemini 3.1 Flash Image only), '1K' (default), '2K', '4K'

Type: Literal['auto', '1024x1024', '1024x1536', '1536x1024', '512', '1K', '2K', '4K'] | None Default: None

aspect_ratio

The aspect ratio to use for generated images.

Supported by:

  • Google image-generation models (Gemini)
  • OpenAI Responses (maps '1:1', '2:3', and '3:2' to supported sizes)

Type: ImageAspectRatio | None Default: None

kind

The kind of tool.

Type: str Default: 'image_generation'

MemoryTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to use memory.

Supported by:

  • Anthropic

Attributes

kind

The kind of tool.

Type: str Default: 'memory'

MCPServerTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to use MCP servers.

Supported by:

  • OpenAI Responses
  • Anthropic
  • xAI

Attributes

id

A unique identifier for the MCP server.

Type: str

url

The URL of the MCP server to use.

For OpenAI Responses, it is possible to use connector_id by providing it as x-openai-connector:<connector_id>.

Type: str

authorization_token

Authorization header to use when making requests to the MCP server.

Supported by:

  • OpenAI Responses
  • Anthropic
  • xAI

Type: str | None Default: None

description

A description of the MCP server.

Supported by:

  • OpenAI Responses
  • xAI

Type: str | None Default: None

allowed_tools

A list of tools on the MCP server that the model is allowed to use.

Supported by:

  • OpenAI Responses
  • Anthropic
  • xAI

Type: list[str] | None Default: None

headers

Optional HTTP headers to send to the MCP server.

Use for authentication or other purposes.

Supported by:

  • OpenAI Responses
  • xAI

Type: dict[str, str] | None Default: None

FileSearchTool

Bases: AbstractBuiltinTool

A builtin tool that allows your agent to search through uploaded files using vector search.

This tool provides a fully managed Retrieval-Augmented Generation (RAG) system that handles file storage, chunking, embedding generation, and context injection into prompts.

Supported by:

  • OpenAI Responses
  • Google (Gemini)

Attributes

file_store_ids

The file store IDs to search through.

For OpenAI, these are the IDs of vector stores created via the OpenAI API. For Google, these are file search store names that have been uploaded and processed via the Gemini Files API.

Type: Sequence[str]

kind

The kind of tool.

Type: str Default: 'file_search'

BUILTIN_TOOL_TYPES

Registry of all builtin tool types, keyed by their kind string.

This dict is populated automatically via __init_subclass__ when tool classes are defined.

Type: dict[str, type[AbstractBuiltinTool]] Default: {}

ImageAspectRatio

Supported aspect ratios for image generation tools.

Default: Literal['21:9', '16:9', '4:3', '3:2', '1:1', '9:16', '3:4', '2:3', '5:4', '4:5']

DEPRECATED_BUILTIN_TOOLS

Set of deprecated builtin tool IDs that should not be offered in new UIs.

Type: frozenset[type[AbstractBuiltinTool]] Default: frozenset({UrlContextTool})

SUPPORTED_BUILTIN_TOOLS

The set of all builtin tool types, excluding deprecated tools.

Default: frozenset(cls for cls in (BUILTIN_TOOL_TYPES.values()) if cls not in DEPRECATED_BUILTIN_TOOLS)