pydantic_ai.capabilities

Toolset

Bases: AbstractCapability[AgentDepsT]

A capability that provides a toolset.
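
A short usage sketch, wrapping a FunctionToolset the same way the PrefixTools example further down does:

from pydantic_ai import Agent
from pydantic_ai.capabilities import Toolset
from pydantic_ai.toolsets import FunctionToolset

toolset = FunctionToolset()

agent = Agent('openai:gpt-5', capabilities=[Toolset(toolset)])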

Thinking

Bases: AbstractCapability[Any]

Enables and configures model thinking/reasoning.

Uses the unified thinking setting in ModelSettings to work portably across providers. Provider-specific thinking settings (e.g., anthropic_thinking, openai_reasoning_effort) take precedence when both are set.

Attributes

effort

The thinking effort level.

  • True: Enable thinking with the provider’s default effort.
  • False: Disable thinking (silently ignored on always-on models).
  • 'minimal'/'low'/'medium'/'high'/'xhigh': Enable thinking at a specific effort level.

Type: ThinkingLevel Default: True
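
A minimal sketch, assuming effort can be passed as a constructor keyword:

from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking

agent = Agent('openai:gpt-5', capabilities=[Thinking(effort='high')])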

PrepareTools

Bases: AbstractCapability[AgentDepsT]

Capability that filters or modifies function tool definitions using a callable.

Wraps a ToolsPrepareFunc as a capability — sugar for the agent-level prepare_tools= constructor argument, which injects this capability automatically. Filters/modifies function tools only; for output tools use PrepareOutputTools.

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import PrepareTools
from pydantic_ai.tools import ToolDefinition


async def hide_admin_tools(
    ctx: RunContext[None], tool_defs: list[ToolDefinition]
) -> list[ToolDefinition] | None:
    return [td for td in tool_defs if not td.name.startswith('admin_')]


agent = Agent('openai:gpt-5', capabilities=[PrepareTools(hide_admin_tools)])

HandleDeferredToolCalls

Bases: AbstractCapability[AgentDepsT]

Resolves deferred tool calls inline during an agent run using a handler function.

When tools require approval or external execution, the agent normally pauses the run and returns DeferredToolRequests as output. This capability intercepts deferred tool calls, calls the provided handler to resolve them, and continues the agent run automatically.

The handler receives the RunContext and the DeferredToolRequests. It may return DeferredToolResults with results for some or all pending calls, or return None to decline handling (the next capability in the chain gets a chance, otherwise the calls bubble up as DeferredToolRequests output).

Attributes

handler

The handler function that resolves deferred tool requests.

Receives the run context and the deferred tool requests, and returns DeferredToolResults with results for some or all pending calls, or None to decline handling. Can be sync or async.

Type: Callable[[RunContext[AgentDepsT], DeferredToolRequests], DeferredToolResults | None | Awaitable[DeferredToolResults | None]]
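
A sketch of an auto-approving handler. It assumes DeferredToolRequests exposes pending approval calls as .approvals (tool call parts with a tool_call_id) and that DeferredToolResults accepts an approvals mapping; check the deferred-tools docs for the exact shapes:

from pydantic_ai import Agent, DeferredToolRequests, DeferredToolResults, RunContext
from pydantic_ai.capabilities import HandleDeferredToolCalls


async def auto_approve(
    ctx: RunContext[None], requests: DeferredToolRequests
) -> DeferredToolResults | None:
    # Approve every approval-required call; leave externally-executed calls
    # unresolved so the next capability (or the caller) can handle them.
    return DeferredToolResults(
        approvals={call.tool_call_id: True for call in requests.approvals}
    )


agent = Agent('openai:gpt-5', capabilities=[HandleDeferredToolCalls(auto_approve)])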

IncludeToolReturnSchemas

Bases: AbstractCapability[AgentDepsT]

Capability that includes return schemas for selected tools.

When added to an agent’s capabilities, this sets include_return_schema to True on matching tool definitions, causing the model to receive return type information for those tools.

For models that natively support return schemas (e.g. Google Gemini), the schema is passed as a structured field. For other models, it is injected into the tool description as JSON text.

Per-tool overrides (Tool(..., include_return_schema=False)) take precedence — this capability only sets the flag on tools that haven’t explicitly opted out.

from pydantic_ai import Agent
from pydantic_ai.capabilities import IncludeToolReturnSchemas

agent = Agent('openai:gpt-5', capabilities=[IncludeToolReturnSchemas()])

Attributes

tools

Which tools should have their return schemas included.

  • 'all' (default): every tool gets its return schema included.
  • Sequence[str]: only tools whose names are listed.
  • dict[str, Any]: matches tools whose metadata deeply includes the specified key-value pairs.
  • Callable (ctx, tool_def) -> bool: custom sync or async predicate.

Type: ToolSelector[AgentDepsT] Default: 'all'

PrefixTools

Bases: WrapperCapability[AgentDepsT]

A capability that wraps another capability and prefixes its tool names.

Only the wrapped capability’s tools are prefixed; other agent tools are unaffected.

from pydantic_ai import Agent
from pydantic_ai.capabilities import PrefixTools, Toolset
from pydantic_ai.toolsets import FunctionToolset

toolset = FunctionToolset()

agent = Agent(
    'openai:gpt-5',
    capabilities=[
        PrefixTools(
            wrapped=Toolset(toolset),
            prefix='ns',
        ),
    ],
)

Methods

from_spec

@classmethod

def from_spec(cls, prefix: str, capability: CapabilitySpec) -> PrefixTools[Any]

Create from spec with a nested capability specification.

Returns

PrefixTools[Any]

Parameters

prefix : str

The prefix to add to tool names (e.g. 'mcp' turns 'search' into 'mcp_search').

capability : CapabilitySpec

A capability spec (same format as entries in the capabilities list).

SetToolMetadata

Bases: AbstractCapability[AgentDepsT]

Capability that merges metadata key-value pairs onto selected tools.

from pydantic_ai import Agent
from pydantic_ai.capabilities import SetToolMetadata

agent = Agent('openai:gpt-5', capabilities=[SetToolMetadata(code_mode=True)])

ThreadExecutor

Bases: AbstractCapability[Any]

Use a custom executor for running sync functions in threads.

By default, sync tool functions and other sync callbacks are run in threads using anyio.to_thread.run_sync, which creates ephemeral threads. In long-running servers (e.g. FastAPI), this can lead to thread accumulation under sustained load.

This capability provides a bounded ThreadPoolExecutor (or any Executor) to use instead, scoped to agent runs:

from concurrent.futures import ThreadPoolExecutor

from pydantic_ai import Agent
from pydantic_ai.capabilities import ThreadExecutor

executor = ThreadPoolExecutor(max_workers=16, thread_name_prefix='agent-worker')
agent = Agent('openai:gpt-5.2', capabilities=[ThreadExecutor(executor)])

To set an executor for all agents globally, use Agent.using_thread_executor().

Attributes

executor

The executor to use for running sync functions.

Type: Executor

NativeTool

Bases: AbstractCapability[AgentDepsT]

A capability that registers a native tool with the agent.

Wraps a single AgentNativeTool — either a static AbstractNativeTool instance or a callable that dynamically produces one.

Register it with Agent(capabilities=[NativeTool(my_tool)]). For provider-adaptive use (with a local fallback), see NativeOrLocalTool or its subclasses like WebSearch.
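
For example, registering a provider-native web search tool directly (assuming WebSearchTool is importable from the package root, as in the NativeOrLocalTool example below):

from pydantic_ai import Agent, WebSearchTool
from pydantic_ai.capabilities import NativeTool

agent = Agent('openai:gpt-5', capabilities=[NativeTool(WebSearchTool())])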

Methods

from_spec

@classmethod

def from_spec(
    cls,
    tool: AbstractNativeTool | None = None,
    kwargs: Any = {},
) -> NativeTool[Any]

Create from spec.

Supports two YAML forms:

  • Flat: {NativeTool: {kind: web_search, search_context_size: high}}
  • Explicit: {NativeTool: {tool: {kind: web_search}}}
Returns

NativeTool[Any]

WebFetch

Bases: NativeOrLocalTool[AgentDepsT]

URL fetching capability.

Uses the model’s native URL fetching when available, falling back to a local function tool (markdownify-based fetch by default) when it isn’t.

The local fallback requires the web-fetch optional group:

Terminal
pip install "pydantic-ai-slim[web-fetch]"
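
A minimal sketch using the attributes documented below:

from pydantic_ai import Agent
from pydantic_ai.capabilities import WebFetch

agent = Agent(
    'openai:gpt-5',
    capabilities=[WebFetch(allowed_domains=['docs.pydantic.dev'])],
)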

Attributes

allowed_domains

Only fetch from these domains. Enforced locally when native is unavailable.

Type: list[str] | None Default: None

blocked_domains

Never fetch from these domains. Enforced locally when native is unavailable.

Type: list[str] | None Default: None

max_uses

Maximum number of fetches per run. Requires native support.

Type: int | None Default: None

enable_citations

Enable citations for fetched content. Native-only; ignored by local tools.

Type: bool | None Default: None

max_content_tokens

Maximum content length in tokens. Native-only; ignored by local tools.

Type: int | None Default: None

DynamicCapability

Bases: AbstractCapability[AgentDepsT]

A capability that builds another capability dynamically using a function that takes the run context.

The factory is called once per agent run from for_run. The returned capability replaces this wrapper for the rest of the run, so its instructions, model settings, toolset, native tools, and hooks all flow through normally.

Pass a CapabilityFunc directly to Agent(capabilities=[...]) or agent.run(capabilities=[...]) and it will be wrapped in a DynamicCapability automatically.
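
A sketch of a per-run factory; the length check on ctx.prompt is purely illustrative:

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability, Thinking


def pick_capability(ctx: RunContext[None]) -> AbstractCapability[None] | None:
    # Enable high-effort thinking only for long prompts.
    return Thinking(effort='high') if len(str(ctx.prompt)) > 500 else None


agent = Agent('openai:gpt-5', capabilities=[pick_capability])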

Attributes

capability_func

The function that takes the run context and returns a capability or None.

Type: CapabilityFunc[AgentDepsT]

ImageGeneration

Bases: NativeOrLocalTool[AgentDepsT]

Image generation capability.

Uses the model’s native image generation when available. When the model doesn’t support it and fallback_model is provided, falls back to a local tool that delegates to a subagent running the specified image-capable model.

Image generation settings (quality, size, etc.) are forwarded to the ImageGenerationTool used by both the native and the local fallback subagent. When passing a custom native instance, its settings are also used for the fallback subagent; capability-level fields override any native instance settings.
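
A sketch pairing a text model with an image-capable fallback, reusing the example model names from the fallback_model docs below:

from pydantic_ai import Agent
from pydantic_ai.capabilities import ImageGeneration

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    capabilities=[ImageGeneration(fallback_model='openai-responses:gpt-5.4')],
)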

Attributes

fallback_model

Model to use for image generation when the agent’s model doesn’t support it natively.

Must be a model that supports image generation via the ImageGenerationTool native tool. This requires a conversational model with image generation support, not a dedicated image-only API. Examples:

  • 'openai-responses:gpt-5.4' — OpenAI model with image generation support
  • 'google-gla:gemini-3-pro-image-preview' — Google image generation model

Can be a model name string, Model instance, or a callable taking RunContext that returns a Model instance.

Type: ImageGenerationFallbackModel Default: None

action

Whether to generate a new image or edit an existing image.

Supported by: OpenAI Responses. Default: 'auto'.

Type: Literal['generate', 'edit', 'auto'] | None Default: None

background

Background type for the generated image.

Supported by: OpenAI Responses. 'transparent' only supported for 'png' and 'webp'.

Type: Literal['transparent', 'opaque', 'auto'] | None Default: None

input_fidelity

Input fidelity for matching style/features of input images.

Supported by: OpenAI Responses. Default: 'low'.

Type: Literal['high', 'low'] | None Default: None

moderation

Moderation level for the generated image.

Supported by: OpenAI Responses.

Type: Literal['auto', 'low'] | None Default: None

image_model

The image generation model to use.

Supported by: OpenAI Responses.

Type: ImageGenerationModelName | None Default: None

output_compression

Compression level for the output image.

Supported by: OpenAI Responses (jpeg/webp, default: 100), Google Vertex AI (jpeg, default: 75).

Type: int | None Default: None

output_format

Output format of the generated image.

Supported by: OpenAI Responses (default: 'png'), Google Vertex AI.

Type: Literal['png', 'webp', 'jpeg'] | None Default: None

quality

Quality of the generated image.

Supported by: OpenAI Responses.

Type: Literal['low', 'medium', 'high', 'auto'] | None Default: None

size

Size of the generated image.

Supported by: OpenAI Responses ('auto', '1024x1024', '1024x1536', '1536x1024'), Google ('512', '1K', '2K', '4K').

Type: Literal['auto', '1024x1024', '1024x1536', '1536x1024', '512', '1K', '2K', '4K'] | None Default: None

aspect_ratio

Aspect ratio for generated images.

Supported by: Google (Gemini), OpenAI Responses (maps '1:1', '2:3', '3:2' to sizes).

Type: ImageAspectRatio | None Default: None

NativeOrLocalTool

Bases: AbstractCapability[AgentDepsT]

Capability that pairs a provider-native tool with a local fallback.

When the model supports the native tool, the local fallback is removed. When the model doesn’t support the native tool, it is removed and the local tool stays.

Can be used directly:

from pydantic_ai import WebSearchTool
from pydantic_ai.capabilities import NativeOrLocalTool

def my_search_func(query: str) -> str: ...  # any callable works as the local fallback

cap = NativeOrLocalTool(native=WebSearchTool(), local=my_search_func)

Or subclassed to set defaults by overriding _default_native, _default_local, and _requires_native. The built-in WebSearch, WebFetch, and ImageGeneration capabilities are all subclasses.

Attributes

native

Configure the provider-native tool.

  • True (default): use the default native tool configuration (subclasses only).
  • False: disable the native tool; always use the local tool.
  • An AbstractNativeTool instance: use this specific configuration.
  • A callable (NativeToolFunc): dynamically create the native tool per-run via RunContext.

Type: AgentNativeTool[AgentDepsT] | bool Default: True

local

Configure the local fallback tool.

  • None (default): auto-detect a local fallback via _default_local.
  • True: opt in to the default local fallback (resolved via _resolve_local_strategy).
  • False: disable the local fallback; only use the native tool.
  • A named strategy (e.g. 'duckduckgo'): resolved via _resolve_local_strategy in subclasses.
  • A Tool or AbstractToolset instance: use this specific local tool.
  • A bare callable: automatically wrapped in a Tool.

Type: str | Tool[AgentDepsT] | Callable[..., Any] | AbstractToolset[AgentDepsT] | bool | None Default: None

ProcessEventStream

Bases: AbstractCapability[AgentDepsT]

A capability that forwards the agent’s event stream to a user-provided async handler.

The handler receives the stream of AgentStreamEvents emitted during model streaming and tool execution for each ModelRequestNode and CallToolsNode. Two forms are supported:

  • An EventStreamHandler — an async def returning None. Events are forwarded to the handler while also being passed through unchanged to the rest of the capability chain, so multiple handlers (and the top-level event_stream_handler argument) can all see the same stream without changing each other’s view. A handler that returns early stops receiving events but does not affect downstream consumers; a handler that raises propagates the exception to the rest of the run. Events are delivered synchronously, so a slow handler back-pressures the rest of the stream.
  • An EventStreamProcessor — an async generator yielding AgentStreamEvents. The events it yields replace the inner stream for downstream wrappers and consumers, so it can modify, drop, or add events.

When this capability is registered, agent.run() automatically enables streaming so the handler fires without requiring an explicit event_stream_handler argument.
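
A sketch of the pass-through handler form (an async def returning None), assuming the handler receives the run context followed by the event stream, matching EventStreamHandler:

from collections.abc import AsyncIterable

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import ProcessEventStream
from pydantic_ai.messages import AgentStreamEvent


async def log_events(
    ctx: RunContext[None], stream: AsyncIterable[AgentStreamEvent]
) -> None:
    # Observe events without altering the downstream view of the stream.
    async for event in stream:
        print(type(event).__name__)


agent = Agent('openai:gpt-5', capabilities=[ProcessEventStream(log_events)])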

WebSearch

Bases: NativeOrLocalTool[AgentDepsT]

Web search capability.

Uses the model’s native web search when available, falling back to a local function tool (DuckDuckGo by default) when it isn’t.
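
A minimal sketch using the attributes documented below:

from pydantic_ai import Agent
from pydantic_ai.capabilities import WebSearch

# Native web search where supported; DuckDuckGo fallback elsewhere.
agent = Agent('openai:gpt-5', capabilities=[WebSearch(search_context_size='high')])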

Attributes

search_context_size

Controls how much context is retrieved from the web. Native-only; ignored by local tools.

Type: Literal['low', 'medium', 'high'] | None Default: None

user_location

Localize search results based on user location. Native-only; ignored by local tools.

Type: WebSearchUserLocation | None Default: None

blocked_domains

Domains to exclude from results. Requires native support.

Type: list[str] | None Default: None

allowed_domains

Only include results from these domains. Requires native support.

Type: list[str] | None Default: None

max_uses

Maximum number of web searches per run. Requires native support.

Type: int | None Default: None

ReinjectSystemPrompt

Bases: AbstractCapability[AgentDepsT]

Capability that reinjects the agent’s configured system_prompt when missing from history.

Ensures the agent’s configured system_prompt is present at the head of the first ModelRequest on every model request.

Intended for callers that reconstruct a message_history from a source that doesn’t round-trip system prompts — UI frontends, database persistence layers, conversation compaction pipelines. By default, if any SystemPromptPart is already present anywhere in the history (for example, preserved from a prior run or handed off from another agent), this capability leaves the messages untouched so that existing system prompts remain authoritative. Set replace_existing=True to instead strip any existing SystemPromptParts before prepending the agent’s configured prompt — useful when the history comes from an untrusted source (such as a UI frontend) and the server’s prompt must win.

The UI adapters automatically add this capability in manage_system_prompt='server' mode with replace_existing=True. Add it explicitly with Agent(..., capabilities=[ReinjectSystemPrompt()]) or per-run via the capabilities= argument on Agent.run to get the same behavior anywhere.

Attributes

replace_existing

If True, strip any existing SystemPromptParts from the history before prepending the agent’s configured prompt. If False (the default), the capability is a no-op when any SystemPromptPart is already present.

Type: bool Default: False

ProcessHistory

Bases: AbstractCapability[AgentDepsT]

A capability that processes message history before model requests.
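
A sketch, assuming ProcessHistory wraps a processor callable that receives the message history and returns the (possibly trimmed) list:

from pydantic_ai import Agent
from pydantic_ai.capabilities import ProcessHistory
from pydantic_ai.messages import ModelMessage


def keep_recent(messages: list[ModelMessage]) -> list[ModelMessage]:
    # Illustrative: keep only the five most recent messages.
    return messages[-5:]


agent = Agent('openai:gpt-5', capabilities=[ProcessHistory(keep_recent)])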

MCP

Bases: NativeOrLocalTool[AgentDepsT]

MCP server capability.

Uses the model’s native MCP server support when available, connecting directly via HTTP when it isn’t.
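
A minimal sketch using the attributes documented below; the server URL is a placeholder:

from pydantic_ai import Agent
from pydantic_ai.capabilities import MCP

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    capabilities=[MCP(url='https://example.com/mcp', allowed_tools=['search'])],
)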

Attributes

url

The URL of the MCP server.

Type: str

id

Unique identifier for the MCP server. Defaults to a slug derived from the URL.

Type: str | None Default: None

authorization_token

Authorization header value for MCP server requests. Passed to both native and local.

Type: str | None Default: None

headers

HTTP headers for MCP server requests. Passed to both native and local.

Type: dict[str, str] | None Default: None

allowed_tools

Filter to only these tools. Applied to both native and local.

Type: list[str] | None Default: None

description

Description of the MCP server. Native-only; ignored by local tools.

Type: str | None Default: None

ToolSearch

Bases: AbstractCapability[AgentDepsT]

Capability that provides tool discovery for large toolsets.

Tools marked with defer_loading=True are hidden from the model until discovered. Auto-injected into every agent — zero overhead when no deferred tools exist.

When the model supports native tool search (Anthropic BM25/regex, OpenAI Responses), discovery is handled by the provider: the deferred tools are sent with defer_loading on the wire and the provider exposes them once they’ve been discovered. Otherwise, discovery happens locally via a search_tools function that the model can call.

On providers that support a native “client-executed” surface (Anthropic, OpenAI), the discovery message is delivered append-only — prompt cache is preserved across discovery turns, so growing the message history with discovered-tool results does not invalidate the cached prefix.

from collections.abc import Sequence

from pydantic_ai import Agent, RunContext, Tool
from pydantic_ai.capabilities import ToolSearch
from pydantic_ai.tools import ToolDefinition


# Tools become deferred via `defer_loading=True`. They stay hidden from the model
# until tool search discovers them.
def get_weather(city: str) -> str:
    ...


weather_tool = Tool(get_weather, defer_loading=True)

# Default: native search on supporting providers, local keyword matching elsewhere.
agent = Agent('anthropic:claude-sonnet-4-6', tools=[weather_tool], capabilities=[ToolSearch()])

# Force a specific Anthropic native strategy; errors on providers that can't honor it.
agent = Agent(
    'anthropic:claude-sonnet-4-6',
    tools=[weather_tool],
    capabilities=[ToolSearch(strategy='regex')],
)

# Always run the local keyword-overlap algorithm, regardless of provider.
agent = Agent(
    'anthropic:claude-sonnet-4-6',
    tools=[weather_tool],
    capabilities=[ToolSearch(strategy='keywords')],
)

# Custom search function — used locally, and by provider-native "client-executed"
# modes when supported.
def my_search(
    ctx: RunContext[None], queries: Sequence[str], tools: Sequence[ToolDefinition]
) -> list[str]:
    return [
        t.name
        for t in tools
        if any(q.lower() in (t.description or '').lower() for q in queries)
    ]

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    tools=[weather_tool],
    capabilities=[ToolSearch(strategy=my_search)],
)

Attributes

strategy

The search strategy to use.

  • None (default): let Pydantic AI pick the best strategy for the current provider — native on supporting models (Anthropic BM25, OpenAI server-executed tool search), local keyword matching elsewhere. The choice may change in future versions.
  • 'keywords': always use the local keyword-overlap algorithm. Still prompt-cache compatible on providers that expose a “client-executed” native surface (Anthropic, OpenAI): the algorithm rides the same defer_loading wire as a custom callable, so the tool list stays stable across discovery rounds and the cached prefix is preserved.
  • 'bm25' / 'regex': force a specific Anthropic native strategy. Raises on providers that can’t honor the choice (including OpenAI, which has no named native strategies).
  • Callable (ctx, queries, tools) -> names: custom search function (sync or async). Used locally, and by the native “client-executed” surface on providers that support it (Anthropic custom tool-reference blocks, OpenAI execution='client').

Type: ToolSearchStrategy[AgentDepsT] | None Default: None

max_results

Maximum number of matches returned by the local search algorithm.

Type: int Default: 10

tool_description

Custom description for the local search_tools function shown to the model.

Type: str | None Default: None

parameter_description

Custom description for the queries parameter on the local search_tools function.

Type: str | None Default: None

CombinedCapability

Bases: AbstractCapability[AgentDepsT]

A capability that combines multiple capabilities.

WrapperCapability

Bases: AbstractCapability[AgentDepsT]

A capability that wraps another capability and delegates all methods.

Analogous to WrapperToolset for toolsets. Subclass and override specific methods to modify behavior while delegating the rest.

PrepareOutputTools

Bases: AbstractCapability[AgentDepsT]

Capability that filters or modifies output tool definitions using a callable.

Mirrors PrepareTools for output tools. ctx.retry/ctx.max_retries reflect the output retry budget (max_output_retries), matching the output hook lifecycle.

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import PrepareOutputTools
from pydantic_ai.output import ToolOutput
from pydantic_ai.tools import ToolDefinition


async def only_after_first_step(
    ctx: RunContext[None], tool_defs: list[ToolDefinition]
) -> list[ToolDefinition] | None:
    return tool_defs if ctx.run_step > 0 else []


agent = Agent(
    'openai:gpt-5',
    output_type=ToolOutput(str),
    capabilities=[PrepareOutputTools(only_after_first_step)],
)

HistoryProcessor

Bases: ProcessHistory[AgentDepsT]

Deprecated alias for ProcessHistory.

Instrumentation

Bases: AbstractCapability[Any]

Capability that instruments agent runs with OpenTelemetry/Logfire tracing.

When added to an agent via capabilities=[Instrumentation(...)], this capability creates OpenTelemetry spans for the agent run, model requests, and tool executions.

Other capabilities can add attributes to these spans using either the OpenTelemetry API (opentelemetry.trace.get_current_span().set_attribute(key, value)) or the Logfire SDK (logfire.current_span().set_attribute(key, value)).
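
A minimal sketch; logfire.configure() sets up the global providers that the default settings use:

import logfire

from pydantic_ai import Agent
from pydantic_ai.capabilities import Instrumentation

logfire.configure()
agent = Agent('openai:gpt-5', capabilities=[Instrumentation()])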

Attributes

settings

OTel/Logfire instrumentation settings. Defaults to InstrumentationSettings(), which uses the global TracerProvider/LoggerProvider (typically configured by logfire.configure()).

Type: InstrumentationSettings Default: InstrumentationSettings()

Methods

from_spec

@classmethod

def from_spec(cls, kwargs: Any = {}) -> Instrumentation

Build an Instrumentation capability from a YAML/JSON spec.

Accepts the serializable subset of InstrumentationSettings kwargs (include_binary_content, include_content, version, event_mode, use_aggregated_usage_attribute_names). The OTel tracer_provider, meter_provider, and logger_provider fields can’t be expressed in YAML and default to the global providers (typically configured via logfire.configure()).

YAML form:

capabilities:
  - Instrumentation: {}  # default settings
  - Instrumentation:
      version: 2
      include_content: false
Returns

Instrumentation

for_run

@async

def for_run(ctx: RunContext[Any]) -> Instrumentation

Return a fresh copy for per-run state isolation.

Returns

Instrumentation

wrap_output_process

@async

def wrap_output_process(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
    handler: WrapOutputProcessHandler,
) -> Any

Emit a span for output-function execution.

Output processing for plain validation (no function) is not span-worthy — the validated value is the model’s response itself, no user code ran. We open a span only when an output function will execute, regardless of whether the output arrived via a tool call. The span name reflects the function (or tool name when the function name is unavailable, e.g. union processors).

Returns

Any

HookTimeoutError

Bases: TimeoutError

Raised when a hook function exceeds its configured timeout.

CapabilityOrdering

Ordering constraints for a capability within a combined capability chain.

Capabilities follow middleware semantics: the first capability in the list is the outermost layer, wrapping all others. Declare ordering constraints via get_ordering to control a capability’s position in the chain regardless of how the user lists them.

When a CombinedCapability is constructed, it topologically sorts its children to satisfy these constraints, preserving user-provided order as a tiebreaker.
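
A sketch of a capability that declares it must sit inside Instrumentation spans; the Retryer class is hypothetical:

from pydantic_ai.capabilities import AbstractCapability, CapabilityOrdering, Instrumentation


class Retryer(AbstractCapability[None]):
    def get_ordering(self) -> CapabilityOrdering:
        # Ensure Instrumentation wraps this capability, so work done here
        # is visible inside the tracing spans.
        return CapabilityOrdering(wrapped_by=[Instrumentation])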

Attributes

position

Fixed position in the chain, or None for user-provided order.

Type: CapabilityPosition | None Default: None

wraps

This capability wraps around (is outside of) these capabilities in the middleware chain.

Each entry can be a capability type (matches all instances of that type via issubclass) or a specific capability instance (matches by identity via is).

Note: instance refs use identity (is) matching, so if a capability’s for_run returns a new instance, refs to the original will no longer match. Use type refs when the target capability uses per-run state isolation.

Type: Sequence[CapabilityRef] Default: ()

wrapped_by

This capability is wrapped by (is inside of) these capabilities in the middleware chain.

Each entry can be a capability type (matches all instances of that type via issubclass) or a specific capability instance (matches by identity via is).

Note: instance refs use identity (is) matching, so if a capability’s for_run returns a new instance, refs to the original will no longer match. Use type refs when the target capability uses per-run state isolation.

Type: Sequence[CapabilityRef] Default: ()

requires

These types must be present in the chain (no ordering implied).

Type: Sequence[type[AbstractCapability[Any]]] Default: ()

AbstractCapability

Bases: ABC, Generic[AgentDepsT]

Abstract base class for agent capabilities.

A capability is a reusable, composable unit of agent behavior that can provide instructions, model settings, tools, and request/response hooks.

Lifecycle: capabilities are passed to an Agent at construction time, where most get_* methods are called to collect static configuration (instructions, model settings, toolsets, native tools). The exception is get_wrapper_toolset, which is called per-run during toolset assembly. Then, on each model request during a run, the before_model_request and after_model_request hooks are called to allow dynamic adjustments.

See the capabilities documentation for built-in capabilities.

get_serialization_name and from_spec support YAML/JSON specs (via Agent.from_spec); they have sensible defaults and typically don’t need to be overridden.
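
A minimal subclass sketch, assuming a plain string is a valid AgentInstructions value:

from pydantic_ai import Agent
from pydantic_ai.capabilities import AbstractCapability


class PirateVoice(AbstractCapability[None]):
    def get_instructions(self) -> str:
        # Collected once at agent construction time.
        return 'Answer like a pirate.'


agent = Agent('openai:gpt-5', capabilities=[PirateVoice()])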

Attributes

has_wrap_node_run

Whether this capability (or any sub-capability) overrides wrap_node_run.

Type: bool

has_wrap_run_event_stream

Whether this capability (or any sub-capability) overrides wrap_run_event_stream.

Type: bool

Methods

apply
def apply(visitor: Callable[[AbstractCapability[AgentDepsT]], None]) -> None

Run a visitor function on all leaf capabilities in this tree.

For a single capability, calls the visitor on itself. Overridden by CombinedCapability to recursively visit all child capabilities, and by WrapperCapability to delegate to the wrapped capability.

Returns

None

get_serialization_name

@classmethod

def get_serialization_name(cls) -> str | None

Return the name used for spec serialization (CamelCase class name by default).

Return None to opt out of spec-based construction.

Returns

str | None

from_spec

@classmethod

def from_spec(cls, args: Any = (), kwargs: Any = {}) -> AbstractCapability[Any]

Create from spec arguments. Default: cls(*args, **kwargs).

Override when __init__ takes non-serializable types.

Returns

AbstractCapability[Any]

get_ordering
def get_ordering() -> CapabilityOrdering | None

Return ordering constraints for this capability, or None for default behavior.

Override to declare a fixed position ('outermost' / 'innermost'), relative ordering (wraps / wrapped_by other capability types or instances), or dependency requirements (requires).

CombinedCapability uses these to topologically sort its children at construction time.

Returns

CapabilityOrdering | None

for_run

@async

def for_run(ctx: RunContext[AgentDepsT]) -> AbstractCapability[AgentDepsT]

Return the capability instance to use for this agent run.

Called once per run, before get_*() re-extraction and before any hooks fire. Override to return a fresh instance for per-run state isolation. Default: return self (shared across runs).

Returns

AbstractCapability[AgentDepsT]

get_instructions
def get_instructions() -> AgentInstructions[AgentDepsT] | None

Return instructions to include in the system prompt, or None.

This method is called once at agent construction time. To get dynamic per-request behavior, return a callable that receives RunContext or a TemplateStr — not a dynamic string.

Returns

AgentInstructions[AgentDepsT] | None

get_model_settings
def get_model_settings() -> AgentModelSettings[AgentDepsT] | None

Return model settings to merge into the agent’s defaults, or None.

This method is called once at agent construction time. Return a static ModelSettings dict when the settings don’t change between requests. Return a callable that receives RunContext when settings need to vary per step (e.g. based on ctx.run_step or ctx.deps).

When the callable is invoked, ctx.model_settings contains the merged result of all layers resolved before this capability (model defaults and agent-level settings). The returned dict is merged on top of that.

Returns

AgentModelSettings[AgentDepsT] | None

get_toolset
def get_toolset() -> AgentToolset[AgentDepsT] | None

Return a toolset to register with the agent, or None.

Returns

AgentToolset[AgentDepsT] | None

get_native_tools
def get_native_tools() -> Sequence[AgentNativeTool[AgentDepsT]]

Return native tools to register with the agent.

Returns

Sequence[AgentNativeTool[AgentDepsT]]

get_builtin_tools
def get_builtin_tools() -> Sequence[AgentNativeTool[AgentDepsT]]

Deprecated: use get_native_tools instead.

Returns

Sequence[AgentNativeTool[AgentDepsT]]

get_wrapper_toolset
def get_wrapper_toolset(
    toolset: AbstractToolset[AgentDepsT],
) -> AbstractToolset[AgentDepsT] | None

Wrap the agent’s assembled toolset, or return None to leave it unchanged.

Called per-run with the combined non-output toolset (after the prepare_tools hook has already wrapped it). Output tools are added separately and are not included.

Unlike the other get_* methods which are called once at agent construction, this is called each run (after for_run). When multiple capabilities provide wrappers, they follow middleware semantics: the first capability in the list wraps outermost (matching wrap_* hooks).

Use this to apply cross-cutting toolset wrappers like PreparedToolset, FilteredToolset, or custom WrapperToolset subclasses.

Returns

AbstractToolset[AgentDepsT] | None

prepare_tools

@async

def prepare_tools(
    ctx: RunContext[AgentDepsT],
    tool_defs: list[ToolDefinition],
) -> list[ToolDefinition]

Filter or modify function tool definitions for this step.

Receives function tools only. For output tools, override prepare_output_tools — it runs separately, with ctx.retry/ctx.max_retries reflecting the output retry budget instead of the function-tool budget.

Return a filtered or modified list. The result flows into both the model’s request parameters and ToolManager.tools, so filtering also blocks tool execution.

Returns

list[ToolDefinition]

prepare_output_tools

@async

def prepare_output_tools(
    ctx: RunContext[AgentDepsT],
    tool_defs: list[ToolDefinition],
) -> list[ToolDefinition]

Filter or modify output tool definitions for this step.

Receives only output tools. ctx.retry and ctx.max_retries reflect the output retry budget (agent-level max_output_retries), matching the output hook lifecycle.

Return a filtered or modified list. The result flows into both the model’s request parameters and ToolManager.tools, so filtering also blocks tool execution.

Returns

list[ToolDefinition]

before_run

@async

def before_run(ctx: RunContext[AgentDepsT]) -> None

Called before the agent run starts. Observe-only; use wrap_run for modification.

Returns

None

after_run

@async

def after_run(
    ctx: RunContext[AgentDepsT],
    result: AgentRunResult[Any],
) -> AgentRunResult[Any]

Called after the agent run completes. Can modify the result.

Returns

AgentRunResult[Any]

wrap_run

@async

def wrap_run(
    ctx: RunContext[AgentDepsT],
    handler: WrapRunHandler,
) -> AgentRunResult[Any]

Wraps the entire agent run. handler() executes the run.

If handler() raises and this method catches the exception and returns a result instead, the error is suppressed and the recovery result is used.

If this method does not call handler() (short-circuit), the run is skipped and the returned result is used directly.

Note: if the caller cancels the run (e.g. by breaking out of an iter() loop), this method receives an asyncio.CancelledError. Implementations that hold resources should handle cleanup accordingly.

Returns

AgentRunResult[Any]

on_run_error

@async

def on_run_error(
    ctx: RunContext[AgentDepsT],
    error: BaseException,
) -> AgentRunResult[Any]

Called when the agent run fails with an exception.

This is the error counterpart to after_run: while after_run is called on success, on_run_error is called on failure (after wrap_run has had its chance to recover).

Raise the original error (or a different exception) to propagate it. Return an AgentRunResult to suppress the error and recover the run.

Not called for GeneratorExit or KeyboardInterrupt.

Returns

AgentRunResult[Any]

before_node_run

@async

def before_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
) -> AgentNode[AgentDepsT]

Called before each graph node executes. Can observe or replace the node.

Returns

AgentNode[AgentDepsT]

after_node_run

@async

def after_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    result: NodeResult[AgentDepsT],
) -> NodeResult[AgentDepsT]

Called after each graph node succeeds. Can modify the result (next node or End).

Returns

NodeResult[AgentDepsT]

wrap_node_run

@async

def wrap_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    handler: WrapNodeRunHandler[AgentDepsT],
) -> NodeResult[AgentDepsT]

Wraps execution of each agent graph node (run step).

Called for every node in the agent graph (UserPromptNode, ModelRequestNode, CallToolsNode). handler(node) executes the node and returns the next node (or End).

Override to inspect or modify nodes before execution, inspect or modify the returned next node, call handler multiple times (retry), or return a different node to redirect graph progression.

Note: this hook fires when using agent.run(), agent.run_stream(), and when manually driving an agent.iter() run with next(), but it does not fire when iterating over the run with bare async for (which yields stream events, not node results).

When using agent.run() with event_stream_handler, the handler wraps both streaming and graph advancement (i.e. the model call happens inside the wrapper). When using agent.run_stream(), the handler wraps only graph advancement — streaming happens before the wrapper because run_stream() must yield the stream to the caller while the stream context is still open, which cannot happen from inside a callback.

Returns

NodeResult[AgentDepsT]

on_node_run_error

@async

def on_node_run_error(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    error: Exception,
) -> NodeResult[AgentDepsT]

Called when a graph node fails with an exception.

This is the error counterpart to after_node_run.

Raise the original error (or a different exception) to propagate it. Return a next node or End to recover and continue the graph.

Useful for recovering from UnexpectedModelBehavior by redirecting to a different node (e.g. retry with different model settings).

Returns

NodeResult[AgentDepsT]

wrap_run_event_stream

@async

def wrap_run_event_stream(
    ctx: RunContext[AgentDepsT],
    stream: AsyncIterable[AgentStreamEvent],
) -> AsyncIterable[AgentStreamEvent]

Wraps the event stream for a streamed node. Can observe or transform events.

Note: when this method is overridden (or Hooks.on.event / Hooks.on.run_event_stream are registered), agent.run() automatically enables streaming mode so this hook fires even without an explicit event_stream_handler.

Returns

AsyncIterable[AgentStreamEvent]

before_model_request

@async

def before_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
) -> ModelRequestContext

Called before each model request. Can modify messages, settings, and parameters.

Returns

ModelRequestContext

after_model_request

@async

def after_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    response: ModelResponse,
) -> ModelResponse

Called after each model response. Can modify the response before further processing.

Raise ModelRetry to reject the response and ask the model to try again. The original response is still appended to message history so the model can see what it said. Retries count against output_retries.

Returns

ModelResponse

wrap_model_request

@async

def wrap_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    handler: WrapModelRequestHandler,
) -> ModelResponse

Wraps the model request. handler() calls the model.

Raise ModelRetry to skip on_model_request_error and directly retry the model request with a retry prompt. If the handler was called, the model response is preserved in history for context (same as after_model_request).

Returns

ModelResponse

on_model_request_error

@async

def on_model_request_error(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    error: Exception,
) -> ModelResponse

Called when a model request fails with an exception.

This is the error counterpart to after_model_request.

Raise the original error (or a different exception) to propagate it. Return a ModelResponse to suppress the error and use the response as if the model call succeeded. Raise ModelRetry to retry the model request with a retry prompt instead of recovering or propagating.

Not called for SkipModelRequest or ModelRetry.

Returns

ModelResponse
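
A recovery sketch: swallow transient connection failures and substitute a canned response. ModelResponse and TextPart construct as in pydantic_ai.messages; request_context is typed as Any here to avoid guessing the ModelRequestContext import path:

from typing import Any

from pydantic_ai import RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.messages import ModelResponse, TextPart


class GracefulDegrade(AbstractCapability[None]):
    async def on_model_request_error(
        self, ctx: RunContext[None], request_context: Any, error: Exception
    ) -> ModelResponse:
        if isinstance(error, ConnectionError):
            # Suppress the failure; this response is used as if the call succeeded.
            return ModelResponse(parts=[TextPart(content='The model is currently unavailable.')])
        raise error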

before_tool_validate

@async

def before_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
) -> RawToolArgs

Modify raw args before validation.

Raise ModelRetry to skip validation and ask the model to redo the tool call.

Returns

RawToolArgs

after_tool_validate

@async

def after_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
) -> ValidatedToolArgs

Modify validated args. Called only on successful validation.

Raise ModelRetry to reject the validated args and ask the model to redo the tool call.

Returns

ValidatedToolArgs

wrap_tool_validate

@async

def wrap_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
    handler: WrapToolValidateHandler,
) -> ValidatedToolArgs

Wraps tool argument validation. handler() runs the validation.

Returns

ValidatedToolArgs

on_tool_validate_error

@async

def on_tool_validate_error(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
    error: ValidationError | ModelRetry,
) -> ValidatedToolArgs

Called when tool argument validation fails.

This is the error counterpart to after_tool_validate. Fires for ValidationError (schema mismatch) and ModelRetry (custom validator rejection).

Raise the original error (or a different exception) to propagate it. Return validated args to suppress the error and continue as if validation passed.

Not called for SkipToolValidation.

Returns

ValidatedToolArgs

before_tool_execute

@async

def before_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
) -> ValidatedToolArgs

Modify validated args before execution.

Raise ModelRetry to skip execution and ask the model to redo the tool call.

Returns

ValidatedToolArgs

after_tool_execute

@async

def after_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    result: Any,
) -> Any

Modify result after execution.

Raise ModelRetry to reject the tool result and ask the model to redo the tool call.

Returns

Any

wrap_tool_execute

@async

def wrap_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    handler: WrapToolExecuteHandler,
) -> Any

Wraps tool execution. handler() runs the tool.

Returns

Any
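
A retry sketch: call the handler again once on a timeout (parameter annotations elided for brevity; handler takes the validated args, per WrapToolExecuteHandler):

from typing import Any

from pydantic_ai.capabilities import AbstractCapability


class RetryOnceOnTimeout(AbstractCapability[None]):
    async def wrap_tool_execute(self, ctx, call, tool_def, args, handler) -> Any:
        try:
            return await handler(args)
        except TimeoutError:
            # One retry, then let any further error propagate to on_tool_execute_error.
            return await handler(args)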

on_tool_execute_error

@async

def on_tool_execute_error(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    error: Exception,
) -> Any

Called when tool execution fails with an exception.

This is the error counterpart to after_tool_execute.

Raise the original error (or a different exception) to propagate it. Return any value to suppress the error and use it as the tool result. Raise ModelRetry to ask the model to redo the tool call instead of recovering or propagating.

Not called for control flow exceptions (SkipToolExecution, CallDeferred, ApprovalRequired) or retry signals (ToolRetryError from ModelRetry). Use wrap_tool_execute to intercept retries.

Returns

Any

before_output_validate

@async

def before_output_validate(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: RawOutput,
) -> RawOutput

Modify raw model output before validation/parsing.

The primary hook for pre-parse repair and normalization of model output. Fires only for structured output that requires parsing: prompted, native, tool, and union output. Does not fire for plain text or image output.

For structured text output, output is the raw text string from the model. For tool output, output is the raw tool arguments (string or dict).

Raise ModelRetry to skip validation and ask the model to try again with a custom message.

During streaming, this hook fires on every partial validation attempt as well as the final result. Check ctx.partial_output to distinguish and avoid expensive work on partial results.

Returns

RawOutput
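
A pre-parse repair sketch: strip markdown code fences that models sometimes wrap around JSON output (annotations elided; output is str | dict per RawOutput):

from pydantic_ai.capabilities import AbstractCapability


class StripCodeFences(AbstractCapability[None]):
    async def before_output_validate(self, ctx, output_context, output):
        if isinstance(output, str):
            # Remove ```json ... ``` fences before the parser sees the text.
            return output.strip().removeprefix('```json').removesuffix('```').strip()
        return output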

after_output_validate

@async

def after_output_validate(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
) -> Any

Modify validated output after successful parsing. Called only on success.

output is the semantic value the model was asked to produce — e.g., a MyModel instance for output_type=MyModel, or 42 for output_type=int, or the input to a single-arg output function. For multi-arg output functions, this is the dict of arguments (the genuine multi-value input).

Note: this differs from tool hooks (after_tool_validate), which always see dict[str, Any] — tool args follow the schema contract. Output hooks see the semantic output value, regardless of how it’s internally represented during validation.

Raise ModelRetry to reject the validated output and ask the model to try again.

Returns

Any

wrap_output_validate

@async

def wrap_output_validate(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: RawOutput,
    handler: WrapOutputValidateHandler,
) -> Any

Wraps output validation. handler(output) performs the validation.

ModelRetry from within the handler goes to on_output_validate_error. ModelRetry raised directly (not from the handler) bypasses the error hook.

Returns

Any

on_output_validate_error

@async

def on_output_validate_error(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: RawOutput,
    error: ValidationError | ModelRetry,
) -> Any

Called when output validation fails.

This is the error counterpart to after_output_validate.

Raise the original error (or a different exception) to propagate it. Return validated output to suppress the error and continue.

Returns

Any

before_output_process

@async

def before_output_process(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
) -> Any

Modify validated output before processing (extraction, output function call).

output is the semantic value — e.g., a MyModel instance or 42, matching after_output_validate. For multi-arg output functions, it’s the dict of args. See after_output_validate for a full explanation of the semantic-value contract.

Raise ModelRetry to skip processing and ask the model to try again.

Returns

Any

after_output_process

@async

def after_output_process(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
) -> Any

Modify result after output processing.

Raise ModelRetry to reject the result and ask the model to try again.

Returns

Any

wrap_output_process

@async

def wrap_output_process(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
    handler: WrapOutputProcessHandler,
) -> Any

Wraps output processing. handler(output) runs extraction + output function call.

ModelRetry bypasses on_output_process_error (treated as control flow, not an error).

During streaming, this fires only when partial validation succeeds, and on the final result. Check ctx.partial_output to skip expensive work on partial results.

Returns

Any

on_output_process_error

@async

def on_output_process_error(
    ctx: RunContext[AgentDepsT],
    output_context: OutputContext,
    output: Any,
    error: Exception,
) -> Any

Called when output processing fails with an exception.

This is the error counterpart to after_output_process.

Raise the original error (or a different exception) to propagate it. Return any value to suppress the error and use it as the output.

Not called for retry signals (ToolRetryError from ModelRetry).

Returns

Any

handle_deferred_tool_calls

@async

def handle_deferred_tool_calls(
    ctx: RunContext[AgentDepsT],
    requests: DeferredToolRequests,
) -> DeferredToolResults | None

Handle deferred tool calls (approval-required or externally-executed) inline during an agent run.

Called by ToolManager when a run produces deferred tool calls (tools requiring approval or external execution) that would otherwise end the run as DeferredToolRequests output.

Uses accumulation dispatch: each capability in the chain receives remaining unresolved requests and can resolve some or all of them. Results are merged and unresolved calls are passed to the next capability.

Return a DeferredToolResults to resolve some or all calls. Return None to leave all calls unresolved.

Returns

DeferredToolResults | None

prefix_tools
def prefix_tools(prefix: str) -> PrefixTools[AgentDepsT]

Returns a new capability that wraps this one and prefixes its tool names.

Only this capability’s tools are prefixed; other agent tools are unaffected.

Returns

PrefixTools[AgentDepsT]

OutputContext

Context about the output being processed, passed to output hooks.

Attributes

mode

The schema’s output mode ('text', 'native', 'prompted', 'tool', 'image', 'auto').

This reflects the configured schema, not the format of this particular response. For example, a ToolOutputSchema with a text_processor (hybrid mode) reports 'tool' even if the model returned text — check tool_call to distinguish.

Type: OutputMode

output_type

The resolved output type (e.g. MyModel, str). For output functions, the function’s input type (what the model produces).

Type: type[Any] | None

object_def

The output object definition (schema, name, description), if structured output.

Type: OutputObjectDefinition | None

has_function

Whether there’s an output function to call in the execute step.

Type: bool

function_name

Name of the output function that will run, when known. None for union processors that dispatch by output subtype, or when the schema has no function.

Type: str | None Default: None

tool_call

The tool call part, for tool-based output. None when the current output did not arrive via a tool call (text or image).

Type: ToolCallPart | None Default: None

tool_def

The tool definition, for tool-based output. None when the current output did not arrive via a tool call.

Type: ToolDefinition | None Default: None

allows_text

Whether the schema accepts text output (including via a text_processor on a ToolOutputSchema).

Type: bool Default: False

allows_image

Whether the schema accepts image output.

Type: bool Default: False

allows_deferred_tools

Whether the schema accepts deferred tool requests as output.

Type: bool Default: False

Hooks

Bases: AbstractCapability[AgentDepsT]

Register hook functions via decorators or constructor kwargs.

For extension developers building reusable capabilities, subclass AbstractCapability directly. For application code that needs a few hooks without the ceremony of a subclass, use Hooks.

Example using decorators:

from pydantic_ai import Agent
from pydantic_ai.capabilities import Hooks

hooks = Hooks()

@hooks.on.before_model_request
async def log_request(ctx, request_context):
    print(f'Request: {request_context}')
    return request_context

agent = Agent('openai:gpt-5', capabilities=[hooks])

Example using constructor kwargs:

agent = Agent('openai:gpt-5', capabilities=[
    Hooks(before_model_request=log_request)
])

Attributes

on

Decorator namespace for registering hook functions.

Type: _HookRegistration[AgentDepsT]

CapabilityFunc

A sync/async function that takes a run context and returns a capability or None.

Type: TypeAlias Default: Callable[[RunContext[AgentDepsT]], AbstractCapability[AgentDepsT] | None | Awaitable[AbstractCapability[AgentDepsT] | None]]

AgentNode

Type alias for an agent graph node (UserPromptNode, ModelRequestNode, CallToolsNode).

Type: TypeAlias Default: '_agent_graph.AgentNode[AgentDepsT, Any]'

NodeResult

Type alias for the result of executing an agent graph node: either the next node or End.

Type: TypeAlias Default: '_agent_graph.AgentNode[AgentDepsT, Any] | End[FinalResult[Any]]'

WrapRunHandler

Handler type for wrap_run.

Type: TypeAlias Default: 'Callable[[], Awaitable[AgentRunResult[Any]]]'

ToolSearchNativeStrategy

Named provider-native tool search strategy.

'bm25' and 'regex' correspond to Anthropic’s server-side tool search variants. OpenAI’s Responses API does not expose distinct named native strategies, so these values are rejected by the OpenAI adapter.

Default: Literal['bm25', 'regex']

WrapNodeRunHandler

Handler type for wrap_node_run.

Type: TypeAlias Default: 'Callable[[_agent_graph.AgentNode[AgentDepsT, Any]], Awaitable[_agent_graph.AgentNode[AgentDepsT, Any] | End[FinalResult[Any]]]]'

WrapModelRequestHandler

Handler type for wrap_model_request.

Type: TypeAlias Default: 'Callable[[ModelRequestContext], Awaitable[ModelResponse]]'

RawToolArgs

Type alias for raw (pre-validation) tool arguments.

Type: TypeAlias Default: str | dict[str, Any]

ToolSearchLocalStrategy

Named local tool search strategy.

'keywords' opts into the built-in keyword-overlap algorithm explicitly — use this to lock in the current local algorithm rather than the None default (which lets Pydantic AI pick the best algorithm per provider and may change over time).

Future local strategies (e.g. local BM25, TF-IDF, regex) will join this Literal as they’re added; the single-member shape today is forward-compat scaffolding.

Default: Literal['keywords']

ValidatedToolArgs

Type alias for validated tool arguments.

Type: TypeAlias Default: dict[str, Any]

WrapToolValidateHandler

Handler type for wrap_tool_validate.

Type: TypeAlias Default: Callable[[RawToolArgs], Awaitable[ValidatedToolArgs]]

AgentCapability

A capability or a CapabilityFunc that takes a run context and returns one.

Use as the item type for Agent(capabilities=[...]) and agent.run(capabilities=[...]). Functions are wrapped in a DynamicCapability automatically.

Type: TypeAlias Default: AbstractCapability[AgentDepsT] | CapabilityFunc[AgentDepsT]

WrapToolExecuteHandler

Handler type for wrap_tool_execute.

Type: TypeAlias Default: Callable[[ValidatedToolArgs], Awaitable[Any]]

ToolSearchFunc

Custom search function for ToolSearch’s strategy field.

Takes the run context, the list of search queries, and the deferred tool definitions, and returns the matching tool names ordered by relevance. Both sync and async implementations are accepted.

Usage: ToolSearchFunc[AgentDepsT].

Default: Callable[[RunContext[AgentDepsT], Sequence[str], Sequence['ToolDefinition']], Sequence[str] | Awaitable[Sequence[str]]]

RawOutput

Type alias for raw output data (text or tool args).

Type: TypeAlias Default: str | dict[str, Any]

CAPABILITY_TYPES

Registry of all capability types that have a serialization name, mapping name to class.

Type: dict[str, type[AbstractCapability[Any]]] Default: {name: cls for cls in (NativeTool, ImageGeneration, IncludeToolReturnSchemas, Instrumentation, MCP, PrefixTools, PrepareTools, ProcessHistory, ReinjectSystemPrompt, SetToolMetadata, Thinking, ToolSearch, Toolset, WebFetch, WebSearch) if (name := (cls.get_serialization_name())) is not None}

WrapOutputValidateHandler

Handler type for wrap_output_validate.

Type: TypeAlias Default: Callable[[RawOutput], Awaitable[Any]]

WrapOutputProcessHandler

Handler type for wrap_output_process.

Type: TypeAlias Default: Callable[[Any], Awaitable[Any]]

CapabilityPosition

Position tier for a capability in the middleware chain.

  • 'outermost': in the outermost tier, before all non-outermost capabilities. Multiple capabilities can declare 'outermost'; original list order breaks ties within the tier, and wraps/wrapped_by edges refine order further.
  • 'innermost': in the innermost tier, after all non-innermost capabilities. Same tie-breaking rules apply.

Default: Literal['outermost', 'innermost']

ToolSearchStrategy

Strategy value accepted by ToolSearch.strategy.

  • 'keywords': force the local keyword-overlap algorithm regardless of provider.
  • 'bm25' / 'regex': force a specific provider-native strategy (Anthropic). The request fails on providers that can’t honor the choice.
  • Callable (ctx, queries, tools) -> names: custom search function. Used locally, and also by the native “client-executed” surface on providers that support it (Anthropic custom tool-reference blocks, OpenAI ToolSearchToolParam(execution='client')).

None is not part of the union — it’s accepted as the default on the ToolSearch.strategy field and means “let Pydantic AI pick”; see that field’s docstring for details.

Default: Union[ToolSearchFunc[AgentDepsT], ToolSearchLocalStrategy, ToolSearchNativeStrategy]

CapabilityRef

Reference to a capability — either a type (matches all instances of that type) or a specific instance (matches by identity).

Type: TypeAlias Default: 'type[AbstractCapability[Any]] | AbstractCapability[Any]'