pydantic_ai.ui

MessagesBuilder

Helper class to build Pydantic AI messages from request/response parts.

Methods

add
def add(part: ModelRequestPart | ModelResponsePart) -> None

Add a new part, creating a new request or response message if necessary.

Returns

None
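The grouping behavior can be sketched with a minimal stand-in (illustrative only; the real ModelRequestPart/ModelResponsePart types in pydantic_ai carry much more information, and the class names here are hypothetical):

```python
from dataclasses import dataclass, field

# Stand-in part types for illustration only.
@dataclass
class RequestPart:
    content: str

@dataclass
class ResponsePart:
    content: str

@dataclass
class MessagesBuilderSketch:
    """Groups consecutive parts of the same kind into one message."""
    messages: list = field(default_factory=list)

    def add(self, part) -> None:
        kind = "request" if isinstance(part, RequestPart) else "response"
        # Start a new message when the kind changes (or on the first part).
        if not self.messages or self.messages[-1][0] != kind:
            self.messages.append((kind, []))
        self.messages[-1][1].append(part)

builder = MessagesBuilderSketch()
builder.add(RequestPart("hi"))
builder.add(ResponsePart("hello"))
builder.add(ResponsePart("!"))
# Two messages: one request with 1 part, one response with 2 parts.
print([(kind, len(parts)) for kind, parts in builder.messages])
```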

StateHandler

Bases: Protocol

Protocol for state handlers in agent runs. Requires the class to be a dataclass with a state field.

Attributes

state

Get the current state of the agent run.

Type: Any
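A conforming dependencies class is simply a dataclass with a `state` field. The sketch below uses a locally defined stand-in protocol mirroring the description above (it does not import pydantic_ai):

```python
from dataclasses import dataclass, field
from typing import Any, Protocol, runtime_checkable

# Local stand-in mirroring the StateHandler protocol described above.
@runtime_checkable
class StateHandlerSketch(Protocol):
    state: Any

@dataclass
class MyDeps:
    # The protocol requires a dataclass with a `state` field.
    state: dict = field(default_factory=dict)

# Structural check: MyDeps satisfies the protocol without inheriting from it.
print(isinstance(MyDeps(), StateHandlerSketch))
```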

UIEventStream

Bases: ABC, Generic[RunInputT, EventT, AgentDepsT, OutputDataT]

Base class for UI event stream transformers.

This class is responsible for transforming Pydantic AI events into protocol-specific events.

Attributes

accept

The Accept header value of the request, used to determine how to encode the protocol-specific events for the streaming response.

Type: str | None Default: None

message_id

The message ID to use for the next event.

Type: str Default: field(default_factory=(lambda: str(uuid4())))

response_headers

Response headers to return to the frontend.

Type: Mapping[str, str] | None

content_type

Get the content type for the event stream, compatible with the Accept header value.

By default, this returns the Server-Sent Events content type (text/event-stream). If a subclass supports other types as well, it should consider self.accept in encode_event() and return the resulting content type.

Type: str

Methods

new_message_id
def new_message_id() -> str

Generate and store a new message ID.

Returns

str

encode_event

@abstractmethod

def encode_event(event: EventT) -> str

Encode a protocol-specific event as a string.

Returns

str
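For the default `text/event-stream` content type, an `encode_event` implementation typically wraps each event in a Server-Sent Events frame. A minimal sketch, assuming a hypothetical dict-based event type:

```python
import json

# Illustrative only: a concrete encode_event for a hypothetical dict-based
# event type, emitting Server-Sent Events frames ("data: ...\n\n").
def encode_event(event: dict) -> str:
    return f"data: {json.dumps(event)}\n\n"

print(encode_event({"type": "text-delta", "delta": "Hi"}))
```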

encode_stream

async def encode_stream(stream: AsyncIterator[EventT]) -> AsyncIterator[str]

Encode a stream of protocol-specific events as strings according to the Accept header value.

Returns

AsyncIterator[str]

streaming_response
def streaming_response(stream: AsyncIterator[EventT]) -> StreamingResponse

Generate a streaming response from a stream of protocol-specific events.

Returns

StreamingResponse

transform_stream

async def transform_stream(
    stream: AsyncIterator[NativeEvent],
    on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]

Transform a stream of Pydantic AI events into protocol-specific events.

This method dispatches to specific hooks and handle_* methods that subclasses can override.

Returns

AsyncIterator[EventT]

Parameters

stream : AsyncIterator[NativeEvent]

The stream of Pydantic AI events to transform.

on_complete : OnCompleteFunc[EventT] | None Default: None

Optional callback function called when the agent run completes successfully. The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.

handle_event

async def handle_event(event: NativeEvent) -> AsyncIterator[EventT]

Transform a Pydantic AI event into one or more protocol-specific events.

This method dispatches to specific handle_* methods based on event type.

Subclasses are encouraged to override the individual handle_* methods rather than this one. If you need specific behavior for all events, make sure you call the super method.

Returns

AsyncIterator[EventT]

handle_part_start

async def handle_part_start(event: PartStartEvent) -> AsyncIterator[EventT]

Handle a PartStartEvent.

This method dispatches to specific handle_* methods based on part type.

Subclasses are encouraged to override the individual handle_* methods rather than this one. If you need specific behavior for all part start events, make sure you call the super method.

Returns

AsyncIterator[EventT]

Parameters

event : PartStartEvent

The part start event.

handle_part_delta

async def handle_part_delta(event: PartDeltaEvent) -> AsyncIterator[EventT]

Handle a PartDeltaEvent.

This method dispatches to specific handle_*_delta methods based on part delta type.

Subclasses are encouraged to override the individual handle_*_delta methods rather than this one. If you need specific behavior for all part delta events, make sure you call the super method.

Returns

AsyncIterator[EventT]

Parameters

event : PartDeltaEvent

The part delta event.

handle_part_end

async def handle_part_end(event: PartEndEvent) -> AsyncIterator[EventT]

Handle a PartEndEvent.

This method dispatches to specific handle_*_end methods based on part type.

Subclasses are encouraged to override the individual handle_*_end methods rather than this one. If you need specific behavior for all part end events, make sure you call the super method.

Returns

AsyncIterator[EventT]

Parameters

event : PartEndEvent

The part end event.

before_stream

async def before_stream() -> AsyncIterator[EventT]

Yield events before agent streaming starts.

This hook is called before any agent events are processed. Override this to inject custom events at the start of the stream.

Returns

AsyncIterator[EventT]

after_stream

async def after_stream() -> AsyncIterator[EventT]

Yield events after agent streaming completes.

This hook is called after all agent events have been processed. Override this to inject custom events at the end of the stream.

Returns

AsyncIterator[EventT]

on_error

async def on_error(error: Exception) -> AsyncIterator[EventT]

Handle errors that occur during streaming.

Returns

AsyncIterator[EventT]

Parameters

error : Exception

The error that occurred during streaming.

before_request

async def before_request() -> AsyncIterator[EventT]

Yield events before a model request is processed.

Override this to inject custom events at the start of the request.

Returns

AsyncIterator[EventT]

after_request

async def after_request() -> AsyncIterator[EventT]

Yield events after a model request is processed.

Override this to inject custom events at the end of the request.

Returns

AsyncIterator[EventT]

before_response

async def before_response() -> AsyncIterator[EventT]

Yield events before a model response is processed.

Override this to inject custom events at the start of the response.

Returns

AsyncIterator[EventT]

after_response

async def after_response() -> AsyncIterator[EventT]

Yield events after a model response is processed.

Override this to inject custom events at the end of the response.

Returns

AsyncIterator[EventT]

handle_text_start

async def handle_text_start(
    part: TextPart,
    follows_text: bool = False,
) -> AsyncIterator[EventT]

Handle the start of a TextPart.

Returns

AsyncIterator[EventT]

Parameters

part : TextPart

The text part.

follows_text : bool Default: False

Whether the part is directly preceded by another text part. In this case, you may want to yield a “text-delta” event instead of a “text-start” event.

handle_text_delta

async def handle_text_delta(delta: TextPartDelta) -> AsyncIterator[EventT]

Handle a TextPartDelta.

Returns

AsyncIterator[EventT]

Parameters

delta : TextPartDelta

The text part delta.

handle_text_end

async def handle_text_end(
    part: TextPart,
    followed_by_text: bool = False,
) -> AsyncIterator[EventT]

Handle the end of a TextPart.

Returns

AsyncIterator[EventT]

Parameters

part : TextPart

The text part.

followed_by_text : bool Default: False

Whether the part is directly followed by another text part. In this case, you may not want to yield a “text-end” event yet.

handle_thinking_start

async def handle_thinking_start(
    part: ThinkingPart,
    follows_thinking: bool = False,
) -> AsyncIterator[EventT]

Handle the start of a ThinkingPart.

Returns

AsyncIterator[EventT]

Parameters

part : ThinkingPart

The thinking part.

follows_thinking : bool Default: False

Whether the part is directly preceded by another thinking part. In this case, you may want to yield a “thinking-delta” event instead of a “thinking-start” event.

handle_thinking_delta

async def handle_thinking_delta(delta: ThinkingPartDelta) -> AsyncIterator[EventT]

Handle a ThinkingPartDelta.

Returns

AsyncIterator[EventT]

Parameters

delta : ThinkingPartDelta

The thinking part delta.

handle_thinking_end

async def handle_thinking_end(
    part: ThinkingPart,
    followed_by_thinking: bool = False,
) -> AsyncIterator[EventT]

Handle the end of a ThinkingPart.

Returns

AsyncIterator[EventT]

Parameters

part : ThinkingPart

The thinking part.

followed_by_thinking : bool Default: False

Whether the part is directly followed by another thinking part. In this case, you may not want to yield a “thinking-end” event yet.

handle_tool_call_start

async def handle_tool_call_start(part: ToolCallPart) -> AsyncIterator[EventT]

Handle the start of a ToolCallPart.

Returns

AsyncIterator[EventT]

Parameters

part : ToolCallPart

The tool call part.

handle_tool_call_delta

async def handle_tool_call_delta(delta: ToolCallPartDelta) -> AsyncIterator[EventT]

Handle a ToolCallPartDelta.

Returns

AsyncIterator[EventT]

Parameters

delta : ToolCallPartDelta

The tool call part delta.

handle_tool_call_end

async def handle_tool_call_end(part: ToolCallPart) -> AsyncIterator[EventT]

Handle the end of a ToolCallPart.

Returns

AsyncIterator[EventT]

Parameters

part : ToolCallPart

The tool call part.

handle_builtin_tool_call_start

async def handle_builtin_tool_call_start(part: BuiltinToolCallPart) -> AsyncIterator[EventT]

Handle the start of a BuiltinToolCallPart.

Returns

AsyncIterator[EventT]

Parameters

part : BuiltinToolCallPart

The builtin tool call part.

handle_builtin_tool_call_end

async def handle_builtin_tool_call_end(part: BuiltinToolCallPart) -> AsyncIterator[EventT]

Handle the end of a BuiltinToolCallPart.

Returns

AsyncIterator[EventT]

Parameters

part : BuiltinToolCallPart

The builtin tool call part.

handle_builtin_tool_return

async def handle_builtin_tool_return(part: BuiltinToolReturnPart) -> AsyncIterator[EventT]

Handle a BuiltinToolReturnPart.

Returns

AsyncIterator[EventT]

Parameters

part : BuiltinToolReturnPart

The builtin tool return part.

handle_file

async def handle_file(part: FilePart) -> AsyncIterator[EventT]

Handle a FilePart.

Returns

AsyncIterator[EventT]

Parameters

part : FilePart

The file part.

handle_final_result

async def handle_final_result(event: FinalResultEvent) -> AsyncIterator[EventT]

Handle a FinalResultEvent.

Returns

AsyncIterator[EventT]

Parameters

event : FinalResultEvent

The final result event.

handle_function_tool_call

async def handle_function_tool_call(event: FunctionToolCallEvent) -> AsyncIterator[EventT]

Handle a FunctionToolCallEvent.

Returns

AsyncIterator[EventT]

Parameters

event : FunctionToolCallEvent

The function tool call event.

handle_function_tool_result

async def handle_function_tool_result(event: FunctionToolResultEvent) -> AsyncIterator[EventT]

Handle a FunctionToolResultEvent.

Returns

AsyncIterator[EventT]

Parameters

event : FunctionToolResultEvent

The function tool result event.

handle_run_result

async def handle_run_result(event: AgentRunResultEvent) -> AsyncIterator[EventT]

Handle an AgentRunResultEvent.

Returns

AsyncIterator[EventT]

Parameters

event : AgentRunResultEvent

The agent run result event.

StateDeps

Bases: Generic[StateT]

Dependency type that holds state.

This class is used to manage the state of an agent run. It allows setting the state of the agent run with a specific type of state model, which must be a subclass of BaseModel.

The state is set using the state setter by the Adapter when the run starts.

Implements the StateHandler protocol.
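The shape of the class can be sketched with a stand-in (illustrative only; the real StateDeps requires StateT to be a BaseModel subclass, and `DocumentState` below is a hypothetical state model):

```python
from dataclasses import dataclass, field
from typing import Generic, TypeVar

StateT = TypeVar("StateT")

# Stand-in for illustration; the real StateDeps constrains StateT to a
# pydantic BaseModel subclass and the state is set by the adapter at run start.
@dataclass
class StateDepsSketch(Generic[StateT]):
    state: StateT

@dataclass
class DocumentState:
    # Hypothetical state model; in real code this would be a BaseModel.
    text: str = ""

deps = StateDepsSketch(state=DocumentState())
deps.state.text = "draft"
print(deps.state.text)
```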

UIAdapter

Bases: ABC, Generic[RunInputT, MessageT, EventT, AgentDepsT, OutputDataT]

Base class for UI adapters.

This class is responsible for transforming agent run input received from the frontend into arguments for Agent.run_stream_events(), running the agent, and then transforming Pydantic AI events into protocol-specific events.

The event stream transformation is handled by a protocol-specific UIEventStream subclass.

Attributes

agent

The Pydantic AI agent to run.

Type: AbstractAgent[AgentDepsT, OutputDataT]

run_input

The protocol-specific run input object.

Type: RunInputT

accept

The Accept header value of the request, used to determine how to encode the protocol-specific events for the streaming response.

Type: str | None Default: None

messages

Pydantic AI messages from the protocol-specific run input.

Type: list[ModelMessage]

toolset

Toolset representing frontend tools from the protocol-specific run input.

Type: AbstractToolset[AgentDepsT] | None

state

Frontend state from the protocol-specific run input.

Type: dict[str, Any] | None

deferred_tool_results

Deferred tool results extracted from the request, used for tool approval workflows.

Type: DeferredToolResults | None

Methods

from_request

@classmethod

async def from_request(
    cls,
    request: Request,
    agent: AbstractAgent[AgentDepsT, OutputDataT],
    **kwargs: Any,
) -> Self

Create an adapter from a request.

Extra keyword arguments are forwarded to the adapter constructor, allowing subclasses to accept additional adapter-specific parameters.

Returns

Self

build_run_input

@abstractmethod

@classmethod

def build_run_input(cls, body: bytes) -> RunInputT

Build a protocol-specific run input object from the request body.

Returns

RunInputT

load_messages

@abstractmethod

@classmethod

def load_messages(cls, messages: Sequence[MessageT]) -> list[ModelMessage]

Transform protocol-specific messages into Pydantic AI messages.

Returns

list[ModelMessage]

dump_messages

@classmethod

def dump_messages(cls, messages: Sequence[ModelMessage]) -> list[MessageT]

Transform Pydantic AI messages into protocol-specific messages.

Returns

list[MessageT]

build_event_stream

@abstractmethod

def build_event_stream() -> UIEventStream[RunInputT, EventT, AgentDepsT, OutputDataT]

Build a protocol-specific event stream transformer.

Returns

UIEventStream[RunInputT, EventT, AgentDepsT, OutputDataT]

transform_stream
def transform_stream(
    stream: AsyncIterator[NativeEvent],
    on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]

Transform a stream of Pydantic AI events into protocol-specific events.

Returns

AsyncIterator[EventT]

Parameters

stream : AsyncIterator[NativeEvent]

The stream of Pydantic AI events to transform.

on_complete : OnCompleteFunc[EventT] | None Default: None

Optional callback function called when the agent run completes successfully. The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.

encode_stream
def encode_stream(stream: AsyncIterator[EventT]) -> AsyncIterator[str]

Encode a stream of protocol-specific events as strings according to the Accept header value.

Returns

AsyncIterator[str]

Parameters

stream : AsyncIterator[EventT]

The stream of protocol-specific events to encode.

streaming_response
def streaming_response(stream: AsyncIterator[EventT]) -> StreamingResponse

Generate a streaming response from a stream of protocol-specific events.

Returns

StreamingResponse

Parameters

stream : AsyncIterator[EventT]

The stream of protocol-specific events to encode.

run_stream_native
def run_stream_native(
    output_type: OutputSpec[Any] | None = None,
    message_history: Sequence[ModelMessage] | None = None,
    deferred_tool_results: DeferredToolResults | None = None,
    model: Model | KnownModelName | str | None = None,
    instructions: _instructions.AgentInstructions[AgentDepsT] = None,
    deps: AgentDepsT = None,
    model_settings: ModelSettings | None = None,
    usage_limits: UsageLimits | None = None,
    usage: RunUsage | None = None,
    metadata: AgentMetadata[AgentDepsT] | None = None,
    infer_name: bool = True,
    toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
    builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
) -> AsyncIterator[NativeEvent]

Run the agent with the protocol-specific run input and stream Pydantic AI events.

Returns

AsyncIterator[NativeEvent]

Parameters

output_type : OutputSpec[Any] | None Default: None

Custom output type to use for this run. output_type may only be used if the agent has no output validators, since output validators would expect an argument that matches the agent's output type.

message_history : Sequence[ModelMessage] | None Default: None

History of the conversation so far.

deferred_tool_results : DeferredToolResults | None Default: None

Optional results for deferred tool calls in the message history.

model : Model | KnownModelName | str | None Default: None

Optional model to use for this run, required if model was not set when creating the agent.

instructions : _instructions.AgentInstructions[AgentDepsT] Default: None

Optional additional instructions to use for this run.

deps : AgentDepsT Default: None

Optional dependencies to use for this run.

model_settings : ModelSettings | None Default: None

Optional settings to use for this model’s request.

usage_limits : UsageLimits | None Default: None

Optional limits on model request count or token usage.

usage : RunUsage | None Default: None

Optional usage to start with, useful for resuming a conversation or agents used in tools.

metadata : AgentMetadata[AgentDepsT] | None Default: None

Optional metadata to attach to this run. Accepts a dictionary or a callable taking RunContext; merged with the agent’s configured metadata.

infer_name : bool Default: True

Whether to try to infer the agent name from the call frame if it’s not set.

toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None

Optional additional toolsets for this run.

builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None

Optional additional builtin tools to use for this run.

run_stream
def run_stream(
    output_type: OutputSpec[Any] | None = None,
    message_history: Sequence[ModelMessage] | None = None,
    deferred_tool_results: DeferredToolResults | None = None,
    model: Model | KnownModelName | str | None = None,
    instructions: _instructions.AgentInstructions[AgentDepsT] = None,
    deps: AgentDepsT = None,
    model_settings: ModelSettings | None = None,
    usage_limits: UsageLimits | None = None,
    usage: RunUsage | None = None,
    metadata: AgentMetadata[AgentDepsT] | None = None,
    infer_name: bool = True,
    toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
    builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
    on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]

Run the agent with the protocol-specific run input and stream protocol-specific events.

Returns

AsyncIterator[EventT]

Parameters

output_type : OutputSpec[Any] | None Default: None

Custom output type to use for this run. output_type may only be used if the agent has no output validators, since output validators would expect an argument that matches the agent's output type.

message_history : Sequence[ModelMessage] | None Default: None

History of the conversation so far.

deferred_tool_results : DeferredToolResults | None Default: None

Optional results for deferred tool calls in the message history.

model : Model | KnownModelName | str | None Default: None

Optional model to use for this run, required if model was not set when creating the agent.

instructions : _instructions.AgentInstructions[AgentDepsT] Default: None

Optional additional instructions to use for this run.

deps : AgentDepsT Default: None

Optional dependencies to use for this run.

model_settings : ModelSettings | None Default: None

Optional settings to use for this model’s request.

usage_limits : UsageLimits | None Default: None

Optional limits on model request count or token usage.

usage : RunUsage | None Default: None

Optional usage to start with, useful for resuming a conversation or agents used in tools.

metadata : AgentMetadata[AgentDepsT] | None Default: None

Optional metadata to attach to this run. Accepts a dictionary or a callable taking RunContext; merged with the agent’s configured metadata.

infer_name : bool Default: True

Whether to try to infer the agent name from the call frame if it’s not set.

toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None

Optional additional toolsets for this run.

builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None

Optional additional builtin tools to use for this run.

on_complete : OnCompleteFunc[EventT] | None Default: None

Optional callback function called when the agent run completes successfully. The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.

dispatch_request

@classmethod

async def dispatch_request(
    cls,
    request: Request,
    agent: AbstractAgent[DispatchDepsT, DispatchOutputDataT],
    message_history: Sequence[ModelMessage] | None = None,
    deferred_tool_results: DeferredToolResults | None = None,
    model: Model | KnownModelName | str | None = None,
    instructions: _instructions.AgentInstructions[DispatchDepsT] = None,
    deps: DispatchDepsT = None,
    output_type: OutputSpec[Any] | None = None,
    model_settings: ModelSettings | None = None,
    usage_limits: UsageLimits | None = None,
    usage: RunUsage | None = None,
    metadata: AgentMetadata[DispatchDepsT] | None = None,
    infer_name: bool = True,
    toolsets: Sequence[AbstractToolset[DispatchDepsT]] | None = None,
    builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
    on_complete: OnCompleteFunc[EventT] | None = None,
    **kwargs: Any,
) -> Response

Handle a protocol-specific HTTP request by running the agent and returning a streaming response of protocol-specific events.

Extra keyword arguments are forwarded to from_request, allowing subclasses to accept additional adapter-specific parameters.

Returns

Response — A streaming Starlette response with protocol-specific events encoded per the request’s Accept header value.

Parameters

request : Request

The incoming Starlette/FastAPI request.

agent : AbstractAgent[DispatchDepsT, DispatchOutputDataT]

The agent to run.

output_type : OutputSpec[Any] | None Default: None

Custom output type to use for this run. output_type may only be used if the agent has no output validators, since output validators would expect an argument that matches the agent's output type.

message_history : Sequence[ModelMessage] | None Default: None

History of the conversation so far.

deferred_tool_results : DeferredToolResults | None Default: None

Optional results for deferred tool calls in the message history.

model : Model | KnownModelName | str | None Default: None

Optional model to use for this run, required if model was not set when creating the agent.

instructions : _instructions.AgentInstructions[DispatchDepsT] Default: None

Optional additional instructions to use for this run.

deps : DispatchDepsT Default: None

Optional dependencies to use for this run.

model_settings : ModelSettings | None Default: None

Optional settings to use for this model’s request.

usage_limits : UsageLimits | None Default: None

Optional limits on model request count or token usage.

usage : RunUsage | None Default: None

Optional usage to start with, useful for resuming a conversation or agents used in tools.

metadata : AgentMetadata[DispatchDepsT] | None Default: None

Optional metadata to attach to this run. Accepts a dictionary or a callable taking RunContext; merged with the agent’s configured metadata.

infer_name : bool Default: True

Whether to try to infer the agent name from the call frame if it’s not set.

toolsets : Sequence[AbstractToolset[DispatchDepsT]] | None Default: None

Optional additional toolsets for this run.

builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None

Optional additional builtin tools to use for this run.

on_complete : OnCompleteFunc[EventT] | None Default: None

Optional callback function called when the agent run completes successfully. The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.

**kwargs : Any Default: {}

Additional keyword arguments forwarded to from_request.

SSE_CONTENT_TYPE

Content type header value for Server-Sent Events (SSE).

Default: 'text/event-stream'

NativeEvent

Type alias for the native event type, which is either an AgentStreamEvent or an AgentRunResultEvent.

Type: TypeAlias Default: AgentStreamEvent | AgentRunResultEvent[Any]

OnCompleteFunc

Callback function type that receives the AgentRunResult of the completed run. Can be sync, async, or an async generator of protocol-specific events.

Type: TypeAlias Default: Callable[[AgentRunResult[Any]], None] | Callable[[AgentRunResult[Any]], Awaitable[None]] | Callable[[AgentRunResult[Any]], AsyncIterator[EventT]]