pydantic_ai.ui
Helper class to build Pydantic AI messages from request/response parts.
def add(part: ModelRequestPart | ModelResponsePart) -> None
Add a new part, creating a new request or response message if necessary.
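The accumulation behavior described above can be sketched with plain dicts standing in for Pydantic AI message and part types (the real builder operates on `ModelRequestPart`/`ModelResponsePart`; `build_messages` and its dict shape are illustrative only):

```python
# Hypothetical stand-in for the message-builder behavior: parts of the same
# kind are appended to the current message, and a new message is started
# whenever the part kind switches (request -> response or vice versa).
def build_messages(parts: list[tuple[str, str]]) -> list[dict]:
    messages: list[dict] = []
    for kind, content in parts:
        if not messages or messages[-1]["kind"] != kind:
            messages.append({"kind": kind, "parts": []})  # start a new message
        messages[-1]["parts"].append(content)
    return messages

msgs = build_messages([("request", "hi"), ("response", "hello"), ("response", "tool call")])
print(msgs)  # two messages: one request, one response with two parts
```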
Bases: Protocol
Protocol for state handlers in agent runs. Requires the class to be a dataclass with a state field.
Get the current state of the agent run.
Type: Any
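Because this is a structural protocol, any dataclass exposing a `state` field satisfies it. A minimal sketch (the `StateHandler` stand-in below mirrors the protocol described above; the real one lives in `pydantic_ai.ui`):

```python
from dataclasses import dataclass, field
from typing import Any, Protocol, runtime_checkable

# Stand-in mirroring the StateHandler protocol: requires a `state` attribute.
@runtime_checkable
class StateHandler(Protocol):
    state: Any

# A user-defined deps dataclass with a state field satisfies the protocol
# structurally, without inheriting from it.
@dataclass
class MyDeps:
    state: dict[str, Any] = field(default_factory=dict)

deps = MyDeps()
print(isinstance(deps, StateHandler))  # True
```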
Bases: ABC, Generic[RunInputT, EventT, AgentDepsT, OutputDataT]
Base class for UI event stream transformers.
This class is responsible for transforming Pydantic AI events into protocol-specific events.
The Accept header value of the request, used to determine how to encode the protocol-specific events for the streaming response.
Type: str | None Default: None
The message ID to use for the next event.
Type: str Default: field(default_factory=(lambda: str(uuid4())))
Response headers to return to the frontend.
Type: Mapping[str, str] | None
Get the content type for the event stream, compatible with the Accept header value.
By default, this returns the Server-Sent Events content type (text/event-stream).
If a subclass supports other content types as well, it should take self.accept into account in encode_event() and return the matching content type here.

Type: str
def new_message_id() -> str
Generate and store a new message ID.
@abstractmethod
def encode_event(event: EventT) -> str
Encode a protocol-specific event as a string.
@async
def encode_stream(stream: AsyncIterator[EventT]) -> AsyncIterator[str]
Encode a stream of protocol-specific events as strings according to the Accept header value.
def streaming_response(stream: AsyncIterator[EventT]) -> StreamingResponse
Generate a streaming response from a stream of protocol-specific events.
StreamingResponse
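For the default Server-Sent Events content type, encoding amounts to framing each serialized event as a `data:` line terminated by a blank line. A sketch of that framing, without the Starlette `StreamingResponse` wrapper (`encode_stream` here is a simplified stand-in for the method described above):

```python
import asyncio
import json
from typing import AsyncIterator

# Simplified SSE framing: each protocol event becomes one `data: ...` frame.
async def encode_stream(stream: AsyncIterator[dict]) -> AsyncIterator[str]:
    async for event in stream:
        yield f"data: {json.dumps(event)}\n\n"

async def main() -> list[str]:
    async def events() -> AsyncIterator[dict]:
        yield {"type": "text-start", "id": "msg_1"}
        yield {"type": "text-delta", "delta": "Hello"}
    return [chunk async for chunk in encode_stream(events())]

chunks = asyncio.run(main())
print(chunks[0])
```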
@async
def transform_stream(
stream: AsyncIterator[NativeEvent],
on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]
Transform a stream of Pydantic AI events into protocol-specific events.
This method dispatches to specific hooks and handle_* methods that subclasses can override:
before_stream(), after_stream(), on_error(), before_request(), after_request(), before_response(), after_response(), handle_event()
AsyncIterator[EventT]
stream : AsyncIterator[NativeEvent]
The stream of Pydantic AI events to transform.
on_complete : OnCompleteFunc[EventT] | None Default: None
Optional callback function called when the agent run completes successfully.
The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.
@async
def handle_event(event: NativeEvent) -> AsyncIterator[EventT]
Transform a Pydantic AI event into one or more protocol-specific events.
This method dispatches to specific handle_* methods based on event type:
PartStartEvent -> handle_part_start()
PartDeltaEvent -> handle_part_delta()
PartEndEvent -> handle_part_end()
FinalResultEvent -> handle_final_result()
FunctionToolCallEvent -> handle_function_tool_call()
FunctionToolResultEvent -> handle_function_tool_result()
AgentRunResultEvent -> handle_run_result()
Subclasses are encouraged to override the individual handle_* methods rather than this one.
If you need specific behavior for all events, make sure you call the super method.
AsyncIterator[EventT]
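The recommended override pattern (add behavior for all events, then delegate to the super method) looks like this in an async generator. The class names and event strings below are stand-ins, not the real pydantic_ai types:

```python
import asyncio
from typing import AsyncIterator

class BaseStream:
    # Stand-in for the base handle_event dispatcher.
    async def handle_event(self, event: str) -> AsyncIterator[str]:
        yield f"base:{event}"

class MyStream(BaseStream):
    # Override that runs for every event but still calls the super method,
    # as the docs recommend.
    async def handle_event(self, event: str) -> AsyncIterator[str]:
        yield f"log:{event}"
        async for e in super().handle_event(event):
            yield e

async def main() -> list[str]:
    return [e async for e in MyStream().handle_event("part_start")]

print(asyncio.run(main()))  # ['log:part_start', 'base:part_start']
```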
@async
def handle_part_start(event: PartStartEvent) -> AsyncIterator[EventT]
Handle a PartStartEvent.
This method dispatches to specific handle_* methods based on part type:
TextPart -> handle_text_start()
ThinkingPart -> handle_thinking_start()
ToolCallPart -> handle_tool_call_start()
BuiltinToolCallPart -> handle_builtin_tool_call_start()
BuiltinToolReturnPart -> handle_builtin_tool_return()
FilePart -> handle_file()
Subclasses are encouraged to override the individual handle_* methods rather than this one.
If you need specific behavior for all part start events, make sure you call the super method.
AsyncIterator[EventT]
event : PartStartEvent
The part start event.
@async
def handle_part_delta(event: PartDeltaEvent) -> AsyncIterator[EventT]
Handle a PartDeltaEvent.
This method dispatches to specific handle_*_delta methods based on part delta type:
TextPartDelta -> handle_text_delta()
ThinkingPartDelta -> handle_thinking_delta()
ToolCallPartDelta -> handle_tool_call_delta()
Subclasses are encouraged to override the individual handle_*_delta methods rather than this one.
If you need specific behavior for all part delta events, make sure you call the super method.
AsyncIterator[EventT]
event : PartDeltaEvent
The PartDeltaEvent.
@async
def handle_part_end(event: PartEndEvent) -> AsyncIterator[EventT]
Handle a PartEndEvent.
This method dispatches to specific handle_*_end methods based on part type:
TextPart -> handle_text_end()
ThinkingPart -> handle_thinking_end()
ToolCallPart -> handle_tool_call_end()
BuiltinToolCallPart -> handle_builtin_tool_call_end()
Subclasses are encouraged to override the individual handle_*_end methods rather than this one.
If you need specific behavior for all part end events, make sure you call the super method.
AsyncIterator[EventT]
event : PartEndEvent
The part end event.
@async
def before_stream() -> AsyncIterator[EventT]
Yield events before agent streaming starts.
This hook is called before any agent events are processed. Override this to inject custom events at the start of the stream.
AsyncIterator[EventT]
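A sketch of overriding this hook to inject a custom event before any agent events (class names and event dicts are stand-ins; the real hook yields protocol-specific EventT values):

```python
import asyncio
from typing import AsyncIterator

class BaseStream:
    # Default hook: an async generator that yields nothing.
    async def before_stream(self) -> AsyncIterator[dict]:
        return
        yield  # unreachable; makes this an async generator

class MyStream(BaseStream):
    # Override to emit a custom event at the very start of the stream.
    async def before_stream(self) -> AsyncIterator[dict]:
        yield {"type": "stream-start"}

async def main() -> list[dict]:
    return [e async for e in MyStream().before_stream()]

print(asyncio.run(main()))  # [{'type': 'stream-start'}]
```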
@async
def after_stream() -> AsyncIterator[EventT]
Yield events after agent streaming completes.
This hook is called after all agent events have been processed. Override this to inject custom events at the end of the stream.
AsyncIterator[EventT]
@async
def on_error(error: Exception) -> AsyncIterator[EventT]
Handle errors that occur during streaming.
AsyncIterator[EventT]
error : Exception
The error that occurred during streaming.
@async
def before_request() -> AsyncIterator[EventT]
Yield events before a model request is processed.
Override this to inject custom events at the start of the request.
AsyncIterator[EventT]
@async
def after_request() -> AsyncIterator[EventT]
Yield events after a model request is processed.
Override this to inject custom events at the end of the request.
AsyncIterator[EventT]
@async
def before_response() -> AsyncIterator[EventT]
Yield events before a model response is processed.
Override this to inject custom events at the start of the response.
AsyncIterator[EventT]
@async
def after_response() -> AsyncIterator[EventT]
Yield events after a model response is processed.
Override this to inject custom events at the end of the response.
AsyncIterator[EventT]
@async
def handle_text_start(
part: TextPart,
follows_text: bool = False,
) -> AsyncIterator[EventT]
Handle the start of a TextPart.
AsyncIterator[EventT]
part : TextPart
The text part.
follows_text : bool Default: False
Whether the part is directly preceded by another text part. In this case, you may want to yield a “text-delta” event instead of a “text-start” event.
@async
def handle_text_delta(delta: TextPartDelta) -> AsyncIterator[EventT]
Handle a TextPartDelta.
AsyncIterator[EventT]
delta : TextPartDelta
The text part delta.
@async
def handle_text_end(
part: TextPart,
followed_by_text: bool = False,
) -> AsyncIterator[EventT]
Handle the end of a TextPart.
AsyncIterator[EventT]
part : TextPart
The text part.
followed_by_text : bool Default: False
Whether the part is directly followed by another text part. In this case, you may not want to yield a “text-end” event yet.
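The `follows_text`/`followed_by_text` flags let an implementation coalesce adjacent text parts into one protocol-level text block. A sketch of that logic over plain strings (`text_events` and its event dicts are illustrative, not the real API):

```python
# Adjacent text parts merge into a single start/delta.../end sequence:
# "start" only for the first part, "end" only after the last.
def text_events(parts: list[str]) -> list[dict]:
    events: list[dict] = []
    for i, text in enumerate(parts):
        follows_text = i > 0
        followed_by_text = i < len(parts) - 1
        if not follows_text:
            events.append({"type": "text-start"})
        events.append({"type": "text-delta", "delta": text})
        if not followed_by_text:
            events.append({"type": "text-end"})
    return events

print(text_events(["Hello, ", "world"]))
```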
@async
def handle_thinking_start(
part: ThinkingPart,
follows_thinking: bool = False,
) -> AsyncIterator[EventT]
Handle the start of a ThinkingPart.
AsyncIterator[EventT]
part : ThinkingPart
The thinking part.
follows_thinking : bool Default: False
Whether the part is directly preceded by another thinking part. In this case, you may want to yield a “thinking-delta” event instead of a “thinking-start” event.
@async
def handle_thinking_delta(delta: ThinkingPartDelta) -> AsyncIterator[EventT]
Handle a ThinkingPartDelta.
AsyncIterator[EventT]
delta : ThinkingPartDelta
The thinking part delta.
@async
def handle_thinking_end(
part: ThinkingPart,
followed_by_thinking: bool = False,
) -> AsyncIterator[EventT]
Handle the end of a ThinkingPart.
AsyncIterator[EventT]
part : ThinkingPart
The thinking part.
followed_by_thinking : bool Default: False
Whether the part is directly followed by another thinking part. In this case, you may not want to yield a “thinking-end” event yet.
@async
def handle_tool_call_start(part: ToolCallPart) -> AsyncIterator[EventT]
Handle the start of a ToolCallPart.
AsyncIterator[EventT]
part : ToolCallPart
The tool call part.
@async
def handle_tool_call_delta(delta: ToolCallPartDelta) -> AsyncIterator[EventT]
Handle a ToolCallPartDelta.
AsyncIterator[EventT]
delta : ToolCallPartDelta
The tool call part delta.
@async
def handle_tool_call_end(part: ToolCallPart) -> AsyncIterator[EventT]
Handle the end of a ToolCallPart.
AsyncIterator[EventT]
part : ToolCallPart
The tool call part.
@async
def handle_builtin_tool_call_start(part: BuiltinToolCallPart) -> AsyncIterator[EventT]
Handle the start of a BuiltinToolCallPart.
AsyncIterator[EventT]
part : BuiltinToolCallPart
The builtin tool call part.
@async
def handle_builtin_tool_call_end(part: BuiltinToolCallPart) -> AsyncIterator[EventT]
Handle the end of a BuiltinToolCallPart.
AsyncIterator[EventT]
part : BuiltinToolCallPart
The builtin tool call part.
@async
def handle_builtin_tool_return(part: BuiltinToolReturnPart) -> AsyncIterator[EventT]
Handle a BuiltinToolReturnPart.
AsyncIterator[EventT]
part : BuiltinToolReturnPart
The builtin tool return part.
@async
def handle_file(part: FilePart) -> AsyncIterator[EventT]
Handle a FilePart.
AsyncIterator[EventT]
part : FilePart
The file part.
@async
def handle_final_result(event: FinalResultEvent) -> AsyncIterator[EventT]
Handle a FinalResultEvent.
AsyncIterator[EventT]
event : FinalResultEvent
The final result event.
@async
def handle_function_tool_call(event: FunctionToolCallEvent) -> AsyncIterator[EventT]
Handle a FunctionToolCallEvent.
AsyncIterator[EventT]
event : FunctionToolCallEvent
The function tool call event.
@async
def handle_function_tool_result(event: FunctionToolResultEvent) -> AsyncIterator[EventT]
Handle a FunctionToolResultEvent.
AsyncIterator[EventT]
event : FunctionToolResultEvent
The function tool result event.
@async
def handle_run_result(event: AgentRunResultEvent) -> AsyncIterator[EventT]
Handle an AgentRunResultEvent.
AsyncIterator[EventT]
event : AgentRunResultEvent
The agent run result event.
Bases: Generic[StateT]
Dependency type that holds state.
This class manages the state of an agent run. It allows the run's state to be set
with a specific type of state model, which must be a subclass of BaseModel.
The Adapter sets the state via the state setter when the run starts.
Implements the StateHandler protocol.
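A minimal sketch of this pattern: a generic deps object exposing a typed `state` attribute that the adapter replaces when a run starts. The `StateDeps` stand-in below uses a dataclass for the state (the real class requires a BaseModel subclass):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

StateT = TypeVar("StateT")

# Stand-in for StateDeps: holds a typed, settable `state` attribute.
@dataclass
class StateDeps(Generic[StateT]):
    state: StateT

# Hypothetical state model; the real library expects a pydantic BaseModel.
@dataclass
class ChatState:
    turn: int = 0

deps = StateDeps(ChatState())
deps.state.turn += 1  # tools can read and mutate the typed state
print(deps.state.turn)  # 1
```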
Bases: ABC, Generic[RunInputT, MessageT, EventT, AgentDepsT, OutputDataT]
Base class for UI adapters.
This class is responsible for transforming agent run input received from the frontend into arguments for Agent.run_stream_events(), running the agent, and then transforming Pydantic AI events into protocol-specific events.
The event stream transformation is handled by a protocol-specific UIEventStream subclass.
The Pydantic AI agent to run.
Type: AbstractAgent[AgentDepsT, OutputDataT]
The protocol-specific run input object.
Type: RunInputT
The Accept header value of the request, used to determine how to encode the protocol-specific events for the streaming response.
Type: str | None Default: None
Pydantic AI messages from the protocol-specific run input.
Type: list[ModelMessage]
Toolset representing frontend tools from the protocol-specific run input.
Type: AbstractToolset[AgentDepsT] | None
Frontend state from the protocol-specific run input.
Deferred tool results extracted from the request, used for tool approval workflows.
Type: DeferredToolResults | None
@async
@classmethod
def from_request(
cls,
request: Request,
agent: AbstractAgent[AgentDepsT, OutputDataT],
**kwargs: Any,
) -> Self
Create an adapter from a request.
Extra keyword arguments are forwarded to the adapter constructor, allowing subclasses to accept additional adapter-specific parameters.
@abstractmethod
@classmethod
def build_run_input(cls, body: bytes) -> RunInputT
Build a protocol-specific run input object from the request body.
RunInputT
@abstractmethod
@classmethod
def load_messages(cls, messages: Sequence[MessageT]) -> list[ModelMessage]
Transform protocol-specific messages into Pydantic AI messages.
@classmethod
def dump_messages(cls, messages: Sequence[ModelMessage]) -> list[MessageT]
Transform Pydantic AI messages into protocol-specific messages.
list[MessageT]
@abstractmethod
def build_event_stream() -> UIEventStream[RunInputT, EventT, AgentDepsT, OutputDataT]
Build a protocol-specific event stream transformer.
UIEventStream[RunInputT, EventT, AgentDepsT, OutputDataT]
def transform_stream(
stream: AsyncIterator[NativeEvent],
on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]
Transform a stream of Pydantic AI events into protocol-specific events.
AsyncIterator[EventT]
stream : AsyncIterator[NativeEvent]
The stream of Pydantic AI events to transform.
on_complete : OnCompleteFunc[EventT] | None Default: None
Optional callback function called when the agent run completes successfully.
The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.
def encode_stream(stream: AsyncIterator[EventT]) -> AsyncIterator[str]
Encode a stream of protocol-specific events as strings according to the Accept header value.
stream : AsyncIterator[EventT]
The stream of protocol-specific events to encode.
def streaming_response(stream: AsyncIterator[EventT]) -> StreamingResponse
Generate a streaming response from a stream of protocol-specific events.
StreamingResponse
stream : AsyncIterator[EventT]
The stream of protocol-specific events to encode.
def run_stream_native(
output_type: OutputSpec[Any] | None = None,
message_history: Sequence[ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: Model | KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: ModelSettings | None = None,
usage_limits: UsageLimits | None = None,
usage: RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
) -> AsyncIterator[NativeEvent]
Run the agent with the protocol-specific run input and stream Pydantic AI events.
AsyncIterator[NativeEvent]
output_type : OutputSpec[Any] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent's output type.
message_history : Sequence[ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : Model | KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : ModelSettings | None Default: None
Optional settings to use for this model’s request.
usage_limits : UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None
Optional additional builtin tools to use for this run.
def run_stream(
output_type: OutputSpec[Any] | None = None,
message_history: Sequence[ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: Model | KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: ModelSettings | None = None,
usage_limits: UsageLimits | None = None,
usage: RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
on_complete: OnCompleteFunc[EventT] | None = None,
) -> AsyncIterator[EventT]
Run the agent with the protocol-specific run input and stream protocol-specific events.
AsyncIterator[EventT]
output_type : OutputSpec[Any] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent's output type.
message_history : Sequence[ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : Model | KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : ModelSettings | None Default: None
Optional settings to use for this model’s request.
usage_limits : UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None
Optional additional builtin tools to use for this run.
on_complete : OnCompleteFunc[EventT] | None Default: None
Optional callback function called when the agent run completes successfully.
The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.
@async
@classmethod
def dispatch_request(
cls,
request: Request,
agent: AbstractAgent[DispatchDepsT, DispatchOutputDataT],
message_history: Sequence[ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: Model | KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[DispatchDepsT] = None,
deps: DispatchDepsT = None,
output_type: OutputSpec[Any] | None = None,
model_settings: ModelSettings | None = None,
usage_limits: UsageLimits | None = None,
usage: RunUsage | None = None,
metadata: AgentMetadata[DispatchDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[DispatchDepsT]] | None = None,
builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
on_complete: OnCompleteFunc[EventT] | None = None,
**kwargs: Any,
) -> Response
Handle a protocol-specific HTTP request by running the agent and returning a streaming response of protocol-specific events.
Extra keyword arguments are forwarded to from_request,
allowing subclasses to accept additional adapter-specific parameters.
Response — A streaming Starlette response with protocol-specific events encoded per the request’s Accept header value.
request : Request
The incoming Starlette/FastAPI request.
agent : AbstractAgent[DispatchDepsT, DispatchOutputDataT]
The agent to run.
output_type : OutputSpec[Any] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent's output type.
message_history : Sequence[ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : Model | KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[DispatchDepsT] Default: None
Optional additional instructions to use for this run.
deps : DispatchDepsT Default: None
Optional dependencies to use for this run.
model_settings : ModelSettings | None Default: None
Optional settings to use for this model’s request.
usage_limits : UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[DispatchDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[DispatchDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None
Optional additional builtin tools to use for this run.
on_complete : OnCompleteFunc[EventT] | None Default: None
Optional callback function called when the agent run completes successfully.
The callback receives the completed AgentRunResult and can optionally yield additional protocol-specific events.
**kwargs : Any Default: {}
Additional keyword arguments forwarded to from_request.
Content type header value for Server-Sent Events (SSE).
Default: 'text/event-stream'
Type alias for the native event type, which is either an AgentStreamEvent or an AgentRunResultEvent.
Type: TypeAlias Default: AgentStreamEvent | AgentRunResultEvent[Any]
Callback function type that receives the AgentRunResult of the completed run. Can be sync, async, or an async generator of protocol-specific events.
Type: TypeAlias Default: Callable[[AgentRunResult[Any]], None] | Callable[[AgentRunResult[Any]], Awaitable[None]] | Callable[[AgentRunResult[Any]], AsyncIterator[EventT]]
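The three callback shapes in that union look like this in practice (a plain string stands in for AgentRunResult, and the event dict yielded by the generator form is a hypothetical protocol event):

```python
import asyncio
from typing import AsyncIterator

# Shape 1: a plain sync callback.
def sync_cb(result) -> None:
    print("done:", result)

# Shape 2: an async callback.
async def async_cb(result) -> None:
    print("done:", result)

# Shape 3: an async generator callback, which may yield additional
# protocol-specific events after the run completes.
async def gen_cb(result) -> AsyncIterator[dict]:
    yield {"type": "custom-finish", "output": result}

async def main() -> list[dict]:
    sync_cb("ok")
    await async_cb("ok")
    return [e async for e in gen_cb("ok")]

print(asyncio.run(main()))  # [{'type': 'custom-finish', 'output': 'ok'}]
```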