
pydantic_ai.ui.ag_ui

AG-UI protocol integration for Pydantic AI agents.

AGUIEventStream

Bases: UIEventStream[RunAgentInput, BaseEvent, AgentDepsT, OutputDataT]

UI event stream transformer for the Agent-User Interaction (AG-UI) protocol.

Methods

handle_event

@async

def handle_event(event: NativeEvent) -> AsyncIterator[BaseEvent]

Override to set timestamps on all AG-UI events.

Returns

AsyncIterator[BaseEvent]

AGUIAdapter

Bases: UIAdapter[RunAgentInput, Message, BaseEvent, AgentDepsT, OutputDataT]

UI adapter for the Agent-User Interaction (AG-UI) protocol.

Attributes

ag_ui_version

AG-UI protocol version controlling behavior thresholds.

Accepts any version string (e.g. '0.1.13'). Defaults to the version detected from the installed ag-ui-protocol package.

Known thresholds:

  • < 0.1.13: emits THINKING_* events during streaming, drops ThinkingPart from dump_messages output.
  • >= 0.1.13: emits REASONING_* events with encrypted metadata during streaming, and includes ThinkingPart as ReasoningMessage in dump_messages output for full round-trip fidelity of thinking signatures and provider metadata.
  • >= 0.1.15: emits typed multimodal input content (ImageInputContent, AudioInputContent, VideoInputContent, DocumentInputContent) instead of generic BinaryInputContent.

load_messages always accepts ReasoningMessage and multimodal content types regardless of this setting.

Type: str Default: DEFAULT_AG_UI_VERSION
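The threshold behavior above can be sketched with plain version-tuple comparison. This is a hypothetical illustration of the gating logic, not the library's actual implementation:

```python
# Hypothetical sketch of the version-threshold gating described above;
# the library's real detection code may differ.

def _parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.1.13' into a comparable tuple."""
    return tuple(int(part) for part in version.split('.'))

def thinking_event_family(ag_ui_version: str) -> str:
    """Which event family is emitted for thinking content during streaming."""
    if _parse(ag_ui_version) >= _parse('0.1.13'):
        return 'REASONING'  # ThinkingPart round-trips via ReasoningMessage
    return 'THINKING'       # ThinkingPart is dropped from dump_messages output

def emits_typed_multimodal_input(ag_ui_version: str) -> bool:
    """>= 0.1.15 emits typed content instead of generic BinaryInputContent."""
    return _parse(ag_ui_version) >= _parse('0.1.15')
```

Note that tuple comparison is used rather than string comparison, since `'0.1.9' > '0.1.13'` lexicographically.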

preserve_file_data

Whether to preserve agent-generated files and uploaded files in AG-UI message conversion.

When True, agent-generated files and uploaded files are stored as activity messages during dump_messages and restored during load_messages, enabling full round-trip fidelity. When False (default), they are silently dropped.

If your AG-UI frontend uses activities, be aware that pydantic_ai_* activity types are reserved for internal round-trip use and should be ignored by frontend activity handlers.

Type: bool Default: False

messages

Pydantic AI messages from the AG-UI run input.

Type: list[ModelMessage]

toolset

Toolset representing frontend tools from the AG-UI run input.

Type: AbstractToolset[AgentDepsT] | None

state

Frontend state from the AG-UI run input.

Type: dict[str, Any] | None

Methods

build_run_input

@classmethod

def build_run_input(cls, body: bytes) -> RunAgentInput

Build an AG-UI run input object from the request body.

Returns

RunAgentInput

build_event_stream

def build_event_stream() -> UIEventStream[RunAgentInput, BaseEvent, AgentDepsT, OutputDataT]

Build an AG-UI event stream transformer.

Returns

UIEventStream[RunAgentInput, BaseEvent, AgentDepsT, OutputDataT]

from_request

@async

@classmethod

def from_request(
    cls,
    request: Request,
    agent: AbstractAgent[AgentDepsT, OutputDataT],
    ag_ui_version: str = DEFAULT_AG_UI_VERSION,
    preserve_file_data: bool = False,
    manage_system_prompt: Literal['server', 'client'] = 'server',
    **kwargs: Any,
) -> AGUIAdapter[AgentDepsT, OutputDataT]

Extends from_request with AG-UI-specific parameters.

Returns

AGUIAdapter[AgentDepsT, OutputDataT]

load_messages

@classmethod

def load_messages(
    cls,
    messages: Sequence[Message],
    preserve_file_data: bool = False,
) -> list[ModelMessage]

Transform AG-UI messages into Pydantic AI messages.

Returns

list[ModelMessage]

dump_messages

@classmethod

def dump_messages(
    cls,
    messages: Sequence[ModelMessage],
    ag_ui_version: str = DEFAULT_AG_UI_VERSION,
    preserve_file_data: bool = False,
) -> list[Message]

Transform Pydantic AI messages into AG-UI messages.

Note: The round-trip dump_messages -> load_messages is not fully lossless:

  • TextPart.id, .provider_name, .provider_details are lost.
  • ToolCallPart.id, .provider_name, .provider_details are lost.
  • BuiltinToolCallPart.id, .provider_details are lost (only .provider_name survives via the prefixed tool call ID).
  • BuiltinToolReturnPart.provider_details is lost.
  • RetryPromptPart becomes ToolReturnPart (or UserPromptPart) on reload.
  • CachePoint and UploadedFile content items are dropped (unless preserve_file_data=True).
  • ThinkingPart is dropped when ag_ui_version is below '0.1.13'.
  • FilePart is silently dropped unless preserve_file_data=True.
  • UploadedFile in a multi-item UserPromptPart is split into a separate activity message when preserve_file_data=True, which reloads as a separate UserPromptPart.
  • Part ordering within a ModelResponse may change when text follows tool calls.

Returns

list[Message] — A list of AG-UI Message objects.

Parameters

messages : Sequence[ModelMessage]

A sequence of ModelMessage objects to convert.

ag_ui_version : str Default: DEFAULT_AG_UI_VERSION

AG-UI protocol version controlling ThinkingPart emission.

preserve_file_data : bool Default: False

Whether to include FilePart and UploadedFile as ActivityMessage.

DEFAULT_AG_UI_VERSION

The default AG-UI version, auto-detected from the installed ag-ui-protocol package.

Type: str Default: detect_ag_ui_version()


AGUIApp

Bases: Generic[AgentDepsT, OutputDataT], Starlette

ASGI application for running Pydantic AI agents with AG-UI protocol support.

Methods

__init__
def __init__(
    agent: AbstractAgent[AgentDepsT, OutputDataT],
    ag_ui_version: str = DEFAULT_AG_UI_VERSION,
    preserve_file_data: bool = False,
    output_type: OutputSpec[Any] | None = None,
    message_history: Sequence[ModelMessage] | None = None,
    deferred_tool_results: DeferredToolResults | None = None,
    model: Model | KnownModelName | str | None = None,
    deps: AgentDepsT = None,
    model_settings: ModelSettings | None = None,
    usage_limits: UsageLimits | None = None,
    usage: RunUsage | None = None,
    infer_name: bool = True,
    toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
    builtin_tools: Sequence[AbstractBuiltinTool] | None = None,
    on_complete: OnCompleteFunc[Any] | None = None,
    debug: bool = False,
    routes: Sequence[BaseRoute] | None = None,
    middleware: Sequence[Middleware] | None = None,
    exception_handlers: Mapping[Any, ExceptionHandler] | None = None,
    on_startup: Sequence[Callable[[], Any]] | None = None,
    on_shutdown: Sequence[Callable[[], Any]] | None = None,
    lifespan: Lifespan[Self] | None = None,
) -> None

An ASGI application that handles every request by running the agent and streaming the response.

Note that the deps will be the same for each request, with the exception of the frontend state that’s injected into the state field of a deps object that implements the StateHandler protocol. To provide different deps for each request (e.g. based on the authenticated user), use AGUIAdapter.run_stream() or AGUIAdapter.dispatch_request() instead.
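A minimal sketch of that deps behavior, using hypothetical stand-ins (StateDeps and inject_state are invented for illustration; the real protocol is pydantic_ai's StateHandler):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class StateDeps:
    # Shared across every request served by the app.
    api_key: str
    # Overwritten per request with the frontend state, as AGUIApp does for
    # deps objects implementing the StateHandler protocol.
    state: dict[str, Any] = field(default_factory=dict)

def inject_state(deps: StateDeps, frontend_state: dict[str, Any]) -> StateDeps:
    """Simulate the per-request state injection described above."""
    deps.state = frontend_state
    return deps

shared = StateDeps(api_key='sk-test')  # one deps object for the whole app
inject_state(shared, {'user': 'alice'})  # only .state varies per request
```

For genuinely per-request dependencies (e.g. the authenticated user), use AGUIAdapter.run_stream() or AGUIAdapter.dispatch_request() as noted above.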

Returns

None

Parameters

agent : AbstractAgent[AgentDepsT, OutputDataT]

The agent to run.

ag_ui_version : str Default: DEFAULT_AG_UI_VERSION

AG-UI protocol version controlling thinking/reasoning event format.

preserve_file_data : bool Default: False

Whether to preserve agent-generated files and uploaded files in AG-UI message conversion. See AGUIAdapter.preserve_file_data.

output_type : OutputSpec[Any] | None Default: None

Custom output type to use for this run. output_type may only be used if the agent has no output validators, since output validators would expect an argument that matches the agent's output type.

message_history : Sequence[ModelMessage] | None Default: None

History of the conversation so far.

deferred_tool_results : DeferredToolResults | None Default: None

Optional results for deferred tool calls in the message history.

model : Model | KnownModelName | str | None Default: None

Optional model to use for this run, required if model was not set when creating the agent.

deps : AgentDepsT Default: None

Optional dependencies to use for this run.

model_settings : ModelSettings | None Default: None

Optional settings to use for this model’s request.

usage_limits : UsageLimits | None Default: None

Optional limits on model request count or token usage.

usage : RunUsage | None Default: None

Optional usage to start with, useful for resuming a conversation or agents used in tools.

infer_name : bool Default: True

Whether to try to infer the agent name from the call frame if it’s not set.

toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None

Optional additional toolsets for this run.

builtin_tools : Sequence[AbstractBuiltinTool] | None Default: None

Optional additional builtin tools for this run.

on_complete : OnCompleteFunc[Any] | None Default: None

Optional callback function called when the agent run completes successfully. The callback receives the completed AgentRunResult and can access all_messages() and other result data.

debug : bool Default: False

Boolean indicating if debug tracebacks should be returned on errors.

routes : Sequence[BaseRoute] | None Default: None

A list of routes to serve incoming HTTP and WebSocket requests.

middleware : Sequence[Middleware] | None Default: None

A list of middleware to run for every request. A Starlette application will always automatically include two middleware classes: ServerErrorMiddleware is added as the very outermost middleware, to handle any uncaught errors occurring anywhere in the entire stack, and ExceptionMiddleware is added as the very innermost middleware, to deal with handled exception cases occurring in the routing or endpoints.

exception_handlers : Mapping[Any, ExceptionHandler] | None Default: None

A mapping of either integer status codes, or exception class types onto callables which handle the exceptions. Exception handler callables should be of the form handler(request, exc) -> response and may be either standard functions, or async functions.

on_startup : Sequence[Callable[[], Any]] | None Default: None

A list of callables to run on application startup. Startup handler callables do not take any arguments, and may be either standard functions, or async functions.

on_shutdown : Sequence[Callable[[], Any]] | None Default: None

A list of callables to run on application shutdown. Shutdown handler callables do not take any arguments, and may be either standard functions, or async functions.

lifespan : Lifespan[Self] | None Default: None

A lifespan context function, which can be used to perform startup and shutdown tasks. This is a newer style that replaces the on_startup and on_shutdown handlers. Use one or the other, not both.