pydantic_ai.durable_exec
Temporal Workflow base class that provides __pydantic_ai_agents__ for direct agent registration.
Bases: RunContext[AgentDepsT]
The RunContext subclass to use to serialize and deserialize the run context for use inside a Temporal activity.
By default, only the deps, run_id, metadata, retries, tool_call_id, tool_name, tool_call_approved, tool_call_metadata, retry, max_retries, run_step, usage, and partial_output attributes will be available.
To make another attribute available, create a TemporalRunContext subclass with a custom serialize_run_context class method that returns a dictionary that includes the attribute and pass it to TemporalAgent.
@classmethod
def serialize_run_context(cls, ctx: RunContext[Any]) -> dict[str, Any]
Serialize the run context to a dict[str, Any].
@classmethod
def deserialize_run_context(
cls,
ctx: dict[str, Any],
deps: Any,
) -> TemporalRunContext[Any]
Deserialize the run context from a dict[str, Any].
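The serialize/deserialize pair can be illustrated with a minimal plain-Python sketch (attribute names here are illustrative; the real TemporalRunContext covers many more fields and is wired into the Temporal data converter):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class RunContext:
    deps: Any
    run_step: int
    prompt: str = ''  # not part of the serialized whitelist in this sketch


class SerializableRunContext(RunContext):
    @classmethod
    def serialize_run_context(cls, ctx: RunContext) -> dict[str, Any]:
        # Only whitelisted attributes cross the activity boundary;
        # deps is passed separately so it can use its own serialization.
        return {'run_step': ctx.run_step}

    @classmethod
    def deserialize_run_context(cls, data: dict[str, Any], deps: Any) -> 'SerializableRunContext':
        return cls(deps=deps, **data)


ctx = RunContext(deps={'api_key': 'k'}, run_step=3, prompt='hi')
payload = SerializableRunContext.serialize_run_context(ctx)
restored = SerializableRunContext.deserialize_run_context(payload, deps=ctx.deps)
# attributes outside the whitelist fall back to their defaults after the round trip
```

A subclass that needs `prompt` inside activities would add it to the dictionary returned by `serialize_run_context`.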
Bases: SimplePlugin
Temporal client plugin for Logfire.
Bases: WrapperAgent[AgentDepsT, OutputDataT]
def __init__(
wrapped: AbstractAgent[AgentDepsT, OutputDataT],
name: str | None = None,
models: Mapping[str, Model] | None = None,
provider_factory: TemporalProviderFactory | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
activity_config: ActivityConfig | None = None,
model_activity_config: ActivityConfig | None = None,
toolset_activity_config: dict[str, ActivityConfig] | None = None,
tool_activity_config: dict[str, dict[str, ActivityConfig | Literal[False]]] | None = None,
run_context_type: type[TemporalRunContext[AgentDepsT]] = TemporalRunContext[AgentDepsT],
temporalize_toolset_func: Callable[[AbstractToolset[AgentDepsT], str, ActivityConfig, dict[str, ActivityConfig | Literal[False]], type[AgentDepsT], type[TemporalRunContext[AgentDepsT]], AbstractAgent[AgentDepsT, Any] | None], AbstractToolset[AgentDepsT]] = temporalize_toolset,
)
Wrap an agent to enable it to be used inside a Temporal workflow, by automatically offloading model requests, tool calls, and MCP server communication to Temporal activities.
After wrapping, the original agent can still be used as normal outside of the Temporal workflow, but any changes to its model or toolsets after wrapping will not be reflected in the durable agent.
wrapped : AbstractAgent[AgentDepsT, OutputDataT]
The agent to wrap.
name : str | None Default: None
Optional unique agent name to use in the Temporal activities’ names. If not provided, the agent’s name will be used.
models : Mapping[str, Model] | None Default: None
Optional mapping of model instances to register with the agent.
Keys define the names that can be referenced at runtime and the values are Model instances.
Registered model instances can be passed directly to run(model=...).
If the wrapped agent doesn’t have a model set and none is provided to run(), the first model in this mapping will be used as the default.
provider_factory : TemporalProviderFactory | None Default: None
Optional callable used when instantiating models from provider strings (those supplied at runtime).
The callable receives the provider name and the current run context, allowing custom configuration such as injecting API keys stored on deps.
Note: This factory is only used inside Temporal workflows. Outside workflows, model strings are resolved using the default provider behavior.
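A provider factory of the kind described might look like this sketch (the callable shape, return value, and the `api_keys` attribute on deps are illustrative assumptions, not the library's actual types):

```python
from dataclasses import dataclass


@dataclass
class Deps:
    api_keys: dict[str, str]


@dataclass
class RunContext:
    deps: Deps


def provider_factory(provider_name: str, ctx: RunContext) -> dict:
    # Hypothetical: configure the provider with an API key stored on deps,
    # so credentials can vary per workflow run.
    key = ctx.deps.api_keys[provider_name]
    return {'provider': provider_name, 'api_key': key}


ctx = RunContext(deps=Deps(api_keys={'openai': 'sk-test'}))
config = provider_factory('openai', ctx)
```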
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use instead of the one set on the wrapped agent.
activity_config : ActivityConfig | None Default: None
The base Temporal activity config to use for all activities. If no config is provided, a start_to_close_timeout of 60 seconds is used.
model_activity_config : ActivityConfig | None Default: None
The Temporal activity config to use for model request activities. This is merged with the base activity config.
toolset_activity_config : dict[str, ActivityConfig] | None Default: None
The Temporal activity config to use for get-tools and call-tool activities for specific toolsets identified by ID. This is merged with the base activity config.
tool_activity_config : dict[str, dict[str, ActivityConfig | Literal[False]]] | None Default: None
The Temporal activity config to use for specific tool call activities identified by toolset ID and tool name.
This is merged with the base and toolset-specific activity configs.
If a tool does not use IO, you can specify False to disable using an activity.
Note that such a tool must be defined as an async function, as non-async tools are run in threads, which are non-deterministic and thus not supported outside of activities.
run_context_type : type[TemporalRunContext[AgentDepsT]] Default: TemporalRunContext[AgentDepsT]
The TemporalRunContext subclass to use to serialize and deserialize the run context for use inside a Temporal activity.
By default, only the deps, run_id, metadata, retries, tool_call_id, tool_name, tool_call_approved, retry, max_retries, run_step, usage, and partial_output attributes will be available.
To make another attribute available, create a TemporalRunContext subclass with a custom serialize_run_context class method that returns a dictionary that includes the attribute.
temporalize_toolset_func : Callable[[AbstractToolset[AgentDepsT], str, ActivityConfig, dict[str, ActivityConfig | Literal[False]], type[AgentDepsT], type[TemporalRunContext[AgentDepsT]], AbstractAgent[AgentDepsT, Any] | None], AbstractToolset[AgentDepsT]] Default: temporalize_toolset
Optional function to use to prepare “leaf” toolsets (i.e. those that implement their own tool listing and calling) for Temporal by wrapping them in a TemporalWrapperToolset that moves methods that require IO to Temporal activities.
If not provided, only FunctionToolset and MCPServer will be prepared for Temporal.
The function takes the toolset, the activity name prefix, the toolset-specific activity config, the tool-specific activity configs, the deps type, the run context type, and an optional agent.
@async
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Run the agent with a user prompt in async mode.
This method builds an internal agent graph (using system prompts, tools and result schemas) and then runs the graph to completion. The result of the run is returned.
Example:
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')

async def main():
    agent_run = await agent.run('What is the capital of France?')
    print(agent_run.output)
    #> The capital of France is Paris.
AgentRunResult[Any] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run, output_type may only be used if the agent has no
output validators since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Inside workflows, only registered model instances, registered names, or provider strings are valid.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Synchronously run the agent with a user prompt.
This is a convenience method that wraps self.run with loop.run_until_complete(...).
You therefore can’t use this method inside async code or if there’s an active event loop.
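The delegation pattern is essentially the following sketch (using asyncio.run rather than an explicit loop; the real method manages the event loop itself and MiniAgent is a stand-in, not the library's implementation):

```python
import asyncio


class MiniAgent:
    async def run(self, prompt: str) -> str:
        # Stand-in for the real async run method.
        return f'answer to {prompt!r}'

    def run_sync(self, prompt: str) -> str:
        # Raises if called from inside a running event loop,
        # which is why run_sync can't be used in async code.
        return asyncio.run(self.run(prompt))


result = MiniAgent().run_sync('What is the capital of Italy?')
```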
Example:
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')

result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.output)
#> The capital of Italy is Rome.
AgentRunResult[Any] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run, output_type may only be used if the agent has no
output validators since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
@async
def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, OutputDataT]]
def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, RunOutputDataT]]
Run the agent with a user prompt in async mode, returning a streamed response.
Example:
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')

async def main():
    async with agent.run_stream('What is the capital of the UK?') as response:
        print(await response.get_output())
        #> The capital of the UK is London.
AsyncIterator[StreamedRunResult[AgentDepsT, Any]] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run, output_type may only be used if the agent has no
output validators since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run. It will receive all the events up until the final result is found, which you can then read or stream from inside the context manager.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[OutputDataT]]
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[RunOutputDataT]]
Run the agent with a user prompt in async mode and stream events from the run.
This is a convenience method that wraps self.run and
uses the event_stream_handler kwarg to get a stream of events from the run.
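The wrapping described above amounts to funneling handler events into a single async iterator and appending the run result at the end. A minimal sketch of that pattern (an asyncio queue with a sentinel; the handler signature and event values are illustrative, not the library's actual types):

```python
import asyncio
from typing import AsyncIterator


async def run(prompt: str, event_stream_handler) -> str:
    # Stand-in for agent.run: emits a few events via the handler, then returns.
    await event_stream_handler(['part-start', 'part-delta', 'part-end'])
    return f'result for {prompt!r}'


async def run_stream_events(prompt: str) -> AsyncIterator[str]:
    queue: asyncio.Queue[str | None] = asyncio.Queue()

    async def handler(events):
        for e in events:
            await queue.put(e)

    async def runner():
        result = await run(prompt, handler)
        await queue.put(f'run-result:{result}')  # final result event
        await queue.put(None)                    # sentinel: stream finished

    task = asyncio.create_task(runner())
    while (item := await queue.get()) is not None:
        yield item
    await task


async def main():
    return [e async for e in run_stream_events('hi')]


events = asyncio.run(main())
```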
Example:
from pydantic_ai import Agent, AgentRunResultEvent, AgentStreamEvent

agent = Agent('openai:gpt-5.2')

async def main():
    events: list[AgentStreamEvent | AgentRunResultEvent] = []
    async for event in agent.run_stream_events('What is the capital of France?'):
        events.append(event)
    print(events)
    '''
    [
        PartStartEvent(index=0, part=TextPart(content='The capital of ')),
        FinalResultEvent(tool_name=None, tool_call_id=None),
        PartDeltaEvent(index=0, delta=TextPartDelta(content_delta='France is Paris. ')),
        PartEndEvent(
            index=0, part=TextPart(content='The capital of France is Paris. ')
        ),
        AgentRunResultEvent(
            result=AgentRunResult(output='The capital of France is Paris. ')
        ),
    ]
    '''
Arguments are the same as for self.run,
except that event_stream_handler is now allowed.
AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[Any]] — An async iterable of stream events AgentStreamEvent and finally an AgentRunResultEvent with the final run result.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run, output_type may only be used if the agent has no
output validators since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
@async
def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
_deprecated_kwargs: Never = {},
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, OutputDataT]]
def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
_deprecated_kwargs: Never = {},
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, RunOutputDataT]]
A contextmanager which can be used to iterate over the agent graph’s nodes as they are executed.
This method builds an internal agent graph (using system prompts, tools and output schemas) and then returns an
AgentRun object. The AgentRun can be used to async-iterate over the nodes of the graph as they are
executed. This is the API to use if you want to consume the outputs coming from each LLM model response, or the
stream of events coming from the execution of tools.
The AgentRun also provides methods to access the full message history, new messages, and usage statistics,
and the final result of the run once it has completed.
For more details, see the documentation of AgentRun.
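The node-by-node execution model can be sketched with a toy async iterator driving stand-in nodes to an End node (this mimics the shape of AgentRun iteration; the node classes and result handling are simplified assumptions):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class End:
    data: str


class ToyAgentRun:
    """Toy async iterator over graph nodes, finishing with an End node."""

    def __init__(self, nodes):
        self._nodes = iter(nodes)
        self.result = None

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            node = next(self._nodes)
        except StopIteration:
            raise StopAsyncIteration
        if isinstance(node, End):
            # Record the final result when the graph reaches its End node.
            self.result = node.data
        return node


async def main():
    run = ToyAgentRun(['user_prompt', 'model_request', 'call_tools', End('Paris')])
    seen = [node async for node in run]
    return seen, run.result


seen, result = asyncio.run(main())
```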
Example:
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')

async def main():
    nodes = []
    async with agent.iter('What is the capital of France?') as agent_run:
        async for node in agent_run:
            nodes.append(node)
        print(nodes)
        '''
        [
            UserPromptNode(
                user_prompt='What is the capital of France?',
                instructions_functions=[],
                system_prompts=(),
                system_prompt_functions=[],
                system_prompt_dynamic_functions={},
            ),
            ModelRequestNode(
                request=ModelRequest(
                    parts=[
                        UserPromptPart(
                            content='What is the capital of France?',
                            timestamp=datetime.datetime(...),
                        )
                    ],
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            CallToolsNode(
                model_response=ModelResponse(
                    parts=[TextPart(content='The capital of France is Paris.')],
                    usage=RequestUsage(input_tokens=56, output_tokens=7),
                    model_name='gpt-5.2',
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            End(data=FinalResult(output='The capital of France is Paris.')),
        ]
        '''
        print(agent_run.result.output)
        #> The capital of France is Paris.
AsyncIterator[AgentRun[AgentDepsT, Any]] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run, output_type may only be used if the agent has no
output validators since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def override(
name: str | _utils.Unset = _utils.UNSET,
deps: AgentDepsT | _utils.Unset = _utils.UNSET,
model: models.Model | models.KnownModelName | str | _utils.Unset = _utils.UNSET,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset = _utils.UNSET,
tools: Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, ...]] | _utils.Unset = _utils.UNSET,
instructions: _instructions.AgentInstructions[AgentDepsT] | _utils.Unset = _utils.UNSET,
model_settings: AgentModelSettings[AgentDepsT] | _utils.Unset = _utils.UNSET,
spec: dict[str, Any] | AgentSpec | None = None,
) -> Iterator[None]
Context manager to temporarily override agent name, dependencies, model, toolsets, tools, or instructions.
This is particularly useful when testing.
name : str | _utils.Unset Default: _utils.UNSET
The name to use instead of the name passed to the agent constructor and agent run.
deps : AgentDepsT | _utils.Unset Default: _utils.UNSET
The dependencies to use instead of the dependencies passed to the agent run.
model : models.Model | models.KnownModelName | str | _utils.Unset Default: _utils.UNSET
The model to use instead of the model passed to the agent run.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset Default: _utils.UNSET
The toolsets to use instead of the toolsets passed to the agent constructor and agent run.
tools : Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, …]] | _utils.Unset Default: _utils.UNSET
The tools to use instead of the tools registered with the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] | _utils.Unset Default: _utils.UNSET
The instructions to use instead of the instructions registered with the agent.
model_settings : AgentModelSettings[AgentDepsT] | _utils.Unset Default: _utils.UNSET
The model settings to use instead of the model settings passed to the agent constructor.
When set, any per-run model_settings argument is ignored.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply as overrides.
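The temporary-override behavior follows the standard save-and-restore context-manager pattern; a sketch with a sentinel playing the role of _utils.UNSET (MiniAgent is a stand-in, not the library's implementation):

```python
from contextlib import contextmanager

UNSET = object()  # sentinel, like _utils.UNSET: distinguishes "not passed" from None


class MiniAgent:
    def __init__(self, model: str):
        self.model = model

    @contextmanager
    def override(self, model=UNSET):
        saved = self.model
        if model is not UNSET:
            self.model = model
        try:
            yield
        finally:
            self.model = saved  # always restore, even if the body raises


agent = MiniAgent('openai:gpt-5.2')
with agent.override(model='test-model'):
    inside = agent.model  # overridden within the block
```

Using a sentinel rather than None means `override(model=None)` could still be a deliberate override, which is why the real API uses _utils.UNSET.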
Bases: WrapperToolset[AgentDepsT], ABC
Bases: SimplePlugin
Temporal client and worker plugin for Pydantic AI.
Bases: SimplePlugin
Temporal worker plugin for a specific Pydantic AI agent.
Bases: TypedDict
Configuration for a step in the DBOS workflow.
Bases: DBOSMCPToolset[AgentDepsT]
A wrapper for MCPServer that integrates with DBOS, turning call_tool and get_tools into DBOS steps.
Tool definitions are cached across steps to avoid redundant MCP server round-trips,
respecting the wrapped server’s cache_tools setting.
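Caching tool definitions across steps is essentially memoization of the listing call; sketched here with a counter standing in for the MCP server round-trip (class and tool names are illustrative):

```python
class CachingToolset:
    """Caches the result of an expensive tool-listing call."""

    def __init__(self, cache_tools: bool = True):
        self.cache_tools = cache_tools
        self._cache = None
        self.round_trips = 0

    def _fetch_tools(self):
        self.round_trips += 1  # stands in for an MCP server round-trip
        return ['get_weather', 'search']

    def get_tools(self):
        if not self.cache_tools:
            # Respect the wrapped server's setting: no caching requested.
            return self._fetch_tools()
        if self._cache is None:
            self._cache = self._fetch_tools()
        return self._cache


ts = CachingToolset()
ts.get_tools()
ts.get_tools()  # served from cache: still only one round-trip
```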
Bases: WrapperModel
A wrapper for Model that integrates with DBOS, turning request and request_stream to DBOS steps.
Bases: WrapperAgent[AgentDepsT, OutputDataT], DBOSConfiguredInstance
def __init__(
wrapped: AbstractAgent[AgentDepsT, OutputDataT],
name: str | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
mcp_step_config: StepConfig | None = None,
model_step_config: StepConfig | None = None,
parallel_execution_mode: DBOSParallelExecutionMode = 'parallel_ordered_events',
)
Wrap an agent to enable it with DBOS durable workflows, by automatically offloading model requests, tool calls, and MCP server communication to DBOS steps.
After wrapping, the original agent can still be used as normal outside of the DBOS workflow.
wrapped : AbstractAgent[AgentDepsT, OutputDataT]
The agent to wrap.
name : str | None Default: None
Optional unique agent name to use as the DBOS configured instance name. If not provided, the agent’s name will be used.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use instead of the one set on the wrapped agent.
mcp_step_config : StepConfig | None Default: None
The base DBOS step config to use for MCP server steps. If no config is provided, use the default settings of DBOS.
model_step_config : StepConfig | None Default: None
The DBOS step config to use for model request steps. If no config is provided, use the default settings of DBOS.
parallel_execution_mode : DBOSParallelExecutionMode Default: 'parallel_ordered_events'
The mode for executing tool calls:
- ‘parallel_ordered_events’ (default): Run tool calls in parallel, but events are emitted in order, after all calls complete.
- ‘sequential’: Run tool calls one at a time in order.
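The difference between the two modes can be sketched with asyncio: run calls concurrently but report results in call order (gather preserves argument order), versus awaiting each call in turn. The function names and events are illustrative:

```python
import asyncio


async def call_tool(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f'{name}-done'


async def parallel_ordered_events(calls):
    # All calls run concurrently; asyncio.gather returns results in
    # argument order, so events come out in call order after completion.
    return await asyncio.gather(*(call_tool(n, d) for n, d in calls))


async def sequential(calls):
    # Each call is awaited before the next one starts.
    return [await call_tool(n, d) for n, d in calls]


calls = [('slow', 0.02), ('fast', 0.0)]
parallel_results = asyncio.run(parallel_ordered_events(calls))
sequential_results = asyncio.run(sequential(calls))
# Both modes yield results in call order; they differ in wall-clock time
# and in when events become visible.
```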
@async
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Run the agent with a user prompt in async mode.
This method builds an internal agent graph (using system prompts, tools and result schemas) and then runs the graph to completion. The result of the run is returned.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
agent_run = await agent.run('What is the capital of France?')
print(agent_run.output)
#> The capital of France is Paris.
AgentRunResult[Any] — The result of the run.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
Optional agent spec to apply for this run.
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Synchronously run the agent with a user prompt.
This is a convenience method that wraps self.run with loop.run_until_complete(...).
You therefore can’t use this method inside async code or if there’s an active event loop.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.output)
#> The capital of Italy is Rome.
AgentRunResult[Any] — The result of the run.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
Optional agent spec to apply for this run.
@async
def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, OutputDataT]]
def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
deps: AgentDepsT = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, RunOutputDataT]]
Run the agent with a user prompt in async mode, returning a streamed response.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
async with agent.run_stream('What is the capital of the UK?') as response:
print(await response.get_output())
#> The capital of the UK is London.
AsyncIterator[StreamedRunResult[AgentDepsT, Any]] — The result of the run.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run. It will receive all the events up until the final result is found, which you can then read or stream from inside the context manager.
Optional agent spec to apply for this run.
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[OutputDataT]]
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[RunOutputDataT]]
Run the agent with a user prompt in async mode and stream events from the run.
This is a convenience method that wraps self.run and
uses the event_stream_handler kwarg to get a stream of events from the run.
Example:
from pydantic_ai import Agent, AgentRunResultEvent, AgentStreamEvent
agent = Agent('openai:gpt-5.2')
async def main():
events: list[AgentStreamEvent | AgentRunResultEvent] = []
async for event in agent.run_stream_events('What is the capital of France?'):
events.append(event)
print(events)
'''
[
PartStartEvent(index=0, part=TextPart(content='The capital of ')),
FinalResultEvent(tool_name=None, tool_call_id=None),
PartDeltaEvent(index=0, delta=TextPartDelta(content_delta='France is Paris. ')),
PartEndEvent(
index=0, part=TextPart(content='The capital of France is Paris. ')
),
AgentRunResultEvent(
result=AgentRunResult(output='The capital of France is Paris. ')
),
]
'''
Arguments are the same as for self.run,
except that event_stream_handler is not supported, since this method itself streams the events.
AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[Any]] — An async iterable of AgentStreamEvents, ending with an AgentRunResultEvent that carries the final run result.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
Optional agent spec to apply for this run.
@async
def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
_deprecated_kwargs: Never = {},
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, OutputDataT]]
def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
_deprecated_kwargs: Never = {},
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, RunOutputDataT]]
A context manager which can be used to iterate over the agent graph’s nodes as they are executed.
This method builds an internal agent graph (using system prompts, tools and output schemas) and then returns an
AgentRun object. The AgentRun can be used to async-iterate over the nodes of the graph as they are
executed. This is the API to use if you want to consume the outputs coming from each model response, or the
stream of events coming from the execution of tools.
The AgentRun also provides methods to access the full message history, new messages, and usage statistics,
and the final result of the run once it has completed.
For more details, see the documentation of AgentRun.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
nodes = []
async with agent.iter('What is the capital of France?') as agent_run:
async for node in agent_run:
nodes.append(node)
print(nodes)
'''
[
UserPromptNode(
user_prompt='What is the capital of France?',
instructions_functions=[],
system_prompts=(),
system_prompt_functions=[],
system_prompt_dynamic_functions={},
),
ModelRequestNode(
request=ModelRequest(
parts=[
UserPromptPart(
content='What is the capital of France?',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
run_id='...',
)
),
CallToolsNode(
model_response=ModelResponse(
parts=[TextPart(content='The capital of France is Paris.')],
usage=RequestUsage(input_tokens=56, output_tokens=7),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
)
),
End(data=FinalResult(output='The capital of France is Paris.')),
]
'''
print(agent_run.result.output)
#> The capital of France is Paris.
AsyncIterator[AgentRun[AgentDepsT, Any]] — The result of the run.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
Optional agent spec to apply for this run.
def override(
name: str | _utils.Unset = _utils.UNSET,
deps: AgentDepsT | _utils.Unset = _utils.UNSET,
model: models.Model | models.KnownModelName | str | _utils.Unset = _utils.UNSET,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset = _utils.UNSET,
tools: Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, ...]] | _utils.Unset = _utils.UNSET,
instructions: _instructions.AgentInstructions[AgentDepsT] | _utils.Unset = _utils.UNSET,
model_settings: AgentModelSettings[AgentDepsT] | _utils.Unset = _utils.UNSET,
spec: dict[str, Any] | AgentSpec | None = None,
) -> Iterator[None]
Context manager to temporarily override agent name, dependencies, model, toolsets, tools, or instructions.
This is particularly useful when testing.
name : str | _utils.Unset Default: _utils.UNSET
The name to use instead of the name passed to the agent constructor and agent run.
The dependencies to use instead of the dependencies passed to the agent run.
model : models.Model | models.KnownModelName | str | _utils.Unset Default: _utils.UNSET
The model to use instead of the model passed to the agent run.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset Default: _utils.UNSET
The toolsets to use instead of the toolsets passed to the agent constructor and agent run.
tools : Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, …]] | _utils.Unset Default: _utils.UNSET
The tools to use instead of the tools registered with the agent.
The instructions to use instead of the instructions registered with the agent.
The model settings to use instead of the model settings passed to the agent constructor.
When set, any per-run model_settings argument is ignored.
Optional agent spec to apply as overrides.
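The UNSET-sentinel pattern behind this override — only explicitly passed values replace the configured ones, and everything is restored on exit — can be sketched independently of pydantic_ai (MiniAgent and its fields are invented for illustration):

```python
from contextlib import contextmanager

class _Unset:
    """Sentinel distinguishing 'not overridden' from 'overridden with None'."""

UNSET = _Unset()

class MiniAgent:
    def __init__(self, model: str, deps=None):
        self.model = model
        self.deps = deps

    @contextmanager
    def override(self, model=UNSET, deps=UNSET):
        # Save current values, apply only the arguments that were
        # explicitly passed, and restore everything on exit.
        saved = (self.model, self.deps)
        if not isinstance(model, _Unset):
            self.model = model
        if not isinstance(deps, _Unset):
            self.deps = deps
        try:
            yield
        finally:
            self.model, self.deps = saved

agent = MiniAgent('real-model', deps={'db': 'prod'})
with agent.override(model='test-model'):
    inside = (agent.model, agent.deps)  # deps untouched
after = (agent.model, agent.deps)       # everything restored
print(inside, after)
```

Because omitted arguments stay UNSET, an override of one field never clobbers the others, and `None` remains a valid override value.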
The mode for executing tool calls in DBOS durable workflows. This is a subset of ParallelExecutionMode, because 'parallel' cannot guarantee deterministic event ordering.
Default: Literal['sequential', 'parallel_ordered_events']
Bases: TypedDict
Configuration for a task in Prefect.
These options are passed to the @task decorator.
Maximum number of retries for the task.
Type: int
Delay between retries in seconds. Can be a single value or a list for custom backoff.
Maximum time in seconds for the task to complete.
Type: float
Prefect cache policy for the task.
Type: CachePolicy
Whether to persist the task result.
Type: bool
Prefect result storage for the task. Should be a storage block or a block slug like s3-bucket/my-storage.
Type: ResultStorage
Whether to log print statements from the task.
Type: bool
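Since TaskConfig is a TypedDict, a config is just a plain dict. Its shape can be sketched as follows; the field names here are guesses mirroring the descriptions above, so check the actual TypedDict before relying on them:

```python
from __future__ import annotations

from typing import TypedDict

class TaskConfigSketch(TypedDict, total=False):
    # Illustrative mirror of the fields described above;
    # total=False makes every key optional.
    retries: int                              # maximum number of retries
    retry_delay_seconds: float | list[float]  # fixed delay or backoff schedule
    timeout_seconds: float                    # wall-clock limit for the task
    persist_result: bool                      # whether to persist the task result
    log_prints: bool                          # forward print() output to task logs

config: TaskConfigSketch = {'retries': 3, 'retry_delay_seconds': [1.0, 5.0, 30.0]}
print(sorted(config))  # ['retries', 'retry_delay_seconds']
```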
Bases: PrefectWrapperToolset[AgentDepsT]
A wrapper for FunctionToolset that integrates with Prefect, turning tool calls into Prefect tasks.
@async
def call_tool(
name: str,
tool_args: dict[str, Any],
ctx: RunContext[AgentDepsT],
tool: ToolsetTool[AgentDepsT],
) -> Any
Call a tool, wrapped as a Prefect task with a descriptive name.
Bases: PrefectWrapperToolset[AgentDepsT], ABC
A wrapper for MCPServer that integrates with Prefect, turning call_tool and get_tools into Prefect tasks.
@async
def call_tool(
name: str,
tool_args: dict[str, Any],
ctx: RunContext[AgentDepsT],
tool: ToolsetTool[AgentDepsT],
) -> ToolResult
Call an MCP tool, wrapped as a Prefect task with a descriptive name.
ToolResult
Bases: WrapperModel
A wrapper for Model that integrates with Prefect, turning request and request_stream into Prefect tasks.
@async
def request(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
model_request_parameters: ModelRequestParameters,
) -> ModelResponse
Make a model request, wrapped as a Prefect task when in a flow.
@async
def request_stream(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
model_request_parameters: ModelRequestParameters,
run_context: RunContext[Any] | None = None,
) -> AsyncIterator[StreamedResponse]
Make a streaming model request.
When inside a Prefect flow, the stream is consumed within a task and a non-streaming response is returned. When not in a flow, it behaves as a normal streaming request.
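The "consume the stream inside the task, then return a finished response" behaviour can be sketched generically; the function names and the in_flow flag below are illustrative, not the actual Prefect wrapper internals:

```python
import asyncio
from contextlib import asynccontextmanager

async def model_stream():
    # Stands in for a streamed model response.
    for chunk in ['The capital ', 'of France ', 'is Paris.']:
        await asyncio.sleep(0)
        yield chunk

@asynccontextmanager
async def request_stream(in_flow: bool):
    if in_flow:
        # Durable path: drain the stream inside the "task" so the whole
        # response is recorded, then expose it as a single finished chunk.
        text = ''.join([c async for c in model_stream()])
        async def replay():
            yield text
        yield replay()
    else:
        # Outside a flow, pass the live stream through unchanged.
        yield model_stream()

async def main():
    async with request_stream(in_flow=True) as stream:
        return [c async for c in stream]

chunks = asyncio.run(main())
print(chunks)  # one fully-assembled chunk, not three partial ones
```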
AsyncIterator[StreamedResponse]
Bases: WrapperAgent[AgentDepsT, OutputDataT]
def __init__(
wrapped: AbstractAgent[AgentDepsT, OutputDataT],
name: str | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
mcp_task_config: TaskConfig | None = None,
model_task_config: TaskConfig | None = None,
tool_task_config: TaskConfig | None = None,
tool_task_config_by_name: dict[str, TaskConfig | None] | None = None,
event_stream_handler_task_config: TaskConfig | None = None,
prefectify_toolset_func: Callable[[AbstractToolset[AgentDepsT], TaskConfig, TaskConfig, dict[str, TaskConfig | None]], AbstractToolset[AgentDepsT]] = prefectify_toolset,
)
Wrap an agent to enable it to be used inside a Prefect durable flow, by automatically offloading model requests, tool calls, and MCP server communication to Prefect tasks.
After wrapping, the original agent can still be used as normal outside of the Prefect flow.
The agent to wrap.
Optional unique agent name to use as the Prefect flow name prefix. If not provided, the agent’s name will be used.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use instead of the one set on the wrapped agent.
mcp_task_config : TaskConfig | None Default: None
The base Prefect task config to use for MCP server tasks. If no config is provided, use the default settings of Prefect.
model_task_config : TaskConfig | None Default: None
The Prefect task config to use for model request tasks. If no config is provided, use the default settings of Prefect.
tool_task_config : TaskConfig | None Default: None
The default Prefect task config to use for tool calls. If no config is provided, use the default settings of Prefect.
Per-tool task configuration. Keys are tool names, values are TaskConfig or None (None disables task wrapping for that tool).
event_stream_handler_task_config : TaskConfig | None Default: None
The Prefect task config to use for the event stream handler task. If no config is provided, use the default settings of Prefect.
prefectify_toolset_func : Callable[[AbstractToolset[AgentDepsT], TaskConfig, TaskConfig, dict[str, TaskConfig | None]], AbstractToolset[AgentDepsT]] Default: prefectify_toolset
Optional function to use to prepare toolsets for Prefect by wrapping them in a PrefectWrapperToolset that moves methods that require IO to Prefect tasks.
If not provided, only FunctionToolset and MCPServer will be prepared for Prefect.
The function takes the toolset, the task config, the tool-specific task config, and the tool-specific task config by name.
@async
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Run the agent with a user prompt in async mode.
This method builds an internal agent graph (using system prompts, tools and result schemas) and then runs the graph to completion. The result of the run is returned.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
agent_run = await agent.run('What is the capital of France?')
print(agent_run.output)
#> The capital of France is Paris.
AgentRunResult[Any] — The result of the run.
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run; output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
Optional additional instructions to use for this run.
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[OutputDataT]
def run_sync(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AgentRunResult[RunOutputDataT]
Synchronously run the agent with a user prompt.
This is a convenience method that wraps self.run with loop.run_until_complete(...).
You therefore can’t use this method inside async code or if there’s an active event loop.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.output)
#> The capital of Italy is Rome.
AgentRunResult[Any] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
async def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, OutputDataT]]
def run_stream(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
event_stream_handler: EventStreamHandler[AgentDepsT] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[StreamedRunResult[AgentDepsT, RunOutputDataT]]
Run the agent with a user prompt in async mode, returning a streamed response.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
async with agent.run_stream('What is the capital of the UK?') as response:
print(await response.get_output())
#> The capital of the UK is London.
AsyncIterator[StreamedRunResult[AgentDepsT, Any]] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
event_stream_handler : EventStreamHandler[AgentDepsT] | None Default: None
Optional event stream handler to use for this run. It will receive all the events up until the final result is found, which you can then read or stream from inside the context manager.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[OutputDataT]]
def run_stream_events(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[RunOutputDataT]]
Run the agent with a user prompt in async mode and stream events from the run.
This is a convenience method that wraps self.run and
uses the event_stream_handler kwarg to get a stream of events from the run.
Example:
from pydantic_ai import Agent, AgentRunResultEvent, AgentStreamEvent
agent = Agent('openai:gpt-5.2')
async def main():
events: list[AgentStreamEvent | AgentRunResultEvent] = []
async for event in agent.run_stream_events('What is the capital of France?'):
events.append(event)
print(events)
'''
[
PartStartEvent(index=0, part=TextPart(content='The capital of ')),
FinalResultEvent(tool_name=None, tool_call_id=None),
PartDeltaEvent(index=0, delta=TextPartDelta(content_delta='France is Paris. ')),
PartEndEvent(
index=0, part=TextPart(content='The capital of France is Paris. ')
),
AgentRunResultEvent(
result=AgentRunResult(output='The capital of France is Paris. ')
),
]
'''
Arguments are the same as for self.run,
except that event_stream_handler is not available here, as it is used internally to produce the event stream.
AsyncIterator[_messages.AgentStreamEvent | AgentRunResultEvent[Any]] — An async iterable of stream events AgentStreamEvent and finally an AgentRunResultEvent with the final run result.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
async def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: None = None,
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, OutputDataT]]
def iter(
user_prompt: str | Sequence[_messages.UserContent] | None = None,
output_type: OutputSpec[RunOutputDataT],
message_history: Sequence[_messages.ModelMessage] | None = None,
deferred_tool_results: DeferredToolResults | None = None,
model: models.Model | models.KnownModelName | str | None = None,
instructions: _instructions.AgentInstructions[AgentDepsT] = None,
deps: AgentDepsT = None,
model_settings: AgentModelSettings[AgentDepsT] | None = None,
usage_limits: _usage.UsageLimits | None = None,
usage: _usage.RunUsage | None = None,
metadata: AgentMetadata[AgentDepsT] | None = None,
infer_name: bool = True,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | None = None,
builtin_tools: Sequence[AgentBuiltinTool[AgentDepsT]] | None = None,
spec: dict[str, Any] | AgentSpec | None = None,
) -> AbstractAsyncContextManager[AgentRun[AgentDepsT, RunOutputDataT]]
A context manager that can be used to iterate over the agent graph’s nodes as they are executed.
This method builds an internal agent graph (using system prompts, tools and output schemas) and then returns an
AgentRun object. The AgentRun can be used to async-iterate over the nodes of the graph as they are
executed. This is the API to use if you want to consume the outputs coming from each LLM model response, or the
stream of events coming from the execution of tools.
The AgentRun also provides methods to access the full message history, new messages, and usage statistics,
and the final result of the run once it has completed.
For more details, see the documentation of AgentRun.
Example:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
async def main():
nodes = []
async with agent.iter('What is the capital of France?') as agent_run:
async for node in agent_run:
nodes.append(node)
print(nodes)
'''
[
UserPromptNode(
user_prompt='What is the capital of France?',
instructions_functions=[],
system_prompts=(),
system_prompt_functions=[],
system_prompt_dynamic_functions={},
),
ModelRequestNode(
request=ModelRequest(
parts=[
UserPromptPart(
content='What is the capital of France?',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
run_id='...',
)
),
CallToolsNode(
model_response=ModelResponse(
parts=[TextPart(content='The capital of France is Paris.')],
usage=RequestUsage(input_tokens=56, output_tokens=7),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
)
),
End(data=FinalResult(output='The capital of France is Paris.')),
]
'''
print(agent_run.result.output)
#> The capital of France is Paris.
AsyncIterator[AgentRun[AgentDepsT, Any]] — The result of the run.
user_prompt : str | Sequence[_messages.UserContent] | None Default: None
User input to start/continue the conversation.
output_type : OutputSpec[RunOutputDataT] | None Default: None
Custom output type to use for this run. output_type may only be used if the agent has no
output validators, since output validators would expect an argument that matches the agent’s output type.
message_history : Sequence[_messages.ModelMessage] | None Default: None
History of the conversation so far.
deferred_tool_results : DeferredToolResults | None Default: None
Optional results for deferred tool calls in the message history.
model : models.Model | models.KnownModelName | str | None Default: None
Optional model to use for this run, required if model was not set when creating the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] Default: None
Optional additional instructions to use for this run.
deps : AgentDepsT Default: None
Optional dependencies to use for this run.
model_settings : AgentModelSettings[AgentDepsT] | None Default: None
Optional settings to use for this model’s request.
usage_limits : _usage.UsageLimits | None Default: None
Optional limits on model request count or token usage.
usage : _usage.RunUsage | None Default: None
Optional usage to start with, useful for resuming a conversation or agents used in tools.
metadata : AgentMetadata[AgentDepsT] | None Default: None
Optional metadata to attach to this run. Accepts a dictionary or a callable taking
RunContext; merged with the agent’s configured metadata.
infer_name : bool Default: True
Whether to try to infer the agent name from the call frame if it’s not set.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | None Default: None
Optional additional toolsets for this run.
builtin_tools : Sequence[AgentBuiltinTool[AgentDepsT]] | None Default: None
Optional additional builtin tools for this run.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply for this run.
def override(
name: str | _utils.Unset = _utils.UNSET,
deps: AgentDepsT | _utils.Unset = _utils.UNSET,
model: models.Model | models.KnownModelName | str | _utils.Unset = _utils.UNSET,
toolsets: Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset = _utils.UNSET,
tools: Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, ...]] | _utils.Unset = _utils.UNSET,
instructions: _instructions.AgentInstructions[AgentDepsT] | _utils.Unset = _utils.UNSET,
model_settings: AgentModelSettings[AgentDepsT] | _utils.Unset = _utils.UNSET,
spec: dict[str, Any] | AgentSpec | None = None,
) -> Iterator[None]
Context manager to temporarily override agent dependencies, model, toolsets, tools, or instructions.
This is particularly useful when testing.
name : str | _utils.Unset Default: _utils.UNSET
The name to use instead of the name passed to the agent constructor and agent run.
deps : AgentDepsT | _utils.Unset Default: _utils.UNSET
The dependencies to use instead of the dependencies passed to the agent run.
model : models.Model | models.KnownModelName | str | _utils.Unset Default: _utils.UNSET
The model to use instead of the model passed to the agent run.
toolsets : Sequence[AbstractToolset[AgentDepsT]] | _utils.Unset Default: _utils.UNSET
The toolsets to use instead of the toolsets passed to the agent constructor and agent run.
tools : Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, …]] | _utils.Unset Default: _utils.UNSET
The tools to use instead of the tools registered with the agent.
instructions : _instructions.AgentInstructions[AgentDepsT] | _utils.Unset Default: _utils.UNSET
The instructions to use instead of the instructions registered with the agent.
model_settings : AgentModelSettings[AgentDepsT] | _utils.Unset Default: _utils.UNSET
The model settings to use instead of the model settings passed to the agent constructor.
When set, any per-run model_settings argument is ignored.
spec : dict[str, Any] | AgentSpec | None Default: None
Optional agent spec to apply as overrides.