pydantic_ai.run
AgentRun
Bases: Generic[AgentDepsT, OutputDataT]
A stateful, async-iterable run of an Agent.
You generally obtain an AgentRun instance by entering async with my_agent.iter(...) as agent_run:.
Once you have an instance, you can use it to iterate through the run’s nodes as they execute. When an
End is reached, the run finishes and result
becomes available.
Example:
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')


async def main():
    nodes = []
    # Iterate through the run, recording each node along the way:
    async with agent.iter('What is the capital of France?') as agent_run:
        async for node in agent_run:
            nodes.append(node)
    print(nodes)
    '''
    [
        UserPromptNode(
            user_prompt='What is the capital of France?',
            instructions_functions=[],
            system_prompts=(),
            system_prompt_functions=[],
            system_prompt_dynamic_functions={},
        ),
        ModelRequestNode(
            request=ModelRequest(
                parts=[
                    UserPromptPart(
                        content='What is the capital of France?',
                        timestamp=datetime.datetime(...),
                    )
                ],
                timestamp=datetime.datetime(...),
                run_id='...',
            )
        ),
        CallToolsNode(
            model_response=ModelResponse(
                parts=[TextPart(content='The capital of France is Paris.')],
                usage=RequestUsage(input_tokens=56, output_tokens=7),
                model_name='gpt-5.2',
                timestamp=datetime.datetime(...),
                run_id='...',
            )
        ),
        End(data=FinalResult(output='The capital of France is Paris.')),
    ]
    '''
    print(agent_run.result.output)
    #> The capital of France is Paris.
You can also manually drive the iteration using the next method for
more granular control.
ctx
The current context of the agent run.
Type: GraphRunContext[_agent_graph.GraphAgentState, _agent_graph.GraphAgentDeps[AgentDepsT, Any]]
next_node
The next node that will be run in the agent graph.
This is the node that will be used on the next async-iteration step, or when a node is not passed to self.next(...).
Type: _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]
result
The final result of the run if it has ended, otherwise None.
Once the run returns an End node, result is populated with an AgentRunResult.
Type: AgentRunResult[OutputDataT] | None
Metadata associated with this agent run, if configured.
run_id
The unique identifier for the agent run.
Type: str
def all_messages() -> list[_messages.ModelMessage]
Return all messages for the run so far.
Messages from older runs are included.
list[_messages.ModelMessage]
def all_messages_json(output_tool_return_content: str | None = None) -> bytes
Return all messages from all_messages as JSON bytes.
bytes — JSON bytes representing the messages.
def new_messages() -> list[_messages.ModelMessage]
Return new messages for the run so far.
Messages from older runs are excluded.
list[_messages.ModelMessage]
def new_messages_json() -> bytes
Return new messages from new_messages as JSON bytes.
bytes — JSON bytes representing the new messages.
def __aiter__(
) -> AsyncIterator[_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]]
Provide async-iteration over the nodes in the agent run.
AsyncIterator[_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]]
async def __anext__(
) -> _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]
Advance to the next node automatically based on the last returned node.
Note: this uses the graph run’s internal iteration which does NOT call
node hooks (before_node_run, wrap_node_run, after_node_run,
on_node_run_error). Use next() for capability-hooked iteration, or
use agent.run() which drives via next() automatically.
_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]
async def next(
    node: _agent_graph.AgentNode[AgentDepsT, OutputDataT],
) -> _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]
Manually drive the agent run by passing in the node you want to run next.
This lets you inspect or mutate the node before continuing execution, or skip certain nodes
under dynamic conditions. The agent run should be stopped when you return an End
node.
Example:
from pydantic_ai import Agent
from pydantic_graph import End

agent = Agent('openai:gpt-5.2')


async def main():
    async with agent.iter('What is the capital of France?') as agent_run:
        next_node = agent_run.next_node  # start with the first node
        nodes = [next_node]
        while not isinstance(next_node, End):
            next_node = await agent_run.next(next_node)
            nodes.append(next_node)
        # Once `next_node` is an End, we've finished:
        print(nodes)
        '''
        [
            UserPromptNode(
                user_prompt='What is the capital of France?',
                instructions_functions=[],
                system_prompts=(),
                system_prompt_functions=[],
                system_prompt_dynamic_functions={},
            ),
            ModelRequestNode(
                request=ModelRequest(
                    parts=[
                        UserPromptPart(
                            content='What is the capital of France?',
                            timestamp=datetime.datetime(...),
                        )
                    ],
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            CallToolsNode(
                model_response=ModelResponse(
                    parts=[TextPart(content='The capital of France is Paris.')],
                    usage=RequestUsage(input_tokens=56, output_tokens=7),
                    model_name='gpt-5.2',
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            End(data=FinalResult(output='The capital of France is Paris.')),
        ]
        '''
        print('Final result:', agent_run.result.output)
        #> Final result: The capital of France is Paris.
_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]] — The next node returned by the graph logic, or an End node if the run has completed.
node — The node to run next in the graph.
def usage() -> _usage.RunUsage
Get usage statistics for the run so far, including token usage, model requests, and so on.
_usage.RunUsage
AgentRunResult
Bases: Generic[OutputDataT]
The final result of an agent run.
output
The output data from the agent run.
Type: OutputDataT
Return the last response from the message history.
Type: _messages.ModelResponse
Metadata associated with this agent run, if configured.
run_id
The unique identifier for the agent run.
Type: str
def all_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return the history of messages.
list[_messages.ModelMessage] — List of messages.
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and want to set the response to the output tool call. If None, the last message will
not be modified.
def all_messages_json(output_tool_return_content: str | None = None) -> bytes
Return all messages from all_messages as JSON bytes.
bytes — JSON bytes representing the messages.
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and want to set the response to the output tool call. If None, the last message will
not be modified.
def new_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return new messages associated with this run.
Messages from older runs are excluded.
list[_messages.ModelMessage] — List of new messages.
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and want to set the response to the output tool call. If None, the last message will
not be modified.
def new_messages_json(output_tool_return_content: str | None = None) -> bytes
Return new messages from new_messages as JSON bytes.
bytes — JSON bytes representing the new messages.
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and want to set the response to the output tool call. If None, the last message will
not be modified.
def usage() -> _usage.RunUsage
Return the usage of the whole run.
_usage.RunUsage
def timestamp() -> datetime
Return the timestamp of the last response.
AgentRunResultEvent
Bases: Generic[OutputDataT]
An event indicating that the agent run has ended, containing the final result of the run.
result
The result of the run.
Type: AgentRunResult[OutputDataT]
Event type identifier, used as a discriminator.
Type: Literal['agent_run_result'] Default: 'agent_run_result'