pydantic_ai.run

AgentRun

Bases: Generic[AgentDepsT, OutputDataT]

A stateful, async-iterable run of an Agent.

You generally obtain an AgentRun instance by entering async with my_agent.iter(...) as agent_run:.

Once you have an instance, you can use it to iterate through the run’s nodes as they execute. When an End is reached, the run finishes and result becomes available.

Example:

from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2')

async def main():
    nodes = []
    # Iterate through the run, recording each node along the way:
    async with agent.iter('What is the capital of France?') as agent_run:
        async for node in agent_run:
            nodes.append(node)
    print(nodes)
    '''
    [
        UserPromptNode(
            user_prompt='What is the capital of France?',
            instructions_functions=[],
            system_prompts=(),
            system_prompt_functions=[],
            system_prompt_dynamic_functions={},
        ),
        ModelRequestNode(
            request=ModelRequest(
                parts=[
                    UserPromptPart(
                        content='What is the capital of France?',
                        timestamp=datetime.datetime(...),
                    )
                ],
                timestamp=datetime.datetime(...),
                run_id='...',
            )
        ),
        CallToolsNode(
            model_response=ModelResponse(
                parts=[TextPart(content='The capital of France is Paris.')],
                usage=RequestUsage(input_tokens=56, output_tokens=7),
                model_name='gpt-5.2',
                timestamp=datetime.datetime(...),
                run_id='...',
            )
        ),
        End(data=FinalResult(output='The capital of France is Paris.')),
    ]
    '''
    print(agent_run.result.output)
    #> The capital of France is Paris.

You can also manually drive the iteration using the next method for more granular control.

Attributes

ctx

The current context of the agent run.

Type: GraphRunContext[_agent_graph.GraphAgentState, _agent_graph.GraphAgentDeps[AgentDepsT, Any]]

next_node

The next node that will be run in the agent graph.

This is the node that will be run next during async iteration; it is also the node to pass to self.next(...) when driving the run manually.

Type: _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]

result

The final result of the run if it has ended, otherwise None.

Once the run returns an End node, result is populated with an AgentRunResult.

Type: AgentRunResult[OutputDataT] | None

metadata

Metadata associated with this agent run, if configured.

Type: dict[str, Any] | None

run_id

The unique identifier for the agent run.

Type: str

Methods

all_messages
def all_messages() -> list[_messages.ModelMessage]

Return all messages for the run so far.

Messages from older runs are included.

Returns

list[_messages.ModelMessage]

all_messages_json
def all_messages_json(output_tool_return_content: str | None = None) -> bytes

Return all messages from all_messages as JSON bytes.

Returns

bytes — JSON bytes representing the messages.

new_messages
def new_messages() -> list[_messages.ModelMessage]

Return new messages for the run so far.

Messages from older runs are excluded.

Returns

list[_messages.ModelMessage]

new_messages_json
def new_messages_json() -> bytes

Return new messages from new_messages as JSON bytes.

Returns

bytes — JSON bytes representing the new messages.

__aiter__
def __aiter__() -> AsyncIterator[_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]]

Provide async-iteration over the nodes in the agent run.

Returns

AsyncIterator[_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]]

__anext__

async

def __anext__() -> _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]

Advance to the next node automatically based on the last returned node.

Note: this uses the graph run’s internal iteration which does NOT call node hooks (before_node_run, wrap_node_run, after_node_run, on_node_run_error). Use next() for capability-hooked iteration, or use agent.run() which drives via next() automatically.

Returns

_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]

next

async

def next(
    node: _agent_graph.AgentNode[AgentDepsT, OutputDataT],
) -> _agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]]

Manually drive the agent run by passing in the node you want to run next.

This lets you inspect or mutate the node before continuing execution, or skip certain nodes under dynamic conditions. The agent run should be stopped when you return an End node.

Example:

from pydantic_ai import Agent
from pydantic_graph import End

agent = Agent('openai:gpt-5.2')

async def main():
    async with agent.iter('What is the capital of France?') as agent_run:
        next_node = agent_run.next_node  # start with the first node
        nodes = [next_node]
        while not isinstance(next_node, End):
            next_node = await agent_run.next(next_node)
            nodes.append(next_node)
        # Once `next_node` is an End, we've finished:
        print(nodes)
        '''
        [
            UserPromptNode(
                user_prompt='What is the capital of France?',
                instructions_functions=[],
                system_prompts=(),
                system_prompt_functions=[],
                system_prompt_dynamic_functions={},
            ),
            ModelRequestNode(
                request=ModelRequest(
                    parts=[
                        UserPromptPart(
                            content='What is the capital of France?',
                            timestamp=datetime.datetime(...),
                        )
                    ],
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            CallToolsNode(
                model_response=ModelResponse(
                    parts=[TextPart(content='The capital of France is Paris.')],
                    usage=RequestUsage(input_tokens=56, output_tokens=7),
                    model_name='gpt-5.2',
                    timestamp=datetime.datetime(...),
                    run_id='...',
                )
            ),
            End(data=FinalResult(output='The capital of France is Paris.')),
        ]
        '''
        print('Final result:', agent_run.result.output)
        #> Final result: The capital of France is Paris.
Returns

_agent_graph.AgentNode[AgentDepsT, OutputDataT] | End[FinalResult[OutputDataT]] — The next node returned by the graph logic, or an End node if the run has completed.

Parameters

node : _agent_graph.AgentNode[AgentDepsT, OutputDataT]

The node to run next in the graph.

usage
def usage() -> _usage.RunUsage

Get usage statistics for the run so far, including token usage, model requests, and so on.

Returns

_usage.RunUsage

AgentRunResult

Bases: Generic[OutputDataT]

The final result of an agent run.

Attributes

output

The output data from the agent run.

Type: OutputDataT

response

The last response from the message history.

Type: _messages.ModelResponse

metadata

Metadata associated with this agent run, if configured.

Type: dict[str, Any] | None

run_id

The unique identifier for the agent run.

Type: str

Methods

all_messages
def all_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return the history of messages.

Returns

list[_messages.ModelMessage] — List of messages.

Parameters

output_tool_return_content : str | None Default: None

The return content to set on the output tool call in the last message. This provides a convenient way to modify the output tool call's response if you want to continue the conversation. If None, the last message will not be modified.

all_messages_json
def all_messages_json(output_tool_return_content: str | None = None) -> bytes

Return all messages from all_messages as JSON bytes.

Returns

bytes — JSON bytes representing the messages.

Parameters

output_tool_return_content : str | None Default: None

The return content to set on the output tool call in the last message. This provides a convenient way to modify the output tool call's response if you want to continue the conversation. If None, the last message will not be modified.

new_messages
def new_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return new messages associated with this run.

Messages from older runs are excluded.

Returns

list[_messages.ModelMessage] — List of new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content to set on the output tool call in the last message. This provides a convenient way to modify the output tool call's response if you want to continue the conversation. If None, the last message will not be modified.

new_messages_json
def new_messages_json(output_tool_return_content: str | None = None) -> bytes

Return new messages from new_messages as JSON bytes.

Returns

bytes — JSON bytes representing the new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content to set on the output tool call in the last message. This provides a convenient way to modify the output tool call's response if you want to continue the conversation. If None, the last message will not be modified.

usage
def usage() -> _usage.RunUsage

Return the usage of the whole run.

Returns

_usage.RunUsage

timestamp
def timestamp() -> datetime

Return the timestamp of the last response.

Returns

datetime

AgentRunResultEvent

Bases: Generic[OutputDataT]

An event indicating the agent run ended and containing the final result of the agent run.

Attributes

result

The result of the run.

Type: AgentRunResult[OutputDataT]

event_kind

Event type identifier, used as a discriminator.

Type: Literal['agent_run_result'] Default: 'agent_run_result'