pydantic_ai.result

StreamedRunResult

Bases: Generic[AgentDepsT, OutputDataT]

Result of a streamed run that returns structured data via a tool call.

Attributes

is_complete

Whether the stream has all been received.

This is set to True when one of stream_output, stream_text, stream_responses or get_output completes.

Type: bool Default: False (set internally; not an init argument)

response

Return the current state of the response.

Type: _messages.ModelResponse

metadata

Metadata associated with this agent run, if configured.

Type: dict[str, Any] | None

run_id

The unique identifier for the agent run.

Type: str

Methods

all_messages
def all_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return the history of messages.

Returns

list[_messages.ModelMessage] — List of messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

all_messages_json
def all_messages_json(output_tool_return_content: str | None = None) -> bytes

Return all messages from all_messages as JSON bytes.

Returns

bytes — JSON bytes representing the messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

new_messages
def new_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return new messages associated with this run.

Messages from older runs are excluded.

Returns

list[_messages.ModelMessage] — List of new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

new_messages_json
def new_messages_json(output_tool_return_content: str | None = None) -> bytes

Return new messages from new_messages as JSON bytes.

Returns

bytes — JSON bytes representing the new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

stream

@async

@deprecated

def stream(debounce_by: float | None = 0.1) -> AsyncIterator[OutputDataT]

Deprecated: use stream_output instead.

Returns

AsyncIterator[OutputDataT]

stream_output

@async

def stream_output(debounce_by: float | None = 0.1) -> AsyncIterator[OutputDataT]

Stream the output as an async iterable.

The pydantic validator for structured data will be called in partial mode on each iteration.

Returns

AsyncIterator[OutputDataT] — An async iterable of the response data.

Parameters

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the output chunks. None disables debouncing. Debouncing is particularly important for long structured outputs, as it reduces the overhead of running validation on every received token.

stream_text

@async

def stream_text(
    delta: bool = False,
    debounce_by: float | None = 0.1,
) -> AsyncIterator[str]

Stream the text result as an async iterable.

Returns

AsyncIterator[str]

Parameters

delta : bool Default: False

If True, yield each new chunk of text as it is received; if False (the default), yield the full text received so far.

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the response chunks. None disables debouncing. Debouncing is particularly important for long structured responses, as it reduces the overhead of running validation on every received token.

stream_structured

@async

@deprecated

def stream_structured(
    debounce_by: float | None = 0.1,
) -> AsyncIterator[tuple[_messages.ModelResponse, bool]]

Deprecated: use stream_responses instead.

Returns

AsyncIterator[tuple[_messages.ModelResponse, bool]]

stream_responses

@async

def stream_responses(
    debounce_by: float | None = 0.1,
) -> AsyncIterator[tuple[_messages.ModelResponse, bool]]

Stream the response as an async iterable of structured LLM messages.

Returns

AsyncIterator[tuple[_messages.ModelResponse, bool]] — An async iterable of the structured response message and whether that is the last message.

Parameters

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the response chunks. None disables debouncing. Debouncing is particularly important for long structured responses, as it reduces the overhead of running validation on every received token.

get_output

@async

def get_output() -> OutputDataT

Stream the whole response, then validate and return it.

Returns

OutputDataT

usage
def usage() -> RunUsage

Return the usage of the whole run.

Returns

RunUsage

timestamp
def timestamp() -> datetime

Get the timestamp of the response.

Returns

datetime

validate_structured_output

@async

@deprecated

def validate_structured_output(
    message: _messages.ModelResponse,
    allow_partial: bool = False,
) -> OutputDataT

Deprecated: use validate_response_output instead.

Returns

OutputDataT

validate_response_output

@async

def validate_response_output(
    message: _messages.ModelResponse,
    allow_partial: bool = False,
) -> OutputDataT

Validate a structured result message.

Returns

OutputDataT

StreamedRunResultSync

Bases: Generic[AgentDepsT, OutputDataT]

Synchronous wrapper for StreamedRunResult that only exposes sync methods.

Attributes

response

Return the current state of the response.

Type: _messages.ModelResponse

run_id

The unique identifier for the agent run.

Type: str

metadata

Metadata associated with this agent run, if configured.

Type: dict[str, Any] | None

is_complete

Whether the stream has all been received.

This is set to True when one of stream_output, stream_text, stream_responses or get_output completes.

Type: bool

Methods

all_messages
def all_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return the history of messages.

Returns

list[_messages.ModelMessage] — List of messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

all_messages_json
def all_messages_json(output_tool_return_content: str | None = None) -> bytes

Return all messages from all_messages as JSON bytes.

Returns

bytes — JSON bytes representing the messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

new_messages
def new_messages(
    output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]

Return new messages associated with this run.

Messages from older runs are excluded.

Returns

list[_messages.ModelMessage] — List of new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

new_messages_json
def new_messages_json(output_tool_return_content: str | None = None) -> bytes

Return new messages from new_messages as JSON bytes.

Returns

bytes — JSON bytes representing the new messages.

Parameters

output_tool_return_content : str | None Default: None

The return content of the tool call to set in the last message. This provides a convenient way to modify the content of the output tool call if you want to continue the conversation and want to set the response to the output tool call. If None, the last message will not be modified.

stream_output
def stream_output(debounce_by: float | None = 0.1) -> Iterator[OutputDataT]

Stream the output as an iterable.

The pydantic validator for structured data will be called in partial mode on each iteration.

Returns

Iterator[OutputDataT] — An iterable of the response data.

Parameters

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the output chunks. None disables debouncing. Debouncing is particularly important for long structured outputs, as it reduces the overhead of running validation on every received token.

stream_text
def stream_text(delta: bool = False, debounce_by: float | None = 0.1) -> Iterator[str]

Stream the text result as an iterable.

Returns

Iterator[str]

Parameters

delta : bool Default: False

If True, yield each new chunk of text as it is received; if False (the default), yield the full text received so far.

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the response chunks. None disables debouncing. Debouncing is particularly important for long structured responses, as it reduces the overhead of running validation on every received token.

stream_responses
def stream_responses(
    debounce_by: float | None = 0.1,
) -> Iterator[tuple[_messages.ModelResponse, bool]]

Stream the response as an iterable of structured LLM messages.

Returns

Iterator[tuple[_messages.ModelResponse, bool]] — An iterable of the structured response message and whether that is the last message.

Parameters

debounce_by : float | None Default: 0.1

By how much (if at all) to debounce/group the response chunks. None disables debouncing. Debouncing is particularly important for long structured responses, as it reduces the overhead of running validation on every received token.

get_output
def get_output() -> OutputDataT

Stream the whole response, then validate and return it.

Returns

OutputDataT

usage
def usage() -> RunUsage

Return the usage of the whole run.

Returns

RunUsage

timestamp
def timestamp() -> datetime

Get the timestamp of the response.

Returns

datetime

validate_response_output
def validate_response_output(
    message: _messages.ModelResponse,
    allow_partial: bool = False,
) -> OutputDataT

Validate a structured result message.

Returns

OutputDataT