pydantic_ai.result
StreamedRunResult
Bases: Generic[AgentDepsT, OutputDataT]
Result of a streamed run that returns structured data via a tool call.
Whether the stream has all been received.
This is set to True when one of
stream_output,
stream_text,
stream_responses or
get_output completes.
Type: bool Default: False
Return the current state of the response.
Type: _messages.ModelResponse
Metadata associated with this agent run, if configured.
The unique identifier for the agent run.
Type: str
def all_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return the history of messages.
list[_messages.ModelMessage] — List of messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def all_messages_json(output_tool_return_content: str | None = None) -> bytes
Return all messages from all_messages as JSON bytes.
bytes — JSON bytes representing the messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def new_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return new messages associated with this run.
Messages from older runs are excluded.
list[_messages.ModelMessage] — List of new messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def new_messages_json(output_tool_return_content: str | None = None) -> bytes
Return new messages from new_messages as JSON bytes.
bytes — JSON bytes representing the new messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
@async
@deprecated
def stream(debounce_by: float | None = 0.1) -> AsyncIterator[OutputDataT]
Deprecated; use stream_output instead.
AsyncIterator[OutputDataT]
@async
def stream_output(debounce_by: float | None = 0.1) -> AsyncIterator[OutputDataT]
Stream the output as an async iterable.
The pydantic validator for structured data will be called in partial mode on each iteration.
AsyncIterator[OutputDataT] — An async iterable of the response data.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the output chunks. None means no debouncing.
Debouncing is particularly important for long structured outputs to reduce the overhead of
performing validation as each token is received.
@async
def stream_text(
delta: bool = False,
debounce_by: float | None = 0.1,
) -> AsyncIterator[str]
Stream the text result as an async iterable.
delta : bool Default: False
If True, yield each chunk of text as it is received; if False (default), yield the full text
up to the current point.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the response chunks. None means no debouncing.
Debouncing is particularly important for long structured responses to reduce the overhead of
performing validation as each token is received.
@async
@deprecated
def stream_structured(
debounce_by: float | None = 0.1,
) -> AsyncIterator[tuple[_messages.ModelResponse, bool]]
Deprecated; use stream_responses instead.
AsyncIterator[tuple[_messages.ModelResponse, bool]]
@async
def stream_responses(
debounce_by: float | None = 0.1,
) -> AsyncIterator[tuple[_messages.ModelResponse, bool]]
Stream the response as an async iterable of structured LLM messages.
AsyncIterator[tuple[_messages.ModelResponse, bool]] — An async iterable of the structured response message and whether that is the last message.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the response chunks. None means no debouncing.
Debouncing is particularly important for long structured responses to reduce the overhead of
performing validation as each token is received.
@async
def get_output() -> OutputDataT
Stream the whole response, validate and return it.
OutputDataT
def usage() -> RunUsage
Return the usage of the whole run.
def timestamp() -> datetime
Get the timestamp of the response.
@async
@deprecated
def validate_structured_output(
message: _messages.ModelResponse,
allow_partial: bool = False,
) -> OutputDataT
Deprecated; use validate_response_output instead.
OutputDataT
@async
def validate_response_output(
message: _messages.ModelResponse,
allow_partial: bool = False,
) -> OutputDataT
Validate a structured result message.
OutputDataT
StreamedRunResultSync
Bases: Generic[AgentDepsT, OutputDataT]
Synchronous wrapper for StreamedRunResult that only exposes sync methods.
Return the current state of the response.
Type: _messages.ModelResponse
The unique identifier for the agent run.
Type: str
Metadata associated with this agent run, if configured.
Whether the stream has all been received.
This is set to True when one of
stream_output,
stream_text,
stream_responses or
get_output completes.
Type: bool
def all_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return the history of messages.
list[_messages.ModelMessage] — List of messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def all_messages_json(output_tool_return_content: str | None = None) -> bytes
Return all messages from all_messages as JSON bytes.
bytes — JSON bytes representing the messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def new_messages(
output_tool_return_content: str | None = None,
) -> list[_messages.ModelMessage]
Return new messages associated with this run.
Messages from older runs are excluded.
list[_messages.ModelMessage] — List of new messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def new_messages_json(output_tool_return_content: str | None = None) -> bytes
Return new messages from new_messages as JSON bytes.
bytes — JSON bytes representing the new messages.
output_tool_return_content : str | None Default: None
The return content of the tool call to set in the last message.
This provides a convenient way to modify the content of the output tool call if you want to continue
the conversation and set the response to the output tool call. If None, the last message is not modified.
def stream_output(debounce_by: float | None = 0.1) -> Iterator[OutputDataT]
Stream the output as an iterable.
The pydantic validator for structured data will be called in partial mode on each iteration.
Iterator[OutputDataT] — An iterable of the response data.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the output chunks. None means no debouncing.
Debouncing is particularly important for long structured outputs to reduce the overhead of
performing validation as each token is received.
def stream_text(delta: bool = False, debounce_by: float | None = 0.1) -> Iterator[str]
Stream the text result as an iterable.
delta : bool Default: False
If True, yield each chunk of text as it is received; if False (default), yield the full text
up to the current point.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the response chunks. None means no debouncing.
Debouncing is particularly important for long structured responses to reduce the overhead of
performing validation as each token is received.
def stream_responses(
debounce_by: float | None = 0.1,
) -> Iterator[tuple[_messages.ModelResponse, bool]]
Stream the response as an iterable of structured LLM messages.
Iterator[tuple[_messages.ModelResponse, bool]] — An iterable of the structured response message and whether that is the last message.
debounce_by : float | None Default: 0.1
By how much (if at all) to debounce/group the response chunks. None means no debouncing.
Debouncing is particularly important for long structured responses to reduce the overhead of
performing validation as each token is received.
def get_output() -> OutputDataT
Stream the whole response, validate and return it.
OutputDataT
def usage() -> RunUsage
Return the usage of the whole run.
def timestamp() -> datetime
Get the timestamp of the response.
def validate_response_output(
message: _messages.ModelResponse,
allow_partial: bool = False,
) -> OutputDataT
Validate a structured result message.
OutputDataT