pydantic_ai.messages
The structure of ModelMessage can be shown as a graph:
```mermaid
graph RL
    SystemPromptPart(SystemPromptPart) --- ModelRequestPart
    UserPromptPart(UserPromptPart) --- ModelRequestPart
    ToolReturnPart(ToolReturnPart) --- ModelRequestPart
    RetryPromptPart(RetryPromptPart) --- ModelRequestPart
    TextPart(TextPart) --- ModelResponsePart
    ToolCallPart(ToolCallPart) --- ModelResponsePart
    ThinkingPart(ThinkingPart) --- ModelResponsePart
    ModelRequestPart("ModelRequestPart<br>(Union)") --- ModelRequest
    ModelRequest("ModelRequest(parts=list[...])") --- ModelMessage
    ModelResponsePart("ModelResponsePart<br>(Union)") --- ModelResponse
    ModelResponse("ModelResponse(parts=list[...])") --- ModelMessage("ModelMessage<br>(Union)")
```
A system prompt, generally written by the application developer.
This gives the model context and guidance on how to respond.
The content of the prompt.
Type: str
The timestamp of the prompt.
Type: datetime Default: field(default_factory=_now_utc)
The ref of the dynamic system prompt function that generated this part.
Only set if system prompt is dynamic, see system_prompt for more information.
Type: str | None Default: None
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘system-prompt’] Default: 'system-prompt'
Bases: ABC
Abstract base class for any URL-based file.
The URL of the file.
Type: str
Controls whether the file is downloaded and how SSRF protection is applied:
- If False, the URL is sent directly to providers that support it. For providers that don't, the file is downloaded with SSRF protection (blocks private IPs and cloud metadata).
- If True, the file is always downloaded with SSRF protection (blocks private IPs and cloud metadata).
- If 'allow-local', the file is always downloaded, allowing private IPs but still blocking cloud metadata.
Type: ForceDownloadMode Default: False
Vendor-specific metadata for the file.
Supported by:
- GoogleModel: VideoUrl.vendor_metadata is used as video_metadata: https://ai.google.dev/gemini-api/docs/video-understanding#customize-video-processing
- OpenAIChatModel, OpenAIResponsesModel: ImageUrl.vendor_metadata['detail'] is used as the detail setting for images
- XaiModel: ImageUrl.vendor_metadata['detail'] is used as the detail setting for images
Type: dict[str, Any] | None Default: None
Return the media type of the file, based on the URL or the provided media_type.
Type: str
The identifier of the file, such as a unique ID.
This identifier can be provided to the model in a message to allow it to refer to this file in a tool call argument,
and the tool can look up the file in question by iterating over the message history and finding the matching FileUrl.
This identifier is only automatically passed to the model when the FileUrl is returned by a tool.
If you’re passing the FileUrl as a user message, it’s up to you to include a separate text part with the identifier,
e.g. “This is file <identifier>:” preceding the FileUrl.
It’s also included in inline-text delimiters for providers that require inlining text documents, so the model can distinguish multiple files.
Type: str
The file format.
Type: str
Bases: FileUrl
A URL to a video.
The URL of the video.
Type: str
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘video-url’] Default: 'video-url'
True if the URL has a YouTube domain.
Type: bool
The file format of the video.
The choice of supported formats was based on the Bedrock Converse API. Other APIs don't require a format.
Type: VideoFormat
Bases: FileUrl
A URL to an audio file.
The URL of the audio file.
Type: str
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘audio-url’] Default: 'audio-url'
The file format of the audio file.
Type: AudioFormat
Bases: FileUrl
A URL to an image.
The URL of the image.
Type: str
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘image-url’] Default: 'image-url'
The file format of the image.
The choice of supported formats was based on the Bedrock Converse API. Other APIs don't require a format.
Type: ImageFormat
Bases: FileUrl
A URL to a document.
The URL of the document.
Type: str
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘document-url’] Default: 'document-url'
The file format of the document.
The choice of supported formats was based on the Bedrock Converse API. Other APIs don't require a format.
Type: DocumentFormat
String content that is tagged with additional metadata.
This is useful for including metadata that can be accessed programmatically by the application, but is not sent to the LLM.
The content that is sent to the LLM.
Type: str
Additional data that can be accessed programmatically by the application but is not sent to the LLM.
Type: Any Default: None
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘text-content’] Default: 'text-content'
Binary content, e.g. an audio or image file.
The binary file data.
Use .base64 to get the base64-encoded string.
Type: bytes
The media type of the binary data.
Type: AudioMediaType | ImageMediaType | DocumentMediaType | str
Vendor-specific metadata for the file.
Supported by:
- GoogleModel: BinaryContent.vendor_metadata is used as video_metadata: https://ai.google.dev/gemini-api/docs/video-understanding#customize-video-processing
- OpenAIChatModel, OpenAIResponsesModel: BinaryContent.vendor_metadata['detail'] is used as the detail setting for images
- XaiModel: BinaryContent.vendor_metadata['detail'] is used as the detail setting for images
Type: dict[str, Any] | None Default: None
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘binary’] Default: 'binary'
Identifier for the binary content, such as a unique ID.
This identifier can be provided to the model in a message to allow it to refer to this file in a tool call argument,
and the tool can look up the file in question by iterating over the message history and finding the matching BinaryContent.
This identifier is only automatically passed to the model when the BinaryContent is returned by a tool.
If you’re passing the BinaryContent as a user message, it’s up to you to include a separate text part with the identifier,
e.g. “This is file <identifier>:” preceding the BinaryContent.
It’s also included in inline-text delimiters for providers that require inlining text documents, so the model can distinguish multiple files.
Type: str
Convert the BinaryContent to a data URI.
Type: str
Return the binary data as a base64-encoded string. Default encoding is UTF-8.
Type: str
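The two conversions above can be sketched with the standard library. `to_base64` and `to_data_uri` are illustrative stand-ins, not the library's internals:

```python
import base64


def to_base64(data: bytes) -> str:
    # base64-encode the raw bytes, then decode the ASCII result to str
    return base64.b64encode(data).decode("utf-8")


def to_data_uri(data: bytes, media_type: str) -> str:
    # data URI layout: "data:<media type>;base64,<payload>"
    return f"data:{media_type};base64,{to_base64(data)}"


print(to_data_uri(b"hello", "text/plain"))  # data:text/plain;base64,aGVsbG8=
```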
Return True if the media type is an audio type.
Type: bool
Return True if the media type is an image type.
Type: bool
Return True if the media type is a video type.
Type: bool
Return True if the media type is a document type.
Type: bool
The file format of the binary content.
Type: str
@staticmethod
def narrow_type(bc: BinaryContent) -> BinaryContent | BinaryImage
Narrow the type of the BinaryContent to BinaryImage if it’s an image.
@classmethod
def from_data_uri(cls, data_uri: str) -> BinaryContent
Create a BinaryContent from a data URI.
@classmethod
def from_path(cls, path: PathLike[str]) -> BinaryContent
Create a BinaryContent from a path.
Defaults to ‘application/octet-stream’ if the media type cannot be inferred.
FileNotFoundError — if the file does not exist.
PermissionError — if the file cannot be read.
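The media-type inference described above can be sketched with Python's mimetypes module; the actual implementation may differ, and results for uncommon extensions vary across platforms:

```python
import mimetypes


def guess_media_type(path: str) -> str:
    # Infer the media type from the file extension; fall back to the
    # generic binary type when the extension is unknown or missing.
    media_type, _encoding = mimetypes.guess_type(path)
    return media_type or "application/octet-stream"


print(guess_media_type("photo.png"))    # image/png
print(guess_media_type("file-abc123"))  # application/octet-stream
```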
Bases: BinaryContent
Binary content that’s guaranteed to be an image.
A cache point marker for prompt caching.
Can be inserted into UserPromptPart.content to mark cache boundaries. Models that don’t support caching will filter these out.
Supported by:
- Anthropic
- Amazon Bedrock (Converse API)
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘cache-point’] Default: 'cache-point'
The cache time-to-live, either “5m” (5 minutes) or “1h” (1 hour).
Supported by:
- Anthropic. See https://docs.claude.com/en/docs/build-with-claude/prompt-caching#1-hour-cache-duration for more information.
Type: Literal[‘5m’, ‘1h’] Default: '5m'
A reference to a file uploaded to a provider’s file storage by ID.
This allows referencing files that have been uploaded via provider-specific file APIs rather than providing the file content directly.
Supported by:
- AnthropicModel
- OpenAIChatModel
- OpenAIResponsesModel
- BedrockConverseModel
- GoogleModel (GLA: Files API URIs, Vertex: GCS gs:// URIs)
- XaiModel
The provider-specific file identifier.
For most providers, this is the file ID returned by the provider’s upload API.
For GoogleModel (Vertex), this must be a GCS URI (gs://bucket/path).
For GoogleModel (GLA), this must be a Google Files API URI (https://generativelanguage.googleapis.com/...).
For BedrockConverseModel, this must be an S3 URI (s3://bucket/key).
Type: str
The provider this file belongs to.
This is required because file IDs are not portable across providers, and using a file ID with the wrong provider will always result in an error.
Tip: Use model.system to get the provider name dynamically.
Type: UploadedFileProviderName
Vendor-specific metadata for the file.
The expected shape of this dictionary depends on the provider:
Supported by:
- GoogleModel: used as video_metadata for video files
Type: dict[str, Any] | None Default: None
Type identifier, this is available on all parts as a discriminator.
Type: Literal[‘uploaded-file’] Default: 'uploaded-file'
Return the media type of the file, inferred from file_id if not explicitly provided.
Note: Inference relies on the file extension in file_id.
For opaque file IDs (e.g., 'file-abc123'), the media type will default to 'application/octet-stream'.
Inference relies on Python’s mimetypes module, whose results may vary across platforms.
Required by some providers (e.g., Bedrock) for certain file types.
Type: str
The identifier of the file, such as a unique ID.
This identifier can be provided to the model in a message to allow it to refer to this file in a tool call argument,
and the tool can look up the file in question by iterating over the message history and finding the matching UploadedFile.
This identifier is only automatically passed to the model when the UploadedFile is returned by a tool.
If you’re passing the UploadedFile as a user message, it’s up to you to include a separate text part with the identifier,
e.g. “This is file <identifier>:” preceding the UploadedFile.
Type: str
A general-purpose media-type-to-format mapping.
Maps media types to format strings (e.g. 'image/png' -> 'png'). Covers image, video,
audio, and document types. Currently used by Bedrock, which requires explicit format strings.
Type: str
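A few illustrative entries of such a mapping (the real table covers many more media types than shown here):

```python
# Illustrative entries only: media type -> explicit format string,
# as required by providers like Bedrock.
MEDIA_TYPE_FORMATS: dict[str, str] = {
    "image/png": "png",
    "image/jpeg": "jpeg",
    "audio/mpeg": "mp3",
    "video/mp4": "mp4",
    "application/pdf": "pdf",
}


def format_for(media_type: str) -> str:
    # Look up the short format string for a media type.
    return MEDIA_TYPE_FORMATS[media_type]
```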
A structured tool return that separates the tool result from additional content sent to the model.
The return value to be used in the tool response.
Type: ToolReturnContent
Content sent to the model as a separate UserPromptPart.
Use this when you want content to appear outside the tool result message.
For multimodal content that should be sent natively in the tool result,
return it directly from the tool function or include it in return_value.
Type: str | Sequence[UserContent] | None Default: None
Additional data accessible by the application but not sent to the LLM.
Type: Any Default: None
A user prompt, generally written by the end user.
Content comes from the user_prompt parameter of Agent.run,
Agent.run_sync, and Agent.run_stream.
The content of the prompt.
Type: str | Sequence[UserContent]
The timestamp of the prompt.
Type: datetime Default: field(default_factory=_now_utc)
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘user-prompt’] Default: 'user-prompt'
Base class for tool return parts.
The name of the tool that was called.
Type: str
The tool return content, which may include multimodal files.
Type: ToolReturnContent
The tool call identifier, this is used by some models including OpenAI.
In case the tool call id is not provided by the model, Pydantic AI will generate a random one.
Type: str Default: field(default_factory=_generate_tool_call_id)
Additional data accessible by the application but not sent to the LLM.
Type: Any Default: None
The timestamp, when the tool returned.
Type: datetime Default: field(default_factory=_now_utc)
The outcome of the tool call.
- 'success': The tool executed successfully.
- 'failed': The tool raised an error during execution.
- 'denied': The tool call was denied by the approval mechanism.
Type: Literal[‘success’, ‘failed’, ‘denied’] Default: 'success'
The multimodal file parts from content (ImageUrl, AudioUrl, DocumentUrl, VideoUrl, BinaryContent).
Type: list[MultiModalContent]
def content_items(mode: Literal['raw'] = 'raw') -> list[ToolReturnContent]
def content_items(mode: Literal['str']) -> list[str | MultiModalContent]
def content_items(mode: Literal['jsonable']) -> list[Any | MultiModalContent]
Return content as a flat list for iteration, with optional serialization.
list[ToolReturnContent] | list[str | MultiModalContent] | list[Any | MultiModalContent]
mode : Literal[‘raw’, ‘str’, ‘jsonable’] Default: 'raw'
Controls serialization of non-file items:
- 'raw': No serialization. Returns items as-is.
- 'str': Non-file items are serialized to strings via tool_return_ta. File items (MultiModalContent) pass through unchanged.
- 'jsonable': Non-file items are serialized to JSON-compatible Python objects via tool_return_ta. File items pass through unchanged.
def model_response_str() -> str
Return a string representation of the data content for the model.
This excludes multimodal files - use .files to get those separately.
def model_response_object() -> dict[str, Any]
Return a dictionary representation of the data content, wrapping non-dict types appropriately.
This excludes multimodal files - use .files to get those separately.
Gemini supports JSON dict return values, but no other JSON types, hence we wrap anything else in a dict.
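The wrapping rule can be sketched as follows; the 'return_value' key is an assumption for illustration, not necessarily the key the library uses:

```python
from typing import Any


def wrap_for_model(value: Any) -> dict[str, Any]:
    # Dicts pass through unchanged; any other JSON value is wrapped so
    # the result is always a JSON object, as Gemini requires.
    if isinstance(value, dict):
        return value
    return {"return_value": value}


print(wrap_for_model({"a": 1}))  # {'a': 1}
print(wrap_for_model([1, 2]))    # {'return_value': [1, 2]}
```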
def model_response_str_and_user_content() -> tuple[str, list[UserContent]]
Build a text-only tool result with multimodal files extracted for a trailing user message.
For providers whose tool result API only accepts text. Multimodal files are referenced by identifier in the tool result text (‘See file {id}.’) and included in full in the returned file content list (‘This is file {id}:’ followed by the file).
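The splitting described above can be sketched standalone; the tuple-of-(identifier, file) input shape is an assumption for illustration:

```python
from typing import Any


def text_result_with_files(
    result_text: str, files: list[tuple[str, Any]]
) -> tuple[str, list[Any]]:
    # Reference each file by identifier in the tool result text...
    refs = " ".join(f"See file {fid}." for fid, _ in files)
    tool_text = f"{result_text} {refs}".strip()
    # ...and carry the full file content in a trailing user-message list.
    user_content: list[Any] = []
    for fid, file in files:
        user_content.append(f"This is file {fid}:")
        user_content.append(file)
    return tool_text, user_content
```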
def has_content() -> bool
Return True if the tool return has content.
Bases: BaseToolReturnPart
A tool return message, this encodes the result of running a tool.
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘tool-return’] Default: 'tool-return'
Bases: BaseToolReturnPart
A tool return message from a built-in tool.
The name of the provider that generated the response.
Required to be set when provider_details is set.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘builtin-tool-return’] Default: 'builtin-tool-return'
A message back to a model asking it to try again.
This can be sent for a number of reasons:
- Pydantic validation of tool arguments failed, here content is derived from a Pydantic ValidationError
- a tool raised a ModelRetry exception
- no tool was found for the tool name
- the model returned plain text when a structured response was expected
- Pydantic validation of a structured response failed, here content is derived from a Pydantic ValidationError
- an output validator raised a ModelRetry exception
Details of why and how the model should retry.
If the retry was triggered by a ValidationError, this will be a list of
error details.
Type: list[pydantic_core.ErrorDetails] | str
The name of the tool that was called, if any.
Type: str | None Default: None
The tool call identifier, this is used by some models including OpenAI.
In case the tool call id is not provided by the model, Pydantic AI will generate a random one.
Type: str Default: field(default_factory=_generate_tool_call_id)
The timestamp, when the retry was triggered.
Type: datetime Default: field(default_factory=_now_utc)
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘retry-prompt’] Default: 'retry-prompt'
def model_response() -> str
Return a string message describing why the retry is requested.
A single instruction block with metadata about its origin.
Instructions are composed of one or more parts, each of which can be static (from a literal string) or dynamic (from a function, template, or toolset). This distinction allows model implementations to make intelligent caching decisions — e.g. Anthropic’s prompt caching can cache the static prefix while leaving dynamic instructions uncached.
The text content of this instruction block.
Type: str
Whether this instruction came from a dynamic source (function, template, or toolset).
Static instructions (dynamic=False) come from literal strings passed to Agent(instructions=...).
Dynamic instructions (dynamic=True) come from @agent.instructions functions, TemplateStr,
or toolset get_instructions() methods.
Type: bool Default: False
Part type identifier, used as a discriminator for deserialization.
Type: Literal[‘instruction’] Default: 'instruction'
@staticmethod
def join(parts: Sequence[InstructionPart]) -> str | None
Join instruction parts into a single string, separated by double newlines.
@staticmethod
def sorted(parts: Sequence[InstructionPart]) -> list[InstructionPart]
Sort instruction parts with static (dynamic=False) before dynamic, preserving relative order.
A request generated by Pydantic AI and sent to a model, e.g. a message from the Pydantic AI app to the model.
The parts of the user message.
Type: Sequence[ModelRequestPart]
The timestamp when the request was sent to the model.
Type: datetime | None Default: None
The instructions string for this request, rendered from structured instruction parts.
Type: str | None Default: None
Message type identifier, this is available on all parts as a discriminator.
Type: Literal[‘request’] Default: 'request'
The unique identifier of the agent run in which this message originated.
Type: str | None Default: None
Additional data that can be accessed programmatically by the application but is not sent to the LLM.
Type: dict[str, Any] | None Default: None
@classmethod
def user_text_prompt(
cls,
user_prompt: str,
instructions: str | None = None,
) -> ModelRequest
Create a ModelRequest with a single user prompt as text.
A plain text response from a model.
The text content of the response.
Type: str
An optional identifier of the text part.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: str | None Default: None
The name of the provider that generated the response.
Required to be set when provider_details or id is set.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘text’] Default: 'text'
def has_content() -> bool
Return True if the text content is non-empty.
A thinking response from a model.
The thinking content of the response.
Type: str
The identifier of the thinking part.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: str | None Default: None
The signature of the thinking.
Supported by:
- Anthropic (corresponds to the signature field)
- Bedrock (corresponds to the signature field)
- Google (corresponds to the thought_signature field)
- OpenAI (corresponds to the encrypted_content field)
When this field is set, provider_name is required to identify the provider that generated this data.
Type: str | None Default: None
The name of the provider that generated the response.
Signatures are only sent back to the same provider.
Required to be set when provider_details, id or signature is set.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘thinking’] Default: 'thinking'
def has_content() -> bool
Return True if the thinking content is non-empty.
A file response from a model.
The file content of the response.
Type: Annotated[BinaryContent, pydantic.AfterValidator(BinaryImage.narrow_type)]
The identifier of the file part.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: str | None Default: None
The name of the provider that generated the response.
Required to be set when provider_details or id is set.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘file’] Default: 'file'
def has_content() -> bool
Return True if the file content is non-empty.
A tool call from a model.
The name of the tool to call.
Type: str
The arguments to pass to the tool.
This is stored either as a JSON string or a Python dictionary depending on how data was received.
Type: str | dict[str, Any] | None Default: None
The tool call identifier, this is used by some models including OpenAI.
In case the tool call id is not provided by the model, Pydantic AI will generate a random one.
Type: str Default: field(default_factory=_generate_tool_call_id)
An optional identifier of the tool call part, separate from the tool call ID.
This is used by some APIs like OpenAI Responses.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: str | None Default: None
The name of the provider that generated the response.
Builtin tool calls are only sent back to the same provider.
Required to be set when provider_details or id is set.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
def args_as_dict(raise_if_invalid: bool = False) -> dict[str, Any]
Return the arguments as a Python dictionary.
This is just for convenience with models that require dicts as input.
raise_if_invalid : bool Default: False
If True, a ValueError or AssertionError caused by malformed JSON in args will be re-raised. When False (the default), malformed JSON is handled gracefully by returning {'INVALID_JSON': '<raw args>'} so that the value can still be sent to a model API (e.g. during a retry flow) without crashing.
def args_as_json_str() -> str
Return the arguments as a JSON string.
This is just for convenience with models that require JSON strings as input.
def has_content() -> bool
Return True if the tool call has content.
Bases: BaseToolCallPart
A tool call from a model.
Part type identifier, this is available on all parts as a discriminator. Note that this is different from ToolCallPartDelta.part_delta_kind.
Type: Literal[‘tool-call’] Default: 'tool-call'
Bases: BaseToolCallPart
A tool call to a built-in tool.
Part type identifier, this is available on all parts as a discriminator.
Type: Literal[‘builtin-tool-call’] Default: 'builtin-tool-call'
A response from a model, e.g. a message from the model to the Pydantic AI app.
The parts of the model message.
Type: Sequence[ModelResponsePart]
Usage information for the request.
This has a default to make tests easier, and to support loading old messages where usage will be missing.
Type: RequestUsage Default: field(default_factory=RequestUsage)
The name of the model that generated the response.
Type: str | None Default: None
The timestamp when the response was received locally.
This is always a high-precision local datetime. Provider-specific timestamps
(if available) are stored in provider_details['timestamp'].
Type: datetime Default: field(default_factory=_now_utc)
Message type identifier, this is available on all parts as a discriminator.
Type: Literal[‘response’] Default: 'response'
The name of the LLM provider that generated the response.
Type: str | None Default: None
The base URL of the LLM provider that generated the response.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
Type: Annotated[dict[str, Any] | None, pydantic.Field(validation_alias=pydantic.AliasChoices('provider_details', 'vendor_details'))] Default: None
The request ID as specified by the model provider. This can be used to track the specific request to the model.
Type: Annotated[str | None, pydantic.Field(validation_alias=pydantic.AliasChoices('provider_response_id', 'vendor_id'))] Default: None
Reason the model finished generating the response, normalized to OpenTelemetry values.
Type: FinishReason | None Default: None
The unique identifier of the agent run in which this message originated.
Type: str | None Default: None
Additional data that can be accessed programmatically by the application but is not sent to the LLM.
Type: dict[str, Any] | None Default: None
Get the text in the response.
Get the thinking in the response.
Get the files in the response.
Type: list[BinaryContent]
Get the images in the response.
Type: list[BinaryImage]
Get the tool calls in the response.
Type: list[ToolCallPart]
Get the builtin tool calls and results in the response.
Type: list[tuple[BuiltinToolCallPart, BuiltinToolReturnPart]]
@deprecated
def price() -> genai_types.PriceCalculation
Deprecated alias of cost(); use cost() instead.
genai_types.PriceCalculation
def cost() -> genai_types.PriceCalculation
Calculate the cost of the usage.
Uses genai-prices.
genai_types.PriceCalculation
def otel_events(settings: InstrumentationSettings) -> list[LogRecord]
Return OpenTelemetry events for the response.
list[LogRecord]
A partial update (delta) for a TextPart to append new text content.
The incremental text content to add to the existing TextPart content.
Type: str
The name of the provider that generated the response.
This is required to be set when provider_details is set and the initial TextPart does not have a provider_name or it has changed.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part delta type identifier, used as a discriminator.
Type: Literal[‘text’] Default: 'text'
def apply(part: ModelResponsePart) -> TextPart
Apply this text delta to an existing TextPart.
TextPart — A new TextPart with updated text content.
part : ModelResponsePart
The existing model response part, which must be a TextPart.
ValueError — If part is not a TextPart.
A partial update (delta) for a ThinkingPart to append new thinking content.
The incremental thinking content to add to the existing ThinkingPart content.
Type: str | None Default: None
Optional signature delta.
Note this is never treated as a delta — it can replace None.
Type: str | None Default: None
Optional provider name for the thinking part.
Signatures are only sent back to the same provider.
Required to be set when provider_details is set and the initial ThinkingPart does not have a provider_name or it has changed.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
Can be a dict to merge with existing details, or a callable that takes the existing details and returns updated details.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: ProviderDetailsDelta Default: None
Part delta type identifier, used as a discriminator.
Type: Literal[‘thinking’] Default: 'thinking'
def apply(part: ModelResponsePart) -> ThinkingPart
def apply(
part: ModelResponsePart | ThinkingPartDelta,
) -> ThinkingPart | ThinkingPartDelta
Apply this thinking delta to an existing ThinkingPart.
ThinkingPart | ThinkingPartDelta — A new ThinkingPart with updated thinking content.
part : ModelResponsePart | ThinkingPartDelta
The existing model response part, which must be a ThinkingPart.
ValueError — If part is not a ThinkingPart.
A partial update (delta) for a ToolCallPart to modify tool name, arguments, or tool call ID.
Incremental text to add to the existing tool name, if any.
Type: str | None Default: None
Incremental data to add to the tool arguments.
If this is a string, it will be appended to existing JSON arguments. If this is a dict, it will be merged with existing dict arguments.
Type: str | dict[str, Any] | None Default: None
Optional tool call identifier, this is used by some models including OpenAI.
Note this is never treated as a delta — it can replace None, but otherwise if a non-matching value is provided an error will be raised.
Type: str | None Default: None
The name of the provider that generated the response.
This is required to be set when provider_details is set and the initial ToolCallPart does not have a provider_name or it has changed.
Type: str | None Default: None
Additional data returned by the provider that can’t be mapped to standard fields.
This is used for data that is required to be sent back to APIs, as well as data users may want to access programmatically.
When this field is set, provider_name is required to identify the provider that generated this data.
Type: dict[str, Any] | None Default: None
Part delta type identifier, used as a discriminator. Note that this is different from ToolCallPart.part_kind.
Type: Literal[‘tool_call’] Default: 'tool_call'
def as_part() -> ToolCallPart | None
Convert this delta to a fully formed ToolCallPart if possible, otherwise return None.
ToolCallPart | None — A ToolCallPart if tool_name_delta is set, otherwise None.
def apply(part: ModelResponsePart) -> ToolCallPart | BuiltinToolCallPart
def apply(
part: ModelResponsePart | ToolCallPartDelta,
) -> ToolCallPart | BuiltinToolCallPart | ToolCallPartDelta
Apply this delta to a part or delta, returning a new part or delta with the changes applied.
ToolCallPart | BuiltinToolCallPart | ToolCallPartDelta — Either a new ToolCallPart or BuiltinToolCallPart, or an updated ToolCallPartDelta.
part : ModelResponsePart | ToolCallPartDelta
The existing model response part or delta to update.
ValueError — If part is neither a ToolCallPart, BuiltinToolCallPart, nor a ToolCallPartDelta.
UnexpectedModelBehavior — If applying JSON deltas to dict arguments or vice versa.
An event indicating that a new part has started.
If multiple PartStartEvents are received with the same index,
the new one should fully replace the old one.
The index of the part within the overall response parts list.
Type: int
The newly started ModelResponsePart.
Type: ModelResponsePart
The kind of the previous part, if any.
This is useful for UI event streams to know whether to group parts of the same kind together when emitting events.
Type: Literal['text', 'thinking', 'tool-call', 'builtin-tool-call', 'builtin-tool-return', 'file'] | None Default: None
Event type identifier, used as a discriminator.
Type: Literal['part_start'] Default: 'part_start'
An event indicating a delta update for an existing part.
The index of the part within the overall response parts list.
Type: int
The delta to apply to the specified part.
Type: ModelResponsePartDelta
Event type identifier, used as a discriminator.
Type: Literal['part_delta'] Default: 'part_delta'
An event indicating that a part is complete.
The index of the part within the overall response parts list.
Type: int
The complete ModelResponsePart.
Type: ModelResponsePart
The kind of the next part, if any.
This is useful for UI event streams to know whether to group parts of the same kind together when emitting events.
Type: Literal['text', 'thinking', 'tool-call', 'builtin-tool-call', 'builtin-tool-return', 'file'] | None Default: None
Event type identifier, used as a discriminator.
Type: Literal['part_end'] Default: 'part_end'
An event indicating that the response to the current model request matches the output schema and will produce a result.
The name of the output tool that was called. None if the result is from text content and not from a tool.
The tool call ID, if any, that this result is associated with.
Event type identifier, used as a discriminator.
Type: Literal['final_result'] Default: 'final_result'
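The start/delta/end events above are indexed into a single parts list, with a repeated start event at the same index fully replacing the earlier part. A stdlib-only sketch of that accumulation, using hypothetical simplified event classes (the real events carry full ModelResponsePart objects and deltas, not plain strings):

```python
from dataclasses import dataclass


@dataclass
class StartEvent:
    """Simplified stand-in for PartStartEvent, carrying text content only."""
    index: int
    content: str
    event_kind: str = 'part_start'


@dataclass
class DeltaEvent:
    """Simplified stand-in for PartDeltaEvent, carrying a text delta only."""
    index: int
    content_delta: str
    event_kind: str = 'part_delta'


def collect_parts(events) -> list[str]:
    """Accumulate text parts from a stream of start/delta events."""
    parts: list[str] = []
    for event in events:
        if event.event_kind == 'part_start':
            # A start event at an existing index fully replaces that part.
            if event.index < len(parts):
                parts[event.index] = event.content
            else:
                parts.append(event.content)
        elif event.event_kind == 'part_delta':
            # A delta event appends to the part at its index.
            parts[event.index] += event.content_delta
    return parts


events = [StartEvent(0, 'Hel'), DeltaEvent(0, 'lo'), StartEvent(1, 'Wor'), DeltaEvent(1, 'ld')]
# collect_parts(events) → ['Hello', 'World']
```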
An event indicating the start of a call to a function tool.
The (function) tool call to make.
Type: ToolCallPart
Whether the tool arguments passed validation. See the custom validation docs for more info.
True: Schema validation and custom validation (if configured) both passed; args are guaranteed valid.
False: Validation was performed and failed.
None: Validation was not performed.
Type: bool | None Default: None
Event type identifier, used as a discriminator.
Type: Literal['function_tool_call'] Default: 'function_tool_call'
An ID used for matching details about the call to its result.
Type: str
An ID used for matching details about the call to its result.
Type: str
An event indicating the result of a function tool call.
The result of the call to the function tool.
Type: ToolReturnPart | RetryPromptPart
The content that will be sent to the model as a UserPromptPart following the result.
Type: str | Sequence[UserContent] | None Default: None
Event type identifier, used as a discriminator.
Type: Literal['function_tool_result'] Default: 'function_tool_result'
An ID used to match the result to its original call.
Type: str
An event indicating the start of a call to a built-in tool.
The built-in tool call to make.
Type: BuiltinToolCallPart
Event type identifier, used as a discriminator.
Type: Literal['builtin_tool_call'] Default: 'builtin_tool_call'
An event indicating the result of a built-in tool call.
The result of the call to the built-in tool.
Type: BuiltinToolReturnPart
Event type identifier, used as a discriminator.
Type: Literal['builtin_tool_result'] Default: 'builtin_tool_result'
def is_multi_modal_content(obj: Any) -> TypeGuard[MultiModalContent]
Check if obj is a MultiModalContent type, enabling type narrowing.
Reason the model finished generating the response, normalized to OpenTelemetry values.
Type: TypeAlias Default: Literal['stop', 'length', 'content_filter', 'tool_call', 'error']
Type for the force_download parameter on FileUrl subclasses.
False: The URL is sent directly to providers that support it. For providers that don’t, the file is downloaded with SSRF protection (blocks private IPs and cloud metadata).
True: The file is always downloaded with SSRF protection (blocks private IPs and cloud metadata).
'allow-local': The file is always downloaded, allowing private IPs but still blocking cloud metadata.
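The three modes reduce to a small decision table. A sketch of that table as a hypothetical helper (`download_policy` is not a library function, and the real SSRF protection also resolves DNS and blocks cloud metadata endpoints, which this omits):

```python
from typing import Literal, Tuple, Union

ForceDownload = Union[bool, Literal['allow-local']]


def download_policy(mode: ForceDownload, provider_supports_urls: bool) -> Tuple[bool, bool]:
    """Return (should_download, allow_private_ips) for a given mode.

    Hypothetical helper illustrating the ForceDownloadMode decision table.
    """
    if mode == 'allow-local':
        return True, True   # always download, private IPs allowed
    if mode is True:
        return True, False  # always download, SSRF protection on
    # mode is False: pass the URL through when the provider accepts URLs,
    # otherwise download with SSRF protection.
    return (not provider_supports_urls), False
```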
Type: TypeAlias Default: bool | Literal['allow-local']
Type for provider_details input: can be a static dict, a callback to update existing details, or None.
Type: TypeAlias Default: dict[str, Any] | Callable[[dict[str, Any] | None], dict[str, Any]] | None
Provider names supported by UploadedFile.
Type: TypeAlias Default: Literal['anthropic', 'openai', 'google-gla', 'google-vertex', 'bedrock', 'xai']
Union of all multi-modal content types with a discriminator for Pydantic validation.
Default: Annotated[ImageUrl | AudioUrl | DocumentUrl | VideoUrl | BinaryContent | UploadedFile, pydantic.Discriminator('kind')]
Key used to wrap non-dict tool return values in model_response_object().
Default: 'return_value'
A message part sent by Pydantic AI to a model.
Default: Annotated[SystemPromptPart | UserPromptPart | ToolReturnPart | RetryPromptPart, pydantic.Discriminator('part_kind')]
A message part returned by a model.
Default: Annotated[TextPart | ToolCallPart | BuiltinToolCallPart | BuiltinToolReturnPart | ThinkingPart | FilePart, pydantic.Discriminator('part_kind')]
Any message sent to or returned by a model.
Default: Annotated[ModelRequest | ModelResponse, pydantic.Discriminator('kind')]
Pydantic TypeAdapter for (de)serializing messages.
Default: pydantic.TypeAdapter(list[ModelMessage], config=(pydantic.ConfigDict(defer_build=True, ser_json_bytes='base64', val_json_bytes='base64')))
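The adapter validates each message against the ModelMessage union by its kind discriminator. A stdlib-only sketch of that dispatch step, with a hypothetical `classify_messages` helper standing in for pydantic's discriminated-union resolution (the real adapter then validates the full payload, which this skips):

```python
import json

# Discriminator value → class chosen for validation, mirroring the
# ModelMessage union (ModelRequest | ModelResponse, discriminated on 'kind').
KINDS = {'request': 'ModelRequest', 'response': 'ModelResponse'}


def classify_messages(raw: str) -> list:
    """Return the class name selected for each message by its 'kind' field."""
    out = []
    for msg in json.loads(raw):
        kind = msg.get('kind')
        if kind not in KINDS:
            raise ValueError(f'unknown message kind: {kind!r}')
        out.append(KINDS[kind])
    return out


raw = '[{"kind": "request", "parts": []}, {"kind": "response", "parts": []}]'
# classify_messages(raw) → ['ModelRequest', 'ModelResponse']
```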
A partial update (delta) for any model response part.
Default: Annotated[TextPartDelta | ThinkingPartDelta | ToolCallPartDelta, pydantic.Discriminator('part_delta_kind')]
An event in the model response stream, starting a new part, applying a delta to an existing one, indicating a part is complete, or indicating the final result.
Default: Annotated[PartStartEvent | PartDeltaEvent | PartEndEvent | FinalResultEvent, pydantic.Discriminator('event_kind')]
An event yielded when handling a model response, indicating tool calls and results.
Default: Annotated[FunctionToolCallEvent | FunctionToolResultEvent | BuiltinToolCallEvent | BuiltinToolResultEvent, pydantic.Discriminator('event_kind')]
An event in the agent stream: model response stream events and response-handling events.
Default: Annotated[ModelResponseStreamEvent | HandleResponseEvent, pydantic.Discriminator('event_kind')]