pydantic_ai.profiles

ModelProfile

Describes how requests to and responses from specific models or families of models need to be constructed and processed to get the best results, independent of the model and provider classes used.

Attributes

supports_tools

Whether the model supports tools.

Type: bool Default: True

supports_json_schema_output

Whether the model supports JSON schema output.

This is also referred to as 'native' support for structured output. Relates to the NativeOutput output type.

Type: bool Default: False

supports_json_object_output

Whether the model supports a dedicated mode to enforce JSON output, without necessarily sending a schema.

E.g. OpenAI's JSON mode. Relates to the PromptedOutput output type.

Type: bool Default: False

supports_image_output

Whether the model supports image output.

Type: bool Default: False

default_structured_output_mode

The default structured output mode to use for the model.

Type: StructuredOutputMode Default: 'tool'

prompted_output_template

The instructions template to use for prompted structured output. The '{schema}' placeholder will be replaced with the JSON schema for the output.

Type: str Default: dedent("\n Always respond with a JSON object that's compatible with this schema:\n\n {schema}\n\n Don't include any text or Markdown fencing before or after.\n ")
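As an illustrative sketch of how this template is used: the '{schema}' placeholder is filled with the JSON schema of the desired output type (the variable names here are illustrative, not the library's internals):

```python
import json
from textwrap import dedent

# The default prompted-output template, with the '{schema}' placeholder.
template = dedent("""\
    Always respond with a JSON object that's compatible with this schema:

    {schema}

    Don't include any text or Markdown fencing before or after.
    """)

# A hypothetical output schema; the library would derive this from the output type.
output_schema = {'type': 'object', 'properties': {'city': {'type': 'string'}}}

# Fill the placeholder to produce the instructions sent to the model.
instructions = template.format(schema=json.dumps(output_schema, indent=2))
```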

native_output_requires_schema_in_instructions

Whether to add the prompted output template to the instructions in native structured output mode.

Type: bool Default: False

json_schema_transformer

The transformer to use to make JSON schemas for tools and structured output compatible with the model.

Type: type[JsonSchemaTransformer] | None Default: None

supports_thinking

Whether the model supports thinking/reasoning configuration.

When False, the unified thinking setting in ModelSettings is silently ignored.

Type: bool Default: False

thinking_always_enabled

Whether the model always uses thinking/reasoning (e.g., OpenAI o-series, DeepSeek R1).

When True, thinking=False is silently ignored since the model cannot disable thinking. Implies supports_thinking=True.

Type: bool Default: False

thinking_tags

The tags used to indicate thinking parts in the model's output.

Type: tuple[str, str] Default: ('<think>', '</think>')
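As an illustrative sketch (not the library's parser), thinking content delimited by these tags can be split out of raw model text like this:

```python
def split_thinking(
    text: str,
    tags: tuple[str, str] = ('<think>', '</think>'),
) -> tuple[str, str]:
    """Split leading tag-delimited thinking content from the rest of the text.

    Returns a (thinking, rest) pair; thinking is '' when no tags are present.
    """
    start, end = tags
    if text.startswith(start) and end in text:
        thinking, _, rest = text[len(start):].partition(end)
        return thinking.strip(), rest.lstrip()
    return '', text

split_thinking('<think>plan the answer</think>The answer is 4.')
```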

ignore_streamed_leading_whitespace

Whether to ignore leading whitespace when streaming a response.

This is a workaround for models that emit `<think></think>` or an empty text part ahead of tool calls (e.g. Ollama + Qwen3), which we don't want to end up treating as a final result when using `run_stream` with `str` as a valid `output_type`.

This is currently only used by OpenAIChatModel, HuggingFaceModel, and GroqModel.

Type: bool Default: False

supported_builtin_tools

The set of builtin tool types that this model/profile supports.

Defaults to ALL builtin tools. Profile functions should explicitly restrict this based on model capabilities.

Type: frozenset[type[AbstractBuiltinTool]] Default: field(default_factory=(lambda: SUPPORTED_BUILTIN_TOOLS))

Methods

from_profile

@classmethod

def from_profile(cls, profile: ModelProfile | None) -> Self

Build a ModelProfile subclass instance from a ModelProfile instance.

Returns

Self

update

def update(profile: ModelProfile | None) -> Self

Update this ModelProfile (subclass) instance with the non-default values from another ModelProfile instance.

Returns

Self

OpenAIModelProfile

Bases: ModelProfile

Profile for models used with OpenAIChatModel.

ALL FIELDS MUST BE openai_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.

Attributes

openai_chat_thinking_field

Non-standard field name used by some providers for model thinking content in Chat Completions API responses.

Plenty of providers use custom field names for thinking content. Ollama and newer versions of vLLM use reasoning, while DeepSeek, older vLLM and some others use reasoning_content.

Notice that the thinking field configured here is currently limited to str type content.

If openai_chat_send_back_thinking_parts is set to 'field', this field must be set to a non-None value.

Type: str | None Default: None

openai_chat_send_back_thinking_parts

Whether the model includes thinking content in requests.

This can be:

  • 'auto' (default): Automatically detects how to send thinking content. If thinking was received in a custom field (tracked via ThinkingPart.id and ThinkingPart.provider_name), it’s sent back in that same field. Otherwise, it’s sent using tags. Only the reasoning and reasoning_content fields are checked by default when receiving responses. If your provider uses a different field name, you must explicitly set openai_chat_thinking_field to that field name.
  • 'tags': The thinking content is included in the main content field, enclosed within thinking tags as specified in thinking_tags profile option.
  • 'field': The thinking content is included in a separate field specified by openai_chat_thinking_field.
  • False: No thinking content is sent in the request.

Defaults to 'auto' to ensure thinking is sent back in the format expected by the model/provider.

Type: Literal['auto', 'tags', 'field', False] Default: 'auto'

openai_supports_strict_tool_definition

This can be set by a provider or user if the OpenAI-"compatible" API doesn't support strict tool definitions.

Type: bool Default: True

openai_supports_sampling_settings

Turn this off to avoid sending sampling settings such as temperature and top_p to models that don't support them, like OpenAI's o-series reasoning models.

Type: bool Default: True

openai_unsupported_model_settings

A list of model settings that are not supported by this model.

Type: Sequence[str] Default: ()

openai_supports_tool_choice_required

Whether the provider accepts the value tool_choice='required' in the request payload.

Type: bool Default: True

openai_system_prompt_role

The role to use for the system prompt message. If not provided, defaults to 'system'.

Type: OpenAISystemPromptRole | None Default: None

openai_chat_supports_web_search

Whether the model supports web search in Chat Completions API.

Type: bool Default: False

openai_chat_audio_input_encoding

The encoding to use for audio input in Chat Completions requests.

  • 'base64': Raw base64 encoded string. (Default, used by OpenAI)
  • 'uri': Data URI (e.g. data:audio/wav;base64,...).

Type: Literal['base64', 'uri'] Default: 'base64'

openai_chat_supports_file_urls

Whether the Chat API supports file URLs directly in the file_data field.

OpenAI’s native Chat API only supports base64-encoded data, but some providers like OpenRouter support passing URLs directly.

Type: bool Default: False

openai_supports_encrypted_reasoning_content

Whether the model supports including encrypted reasoning content in the response.

Type: bool Default: False

openai_supports_reasoning

Whether the model supports reasoning (o-series, GPT-5+).

When True, sampling parameters may need to be dropped depending on reasoning_effort setting.

Type: bool Default: False

openai_supports_reasoning_effort_none

Whether the model supports sampling parameters (temperature, top_p, etc.) when reasoning_effort='none'.

Models like GPT-5.1 and GPT-5.2 default to reasoning_effort='none' and support sampling params in that mode. When reasoning is enabled (low/medium/high/xhigh), sampling params are not supported.

Type: bool Default: False

openai_responses_requires_function_call_status_none

Whether the Responses API requires the status field on function tool calls to be None.

This is required by vLLM Responses API versions before https://github.com/vllm-project/vllm/pull/26706. See https://github.com/pydantic/pydantic-ai/issues/3245 for more details.

Type: bool Default: False

OpenAIJsonSchemaTransformer

Bases: JsonSchemaTransformer

Recursively handle the schema to make it compatible with OpenAI strict mode.

See https://platform.openai.com/docs/guides/function-calling?api-mode=responses#strict-mode for more details, but this basically just requires:

  • additionalProperties must be set to false for each object in the parameters
  • all fields in properties must be marked as required
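The two rules above can be sketched as a standalone recursive function (an illustrative simplification, not the library's transformer, which handles many more schema constructs):

```python
def make_strict(schema: dict) -> dict:
    """Apply the two strict-mode rules recursively to a JSON schema dict."""
    schema = dict(schema)  # shallow copy; don't mutate the caller's schema
    if schema.get('type') == 'object':
        props = {k: make_strict(v) for k, v in schema.get('properties', {}).items()}
        schema['properties'] = props
        schema['additionalProperties'] = False  # rule 1
        schema['required'] = list(props)        # rule 2: every property required
    elif schema.get('type') == 'array' and isinstance(schema.get('items'), dict):
        schema['items'] = make_strict(schema['items'])
    return schema

strict = make_strict({'type': 'object', 'properties': {'name': {'type': 'string'}}})
```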

openai_model_profile

def openai_model_profile(model_name: str) -> ModelProfile

Get the model profile for an OpenAI model.

Returns

ModelProfile

OPENAI_REASONING_EFFORT_MAP

Maps unified thinking values to OpenAI reasoning_effort strings.

Type: dict[ThinkingLevel, str] Default: {True: 'medium', False: 'none', 'minimal': 'minimal', 'low': 'low', 'medium': 'medium', 'high': 'high', 'xhigh': 'xhigh'}

SAMPLING_PARAMS

Sampling parameter names that are incompatible with reasoning.

These parameters are not supported when reasoning is enabled (reasoning_effort != 'none'). See https://platform.openai.com/docs/guides/reasoning for details.

Default: ('temperature', 'top_p', 'presence_penalty', 'frequency_penalty', 'logit_bias', 'openai_logprobs', 'openai_top_logprobs')
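The behavior this enables can be sketched as a small filter (an illustrative sketch, not the library's request-building code), using the documented parameter names:

```python
# The documented set of sampling parameters incompatible with reasoning.
SAMPLING_PARAMS = (
    'temperature', 'top_p', 'presence_penalty', 'frequency_penalty',
    'logit_bias', 'openai_logprobs', 'openai_top_logprobs',
)

def filter_settings(settings: dict, reasoning_effort: str) -> dict:
    """Drop sampling params when reasoning is enabled (effort != 'none')."""
    if reasoning_effort == 'none':
        return dict(settings)
    return {k: v for k, v in settings.items() if k not in SAMPLING_PARAMS}
```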

AnthropicModelProfile

Bases: ModelProfile

Profile for models used with AnthropicModel.

ALL FIELDS MUST BE anthropic_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.

Attributes

anthropic_supports_adaptive_thinking

Whether the model supports adaptive thinking (Sonnet 4.6+, Opus 4.6+).

When True, unified thinking translates to {'type': 'adaptive'}. When False, it translates to {'type': 'enabled', 'budget_tokens': N}.

Type: bool Default: False

anthropic_supports_effort

Whether the model supports the effort parameter in output_config (Opus 4.5+, Sonnet 4.6+).

When True and the unified thinking level is a string (e.g. 'high'), it is also mapped to output_config.effort.

Type: bool Default: False

anthropic_model_profile

def anthropic_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for an Anthropic model.

Returns

ModelProfile | None

ANTHROPIC_THINKING_BUDGET_MAP

Maps unified thinking values to Anthropic budget_tokens for non-adaptive models.

Type: dict[ThinkingLevel, int] Default: {True: 10000, 'minimal': 1024, 'low': 2048, 'medium': 10000, 'high': 16384, 'xhigh': 32768}
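The translation described for anthropic_supports_adaptive_thinking can be sketched with the documented budget map (an illustrative sketch, not the library's request-building code):

```python
# The documented map from unified thinking levels to Anthropic budget_tokens.
ANTHROPIC_THINKING_BUDGET_MAP = {
    True: 10000, 'minimal': 1024, 'low': 2048,
    'medium': 10000, 'high': 16384, 'xhigh': 32768,
}

def anthropic_thinking_config(level, supports_adaptive: bool) -> dict:
    """Translate a unified thinking level into an Anthropic thinking config."""
    if supports_adaptive:
        # Adaptive models (Sonnet 4.6+, Opus 4.6+) take no explicit budget.
        return {'type': 'adaptive'}
    return {'type': 'enabled', 'budget_tokens': ANTHROPIC_THINKING_BUDGET_MAP[level]}
```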

GoogleModelProfile

Bases: ModelProfile

Profile for models used with GoogleModel.

ALL FIELDS MUST BE google_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.

Attributes

google_supports_native_output_with_builtin_tools

Whether the model supports native output with builtin tools. See https://ai.google.dev/gemini-api/docs/structured-output?example=recipe#structured_outputs_with_tools

Type: bool Default: False

google_supported_mime_types_in_tool_returns

MIME types supported in native FunctionResponseDict.parts. See https://ai.google.dev/gemini-api/docs/function-calling#multimodal-function-responses

Type: tuple[str, ...] Default: ()

google_supports_thinking_level

Whether the model uses thinking_level (enum: LOW/MEDIUM/HIGH) instead of thinking_budget (int).

Gemini 3+ models use thinking_level; Gemini 2.5 uses thinking_budget.

Type: bool Default: False

GoogleJsonSchemaTransformer

Bases: JsonSchemaTransformer

Transforms the JSON Schema from Pydantic to be suitable for Gemini.

Gemini supports a subset of OpenAPI v3.0.3.

google_model_profile

def google_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Google model.

Returns

ModelProfile | None

meta_model_profile

def meta_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Meta model.

Returns

ModelProfile | None

amazon_model_profile

def amazon_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for an Amazon model.

Returns

ModelProfile | None

deepseek_model_profile

def deepseek_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a DeepSeek model.

Returns

ModelProfile | None

GrokModelProfile

Bases: ModelProfile

Profile for Grok models (used with both GrokProvider and XaiProvider).

ALL FIELDS MUST BE grok_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.

Attributes

grok_supports_builtin_tools

Whether the model supports builtin tools (web_search, code_execution, mcp).

Type: bool Default: False

grok_supports_tool_choice_required

Whether the provider accepts the value tool_choice='required' in the request payload.

Type: bool Default: True

grok_model_profile

def grok_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Grok model.

Returns

ModelProfile | None

mistral_model_profile

def mistral_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Mistral model.

Returns

ModelProfile | None

qwen_model_profile

def qwen_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Qwen model.

Returns

ModelProfile | None