pydantic_ai.profiles
Describes how requests to and responses from specific models or families of models need to be constructed and processed to get the best results, independent of the model and provider classes used.
Whether the model supports tools.
Type: bool Default: True
Whether the model supports JSON schema output.
This is also referred to as ‘native’ support for structured output.
Relates to the NativeOutput output type.
Type: bool Default: False
Whether the model supports a dedicated mode to enforce JSON output, without necessarily sending a schema.
E.g. OpenAI’s JSON mode
Relates to the PromptedOutput output type.
Type: bool Default: False
Whether the model supports image output.
Type: bool Default: False
The default structured output mode to use for the model.
Type: StructuredOutputMode Default: 'tool'
The instructions template to use for prompted structured output. The ‘{schema}’ placeholder will be replaced with the JSON schema for the output.
Type: str Default: dedent("\n Always respond with a JSON object that's compatible with this schema:\n\n {schema}\n\n Don't include any text or Markdown fencing before or after.\n ")
Whether to add the prompted output template in native structured output mode.
Type: bool Default: False
The transformer to use to make JSON schemas for tools and structured output compatible with the model.
Type: type[JsonSchemaTransformer] | None Default: None
Whether the model supports thinking/reasoning configuration.
When False, the unified thinking setting in ModelSettings is silently ignored.
Type: bool Default: False
Whether the model always uses thinking/reasoning (e.g., OpenAI o-series, DeepSeek R1).
When True, thinking=False is silently ignored since the model cannot disable thinking.
Implies supports_thinking=True.
Type: bool Default: False
The tags used to indicate thinking parts in the model's output. Defaults to ('<think>', '</think>').
Type: tuple[str, str] Default: ('<think>', '</think>')
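A minimal sketch of how such tag pairs might be used to split thinking content out of raw model output. This is illustrative only, not pydantic_ai's actual response parser:

```python
# Illustrative helper: split thinking content (between the profile's
# thinking tags) from the visible text of a raw model response.
def split_thinking(
    text: str, tags: tuple[str, str] = ('<think>', '</think>')
) -> tuple[str, str]:
    start, end = tags
    if start in text and end in text:
        before, _, rest = text.partition(start)
        thinking, _, after = rest.partition(end)
        return thinking.strip(), (before + after).strip()
    return '', text

thinking, visible = split_thinking('<think>Check units first.</think>The answer is 42.')
```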
Whether to ignore leading whitespace when streaming a response.
This is a workaround for models that emit `<think></think>` or an empty text part ahead of tool calls (e.g. Ollama + Qwen3), which we don't want to end up treating as a final result when using `run_stream` with `str` as a valid `output_type`.
This is currently only used by OpenAIChatModel, HuggingFaceModel, and GroqModel.
Type: bool Default: False
The set of builtin tool types that this model/profile supports.
Defaults to ALL builtin tools. Profile functions should explicitly restrict this based on model capabilities.
Type: frozenset[type[AbstractBuiltinTool]] Default: field(default_factory=(lambda: SUPPORTED_BUILTIN_TOOLS))
@classmethod
def from_profile(cls, profile: ModelProfile | None) -> Self
Build a ModelProfile subclass instance from a ModelProfile instance.
def update(profile: ModelProfile | None) -> Self
Update this ModelProfile (subclass) instance with the non-default values from another ModelProfile instance.
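The "non-default values win" merge semantics can be sketched with plain dataclasses. This is a simplified illustration of the behavior described above, not the library's implementation (which operates on the real ModelProfile fields):

```python
from dataclasses import dataclass, fields, replace


@dataclass
class Profile:
    supports_tools: bool = True
    supports_json_schema_output: bool = False

    def update(self, other: 'Profile | None') -> 'Profile':
        """Take every field of `other` that differs from its default."""
        if other is None:
            return self
        defaults = Profile()  # compare against the field defaults
        changes = {
            f.name: getattr(other, f.name)
            for f in fields(other)
            if getattr(other, f.name) != getattr(defaults, f.name)
        }
        return replace(self, **changes)


# `other` only overrides fields it explicitly set to non-default values.
merged = Profile(supports_tools=False).update(Profile(supports_json_schema_output=True))
```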
Bases: ModelProfile
Profile for models used with OpenAIChatModel.
ALL FIELDS MUST BE openai_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
Non-standard field name used by some providers for model thinking content in Chat Completions API responses.
Plenty of providers use custom field names for thinking content. Ollama and newer versions of vLLM use reasoning,
while DeepSeek, older vLLM and some others use reasoning_content.
Notice that the thinking field configured here is currently limited to str type content.
If openai_chat_send_back_thinking_parts is set to 'field', this field must be set to a non-None value.
Type: str | None Default: None
Whether the model includes thinking content in requests.
This can be:
- 'auto' (default): Automatically detects how to send thinking content. If thinking was received in a custom field (tracked via ThinkingPart.id and ThinkingPart.provider_name), it's sent back in that same field. Otherwise, it's sent using tags. Only the reasoning and reasoning_content fields are checked by default when receiving responses. If your provider uses a different field name, you must explicitly set openai_chat_thinking_field to that field name.
- 'tags': The thinking content is included in the main content field, enclosed within thinking tags as specified in the thinking_tags profile option.
- 'field': The thinking content is included in a separate field specified by openai_chat_thinking_field.
- False: No thinking content is sent in the request.
Defaults to 'auto' to ensure thinking is sent back in the format expected by the model/provider.
Type: Literal['auto', 'tags', 'field', False] Default: 'auto'
This can be set by a provider or user if the OpenAI-"compatible" API doesn't support strict tool definitions.
Type: bool Default: True
Turn off to avoid sending sampling settings like temperature and top_p to models that don't support them, such as OpenAI's o-series reasoning models.
Type: bool Default: True
A list of model settings that are not supported by this model.
Type: Sequence[str] Default: ()
Whether the provider accepts the value tool_choice='required' in the request payload.
Type: bool Default: True
The role to use for the system prompt message. If not provided, defaults to 'system'.
Type: OpenAISystemPromptRole | None Default: None
Whether the model supports web search in Chat Completions API.
Type: bool Default: False
The encoding to use for audio input in Chat Completions requests.
- 'base64': Raw base64-encoded string. (Default, used by OpenAI.)
- 'uri': Data URI (e.g. data:audio/wav;base64,...).
Type: Literal['base64', 'uri'] Default: 'base64'
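The two encodings differ only in the wrapper around the same base64 payload, as this small sketch shows (the bytes are a placeholder standing in for real WAV data):

```python
import base64

raw = b'RIFF....WAVEfmt '  # placeholder standing in for real WAV bytes

# 'base64' mode: the raw base64-encoded string.
b64 = base64.b64encode(raw).decode()

# 'uri' mode: the same payload wrapped in a data URI.
uri = f'data:audio/wav;base64,{b64}'
```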
Whether the Chat API supports file URLs directly in the file_data field.
OpenAI’s native Chat API only supports base64-encoded data, but some providers like OpenRouter support passing URLs directly.
Type: bool Default: False
Whether the model supports including encrypted reasoning content in the response.
Type: bool Default: False
Whether the model supports reasoning (o-series, GPT-5+).
When True, sampling parameters may need to be dropped depending on reasoning_effort setting.
Type: bool Default: False
Whether the model supports sampling parameters (temperature, top_p, etc.) when reasoning_effort='none'.
Models like GPT-5.1 and GPT-5.2 default to reasoning_effort='none' and support sampling params in that mode. When reasoning is enabled (low/medium/high/xhigh), sampling params are not supported.
Type: bool Default: False
Whether the Responses API requires the status field on function tool calls to be None.
This is required by vLLM Responses API versions before https://github.com/vllm-project/vllm/pull/26706. See https://github.com/pydantic/pydantic-ai/issues/3245 for more details.
Type: bool Default: False
Bases: JsonSchemaTransformer
Recursively handle the schema to make it compatible with OpenAI strict mode.
See https://platform.openai.com/docs/guides/function-calling?api-mode=responses#strict-mode for more details, but this basically just requires:
- additionalProperties must be set to false for each object in the parameters
- all fields in properties must be marked as required
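The two rules above can be sketched as a small recursive pass over a JSON schema. This is a simplified illustration, not the library's transformer, which handles many more schema keywords and edge cases:

```python
def make_strict(schema: dict) -> dict:
    """Apply the two strict-mode rules recursively (simplified sketch)."""
    if schema.get('type') == 'object':
        # Rule 1: no extra properties allowed on any object.
        schema['additionalProperties'] = False
        props = schema.get('properties', {})
        # Rule 2: every declared property is required.
        schema['required'] = list(props)
        for sub in props.values():
            make_strict(sub)
    # Recurse into array item schemas too.
    if isinstance(schema.get('items'), dict):
        make_strict(schema['items'])
    return schema


strict = make_strict({
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'tags': {
            'type': 'array',
            'items': {'type': 'object', 'properties': {'id': {'type': 'integer'}}},
        },
    },
})
```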
def openai_model_profile(model_name: str) -> ModelProfile
Get the model profile for an OpenAI model.
Maps unified thinking values to OpenAI reasoning_effort strings.
Type: dict[ThinkingLevel, str] Default: {True: 'medium', False: 'none', 'minimal': 'minimal', 'low': 'low', 'medium': 'medium', 'high': 'high', 'xhigh': 'xhigh'}
Sampling parameter names that are incompatible with reasoning.
These parameters are not supported when reasoning is enabled (reasoning_effort != 'none'). See https://platform.openai.com/docs/guides/reasoning for details.
Default: ('temperature', 'top_p', 'presence_penalty', 'frequency_penalty', 'logit_bias', 'openai_logprobs', 'openai_top_logprobs')
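How such a list might be applied when building a request: drop the incompatible sampling settings whenever reasoning is enabled. An illustrative sketch, not the library's request-construction code:

```python
# Subset of the incompatible parameter names listed above, for illustration.
INCOMPATIBLE_WITH_REASONING = (
    'temperature', 'top_p', 'presence_penalty', 'frequency_penalty', 'logit_bias',
)


def filter_settings(settings: dict, reasoning_effort: str) -> dict:
    """Drop sampling params when reasoning is enabled (effort != 'none')."""
    if reasoning_effort == 'none':
        return dict(settings)  # sampling params are allowed with reasoning off
    return {k: v for k, v in settings.items() if k not in INCOMPATIBLE_WITH_REASONING}


kept = filter_settings({'temperature': 0.7, 'max_tokens': 100}, 'medium')
```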
Bases: ModelProfile
Profile for models used with AnthropicModel.
ALL FIELDS MUST BE anthropic_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
Whether the model supports adaptive thinking (Sonnet 4.6+, Opus 4.6+).
When True, unified thinking translates to {'type': 'adaptive'}.
When False, it translates to {'type': 'enabled', 'budget_tokens': N}.
Type: bool Default: False
Whether the model supports the effort parameter in output_config (Opus 4.5+, Sonnet 4.6+).
When True and the unified thinking level is a string (e.g. 'high'), it is also mapped to output_config.effort.
Type: bool Default: False
def anthropic_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for an Anthropic model.
Maps unified thinking values to Anthropic budget_tokens for non-adaptive models.
Type: dict[ThinkingLevel, int] Default: {True: 10000, 'minimal': 1024, 'low': 2048, 'medium': 10000, 'high': 16384, 'xhigh': 32768}
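Putting the adaptive/budget split and this mapping together, the translation described above might look like the following sketch (illustrative only; the real translation lives inside the Anthropic model class):

```python
# Budget mapping copied from the default shown above.
BUDGETS = {True: 10000, 'minimal': 1024, 'low': 2048,
           'medium': 10000, 'high': 16384, 'xhigh': 32768}


def anthropic_thinking_config(level, supports_adaptive: bool) -> dict:
    """Translate a unified thinking level to an Anthropic thinking config."""
    if supports_adaptive:
        # Sonnet 4.6+ / Opus 4.6+: adaptive thinking, no explicit budget.
        return {'type': 'adaptive'}
    return {'type': 'enabled', 'budget_tokens': BUDGETS[level]}


cfg = anthropic_thinking_config('high', supports_adaptive=False)
```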
Bases: ModelProfile
Profile for models used with GoogleModel.
ALL FIELDS MUST BE google_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
Whether the model supports native output with builtin tools. See https://ai.google.dev/gemini-api/docs/structured-output?example=recipe#structured_outputs_with_tools
Type: bool Default: False
MIME types supported in native FunctionResponseDict.parts. See https://ai.google.dev/gemini-api/docs/function-calling#multimodal-function-responses
Type: tuple[str, ...] Default: ()
Whether the model uses thinking_level (enum: LOW/MEDIUM/HIGH) instead of thinking_budget (int).
Gemini 3+ models use thinking_level; Gemini 2.5 uses thinking_budget.
Type: bool Default: False
Bases: JsonSchemaTransformer
Transforms the JSON Schema from Pydantic to be suitable for Gemini.
Gemini supports a subset of OpenAPI v3.0.3.
def google_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a Google model.
def meta_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a Meta model.
def amazon_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for an Amazon model.
def deepseek_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a DeepSeek model.
Bases: ModelProfile
Profile for Grok models (used with both GrokProvider and XaiProvider).
ALL FIELDS MUST BE grok_ PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
Whether the model supports builtin tools (web_search, code_execution, mcp).
Type: bool Default: False
Whether the provider accepts the value tool_choice='required' in the request payload.
Type: bool Default: True
def grok_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a Grok model.
def mistral_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a Mistral model.
def qwen_model_profile(model_name: str) -> ModelProfile | None
Get the model profile for a Qwen model.