Thinking

Thinking (or reasoning) is the process by which a model works through a problem step-by-step before providing its final answer.

This capability is typically disabled by default and depends on the specific model being used. See the sections below for how to enable thinking for each provider.

Unified thinking settings

The simplest way to enable thinking across any supported provider is the thinking field in ModelSettings:

unified_thinking.py
from pydantic_ai import Agent

agent = Agent('anthropic:claude-opus-4-6', model_settings={'thinking': 'high'})

Or using the Thinking capability:

thinking_capability.py
from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking

agent = Agent('anthropic:claude-opus-4-6', capabilities=[Thinking(effort='high')])

The thinking setting accepts:

  • True — enable thinking with the provider’s default effort level
  • False — disable thinking (silently ignored on always-on models)
  • 'minimal' / 'low' / 'medium' / 'high' / 'xhigh' — enable thinking at a specific effort level (unsupported levels map to the closest available value)

When omitted, the model uses its default behavior. Provider-specific settings (documented in the sections below) take precedence when both are set.
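Conceptually, the unified setting first normalizes the value before translating it to a provider's native format. The sketch below is purely illustrative (it is not Pydantic AI's actual implementation); it shows the documented semantics: booleans toggle thinking, and unsupported effort levels map to the closest available value.

```python
# Illustrative normalization of a unified 'thinking' value, NOT the actual
# Pydantic AI code. Effort levels are ordered from least to most effort.
EFFORT_LEVELS = ['minimal', 'low', 'medium', 'high', 'xhigh']


def normalize_thinking(value, supported=('low', 'medium', 'high')):
    """Map a unified thinking value to (enabled, effort) for a provider.

    `supported` is the set of effort levels the provider understands;
    unsupported levels map to the closest available value.
    """
    if value is False:
        return (False, None)
    if value is True:
        return (True, None)  # provider's default effort
    if value in supported:
        return (True, value)
    # Find the supported level nearest to the requested one.
    idx = EFFORT_LEVELS.index(value)
    closest = min(supported, key=lambda s: abs(EFFORT_LEVELS.index(s) - idx))
    return (True, closest)
```

For example, `'xhigh'` normalizes to `'high'` for a provider that only supports `'low'`/`'medium'`/`'high'`.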

Provider translation

The unified thinking setting maps to each provider’s native format:

| Provider | thinking=True | thinking='high' | Notes |
|---|---|---|---|
| Anthropic (Opus 4.6+) | anthropic_thinking={'type': 'adaptive'} | {'type': 'adaptive'} + effort='high' | All truthy values → adaptive; effort via output_config |
| Anthropic (older) | anthropic_thinking={'type': 'enabled', 'budget_tokens': 10000} | budget_tokens=16384 | Budget-based; 'low' → 2048 tokens |
| OpenAI | reasoning_effort='medium' | reasoning_effort='high' | |
| Google (Gemini 3+) | include_thoughts=True | thinking_level='HIGH' | |
| Google (Gemini 2.5) | include_thoughts=True | thinking_budget=24576 | |
| Groq | reasoning_format='parsed' | reasoning_format='parsed' | thinking=False → 'hidden' (no true disable) |
| OpenRouter | reasoning.effort='medium' | reasoning.effort='high' | Via extra_body |
| Cerebras | disable_reasoning=False | disable_reasoning=False | thinking=False → disable_reasoning=True |
| xAI | reasoning_effort='high' | reasoning_effort='high' | Only 'low' and 'high' |
| Bedrock (Claude) | thinking.type='enabled' | budget_tokens=16384 | No adaptive support |
| Bedrock (OpenAI) | reasoning_effort='medium' | reasoning_effort='high' | |
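As one concrete case, Anthropic's budget-based translation in the table above could be sketched as follows. The budget values for True, 'low', and 'high' come from the table; treating other levels as the default budget is an assumption of this sketch, not documented behavior.

```python
# Illustrative sketch of the budget-based Anthropic translation from the
# table: True -> budget 10000, 'low' -> 2048, 'high' -> 16384. The fallback
# for other effort levels is an assumption, not the library's actual logic.
def anthropic_thinking_setting(thinking):
    """Translate a unified thinking value to an anthropic_thinking dict."""
    if thinking is False:
        return None  # thinking disabled
    budgets = {'low': 2048, 'high': 16384}
    if thinking is True:
        return {'type': 'enabled', 'budget_tokens': 10000}
    return {'type': 'enabled', 'budget_tokens': budgets.get(thinking, 10000)}
```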

OpenAI

When using the OpenAIChatModel, text output inside <think> tags is converted to ThinkingPart objects. You can customize the tags using the thinking_tags field on the model profile.
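The tag-based conversion can be pictured as a split on the configured tags. The following is a rough standalone sketch of that idea (not the library's internals): the content between the tags becomes the thinking part, and the rest is the visible answer.

```python
import re


def split_thinking(text, tags=('<think>', '</think>')):
    """Split model text into (thinking_parts, visible_text) based on tags.

    Illustrative only -- Pydantic AI performs this conversion internally,
    including during streaming.
    """
    open_tag, close_tag = (re.escape(t) for t in tags)
    pattern = re.compile(f'{open_tag}(.*?){close_tag}', re.DOTALL)
    thinking = pattern.findall(text)  # every tagged thinking span
    visible = pattern.sub('', text).strip()  # text with tags stripped out
    return thinking, visible


thinking, answer = split_thinking('<think>2+2 is 4</think>The answer is 4.')
```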

Some OpenAI-compatible model providers also support native thinking parts that are not delimited by tags; instead, they are sent and received as separate, custom fields in the API. Typically, if you call the model via the <provider>:<model> shorthand, Pydantic AI handles this for you. You can also configure the fields yourself with openai_chat_thinking_field.

If your provider recommends sending these custom fields back unchanged, for caching or interleaved-thinking benefits, you can do so with openai_chat_send_back_thinking_parts.

OpenAI Responses

The OpenAIResponsesModel can generate native thinking parts. To enable this functionality, you need to set the OpenAIResponsesModelSettings.openai_reasoning_effort and OpenAIResponsesModelSettings.openai_reasoning_summary model settings.

By default, the unique IDs of reasoning, text, and function call parts from the message history are sent to the model. If the message history you send does not exactly match what was received from the Responses API in a previous response (for example, because you're using a history processor), this can cause errors like "Item 'rs_123' of type 'reasoning' was provided without its required following item." To prevent these errors, turn off the OpenAIResponsesModelSettings.openai_send_reasoning_ids model setting.

openai_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings

model = OpenAIResponsesModel('gpt-5.2')
settings = OpenAIResponsesModelSettings(
    openai_reasoning_effort='low',
    openai_reasoning_summary='detailed',
)
agent = Agent(model, model_settings=settings)
...

Anthropic

To enable thinking, use the AnthropicModelSettings.anthropic_thinking model setting.

anthropic_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel, AnthropicModelSettings

model = AnthropicModel('claude-sonnet-4-5')
settings = AnthropicModelSettings(
    anthropic_thinking={'type': 'enabled', 'budget_tokens': 1024},
)
agent = Agent(model, model_settings=settings)
...

Interleaved Thinking

To enable interleaved thinking, you need to include the beta header in your model settings:

anthropic_interleaved_thinking.py
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel, AnthropicModelSettings

model = AnthropicModel('claude-sonnet-4-5')
settings = AnthropicModelSettings(
    anthropic_thinking={'type': 'enabled', 'budget_tokens': 10000},
    extra_headers={'anthropic-beta': 'interleaved-thinking-2025-05-14'},
)
agent = Agent(model, model_settings=settings)
...

Adaptive Thinking & Effort

Starting with claude-opus-4-6, Anthropic supports adaptive thinking, where the model dynamically decides when and how much to think based on the complexity of each request. This replaces extended thinking (type: 'enabled' with budget_tokens) which is deprecated on Opus 4.6. Adaptive thinking also automatically enables interleaved thinking.

anthropic_adaptive_thinking.py
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel, AnthropicModelSettings

model = AnthropicModel('claude-opus-4-6')
settings = AnthropicModelSettings(
    anthropic_thinking={'type': 'adaptive'},
    anthropic_effort='high',
)
agent = Agent(model, model_settings=settings)
...

The anthropic_effort setting controls how much effort the model puts into its response (independent of thinking). See the Anthropic effort docs for details.

Google

To enable thinking, use the GoogleModelSettings.google_thinking_config model setting.

google_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.google import GoogleModel, GoogleModelSettings

model = GoogleModel('gemini-3-pro-preview')
settings = GoogleModelSettings(google_thinking_config={'include_thoughts': True})
agent = Agent(model, model_settings=settings)
...

xAI

xAI reasoning models (Grok) support native thinking. To preserve the thinking content for multi-turn conversations, enable XaiModelSettings.xai_include_encrypted_content.

xai_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.xai import XaiModel, XaiModelSettings

model = XaiModel('grok-4-fast-reasoning')
settings = XaiModelSettings(xai_include_encrypted_content=True)
agent = Agent(model, model_settings=settings)
...

Bedrock

Although Bedrock Converse doesn't provide a unified API to enable thinking, you can use the BedrockModelSettings.bedrock_additional_model_requests_fields model setting to pass provider-specific configuration:

bedrock_claude_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.bedrock import BedrockConverseModel, BedrockModelSettings

model = BedrockConverseModel('us.anthropic.claude-sonnet-4-5-20250929-v1:0')
model_settings = BedrockModelSettings(
    bedrock_additional_model_requests_fields={
        'thinking': {'type': 'enabled', 'budget_tokens': 1024}
    }
)
agent = Agent(model=model, model_settings=model_settings)

Groq

Groq supports different formats to receive thinking parts:

  • "raw": The thinking part is included in the text content inside <think> tags, which are automatically converted to ThinkingPart objects.
  • "hidden": The thinking part is not included in the text content.
  • "parsed": The thinking part has its own structured part in the response which is converted into a ThinkingPart object.

To enable thinking, use the GroqModelSettings.groq_reasoning_format model setting:

groq_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.groq import GroqModel, GroqModelSettings

model = GroqModel('qwen/qwen3-32b')
settings = GroqModelSettings(groq_reasoning_format='parsed')
agent = Agent(model, model_settings=settings)
...

OpenRouter

To enable thinking, use the OpenRouterModelSettings.openrouter_reasoning model setting.

openrouter_thinking_part.py
from pydantic_ai import Agent
from pydantic_ai.models.openrouter import OpenRouterModel, OpenRouterModelSettings

model = OpenRouterModel('openai/gpt-5.2')
settings = OpenRouterModelSettings(openrouter_reasoning={'effort': 'high'})
agent = Agent(model, model_settings=settings)
...

Mistral

Thinking is supported by the magistral family of models. It does not need to be specifically enabled.

Cohere

Thinking is supported by the command-a-reasoning-08-2025 model. It does not need to be specifically enabled.

Hugging Face

Text output inside <think> tags is automatically converted to ThinkingPart objects. You can customize the tags using the thinking_tags field on the model profile.

Outlines

Some local models run through Outlines include a thinking part delimited by tags in their text output. In that case, Pydantic AI separates the thinking part from the final answer automatically, without it needing to be specifically enabled. The default thinking tags are "<think>" and "</think>"; if your model uses different tags, you can specify them in the model profile using the thinking_tags field.

Outlines currently does not support thinking together with structured output. If you provide an output_type, the model's text output will not contain a tagged thinking part, and you may experience degraded performance.