Anthropic
To use Anthropic models, you need to either install pydantic-ai, or install pydantic-ai-slim with the anthropic optional group:
```bash
pip install "pydantic-ai-slim[anthropic]"
# or
uv add "pydantic-ai-slim[anthropic]"
```
To use Anthropic through their API, go to console.anthropic.com/settings/keys to generate an API key.
AnthropicModelName contains a list of available Anthropic models.
Once you have the API key, you can set it as an environment variable:
```bash
export ANTHROPIC_API_KEY='your-api-key'
```
You can then use AnthropicModel by name:
```python
from pydantic_ai import Agent

agent = Agent('anthropic:claude-sonnet-4-6')
...
```
Or initialise the model directly with just the model name:
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel

model = AnthropicModel('claude-sonnet-4-5')
agent = Agent(model)
...
```
You can provide a custom Provider via the provider argument:
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

model = AnthropicModel(
    'claude-sonnet-4-5', provider=AnthropicProvider(api_key='your-api-key')
)
agent = Agent(model)
...
```
You can customize the AnthropicProvider with a custom httpx.AsyncClient:
```python
from httpx import AsyncClient

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

custom_http_client = AsyncClient(timeout=30)
model = AnthropicModel(
    'claude-sonnet-4-5',
    provider=AnthropicProvider(api_key='your-api-key', http_client=custom_http_client),
)
agent = Agent(model)
...
```
You can use Anthropic models through cloud platforms by passing a custom client to AnthropicProvider.
To use Claude models via AWS Bedrock, follow the Anthropic documentation on how to set up an AsyncAnthropicBedrock client and then pass it to AnthropicProvider:
```python
from anthropic import AsyncAnthropicBedrock

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

bedrock_client = AsyncAnthropicBedrock()  # Uses AWS credentials from the environment
provider = AnthropicProvider(anthropic_client=bedrock_client)
model = AnthropicModel('us.anthropic.claude-sonnet-4-5-20250929-v1:0', provider=provider)
agent = Agent(model)
...
```
To use Claude models via Google Cloud Vertex AI, follow the Anthropic documentation on how to set up an AsyncAnthropicVertex client and then pass it to AnthropicProvider:
```python
from anthropic import AsyncAnthropicVertex

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

vertex_client = AsyncAnthropicVertex(region='us-east5', project_id='your-project-id')
provider = AnthropicProvider(anthropic_client=vertex_client)
model = AnthropicModel('claude-sonnet-4-5', provider=provider)
agent = Agent(model)
...
```
To use Claude models via Microsoft Foundry, follow the Anthropic documentation on how to set up an AsyncAnthropicFoundry client and then pass it to AnthropicProvider:
```python
from anthropic import AsyncAnthropicFoundry

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

foundry_client = AsyncAnthropicFoundry(
    api_key='your-foundry-api-key',  # Or set ANTHROPIC_FOUNDRY_API_KEY
    resource='your-resource-name',
)
provider = AnthropicProvider(anthropic_client=foundry_client)
model = AnthropicModel('claude-sonnet-4-5', provider=provider)
agent = Agent(model)
...
```
See Anthropic’s Microsoft Foundry documentation for setup instructions including Entra ID authentication.
Anthropic supports prompt caching to reduce costs by caching parts of your prompts. Pydantic AI supports both automatic and explicit caching approaches:
The simplest way to enable prompt caching is with AnthropicModelSettings.anthropic_cache. This uses Anthropic’s automatic caching, passing a top-level cache_control parameter so the server automatically applies a cache breakpoint to the last cacheable block in each request:
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='You are a helpful assistant.',
    model_settings=AnthropicModelSettings(
        anthropic_cache=True,
    ),
)

result1 = agent.run_sync('What is the capital of France?')
result2 = agent.run_sync(
    'What is the capital of Germany?', message_history=result1.all_messages()
)
print(f'Cache write: {result1.usage().cache_write_tokens}')
print(f'Cache read: {result2.usage().cache_read_tokens}')
```
This is ideal for multi-turn conversations where the cache breakpoint should move forward as the conversation grows. You can also specify a custom TTL with anthropic_cache='1h'.
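For example, the agent above could be configured with a one-hour cache TTL instead of the default (a sketch mirroring the previous example, with only the `anthropic_cache` value changed):

```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModelSettings

# Same automatic caching as above, but with a 1-hour TTL instead of the default
agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='You are a helpful assistant.',
    model_settings=AnthropicModelSettings(
        anthropic_cache='1h',
    ),
)
```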
In addition to automatic caching, Pydantic AI provides several ways to place cache breakpoints on specific content:
- Cache User Messages with `CachePoint`: Insert a `CachePoint` marker in your user messages to cache everything before it
- Cache System Instructions: Set `AnthropicModelSettings.anthropic_cache_instructions` to `True` (uses 5m TTL by default) or specify `'5m'`/`'1h'` directly
- Cache Tool Definitions: Set `AnthropicModelSettings.anthropic_cache_tool_definitions` to `True` (uses 5m TTL by default) or specify `'5m'`/`'1h'` directly
Combine automatic caching with explicit breakpoints for maximum savings. Automatic caching handles the conversation, while explicit breakpoints pin system instructions and tool definitions:
```python
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Detailed instructions...',
    model_settings=AnthropicModelSettings(
        anthropic_cache=True,  # Server auto-caches last block
        anthropic_cache_instructions=True,  # Explicitly cache system instructions
        anthropic_cache_tool_definitions='1h',  # Explicitly cache tool definitions with 1h TTL
    ),
)


@agent.tool
def search_docs(ctx: RunContext, query: str) -> str:
    """Search documentation."""
    return f'Results for {query}'


result = agent.run_sync('Search for Python best practices')
print(result.output)
```
When you use anthropic_cache_instructions with both static and dynamic instructions, Pydantic AI automatically places the cache boundary at the optimal point. Static instructions (from Agent(instructions=...)) are sorted before dynamic instructions (from @agent.instructions functions or toolsets), and the cache point is placed after the last static instruction block.
This means your stable, static instructions are cached efficiently, while dynamic instructions (which may change between requests) remain outside the cache boundary and don’t cause cache invalidation.
```python
from datetime import date

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    deps_type=str,
    instructions='You are a helpful customer service agent. Follow company policy.',  # (1)
    model_settings=AnthropicModelSettings(
        anthropic_cache_instructions=True,  # (2)
    ),
)


@agent.instructions
def dynamic_context(ctx: RunContext[str]) -> str:  # (3)
    return f"Customer name: {ctx.deps}. Today's date: {date.today()}."


result = agent.run_sync('What is your return policy?', deps='Alice')
print(result.output)
```

1. Static instructions are cached across requests.
2. Enables smart cache placement at the static/dynamic boundary.
3. Dynamic instructions change per-request and are not cached.
Use manual CachePoint markers to control cache locations precisely:
```python
from pydantic_ai import Agent, CachePoint

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Instructions...',
)

# Manually control cache points for specific content blocks
result = agent.run_sync([
    'Long context from documentation...',
    CachePoint(),  # Cache everything up to this point
    'First question',
])
print(result.output)
```
Access cache usage statistics via result.usage():
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Instructions...',
    model_settings=AnthropicModelSettings(
        anthropic_cache=True,
    ),
)

result = agent.run_sync('Your question')
usage = result.usage()
print(f'Cache write tokens: {usage.cache_write_tokens}')
print(f'Cache read tokens: {usage.cache_read_tokens}')
```
Anthropic enforces a maximum of 4 cache points per request. Pydantic AI automatically manages this limit to ensure your requests always comply without errors.
Cache points can come from several sources:
- Automatic caching: Via `anthropic_cache` (the server applies 1 cache point to the last cacheable block)
- System Prompt: Via the `anthropic_cache_instructions` setting (adds a cache point to the last system prompt block)
- Tool Definitions: Via the `anthropic_cache_tool_definitions` setting (adds a cache point to the last tool definition)
- Messages: Via `CachePoint` markers (adds cache points to message content)
Each setting uses at most 1 cache point, but you can combine them. If the total exceeds 4, Pydantic AI automatically trims excess cache points from older messages.
Define an agent with automatic caching plus explicit breakpoints:
```python
from pydantic_ai import Agent, CachePoint
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Detailed instructions...',
    model_settings=AnthropicModelSettings(
        anthropic_cache=True,  # 1 cache point (server-applied)
        anthropic_cache_instructions=True,  # 1 cache point
        anthropic_cache_tool_definitions=True,  # 1 cache point
    ),
)


@agent.tool_plain
def my_tool() -> str:
    return 'result'


# 3 of 4 slots used (1 automatic + 1 instructions + 1 tools)
# Room for 1 more explicit CachePoint marker
result = agent.run_sync([
    'Context',
    CachePoint(),  # 4th cache point - OK
    'Question',
])
print(result.output)
usage = result.usage()
print(f'Cache write tokens: {usage.cache_write_tokens}')
print(f'Cache read tokens: {usage.cache_read_tokens}')
```
When explicit cache points from all sources (settings + CachePoint markers) exceed the available budget, Pydantic AI automatically removes excess cache points from older message content (keeping the most recent ones).
Define an agent with 2 explicit cache points from settings:
```python
from pydantic_ai import Agent, CachePoint
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Instructions...',
    model_settings=AnthropicModelSettings(
        anthropic_cache_instructions=True,  # 1 cache point
        anthropic_cache_tool_definitions=True,  # 1 cache point
    ),
)


@agent.tool_plain
def search() -> str:
    return 'data'


# Already using 2 cache points (instructions + tools)
# Can add 2 more CachePoint markers (4 total limit)
result = agent.run_sync([
    'Context 1', CachePoint(),  # Oldest - will be removed
    'Context 2', CachePoint(),  # Will be kept (3rd point)
    'Context 3', CachePoint(),  # Will be kept (4th point)
    'Question',
])
# Final cache points: instructions + tools + Context 2 + Context 3 = 4
print(result.output)
usage = result.usage()
print(f'Cache write tokens: {usage.cache_write_tokens}')
print(f'Cache read tokens: {usage.cache_read_tokens}')
```
Key Points:
- System and tool cache points are always preserved
- `anthropic_cache` counts as 1 cache point, just like `anthropic_cache_instructions` and `anthropic_cache_tool_definitions`
- Excess `CachePoint` markers in messages are removed from oldest to newest when the limit is exceeded
- This ensures critical caching (instructions/tools) is maintained while still benefiting from message-level caching
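The budget rule above can be sketched as a small standalone function. This is an illustrative model of the described behaviour only, not Pydantic AI's actual implementation; the function name and signature are invented here:

```python
def trim_cache_points(
    settings_points: list[str], message_points: list[str], budget: int = 4
) -> list[str]:
    """Sketch of the trimming rule: settings-derived cache points are always
    preserved, and excess CachePoint markers are dropped oldest-first."""
    slots_for_messages = budget - len(settings_points)
    if slots_for_messages <= 0:
        return []
    # Keep only the most recent markers that still fit within the budget
    return message_points[-slots_for_messages:]


# Instructions + tools use 2 slots, so only the 2 newest of 3 markers survive
print(trim_cache_points(['instructions', 'tools'], ['ctx1', 'ctx2', 'ctx3']))
# → ['ctx2', 'ctx3']
```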
Anthropic supports automatic context compaction to manage long conversations. When input tokens exceed a configured threshold, the API automatically generates a summary that replaces older messages while preserving context.
The easiest way to enable compaction is with the AnthropicCompaction capability:
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicCompaction

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    capabilities=[AnthropicCompaction(token_threshold=100_000)],
)
```
The capability accepts:
- `token_threshold` (default: 150,000, minimum: 50,000): Compaction triggers when input tokens exceed this value.
- `instructions`: Custom instructions for how the summary should be generated.
- `pause_after_compaction`: When `True`, the response stops after the compaction block with `stop_reason='compaction'`, allowing explicit handling before continuing.
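The trigger semantics can be sketched as a tiny helper (an illustrative model of the rule stated above; the function name is invented here and is not part of Pydantic AI):

```python
def compaction_triggered(input_tokens: int, token_threshold: int = 150_000) -> bool:
    """Sketch of the trigger rule: compaction runs once input tokens exceed
    the threshold, and thresholds below the documented minimum are rejected."""
    if token_threshold < 50_000:
        raise ValueError('token_threshold must be at least 50,000')
    return input_tokens > token_threshold


print(compaction_triggered(120_000, token_threshold=100_000))  # → True
print(compaction_triggered(100_000, token_threshold=100_000))  # → False
```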
Alternatively, you can configure compaction directly via model settings using anthropic_context_management:
```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModelSettings

agent = Agent('anthropic:claude-sonnet-4-6')
result = agent.run_sync(
    'Hello!',
    model_settings=AnthropicModelSettings(
        anthropic_context_management={
            'edits': [
                {
                    'type': 'compact_20260112',
                    'trigger': {'type': 'input_tokens', 'value': 100_000},
                }
            ]
        }
    ),
)
```