
Pydantic AI

Production-grade applications with Generative AI

Type-safe Python framework for building agents and LLM applications. Model-agnostic with built-in validation, structured outputs, and seamless observability.

Get started

Companies that trust Pydantic AI

akamai
atlassian
banorte
cisco
duolingo
expedia
janestreet
jpmorganchase
meta
microsoft
nato
nvidia
roche
seatlech
tngtech
walmart
xero

Monitor your AI agents with Logfire

Build intelligent AI agents

Create agents that can reason, use tools, and interact with external systems. Pydantic AI provides a modular, type-safe platform for building production-ready AI agents with any model provider.

Built-in integration with Pydantic Logfire for complete visibility into agent runs. Trace LLM calls, track token costs, debug failures, and understand latency across your entire AI stack.

import logfire
from pydantic import BaseModel
from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai()

class MyModel(BaseModel):
    city: str
    country: str

agent = Agent("openai:gpt-5.2", output_type=MyModel)

if __name__ == "__main__":
    result = agent.run_sync("The windy city in the US of A.")
    logfire.info(str(result.output))

Model Context Protocol

Connect to MCP servers

The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. Pydantic AI has built-in support for MCP servers, enabling your agents to access file systems, databases, APIs, and more.

Streaming

Stream AI agent events to your frontend in real time

Stream text, tool calls, and reasoning to your frontend as they happen. Pydantic AI offers out-of-the-box support for the AG-UI protocol for standardized agent-to-UI communication, as well as the Vercel AI Data Stream Protocol.

Durable Execution

Build fault-tolerant agents

Build durable agents that preserve their progress across transient API failures and application errors or restarts. Handle long-running, asynchronous, and human-in-the-loop workflows with production-grade reliability.

Durable agents have full support for streaming and MCP, with the added benefit of fault tolerance.

Pydantic AI natively supports three durable execution solutions. These integrations only use Pydantic AI's public interface, so they also serve as a reference for integrating with other durable systems.

Why Pydantic AI?

Validated structured outputs

Leverage Pydantic validation to guarantee type safety on structured outputs. Trusted by OpenAI, Anthropic, Google, and millions of developers.
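The type-safety guarantee comes from ordinary Pydantic validation: output that doesn't match the declared schema fails loudly instead of flowing downstream. A minimal sketch using Pydantic alone:

```python
from pydantic import BaseModel, ValidationError


class CityLocation(BaseModel):
    city: str
    country: str


# Well-formed output validates into a fully typed object.
loc = CityLocation.model_validate_json('{"city": "Chicago", "country": "USA"}')
print(loc.country)  # USA

# Output missing a required field raises instead of passing silently.
try:
    CityLocation.model_validate_json('{"city": "Chicago"}')
except ValidationError as exc:
    print(exc.errors()[0]["loc"])  # ('country',)
```

When a `BaseModel` is used as an agent's `output_type`, this same validation runs on every model response.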

Integrated AI model routing

Built-in cost control and model routing with no performance overhead, via Pydantic AI Gateway. Bring your own keys (BYOK) or use built-in providers for single-key access to every model.

Production observability

Built-in integration with Pydantic Logfire for real-time debugging, tracing, and cost tracking, even on massive AI workloads.

Testing & evaluation

Test your agents with Pydantic Evals. Create datasets, run evaluations, track model performance, and visualise results in your CLI or in Pydantic Logfire.

Streaming support

Stream responses token-by-token for real-time user feedback. Access structured data as it arrives.

Multi-agent workflows

Build complex systems with multiple specialized agents. Coordinate with graph-based workflows.

Function Tools

Give agents access to your code

Use the @agent.tool or @agent.tool_plain decorators to register functions as tools an agent can call. Pydantic AI automatically generates JSON schemas from your type hints and docstrings, enabling models to call your functions correctly.

import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    "gateway/gemini-3-pro-preview",
    deps_type=str,
    system_prompt=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)


@agent.tool_plain
def roll_dice() -> int:
    """Roll a six-sided die and return the result."""
    return random.randint(1, 6)


@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


dice_result = agent.run_sync("My guess is 4", deps="Anne")
print(dice_result.output)
# > Congratulations Anne, you guessed correctly! You're a winner!

Already use Pydantic AI? Try Logfire!

Part of The Pydantic Stack

Pydantic AI integrates seamlessly with Pydantic Logfire for complete observability, Pydantic AI Gateway for intelligent model routing, and Pydantic Evals for systematic evaluation. Build with AI at scale, without fail.

Ready to build?

Open source (under MIT license). Install with uv (or pip) and start building production-grade AI applications today.
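For reference, a typical install, using the package name as published on PyPI:

```shell
uv add pydantic-ai      # with uv
# or
pip install pydantic-ai
```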

Get started