Pydantic AI now supports Vercel AI frontends natively via the VercelAIAdapter class.
Pydantic AI and Vercel AI Elements are a popular combination for building AI chatbots. The catch is that they stream data using incompatible event formats, so until now you had to write your own translation code between Pydantic AI events and the format Vercel AI expects.
Previously, that translation code had to live in your agent endpoint:
@app.post('/chat')
async def chat(request: Request):
    # 100 lines of translation code...
Now that Pydantic AI supports the Vercel AI Data Stream Protocol, you can use the dispatch_request() method to handle the event translation for you:
@app.post('/chat')
async def chat(request: Request) -> Response:
    return await VercelAIAdapter.dispatch_request(request, agent=agent)
Use with Starlette-based web frameworks
If your app uses FastAPI or another Starlette-based web framework, the VercelAIAdapter.dispatch_request(request, agent=agent) class method parses the request body, runs the agent with streaming, and encodes the response as server-sent events (SSE), as shown in the code snippet below.
from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import Response

from pydantic_ai import Agent
from pydantic_ai.ui.vercel_ai import VercelAIAdapter

agent = Agent('openai:gpt-5')
app = FastAPI()

@app.post('/chat')
async def chat(request: Request) -> Response:
    return await VercelAIAdapter.dispatch_request(request, agent=agent)
Reference docs available at: ai.pydantic.dev/ui/vercel-ai.
Use VercelAIAdapter methods directly
For backends built on non-Starlette frameworks such as Django or Flask, or for applications that need more granular control over input and output, you can create a VercelAIAdapter instance and use its individual methods to build a custom integration.
Check out the official docs for details on how to use the individual methods.
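As a rough illustration, a custom integration parses the raw request body into run input, constructs an adapter, and encodes the resulting event stream as SSE. The sketch below assumes build_run_input() accepts the raw body bytes and that the adapter is constructed from the agent and run input; the exact signatures may differ, so treat this as a sketch and check the reference docs:

from pydantic_ai import Agent
from pydantic_ai.ui.vercel_ai import VercelAIAdapter

agent = Agent('openai:gpt-5')

async def handle_chat(body: bytes) -> list[str]:
    # Parse the Vercel AI request body into agent run input (assumed signature).
    run_input = VercelAIAdapter.build_run_input(body)
    adapter = VercelAIAdapter(agent=agent, run_input=run_input)
    # Run the agent, translate its events to Vercel AI events, and encode them as SSE.
    # Collected into a list here for simplicity; a real app would stream each chunk
    # to the client as it is produced.
    return [chunk async for chunk in adapter.encode_stream(adapter.run_stream())]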
Why we built the VercelAIAdapter interface
When building a chat app or other interactive frontend for an AI agent, your backend needs to receive agent run input (like a chat message or a complete message history) from the frontend and stream the agent's events (like text, thinking, and tool calls) back to it in real time. While your frontend could consume Pydantic AI's ModelRequest and AgentStreamEvent directly, you will typically want a UI event stream protocol that your frontend framework supports natively. That's why we built the Vercel AI Data Stream Protocol integration: to bridge the two frameworks seamlessly.
Give it a try and let us know what you think.
FAQ
Does this work with Django or Flask?
Yes, but you'll need to use the adapter's individual methods instead of the dispatch_request() convenience method, which only works with Starlette-based frameworks like FastAPI. The docs show how to use the build_run_input(), run_stream(), and encode_stream() methods directly, which also lets you modify events before they reach the frontend, as sketched below.
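For instance, a thin wrapper around run_stream() gives you a place to inspect, modify, or drop events before they are encoded. The sketch below builds on the custom-adapter sketch above and is an assumption about how you might structure this rather than an official pattern:

from collections.abc import AsyncIterator

from pydantic_ai.ui.vercel_ai import VercelAIAdapter

async def filtered_events(adapter: VercelAIAdapter) -> AsyncIterator:
    async for event in adapter.run_stream():
        # Inspect, modify, or drop events here before they are encoded for the
        # frontend, e.g. redact tool arguments or skip thinking parts.
        yield event

# Pass the wrapped stream to encode_stream() instead of the raw run_stream():
# sse_stream = adapter.encode_stream(filtered_events(adapter))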
What events get streamed to the frontend?
Everything your agent does: text chunks as they are generated, tool calls with their arguments, thinking steps, errors, and completion events. The adapter transforms each Pydantic AI event type into its Vercel AI equivalent.
What's the on_complete callback for?
It lets you access the agent output and message history, and inject additional events after the agent finishes. Pass a callback function to dispatch_request() or run_stream() that receives the completed AgentRunResult and optionally yields more Vercel AI events. Useful for logging, analytics, storing conversations, or triggering follow-up actions.
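As a hedged sketch (the exact callback signature is an assumption based on the description here), storing or logging the conversation once the run finishes means the chat endpoint from the FastAPI example above becomes:

from pydantic_ai.agent import AgentRunResult

async def store_conversation(result: AgentRunResult) -> None:
    # Log the final output and collect the full message history once the run finishes.
    print(result.output)
    messages = result.all_messages()  # hand these to your own storage layer
    ...

@app.post('/chat')
async def chat(request: Request) -> Response:
    return await VercelAIAdapter.dispatch_request(request, agent=agent, on_complete=store_conversation)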
Is there performance overhead?
Minimal. Events are transformed as they stream through, with no buffering. The overhead is just the event transformation itself.
Does this work with Pydantic Logfire?
Yes. Logfire supports cross-language observability through OpenTelemetry, which means you can trace requests across your entire application stack. Logfire gives you end-to-end visibility for debugging streaming issues, monitoring performance, and understanding how users interact with your application's AI features.
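For example, enabling instrumentation for the FastAPI example above might look like the following. This assumes you already have a Logfire project configured and a Logfire version that ships the instrument_pydantic_ai() and instrument_fastapi() helpers:

import logfire

logfire.configure()
logfire.instrument_pydantic_ai()  # trace agent runs and model requests
logfire.instrument_fastapi(app)   # trace the /chat endpoint that streams to the frontend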