Messages and chat history
Pydantic AI provides access to messages exchanged during an agent run. These messages can be used both to continue a coherent conversation and to understand how an agent performed.
After running an agent, you can access the messages exchanged during that run from the result object.
Both RunResult
(returned by Agent.run, Agent.run_sync)
and StreamedRunResult (returned by Agent.run_stream) have the following methods:
- all_messages(): returns all messages, including messages from prior runs. There's also a variant that returns JSON bytes, all_messages_json().
- new_messages(): returns only the messages from the current run. There's also a variant that returns JSON bytes, new_messages_json().
Example of accessing methods on a RunResult:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
result = agent.run_sync('Tell me a joke.')
print(result.output)
#> Did you hear about the toothpaste scandal? They called it Colgate.
# all messages from the run
print(result.all_messages())
"""
[
ModelRequest(
parts=[
UserPromptPart(
content='Tell me a joke.',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
instructions='Be a helpful assistant.',
run_id='...',
conversation_id='...',
),
ModelResponse(
parts=[
TextPart(
content='Did you hear about the toothpaste scandal? They called it Colgate.'
)
],
usage=RequestUsage(input_tokens=55, output_tokens=12),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
conversation_id='...',
),
]
"""
(This example is complete, it can be run “as is”)
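If you need the history as JSON bytes (the variants mentioned above), you can call the corresponding methods on the same result object. A minimal sketch continuing the run above:
# JSON bytes for the full history and for just this run's messages
history_json = result.all_messages_json()
new_json = result.new_messages_json()
print(type(history_json))
#> <class 'bytes'>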
Example of accessing methods on a StreamedRunResult:
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
async def main():
    async with agent.run_stream('Tell me a joke.') as result:
        # incomplete messages before the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                    )
                ],
                timestamp=datetime.datetime(...),
                instructions='Be a helpful assistant.',
                run_id='...',
                conversation_id='...',
            )
        ]
        """

        async for text in result.stream_text():
            print(text)
            #> Did you hear
            #> Did you hear about the toothpaste
            #> Did you hear about the toothpaste scandal? They called
            #> Did you hear about the toothpaste scandal? They called it Colgate.

        # complete messages once the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                    )
                ],
                timestamp=datetime.datetime(...),
                instructions='Be a helpful assistant.',
                run_id='...',
                conversation_id='...',
            ),
            ModelResponse(
                parts=[
                    TextPart(
                        content='Did you hear about the toothpaste scandal? They called it Colgate.'
                    )
                ],
                usage=RequestUsage(input_tokens=50, output_tokens=12),
                model_name='gpt-5.2',
                timestamp=datetime.datetime(...),
                run_id='...',
                conversation_id='...',
            ),
        ]
        """
(This example is complete, it can be run “as is” — you’ll need to add asyncio.run(main()) to run main)
The primary use of message histories in Pydantic AI is to maintain context across multiple agent runs.
To use existing messages in a run, pass them to the message_history parameter of
Agent.run, Agent.run_sync or
Agent.run_stream.
If message_history is set and not empty, a new system prompt is not generated — we assume the existing message history includes a system prompt. If your history comes from a source that doesn’t round-trip system prompts (a UI frontend, a database that didn’t persist them, a compaction pipeline), add the ReinjectSystemPrompt capability so the agent’s configured system_prompt is reinjected at the head of the first request when it’s missing.
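A minimal sketch of that case, assuming ReinjectSystemPrompt is importable from pydantic_ai.capabilities (like ProcessHistory used later on this page) and takes no required arguments:
from pydantic_ai import Agent
from pydantic_ai.capabilities import ReinjectSystemPrompt  # assumed import path

agent = Agent(
    'openai:gpt-5.2',
    system_prompt='Be a helpful assistant.',
    capabilities=[ReinjectSystemPrompt()],
)
# history_from_frontend is a hypothetical history that was persisted without its system prompt;
# the capability reinjects the agent's configured system_prompt at the head of the first request
# result = agent.run_sync('Explain?', message_history=history_from_frontend)
In the common case, you simply pass the previous run's messages back in: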
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
result1 = agent.run_sync('Tell me a joke.')
print(result1.output)
#> Did you hear about the toothpaste scandal? They called it Colgate.
result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.output)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.
print(result2.all_messages())
"""
[
ModelRequest(
parts=[
UserPromptPart(
content='Tell me a joke.',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
instructions='Be a helpful assistant.',
run_id='...',
conversation_id='...',
),
ModelResponse(
parts=[
TextPart(
content='Did you hear about the toothpaste scandal? They called it Colgate.'
)
],
usage=RequestUsage(input_tokens=55, output_tokens=12),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
conversation_id='...',
),
ModelRequest(
parts=[
UserPromptPart(
content='Explain?',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
instructions='Be a helpful assistant.',
run_id='...',
conversation_id='...',
),
ModelResponse(
parts=[
TextPart(
content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.'
)
],
usage=RequestUsage(input_tokens=56, output_tokens=26),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
conversation_id='...',
),
]
"""
(This example is complete, it can be run “as is”)
Each ModelRequest and ModelResponse carries two identifiers:
- run_id — unique per agent run; emitted on the OpenTelemetry agent run span as gen_ai.agent.call.id.
- conversation_id — shared across all runs that build on the same message_history; emitted as gen_ai.conversation.id.
A fresh conversation_id is generated on the first run, stamped onto every message produced by that run, and inherited by subsequent runs that pass the messages back via message_history. This means you can correlate traces from a multi-turn conversation in Logfire (or any OpenTelemetry backend) without tracking anything yourself — as long as the message history round-trips, the conversation ID does too.
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
result1 = agent.run_sync('Tell me a joke.')
result2 = agent.run_sync('Explain?', message_history=result1.all_messages())
assert result1.conversation_id == result2.conversation_id
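The same identifiers are stamped on the messages themselves, so a persisted history carries its correlation data with it. A small sketch reusing result1 and result2 from above:
# every message from the second run shares its conversation_id with the first run
print(result2.all_messages()[-1].conversation_id == result1.conversation_id)
#> True
# but each run gets its own run_id
print(result2.all_messages()[-1].run_id == result1.all_messages()[-1].run_id)
#> False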
To override or fork:
- Pass conversation_id='<your-id>' to use an ID from your own application (e.g. a chat thread ID stored in your database).
- Pass conversation_id='new' to start a fresh conversation that ignores any conversation_id already on message_history — useful for branching off an existing thread without making the caller generate an ID.
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2')
result1 = agent.run_sync('Tell me a joke.')
forked = agent.run_sync(
'Tell me a different joke.',
message_history=result1.all_messages(),
conversation_id='new',
)
assert forked.conversation_id != result1.conversation_id
The UI adapters auto-populate conversation_id from the protocol’s own thread/chat ID, so frontends using these protocols get correlation for free.
While maintaining conversation state in memory is enough for many applications, you may often want to store the message history of an agent run on disk or in a database. This might be for evals, for sharing data between Python and JavaScript/TypeScript, or any number of other use cases.
The intended way to do this is using a TypeAdapter.
We export ModelMessagesTypeAdapter that can be used for this, or you can create your own.
Here’s an example showing how:
from pydantic_core import to_jsonable_python

from pydantic_ai import (
    Agent,
    ModelMessagesTypeAdapter,  # or build your own TypeAdapter, as shown below
)

agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
result1 = agent.run_sync('Tell me a joke.')
history_step_1 = result1.all_messages()
as_python_objects = to_jsonable_python(history_step_1)  # or serialize to/from JSON directly, as shown below
same_history_as_step_1 = ModelMessagesTypeAdapter.validate_python(as_python_objects)
result2 = agent.run_sync(  # continue the conversation with the restored history
    'Tell me a different joke.', message_history=same_history_as_step_1
)
Alternatively, you can create a TypeAdapter from scratch:
from pydantic import TypeAdapter
from pydantic_ai import ModelMessage
ModelMessagesTypeAdapter = TypeAdapter(list[ModelMessage])
Alternatively you can serialize to/from JSON directly:
from pydantic_core import to_json
...
as_json_objects = to_json(history_step_1)
same_history_as_step_1 = ModelMessagesTypeAdapter.validate_json(as_json_objects)
You can now continue the conversation with history same_history_as_step_1 despite creating a new agent run.
(This example is complete, it can be run “as is”)
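For example, the JSON form can be persisted to disk and loaded back in a later process to resume the conversation. A minimal sketch (the file name is arbitrary):
from pathlib import Path

from pydantic_core import to_json

from pydantic_ai import Agent, ModelMessagesTypeAdapter

agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
result1 = agent.run_sync('Tell me a joke.')

# persist the history as JSON bytes
Path('history.json').write_bytes(to_json(result1.all_messages()))

# ... later, possibly in another process ...
restored = ModelMessagesTypeAdapter.validate_json(Path('history.json').read_bytes())
result2 = agent.run_sync('Explain?', message_history=restored)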
Since messages are defined by simple dataclasses, you can manually create and manipulate them, e.g. for testing.
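For example, a short history can be built by hand from the message dataclasses (the same constructors appear in the processor examples further down):
from pydantic_ai import ModelRequest, ModelResponse, TextPart, UserPromptPart

handmade_history = [
    ModelRequest(parts=[UserPromptPart(content='Tell me a joke.')]),
    ModelResponse(parts=[TextPart(content='Did you hear about the toothpaste scandal? They called it Colgate.')]),
]
# handmade_history can be passed as message_history to any agent run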
The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.
In the example below, we reuse the messages from the first agent run, which uses the openai:gpt-5.2 model, in a second agent run using the google-gla:gemini-3-pro-preview model.
from pydantic_ai import Agent
agent = Agent('openai:gpt-5.2', instructions='Be a helpful assistant.')
result1 = agent.run_sync('Tell me a joke.')
print(result1.output)
#> Did you hear about the toothpaste scandal? They called it Colgate.
result2 = agent.run_sync(
'Explain?',
model='google-gla:gemini-3-pro-preview',
message_history=result1.new_messages(),
)
print(result2.output)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.
print(result2.all_messages())
"""
[
ModelRequest(
parts=[
UserPromptPart(
content='Tell me a joke.',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
instructions='Be a helpful assistant.',
run_id='...',
conversation_id='...',
),
ModelResponse(
parts=[
TextPart(
content='Did you hear about the toothpaste scandal? They called it Colgate.'
)
],
usage=RequestUsage(input_tokens=55, output_tokens=12),
model_name='gpt-5.2',
timestamp=datetime.datetime(...),
run_id='...',
conversation_id='...',
),
ModelRequest(
parts=[
UserPromptPart(
content='Explain?',
timestamp=datetime.datetime(...),
)
],
timestamp=datetime.datetime(...),
instructions='Be a helpful assistant.',
run_id='...',
conversation_id='...',
),
ModelResponse(
parts=[
TextPart(
content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.'
)
],
usage=RequestUsage(input_tokens=56, output_tokens=26),
model_name='gemini-3-pro-preview',
timestamp=datetime.datetime(...),
run_id='...',
conversation_id='...',
),
]
"""
Sometimes you may want to modify the message history before it's sent to the model. This could be for privacy reasons (filtering out sensitive information), to save costs on tokens, to give less context to the LLM, or to apply custom processing logic.
Pydantic AI provides the ProcessHistory capability that allows
you to intercept and modify the message history before each model request.
Each ProcessHistory wraps a callable that takes a list of
ModelMessage and returns a modified list of the same type.
Processors are applied in sequence, and each can be either synchronous or asynchronous.
from pydantic_ai import (
    Agent,
    ModelMessage,
    ModelRequest,
    ModelResponse,
    TextPart,
    UserPromptPart,
)
from pydantic_ai.capabilities import ProcessHistory

def filter_responses(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Remove all ModelResponse messages, keeping only ModelRequest messages."""
    return [msg for msg in messages if isinstance(msg, ModelRequest)]

# Create agent with history processor
agent = Agent('openai:gpt-5.2', capabilities=[ProcessHistory(filter_responses)])

# Example: Create some conversation history
message_history = [
    ModelRequest(parts=[UserPromptPart(content='What is 2+2?')]),
    ModelResponse(parts=[TextPart(content='2+2 equals 4')]),  # This will be filtered out
]
# When you run the agent, the history processor will filter out ModelResponse messages
# result = agent.run_sync('What about 3+3?', message_history=message_history)
You can use a history processor to keep only the most recent messages:
from pydantic_ai import Agent, ModelMessage
from pydantic_ai.capabilities import ProcessHistory
async def keep_recent_messages(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Keep only the last 5 messages to manage token usage."""
    return messages[-5:] if len(messages) > 5 else messages

agent = Agent('openai:gpt-5.2', capabilities=[ProcessHistory(keep_recent_messages)])

# Example: Even with a long conversation history, only the last 5 messages are sent to the model
long_conversation_history: list[ModelMessage] = []  # Your long conversation history here
# result = agent.run_sync('What did we discuss?', message_history=long_conversation_history)
History processors can optionally accept a RunContext parameter to access
additional information about the current run, such as dependencies, model information, and usage statistics:
from pydantic_ai import Agent, ModelMessage, RunContext
from pydantic_ai.capabilities import ProcessHistory
def context_aware_processor(
    ctx: RunContext[None],
    messages: list[ModelMessage],
) -> list[ModelMessage]:
    # Access current usage
    current_tokens = ctx.usage.total_tokens
    # Filter messages based on context
    if current_tokens > 1000:
        return messages[-3:]  # Keep only recent messages when token usage is high
    return messages

agent = Agent('openai:gpt-5.2', capabilities=[ProcessHistory(context_aware_processor)])
This allows for more sophisticated message processing based on the current state of the agent run.
Use an LLM to summarize older messages to preserve context while reducing token usage.
from pydantic_ai import Agent, ModelMessage
from pydantic_ai.capabilities import ProcessHistory
# Use a cheaper model to summarize old messages.
summarize_agent = Agent(
    'openai:gpt-5-mini',
    instructions="""
    Summarize this conversation, omitting small talk and unrelated topics.
    Focus on the technical discussion and next steps.
    """,
)

async def summarize_old_messages(messages: list[ModelMessage]) -> list[ModelMessage]:
    # Summarize the oldest 10 messages
    if len(messages) > 10:
        oldest_messages = messages[:10]
        summary = await summarize_agent.run(message_history=oldest_messages)
        # Return the summary followed by the most recent message
        return summary.new_messages() + messages[-1:]
    return messages
agent = Agent('openai:gpt-5.2', capabilities=[ProcessHistory(summarize_old_messages)])
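Usage follows the same pattern as the other processors. Once the history passed to a run grows beyond 10 messages, the oldest 10 are replaced by the summary before the request is sent:
long_conversation_history: list[ModelMessage] = []  # your long conversation history here
# result = agent.run_sync('What did we discuss?', message_history=long_conversation_history)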
You can test what messages are actually sent to the model provider using
FunctionModel:
import pytest
from pydantic_ai import (
    Agent,
    ModelMessage,
    ModelRequest,
    ModelResponse,
    TextPart,
    UserPromptPart,
)
from pydantic_ai.capabilities import ProcessHistory
from pydantic_ai.models.function import AgentInfo, FunctionModel

@pytest.fixture
def received_messages() -> list[ModelMessage]:
    return []

@pytest.fixture
def function_model(received_messages: list[ModelMessage]) -> FunctionModel:
    def capture_model_function(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
        # Capture the messages that the provider actually receives
        received_messages.clear()
        received_messages.extend(messages)
        return ModelResponse(parts=[TextPart(content='Provider response')])

    return FunctionModel(capture_model_function)

def test_history_processor(function_model: FunctionModel, received_messages: list[ModelMessage]):
    def filter_responses(messages: list[ModelMessage]) -> list[ModelMessage]:
        return [msg for msg in messages if isinstance(msg, ModelRequest)]

    agent = Agent(function_model, capabilities=[ProcessHistory(filter_responses)])
    message_history = [
        ModelRequest(parts=[UserPromptPart(content='Question 1')]),
        ModelResponse(parts=[TextPart(content='Answer 1')]),
    ]
    agent.run_sync('Question 2', message_history=message_history)
    assert received_messages == [
        ModelRequest(parts=[UserPromptPart(content='Question 1')]),
        ModelRequest(parts=[UserPromptPart(content='Question 2')]),
    ]
You can also use multiple processors:
from pydantic_ai import Agent, ModelMessage, ModelRequest
from pydantic_ai.capabilities import ProcessHistory
def filter_responses(messages: list[ModelMessage]) -> list[ModelMessage]:
    return [msg for msg in messages if isinstance(msg, ModelRequest)]

def summarize_old_messages(messages: list[ModelMessage]) -> list[ModelMessage]:
    return messages[-5:]

agent = Agent(
    'openai:gpt-5.2',
    capabilities=[ProcessHistory(filter_responses), ProcessHistory(summarize_old_messages)],
)
In this case, the filter_responses processor will be applied first, and the
summarize_old_messages processor will be applied second.
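To see the ordering concretely, you can call the two functions by hand on a small history, continuing the example above. This is plain Python, independent of any agent run (the extra message classes are imported here only for the illustration):
from pydantic_ai import ModelResponse, TextPart, UserPromptPart

sample_history: list[ModelMessage] = [
    ModelRequest(parts=[UserPromptPart(content='Question 1')]),
    ModelResponse(parts=[TextPart(content='Answer 1')]),
    ModelRequest(parts=[UserPromptPart(content='Question 2')]),
]

# filter_responses runs first and drops the ModelResponse...
filtered = filter_responses(sample_history)
# ...then summarize_old_messages keeps at most the last 5 of what remains
final = summarize_old_messages(filtered)
print(len(final))
#> 2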
For a more complete example of using messages in conversations, see the chat app example.