TypeAdapter
Bases: Generic[T]
Usage docs: https://docs.pydantic.dev/2.10/concepts/type_adapter/
Type adapters provide a flexible way to perform validation and serialization based on a Python type.
A TypeAdapter instance exposes some of the functionality from BaseModel instance methods
for types that do not have such methods (such as dataclasses, primitive types, and more).
Note: TypeAdapter instances are not types, and cannot be used as type annotations for fields.
Parameters:
- type — The type associated with the TypeAdapter.
- config — Configuration for the TypeAdapter; should be a dictionary conforming to ConfigDict.
- _parent_depth — Depth at which to search for the parent frame. This frame is used when resolving forward annotations during schema building, by looking for the globals and locals of this frame. Defaults to 2, which will result in the frame where the TypeAdapter was instantiated.
- module — The module passed to the plugin, if provided.
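For instance, adapting a plain `list[int]` gives validation and serialization without any BaseModel involved (a minimal sketch, assuming pydantic v2 is installed):

```python
from pydantic import TypeAdapter

# Adapt a plain type that has no BaseModel-style methods of its own.
ta = TypeAdapter(list[int])

# Validation coerces compatible input to the adapted type.
assert ta.validate_python(["1", 2]) == [1, 2]

# Serialization is available as well.
assert ta.dump_json([1, 2]) == b"[1,2]"
```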
Compatibility with mypy
Depending on the type used, mypy might raise an error when instantiating a TypeAdapter. As a workaround, you can explicitly
annotate your variable:
```python
from typing import Union

from pydantic import TypeAdapter

ta: TypeAdapter[Union[str, int]] = TypeAdapter(Union[str, int])  # type: ignore[arg-type]
```
Namespace management nuances and implementation details
Here, we collect some notes on namespace management, and subtle differences from BaseModel:
BaseModel uses its own __module__ to find out where it was defined
and then looks for symbols to resolve forward references in those globals.
On the other hand, TypeAdapter can be initialized with arbitrary objects,
which may not be types and thus do not have a __module__ available.
So instead we look at the globals in our parent stack frame.
It is expected that the ns_resolver used during schema building will have the correct
namespace for the type we’re adapting. See the source code for TypeAdapter.__init__
and TypeAdapter.rebuild for various ways to construct this namespace.
This works for the case where this function is called in a module that has the target of forward references in its scope, but does not always work for more complex cases.
For example, take the following:

```python
# a.py
from typing import Dict, List

IntList = List[int]
OuterDict = Dict[str, 'IntList']
```

```python
# another module
from a import OuterDict

from pydantic import TypeAdapter

IntList = int  # replaces the symbol the forward reference is looking for

v = TypeAdapter(OuterDict)
v.validate_python({'x': 1})  # should fail but doesn't
```
If OuterDict were a BaseModel, this would work because it would resolve
the forward reference within the a.py namespace.
But TypeAdapter(OuterDict) can’t determine what module OuterDict came from.
In other words, the assumption that all forward references exist in the module
we are being called from is not always true. It holds most of the time, and works
fine for recursive models and the like. BaseModel’s behavior isn’t perfect either
and can break in similar ways, so there is no strict right or wrong between the
two approaches. But at the very least, this behavior is subtly different from BaseModel’s.
core_schema — Type: CoreSchema
validator — Type: SchemaValidator | PluggableSchemaValidator
serializer — Type: SchemaSerializer
pydantic_complete — Type: bool; Default: False
```python
def rebuild(
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: _namespace_utils.MappingNamespace | None = None,
) -> bool | None
```
Try to rebuild the pydantic-core schema for the adapter’s type.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
bool | None — Returns None if the schema is already “complete” and rebuilding was not required.
bool | None — If rebuilding was required, returns True if rebuilding was successful, otherwise False.
- force — Whether to force the rebuilding of the type adapter’s schema; defaults to False.
- raise_errors — Whether to raise errors; defaults to True.
- _parent_namespace_depth — Depth at which to search for the parent frame. This frame is used when resolving forward annotations during schema rebuilding, by looking for the locals of this frame. Defaults to 2, which will result in the frame where the method was called.
- _types_namespace — An explicit types namespace to use, instead of using the local namespace from the parent frame. Defaults to None.
```python
def validate_python(
    object: Any,
    strict: bool | None = None,
    from_attributes: bool | None = None,
    context: dict[str, Any] | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
) -> T
```
Validate a Python object against the model.
T — The validated object.
- object — The Python object to validate against the model.
- strict — Whether to strictly check types.
- from_attributes — Whether to extract data from object attributes.
- context — Additional context to pass to the validator.
- experimental_allow_partial — Experimental: whether to enable partial validation, e.g. to process streams.
  - False / ‘off’: Default behavior, no partial validation.
  - True / ‘on’: Enable partial validation.
  - ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
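For example, strict tightens the type checks that lax-mode validation would otherwise relax:

```python
from pydantic import TypeAdapter, ValidationError

ta = TypeAdapter(dict[str, int])

# Lax (default) mode coerces numeric strings to int.
assert ta.validate_python({"a": "1"}) == {"a": 1}

# Strict mode rejects the same input.
try:
    ta.validate_python({"a": "1"}, strict=True)
except ValidationError as exc:
    assert exc.error_count() == 1
```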
```python
def validate_json(
    data: str | bytes | bytearray,
    strict: bool | None = None,
    context: dict[str, Any] | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
) -> T
```
Usage docs: https://docs.pydantic.dev/2.10/concepts/json/#json-parsing
Validate a JSON string or bytes against the model.
T — The validated object.
- data — The JSON data to validate against the model.
- strict — Whether to strictly check types.
- context — Additional context to use during validation.
- experimental_allow_partial — Experimental: whether to enable partial validation, e.g. to process streams.
  - False / ‘off’: Default behavior, no partial validation.
  - True / ‘on’: Enable partial validation.
  - ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
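A small example of both the success and failure paths:

```python
from pydantic import TypeAdapter, ValidationError

ta = TypeAdapter(dict[str, list[int]])

# Valid JSON that matches the adapted type is parsed and validated in one step.
assert ta.validate_json('{"x": [1, 2]}') == {"x": [1, 2]}

# JSON that parses but does not match the type raises a ValidationError.
try:
    ta.validate_json('{"x": "oops"}')
except ValidationError as exc:
    assert exc.error_count() == 1
```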
```python
def validate_strings(
    obj: Any,
    strict: bool | None = None,
    context: dict[str, Any] | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
) -> T
```
Validate an object containing string data against the model.
T — The validated object.
- obj — The object containing string data to validate.
- strict — Whether to strictly check types.
- context — Additional context to use during validation.
- experimental_allow_partial — Experimental: whether to enable partial validation, e.g. to process streams.
  - False / ‘off’: Default behavior, no partial validation.
  - True / ‘on’: Enable partial validation.
  - ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
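For example, string leaves are parsed according to the adapted type, as data would arrive from query parameters or environment variables:

```python
from datetime import date

from pydantic import TypeAdapter

# validate_strings treats every leaf value as a string and parses it
# according to the adapted type.
ta = TypeAdapter(dict[str, date])
assert ta.validate_strings({"d": "2024-01-01"}) == {"d": date(2024, 1, 1)}
```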
```python
def get_default_value(
    strict: bool | None = None,
    context: dict[str, Any] | None = None,
) -> Some[T] | None
```
Get the default value for the wrapped type.
Some[T] | None — The default value wrapped in a Some if there is one or None if not.
- strict — Whether to strictly check types.
- context — Additional context to pass to the validator.
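A sketch using a default attached via Field metadata; the Some wrapper is what distinguishes “the default is None” from “there is no default”:

```python
from typing import Annotated

from pydantic import Field, TypeAdapter

# A default attached via Field metadata is reported wrapped in Some.
ta = TypeAdapter(Annotated[int, Field(default=42)])
default = ta.get_default_value()
assert default is not None
assert default.value == 42

# A type without a default yields None instead of Some.
assert TypeAdapter(int).get_default_value() is None
```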
```python
def dump_python(
    instance: T,
    mode: Literal['json', 'python'] = 'python',
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool = False,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    serialize_as_any: bool = False,
    context: dict[str, Any] | None = None,
) -> Any
```
Dump an instance of the adapted type to a Python object.
Any — The serialized object.
- instance — The Python object to serialize.
- mode — The output format.
- include — Fields to include in the output.
- exclude — Fields to exclude from the output.
- by_alias — Whether to use alias names for field names.
- exclude_unset — Whether to exclude unset fields.
- exclude_defaults — Whether to exclude fields with default values.
- exclude_none — Whether to exclude fields with None values.
- round_trip — Whether to output the serialized data in a way that is compatible with deserialization.
- warnings — How to handle serialization errors. False/“none” ignores them, True/“warn” logs errors, “error” raises a PydanticSerializationError.
- serialize_as_any — Whether to serialize fields with duck-typing serialization behavior.
- context — Additional context to pass to the serializer.
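The mode parameter controls whether rich Python types survive in the output:

```python
from datetime import date

from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, date])
value = {"d": date(2024, 1, 1)}

# 'python' mode (the default) keeps rich Python types.
assert ta.dump_python(value) == {"d": date(2024, 1, 1)}

# 'json' mode produces JSON-compatible values instead.
assert ta.dump_python(value, mode="json") == {"d": "2024-01-01"}
```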
```python
def dump_json(
    instance: T,
    indent: int | None = None,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool = False,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    serialize_as_any: bool = False,
    context: dict[str, Any] | None = None,
) -> bytes
```
Usage docs: https://docs.pydantic.dev/2.10/concepts/json/#json-serialization
Serialize an instance of the adapted type to JSON.
bytes — The JSON representation of the given instance as bytes.
- instance — The instance to be serialized.
- indent — Number of spaces for JSON indentation.
- include — Fields to include.
- exclude — Fields to exclude.
- by_alias — Whether to use alias names for field names.
- exclude_unset — Whether to exclude unset fields.
- exclude_defaults — Whether to exclude fields with default values.
- exclude_none — Whether to exclude fields with a value of None.
- round_trip — Whether to serialize and deserialize the instance to ensure round-tripping.
- warnings — How to handle serialization errors. False/“none” ignores them, True/“warn” logs errors, “error” raises a PydanticSerializationError.
- serialize_as_any — Whether to serialize fields with duck-typing serialization behavior.
- context — Additional context to pass to the serializer.
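For example, indent switches the output from compact to pretty-printed JSON bytes:

```python
from pydantic import TypeAdapter

ta = TypeAdapter(list[int])

# Compact output by default.
assert ta.dump_json([1, 2]) == b"[1,2]"

# indent pretty-prints the JSON output.
assert ta.dump_json([1, 2], indent=2) == b"[\n  1,\n  2\n]"
```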
```python
def json_schema(
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
    mode: JsonSchemaMode = 'validation',
) -> dict[str, Any]
```
Generate a JSON schema for the adapted type.
dict[str, Any] — The JSON schema for the model as a dictionary.
- by_alias — Whether to use alias names for field names.
- ref_template — The format string used for generating $ref strings.
- schema_generator — The generator class used for creating the schema.
- mode — The mode to use for schema generation.
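A minimal example for a non-model type:

```python
from pydantic import TypeAdapter

# JSON schema generation works for arbitrary adapted types, not just models.
ta = TypeAdapter(list[int])
assert ta.json_schema() == {"type": "array", "items": {"type": "integer"}}
```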
```python
@staticmethod
def json_schemas(
    inputs: Iterable[tuple[JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]]],
    by_alias: bool = True,
    title: str | None = None,
    description: str | None = None,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
) -> tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue]
```
Generate a JSON schema including definitions from multiple type adapters.
tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue] — A tuple where:
- The first element is a dictionary whose keys are tuples of JSON schema key type and JSON mode, and whose values are the JSON schema corresponding to that pair of inputs. (These schemas may have JsonRef references to definitions that are defined in the second returned element.)
- The second element is a JSON schema containing all definitions referenced in the first returned element, along with the optional title and description keys.
- inputs — Inputs to schema generation. The first two items will form the keys of the (first) output mapping; the type adapters will provide the core schemas that get converted into definitions in the output JSON schema.
- by_alias — Whether to use alias names.
- title — The title for the schema.
- description — The description for the schema.
- ref_template — The format string used for generating $ref strings.
- schema_generator — The generator class used for creating the schema.
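A sketch combining two adapters; the key names used here are arbitrary illustrations:

```python
from pydantic import TypeAdapter

# Each input is (key, mode, adapter); key and mode form the output mapping key.
inputs = [
    ("IntKey", "validation", TypeAdapter(int)),
    ("StrListKey", "serialization", TypeAdapter(list[str])),
]
schemas, combined = TypeAdapter.json_schemas(inputs, title="Combined")

# Per-input schemas are keyed by (key, mode).
assert schemas[("IntKey", "validation")]["type"] == "integer"

# The second element carries shared definitions plus the optional title/description.
assert combined["title"] == "Combined"
```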