TypeAdapter
Bases: Generic[T]
Type adapters provide a flexible way to perform validation and serialization based on a Python type.
A TypeAdapter instance exposes some of the functionality from BaseModel instance methods
for types that do not have such methods (such as dataclasses, primitive types, and more).
Note: TypeAdapter instances are not types, and cannot be used as type annotations for fields.
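For illustration, a minimal sketch of typical usage with a plain type (the type and values here are arbitrary):

```python
from pydantic import TypeAdapter

# Adapt a plain standard-library type that has no BaseModel methods of its own.
ta = TypeAdapter(list[int])

# Validation coerces compatible inputs in the default (lax) mode.
print(ta.validate_python(['1', 2, '3']))  # [1, 2, 3]

# Serialization is available through the same adapter.
print(ta.dump_json([1, 2, 3]))  # b'[1,2,3]'
```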
type : Any
The type associated with the TypeAdapter.
config : ConfigDict | None Default: None
Configuration for the TypeAdapter, should be a dictionary conforming to
ConfigDict.
_parent_depth : int Default: 2
Depth at which to search for the parent frame. This frame is used when
resolving forward annotations during schema building, by looking for the globals and locals of this
frame. Defaults to 2, which will result in the frame where the TypeAdapter was instantiated.
module : str | None Default: None
The module that is passed to the plugin, if provided.
core_schema : CoreSchema
The core schema for the type.
validator : SchemaValidator | PluggableSchemaValidator
The schema validator for the type.
serializer : SchemaSerializer
The schema serializer for the type.
pydantic_complete : bool
Whether the core schema for the type was successfully built.
Compatibility with mypy
Depending on the type used, mypy might raise an error when instantiating a TypeAdapter. As a workaround, you can explicitly
annotate your variable:
from typing import Union
from pydantic import TypeAdapter
ta: TypeAdapter[Union[str, int]] = TypeAdapter(Union[str, int]) # type: ignore[arg-type]
Namespace management nuances and implementation details
Here, we collect some notes on namespace management, and subtle differences from BaseModel:
BaseModel uses its own __module__ to find out where it was defined
and then looks for symbols to resolve forward references in those globals.
On the other hand, TypeAdapter can be initialized with arbitrary objects,
which may not be types and thus do not have a __module__ available.
So instead we look at the globals in our parent stack frame.
It is expected that the ns_resolver used during schema building has the correct
namespace for the type being adapted. See the source code of TypeAdapter.__init__
and TypeAdapter.rebuild for the various ways this namespace can be constructed.
This works for the case where this function is called in a module that has the target of forward references in its scope, but does not always work for more complex cases.
For example, take the following:
# a.py
IntList = list[int]
OuterDict = dict[str, 'IntList']

# b.py
from a import OuterDict
from pydantic import TypeAdapter

IntList = int  # replaces the symbol the forward reference is looking for
v = TypeAdapter(OuterDict)
v.validate_python({'x': 1})  # should fail but doesn't
If OuterDict were a BaseModel, this would work because it would resolve
the forward reference within the a.py namespace.
But TypeAdapter(OuterDict) can’t determine what module OuterDict came from.
In other words, the assumption that all forward references exist in the
module we are being called from is not technically always true.
Although most of the time it is and it works fine for recursive models and such,
BaseModel’s behavior isn’t perfect either and can break in similar ways,
so there is no right or wrong between the two.
But at the very least this behavior is subtly different from BaseModel’s.
def rebuild(
force: bool = False,
raise_errors: bool = True,
_parent_namespace_depth: int = 2,
_types_namespace: _namespace_utils.MappingNamespace | None = None,
) -> bool | None
Try to rebuild the pydantic-core schema for the adapter’s type.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
bool | None — Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding was required, returns True if rebuilding was successful, otherwise False.
force : bool Default: False
Whether to force the rebuilding of the type adapter’s schema, defaults to False.
raise_errors : bool Default: True
Whether to raise errors, defaults to True.
_parent_namespace_depth : int Default: 2
Depth at which to search for the parent frame. This frame is used when resolving forward annotations during schema rebuilding, by looking for the locals of this frame. Defaults to 2, which will result in the frame where the method was called.
_types_namespace : _namespace_utils.MappingNamespace | None Default: None
An explicit types namespace to use, instead of using the local namespace
from the parent frame. Defaults to None.
def validate_python(
object: Any,
strict: bool | None = None,
extra: ExtraValues | None = None,
from_attributes: bool | None = None,
context: Any | None = None,
experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> T
Validate a Python object against the model.
T — The validated object.
object : Any
The Python object to validate against the model.
strict : bool | None Default: None
Whether to strictly check types.
extra : ExtraValues | None Default: None
Whether to ignore, allow, or forbid extra data during model validation.
See the extra configuration value for details.
from_attributes : bool | None Default: None
Whether to extract data from object attributes.
context : Any | None Default: None
Additional context to pass to the validator.
experimental_allow_partial : bool | Literal['off', 'on', 'trailing-strings'] Default: False
Experimental: whether to enable partial validation, e.g. to process streams.
- False / ‘off’: Default behavior, no partial validation.
- True / ‘on’: Enable partial validation.
- ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
by_alias : bool | None Default: None
Whether to use the field’s alias when validating against the provided input data.
by_name : bool | None Default: None
Whether to use the field’s name when validating against the provided input data.
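A short sketch of validate_python with a dataclass (the User type is illustrative):

```python
from dataclasses import dataclass

from pydantic import TypeAdapter, ValidationError


@dataclass
class User:
    id: int
    name: str


ta = TypeAdapter(User)

# Lax mode coerces the string id to an int.
user = ta.validate_python({'id': '1', 'name': 'Anne'})
print(user)  # User(id=1, name='Anne')

# strict=True disables coercion, so the same input is rejected.
try:
    ta.validate_python({'id': '1', 'name': 'Anne'}, strict=True)
except ValidationError as exc:
    print(exc.error_count())  # 1
```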
def validate_json(
data: str | bytes | bytearray,
strict: bool | None = None,
extra: ExtraValues | None = None,
context: Any | None = None,
experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> T
Validate a JSON string or bytes against the model.
T — The validated object.
data : str | bytes | bytearray
The JSON data to validate against the model.
strict : bool | None Default: None
Whether to strictly check types.
extra : ExtraValues | None Default: None
Whether to ignore, allow, or forbid extra data during model validation.
See the extra configuration value for details.
context : Any | None Default: None
Additional context to use during validation.
experimental_allow_partial : bool | Literal['off', 'on', 'trailing-strings'] Default: False
Experimental: whether to enable partial validation, e.g. to process streams.
- False / ‘off’: Default behavior, no partial validation.
- True / ‘on’: Enable partial validation.
- ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
by_alias : bool | None Default: None
Whether to use the field’s alias when validating against the provided input data.
by_name : bool | None Default: None
Whether to use the field’s name when validating against the provided input data.
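A minimal sketch: parsing and validation happen in one step, directly from a JSON string or bytes:

```python
from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, int])

# No separate json.loads step is needed.
print(ta.validate_json('{"a": 1, "b": 2}'))  # {'a': 1, 'b': 2}
```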
def validate_strings(
obj: Any,
strict: bool | None = None,
extra: ExtraValues | None = None,
context: Any | None = None,
experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> T
Validate an object containing string data against the model.
T — The validated object.
obj : Any
The object containing string data to validate.
strict : bool | None Default: None
Whether to strictly check types.
extra : ExtraValues | None Default: None
Whether to ignore, allow, or forbid extra data during model validation.
See the extra configuration value for details.
context : Any | None Default: None
Additional context to use during validation.
experimental_allow_partial : bool | Literal['off', 'on', 'trailing-strings'] Default: False
Experimental: whether to enable partial validation, e.g. to process streams.
- False / ‘off’: Default behavior, no partial validation.
- True / ‘on’: Enable partial validation.
- ‘trailing-strings’: Enable partial validation and allow trailing strings in the input.
by_alias : bool | None Default: None
Whether to use the field’s alias when validating against the provided input data.
by_name : bool | None Default: None
Whether to use the field’s name when validating against the provided input data.
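This method is useful when every leaf value arrives as a string, for example from a query string or a CSV row. A sketch:

```python
from datetime import date

from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, date])

# Every leaf value is a string, as it might be in a query string or CSV row.
print(ta.validate_strings({'start': '2024-01-01'}))
# {'start': datetime.date(2024, 1, 1)}
```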
def get_default_value(
strict: bool | None = None,
context: Any | None = None,
) -> Some[T] | None
Get the default value for the wrapped type.
Some[T] | None — The default value wrapped in a Some if there is one or None if not.
strict : bool | None Default: None
Whether to strictly check types.
context : Any | None Default: None
Additional context to pass to the validator.
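A short sketch; note the assumption here that a default attached via Field metadata in Annotated is picked up by the adapter:

```python
from typing import Annotated

from pydantic import Field, TypeAdapter

# A plain type carries no default, so None is returned (not Some(None)).
assert TypeAdapter(int).get_default_value() is None

# A default attached via Field metadata is returned wrapped in Some.
ta = TypeAdapter(Annotated[int, Field(default=42)])
print(ta.get_default_value().value)  # 42
```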
def dump_python(
instance: T,
mode: Literal['json', 'python'] = 'python',
include: IncEx | None = None,
exclude: IncEx | None = None,
by_alias: bool | None = None,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
exclude_computed_fields: bool = False,
round_trip: bool = False,
warnings: bool | Literal['none', 'warn', 'error'] = True,
fallback: Callable[[Any], Any] | None = None,
serialize_as_any: bool = False,
context: Any | None = None,
) -> Any
Dump an instance of the adapted type to a Python object.
Any — The serialized object.
instance : T
The Python object to serialize.
mode : Literal[‘json’, ‘python’] Default: 'python'
The output format.
include : IncEx | None Default: None
Fields to include in the output.
exclude : IncEx | None Default: None
Fields to exclude from the output.
by_alias : bool | None Default: None
Whether to use alias names for field names.
exclude_unset : bool Default: False
Whether to exclude unset fields.
exclude_defaults : bool Default: False
Whether to exclude fields with default values.
exclude_none : bool Default: False
Whether to exclude fields with None values.
exclude_computed_fields : bool Default: False
Whether to exclude computed fields.
While this can be useful for round-tripping, it is usually recommended to use the dedicated
round_trip parameter instead.
round_trip : bool Default: False
Whether to output the serialized data in a way that is compatible with deserialization.
warnings : bool | Literal['none', 'warn', 'error'] Default: True
How to handle serialization errors. False/“none” ignores them, True/“warn” logs errors,
“error” raises a PydanticSerializationError.
fallback : Callable[[Any], Any] | None Default: None
A function to call when an unknown value is encountered. If not provided,
a PydanticSerializationError is raised.
serialize_as_any : bool Default: False
Whether to serialize fields with duck-typing serialization behavior.
context : Any | None Default: None
Additional context to pass to the serializer.
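A brief sketch of the mode parameter (the type and data are illustrative):

```python
from datetime import date

from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, date])
data = {'start': date(2024, 1, 1)}

# mode='python' (the default) keeps rich Python objects in the output.
print(ta.dump_python(data))  # {'start': datetime.date(2024, 1, 1)}

# mode='json' restricts the output to JSON-serializable types.
print(ta.dump_python(data, mode='json'))  # {'start': '2024-01-01'}
```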
def dump_json(
instance: T,
indent: int | None = None,
ensure_ascii: bool = False,
include: IncEx | None = None,
exclude: IncEx | None = None,
by_alias: bool | None = None,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
exclude_computed_fields: bool = False,
round_trip: bool = False,
warnings: bool | Literal['none', 'warn', 'error'] = True,
fallback: Callable[[Any], Any] | None = None,
serialize_as_any: bool = False,
context: Any | None = None,
) -> bytes
Serialize an instance of the adapted type to JSON.
bytes — The JSON representation of the given instance as bytes.
instance : T
The instance to be serialized.
indent : int | None Default: None
Number of spaces for JSON indentation.
ensure_ascii : bool Default: False
If True, the output is guaranteed to have all incoming non-ASCII characters escaped.
If False (the default), these characters will be output as-is.
include : IncEx | None Default: None
Fields to include.
exclude : IncEx | None Default: None
Fields to exclude.
by_alias : bool | None Default: None
Whether to use alias names for field names.
exclude_unset : bool Default: False
Whether to exclude unset fields.
exclude_defaults : bool Default: False
Whether to exclude fields with default values.
exclude_none : bool Default: False
Whether to exclude fields with a value of None.
exclude_computed_fields : bool Default: False
Whether to exclude computed fields.
While this can be useful for round-tripping, it is usually recommended to use the dedicated
round_trip parameter instead.
round_trip : bool Default: False
Whether to serialize and deserialize the instance to ensure round-tripping.
warnings : bool | Literal['none', 'warn', 'error'] Default: True
How to handle serialization errors. False/“none” ignores them, True/“warn” logs errors,
“error” raises a PydanticSerializationError.
fallback : Callable[[Any], Any] | None Default: None
A function to call when an unknown value is encountered. If not provided,
a PydanticSerializationError is raised.
serialize_as_any : bool Default: False
Whether to serialize fields with duck-typing serialization behavior.
context : Any | None Default: None
Additional context to pass to the serializer.
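A minimal sketch; note that the return value is bytes, not str:

```python
from pydantic import TypeAdapter

ta = TypeAdapter(list[int])

# Compact JSON output as bytes.
print(ta.dump_json([1, 2, 3]))  # b'[1,2,3]'

# indent pretty-prints the output.
print(ta.dump_json([1, 2, 3], indent=2).decode())
```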
def json_schema(
by_alias: bool = True,
ref_template: str = DEFAULT_REF_TEMPLATE,
union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
mode: JsonSchemaMode = 'validation',
) -> dict[str, Any]
Generate a JSON schema for the adapted type.
dict[str, Any] — The JSON schema for the model as a dictionary.
by_alias : bool Default: True
Whether to use alias names for field names.
ref_template : str Default: DEFAULT_REF_TEMPLATE
The format string used for generating $ref strings.
union_format : Literal[‘any_of’, ‘primitive_type_array’] Default: 'any_of'
The format to use when combining schemas from unions together. Can be one of:
- 'any_of': Use the anyOf keyword to combine schemas (the default).
- 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
schema_generator : type[GenerateJsonSchema] Default: GenerateJsonSchema
To override the logic used to generate the JSON schema, provide a subclass of
GenerateJsonSchema with your desired modifications.
mode : JsonSchemaMode Default: 'validation'
The mode in which to generate the schema.
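A minimal sketch of schema generation for an adapted type:

```python
from pydantic import TypeAdapter

ta = TypeAdapter(list[int])

print(ta.json_schema())
# {'items': {'type': 'integer'}, 'type': 'array'}
```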
@staticmethod
def json_schemas(
inputs: Iterable[tuple[JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]]],
by_alias: bool = True,
title: str | None = None,
description: str | None = None,
ref_template: str = DEFAULT_REF_TEMPLATE,
union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
) -> tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue]
Generate a JSON schema including definitions from multiple type adapters.
tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue] — A tuple where:
- The first element is a dictionary whose keys are tuples of JSON schema key type and JSON mode, and whose values are the JSON schema corresponding to that pair of inputs. (These schemas may have JsonRef references to definitions that are defined in the second returned element.)
- The second element is a JSON schema containing all definitions referenced in the first returned element, along with the optional title and description keys.
inputs : Iterable[tuple[JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]]]
Inputs to schema generation. The first two items will form the keys of the (first) output mapping; the type adapters will provide the core schemas that get converted into definitions in the output JSON schema.
by_alias : bool Default: True
Whether to use alias names.
title : str | None Default: None
The title for the schema.
description : str | None Default: None
The description for the schema.
ref_template : str Default: DEFAULT_REF_TEMPLATE
The format string used for generating $ref strings.
union_format : Literal[‘any_of’, ‘primitive_type_array’] Default: 'any_of'
The format to use when combining schemas from unions together. Can be one of:
- 'any_of': Use the anyOf keyword to combine schemas (the default).
- 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
schema_generator : type[GenerateJsonSchema] Default: GenerateJsonSchema
The generator class used for creating the schema.
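A sketch of combining schemas from several adapters (the keys and title are arbitrary):

```python
from pydantic import TypeAdapter

ta_ints = TypeAdapter(list[int])
ta_text = TypeAdapter(str)

# Each input is a (key, mode, adapter) triple; the staticmethod is called
# on the class itself.
schemas, definitions = TypeAdapter.json_schemas(
    [
        ('ints', 'validation', ta_ints),
        ('text', 'validation', ta_text),
    ],
    title='Example definitions',
)

print(schemas[('ints', 'validation')])  # the schema for list[int]
print(definitions.get('title'))         # 'Example definitions'
```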