Pydantic Types
The types module contains custom types used by pydantic.
Handler to call into the next CoreSchema schema generation function.
Get the name of the closest field to this validator.
Type: str | None
def __call__(source_type: Any) -> core_schema.CoreSchema
Call the inner handler and get the CoreSchema it returns.
This will call the next CoreSchema modifying function up until it calls
into Pydantic’s internal schema generation machinery, which will raise a
pydantic.errors.PydanticSchemaGenerationError if it cannot generate
a CoreSchema for the given source type.
core_schema.CoreSchema — The pydantic-core CoreSchema generated.
The input type.
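As an illustration, a hypothetical `Annotated` marker (the `Truncate` class below is not part of pydantic) can call the handler to obtain the schema for the annotated type and then wrap it with extra validation:

```python
from typing import Annotated, Any

from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import core_schema

class Truncate:
    """Hypothetical marker: truncate validated strings to a maximum length."""

    def __init__(self, limit: int) -> None:
        self.limit = limit

    def __get_pydantic_core_schema__(
        self, source_type: Any, handler: GetCoreSchemaHandler
    ) -> core_schema.CoreSchema:
        # Call the next schema generation function for the annotated type...
        schema = handler(source_type)
        # ...then wrap it with an extra validation step.
        return core_schema.no_info_after_validator_function(
            lambda v: v[: self.limit], schema
        )

class Model(BaseModel):
    name: Annotated[str, Truncate(5)]

print(Model(name='abcdefgh').name)
#> abcde
```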
def generate_schema(source_type: Any) -> core_schema.CoreSchema
Generate a schema unrelated to the current context.
Use this function if e.g. you are handling schema generation for a sequence
and want to generate a schema for its items.
Otherwise, you may end up doing something like applying a min_length constraint
that was intended for the sequence itself to its items!
core_schema.CoreSchema — The pydantic-core CoreSchema generated.
The input type.
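For example, a hypothetical generic container class (`ItemList` below is illustrative, not a pydantic type) can use `handler.generate_schema` to build its item schema in a fresh context, so that constraints aimed at the container are never applied to the items:

```python
from typing import Any, Generic, TypeVar, get_args

from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import core_schema

T = TypeVar('T')

class ItemList(Generic[T]):
    """Hypothetical container type, validated as a plain list of T."""

    @classmethod
    def __get_pydantic_core_schema__(
        cls, source_type: Any, handler: GetCoreSchemaHandler
    ) -> core_schema.CoreSchema:
        # Use generate_schema (not handler(...)) so that constraints
        # intended for the container are not applied to its items.
        args = get_args(source_type)
        item_schema = handler.generate_schema(args[0] if args else Any)
        return core_schema.list_schema(item_schema)

class Model(BaseModel):
    values: ItemList[int]

# The validated value is a plain list with coerced items
print(Model(values=[1, '2', 3]).values)
#> [1, 2, 3]
```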
def resolve_ref_schema(
maybe_ref_schema: core_schema.CoreSchema,
) -> core_schema.CoreSchema
Get the real schema for a definition-ref schema.
If the schema given is not a definition-ref schema, it will be returned as is.
This means you don’t have to check before calling this function.
core_schema.CoreSchema — A concrete CoreSchema.
A CoreSchema, ref-based or not.
LookupError — If the ref is not found.
Handler to call into the next JSON schema generation function.
Type: JsonSchemaMode
def __call__(core_schema: CoreSchemaOrField) -> JsonSchemaValue
Call the inner handler and get the JsonSchemaValue it returns.
This will call the next JSON schema modifying function up until it calls
into pydantic.json_schema.GenerateJsonSchema, which will raise a
pydantic.errors.PydanticInvalidForJsonSchema if it cannot generate
a JSON schema.
JsonSchemaValue — The JSON schema generated by the inner JSON schema modify functions.
A pydantic_core.core_schema.CoreSchema.
def resolve_ref_schema(maybe_ref_json_schema: JsonSchemaValue) -> JsonSchemaValue
Get the real schema for a {"$ref": ...} schema.
If the schema given is not a $ref schema, it will be returned as is.
This means you don’t have to check before calling this function.
JsonSchemaValue — A JsonSchemaValue that has no $ref.
A JsonSchemaValue which may be a $ref schema.
LookupError — If the ref is not found.
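As a sketch, a hypothetical `Annotated` marker (the `WithExamples` class below is illustrative) can call the handler, resolve any `$ref`, and then modify the resulting JSON schema in place:

```python
from typing import Annotated

from pydantic import BaseModel, GetJsonSchemaHandler
from pydantic.json_schema import JsonSchemaValue
from pydantic_core import core_schema

class WithExamples:
    """Hypothetical marker: attach an 'examples' key to the JSON schema."""

    def __get_pydantic_json_schema__(
        self, schema: core_schema.CoreSchema, handler: GetJsonSchemaHandler
    ) -> JsonSchemaValue:
        # Call the next JSON schema generation function...
        json_schema = handler(schema)
        # ...and resolve any '$ref' so the dict can be modified directly.
        json_schema = handler.resolve_ref_schema(json_schema)
        json_schema['examples'] = ['sample']
        return json_schema

class Model(BaseModel):
    tag: Annotated[str, WithExamples()]

print(Model.model_json_schema()['properties']['tag']['examples'])
#> ['sample']
```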
Bases: PydanticErrorMixin, TypeError
An error raised due to incorrect use of Pydantic.
Bases: PydanticDeprecationWarning
A specific PydanticDeprecationWarning subclass defining functionality deprecated since Pydantic 2.0.
Bases: TypedDict
A TypedDict for holding the metadata dict of the schema.
TODO: Perhaps we should move this structure to pydantic-core. At the moment, though, it’s easier to iterate on if we leave it in pydantic until we feel there is a semi-stable API.
TODO: It’s unfortunate how functionally oriented JSON schema generation is, especially that which occurs during the core schema generation process. It’s inevitable that we need to store some json schema related information on core schemas, given that we generate JSON schemas directly from core schemas. That being said, debugging related issues is quite difficult when JSON schema information is disguised via dynamically defined functions.
Type: list[GetJsonSchemaFunction]
Type: list[GetJsonSchemaFunction]
Type: bool
Type: JsonDict
Type: JsonDict | JsonSchemaExtraCallable
Type: str
Type: str
Bases: PydanticMetadata, BaseMetadata
A field metadata class to indicate that a field should be validated in strict mode.
Use this class as an annotation via Annotated, as seen below.
Type: bool Default: True
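A minimal sketch of the `Annotated` usage (the `User` model is illustrative):

```python
from typing import Annotated

from pydantic import BaseModel, Strict, ValidationError

class User(BaseModel):
    age: Annotated[int, Strict()]

print(User(age=30).age)
#> 30

try:
    User(age='30')  # coercion from str is disabled in strict mode
except ValidationError as e:
    print(e.errors()[0]['type'])
    #> int_type
```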
Bases: PydanticMetadata
A field metadata class to indicate that a field should allow -inf, inf, and nan.
Use this class as an annotation via Annotated, as seen below.
Type: bool Default: True
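A minimal sketch of the `Annotated` usage (the `Measurement` model is illustrative):

```python
import math
from typing import Annotated

from pydantic import AllowInfNan, BaseModel

class Measurement(BaseModel):
    reading: Annotated[float, AllowInfNan(True)]

m = Measurement(reading=float('inf'))
print(math.isinf(m.reading))
#> True
```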
Bases: GroupedMetadata
A field metadata class to apply constraints to str types.
Use this class as an annotation via Annotated, as seen below.
Type: bool | None Default: None
Type: bool | None Default: None
Type: bool | None Default: None
Type: bool | None Default: None
Type: int | None Default: None
Type: int | None Default: None
Type: str | Pattern[str] | None Default: None
A type that can be used to import a Python object from a string.
ImportString expects a string and loads the Python object importable at that dotted path.
Attributes of modules may be separated from the module by : or ., e.g. if 'math:cos' is provided,
the resulting field value would be the function cos. If a . is used and both an attribute and submodule
are present at the same path, the module will be preferred.
On model instantiation, pointers will be evaluated and imported. There is some nuance to this behavior, demonstrated in the examples below.
import math
from pydantic import BaseModel, Field, ImportString, ValidationError
class ImportThings(BaseModel):
obj: ImportString
# A string value will cause an automatic import
my_cos = ImportThings(obj='math.cos')
# You can use the imported function as you would expect
cos_of_0 = my_cos.obj(0)
assert cos_of_0 == 1
# A string whose value cannot be imported will raise an error
try:
ImportThings(obj='foo.bar')
except ValidationError as e:
print(e)
'''
1 validation error for ImportThings
obj
Invalid python path: No module named 'foo.bar' [type=import_error, input_value='foo.bar', input_type=str]
'''
# Actual python objects can be assigned as well
my_cos = ImportThings(obj=math.cos)
my_cos_2 = ImportThings(obj='math.cos')
my_cos_3 = ImportThings(obj='math:cos')
assert my_cos == my_cos_2 == my_cos_3
# You can set default field value either as Python object:
class ImportThingsDefaultPyObj(BaseModel):
obj: ImportString = math.cos
# or as a string value (but only if used with `validate_default=True`)
class ImportThingsDefaultString(BaseModel):
obj: ImportString = Field(default='math.cos', validate_default=True)
my_cos_default1 = ImportThingsDefaultPyObj()
my_cos_default2 = ImportThingsDefaultString()
assert my_cos_default1.obj == my_cos_default2.obj == math.cos
# note: this will not work!
class ImportThingsMissingValidateDefault(BaseModel):
obj: ImportString = 'math.cos'
my_cos_default3 = ImportThingsMissingValidateDefault()
assert my_cos_default3.obj == 'math.cos' # just string, not evaluated
Serializing an ImportString type to json is also possible.
from pydantic import BaseModel, ImportString
class ImportThings(BaseModel):
obj: ImportString
# Create an instance
m = ImportThings(obj='math.cos')
print(m)
#> obj=<built-in function cos>
print(m.model_dump_json())
#> {"obj":"math.cos"}
A field metadata class to indicate a UUID version.
Use this class as an annotation via Annotated, as seen below.
Type: Literal[1, 3, 4, 5, 6, 7, 8]
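A minimal sketch of the `Annotated` usage (the `Resource` model is illustrative; `UuidVersion` lives in `pydantic.types`):

```python
from typing import Annotated
from uuid import UUID, uuid4

from pydantic import BaseModel
from pydantic.types import UuidVersion

class Resource(BaseModel):
    id: Annotated[UUID, UuidVersion(4)]  # only version-4 UUIDs are accepted

r = Resource(id=uuid4())
print(r.id.version)
#> 4
```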
Type: Literal['file', 'dir', 'new', 'socket']
@staticmethod
def validate_file(path: Path, _: core_schema.ValidationInfo) -> Path
Path
@staticmethod
def validate_socket(path: Path, _: core_schema.ValidationInfo) -> Path
Path
@staticmethod
def validate_directory(path: Path, _: core_schema.ValidationInfo) -> Path
Path
@staticmethod
def validate_new(path: Path, _: core_schema.ValidationInfo) -> Path
Path
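These validators back pydantic's `FilePath`, `DirectoryPath`, and `NewPath` types. A minimal sketch of the first two (the `Settings` model is illustrative):

```python
from pathlib import Path
from tempfile import TemporaryDirectory

from pydantic import BaseModel, DirectoryPath, FilePath, ValidationError

class Settings(BaseModel):
    config_file: FilePath    # must point at an existing file
    data_dir: DirectoryPath  # must point at an existing directory

with TemporaryDirectory() as d:
    cfg = Path(d) / 'app.cfg'
    cfg.write_text('debug = true')
    s = Settings(config_file=cfg, data_dir=d)
    print(s.config_file.name)
    #> app.cfg

try:
    Settings(config_file='missing.cfg', data_dir='/nonexistent')
except ValidationError as e:
    print(len(e.errors()))
    #> 2
```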
A special type wrapper which loads JSON before parsing.
You can use the Json data type to make Pydantic first load a raw JSON string before
validating the loaded data into the parametrized type:
from typing import Any
from pydantic import BaseModel, Json, ValidationError
class AnyJsonModel(BaseModel):
json_obj: Json[Any]
class ConstrainedJsonModel(BaseModel):
json_obj: Json[list[int]]
print(AnyJsonModel(json_obj='{"b": 1}'))
#> json_obj={'b': 1}
print(ConstrainedJsonModel(json_obj='[1, 2, 3]'))
#> json_obj=[1, 2, 3]
try:
ConstrainedJsonModel(json_obj=12)
except ValidationError as e:
print(e)
'''
1 validation error for ConstrainedJsonModel
json_obj
JSON input should be string, bytes or bytearray [type=json_type, input_value=12, input_type=int]
'''
try:
ConstrainedJsonModel(json_obj='[a, b]')
except ValidationError as e:
print(e)
'''
1 validation error for ConstrainedJsonModel
json_obj
Invalid JSON: expected value at line 1 column 2 [type=json_invalid, input_value='[a, b]', input_type=str]
'''
try:
ConstrainedJsonModel(json_obj='["a", "b"]')
except ValidationError as e:
print(e)
'''
2 validation errors for ConstrainedJsonModel
json_obj.0
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='a', input_type=str]
json_obj.1
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='b', input_type=str]
'''
When you dump the model using model_dump or model_dump_json, the dumped value will be the result of validation,
not the original JSON string. However, you can use the argument round_trip=True to get the original JSON string back:
from pydantic import BaseModel, Json
class ConstrainedJsonModel(BaseModel):
json_obj: Json[list[int]]
print(ConstrainedJsonModel(json_obj='[1, 2, 3]').model_dump_json())
#> {"json_obj":[1,2,3]}
print(
ConstrainedJsonModel(json_obj='[1, 2, 3]').model_dump_json(round_trip=True)
)
#> {"json_obj":"[1,2,3]"}
Bases: _SecretBase[SecretType]
A generic base class used for defining a field with sensitive information that you do not want to be visible in logging or tracebacks.
You may either directly parametrize Secret with a type, or subclass from Secret with a parametrized type. The benefit of subclassing
is that you can define a custom _display method, which will be used for repr() and str() methods. The examples below demonstrate both
ways of using Secret to create a new secret type.
- Directly parametrizing Secret with a type:
from pydantic import BaseModel, Secret
SecretBool = Secret[bool]
class Model(BaseModel):
secret_bool: SecretBool
m = Model(secret_bool=True)
print(m.model_dump())
#> {'secret_bool': Secret('**********')}
print(m.model_dump_json())
#> {"secret_bool":"**********"}
print(m.secret_bool.get_secret_value())
#> True
- Subclassing from parametrized Secret:
from datetime import date
from pydantic import BaseModel, Secret
class SecretDate(Secret[date]):
def _display(self) -> str:
return '****/**/**'
class Model(BaseModel):
secret_date: SecretDate
m = Model(secret_date=date(2022, 1, 1))
print(m.model_dump())
#> {'secret_date': SecretDate('****/**/**')}
print(m.model_dump_json())
#> {"secret_date":"****/**/**"}
print(m.secret_date.get_secret_value())
#> 2022-01-01
The value returned by the _display method will be used for repr() and str().
You can enforce constraints on the underlying type through annotations. For example:
from typing import Annotated
from pydantic import BaseModel, Field, Secret, ValidationError
SecretPosInt = Secret[Annotated[int, Field(gt=0, strict=True)]]
class Model(BaseModel):
sensitive_int: SecretPosInt
m = Model(sensitive_int=42)
print(m.model_dump())
#> {'sensitive_int': Secret('**********')}
try:
m = Model(sensitive_int=-42) # (1)
except ValidationError as exc_info:
print(exc_info.errors(include_url=False, include_input=False))
'''
[
{
'type': 'greater_than',
'loc': ('sensitive_int',),
'msg': 'Input should be greater than 0',
'ctx': {'gt': 0},
}
]
'''
try:
m = Model(sensitive_int='42') # (2)
except ValidationError as exc_info:
print(exc_info.errors(include_url=False, include_input=False))
'''
[
{
'type': 'int_type',
'loc': ('sensitive_int',),
'msg': 'Input should be a valid integer',
}
]
'''
1. The input value is not greater than 0, so it raises a validation error.
2. The input value is not an integer, so it raises a validation error because the SecretPosInt type has strict mode enabled.
Bases: _SecretField[str]
A string used for storing sensitive information that you do not want to be visible in logging or tracebacks.
When the secret value is nonempty, it is displayed as '**********' instead of the underlying value in
calls to repr() and str(). If the value is empty, it is displayed as ''.
from pydantic import BaseModel, SecretStr
class User(BaseModel):
username: str
password: SecretStr
user = User(username='scolvin', password='password1')
print(user)
#> username='scolvin' password=SecretStr('**********')
print(user.password.get_secret_value())
#> password1
print((SecretStr('password'), SecretStr('')))
#> (SecretStr('**********'), SecretStr(''))
As seen above, by default, SecretStr (and SecretBytes)
will be serialized as ********** when serializing to json.
You can use the field_serializer to dump the
secret as plain-text when serializing to json.
from pydantic import BaseModel, SecretBytes, SecretStr, field_serializer
class Model(BaseModel):
password: SecretStr
password_bytes: SecretBytes
@field_serializer('password', 'password_bytes', when_used='json')
def dump_secret(self, v):
return v.get_secret_value()
model = Model(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')
print(model)
#> password=SecretStr('**********') password_bytes=SecretBytes(b'**********')
print(model.password)
#> **********
print(model.model_dump())
'''
{
'password': SecretStr('**********'),
'password_bytes': SecretBytes(b'**********'),
}
'''
print(model.model_dump_json())
#> {"password":"IAmSensitive","password_bytes":"IAmSensitiveBytes"}
Bases: _SecretField[bytes]
A bytes type used for storing sensitive information that you do not want to be visible in logging or tracebacks.
When the secret value is nonempty, it is displayed as b'**********' instead of the underlying value in
calls to repr() and str(). If the value is empty, it is displayed as b''.
from pydantic import BaseModel, SecretBytes
class User(BaseModel):
username: str
password: SecretBytes
user = User(username='scolvin', password=b'password1')
print(user)
#> username='scolvin' password=SecretBytes(b'**********')
print(user.password.get_secret_value())
#> b'password1'
print((SecretBytes(b'password'), SecretBytes(b'')))
#> (SecretBytes(b'**********'), SecretBytes(b''))
Bases: str, Enum
Default: 'American Express'
Default: 'Mastercard'
Default: 'Visa'
Default: 'other'
Bases: str
Based on: https://en.wikipedia.org/wiki/Payment_card_number.
Type: bool Default: True
Type: int Default: 12
Type: int Default: 19
Type: str Default: card_number[:6]
Type: str Default: card_number[-4:]
Type: PaymentCardBrand Default: self.validate_brand(card_number)
Mask all but the last 4 digits of the card number.
Type: str
@classmethod
def validate(cls, input_value: str, _: core_schema.ValidationInfo) -> PaymentCardNumber
Validate the card number and return a PaymentCardNumber instance.
PaymentCardNumber
@classmethod
def validate_digits(cls, card_number: str) -> None
Validate that the card number is all digits.
None
@classmethod
def validate_luhn_check_digit(cls, card_number: str) -> str
Based on: https://en.wikipedia.org/wiki/Luhn_algorithm.
str
@staticmethod
def validate_brand(card_number: str) -> PaymentCardBrand
Validate length based on BIN for major brands: https://en.wikipedia.org/wiki/Payment_card_number#Issuer_identification_number_(IIN).
PaymentCardBrand
Bases: int
Converts a string representing a number of bytes with units (such as '1KB' or '11.5MiB') into an integer.
You can use the ByteSize data type to (case-insensitively) convert a string representation of a number of bytes into
an integer, and also to print out human-readable strings representing a number of bytes.
In conformance with the IEC 80000-13 standard, we interpret '1KB' to mean 1000 bytes,
and '1KiB' to mean 1024 bytes. In general, including a middle 'i' will cause the unit to be interpreted as a power of 2,
rather than a power of 10 (so, for example, '1 MB' is treated as 1_000_000 bytes, whereas '1 MiB' is treated as 1_048_576 bytes).
from pydantic import BaseModel, ByteSize
class MyModel(BaseModel):
size: ByteSize
print(MyModel(size=52000).size)
#> 52000
print(MyModel(size='3000 KiB').size)
#> 3072000
m = MyModel(size='50 PB')
print(m.size.human_readable())
#> 44.4PiB
print(m.size.human_readable(decimal=True))
#> 50.0PB
print(m.size.human_readable(separator=' '))
#> 44.4 PiB
print(m.size.to('TiB'))
#> 45474.73508864641
Default: {'b': 1, 'kb': 10 ** 3, 'mb': 10 ** 6, 'gb': 10 ** 9, 'tb': 10 ** 12, 'pb': 10 ** 15, 'eb': 10 ** 18, 'kib': 2 ** 10, 'mib': 2 ** 20, 'gib': 2 ** 30, 'tib': 2 ** 40, 'pib': 2 ** 50, 'eib': 2 ** 60, 'bit': 1 / 8, 'kbit': 10 ** 3 / 8, 'mbit': 10 ** 6 / 8, 'gbit': 10 ** 9 / 8, 'tbit': 10 ** 12 / 8, 'pbit': 10 ** 15 / 8, 'ebit': 10 ** 18 / 8, 'kibit': 2 ** 10 / 8, 'mibit': 2 ** 20 / 8, 'gibit': 2 ** 30 / 8, 'tibit': 2 ** 40 / 8, 'pibit': 2 ** 50 / 8, 'eibit': 2 ** 60 / 8}
Default: '^\s*(\d*\.?\d+)\s*(\w+)?'
Default: re.compile(byte_string_pattern, re.IGNORECASE)
def human_readable(decimal: bool = False, separator: str = '') -> str
Converts a byte size to a human readable string.
str — A human readable string representation of the byte size.
If True, use decimal units (e.g. 1000 bytes per KB). If False, use binary units (e.g. 1024 bytes per KiB).
A string used to split the value and unit. Defaults to an empty string ('').
def to(unit: str) -> float
Converts a byte size to another unit, including both byte and bit units.
float — The byte size in the new unit.
The unit to convert to. Must be one of the following: B, KB, MB, GB, TB, PB, EB, KiB, MiB, GiB, TiB, PiB, EiB (byte units) and bit, kbit, mbit, gbit, tbit, pbit, ebit, kibit, mibit, gibit, tibit, pibit, eibit (bit units).
A date in the past.
A date in the future.
A datetime that requires timezone info.
A datetime that doesn’t require timezone info.
A datetime that must be in the past.
A datetime that must be in the future.
Bases: Protocol
Protocol for encoding and decoding data to and from bytes.
@classmethod
def decode(cls, data: bytes) -> bytes
Decode the data using the encoder.
bytes — The decoded data.
The data to decode.
@classmethod
def encode(cls, value: bytes) -> bytes
Encode the data using the encoder.
bytes — The encoded data.
The data to encode.
@classmethod
def get_json_format(cls) -> str
Get the JSON format for the encoded data.
str — The JSON format for the encoded data.
Bases: EncoderProtocol
Standard (non-URL-safe) Base64 encoder.
@classmethod
def decode(cls, data: bytes) -> bytes
Decode the data from base64 encoded bytes to original bytes data.
bytes — The decoded data.
The data to decode.
@classmethod
def encode(cls, value: bytes) -> bytes
Encode the data from bytes to a base64 encoded bytes.
bytes — The encoded data.
The data to encode.
@classmethod
def get_json_format(cls) -> Literal['base64']
Get the JSON format for the encoded data.
Literal['base64'] — The JSON format for the encoded data.
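This encoder backs pydantic's ready-made Base64Bytes and Base64Str types. A quick sketch using Base64Bytes (the `Payload` model is illustrative):

```python
from pydantic import Base64Bytes, BaseModel

class Payload(BaseModel):
    data: Base64Bytes  # accepts base64-encoded input, stores the decoded bytes

p = Payload(data=b'aGVsbG8=')
print(p.data)
#> b'hello'
```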
Bases: EncoderProtocol
URL-safe Base64 encoder.
@classmethod
def decode(cls, data: bytes) -> bytes
Decode the data from base64 encoded bytes to original bytes data.
bytes — The decoded data.
The data to decode.
@classmethod
def encode(cls, value: bytes) -> bytes
Encode the data from bytes to a base64 encoded bytes.
bytes — The encoded data.
The data to encode.
@classmethod
def get_json_format(cls) -> Literal['base64url']
Get the JSON format for the encoded data.
Literal['base64url'] — The JSON format for the encoded data.
A bytes type that is encoded and decoded using the specified encoder.
EncodedBytes needs an encoder that implements EncoderProtocol to operate.
from typing import Annotated
from pydantic import BaseModel, EncodedBytes, EncoderProtocol, ValidationError
class MyEncoder(EncoderProtocol):
@classmethod
def decode(cls, data: bytes) -> bytes:
if data == b'**undecodable**':
raise ValueError('Cannot decode data')
return data[13:]
@classmethod
def encode(cls, value: bytes) -> bytes:
return b'**encoded**: ' + value
@classmethod
def get_json_format(cls) -> str:
return 'my-encoder'
MyEncodedBytes = Annotated[bytes, EncodedBytes(encoder=MyEncoder)]
class Model(BaseModel):
my_encoded_bytes: MyEncodedBytes
# Initialize the model with encoded data
m = Model(my_encoded_bytes=b'**encoded**: some bytes')
# Access decoded value
print(m.my_encoded_bytes)
#> b'some bytes'
# Serialize into the encoded form
print(m.model_dump())
#> {'my_encoded_bytes': b'**encoded**: some bytes'}
# Validate encoded data
try:
Model(my_encoded_bytes=b'**undecodable**')
except ValidationError as e:
print(e)
'''
1 validation error for Model
my_encoded_bytes
Value error, Cannot decode data [type=value_error, input_value=b'**undecodable**', input_type=bytes]
'''
Type: type[EncoderProtocol]
def decode(data: bytes, _: core_schema.ValidationInfo) -> bytes
Decode the data using the specified encoder.
bytes — The decoded data.
The data to decode.
def encode(value: bytes) -> bytes
Encode the data using the specified encoder.
bytes — The encoded data.
The data to encode.
A str type that is encoded and decoded using the specified encoder.
EncodedStr needs an encoder that implements EncoderProtocol to operate.
from typing import Annotated
from pydantic import BaseModel, EncodedStr, EncoderProtocol, ValidationError
class MyEncoder(EncoderProtocol):
@classmethod
def decode(cls, data: bytes) -> bytes:
if data == b'**undecodable**':
raise ValueError('Cannot decode data')
return data[13:]
@classmethod
def encode(cls, value: bytes) -> bytes:
return b'**encoded**: ' + value
@classmethod
def get_json_format(cls) -> str:
return 'my-encoder'
MyEncodedStr = Annotated[str, EncodedStr(encoder=MyEncoder)]
class Model(BaseModel):
my_encoded_str: MyEncodedStr
# Initialize the model with encoded data
m = Model(my_encoded_str='**encoded**: some str')
# Access decoded value
print(m.my_encoded_str)
#> some str
# Serialize into the encoded form
print(m.model_dump())
#> {'my_encoded_str': '**encoded**: some str'}
# Validate encoded data
try:
Model(my_encoded_str='**undecodable**')
except ValidationError as e:
print(e)
'''
1 validation error for Model
my_encoded_str
Value error, Cannot decode data [type=value_error, input_value='**undecodable**', input_type=str]
'''
Type: type[EncoderProtocol]
def decode_str(data: str, _: core_schema.ValidationInfo) -> str
Decode the data using the specified encoder.
str — The decoded data.
The data to decode.
def encode_str(value: str) -> str
Encode the data using the specified encoder.
str — The encoded data.
The data to encode.
A convenience class for creating an annotation that provides pydantic custom type hooks.
This class is intended to eliminate the need to create a custom “marker” which defines the
__get_pydantic_core_schema__ and __get_pydantic_json_schema__ custom hook methods.
For example, to have a field treated by type checkers as int, but by pydantic as Any, you can do:
from typing import Annotated, Any
from pydantic import BaseModel, GetPydanticSchema
HandleAsAny = GetPydanticSchema(lambda _s, h: h(Any))
class Model(BaseModel):
x: Annotated[int, HandleAsAny] # pydantic sees `x: Any`
print(repr(Model(x='abc').x))
#> 'abc'
Type: Callable[[Any, GetCoreSchemaHandler], CoreSchema] | None Default: None
Type: Callable[[Any, GetJsonSchemaHandler], JsonSchemaValue] | None Default: None
def __getattr__(item: str) -> Any
Use this rather than defining __get_pydantic_core_schema__ etc. to reduce the number of nested calls.
Any
Provides a way to specify the expected tag to use for a case of a (callable) discriminated union.
Also provides a way to label a union case in error messages.
When using a callable Discriminator, attach a Tag to each case in the Union to specify the tag that
should be used to identify that case. For example, in the below example, the Tag is used to specify that
if get_discriminator_value returns 'apple', the input should be validated as an ApplePie, and if it
returns 'pumpkin', the input should be validated as a PumpkinPie.
The primary role of the Tag here is to map the return value from the callable Discriminator function to
the appropriate member of the Union in question.
from typing import Annotated, Any, Literal, Union
from pydantic import BaseModel, Discriminator, Tag
class Pie(BaseModel):
time_to_cook: int
num_ingredients: int
class ApplePie(Pie):
fruit: Literal['apple'] = 'apple'
class PumpkinPie(Pie):
filling: Literal['pumpkin'] = 'pumpkin'
def get_discriminator_value(v: Any) -> str:
if isinstance(v, dict):
return v.get('fruit', v.get('filling'))
return getattr(v, 'fruit', getattr(v, 'filling', None))
class ThanksgivingDinner(BaseModel):
dessert: Annotated[
Union[
Annotated[ApplePie, Tag('apple')],
Annotated[PumpkinPie, Tag('pumpkin')],
],
Discriminator(get_discriminator_value),
]
apple_variation = ThanksgivingDinner.model_validate(
{'dessert': {'fruit': 'apple', 'time_to_cook': 60, 'num_ingredients': 8}}
)
print(repr(apple_variation))
'''
ThanksgivingDinner(dessert=ApplePie(time_to_cook=60, num_ingredients=8, fruit='apple'))
'''
pumpkin_variation = ThanksgivingDinner.model_validate(
{
'dessert': {
'filling': 'pumpkin',
'time_to_cook': 40,
'num_ingredients': 6,
}
}
)
print(repr(pumpkin_variation))
'''
ThanksgivingDinner(dessert=PumpkinPie(time_to_cook=40, num_ingredients=6, filling='pumpkin'))
'''
See the Discriminated Unions concepts docs for more details on how to use Tags.
Type: str
Provides a way to use a custom callable as the way to extract the value of a union discriminator.
This allows you to get validation behavior like you’d get from Field(discriminator=<field_name>),
but without needing to have a single shared field across all the union choices. This also makes it
possible to handle unions of models and primitive types with discriminated-union-style validation errors.
Finally, this allows you to use a custom callable as the way to identify which member of a union a value
belongs to, while still seeing all the performance benefits of a discriminated union.
Consider this example, which is much more performant with the use of Discriminator and thus a TaggedUnion
than it would be as a normal Union.
from typing import Annotated, Any, Literal, Union
from pydantic import BaseModel, Discriminator, Tag
class Pie(BaseModel):
time_to_cook: int
num_ingredients: int
class ApplePie(Pie):
fruit: Literal['apple'] = 'apple'
class PumpkinPie(Pie):
filling: Literal['pumpkin'] = 'pumpkin'
def get_discriminator_value(v: Any) -> str:
if isinstance(v, dict):
return v.get('fruit', v.get('filling'))
return getattr(v, 'fruit', getattr(v, 'filling', None))
class ThanksgivingDinner(BaseModel):
dessert: Annotated[
Union[
Annotated[ApplePie, Tag('apple')],
Annotated[PumpkinPie, Tag('pumpkin')],
],
Discriminator(get_discriminator_value),
]
apple_variation = ThanksgivingDinner.model_validate(
{'dessert': {'fruit': 'apple', 'time_to_cook': 60, 'num_ingredients': 8}}
)
print(repr(apple_variation))
'''
ThanksgivingDinner(dessert=ApplePie(time_to_cook=60, num_ingredients=8, fruit='apple'))
'''
pumpkin_variation = ThanksgivingDinner.model_validate(
{
'dessert': {
'filling': 'pumpkin',
'time_to_cook': 40,
'num_ingredients': 6,
}
}
)
print(repr(pumpkin_variation))
'''
ThanksgivingDinner(dessert=PumpkinPie(time_to_cook=40, num_ingredients=6, filling='pumpkin'))
'''
See the Discriminated Unions concepts docs for more details on how to use Discriminators.
The callable or field name for discriminating the type in a tagged union.
A Callable discriminator must extract the value of the discriminator from the input.
A str discriminator must be the name of a field to discriminate against.
Type: str | Callable[[Any], Hashable]
Type to use in custom errors replacing the standard discriminated union validation errors.
Type: str | None Default: None
Message to use in custom errors.
Type: str | None Default: None
Context to use in custom errors.
Type: dict[str, int | str | float] | None Default: None
Bases: PydanticMetadata, BaseMetadata
A FailFast annotation can be used to specify that validation should stop at the first error.
This can be useful when you want to validate a large amount of data and only need to know whether it is valid.
You might want to enable this setting if you want validation to be faster, with the caveat that the error report contains less information.
from typing import Annotated
from pydantic import BaseModel, FailFast, ValidationError
class Model(BaseModel):
x: Annotated[list[int], FailFast()]
# This will raise a single error for the first invalid value and stop validation
try:
obj = Model(x=[1, 2, 'a', 4, 5, 'b', 7, 8, 9, 'c'])
except ValidationError as e:
print(e)
'''
1 validation error for Model
x.2
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='a', input_type=str]
'''
Type: bool Default: True
def getattr_migration(module: str) -> Callable[[str], Any]
Implement PEP 562 for objects that were either moved or removed during the migration to V2.
Callable[[str], Any] — A callable that will raise an error if the object is not found.
The module name.
def conint(
strict: bool | None = None,
gt: int | None = None,
ge: int | None = None,
lt: int | None = None,
le: int | None = None,
multiple_of: int | None = None,
) -> type[int]
from pydantic import BaseModel, conint
class Foo(BaseModel):
bar: conint(strict=True, gt=0)
from typing import Annotated
from pydantic import BaseModel, Field
class Foo(BaseModel):
bar: Annotated[int, Field(strict=True, gt=0)]
A wrapper around int that allows for additional constraints.
from pydantic import BaseModel, ValidationError, conint
class ConstrainedExample(BaseModel):
constrained_int: conint(gt=1)
m = ConstrainedExample(constrained_int=2)
print(repr(m))
#> ConstrainedExample(constrained_int=2)
try:
ConstrainedExample(constrained_int=0)
except ValidationError as e:
print(e.errors())
'''
[
{
'type': 'greater_than',
'loc': ('constrained_int',),
'msg': 'Input should be greater than 1',
'input': 0,
'ctx': {'gt': 1},
'url': 'https://errors.pydantic.dev/2/v/greater_than',
}
]
'''
type[int] — The wrapped integer type.
Whether to validate the integer in strict mode. Defaults to None.
The value must be greater than this.
The value must be greater than or equal to this.
The value must be less than this.
The value must be less than or equal to this.
The value must be a multiple of this.
def confloat(
strict: bool | None = None,
gt: float | None = None,
ge: float | None = None,
lt: float | None = None,
le: float | None = None,
multiple_of: float | None = None,
allow_inf_nan: bool | None = None,
) -> type[float]
from pydantic import BaseModel, confloat
class Foo(BaseModel):
bar: confloat(strict=True, gt=0)
from typing import Annotated
from pydantic import BaseModel, Field
class Foo(BaseModel):
bar: Annotated[float, Field(strict=True, gt=0)]
A wrapper around float that allows for additional constraints.
from pydantic import BaseModel, ValidationError, confloat
class ConstrainedExample(BaseModel):
constrained_float: confloat(gt=1.0)
m = ConstrainedExample(constrained_float=1.1)
print(repr(m))
#> ConstrainedExample(constrained_float=1.1)
try:
ConstrainedExample(constrained_float=0.9)
except ValidationError as e:
print(e.errors())
'''
[
{
'type': 'greater_than',
'loc': ('constrained_float',),
'msg': 'Input should be greater than 1',
'input': 0.9,
'ctx': {'gt': 1.0},
'url': 'https://errors.pydantic.dev/2/v/greater_than',
}
]
'''
type[float] — The wrapped float type.
Whether to validate the float in strict mode.
The value must be greater than this.
The value must be greater than or equal to this.
The value must be less than this.
The value must be less than or equal to this.
The value must be a multiple of this.
Whether to allow -inf, inf, and nan.
def conbytes(
min_length: int | None = None,
max_length: int | None = None,
strict: bool | None = None,
) -> type[bytes]
A wrapper around bytes that allows for additional constraints.
type[bytes] — The wrapped bytes type.
The minimum length of the bytes.
The maximum length of the bytes.
Whether to validate the bytes in strict mode.
def constr(
    strip_whitespace: bool | None = None,
    to_upper: bool | None = None,
    to_lower: bool | None = None,
    strict: bool | None = None,
    min_length: int | None = None,
    max_length: int | None = None,
    pattern: str | Pattern[str] | None = None,
) -> type[str]

from pydantic import BaseModel, constr

class Foo(BaseModel):
    bar: constr(strip_whitespace=True, to_upper=True, pattern=r'^[A-Z]+$')

from typing import Annotated

from pydantic import BaseModel, StringConstraints

class Foo(BaseModel):
    bar: Annotated[
        str,
        StringConstraints(
            strip_whitespace=True, to_upper=True, pattern=r'^[A-Z]+$'
        ),
    ]

A wrapper around str that allows for additional constraints.

from pydantic import BaseModel, constr

class Foo(BaseModel):
    bar: constr(strip_whitespace=True, to_upper=True)

foo = Foo(bar=' hello ')
print(foo)
#> bar='HELLO'
type[str] — The wrapped string type.
Whether to remove leading and trailing whitespace.
Whether to turn all characters to uppercase.
Whether to turn all characters to lowercase.
Whether to validate the string in strict mode.
The minimum length of the string.
The maximum length of the string.
A regex pattern to validate the string against.
def conset(
    item_type: type[HashableItemType],
    min_length: int | None = None,
    max_length: int | None = None,
) -> type[set[HashableItemType]]
A wrapper around typing.Set that allows for additional constraints.
type[set[HashableItemType]] — The wrapped set type.
The type of the items in the set.
The minimum length of the set.
The maximum length of the set.
def confrozenset(
    item_type: type[HashableItemType],
    min_length: int | None = None,
    max_length: int | None = None,
) -> type[frozenset[HashableItemType]]
A wrapper around typing.FrozenSet that allows for additional constraints.
type[frozenset[HashableItemType]] — The wrapped frozenset type.
The type of the items in the frozenset.
The minimum length of the frozenset.
The maximum length of the frozenset.
def conlist(
    item_type: type[AnyItemType],
    min_length: int | None = None,
    max_length: int | None = None,
    unique_items: bool | None = None,
) -> type[list[AnyItemType]]
A wrapper around list that adds validation.
type[list[AnyItemType]] — The wrapped list type.
The type of the items in the list.
The minimum length of the list. Defaults to None.
The maximum length of the list. Defaults to None.
Whether the items in the list must be unique. Defaults to None.
def condecimal(
    strict: bool | None = None,
    gt: int | Decimal | None = None,
    ge: int | Decimal | None = None,
    lt: int | Decimal | None = None,
    le: int | Decimal | None = None,
    multiple_of: int | Decimal | None = None,
    max_digits: int | None = None,
    decimal_places: int | None = None,
    allow_inf_nan: bool | None = None,
) -> type[Decimal]

from pydantic import BaseModel, condecimal

class Foo(BaseModel):
    bar: condecimal(strict=True, allow_inf_nan=True)

from decimal import Decimal
from typing import Annotated

from pydantic import BaseModel, Field

class Foo(BaseModel):
    bar: Annotated[Decimal, Field(strict=True, allow_inf_nan=True)]
A wrapper around Decimal that adds validation.
from decimal import Decimal

from pydantic import BaseModel, ValidationError, condecimal

class ConstrainedExample(BaseModel):
    constrained_decimal: condecimal(gt=Decimal('1.0'))

m = ConstrainedExample(constrained_decimal=Decimal('1.1'))
print(repr(m))
#> ConstrainedExample(constrained_decimal=Decimal('1.1'))

try:
    ConstrainedExample(constrained_decimal=Decimal('0.9'))
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'greater_than',
            'loc': ('constrained_decimal',),
            'msg': 'Input should be greater than 1.0',
            'input': Decimal('0.9'),
            'ctx': {'gt': Decimal('1.0')},
            'url': 'https://errors.pydantic.dev/2/v/greater_than',
        }
    ]
    '''
type[Decimal] — The wrapped Decimal type.
Whether to validate the value in strict mode. Defaults to None.
The value must be greater than this. Defaults to None.
The value must be greater than or equal to this. Defaults to None.
The value must be less than this. Defaults to None.
The value must be less than or equal to this. Defaults to None.
The value must be a multiple of this. Defaults to None.
The maximum number of digits. Defaults to None.
The number of decimal places. Defaults to None.
Whether to allow infinity and NaN. Defaults to None.
def condate(
    strict: bool | None = None,
    gt: date | None = None,
    ge: date | None = None,
    lt: date | None = None,
    le: date | None = None,
) -> type[date]
A wrapper for date that adds constraints.
type[date] — A date type with the specified constraints.
Whether to validate the date value in strict mode. Defaults to None.
The value must be greater than this. Defaults to None.
The value must be greater than or equal to this. Defaults to None.
The value must be less than this. Defaults to None.
The value must be less than or equal to this. Defaults to None.
A type alias for a JSON schema value. This is a dictionary of string keys to arbitrary JSON values.
Default: dict[str, Any]
Default: TypeVar('T')
A boolean that must be either True or False.
Default: Annotated[bool, Strict()]
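A minimal usage sketch (model and field names here are illustrative, assuming pydantic v2); in strict mode, strings like 'true' are not coerced:

```python
from pydantic import BaseModel, StrictBool, ValidationError

class Flag(BaseModel):
    enabled: StrictBool

enabled = Flag(enabled=True).enabled
print(enabled)
#> True

try:
    Flag(enabled='true')  # a string is rejected in strict mode
    error_type = None
except ValidationError as e:
    error_type = e.errors()[0]['type']
print(error_type)
#> bool_type
```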
An integer that must be greater than zero.
from pydantic import BaseModel, PositiveInt, ValidationError

class Model(BaseModel):
    positive_int: PositiveInt

m = Model(positive_int=1)
print(repr(m))
#> Model(positive_int=1)

try:
    Model(positive_int=-1)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'greater_than',
            'loc': ('positive_int',),
            'msg': 'Input should be greater than 0',
            'input': -1,
            'ctx': {'gt': 0},
            'url': 'https://errors.pydantic.dev/2/v/greater_than',
        }
    ]
    '''
Default: Annotated[int, annotated_types.Gt(0)]
An integer that must be less than zero.
from pydantic import BaseModel, NegativeInt, ValidationError

class Model(BaseModel):
    negative_int: NegativeInt

m = Model(negative_int=-1)
print(repr(m))
#> Model(negative_int=-1)

try:
    Model(negative_int=1)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'less_than',
            'loc': ('negative_int',),
            'msg': 'Input should be less than 0',
            'input': 1,
            'ctx': {'lt': 0},
            'url': 'https://errors.pydantic.dev/2/v/less_than',
        }
    ]
    '''
Default: Annotated[int, annotated_types.Lt(0)]
An integer that must be less than or equal to zero.
from pydantic import BaseModel, NonPositiveInt, ValidationError

class Model(BaseModel):
    non_positive_int: NonPositiveInt

m = Model(non_positive_int=0)
print(repr(m))
#> Model(non_positive_int=0)

try:
    Model(non_positive_int=1)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'less_than_equal',
            'loc': ('non_positive_int',),
            'msg': 'Input should be less than or equal to 0',
            'input': 1,
            'ctx': {'le': 0},
            'url': 'https://errors.pydantic.dev/2/v/less_than_equal',
        }
    ]
    '''
Default: Annotated[int, annotated_types.Le(0)]
An integer that must be greater than or equal to zero.
from pydantic import BaseModel, NonNegativeInt, ValidationError

class Model(BaseModel):
    non_negative_int: NonNegativeInt

m = Model(non_negative_int=0)
print(repr(m))
#> Model(non_negative_int=0)

try:
    Model(non_negative_int=-1)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'greater_than_equal',
            'loc': ('non_negative_int',),
            'msg': 'Input should be greater than or equal to 0',
            'input': -1,
            'ctx': {'ge': 0},
            'url': 'https://errors.pydantic.dev/2/v/greater_than_equal',
        }
    ]
    '''
Default: Annotated[int, annotated_types.Ge(0)]
An integer that must be validated in strict mode.
from pydantic import BaseModel, StrictInt, ValidationError

class StrictIntModel(BaseModel):
    strict_int: StrictInt

try:
    StrictIntModel(strict_int=3.14159)
except ValidationError as e:
    print(e)
    '''
    1 validation error for StrictIntModel
    strict_int
      Input should be a valid integer [type=int_type, input_value=3.14159, input_type=float]
    '''
Default: Annotated[int, Strict()]
A float that must be greater than zero.
from pydantic import BaseModel, PositiveFloat, ValidationError

class Model(BaseModel):
    positive_float: PositiveFloat

m = Model(positive_float=1.0)
print(repr(m))
#> Model(positive_float=1.0)

try:
    Model(positive_float=-1.0)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'greater_than',
            'loc': ('positive_float',),
            'msg': 'Input should be greater than 0',
            'input': -1.0,
            'ctx': {'gt': 0.0},
            'url': 'https://errors.pydantic.dev/2/v/greater_than',
        }
    ]
    '''
Default: Annotated[float, annotated_types.Gt(0)]
A float that must be less than zero.
from pydantic import BaseModel, NegativeFloat, ValidationError

class Model(BaseModel):
    negative_float: NegativeFloat

m = Model(negative_float=-1.0)
print(repr(m))
#> Model(negative_float=-1.0)

try:
    Model(negative_float=1.0)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'less_than',
            'loc': ('negative_float',),
            'msg': 'Input should be less than 0',
            'input': 1.0,
            'ctx': {'lt': 0.0},
            'url': 'https://errors.pydantic.dev/2/v/less_than',
        }
    ]
    '''
Default: Annotated[float, annotated_types.Lt(0)]
A float that must be less than or equal to zero.
from pydantic import BaseModel, NonPositiveFloat, ValidationError

class Model(BaseModel):
    non_positive_float: NonPositiveFloat

m = Model(non_positive_float=0.0)
print(repr(m))
#> Model(non_positive_float=0.0)

try:
    Model(non_positive_float=1.0)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'less_than_equal',
            'loc': ('non_positive_float',),
            'msg': 'Input should be less than or equal to 0',
            'input': 1.0,
            'ctx': {'le': 0.0},
            'url': 'https://errors.pydantic.dev/2/v/less_than_equal',
        }
    ]
    '''
Default: Annotated[float, annotated_types.Le(0)]
A float that must be greater than or equal to zero.
from pydantic import BaseModel, NonNegativeFloat, ValidationError

class Model(BaseModel):
    non_negative_float: NonNegativeFloat

m = Model(non_negative_float=0.0)
print(repr(m))
#> Model(non_negative_float=0.0)

try:
    Model(non_negative_float=-1.0)
except ValidationError as e:
    print(e.errors())
    '''
    [
        {
            'type': 'greater_than_equal',
            'loc': ('non_negative_float',),
            'msg': 'Input should be greater than or equal to 0',
            'input': -1.0,
            'ctx': {'ge': 0.0},
            'url': 'https://errors.pydantic.dev/2/v/greater_than_equal',
        }
    ]
    '''
Default: Annotated[float, annotated_types.Ge(0)]
A float that must be validated in strict mode.
from pydantic import BaseModel, StrictFloat, ValidationError

class StrictFloatModel(BaseModel):
    strict_float: StrictFloat

try:
    StrictFloatModel(strict_float='1.0')
except ValidationError as e:
    print(e)
    '''
    1 validation error for StrictFloatModel
    strict_float
      Input should be a valid number [type=float_type, input_value='1.0', input_type=str]
    '''
Default: Annotated[float, Strict(True)]
A float that must be finite (not -inf, inf, or nan).
from pydantic import BaseModel, FiniteFloat

class Model(BaseModel):
    finite: FiniteFloat

m = Model(finite=1.0)
print(m)
#> finite=1.0
Default: Annotated[float, AllowInfNan(False)]
A bytes that must be validated in strict mode.
Default: Annotated[bytes, Strict()]
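A minimal usage sketch (model and field names here are illustrative, assuming pydantic v2); strict mode disables the usual str-to-bytes coercion:

```python
from pydantic import BaseModel, StrictBytes, ValidationError

class Blob(BaseModel):
    data: StrictBytes

data = Blob(data=b'raw').data
print(data)
#> b'raw'

try:
    Blob(data='raw')  # str -> bytes coercion is disabled in strict mode
    error_type = None
except ValidationError as e:
    error_type = e.errors()[0]['type']
print(error_type)
#> bytes_type
```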
A string that must be validated in strict mode.
Default: Annotated[str, Strict()]
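A minimal usage sketch (model and field names here are illustrative, assuming pydantic v2); strict mode disables the usual bytes-to-str coercion:

```python
from pydantic import BaseModel, StrictStr, ValidationError

class Name(BaseModel):
    value: StrictStr

value = Name(value='ok').value
print(value)
#> ok

try:
    Name(value=b'ok')  # bytes -> str coercion is disabled in strict mode
    error_type = None
except ValidationError as e:
    error_type = e.errors()[0]['type']
print(error_type)
#> string_type
```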
Default: TypeVar('HashableItemType', bound=Hashable)
Default: TypeVar('AnyItemType')
Default: TypeVar('AnyType')
A UUID that must be version 1.
import uuid

from pydantic import UUID1, BaseModel

class Model(BaseModel):
    uuid1: UUID1

Model(uuid1=uuid.uuid1())
Default: Annotated[UUID, UuidVersion(1)]
A UUID that must be version 3.
import uuid

from pydantic import UUID3, BaseModel

class Model(BaseModel):
    uuid3: UUID3

Model(uuid3=uuid.uuid3(uuid.NAMESPACE_DNS, 'pydantic.org'))
Default: Annotated[UUID, UuidVersion(3)]
A UUID that must be version 4.
import uuid

from pydantic import UUID4, BaseModel

class Model(BaseModel):
    uuid4: UUID4

Model(uuid4=uuid.uuid4())
Default: Annotated[UUID, UuidVersion(4)]
A UUID that must be version 5.
import uuid

from pydantic import UUID5, BaseModel

class Model(BaseModel):
    uuid5: UUID5

Model(uuid5=uuid.uuid5(uuid.NAMESPACE_DNS, 'pydantic.org'))
Default: Annotated[UUID, UuidVersion(5)]
A UUID that must be version 6.
import uuid

from pydantic import UUID6, BaseModel

class Model(BaseModel):
    uuid6: UUID6

Model(uuid6=uuid.UUID('1efea953-c2d6-6790-aa0a-69db8c87df97'))
Default: Annotated[UUID, UuidVersion(6)]
A UUID that must be version 7.
import uuid

from pydantic import UUID7, BaseModel

class Model(BaseModel):
    uuid7: UUID7

Model(uuid7=uuid.UUID('0194fdcb-1c47-7a09-b52c-561154de0b4a'))
Default: Annotated[UUID, UuidVersion(7)]
A UUID that must be version 8.
import uuid

from pydantic import UUID8, BaseModel

class Model(BaseModel):
    uuid8: UUID8

Model(uuid8=uuid.UUID('81a0b92e-6078-8551-9c81-8ccb666bdab8'))
Default: Annotated[UUID, UuidVersion(8)]
A path that must point to a file.
from pathlib import Path

from pydantic import BaseModel, FilePath, ValidationError

class Model(BaseModel):
    f: FilePath

path = Path('text.txt')
path.touch()
m = Model(f='text.txt')
print(m.model_dump())
#> {'f': PosixPath('text.txt')}
path.unlink()

path = Path('directory')
path.mkdir(exist_ok=True)
try:
    Model(f='directory')  # directory
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    f
      Path does not point to a file [type=path_not_file, input_value='directory', input_type=str]
    '''
path.rmdir()

try:
    Model(f='not-exists-file')
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    f
      Path does not point to a file [type=path_not_file, input_value='not-exists-file', input_type=str]
    '''
Default: Annotated[Path, PathType('file')]
A path that must point to a directory.
from pathlib import Path

from pydantic import BaseModel, DirectoryPath, ValidationError

class Model(BaseModel):
    f: DirectoryPath

path = Path('directory/')
path.mkdir()
m = Model(f='directory/')
print(m.model_dump())
#> {'f': PosixPath('directory')}
path.rmdir()

path = Path('file.txt')
path.touch()
try:
    Model(f='file.txt')  # file
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    f
      Path does not point to a directory [type=path_not_directory, input_value='file.txt', input_type=str]
    '''
path.unlink()

try:
    Model(f='not-exists-directory')
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    f
      Path does not point to a directory [type=path_not_directory, input_value='not-exists-directory', input_type=str]
    '''
Default: Annotated[Path, PathType('dir')]
A path for a new file or directory that must not already exist. The parent directory must already exist.
Default: Annotated[Path, PathType('new')]
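A minimal usage sketch (model and field names here are illustrative, assuming pydantic v2): a not-yet-existing path in an existing parent directory passes, while an existing path fails.

```python
from pathlib import Path

from pydantic import BaseModel, NewPath, ValidationError

class Target(BaseModel):
    out: NewPath

# The path does not exist yet, and its parent (the cwd) does: accepted
fresh = Target(out='does-not-exist-yet.txt').out
print(fresh)
#> does-not-exist-yet.txt

# A path that already exists is rejected
existing = Path('already-there.txt')
existing.touch()
try:
    Target(out='already-there.txt')
    raised = False
except ValidationError:
    raised = True
existing.unlink()
print(raised)
#> True
```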
A path to an existing socket file.
Default: Annotated[Path, PathType('socket')]
Default: TypeVar('SecretType')
A bytes type that is encoded and decoded using the standard (non-URL-safe) base64 encoder.
from pydantic import Base64Bytes, BaseModel, ValidationError

class Model(BaseModel):
    base64_bytes: Base64Bytes

# Initialize the model with base64 data
m = Model(base64_bytes=b'VGhpcyBpcyB0aGUgd2F5')

# Access decoded value
print(m.base64_bytes)
#> b'This is the way'

# Serialize into the base64 form
print(m.model_dump())
#> {'base64_bytes': b'VGhpcyBpcyB0aGUgd2F5'}

# Validate base64 data
try:
    print(Model(base64_bytes=b'undecodable').base64_bytes)
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    base64_bytes
      Base64 decoding error: 'Incorrect padding' [type=base64_decode, input_value=b'undecodable', input_type=bytes]
    '''
Default: Annotated[bytes, EncodedBytes(encoder=Base64Encoder)]
A str type that is encoded and decoded using the standard (non-URL-safe) base64 encoder.
from pydantic import Base64Str, BaseModel, ValidationError

class Model(BaseModel):
    base64_str: Base64Str

# Initialize the model with base64 data
m = Model(base64_str='VGhlc2UgYXJlbid0IHRoZSBkcm9pZHMgeW91J3JlIGxvb2tpbmcgZm9y')

# Access decoded value
print(m.base64_str)
#> These aren't the droids you're looking for

# Serialize into the base64 form
print(m.model_dump())
#> {'base64_str': 'VGhlc2UgYXJlbid0IHRoZSBkcm9pZHMgeW91J3JlIGxvb2tpbmcgZm9y'}

# Validate base64 data
try:
    print(Model(base64_str='undecodable').base64_str)
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    base64_str
      Base64 decoding error: 'Incorrect padding' [type=base64_decode, input_value='undecodable', input_type=str]
    '''
Default: Annotated[str, EncodedStr(encoder=Base64Encoder)]
A bytes type that is encoded and decoded using the URL-safe base64 encoder.
from pydantic import Base64UrlBytes, BaseModel

class Model(BaseModel):
    base64url_bytes: Base64UrlBytes

# Initialize the model with base64 data
m = Model(base64url_bytes=b'SHc_dHc-TXc==')
print(m)
#> base64url_bytes=b'Hw?tw>Mw'
Default: Annotated[bytes, EncodedBytes(encoder=Base64UrlEncoder)]
A str type that is encoded and decoded using the URL-safe base64 encoder.
from pydantic import Base64UrlStr, BaseModel

class Model(BaseModel):
    base64url_str: Base64UrlStr

# Initialize the model with base64 data
m = Model(base64url_str='SHc_dHc-TXc==')
print(m)
#> base64url_str='Hw?tw>Mw'
Default: Annotated[str, EncodedStr(encoder=Base64UrlEncoder)]
A JsonValue is used to represent a value that can be serialized to JSON.
It may be one of:
- list['JsonValue']
- dict[str, 'JsonValue']
- str
- bool
- int
- float
- None
The following example demonstrates how to use JsonValue to validate JSON data,
and what kind of errors to expect when the input data is not JSON-serializable.
import json

from pydantic import BaseModel, JsonValue, ValidationError

class Model(BaseModel):
    j: JsonValue

valid_json_data = {'j': {'a': {'b': {'c': 1, 'd': [2, None]}}}}
invalid_json_data = {'j': {'a': {'b': ...}}}

print(repr(Model.model_validate(valid_json_data)))
#> Model(j={'a': {'b': {'c': 1, 'd': [2, None]}}})
print(repr(Model.model_validate_json(json.dumps(valid_json_data))))
#> Model(j={'a': {'b': {'c': 1, 'd': [2, None]}}})

try:
    Model.model_validate(invalid_json_data)
except ValidationError as e:
    print(e)
    '''
    1 validation error for Model
    j.dict.a.dict.b
      input was not a valid JSON value [type=invalid-json-value, input_value=Ellipsis, input_type=ellipsis]
    '''
Type: TypeAlias Default: Union[list['JsonValue'], dict[str, 'JsonValue'], str, bool, int, float, None]
When used as an item in a list, the key type in a dict, an optional value of a TypedDict, etc.,
this annotation omits the item from the iteration if there is any error validating it.
That is, instead of a ValidationError being propagated up and the entire iterable being discarded,
any invalid items are discarded and the valid ones are returned.
Default: Annotated[T, _OnErrorOmit]
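A minimal usage sketch with TypeAdapter (the sample values are illustrative; OnErrorOmit requires a recent pydantic v2 release):

```python
from pydantic import OnErrorOmit, PositiveInt, TypeAdapter

adapter = TypeAdapter(list[OnErrorOmit[PositiveInt]])

# -2 (not positive) and 'x' (not an int) are silently dropped
# instead of failing the whole list
result = adapter.validate_python([1, -2, 3, 'x'])
print(result)
#> [1, 3]
```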