
Pydantic v2.9

Sydney Runkle

Pydantic v2.9 is now available! You can install it now via PyPI or your favorite package manager:

pip install --upgrade pydantic

This release features the work of over 25 contributors! In this post, we'll cover the highlights of the release. You can see the full changelog on GitHub.

This release contains significant performance improvements, union serialization improvements, and a handful of new features.

We've added support for stdlib complex numbers in Pydantic. For validation, we support both complex instances and strings that can be parsed into complex numbers.

from pydantic import TypeAdapter


ta = TypeAdapter(complex)

complex_number = ta.validate_python('1+2j')
assert complex_number == complex(1, 2)

assert ta.dump_json(complex_number) == b'"1+2j"'

Credit for this goes to @changhc! For implementation details, see #9654.
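Complex numbers also work as ordinary model fields. Here's a minimal sketch (the Signal model and its field name are our own illustration, not from the release):

```python
from pydantic import BaseModel


class Signal(BaseModel):
    amplitude: complex  # accepts complex instances or parseable strings


# string input is parsed just like with the TypeAdapter above
signal = Signal(amplitude='0.5-1j')
assert signal.amplitude == complex(0.5, -1)
```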

Pydantic now supports the ZoneInfo type explicitly (in Python 3.9+). Here's an example of validation and serialization with the new type:

from pydantic import TypeAdapter
from zoneinfo import ZoneInfo

ta = TypeAdapter(ZoneInfo)

tz = ta.validate_python('America/Los_Angeles')
assert tz == ZoneInfo('America/Los_Angeles')

assert ta.dump_json(tz) == b'"America/Los_Angeles"'

Thanks for the contribution, @Youssefares! See #9896 for more details regarding the new implementation.

The new val_json_bytes setting enables users to specify which encoding to use when decoding bytes data from JSON. This setting, in combination with the existing ser_json_bytes, supports consistent JSON round-tripping for bytes data.

For example:

from pydantic import TypeAdapter, ConfigDict

ta = TypeAdapter(bytes, config=ConfigDict(ser_json_bytes='base64', val_json_bytes='base64'))

some_bytes = b'hello'
validated_bytes = ta.validate_python(some_bytes)

encoded_bytes = b'"aGVsbG8="'
assert ta.dump_json(validated_bytes) == encoded_bytes

# verifying round trip
# before we added support for val_json_bytes, the default encoding was 'utf-8' for validation, so this would fail
assert ta.validate_json(encoded_bytes) == validated_bytes

Thanks for the addition, @josh-newman! You can see the full implementation details here.

Previously, when using custom validators like BeforeValidator or field_validator, it wasn't possible to customize the mode='validation' JSON schema associated with the field / type in question.

Now, you can use the json_schema_input_type specification to customize the JSON schema for fields with custom validators. For example:

from typing import Any, Union

from pydantic_core import PydanticKnownError
from typing_extensions import Annotated

from pydantic import PlainValidator, TypeAdapter


def validate_maybe_int(v: Any) -> int:
    if isinstance(v, int):
        return v
    elif isinstance(v, str):
        try:
            return int(v)
        except ValueError:
            ...

    raise PydanticKnownError('int_parsing')


ta = TypeAdapter(Annotated[int, PlainValidator(validate_maybe_int, json_schema_input_type=Union[int, str])])
print(ta.json_schema(mode='validation'))
# > {'anyOf': [{'type': 'integer'}, {'type': 'string'}]}

Note: you can't use this new feature with mode='after' validators, as customizing the mode='validation' JSON schema doesn't make sense in that context.

For implementation details, see #10094. You can find documentation for json_schema_input_type in the API docs for all custom validators that support said specification.

During our v2.9.0 development cycle, we placed a large emphasis on improving the performance of Pydantic. Specifically, we've made significant improvements to the schema building process, which results in faster import times and reduced memory allocation.

Consider this use case: you have a large number of Pydantic models in a file, say models.py. You import a few of these models in another file, main.py. This is a relatively common pattern for Pydantic users.

For cases like the above, we've achieved up to a 10x improvement in import times, and a significant reduction in temporary memory allocations, which can be a huge win for users with an abundance of models.

We'll discuss a few of the specific improvements that we've made to the schema building process:

  1. Decreased pydantic import times by ~35% (see #10009). This covers cases like import pydantic and from pydantic import BaseModel.
  2. Sped up schema building by ~5% by optimizing imports in hot loops (see #10013).
  3. Sped up schema building (and reduced memory allocations) by up to 10x by skipping namespace caches (see #10113).
  4. Reduced temporary memory allocations by avoiding namespace copy operations (see #10267).

We have plans to continue with schema building performance improvements in v2.10 and beyond. You can find lots of additional detail discussed in the above PRs.
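If you'd like to see the import-time gains on your own machine, you can time a cold import in a fresh interpreter. This measurement sketch is our own illustration, not part of the release tooling:

```python
import subprocess
import sys
import timeit


def cold_import_time(module: str, repeats: int = 3) -> float:
    """Best-of-N wall-clock time (in seconds) to import `module` in a
    brand-new Python process, so module caching can't hide startup cost."""
    cmd = [sys.executable, '-c', f'import {module}']
    return min(
        timeit.timeit(lambda: subprocess.run(cmd, check=True), number=1)
        for _ in range(repeats)
    )


print(f'import pydantic: {cold_import_time("pydantic"):.3f}s')
```

Comparing the number printed under v2.8 and v2.9 should show the improvement described above.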

Pydantic is well known for its tagged union validation capabilities. In pydantic/pydantic-core#1397, we've added support for a tagged union serializer, which should make more intuitive serialization decisions when using tagged unions. We've also made some tangential fixes such as improving serialization choices for float | int, or Decimal | float unions.
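As a refresher, here's what a discriminated (tagged) union looks like. The Cat/Dog models below are illustrative; with the new serializer, the discriminator value, rather than guesswork over field shapes, determines which union member's serializer runs:

```python
from typing import Literal, Union

from typing_extensions import Annotated

from pydantic import BaseModel, Field, TypeAdapter


class Cat(BaseModel):
    pet_type: Literal['cat'] = 'cat'
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog'] = 'dog'
    barks: int


# the pet_type field acts as the tag for both validation and serialization
Pet = Annotated[Union[Cat, Dog], Field(discriminator='pet_type')]
ta = TypeAdapter(Pet)

pet = ta.validate_python({'pet_type': 'dog', 'barks': 3})
assert isinstance(pet, Dog)
assert ta.dump_python(pet) == {'pet_type': 'dog', 'barks': 3}
```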

In general, during schema generation, Pydantic is generous in applying validator / constraint logic to types. This can backfire in some cases, when at runtime it becomes evident that a given validator / constraint isn't compatible with some input data. In this release, we've designed more intuitive error messages for these cases and moved them to the validation (runtime) phase, rather than failing at schema build time in some valid cases. For implementation details, see #9999.

This change shouldn't affect anything except specialized usage of `json_schema_extra`. That being said, if you'd like to replicate the old behavior, see these docs.

Any affected JSON syntax is now valid, and simpler! See #10029 for details.

This is relatively self-explanatory. See #10181 for more details. This change encourages syntactically valid JSON schemas.

We are excited to announce that Pydantic v2.9.0 is here, and it's the most feature-rich and fastest version of Pydantic yet. If you have any questions or feedback, please open a GitHub discussion. If you encounter any bugs, please open a GitHub issue.

Thank you to all of our contributors for making this release possible!

If you're enjoying Pydantic, you might really like Pydantic Logfire, a new observability tool built by the team behind Pydantic. You can now try Logfire for free. We'd love it if you'd join the Pydantic Logfire Slack and let us know what you think!