pydantic_ai.models.mistral

Setup

For details on how to set up authentication with this model, see model configuration for Mistral.

MistralModelSettings

Bases: ModelSettings

Settings used for a Mistral model request.

MistralModel

Bases: Model

A model that uses Mistral.

Internally, this uses the Mistral Python client to interact with the API.

API Documentation

Attributes

model_name

The model name.

Type: MistralModelName

system

The model provider.

Type: str

Methods

__init__
def __init__(
    model_name: MistralModelName,
    provider: Literal['mistral'] | Provider[Mistral] = 'mistral',
    profile: ModelProfileSpec | None = None,
    json_mode_schema_prompt: str = 'Answer in JSON Object, respect the format:\n```\n{schema}\n```\n',
    settings: ModelSettings | None = None,
)

Initialize a Mistral model.

Parameters

model_name : MistralModelName

The name of the model to use.

provider : Literal['mistral'] | Provider[Mistral] Default: 'mistral'

The provider to use for authentication and API access. Can be either the string 'mistral' or an instance of Provider[Mistral]. If not provided, a new provider will be created using the other parameters.

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider based on the model name.

json_mode_schema_prompt : str Default: 'Answer in JSON Object, respect the format:\n```\n{schema}\n```\n'

The prompt to show when the model expects a JSON object as input.

settings : ModelSettings | None Default: None

Model-specific settings that will be used as defaults for this model.
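The `json_mode_schema_prompt` template above contains a `{schema}` placeholder that is filled with the expected JSON schema. A minimal stdlib sketch of that substitution, using a hypothetical schema for illustration (the exact substitution mechanism inside the model is an assumption here):

```python
import json

# The default template from the signature above; '{schema}' is the
# placeholder replaced with the expected JSON schema.
json_mode_schema_prompt = 'Answer in JSON Object, respect the format:\n```\n{schema}\n```\n'

# A hypothetical output schema, for illustration only.
schema = {
    'type': 'object',
    'properties': {'city': {'type': 'string'}, 'population': {'type': 'integer'}},
    'required': ['city', 'population'],
}

# Fill the placeholder with a pretty-printed schema.
prompt = json_mode_schema_prompt.format(schema=json.dumps(schema, indent=2))
print(prompt)
```

Passing a custom string for `json_mode_schema_prompt` lets you change this wording, as long as the template keeps a `{schema}` placeholder.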

request

async def request(
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> ModelResponse

Make a non-streaming request to the model from a Pydantic AI call.

Returns

ModelResponse

request_stream

async def request_stream(
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
    run_context: RunContext[Any] | None = None,
) -> AsyncIterator[StreamedResponse]

Make a streaming request to the model from a Pydantic AI call.

Returns

AsyncIterator[StreamedResponse]

MistralStreamedResponse

Bases: StreamedResponse

Implementation of StreamedResponse for Mistral models.

Attributes

model_name

Get the model name of the response.

Type: MistralModelName

provider_name

Get the provider name.

Type: str

provider_url

Get the provider base URL.

Type: str

timestamp

Get the timestamp of the response.

Type: datetime

LatestMistralModelNames

Latest Mistral models.

Default: Literal['mistral-large-latest', 'mistral-small-latest', 'codestral-latest', 'mistral-moderation-latest']

MistralModelName

Possible Mistral model names.

Since Mistral supports a variety of date-stamped models, we explicitly list the most popular models but allow any name in the type hints. See the Mistral docs for a full list.

Default: str | LatestMistralModelNames