pydantic_ai.models.xai

Setup

For details on how to set up authentication with this model, see model configuration for xAI.

xAI model implementation using xAI SDK.

XaiModelSettings

Bases: ModelSettings

Settings specific to xAI models.

See xAI SDK documentation for more details on these parameters.

Attributes

xai_logprobs

Whether to return log probabilities of the output tokens or not.

Type: bool

xai_top_logprobs

An integer between 0 and 20 specifying the number of most likely tokens to return at each position.

Type: int

xai_user

A unique identifier representing your end-user, which can help xAI to monitor and detect abuse.

Type: str

xai_store_messages

Whether to store messages on xAI’s servers for conversation continuity.

Type: bool

xai_previous_response_id

The ID of the previous response to continue the conversation.

Type: str

xai_include_encrypted_content

Whether to include the encrypted content in the response.

Corresponds to the use_encrypted_content value of the model settings in the Responses API.

Type: bool

xai_include_code_execution_output

Whether to include the code execution results in the response.

Corresponds to the code_interpreter_call.outputs value of the include parameter in the Responses API.

Type: bool

xai_include_web_search_output

Whether to include the web search results in the response.

Corresponds to the web_search_call.action.sources value of the include parameter in the Responses API.

Type: bool

xai_include_inline_citations

Whether to include inline citations in the response.

Corresponds to the inline_citations option in the xAI include parameter.

Type: bool

xai_include_mcp_output

Whether to include the MCP results in the response.

Corresponds to the mcp_call.outputs value of the include parameter in the Responses API.

Type: bool

xai_reasoning_effort

Reasoning effort level for Grok reasoning models.

See https://docs.x.ai for details.

Type: Literal['low', 'high']
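Since `ModelSettings` in pydantic_ai is a `TypedDict`, the xAI-specific settings above are plain dictionary keys. A minimal sketch, assuming only the attributes documented in this section (the chosen values are illustrative):

```python
# Illustrative XaiModelSettings values as a plain dict, mirroring the
# attributes documented above. ModelSettings is a TypedDict, so no
# class construction is needed.
settings = {
    "xai_logprobs": True,
    "xai_top_logprobs": 5,          # must be an integer between 0 and 20
    "xai_user": "end-user-1234",    # opaque end-user identifier
    "xai_store_messages": True,
    "xai_reasoning_effort": "low",  # xAI supports only 'low' and 'high'
}

# Sanity checks reflecting the documented constraints.
assert 0 <= settings["xai_top_logprobs"] <= 20
assert settings["xai_reasoning_effort"] in ("low", "high")
```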

XaiModel

Bases: Model

A model that uses the xAI SDK to interact with xAI models.

Attributes

model_name

The model name.

Type: str

system

The model provider.

Type: str

Methods

__init__
def __init__(
    model_name: XaiModelName,
    provider: Literal['xai'] | Provider[AsyncClient] = 'xai',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Initialize the xAI model.

Parameters

model_name : XaiModelName

The name of the xAI model to use (e.g., "grok-4-1-fast-non-reasoning")

provider : Literal['xai'] | Provider[AsyncClient] Default: 'xai'

The provider to use for API calls. Defaults to 'xai'.

profile : ModelProfileSpec | None Default: None

Optional model profile specification. Defaults to a profile picked by the provider based on the model name.

settings : ModelSettings | None Default: None

Optional model settings.

supported_builtin_tools

@classmethod
def supported_builtin_tools(cls) -> frozenset[type]

Return the set of builtin tool types this model can handle.

Returns

frozenset[type]

request

async def request(
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> ModelResponse

Make a request to the xAI model.

Returns

ModelResponse

request_stream

async def request_stream(
    messages: list[ModelMessage],
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
    run_context: RunContext[Any] | None = None,
) -> AsyncIterator[StreamedResponse]

Make a streaming request to the xAI model.

Returns

AsyncIterator[StreamedResponse]

XaiStreamedResponse

Bases: StreamedResponse

Implementation of StreamedResponse for xAI SDK.

Attributes

system

The model provider system name.

Type: str

provider_url

Get the provider base URL.

Type: str

model_name

Get the model name of the response.

Type: str

provider_name

The model provider.

Type: str

timestamp

Get the timestamp of the response.

Type: datetime

XAI_EFFORT_MAP

Maps unified thinking values to xAI reasoning_effort. xAI only supports 'low' and 'high'.

Type: dict[ThinkingLevel, Literal['low', 'high']] Default: {True: 'high', 'minimal': 'low', 'low': 'low', 'medium': 'high', 'high': 'high', 'xhigh': 'high'}
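Because xAI exposes only two effort levels, intermediate thinking levels collapse onto the nearest supported value. A sketch reproducing the documented mapping; the `resolve_effort` helper is hypothetical, not part of the module:

```python
# The documented default mapping from unified thinking levels to xAI's
# two-level reasoning_effort: 'minimal' and 'low' map down to 'low',
# everything else (including the bare True flag) maps up to 'high'.
XAI_EFFORT_MAP = {
    True: "high",
    "minimal": "low",
    "low": "low",
    "medium": "high",
    "high": "high",
    "xhigh": "high",
}

def resolve_effort(level):
    """Hypothetical helper: map a unified thinking level to xAI reasoning_effort."""
    return XAI_EFFORT_MAP[level]
```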

XaiModelName

Possible xAI model names.

Type: str | ChatModel