pydantic_ai.models.xai
xAI model implementation using the xAI SDK.
For details on how to set up authentication with this model, see the model configuration documentation for xAI.
Bases: ModelSettings
Settings specific to xAI models.
See the xAI SDK documentation for more details on these parameters.
Whether to return log probabilities of the output tokens or not.
Type: bool
An integer between 0 and 20 specifying the number of most likely tokens to return at each position.
Type: int
A unique identifier representing your end-user, which can help xAI to monitor and detect abuse.
Type: str
Whether to store messages on xAI’s servers for conversation continuity.
Type: bool
The ID of the previous response to continue the conversation.
Type: str
Whether to include the encrypted content in the response.
Corresponds to the use_encrypted_content value of the model settings in the Responses API.
Type: bool
Whether to include the code execution results in the response.
Corresponds to the code_interpreter_call.outputs value of the include parameter in the Responses API.
Type: bool
Whether to include the web search results in the response.
Corresponds to the web_search_call.action.sources value of the include parameter in the Responses API.
Type: bool
Whether to include inline citations in the response.
Corresponds to the inline_citations option in the xAI include parameter.
Type: bool
Whether to include the MCP results in the response.
Corresponds to the mcp_call.outputs value of the include parameter in the Responses API.
Type: bool
Reasoning effort level for Grok reasoning models.
See https://docs.x.ai for details.
Type: Literal['low', 'high']
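The constraints stated above (top_logprobs between 0 and 20, reasoning effort limited to 'low' or 'high') can be sketched as a small validation helper. This is an illustration only: the real settings TypedDict lives in pydantic_ai, and the key names used here (`top_logprobs`, `reasoning_effort`) are assumptions, not confirmed field names.

```python
from typing import Any


def check_xai_settings(settings: dict[str, Any]) -> None:
    """Sanity-check a few xAI setting values.

    Sketch only: the key names below are assumptions; consult the
    pydantic_ai API reference for the real TypedDict fields.
    """
    top = settings.get("top_logprobs")
    if top is not None and not (isinstance(top, int) and 0 <= top <= 20):
        raise ValueError("top_logprobs must be an integer between 0 and 20")
    effort = settings.get("reasoning_effort")
    if effort is not None and effort not in ("low", "high"):
        raise ValueError("reasoning_effort must be 'low' or 'high' for Grok models")
```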
Bases: Model
A model that uses the xAI SDK to interact with xAI models.
The model name.
Type: str
The model provider.
Type: str
def __init__(
model_name: XaiModelName,
provider: Literal['xai'] | Provider[AsyncClient] = 'xai',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Initialize the xAI model.
model_name : XaiModelName
The name of the xAI model to use (e.g., 'grok-4-1-fast-non-reasoning').
provider : Literal['xai'] | Provider[AsyncClient] Default: 'xai'
The provider to use for API calls. Defaults to 'xai'.
profile : ModelProfileSpec | None Default: None
Optional model profile specification. Defaults to a profile picked by the provider based on the model name.
settings : ModelSettings | None Default: None
Optional model settings.
@classmethod
def supported_builtin_tools(cls) -> frozenset[type]
Return the set of builtin tool types this model can handle.
async def request(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
model_request_parameters: ModelRequestParameters,
) -> ModelResponse
Make a request to the xAI model.
async def request_stream(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
model_request_parameters: ModelRequestParameters,
run_context: RunContext[Any] | None = None,
) -> AsyncIterator[StreamedResponse]
Make a streaming request to the xAI model.
Returns: AsyncIterator[StreamedResponse]
Bases: StreamedResponse
Implementation of StreamedResponse for xAI SDK.
The model provider system name.
Type: str
Get the provider base URL.
Type: str
Get the model name of the response.
Type: str
The model provider.
Type: str
Get the timestamp of the response.
Type: datetime
Maps unified thinking values to xAI reasoning_effort; xAI only supports 'low' and 'high'.
Type: dict[ThinkingLevel, Literal['low', 'high']] Default: {True: 'high', 'minimal': 'low', 'low': 'low', 'medium': 'high', 'high': 'high', 'xhigh': 'high'}
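The mapping above collapses the unified thinking levels onto xAI's two supported efforts; a simple lookup illustrates the behaviour (the dict literal reproduces the documented default, while the helper function name is illustrative):

```python
# The documented default mapping: unified ThinkingLevel -> xAI reasoning_effort.
THINKING_TO_EFFORT = {
    True: "high",
    "minimal": "low",
    "low": "low",
    "medium": "high",
    "high": "high",
    "xhigh": "high",
}


def to_reasoning_effort(level) -> str:
    # xAI only supports 'low' and 'high', so 'medium' and above become 'high'.
    return THINKING_TO_EFFORT[level]
```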
Possible xAI model names.
Type alias: str | ChatModel