pydantic_ai.models.openrouter
For details on how to set up authentication with this model, see model configuration for OpenRouter.
Bases: TypedDict
Represents the 'Provider' object from the OpenRouter API.
List of provider slugs to try in order (e.g. ["anthropic", "openai"]). See details
Type: list[OpenRouterProviderName]
Whether to allow backup providers when the primary is unavailable. See details
Type: bool
Only use providers that support all parameters in your request.
Type: bool
Control whether to use providers that may store data. See details
Type: Literal['allow', 'deny']
Restrict routing to only ZDR (Zero Data Retention) endpoints. See details
Type: bool
List of provider slugs to allow for this request. See details
Type: list[OpenRouterProviderName]
List of provider slugs to skip for this request. See details
Type: list[OpenRouterProviderName]
List of quantization levels to filter by (e.g. ["int4", "int8"]). See details
Type: list[Literal['int4', 'int8', 'fp4', 'fp6', 'fp8', 'fp16', 'bf16', 'fp32', 'unknown']]
Sort providers by price, throughput, or latency (e.g. "price"). See details
Type: Literal['price', 'throughput', 'latency']
The maximum pricing you want to pay for this request. See details
Type: _OpenRouterMaxPrice
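Taken together, the fields above form a single routing-preferences object. A minimal sketch as a plain dict (TypedDicts are ordinary dicts at runtime; the key names mirror the fields documented above, and the shape of `_OpenRouterMaxPrice` is an assumption, not confirmed by this page):

```python
# Sketch of an OpenRouter provider routing object as a plain dict.
# All values are illustrative.
provider_config = {
    'order': ['anthropic', 'openai'],  # provider slugs to try, in order
    'allow_fallbacks': True,           # allow backups if the primary is unavailable
    'require_parameters': True,        # only providers supporting every request param
    'data_collection': 'deny',         # skip providers that may store data
    'quantizations': ['fp8', 'bf16'],  # restrict to these quantization levels
    'sort': 'price',                   # rank candidate providers by price
    # Assumed shape of _OpenRouterMaxPrice: maximum USD prices you will pay.
    'max_price': {'prompt': 1.0, 'completion': 2.0},
}
```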
Bases: TypedDict
Configuration for reasoning tokens in OpenRouter requests.
Reasoning tokens allow models to show their step-by-step thinking process. You can configure this using either OpenAI-style effort levels or Anthropic-style token limits, but not both simultaneously.
OpenAI-style reasoning effort level. Cannot be used with max_tokens.
Type: Literal['xhigh', 'high', 'medium', 'low', 'minimal', 'none']
Anthropic-style specific token limit for reasoning. Cannot be used with effort.
Type: int
Whether to exclude reasoning tokens from the response. Default is False. All models support this.
Type: bool
Whether to enable reasoning with default parameters. Default is inferred from effort or max_tokens.
Type: bool
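The mutual exclusivity of `effort` and `max_tokens` can be sketched with a small validator (illustrative only; not part of the library):

```python
def validate_reasoning(config: dict) -> dict:
    """Reject reasoning configs that mix OpenAI-style `effort` with
    Anthropic-style `max_tokens`, per the constraint documented above."""
    if 'effort' in config and 'max_tokens' in config:
        raise ValueError("use either 'effort' or 'max_tokens', not both")
    return config

# Two valid, mutually exclusive styles:
effort_style = validate_reasoning({'effort': 'high', 'exclude': False})
budget_style = validate_reasoning({'max_tokens': 2048, 'enabled': True})
```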
Bases: TypedDict
Configuration for OpenRouter usage.
Bases: ModelSettings
Settings used for an OpenRouter model request.
A list of fallback models.
These models will be tried, in order, if the main model returns an error. See details
OpenRouter routes requests to the best available providers for your model. By default, requests are load balanced across the top providers to maximize uptime.
You can customize how your requests are routed using the provider object. See more
Type: OpenRouterProviderConfig
Presets allow you to separate your LLM configuration from your code.
Create and manage presets through the OpenRouter web application to control provider routing, model selection, system prompts, and other parameters, then reference them in OpenRouter API requests. See more
Type: str
To help with prompts that exceed the maximum context size of a model.
Transforms work by removing or truncating messages from the middle of the prompt, until the prompt fits within the model’s context window. See more
Type: list[OpenRouterTransforms]
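The 'middle-out' idea can be illustrated with a rough sketch. This toy version counts messages rather than tokens; OpenRouter's real transform truncates against the model's token-based context window:

```python
def middle_out(messages: list[str], max_messages: int) -> list[str]:
    """Drop messages from the middle of the prompt until it fits.

    Toy illustration of the 'middle-out' transform described above,
    keeping the start of the conversation and the most recent messages.
    """
    if len(messages) <= max_messages:
        return messages
    keep_head = (max_messages + 1) // 2   # keep the opening messages
    keep_tail = max_messages - keep_head  # and the most recent ones
    return messages[:keep_head] + messages[len(messages) - keep_tail:]
```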
To control the reasoning tokens in the request.
The reasoning config object consolidates settings for controlling reasoning strength across different models. See more
Type: OpenRouterReasoning
To control the usage of the model.
The usage config object consolidates settings for enabling detailed usage information. See more
Type: OpenRouterUsageConfig
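The settings above map onto fields of the OpenRouter request body. A sketch of such a body as a plain dict, combining fallback models, provider routing, transforms, reasoning, and usage (model slugs and values are illustrative, not recommendations):

```python
# Illustrative OpenRouter request-body fields corresponding to the
# settings documented above. Model slugs are examples only.
request_body = {
    'model': 'anthropic/claude-sonnet-4',                      # primary model
    'models': ['openai/gpt-4o', 'mistralai/mistral-large'],    # fallbacks, tried in order
    'provider': {'order': ['anthropic'], 'allow_fallbacks': True},  # routing preferences
    'transforms': ['middle-out'],                              # compress oversized prompts
    'reasoning': {'effort': 'medium'},                         # reasoning-token control
    'usage': {'include': True},                                # request detailed usage info
}
```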
Bases: OpenAIChatModel
Extends OpenAIChatModel to capture extra metadata for OpenRouter.
def __init__(
model_name: str,
provider: Literal['openrouter'] | Provider[AsyncOpenAI] = 'openrouter',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Initialize an OpenRouter model.
model_name : str
The name of the model to use.
provider : Literal['openrouter'] | Provider[AsyncOpenAI] Default: 'openrouter'
The provider to use for authentication and API access. If not provided, a new provider will be created with the default settings.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider based on the model name.
settings : ModelSettings | None Default: None
Model-specific settings that will be used as defaults for this model.
@classmethod
def supported_builtin_tools(cls) -> frozenset[type[AbstractBuiltinTool]]
Return the set of builtin tool types this model can handle.
OpenRouter supports web search via its plugins system.
frozenset[type[AbstractBuiltinTool]]
Bases: OpenAIStreamedResponse
Implementation of StreamedResponse for OpenRouter models.
Known providers in the OpenRouter marketplace.
Default: Literal['z-ai', 'cerebras', 'venice', 'moonshotai', 'morph', 'stealth', 'wandb', 'klusterai', 'openai', 'sambanova', 'amazon-bedrock', 'mistral', 'nextbit', 'atoma', 'ai21', 'minimax', 'baseten', 'anthropic', 'featherless', 'groq', 'lambda', 'azure', 'ncompass', 'deepseek', 'hyperbolic', 'crusoe', 'cohere', 'mancer', 'avian', 'perplexity', 'novita', 'siliconflow', 'switchpoint', 'xai', 'inflection', 'fireworks', 'deepinfra', 'inference-net', 'inception', 'atlas-cloud', 'nvidia', 'alibaba', 'friendli', 'infermatic', 'targon', 'ubicloud', 'aion-labs', 'liquid', 'nineteen', 'cloudflare', 'nebius', 'chutes', 'enfer', 'crofai', 'open-inference', 'phala', 'gmicloud', 'meta', 'relace', 'parasail', 'together', 'google-ai-studio', 'google-vertex']
Possible OpenRouter provider names.
Since OpenRouter is constantly updating their list of providers, we explicitly list some known providers but allow any name in the type hints. See the OpenRouter API for a full list.
Default: str | KnownOpenRouterProviders
Available messages transforms for OpenRouter models with limited token windows.
Currently only supports 'middle-out', but is expected to grow in the future.
Default: Literal['middle-out']