
pydantic_ai.providers

Provider

Bases: ABC, Generic[InterfaceClient]

Abstract class for a provider.

The provider is in charge of providing an authenticated client to the API.

Each provider only supports a specific interface. An interface can be supported by multiple providers.

For example, the OpenAIChatModel interface can be supported by the OpenAIProvider and the DeepSeekProvider.

Attributes

name

The provider name.

Type: str

base_url

The base URL for the provider API.

Type: str

client

The client for the provider.

Type: InterfaceClient

Methods

model_profile

@staticmethod

def model_profile(model_name: str) -> ModelProfile | None

The model profile for the named model, if available.

Returns

ModelProfile | None

gateway_provider

def gateway_provider(
    upstream_provider: Literal['openai', 'openai-chat', 'openai-responses', 'chat', 'responses'],
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> Provider[AsyncOpenAI]
def gateway_provider(
    upstream_provider: Literal['groq'],
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> Provider[AsyncGroq]
def gateway_provider(
    upstream_provider: Literal['anthropic'],
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> Provider[AsyncAnthropicClient]
def gateway_provider(
    upstream_provider: Literal['bedrock', 'converse'],
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
) -> Provider[BaseClient]
def gateway_provider(
    upstream_provider: Literal['gemini', 'google-vertex'],
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> Provider[GoogleClient]
def gateway_provider(
    upstream_provider: str,
    route: str | None = None,
    api_key: str | None = None,
    base_url: str | None = None,
) -> Provider[Any]

Create a new Gateway provider.

Returns

Provider[Any]

Parameters

upstream_provider : UpstreamProvider | str

The upstream provider to use.

route : str | None Default: None

The name of the provider or routing group to use to handle the request. If not provided, the default routing group for the API format will be used.

api_key : str | None Default: None

The API key to use for authentication. If not provided, the PYDANTIC_AI_GATEWAY_API_KEY environment variable will be used if available.

base_url : str | None Default: None

The base URL to use for the Gateway. If not provided, the PYDANTIC_AI_GATEWAY_BASE_URL environment variable will be used if available. Otherwise, defaults to https://gateway.pydantic.dev/proxy.

http_client : httpx.AsyncClient | None Default: None

The HTTP client to use for the Gateway.

AnthropicProvider

Bases: Provider[AsyncAnthropicClient]

Provider for Anthropic API.

Methods

__init__
def __init__(anthropic_client: AsyncAnthropicClient | None = None) -> None
def __init__(
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Anthropic provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the ANTHROPIC_API_KEY environment variable will be used if available.

base_url : str | None Default: None

The base URL to use for the Anthropic API.

anthropic_client : AsyncAnthropicClient | None Default: None

An existing Anthropic client to use. Accepts AsyncAnthropic, AsyncAnthropicBedrock, AsyncAnthropicFoundry, or AsyncAnthropicVertex. If provided, the api_key and http_client arguments will be ignored.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

GoogleProvider

Bases: Provider[Client]

Provider for Google.

Methods

__init__
def __init__(
    api_key: str,
    http_client: httpx.AsyncClient | None = None,
    base_url: str | None = None,
) -> None
def __init__(
    credentials: Credentials | None = None,
    project: str | None = None,
    location: VertexAILocation | Literal['global'] | str | None = None,
    http_client: httpx.AsyncClient | None = None,
    base_url: str | None = None,
) -> None
def __init__(client: Client) -> None
def __init__(
    vertexai: bool = False,
    api_key: str | None = None,
    http_client: httpx.AsyncClient | None = None,
    base_url: str | None = None,
) -> None

Create a new Google provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication (see https://ai.google.dev/gemini-api/docs/api-key). It can also be set via the GOOGLE_API_KEY environment variable.

credentials : Credentials | None Default: None

The credentials to use for authentication when calling the Vertex AI APIs. Credentials can be obtained from environment variables and default credentials. For more information, see Set up Application Default Credentials. Applies to the Vertex AI API only.

project : str | None Default: None

The Google Cloud project ID to use for quota. Can be obtained from environment variables (for example, GOOGLE_CLOUD_PROJECT). Applies to the Vertex AI API only.

location : VertexAILocation | Literal['global'] | str | None Default: None

The location to send API requests to (for example, us-central1). Can be obtained from environment variables. Applies to the Vertex AI API only.

vertexai : bool Default: False

Force the use of the Vertex AI API. If False, the Google Generative Language API will be used. Defaults to False unless location, project, or credentials are provided.

client : Client | None Default: None

A pre-initialized client to use.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

base_url : str | None Default: None

The base URL for the Google API.

VertexAILocation

Regions available for Vertex AI. See the Google Cloud documentation for the full list.

Default: Literal['asia-east1', 'asia-east2', 'asia-northeast1', 'asia-northeast3', 'asia-south1', 'asia-southeast1', 'australia-southeast1', 'europe-central2', 'europe-north1', 'europe-southwest1', 'europe-west1', 'europe-west2', 'europe-west3', 'europe-west4', 'europe-west6', 'europe-west8', 'europe-west9', 'me-central1', 'me-central2', 'me-west1', 'northamerica-northeast1', 'southamerica-east1', 'us-central1', 'us-east1', 'us-east4', 'us-east5', 'us-south1', 'us-west1', 'us-west4']

OpenAIProvider

Bases: Provider[AsyncOpenAI]

Provider for OpenAI API.

Methods

__init__
def __init__(openai_client: AsyncOpenAI) -> None
def __init__(
    base_url: str | None = None,
    api_key: str | None = None,
    openai_client: None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new OpenAI provider.

Returns

None

Parameters

base_url : str | None Default: None

The base url for the OpenAI requests. If not provided, the OPENAI_BASE_URL environment variable will be used if available. Otherwise, defaults to OpenAI’s base url.

api_key : str | None Default: None

The API key to use for authentication, if not provided, the OPENAI_API_KEY environment variable will be used if available.

openai_client : AsyncOpenAI | None Default: None

An existing AsyncOpenAI client to use. If provided, base_url, api_key, and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

XaiProvider

Bases: Provider[AsyncClient]

Provider for xAI API (native xAI SDK).

Methods

__init__
def __init__() -> None
def __init__(api_key: str) -> None
def __init__(xai_client: AsyncClient) -> None

Create a new xAI provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the XAI_API_KEY environment variable will be used if available.

xai_client : AsyncClient | None Default: None

An existing xai_sdk.AsyncClient to use. This takes precedence over api_key.

DeepSeekProvider

Bases: Provider[AsyncOpenAI]

Provider for DeepSeek API.

BedrockModelProfile

Bases: ModelProfile

Profile for models used with BedrockModel.

All fields must be prefixed with bedrock_ so they can be merged with profiles from other models.

Attributes

bedrock_thinking_variant

Which thinking API shape to use for unified thinking translation.

  • 'anthropic': Uses {'thinking': {'type': 'enabled', 'budget_tokens': N}}
  • 'openai': Uses {'reasoning_effort': 'low'|'medium'|'high'}
  • 'qwen': Uses {'reasoning_config': 'low'|'high'}
  • None: No unified thinking support.

Type: Literal['anthropic', 'openai', 'qwen'] | None Default: None

BedrockProvider

Bases: Provider[BaseClient]

Provider for AWS Bedrock.

Methods

__init__
def __init__(bedrock_client: BaseClient) -> None
def __init__(
    api_key: str,
    base_url: str | None = None,
    region_name: str | None = None,
    profile_name: str | None = None,
    aws_read_timeout: float | None = None,
    aws_connect_timeout: float | None = None,
) -> None
def __init__(
    aws_access_key_id: str | None = None,
    aws_secret_access_key: str | None = None,
    aws_session_token: str | None = None,
    base_url: str | None = None,
    region_name: str | None = None,
    profile_name: str | None = None,
    aws_read_timeout: float | None = None,
    aws_connect_timeout: float | None = None,
) -> None

Initialize the Bedrock provider.

Returns

None

Parameters

bedrock_client : BaseClient | None Default: None

A boto3 client for Bedrock Runtime. If provided, other arguments are ignored.

aws_access_key_id : str | None Default: None

The AWS access key ID. If not set, the AWS_ACCESS_KEY_ID environment variable will be used if available.

aws_secret_access_key : str | None Default: None

The AWS secret access key. If not set, the AWS_SECRET_ACCESS_KEY environment variable will be used if available.

aws_session_token : str | None Default: None

The AWS session token. If not set, the AWS_SESSION_TOKEN environment variable will be used if available.

api_key : str | None Default: None

The API key for Bedrock client. Can be used instead of aws_access_key_id, aws_secret_access_key, and aws_session_token. If not set, the AWS_BEARER_TOKEN_BEDROCK environment variable will be used if available.

base_url : str | None Default: None

The base URL for the Bedrock client.

region_name : str | None Default: None

The AWS region name. If not set, the AWS_DEFAULT_REGION environment variable will be used if available.

profile_name : str | None Default: None

The AWS profile name.

aws_read_timeout : float | None Default: None

The read timeout for Bedrock client.

aws_connect_timeout : float | None Default: None

The connect timeout for Bedrock client.

bedrock_amazon_model_profile

def bedrock_amazon_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for an Amazon model used via Bedrock.

Returns

ModelProfile | None

bedrock_deepseek_model_profile

def bedrock_deepseek_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a DeepSeek model used via Bedrock.

Returns

ModelProfile | None

remove_bedrock_geo_prefix

def remove_bedrock_geo_prefix(model_name: str) -> str

Remove inference geographic prefix from model ID if present.

Bedrock supports cross-region inference using geographic prefixes like 'us.', 'eu.', 'apac.', etc. This function strips those prefixes.

Returns

str

GroqProvider

Bases: Provider[AsyncGroq]

Provider for Groq API.

Methods

__init__
def __init__(groq_client: AsyncGroq | None = None) -> None
def __init__(
    api_key: str | None = None,
    base_url: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Groq provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the GROQ_API_KEY environment variable will be used if available.

base_url : str | None Default: None

The base url for the Groq requests. If not provided, the GROQ_BASE_URL environment variable will be used if available. Otherwise, defaults to Groq’s base url.

groq_client : AsyncGroq | None Default: None

An existing AsyncGroq client to use. If provided, api_key and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing AsyncClient to use for making HTTP requests.

groq_moonshotai_model_profile

def groq_moonshotai_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a MoonshotAI model used with the Groq provider.

Returns

ModelProfile | None

meta_groq_model_profile

def meta_groq_model_profile(model_name: str) -> ModelProfile | None

Get the model profile for a Meta model used with the Groq provider.

Returns

ModelProfile | None

AzureProvider

Bases: Provider[AsyncOpenAI]

Provider for Azure OpenAI API.

See https://azure.microsoft.com/en-us/products/ai-foundry for more information.

Methods

__init__
def __init__(openai_client: AsyncAzureOpenAI) -> None
def __init__(
    azure_endpoint: str | None = None,
    api_version: str | None = None,
    api_key: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Azure provider.

Returns

None

Parameters

azure_endpoint : str | None Default: None

The Azure endpoint to use for authentication, if not provided, the AZURE_OPENAI_ENDPOINT environment variable will be used if available.

api_version : str | None Default: None

The API version to use for authentication, if not provided, the OPENAI_API_VERSION environment variable will be used if available.

api_key : str | None Default: None

The API key to use for authentication, if not provided, the AZURE_OPENAI_API_KEY environment variable will be used if available.

openai_client : AsyncAzureOpenAI | None Default: None

An existing AsyncAzureOpenAI client to use. If provided, base_url, api_key, and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

CohereProvider

Bases: Provider[AsyncClientV2]

Provider for Cohere API.

Methods

__init__
def __init__(
    api_key: str | None = None,
    cohere_client: AsyncClientV2 | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Cohere provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the CO_API_KEY environment variable will be used if available.

cohere_client : AsyncClientV2 | None Default: None

An existing AsyncClientV2 client to use. If provided, api_key and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

VoyageAIProvider

Bases: Provider[AsyncClient]

Provider for VoyageAI API.

Methods

__init__
def __init__(voyageai_client: AsyncClient) -> None
def __init__(api_key: str | None = None) -> None

Create a new VoyageAI provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the VOYAGE_API_KEY environment variable will be used if available.

voyageai_client : AsyncClient | None Default: None

An existing AsyncClient to use. If provided, api_key must be None.

CerebrasProvider

Bases: Provider[AsyncOpenAI]

Provider for Cerebras API.

Methods

__init__
def __init__() -> None
def __init__(api_key: str) -> None
def __init__(api_key: str, http_client: httpx.AsyncClient) -> None
def __init__(http_client: httpx.AsyncClient) -> None
def __init__(openai_client: AsyncOpenAI | None = None) -> None

Create a new Cerebras provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the CEREBRAS_API_KEY environment variable will be used if available.

openai_client : AsyncOpenAI | None Default: None

An existing AsyncOpenAI client to use. If provided, api_key and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

MistralProvider

Bases: Provider[Mistral]

Provider for Mistral API.

Methods

__init__
def __init__(mistral_client: Mistral | None = None) -> None
def __init__(
    api_key: str | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Mistral provider.

Returns

None

Parameters

api_key : str | None Default: None

The API key to use for authentication, if not provided, the MISTRAL_API_KEY environment variable will be used if available.

mistral_client : Mistral | None Default: None

An existing Mistral client to use, if provided, api_key and http_client must be None.

base_url : str | None Default: None

The base url for the Mistral requests.

http_client : httpx.AsyncClient | None Default: None

An existing async client to use for making HTTP requests.

FireworksProvider

Bases: Provider[AsyncOpenAI]

Provider for Fireworks AI API.

GrokProvider

Bases: Provider[AsyncOpenAI]

Provider for Grok API (OpenAI-compatible interface).

TogetherProvider

Bases: Provider[AsyncOpenAI]

Provider for Together AI API.

HerokuProvider

Bases: Provider[AsyncOpenAI]

Provider for Heroku API.

GitHubProvider

Bases: Provider[AsyncOpenAI]

Provider for GitHub Models API.

GitHub Models provides access to various AI models through an OpenAI-compatible API. See https://docs.github.com/en/github-models for more information.

Methods

__init__
def __init__() -> None
def __init__(api_key: str) -> None
def __init__(api_key: str, http_client: httpx.AsyncClient) -> None
def __init__(openai_client: AsyncOpenAI | None = None) -> None

Create a new GitHub Models provider.

Returns

None

Parameters

api_key : str | None Default: None

The GitHub token to use for authentication. If not provided, the GITHUB_API_KEY environment variable will be used if available.

openai_client : AsyncOpenAI | None Default: None

An existing AsyncOpenAI client to use. If provided, api_key and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

OpenRouterProvider

Bases: Provider[AsyncOpenAI]

Provider for OpenRouter API.

Methods

__init__
def __init__(openai_client: AsyncOpenAI) -> None
def __init__(
    api_key: str | None = None,
    app_url: str | None = None,
    app_title: str | None = None,
    openai_client: None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Configure the provider with either an API key or a prebuilt client.

Returns

None

Parameters

api_key : str | None Default: None

OpenRouter API key. Falls back to OPENROUTER_API_KEY when omitted; required unless openai_client is provided.

app_url : str | None Default: None

Optional url for app attribution. Falls back to OPENROUTER_APP_URL when omitted.

app_title : str | None Default: None

Optional title for app attribution. Falls back to OPENROUTER_APP_TITLE when omitted.

openai_client : AsyncOpenAI | None Default: None

Existing AsyncOpenAI client to reuse instead of creating one internally.

http_client : httpx.AsyncClient | None Default: None

Custom httpx.AsyncClient to pass into the AsyncOpenAI constructor when building a client.

Raises
  • UserError — If no API key is available and no openai_client is provided.

VercelProvider

Bases: Provider[AsyncOpenAI]

Provider for Vercel AI Gateway API.

HuggingFaceProvider

Bases: Provider[AsyncInferenceClient]

Provider for Hugging Face.

Methods

__init__
def __init__(base_url: str, api_key: str | None = None) -> None
def __init__(provider_name: str, api_key: str | None = None) -> None
def __init__(hf_client: AsyncInferenceClient, api_key: str | None = None) -> None
def __init__(
    hf_client: AsyncInferenceClient,
    base_url: str,
    api_key: str | None = None,
) -> None
def __init__(
    hf_client: AsyncInferenceClient,
    provider_name: str,
    api_key: str | None = None,
) -> None
def __init__(api_key: str | None = None) -> None

Create a new Hugging Face provider.

Returns

None

Parameters

base_url : str | None Default: None

The base url for the Hugging Face requests.

api_key : str | None Default: None

The API key to use for authentication, if not provided, the HF_TOKEN environment variable will be used if available.

hf_client : AsyncInferenceClient | None Default: None

An existing AsyncInferenceClient to use. If not provided, a new instance will be created.

http_client : AsyncClient | None Default: None

(currently ignored) An existing httpx.AsyncClient to use for making HTTP requests.

provider_name : str | None Default: None

Name of the provider to use for inference. Available providers can be found in the HF Inference Providers documentation. Defaults to "auto", which selects the first provider available for the model, sorted by the user's order at https://hf.co/settings/inference-providers. If base_url is passed, provider_name is not used.

MoonshotAIProvider

Bases: Provider[AsyncOpenAI]

Provider for MoonshotAI platform (Kimi models).

OllamaProvider

Bases: Provider[AsyncOpenAI]

Provider for local or remote Ollama API.

Methods

__init__
def __init__(
    base_url: str | None = None,
    api_key: str | None = None,
    openai_client: AsyncOpenAI | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Create a new Ollama provider.

Returns

None

Parameters

base_url : str | None Default: None

The base url for the Ollama requests. If not provided, the OLLAMA_BASE_URL environment variable will be used if available.

api_key : str | None Default: None

The API key to use for authentication, if not provided, the OLLAMA_API_KEY environment variable will be used if available.

openai_client : AsyncOpenAI | None Default: None

An existing AsyncOpenAI client to use. If provided, base_url, api_key, and http_client must be None.

http_client : httpx.AsyncClient | None Default: None

An existing httpx.AsyncClient to use for making HTTP requests.

LiteLLMProvider

Bases: Provider[AsyncOpenAI]

Provider for LiteLLM API.

Methods

__init__
def __init__(api_key: str | None = None, api_base: str | None = None) -> None
def __init__(
    api_key: str | None = None,
    api_base: str | None = None,
    http_client: AsyncHTTPClient,
) -> None
def __init__(openai_client: AsyncOpenAI) -> None

Initialize a LiteLLM provider.

Returns

None

Parameters

api_key : str | None Default: None

API key for the model provider. If None, LiteLLM will try to get it from environment variables.

api_base : str | None Default: None

Base URL for the model provider. Use this for custom endpoints or self-hosted models.

openai_client : AsyncOpenAI | None Default: None

Pre-configured OpenAI client. If provided, other parameters are ignored.

http_client : AsyncHTTPClient | None Default: None

Custom HTTP client to use.

NebiusProvider

Bases: Provider[AsyncOpenAI]

Provider for Nebius AI Studio API.

OVHcloudProvider

Bases: Provider[AsyncOpenAI]

Provider for OVHcloud AI Endpoints.

AlibabaProvider

Bases: Provider[AsyncOpenAI]

Provider for Alibaba Cloud Model Studio (DashScope) OpenAI-compatible API.

SambaNovaProvider

Bases: Provider[AsyncOpenAI]

Provider for SambaNova AI models.

SambaNova uses an OpenAI-compatible API.

Attributes

name

Return the provider name.

Type: str

base_url

Return the base URL.

Type: str

client

Return the AsyncOpenAI client.

Type: AsyncOpenAI

Methods

model_profile

@staticmethod

def model_profile(model_name: str) -> ModelProfile | None

Get model profile for SambaNova models.

SambaNova serves models from multiple families including Meta Llama, DeepSeek, Qwen, and Mistral. Model profiles are matched based on model name prefixes.

Returns

ModelProfile | None

__init__
def __init__(
    api_key: str | None = None,
    base_url: str | None = None,
    openai_client: AsyncOpenAI | None = None,
    http_client: httpx.AsyncClient | None = None,
) -> None

Initialize SambaNova provider.

Returns

None

Parameters

api_key : str | None Default: None

SambaNova API key. If not provided, reads from SAMBANOVA_API_KEY env var.

base_url : str | None Default: None

Custom API base URL. Defaults to https://api.sambanova.ai/v1

openai_client : AsyncOpenAI | None Default: None

Optional pre-configured OpenAI client

http_client : httpx.AsyncClient | None Default: None

Optional custom httpx.AsyncClient for making HTTP requests

Raises
  • UserError — If API key is not provided and SAMBANOVA_API_KEY env var is not set