pydantic_ai.models.outlines

Setup

For details on how to set up this model, see model configuration for Outlines.

OutlinesModel

Bases: Model

A model that relies on the Outlines library to run models locally, rather than through a hosted API.

Methods

__init__
def __init__(
    model: OutlinesBaseModel | OutlinesAsyncBaseModel,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Initialize an Outlines model.

Parameters

model : OutlinesBaseModel | OutlinesAsyncBaseModel

The Outlines model instance to wrap.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
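As a sketch of direct construction (assuming `outlines`, `transformers`, and `pydantic-ai` are installed; the model name is a placeholder), you can build an Outlines model yourself and wrap it:

```python
# Minimal sketch: build an Outlines model, then wrap it for Pydantic AI.
# Downloading the weights requires network access and significant disk space.
import outlines
from transformers import AutoModelForCausalLM, AutoTokenizer

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

outlines_model = outlines.from_transformers(
    AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct"),
    AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct"),
)
model = OutlinesModel(outlines_model)

agent = Agent(model)
result = agent.run_sync("What is the capital of France?")
print(result.output)
```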

from_transformers

@classmethod

def from_transformers(
    cls,
    hf_model: transformers.modeling_utils.PreTrainedModel,
    hf_tokenizer_or_processor: transformers.PreTrainedTokenizer | transformers.processing_utils.ProcessorMixin,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Create an Outlines model from a Hugging Face model and tokenizer.

Parameters

hf_model : transformers.modeling_utils.PreTrainedModel

The Hugging Face PreTrainedModel or any model that is compatible with the transformers API.

hf_tokenizer_or_processor : transformers.PreTrainedTokenizer | transformers.processing_utils.ProcessorMixin

Either a Hugging Face PreTrainedTokenizer (or any tokenizer compatible with the transformers API), or a Hugging Face processor inheriting from ProcessorMixin. Passing a tokenizer creates a text-only model; passing a processor creates a multimodal model.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
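A minimal sketch of the classmethod, which loads the Hugging Face model and tokenizer and wraps them in one step (the model name is a placeholder; downloading the weights requires network access):

```python
# Sketch of the from_transformers shortcut.
from transformers import AutoModelForCausalLM, AutoTokenizer

from pydantic_ai.models.outlines import OutlinesModel

model = OutlinesModel.from_transformers(
    AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct"),
    AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct"),
)
```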

from_llamacpp

@classmethod

def from_llamacpp(
    cls,
    llama_model: llama_cpp.Llama,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Create an Outlines model from a LlamaCpp model.

Parameters

llama_model : llama_cpp.Llama

The llama_cpp.Llama model to use.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
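A sketch assuming `llama-cpp-python` is installed; the repository and GGUF filename are placeholders, and loading requires downloading the weights:

```python
# Sketch: load a GGUF model with llama-cpp-python, then wrap it.
from llama_cpp import Llama

from pydantic_ai.models.outlines import OutlinesModel

llama_model = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-0.5B-Instruct-GGUF",       # placeholder repo
    filename="qwen2.5-0.5b-instruct-q4_k_m.gguf",    # placeholder file
)
model = OutlinesModel.from_llamacpp(llama_model)
```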

from_mlxlm

@classmethod

def from_mlxlm(
    cls,
    mlx_model: nn.Module,
    mlx_tokenizer: transformers.PreTrainedTokenizer,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Create an Outlines model from an MLX-LM model.

Parameters

mlx_model : nn.Module

The MLX nn.Module model to use.

mlx_tokenizer : transformers.PreTrainedTokenizer

The PreTrainedTokenizer to use.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
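A sketch assuming `mlx-lm` is installed (Apple Silicon only; the model name is a placeholder). `mlx_lm.load` returns both the model and its tokenizer:

```python
# Sketch: load an MLX model and tokenizer, then wrap them.
import mlx_lm

from pydantic_ai.models.outlines import OutlinesModel

mlx_model, mlx_tokenizer = mlx_lm.load("mlx-community/Qwen2.5-0.5B-Instruct-4bit")
model = OutlinesModel.from_mlxlm(mlx_model, mlx_tokenizer)
```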

from_sglang

@classmethod

def from_sglang(
    cls,
    base_url: str,
    api_key: str | None = None,
    model_name: str | None = None,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Create an Outlines model to send requests to an SGLang server.

Parameters

base_url : str

The base URL of the SGLang server.

api_key : str | None Default: None

The API key to use for authenticating requests to the SGLang server.

model_name : str | None Default: None

The name of the model to use.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
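A sketch that points at an already-running SGLang server; the URL, port, and model name are placeholders for your own deployment:

```python
# Sketch: connect to a running SGLang server (no local model weights needed).
from pydantic_ai.models.outlines import OutlinesModel

model = OutlinesModel.from_sglang(
    base_url="http://localhost:30000/v1",            # placeholder server URL
    model_name="meta-llama/Llama-3.2-1B-Instruct",   # placeholder model name
)
```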

from_vllm_offline

@classmethod

def from_vllm_offline(
    cls,
    vllm_model: Any,
    provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
    profile: ModelProfileSpec | None = None,
    settings: ModelSettings | None = None,
)

Create an Outlines model from a vLLM offline inference model.

Parameters

vllm_model : Any

The vllm.LLM local model to use.

provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'

The provider to use for the OutlinesModel: either the string 'outlines' or an instance of Provider[OutlinesBaseModel].

profile : ModelProfileSpec | None Default: None

The model profile to use. Defaults to a profile picked by the provider.

settings : ModelSettings | None Default: None

Default model settings for this model instance.
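A sketch assuming `vllm` is installed in a GPU environment; the model name is a placeholder:

```python
# Sketch: create a vLLM offline-inference engine and wrap it.
import vllm

from pydantic_ai.models.outlines import OutlinesModel

model = OutlinesModel.from_vllm_offline(
    vllm.LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model name
)
```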

format_inference_kwargs
def format_inference_kwargs(model_settings: ModelSettings | None) -> dict[str, Any]

Format the model settings into keyword arguments for Outlines inference.

Returns

dict[str, Any]

OutlinesStreamedResponse

Bases: StreamedResponse

Implementation of StreamedResponse for Outlines models.

Attributes

model_name

Get the model name of the response.

Type: str

provider_name

Get the provider name.

Type: str

provider_url

Get the provider base URL.

Type: str | None

timestamp

Get the timestamp of the response.

Type: datetime