pydantic_ai.models.outlines
For details on how to set up this model, see model configuration for Outlines.
OutlinesModel

Bases: Model

A model that relies on the Outlines library to run non-API-based models.
def __init__(
model: OutlinesBaseModel | OutlinesAsyncBaseModel,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Initialize an Outlines model.
model : OutlinesBaseModel | OutlinesAsyncBaseModel
The Outlines model to use.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
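A minimal sketch of constructing the model directly (the Outlines v1-style `outlines.from_transformers` call, the checkpoint name, and the `Agent` wiring are illustrative assumptions, not part of this API):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import outlines

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# Build an Outlines model first, then hand it to OutlinesModel.
hf_model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct')
hf_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct')
outlines_model = outlines.from_transformers(hf_model, hf_tokenizer)

model = OutlinesModel(outlines_model)
agent = Agent(model)
print(agent.run_sync('What is the capital of France?').output)
```

In practice the `from_*` classmethods below construct the Outlines model for you, so you rarely need this constructor directly.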
@classmethod
def from_transformers(
cls,
hf_model: transformers.modeling_utils.PreTrainedModel,
hf_tokenizer_or_processor: transformers.PreTrainedTokenizer | transformers.processing_utils.ProcessorMixin,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Create an Outlines model from a Hugging Face model and tokenizer.
hf_model : transformers.modeling_utils.PreTrainedModel
The Hugging Face PreTrainedModel, or any model compatible with the
transformers API.
hf_tokenizer_or_processor : transformers.PreTrainedTokenizer | transformers.processing_utils.ProcessorMixin
Either a Hugging Face PreTrainedTokenizer (or any tokenizer compatible
with the transformers API), or a Hugging Face processor inheriting from ProcessorMixin. If a
tokenizer is provided, a regular text model is created; if a processor is provided, a
multimodal model is created.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
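A minimal usage sketch (the checkpoint name and the `Agent` wiring are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# Load any transformers-compatible causal LM and its tokenizer.
hf_model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct')
hf_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct')

model = OutlinesModel.from_transformers(hf_model, hf_tokenizer)
agent = Agent(model)
print(agent.run_sync('Name a prime number below 10.').output)
```

Passing a processor (e.g. from `AutoProcessor.from_pretrained`) instead of a tokenizer would yield a multimodal model, per the parameter description above.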
@classmethod
def from_llamacpp(
cls,
llama_model: llama_cpp.Llama,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Create an Outlines model from a LlamaCpp model.
llama_model : llama_cpp.Llama
The llama_cpp.Llama model to use.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
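A minimal usage sketch (the GGUF repo and filename are illustrative assumptions):

```python
from llama_cpp import Llama

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# Download a GGUF checkpoint from the Hugging Face Hub.
llama_model = Llama.from_pretrained(
    repo_id='TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF',
    filename='tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf',
)

model = OutlinesModel.from_llamacpp(llama_model)
agent = Agent(model)
print(agent.run_sync('Say hello.').output)
```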
@classmethod
def from_mlxlm(
cls,
mlx_model: nn.Module,
mlx_tokenizer: transformers.PreTrainedTokenizer,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Create an Outlines model from an MLXLM model.
mlx_model : nn.Module
The nn.Module model to use.
mlx_tokenizer : transformers.PreTrainedTokenizer
The PreTrainedTokenizer to use.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
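A minimal usage sketch (the quantized checkpoint name is an illustrative assumption; `mlx_lm.load` returns a model/tokenizer pair, and MLX requires Apple silicon):

```python
from mlx_lm import load

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# load() returns the nn.Module and its tokenizer in one call.
mlx_model, mlx_tokenizer = load('mlx-community/Qwen2.5-0.5B-Instruct-4bit')

model = OutlinesModel.from_mlxlm(mlx_model, mlx_tokenizer)
agent = Agent(model)
print(agent.run_sync('Say hello.').output)
```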
@classmethod
def from_sglang(
cls,
base_url: str,
api_key: str | None = None,
model_name: str | None = None,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Create an Outlines model to send requests to an SGLang server.
base_url : str
The URL of the SGLang server.
api_key : str | None Default: None
The API key to use for authenticating requests to the SGLang server.
model_name : str | None Default: None
The name of the model to use.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
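A minimal usage sketch (the server URL and model name are illustrative assumptions; point them at your own SGLang deployment):

```python
from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# Requests are sent to a running SGLang server rather than a local model.
model = OutlinesModel.from_sglang(
    base_url='http://localhost:30000/v1',
    model_name='meta-llama/Llama-3.1-8B-Instruct',
)
agent = Agent(model)
print(agent.run_sync('Say hello.').output)
```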
@classmethod
def from_vllm_offline(
cls,
vllm_model: Any,
provider: Literal['outlines'] | Provider[OutlinesBaseModel] = 'outlines',
profile: ModelProfileSpec | None = None,
settings: ModelSettings | None = None,
)
Create an Outlines model from a vLLM offline inference model.
vllm_model : Any
The vllm.LLM local model to use.
provider : Literal['outlines'] | Provider[OutlinesBaseModel] Default: 'outlines'
The provider to use for OutlinesModel. Can be either the string 'outlines' or an
instance of Provider[OutlinesBaseModel]. If not provided, a new provider will be created using the other parameters.
profile : ModelProfileSpec | None Default: None
The model profile to use. Defaults to a profile picked by the provider.
settings : ModelSettings | None Default: None
Default model settings for this model instance.
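A minimal usage sketch (the checkpoint name is an illustrative assumption; any model vLLM can load for offline inference should work):

```python
from vllm import LLM

from pydantic_ai import Agent
from pydantic_ai.models.outlines import OutlinesModel

# vLLM runs the model locally ("offline"), without an API server.
vllm_model = LLM(model='Qwen/Qwen2.5-0.5B-Instruct')

model = OutlinesModel.from_vllm_offline(vllm_model)
agent = Agent(model)
print(agent.run_sync('Say hello.').output)
```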
def format_inference_kwargs(model_settings: ModelSettings | None) -> dict[str, Any]
Format the model settings for the inference kwargs.
OutlinesStreamedResponse

Bases: StreamedResponse

Implementation of StreamedResponse for Outlines models.

model_name
Get the model name of the response.
Type: str

provider_name
Get the provider name.
Type: str

provider_url
Get the provider base URL.

timestamp
Get the timestamp of the response.
Type: datetime