
Minimize LLM Hallucinations with Pydantic Validators

Jason Liu

In our previous post we introduced Pydantic as a tool to steer language models.

This post, however, shifts focus to how we can leverage Pydantic's validation mechanism to minimize hallucinations. We'll explain how validation works and explore how incorporating context into validators can enrich language model results.

The intention is that by the end of this article, you'll have seen several examples of how we can use Pydantic to minimize hallucinations and gain more confidence in the model's output.

But before we do that, let's go over some validation basics.

Validators are functions that take a value, check a property, raise an error if the check fails, and return a (possibly transformed) value. They can be used to enforce constraints on model inputs and outputs.

def validation_function(value):
    if condition(value):
        # reject values that fail the check
        raise ValueError("Value is not valid")
    # otherwise return the value, optionally transformed
    return mutation(value)

For instance, consider validating a name field. Here’s how you can enforce a space in the name using Annotated and AfterValidator:

from typing_extensions import Annotated
from pydantic import BaseModel, ValidationError, AfterValidator

def name_must_contain_space(v: str) -> str:
    if " " not in v:
        raise ValueError("Name must contain a space.")
    return v.lower()

class UserDetail(BaseModel):
    age: int
    name: Annotated[str, AfterValidator(name_must_contain_space)] #(1)!

person = UserDetail.model_validate({"age": 24, "name": "Jason"}) #(2)!
  1. AfterValidator applies a custom validation via Annotated.
  2. The absence of a space in 'Jason' triggers a validation error.
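
If you want to see the failure yourself, wrap the call in a try/except. The exact wording varies slightly between Pydantic versions, but the output looks roughly like this:

try:
    UserDetail.model_validate({"age": 24, "name": "Jason"})
except ValidationError as e:
    print(e)
    # 1 validation error for UserDetail
    # name
    #   Value error, Name must contain a space. [type=value_error, input_value='Jason', input_type=str]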

Validators can also enforce context-specific constraints, for instance checking that a name appears in a known list of names and raising an error if it doesn't. Enhancing validators with ValidationInfo adds more nuanced control, since the context passed at validation time becomes available inside the validator. For example, removing a dynamic set of stopwords from a text requires us to pass in some context:

from pydantic import ValidationInfo

def remove_stopwords(v: str, info: ValidationInfo):
    context = info.context
    if context:
        stopwords = context.get('stopwords', set())
        v = ' '.join(w for w in v.split() if w.lower() not in stopwords)
    return v

class Response(BaseModel):
    text: Annotated[str, AfterValidator(remove_stopwords)]

Passing dynamic context to the validator:

data = {'text': 'This is an example document'}
print(Response.model_validate(data))  # Without context #(1)!
#> text='This is an example document'
print(Response.model_validate(
    data, context={
        'stopwords': ['this', 'is', 'an'] #(2)!
    }))
#> text='example document'
  1. Without context, the validator does nothing.
  2. Passing context removes stopwords from the text.

Now let's revisit the instructor package from our previous article, which employs Pydantic to control language model output. Three pieces are relevant here; a minimal patched call is sketched after this list.

  1. response_model: already seen in the previous article, tells the patched client which Pydantic model to parse the completion into.
  2. validation_context: similar to ValidationInfo, provides context to validators that can be used to augment the validation process.
  3. llm_validator: a validator that uses an LLM to validate the output.
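
The sketch below shows a minimal example of such a patched call. The model name and prompt are illustrative; response_model and validation_context are the keyword arguments instructor adds to the patched client.

import instructor

from openai import OpenAI
from pydantic import BaseModel

client = instructor.patch(OpenAI())

class UserDetail(BaseModel):
    age: int
    name: str

user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,  # parse the completion into UserDetail
    messages=[
        {"role": "user", "content": "Extract: Jason is 25 years old"},
    ],
    # validation_context={...}  # optional dict made available to validators via ValidationInfo
)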

Some rules are easier to express in natural language than in code. For instance, consider the rule 'don't say objectionable things'. It is difficult to express as a validator function, but easy to state in plain English, so we can ask an LLM to enforce it at validation time.

Consider an example where we want light moderation on a question-answering model: the answer must not contain objectionable content. We can use llm_validator to have an LLM check the answer for us.

import instructor

from openai import OpenAI
from instructor import llm_validator
from pydantic import BaseModel, BeforeValidator
from typing_extensions import Annotated

client = instructor.patch(OpenAI())

NoEvil = Annotated[
    str,
    BeforeValidator(
        llm_validator("don't say objectionable things", openai_client=client)
    )]

class QuestionAnswer(BaseModel):
    question: str
    answer: NoEvil

QuestionAnswer.model_validate({
    "question": "What is the meaning of life?",
    "answer": "Sex, drugs, and rock'n roll"
})

The above code will fail with the following error:

1 validation error for QuestionAnswer
answer
Assertion failed, The statement promotes objectionable behavior. [type=assertion_error, input_value='The meaning of life is to be evil and steal', input_type=str]
    Details at https://errors.pydantic.dev/2.4/v/assertion_error

Notice how the error message is generated by the LLM.

Many organizations worry about hallucinations in their LLM responses. To address this, we can use validators to ensure that the model's responses are grounded in the context used to generate the prompt.

For instance, consider a question-answering model that provides answers based on a text chunk. To ensure the response is firmly grounded in that chunk, we can ask the model to also return a citation and use a validator with ValidationInfo to verify that the citation actually appears in the chunk.

def citation_exists(v: str, info: ValidationInfo):
    context = info.context
    if context:
        text_chunk = context.get("text_chunk", "")
        if v not in text_chunk:  # the citation must appear verbatim in the chunk
            raise ValueError(f"Citation `{v}` not found in text")
    return v

Citation = Annotated[str, AfterValidator(citation_exists)]

class AnswerWithCitation(BaseModel):
    answer: str
    citation: Citation

Now let's consider an example where we answer a question using a text chunk, and use the validator to check that the response is grounded in it.

AnswerWithCitation.model_validate({
    "answer": "The Capital of France is Paris",
    "citation": "Paris is the capital."
}, context={"text_chunk": "please note that currently, paris now no longer is the capital of france."})
1 validation error for AnswerWithCitation
citation
Citation `Paris is the capital.` not found in text [type=value_error, input_value='Paris is the capital.', input_type=str]
    Details at https://errors.pydantic.dev/2.4/v/value_error

Although the answer in this example was correct, the validator raises an error because the citation does not appear verbatim in the text chunk. This helps us identify and correct the model's 'hallucination', which here we define as incorrectly cited information.
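
For contrast, here is a quick sanity check where the citation is copied verbatim from the chunk, so validation passes. The values are illustrative:

print(AnswerWithCitation.model_validate({
    "answer": "Paris is no longer the capital of France",
    "citation": "paris now no longer is the capital of france"
}, context={"text_chunk": "please note that currently, paris now no longer is the capital of france."}))
#> answer='Paris is no longer the capital of France' citation='paris now no longer is the capital of france'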

Finally, we can ask OpenAI to generate the answer and citation for us, passing the same text chunk as validation_context so the validator can verify the response.

# Example inputs; any question and retrieved passage would work here.
q = "What is the capital of France?"
text_chunk = "please note that currently, paris now no longer is the capital of france."

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=AnswerWithCitation,
    messages=[
        {"role": "user", "content": f"Answer the question `{q}` using the text chunk\n`{text_chunk}`"},
    ],
    validation_context={"text_chunk": text_chunk},
)
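
If validation succeeds, resp is an ordinary AnswerWithCitation instance, so the fields can be read directly (the exact wording of the answer depends on the model):

print(resp.answer)
print(resp.citation)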

By asking the language model to cite its source and then verifying that the citation appears in the text chunk, we ensure that the model's response is grounded in the provided context, minimizing hallucinations and giving us more confidence in the model's output.

The power of these techniques lies in the flexibility and precision with which we can use Pydantic to describe and control outputs.

Whether it's moderating content, avoiding specific topics or competitors, or even ensuring responses are grounded in provided context, Pydantic's BaseModel offers a very natural way to describe the data structure we want, while validation functions and ValidationInfo provide the flexibility to enforce these constraints.