
10x Agent Performance: How MindsDB Switched to Pydantic AI for Programmatic Control and Production-Ready Agents

MindsDB, a company building AI data analysts, faced challenges with their agent implementation using LangChain, experiencing performance issues and a lack of programmatic control over agent behavior. They migrated to Pydantic AI, adopting a philosophy that treats agents as software through structured data validation and explicit state management.

In a month, we were already 10 times better, just because we could really control the state of each of the steps that we have.
Jorge Torres, Co-founder & CEO, MindsDB

Products Used:

Pydantic AI

MindsDB & Pydantic AI: How migrating to Pydantic AI helped achieve 10x better agent performance

MindsDB builds AI data analysts that let users query any database using natural language. Their open-source query engine for AI analytics powers both community and enterprise products, serving customers globally. But their agent implementation wasn't giving them the control they needed. Performance issues were blocking an enterprise deal, and the team couldn't easily manage agent state at each step of their workflows.

"It was 2 years of dealing with various issues in the LangChain agent. It felt like a whack-a-mole game," explains Torres. "We needed a cleaner approach."

The team wanted programmatic control over agent behaviour: the ability to validate outputs at each step and to make deterministic decisions based on structured data. They also needed a codebase that any backend engineer could contribute to, not just AI specialists. MindsDB was looking for a framework that treated agents as software, not magic.

MindsDB found that Pydantic AI's philosophy aligned with what they were building internally: programmatic control over agent behaviour through structured data validation.

"We really like the Pydantic AI approach. The more we let the LLM figure things out in loops, the more mistakes it tends to make."

Jorge Torres

Pydantic AI also allows for easier experimentation. Jorge explains: “It is easier for us to learn from one implementation and implement it from scratch again with a different workflow, for example. The more we do it, it looks more programmatic than agentic.” The team tested Pydantic AI on their enterprise product first, building a proof-of-concept over a weekend. Now they are switching their open-source MindsDB product to Pydantic AI as well. “That will introduce a breaking change but it is a great way not only for a better agent but for better everything, it is a cleaner way to build code with less dependencies,” says Jorge.

Within a month of migrating to Pydantic AI, MindsDB transformed how they build and maintain agents:

10x Performance Improvement in One Month: The proof-of-concept immediately showed promise. Within a month of migration, agent performance improved tenfold, enough to close the enterprise deal they were chasing.

Any Backend Engineer Can Contribute: "Before, any change into the agent was like, 'we have to talk to this person, and nobody knows what this person is doing, and no one can touch that. It looked like a black box,'" Torres recalls. "Today, anyone in the backend team can touch (the code) and build improvements."

Cleaner Codebase: Removing LangChain eliminated gigabytes of dependencies. The new implementation follows standard Python patterns that any backend developer can read and understand. "You can really take the full power of being a developer, and then apply that to these things that now understand structured data," Torres notes.

Hours Instead of Weeks: "Right now we can do internal experiments that can take hours, as opposed to weeks of work." The team credits this to Pydantic AI's straightforward API and strong LLM coding assistant support: "Even as young as Pydantic AI is, the LLMs that know how to code have a very easy time understanding what you want to do with Pydantic AI, as opposed to a different framework."

The migration to Pydantic AI fundamentally changed how MindsDB approaches agent development. Rather than treating agents as opaque systems where LLMs handle control flow, the team now builds them like any other software component—with explicit state management, validated inputs and outputs, and predictable execution paths. The shift enabled their engineers to apply familiar development patterns to agent code, making it maintainable, testable, and iteratively improvable. This approach brought several concrete improvements to their agent architecture:

  • Type-safe state management: Each agent step produces validated Pydantic objects, enabling deterministic decision-making between LLM calls
  • Familiar development patterns: Developers apply standard software engineering practices, such as testing, code review, and refactoring, to agent code
  • Composable architecture: Individual components can be rewritten and improved without refactoring other parts of the system
  • Programmatic workflows: The team can programmatically decide what comes next in the workflow, rather than relying on unpredictable LLM reasoning
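The pattern behind the first and last bullets can be sketched with plain Pydantic: each step's raw LLM output is parsed into a validated model, and ordinary Python code, not the LLM, decides what happens next. The `QueryPlan` model and `route` function below are hypothetical illustrations of the approach, not MindsDB's actual code.

```python
from pydantic import BaseModel, Field, ValidationError


class QueryPlan(BaseModel):
    """Validated output of a hypothetical SQL-planning step."""
    sql: str = Field(min_length=1)
    tables: list[str]
    needs_clarification: bool = False


def route(raw_llm_output: str) -> str:
    """Validate the step's output, then pick the next step deterministically."""
    try:
        plan = QueryPlan.model_validate_json(raw_llm_output)
    except ValidationError:
        return "retry_planning"   # malformed output -> deterministic retry
    if plan.needs_clarification:
        return "ask_user"         # structured flag -> ask a follow-up question
    return "execute_sql"          # valid plan -> run the query


# A well-formed step output goes straight to execution:
good = '{"sql": "SELECT * FROM sales", "tables": ["sales"]}'
print(route(good))        # -> execute_sql

# A malformed one is caught by validation, not left for the LLM to notice:
print(route("not json"))  # -> retry_planning
```

Because every branch is plain Python over typed data, the control flow is testable and debuggable like any other backend code, which is the "agents as software" idea in practice.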

For teams considering Pydantic AI:

  • Start with a proof-of-concept: MindsDB tested on their enterprise product over a weekend before committing
  • Embrace programmatic control: Reduce LLM decision-making loops to minimise errors
  • Leverage existing skills: Standard backend engineers can build and maintain production agents; no need for “walk-on-water engineers”
  • Iterate fast: Type safety and standard software development patterns enable rapid experimentation

Want to build production agents with type safety and engineering best practices? Get started with Pydantic AI.