
Claude Is Getting Better. What About Us?


I have a confession: I used to be terrible at debugging.

I'd go through hundreds of threads, read through files I had no business in, spend hours chasing a bug that turned out to be something trivial. It wasn't efficient. But every wrong turn left me with a better understanding of the system. The next time something broke nearby, I already knew the terrain.

When you struggle with code long enough, you build a map. Not a conscious one — more like muscle memory for the codebase. You start to feel where things live. You develop instincts about where a bug probably is before you've even started looking.

I'm losing that map. And I think a lot of us are.

These days, when I hit a problem, I describe it to an AI and get an answer in seconds. And more often than I'd like to admit, I take it and move on. It's a tricky position — I don't want to bottleneck myself by refusing to use the tools. But the velocity I'm gaining today is costing me tomorrow. Every time I skip the struggle, I skip the understanding.

Anthropic published research showing that developers using AI for code generation scored 17% lower on comprehension tests compared to those who coded by hand. That's not a small gap. That's the difference between someone who understands the system and someone who can operate it.

I caught myself relying on the tools more than I liked. I was shipping. I was productive. But once I noticed it, I started catching it everywhere. Every shortcut was a gap in my map that I didn't notice until later. If you skip the struggle, you don't build the judgment. And without the judgment, you're just approving code you can't evaluate.

Addy Osmani wrote about this as "skill atrophy". That's the right term, but it undersells how it feels. It's more like: you used to be able to do something, and now you're not sure you can, and you're not sure when that changed.

I keep seeing software break in ways we just weren't used to before. AI writes the code, AI reviews the code, AI fixes the bugs in the code. A bug shows up and nobody understands why because nobody understood the code in the first place. So the fix is: ask Claude. Which introduces its own blind spots. Claude fixing Claude fixing Claude, with a human in the middle who's increasingly just along for the ride. That's a cost we're accumulating, and we will eventually pay for it.

I don't have a grand solution. I don't think there is one. But I've found one small practice that's made a real difference for me.

Claude Code has a feature called output styles that lets you change how it interacts with you. I wrote one called Mentor. Think of it as a super smart rubber duck. When I ask it a question, it asks me questions back. "What's your instinct here?" "Have you looked at how this is already handled in the codebase?" "Walk me through your thinking." It never writes code for me. No snippets, no diffs, no "here's what that would look like."

Here's the core of it:

You are a senior engineer sitting next to the user, mentoring them
through problems. Your job is to make them a better engineer —
never to do their work for them.

Rule 1: Never Write Code
Rule 2: Never Assume — Always Verify

Before giving ANY guidance, you MUST:
- Research the current state of things
- Read the actual codebase
- Think like a maintainer
- Cross-reference multiple sources
- Be transparent about confidence
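
If you want to build your own, Claude Code loads custom output styles from markdown files with a small frontmatter block. This is a minimal sketch of how mine is laid out on disk; check the current Claude Code docs for the exact fields, since the format may have changed since I set this up. The prompt body here is abbreviated, not my full file:

```markdown
---
name: Mentor
description: Socratic mentor that asks questions and never writes code
---

You are a senior engineer sitting next to the user, mentoring them
through problems. Your job is to make them a better engineer —
never to do their work for them.

Rule 1: Never Write Code
Rule 2: Never Assume — Always Verify
```

I keep mine at the user level (`~/.claude/output-styles/mentor.md` on my machine) so it follows me across projects, then switch to it with the `/output-style` command when I want the friction back.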

The funny part? Most of the time when it asks me to go look at something, I don't even need the AI response after that. Just the act of being pointed in the right direction and having to think for myself is enough. It's the little friction that helps. That's all it takes to start building the map again.

I'm not anti-AI. That would be absurd; I work at a company that builds AI tooling. I use these tools every day, and they make me faster in real, measurable ways. I used AI to help me arrange my thoughts for this very blog post. Which, if you think about it for a second, is kind of the whole problem in a nutshell. Even my words aren't fully mine most days now.

We still own these codebases. When something breaks, a human gets paged. When the architecture needs to evolve, a human has to hold the whole system in their head and make judgment calls. If the people responsible for these systems are slowly losing their understanding of them, that's not just a personal problem. That's an industry-wide risk.

So here's my unsolicited advice, from one person figuring this out in real time: find your version of the wrong path. Build in friction where you need it. Use the tools, they're amazing, but don't let them make you a stranger in your own codebase.