This post draws practical lessons from How AI Impacts Skill Formation by Judy Hanwen Shen and Alex Tamkin at Anthropic. For a summary of the study, see AI Makes You Faster But Dumber?. For a critical analysis, see The Skill Tax of AI Coding.
The Two-Line Summary
Anthropic’s study found that developers who used AI assistance scored 17% lower on a mastery quiz, but developers who used AI to ask questions and build understanding scored nearly as well as those who coded without AI at all.
The lesson isn’t “don’t use AI.” It’s use AI in ways that keep your brain engaged. Here’s how.
For Individual Developers
1. Adopt the “Conceptual Inquiry” Pattern
The highest-scoring AI users in the study only asked the AI conceptual questions. They never asked it to write code. Instead, they’d ask things like:
- “What does structured concurrency mean in Trio?”
- “Why do I need a nursery to spawn tasks?”
- “What’s the difference between `await` and `async` in this context?”
Then they wrote all the code themselves, using their improved understanding. This group was the second-fastest overall (behind only full delegators) and scored the highest on the quiz.
Practical tip: When learning a new library, framework, or concept, resist the urge to ask AI “write me a function that does X.” Instead, ask “explain how X works” and then write it yourself. You’ll be nearly as fast and you’ll actually retain the knowledge.
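To make the pattern concrete, here is a minimal Trio sketch, a hypothetical example rather than code from the study, of what those conceptual questions are pointing at: a nursery is a scope that owns the tasks it spawns, and the `async with` block doesn’t exit until every child task has finished. That guarantee is what “structured concurrency” means.

```python
# Hypothetical Trio example illustrating a nursery and structured concurrency.
import trio

async def fetch(name: str, delay: float) -> None:
    await trio.sleep(delay)            # stand-in for real async work
    print(f"{name} finished after {delay}s")

async def main() -> None:
    # The nursery owns both tasks; the `async with` block only exits
    # once every task it spawned has completed or been cancelled.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(fetch, "first", 1.0)
        nursery.start_soon(fetch, "second", 0.5)
    print("all tasks complete")

trio.run(main)
```

Writing a snippet like this yourself, after asking the AI why the nursery exists, is exactly what the high-scoring conceptual-inquiry group did.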
2. If You Generate Code, Always Ask “Why?”
The study identified a “Generation-Then-Comprehension” pattern that also preserved learning. These developers did ask AI to generate code, but then immediately followed up with questions like:
- “Explain what this code does line by line.”
- “Why did you use a memory channel here instead of a queue?”
- “What would break if I removed the `async` keyword from this function?”
This small extra step, asking for explanations after getting code, made the difference between a ~24% quiz score and a ~75% quiz score.
Practical tip: Make it a habit. Every time you accept AI-generated code, spend 60 seconds asking the AI to explain the non-obvious parts. Think of it as a mini code review with the AI as the author and you as the reviewer.
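As an illustration, here is a hedged sketch (hypothetical code, not taken from the study) of what that mini review can leave behind: AI-generated Trio code annotated with the explanations you extracted by asking “why?”.

```python
# Hypothetical AI-generated snippet, annotated after a "why?" follow-up.
import trio

async def producer(send_channel: trio.MemorySendChannel) -> None:
    async with send_channel:              # closing the channel signals "no more items"
        for i in range(3):
            await send_channel.send(i)    # blocks until the consumer is ready (backpressure)

async def consumer(receive_channel: trio.MemoryReceiveChannel) -> None:
    async for item in receive_channel:    # loop ends when the send side is closed
        print("got", item)

async def main() -> None:
    # Why a memory channel instead of a plain queue? The paired send/receive
    # ends give explicit closure semantics and play nicely with cancellation.
    send_channel, receive_channel = trio.open_memory_channel(max_buffer_size=0)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(producer, send_channel)
        nursery.start_soon(consumer, receive_channel)

trio.run(main)
```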
3. Don’t Outsource Your Debugging
The biggest learning gap in the study was in debugging skills. The control group (no AI) encountered more errors and had to figure them out themselves. This was frustrating in the moment but built deep understanding of what goes wrong and why.
The AI group had fewer errors because the AI’s code was mostly correct, so they never built those debugging mental models.
Practical tip: When you hit a bug, try to understand it yourself for at least 5-10 minutes before asking AI. Read the error message. Form a hypothesis. Check it. If you do eventually ask AI, ask it to explain the error rather than fix the code. The fix is less valuable than understanding the cause.
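For example (a hypothetical bug, not one from the study), suppose you forget an `await` and hit a confusing `TypeError`. The discipline looks like this:

```python
# Hypothetical buggy snippet: an async function called without `await`.
import trio

async def load_config() -> dict:
    await trio.sleep(0.1)        # stand-in for async I/O
    return {"retries": 3}

async def main() -> None:
    config = load_config()       # BUG: missing `await`; `config` is a coroutine object
    print(config["retries"])     # TypeError: 'coroutine' object is not subscriptable

trio.run(main)

# 1. Read the error: the TypeError complains about a 'coroutine', not about the dict.
# 2. Hypothesis: load_config() was never awaited, so `config` is the coroutine itself.
# 3. Check: add `await` before the call; the error disappears and the value prints.
```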
4. Use the “Struggle Budget”
Not all tasks are learning opportunities, and not all tasks need to be. Here’s a framework for deciding when to delegate to AI vs. when to struggle:
| Scenario | Approach | Why |
|---|---|---|
| Learning something new | Struggle first, AI for concepts only | This is when skill formation happens |
| Repetitive boilerplate | Delegate freely to AI | No learning value in writing the 50th CRUD endpoint |
| Debugging a novel error | Struggle first (10-15 min), then ask AI for explanation | Debugging builds irreplaceable intuition |
| Under time pressure on known patterns | Delegate freely | You already have the skill; optimize for speed |
| Architectural decisions | Use AI as a sounding board, not a decider | AI can present options; you need to understand the trade-offs |
The key principle: delegate the work you’ve already mastered. Struggle through the work that stretches you.
5. Beware Progressive Reliance
One of the study’s low-scoring patterns was “Progressive AI Reliance”: developers who started coding independently but gradually surrendered more and more to the AI as the task got harder. They essentially gave up at the exact point where the most learning would have happened.
Practical tip: Notice when you’re sliding from “asking questions” to “just give me the code.” That transition point is usually where a concept is just beyond your current understanding, which is exactly where learning happens. Stay in the discomfort a little longer.
For Engineering Managers
6. Create Explicit Learning vs. Shipping Modes
The fundamental tension the study reveals is that the fastest approach to completing a task (full AI delegation) is the worst approach for skill development. In a team, this creates a perverse incentive: if you measure velocity, you’re inadvertently rewarding the behavior that atrophies your team’s skills. (The full breakdown of the six interaction patterns is worth reviewing if you haven’t already.)
Practical tip: Explicitly designate some tasks as “learning tasks” where AI usage is either restricted or deliberately structured. For example:
- Onboarding projects: New engineers learn key systems with limited AI assistance
- Stretch assignments: Tasks that push someone into unfamiliar territory should come with an explicit expectation of slower delivery and deeper understanding
- Pairing sessions: Regular pair programming where one person codes without AI while the other reviews
7. Make Code Review a Teaching Tool, Not a Checkbox
If your junior developers are generating code with AI, your code review process becomes the last line of defense for skill development. But this only works if reviews go beyond “LGTM.”
Practical tip: In code reviews for AI-assisted work, ask questions that test understanding:
- “Why did you choose this approach over X?”
- “What happens if this input is null?”
- “Walk me through the error handling here.”
If the developer can’t answer because they delegated to AI without understanding the code, that’s a coaching moment, not a blame moment. Use it.
8. Measure Understanding, Not Just Output
The study showed that AI-assisted developers completed tasks at the same speed but understood significantly less. If you’re only measuring story points, PRs merged, or lines of code, you’re blind to this skill erosion.
Practical tip: Consider periodic, low-stakes knowledge checks:
- Include debugging exercises in team meetings
- Use architecture whiteboarding sessions to assess conceptual understanding
- Track bug introduction rates per engineer (if people don’t understand their code, they’ll introduce more subtle bugs over time)
- Incorporate “explain your PR” sessions where engineers walk through their code verbally
9. Design AI Tool Policies With Nuance
A blanket “always use AI” or “never use AI” policy misses the point. The study shows that the way AI is used matters far more than whether it’s used.
Practical tip: Consider tiered policies:
- Greenfield learning: AI limited to conceptual questions and documentation lookups
- Established systems: AI code generation allowed with mandatory self-review
- Maintenance and boilerplate: Full AI delegation encouraged
- Production debugging: AI explanations allowed, but engineers must identify the root cause themselves before applying fixes
10. Invest in “Learning Mode” AI Tools
The study’s conclusion points to an important product design direction: AI tools should have explicit learning modes. Some already do; Claude Code’s Explanatory and Learning modes and ChatGPT’s Study Mode are early examples.
Practical tip: Evaluate your team’s AI tooling not just on productivity metrics but on whether it supports learning. Does the tool explain its suggestions? Can you configure it to ask Socratic questions instead of giving direct answers? Does it surface relevant documentation alongside generated code?
The Meta-Lesson
Here’s the uncomfortable truth underlying all of this: learning is supposed to be effortful. The friction, the errors, the frustrating moments of being stuck: that’s not a bug in the learning process, it’s the mechanism. The study showed that participants in the control group reported the task as harder but also reported higher enjoyment and learning satisfaction. Struggle and satisfaction are linked. (That said, the study has real limitations worth understanding before drawing sweeping conclusions.)
AI is incredibly good at removing friction. And in many contexts, such as shipping features, writing boilerplate, and automating tedious work, removing friction is exactly what you want. But when the friction is the learning, removing it removes the growth.
The developers who thrived in the study weren’t the ones who avoided AI, and they weren’t the ones who surrendered to it. They were the ones who used it deliberately, as a tool for understanding, not a substitute for thinking.
That’s the skill we all need to develop: not just how to code, but how to think about how we use tools that code for us.
Read the full paper or Anthropic’s blog post for more details.