*The new skill is the oscillation itself.*
*Published March 8, 2026*
---
I have spent most of my career building the kind of unconscious expertise that Josh Waitzkin describes in *The Art of Learning*: the final stage of a climb where skill drops below awareness and the expert mind just acts. Learned syntax. Chunked patterns. Built pathways until I could feel where a bug lived before I could explain why.
Waitzkin also writes about beginner's mind: the openness that lets you see what is actually in front of you rather than what you expect to see. In his framework, you earn mastery through beginner's mind, then leave it behind.
AI broke that sequence. Now you need both at the same time.
## The Cost of Refusing Beginner's Mind
On December 26, 2025, Andrej Karpathy wrote that he'd never felt so behind as a programmer. "The profession is being dramatically refactored." If one of AI's architects needs beginner's mind again, nobody gets to skip it.
The [METR study](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/) put a number on the cost. Experienced open-source developers took **19% longer** to complete tasks on their own repositories when AI tools were allowed, despite forecasting a 24% speedup. The study authors point to integration overhead, context-switching, and AI's lack of codebase-specific knowledge. But there is another reading: at least some of that 19% is what it costs when expertise overrides openness. These developers knew their codebases deeply. That knowledge made it harder, not easier, to let a tool take a different path.
Novices do not pay this tax. They have no defaults to override. But they pay a different one: they cannot recognize when plausible-looking AI code introduces a subtle bug, because they have never debugged that class of bug manually. Mastery without beginner's mind is rigid. Beginner's mind without mastery is dangerous.
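To make the novice's tax concrete, here is a hedged, hypothetical sketch of the kind of plausible-looking code an AI might draft. The function name and scenario are invented for illustration; the bug is Python's shared mutable default argument, which reads as correct on a quick pass but leaks state across calls:

```python
# Hypothetical AI-drafted helper. It reads cleanly and passes a quick review.
def append_tag(tag, tags=[]):
    """Append a tag to a list of tags, creating the list if none is given."""
    # Subtle bug: the default list is created ONCE, at function definition,
    # and shared across every call that omits the `tags` argument.
    tags.append(tag)
    return tags


first = append_tag("draft")     # ['draft'] -- looks fine
second = append_tag("review")   # ['draft', 'review'] -- state leaked in
print(first, second)
```

A developer who has debugged this class of bug by hand spots the `tags=[]` immediately; a novice sees working-looking code and ships it. The idiomatic fix is a `None` default with the list created inside the function body.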
## Not Just Code Review
This might sound like code review, but the roles never stabilize.
When you review a junior developer's code, you are the expert and they are the learner. When you work with AI, you switch roles constantly. In one moment you defer to the AI because it knows a library better than you do. In the next you override it because it has generated a plausible solution to the wrong problem. Five minutes later you realize your override was wrong and the AI's approach was better after all.
The oscillation is faster than any review cycle. It happens within a single task, sometimes within a single function. And it requires something code review does not: the willingness to be wrong about your own expertise in real time, then right again, then wrong again, without losing your footing.
## What Remains
Craig Sturgis, a CTO who runs an AI agent army as an individual contributor, asked the question directly. "What *am* I good for?" Claude writes better code than he does most of the time. It debugs better. It designs better. His answer: "I'm a way better product manager than Claude. I'm really good at figuring out what the right things are to do next."
I do this every day now. An AI drafts code. I read it with pattern recognition built over years of debugging and ask: does this actually solve the problem, or does it just look like it does? Sometimes I catch something the AI missed, the kind of thing you only recognize if you have debugged it by hand before. Sometimes I realize my instinct is wrong and the AI's approach is better. The skill that remains is not execution. It is the discipline of switching between trust and skepticism fast enough to keep up.
I miss the feeling of settled mastery, the quiet confidence of knowing your tools so deeply they become invisible. But the old mastery was a destination you reached and defended. The new mastery is the oscillation itself, and you do not get to stop.
---
**Sources cited:**
- Waitzkin, Josh. *The Art of Learning*. Free Press, 2007. Mastery framework and beginner's mind.
- Karpathy, Andrej. X post, December 26, 2025. "I've never felt this much behind as a programmer."
- Sturgis, Craig. LinkedIn post, February 2026. "What am I good for?" self-audit.
- METR study: Becker, Rush, Barnes, Rein. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." arXiv:2507.09089v2, July 2025. 16 developers, 246 tasks on their own repos, 19% increase in completion time with AI tools allowed.