## Progressive Summary
### Executive Summary (Layer 3)
[Verified] Salvatore Zappalà argues that as LLMs evolve from code generation to autonomous planning (exemplified by Claude Code's "Plan mode"), programmers risk becoming "curators" rather than creators. The philosophical concern: "I don't like the idea of a world where agency, in its true meaning, doesn't belong to humans anymore." His personal standard—refusing to commit code without full comprehension—models how to preserve authentic human agency amid accelerating AI capabilities.
### Key Insights (Layer 2)
- **[Verified] The Planning Threshold**: Claude Code CLI's Plan mode marks a significant capability shift: the AI expands a prompt into a structured plan, asking the user clarifying questions along the way, enabling multi-turn reasoning before any code is written
- **[Verified] "Vibe Coding" Risk**: Andrej Karpathy's term for accepting AI output largely unexamined; left unsupervised, such AI loops compound mistakes and create "runaway complexity" beyond recovery
- **[Verified] Scale Evidence**: Boris Cherny merged 259 PRs (497 commits, 40k lines) entirely written by Claude, demonstrating the velocity gap between human comprehension and AI output
- **[Verified] Agency Preservation Standard**: Maintain git control, never commit code without full comprehension—human agency requires continued understanding, not delegation of judgment
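Zappalà's standard is a workflow, not just a principle. A minimal sketch of that workflow in plain git, using a throwaway repository; the file name `ai_generated.py` and the commit message are hypothetical, not from the article:

```shell
# Sketch: keep git control by reviewing every AI-written diff before it
# enters history. Runs in a disposable repo so nothing real is touched.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "reviewer@example.com"
git config user.name "Reviewer"

# Pretend an AI agent produced this file (hypothetical content).
echo 'print("hello")' > ai_generated.py
git add ai_generated.py

# Step 1: read the full staged diff; nothing reaches history unread.
git diff --cached

# Step 2: commit only once you can explain every line of that diff.
git commit -q -m "Add ai_generated.py (reviewed line by line)"
```

The point of the `git diff --cached` step is that the human, not the agent, is the last gate before a commit exists.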
### Important Context (Layer 1)
- Published January 7, 2025 by Salvatore Zappalà
- References Claude Opus 4.5 (November 2025) as capability threshold
- The article doesn't propose solutions; it identifies the threshold question: can AI acceleration be managed within frameworks that preserve authentic human agency?
### Discoverability Score: 9/10
---
## Related
- [[The Paradox of AI Agency - Augmentation vs Erosion]] - Shared concern about human-as-architect principle; Zappalà's "never commit without comprehension" is a concrete implementation of this paradox resolution
- [[DHH Warning on AI Usage]] - Parallel argument about skill preservation through manual practice; both emphasize that agency requires continued understanding
- [[Human-AGI Collaboration Paradigm]] - Zappalà's personal standard answers the "rubber stamp" concern with a concrete practice: maintain git control and full comprehension
- [[Amplified Intelligence]] - Complementary perspective: human skill as the multiplier in AI output quality supports Zappalà's argument that human agency must remain central
- [[Personal AI Infrastructure]] - Consider adopting Zappalà's standard as a PAI principle: human-centric automation that preserves comprehension
---
**Source**: [Agency must stay with humans](https://readwise.io/reader/shared/01ked2w7kh34frxwdaa7dt2q62) - Salvatore Zappalà, salvozappa.com, January 2025