When AI coding tools generate changes at machine speed, the blast radius of a bad change scales with that speed. The safety net that traditionally catches errors — peer review, gradual understanding, implicit team knowledge — is bypassed when code is generated and shipped by agents.

**The pattern:** insert deliberate friction into the deployment pipeline for AI-assisted changes. This trades raw velocity for reliability at the specific point where AI-generated code introduces the most risk.

## Amazon's 2026 Response

After four Sev-1 outages in a single week were traced to "Gen-AI assisted changes," Amazon implemented:

- **Senior sign-off gate**: junior and mid-level engineers can no longer push AI-assisted code without approval from a senior engineer
- **"Controlled friction"** (Dave Treadwell's term): temporary safety practices that introduce friction for changes to ~335 Tier-1 systems
- AWS also endured a 13-hour recovery after its own AI coding tool, asked to make targeted changes, instead deleted and recreated an entire environment

## Why It's Necessary

- AI doesn't understand organizational context, implicit conventions, or system boundaries
- AI tools operate at speeds that outpace human review capacity when no gates exist
- Traditional safeguards (code review, testing) were designed for human-speed change rates
- The cost of skipping documentation and architectural constraints becomes immediate and severe rather than gradual

## The Friction Spectrum

Not all friction is equal. Ranked from light to heavy:

1. **Automated gates** — CI/CD checks that specifically flag AI-generated changes (metadata tags, diff analysis)
2. **Required documentation** — AI-generated PRs must include a rationale or architectural impact statement
3. **Human approval** — senior sign-off on AI-touched code paths (Amazon's approach)
4. **Change moratorium** — temporary freeze on AI-assisted changes to critical systems

## Connection to Coda Hale's Change Boundary Theory

This pattern is a direct application of Hale's 2016 principles to the AI-coding era:

- **[[Automation's Two-Edged Nature]]** — AI coding tools are the Roomba: they never get bored writing boilerplate, but they'll also "diligently paint your floors with the unexpected mess." Amazon's AWS incident (the AI deleted and recreated an environment instead of making targeted changes) is the Roomba problem at production scale.
- **[[Active Operators]]** — Amazon's senior sign-off gate *is* an Active Operator. The senior engineer knows they're crossing a high-risk change boundary, understands the AI-generated change may cause an incident, and is prepared to respond.
- **[[Change Boundaries]]** — AI tools make change boundaries invisible. "Claude, fix this bug" doesn't surface whether the change crosses a Tier-1 system boundary. Amazon's gate re-establishes boundary visibility.
- **[[Coupled Change Boundaries]]** — AI tools compound this anti-pattern. An agent asked to make changes may couple a deployment, a migration, and an infrastructure modification in a single action, obscuring causality when something breaks.

## Sources

- [[Amazon is holding a mandatory meeting about AI breaking its...]] (Lukasz Olejnik, 2026-03-10)
- Business Insider: [Amazon Tightens Code Guardrails After Outages](https://www.businessinsider.com/amazon-tightens-code-controls-after-outages-including-one-ai-2026-3) (2026-03-10)
- Fortune: [Amazon puts humans further back in the loop](https://fortune.com/2026/03/12/amazon-retail-site-outages-ai-agent-inaccurate-advice/) (2026-03-13)
- [[Harness Engineering Is Cybernetics]] — the generation-verification asymmetry
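The lightest rung of the friction spectrum, an automated gate keyed on change metadata, can be sketched as a CI check. This is a minimal illustration only: the `AI-Assisted: true` commit trailer, the reviewer names, and the `gate` function are all assumptions for the sketch, not Amazon's actual tooling.

```python
# Sketch of an automated friction gate: human-authored changes pass
# through, while AI-assisted changes require a senior approver.
# The trailer name and reviewer set below are hypothetical.

AI_TRAILER = "AI-Assisted: true"          # assumed commit-message trailer
SENIOR_REVIEWERS = {"alice", "bob"}       # hypothetical senior engineers


def is_ai_assisted(commit_message: str) -> bool:
    """Detect the (assumed) AI-assistance trailer in a commit message."""
    return AI_TRAILER in commit_message


def gate(commit_message: str, approvers: set) -> bool:
    """Return True if the change may proceed.

    Human-authored changes pass unconditionally; AI-assisted changes
    need at least one approver from the senior set.
    """
    if not is_ai_assisted(commit_message):
        return True
    return bool(approvers & SENIOR_REVIEWERS)


# Example: an AI-assisted change with no senior approval is blocked.
print(gate("fix: typo in README", set()))                            # True
print(gate("feat: retry logic\n\nAI-Assisted: true", {"carol"}))     # False
print(gate("feat: retry logic\n\nAI-Assisted: true", {"alice"}))     # True
```

In practice the trailer would be stamped by the coding agent itself and the check wired into branch protection, so the friction applies only to the risky subset of changes rather than slowing every merge.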