An active human operator is one who:

1. Is **aware** they are crossing a change boundary
2. **Understands** that doing so may cause an incident
3. Is **prepared to respond** should one occur

This is Coda Hale's core prescription for safe automation in complex systems. The goal isn't to remove humans; it's to design automation that *empowers* them.

**Why it matters**: In complex systems, the human operator is the last line of defense. Automation that removes human intervention also removes human error *correction*. An active operator can detect anomalies and respond to incidents that automated hedging (blue/green deploys, canary deploys) cannot catch.

**Design principle**: Actions that cross high-downside change boundaries should require explicit human participation, not be buried in automated pipelines or command-line muscle memory. Skyliner's design choice: builds deploy to QA automatically, but promoting to production requires a human to push a button in a context-rich UI.

**Source**: Coda Hale, "Risky Business Requires Active Operators," blog.skyliner.io, ~2016.
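The gated-promotion pattern can be sketched in code. This is a minimal illustration with hypothetical names (`deploy_to_qa`, `promote_to_production`, `operator_confirm`), not Skyliner's actual implementation: the low-downside step runs unattended, while the high-downside step refuses to proceed without an explicit human decision made with context in hand.

```python
def deploy_to_qa(build_id: str) -> str:
    """Low-downside boundary: safe to cross automatically."""
    return f"{build_id} deployed to QA"

def promote_to_production(build_id: str, operator_confirm) -> str:
    """High-downside boundary: requires an active operator.

    `operator_confirm` stands in for a context-rich UI step. It receives
    the context a human needs to decide (what is changing, where, and
    what could go wrong) and returns True only on explicit approval.
    """
    context = {
        "build": build_id,
        "target": "production",
        "warning": "crossing this boundary may cause an incident",
    }
    if not operator_confirm(context):
        return f"{build_id} promotion declined by operator"
    return f"{build_id} deployed to production"

# The automated step runs unattended; the risky step does not.
print(deploy_to_qa("build-42"))
print(promote_to_production("build-42", operator_confirm=lambda ctx: False))
```

The design point is that the confirmation hook is a required parameter, so the risky transition cannot be invoked by pipeline code or muscle memory without a human in the loop.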