Polished AI outputs (formatted code, clean documents) trigger the same cognitive shortcut as a printed report versus a rough draft — the brain assigns credibility based on presentation quality, not accuracy. Anthropic's own data from 9,830 conversations quantifies the effect:

- **5.2 percentage points** less likely to catch missing context when output looks finished
- **3.1 percentage points** less likely to question reasoning
- Users who iterated showed **5.6x** more reasoning scrutiny and **4x** more context flagging
- 85.7% of conversations showed iteration; 14.3% treated Claude as an infallible oracle

The counterpattern: treat AI as a **first-draft collaborator**, not an oracle. Iteration is the mechanism that restores critical thinking.

## Implications for Claude Code Usage

This directly validates the [[Scope Misinterpretation as Trust Boundary]] pattern — when Claude produces a multi-file implementation that *looks* complete and professional, users are cognitively biased toward accepting it even when the scope was misinterpreted. The polish suppresses the scrutiny that would catch the overreach.