## Core Framework
**Managing multiple AI coding agents simultaneously requires the "piggy farmer" mindset: patience, accountability, and realistic expectations.** Steve Yegge's real-world framework, drawn from months of running 5 AI agents, shows that agentic coding is not a panacea but a journey, one that demands orchestration skills fundamentally different from traditional programming.
## The "Piggy Farmer" Metaphor
From Steve Yegge's Twitter thread (February 2026):
> "My piggies are the best. I love Amp. I love coding this way. I'm working via deep, ongoing conversations, keeping my piggies on track and accountable."
**The Metaphor Decoded**:
- **Piggies** = Individual AI coding agents (Amp, Claude Code, etc.)
- **Farmer** = Developer orchestrating multiple agents
- **Farming** = Patient management through ongoing conversations and accountability
- **Not**: Micromanaging every action
- **But**: Setting direction, checking progress, course-correcting
## Real-World Setup
Yegge's actual configuration (running for "a couple months"):
1. **Agent 1**: Working on Emacs
2. **Agent 2**: Building Node ("little piggy")
3. **Agent 3**: Porting old tests ("little piggy")
4. **Agent 4**: Code reviews
5. **Agent 5**: Porting Ruby to Kotlin ("Piggy Five")
**Management approach**: "Deep, ongoing conversations, keeping my piggies on track and accountable."
## The Three Essential Principles
### 1. Patience
> "Mostly, though, it's patience. The only way you can be successful with agentic coding is through patience, persistence, and low expectations."
**Why patience is critical**:
- Agents make mistakes frequently
- Learning agent capabilities takes time
- Integration requires multiple iterations
- Debugging agent outputs is slow
- Economic costs accumulate ($100s/week)
**Anti-pattern**: Expecting agents to work perfectly from day one
### 2. Persistence
**The Journey Framing**:
> "It was not easy to learn how to work like this. It has been a journey full of traps and surprises. But it has paid off in droves."
**What persistence means**:
- Continuing despite frustrating failures
- Learning each agent's strengths and failure modes
- Building institutional knowledge about orchestration
- Not giving up when agents produce garbage
- Treating it as skill development, not tool adoption
### 3. Low Expectations
> "Coding agents aren't a panacea. They can't even be trusted. You have to be very patient with them."
**Reality check**:
- ❌ "Agents will solve everything automatically"
- ✅ "Agents are unreliable assistants that occasionally produce gold"
- ❌ "Set and forget"
- ✅ "Ongoing supervision and course-correction"
- ❌ "They understand context perfectly"
- ✅ "They misunderstand frequently and require clarification"
## Economic Reality
> "At least, it has for Anthropic, who bilk me, sorry I mean 'bill me', for a few hundred bucks a week."
**Cost structure**:
- Multiple agents running simultaneously
- Hundreds of dollars weekly in API costs
- Significant investment for individual developers
- ROI comes from productivity gains over months, not days
**Implication**: Multi-agent orchestration is expensive. You're paying for parallelization and exploration, not perfect execution.
## The "Secret Sauce" Myth
Yegge addresses the common question about his approach:
> "There are naysayers. I can share with you the secret sauce they are all missing. Is it money, you ask? No, sure. None of us have enough money for this shit."
**The real secret**: It's not money. It's not special techniques. It's **patience, persistence, and low expectations**.
## Management Approach: "Deep, Ongoing Conversations"
**Not**: Fire-and-forget task assignment
**But**: Continuous dialogue and accountability
### Conversation Pattern
1. **Assign task** to specific agent
2. **Check progress** regularly
3. **Course-correct** when agent misunderstands
4. **Keep accountable** - agents need reminders
5. **Manage expectations** - celebrate small wins
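The loop above can be sketched as code. This is a minimal illustration, not a real API: the `Agent` class and its methods are hypothetical stand-ins for however you talk to a session of Amp, Claude Code, or similar.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a real agent session (Amp, Claude Code, etc.).
@dataclass
class Agent:
    name: str
    task: str = ""
    status: str = "idle"              # idle | working | confused | done
    log: list = field(default_factory=list)

    def assign(self, task: str) -> None:
        """Step 1: assign a task to this specific agent."""
        self.task, self.status = task, "working"
        self.log.append(f"assigned: {task}")

    def check_in(self) -> str:
        """Step 2: check progress; in reality, read the conversation so far."""
        self.log.append("checked in")
        return self.status

    def course_correct(self, note: str) -> None:
        """Step 3: redirect an agent that misunderstood the task."""
        self.log.append(f"correction: {note}")
        self.status = "working"

def farm_cycle(agents: list) -> None:
    """One pass of the farmer's loop: check each piggy, nudge the confused ones."""
    for agent in agents:
        if agent.check_in() == "confused":
            agent.course_correct("re-read the task description; take smaller steps")

# Usage: two piggies, one of which has wandered off track.
emacs = Agent("piggy-1")
emacs.assign("work on Emacs")
tests = Agent("piggy-2")
tests.assign("port old tests")
tests.status = "confused"
farm_cycle([emacs, tests])
```

The point of the sketch is the shape, not the code: the farmer runs this cycle as an ongoing conversation rather than a fire-and-forget dispatch.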
### Key Insight
The "piggy farmer" doesn't micromanage individual actions but maintains **ongoing relationships** with each agent through conversation.
## The Payoff: "Paid Off in Droves"
> "But it has paid off in droves. At least, it has for Anthropic, who bilk me..."
**What "paid off" means**:
- Significant productivity increase (despite costs)
- Parallel work on 5 different codebases/tasks
- Learning journey that builds orchestration skills
- New way of working that feels superior despite challenges
**The joke**: The most immediate beneficiary is Anthropic (collecting API fees), but the developer gains productivity that justifies the cost.
## Comparison to Other Frameworks
### vs. Factory Farming Code
- **Factory Farming**: Industrial-scale code production
- **Piggy Farming**: Managing specific agents with individual personalities/capabilities
- **Connection**: Both involve treating AI code generation as an agricultural-scale operation
### vs. Pair Programming
- **Pair Programming**: Synchronous, single agent, interactive
- **Piggy Farming**: Asynchronous, multiple agents, supervisory
- **Shift**: From collaborative partner to orchestrator of workers
### vs. The Merge Wall
- **Merge Wall**: Bottleneck of integrating parallel agent outputs
- **Piggy Farming**: Addresses the management side (keeping agents productive)
- **Connection**: Multiple parallel agents create merge challenges
## Cross-Domain Applications
### Software Development Teams
- **Pattern**: Managing multiple junior developers
- **Parallel**: Each "piggy" like a junior developer needing guidance
- **Skill**: Delegation with accountability, not micromanagement
### Project Management
- **Pattern**: Managing contractors or outsourced teams
- **Parallel**: Agents are like contractors—you don't control their process, only outcomes
- **Skill**: Clear task definition and outcome verification
### Personal Productivity
- **Pattern**: Managing multiple parallel projects
- **Parallel**: Context-switching between different "agents" (projects)
- **Skill**: Maintaining momentum across multiple workstreams
### AI Strategy
- **Pattern**: Multi-model deployment for different tasks
- **Parallel**: Different LLMs excel at different tasks (like different agents)
- **Skill**: Model selection and task routing
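Task routing can be as simple as a lookup table. The route names below are invented for illustration; any real deployment would map task kinds to whatever agents or models it actually runs.

```python
# Hypothetical routing table: which agent gets which kind of work.
ROUTES = {
    "code_review": "reviewer-agent",
    "test_porting": "porter-agent",
    "language_port": "kotlin-agent",
}

def route(task_kind: str, default: str = "general-agent") -> str:
    """Pick an agent for a task kind; fall back to a generalist for unknowns."""
    return ROUTES.get(task_kind, default)
```

Even this trivial mapping captures the skill: deciding up front which agent owns which kind of task, so work never lands on an agent that is weak at it.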
## Practical Implementation
### Starting Small
1. **Two agents first**: Don't jump to 5 immediately
2. **Different tasks**: Ensure agents aren't competing for same codebase
3. **Daily check-ins**: Review each agent's progress daily
4. **Low stakes**: Start with non-critical tasks
### Scaling Up
1. **Add agents gradually**: Once comfortable with 2, add 3rd
2. **Task partitioning**: Clear boundaries prevent conflicts
3. **Conversation rhythm**: Develop check-in cadence for each agent
4. **Cost monitoring**: Track API spending as agents multiply
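Cost monitoring (step 4) deserves more than a mental note once several agents are running. A rough estimator like the one below is enough for a sanity check; the per-hour rate is a made-up placeholder, not real Anthropic pricing.

```python
def weekly_cost(agent_hours: dict, rate_per_hour: float = 2.5) -> dict:
    """Rough spend estimate per agent plus a total, for budget sanity checks.

    rate_per_hour is an assumed flat rate; real API billing is per token.
    """
    per_agent = {name: hours * rate_per_hour for name, hours in agent_hours.items()}
    per_agent["total"] = sum(per_agent.values())
    return per_agent

# Usage: three agents with different weekly runtimes.
spend = weekly_cost({"piggy-1": 40, "piggy-2": 25, "piggy-3": 30})
# Flag when the bill approaches the "few hundred bucks a week" range Yegge describes.
over_budget = spend["total"] > 300
```

Tracking spend per agent, not just in total, also reveals which piggies are earning their keep.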
### Warning Signs
- **Agent conflict**: Multiple agents editing same files
- **Runaway costs**: Bills exceeding productivity value
- **Overwhelming complexity**: Too many agents to track
- **No progress**: Agents spinning without meaningful output
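The first warning sign, agent conflict, is also the easiest to detect mechanically: compare the file sets each agent has touched. A minimal sketch, assuming you can list each agent's modified files (e.g. from its working branch):

```python
from collections import defaultdict

def find_conflicts(edits: dict) -> dict:
    """Map each file touched by more than one agent to the agents touching it.

    `edits` maps agent name -> set of file paths that agent has modified.
    """
    touched = defaultdict(set)
    for agent, files in edits.items():
        for path in files:
            touched[path].add(agent)
    return {path: agents for path, agents in touched.items() if len(agents) > 1}

# Usage: two piggies whose tasks overlap on one file.
conflicts = find_conflicts({
    "piggy-1": {"src/app.py", "src/db.py"},
    "piggy-2": {"src/db.py", "tests/test_db.py"},
})
```

Running a check like this before merging is cheaper than discovering the overlap at the merge wall.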
## The "Piggy Farmer" Mindset
### What It Requires
- **Comfortable with ambiguity**: Agents will surprise you
- **Patient with failure**: Most attempts fail initially
- **Persistent despite costs**: Economic investment required
- **Realistic about limitations**: Agents aren't magic
- **Conversational management style**: Talk to your piggies
### What It Produces
- **Parallelized productivity**: 5 tasks advancing simultaneously
- **Orchestration skill**: New capability beyond traditional coding
- **Cost-effective (eventually)**: ROI positive after learning curve
- **Competitive advantage**: "Piggy farmers" outpace traditional developers
## Related Concepts
- [[Pair Programming to Parallel Delegation Shift]] - Mental model transformation from synchronous to asynchronous AI collaboration
- [[Factory Farming Code]] - Industrial-scale code production paradigm
- [[The Merge Wall]] - Integration bottleneck from parallel agent work
- [[2000-Hour AI Trust Curve]] - Learning to predict agent behavior over time
- [[LLM Mechanical Sympathy]] - Understanding agent capabilities through operation
- [[AI Productivity Paradox]] - Initial productivity drop before gains
- [[Agency Preservation Standard]] - Maintaining human understanding despite delegation
- [[Software Development Methodology]] - Adapting development process for AI agents
## Common Obstacles
### "I can't afford hundreds per week"
**Reality**: Start with fewer agents, use cheaper models, or batch work to reduce costs. The framework scales down—you don't need 5 agents to benefit from the mindset.
### "My agents keep failing"
**Expected**: That's why patience is essential. Failure is the norm, not the exception. The skill is learning which failures to tolerate and which to fix.
### "This seems inefficient"
**Paradox**: It IS inefficient short-term. The efficiency comes from parallel exploration and long-term learning, not immediate perfect execution.
### "How do I know which agent to use for what?"
**Learning curve**: [[2000-Hour AI Trust Curve]]—it takes time to learn agent strengths. Start with obvious task partitioning (backend vs frontend, different languages, etc.).
## Steve Yegge Context
**Who**: Former Amazon and Google engineer, known for influential tech blog posts
**Credibility**: Decades of software engineering experience, early adopter of AI tooling
**Perspective**: Pragmatic skeptic who found success despite initial difficulties
**Message**: This works, but it's hard—don't expect magic
## Source
Extracted from [[Steve Yegge on Using Multiple Coding Agents]]—Twitter thread describing his experience running 5 Amp/Code agents simultaneously for months.
**Screenshots**: Twitter posts showing real-world multi-agent orchestration and the "piggy farmer" metaphor.
## Verification
[Verified] - Content synthesized from Steve Yegge's actual Twitter thread with direct quotes about multi-agent orchestration experience, costs, challenges, and payoffs.