# When Your MCP Server Breaks: Building a 90% Smaller Replacement
*Published January 24, 2026*
---
## TL;DR
My Readwise MCP server broke because it assumed a NixOS-specific path that doesn't exist on macOS. Instead of debugging the Node.js dependency chain, I built a replacement in Python—646 lines vs 6,749 lines (90% smaller), 8 tools vs 13+ (38% fewer), with ~60% token reduction. The key insight: I already had proven logic in a backfill script. The MCP server became a thin wrapper around battle-tested code. When an integration breaks, sometimes the fastest fix is a focused rebuild using what already works.
---
## What is MCP?
MCP (Model Context Protocol) is Anthropic's open protocol for connecting AI assistants to external data sources. Think of it like a plugin system: instead of copying data into chat, you install "MCP servers" that let Claude read from APIs directly.
In my setup, Claude can fetch highlights from Readwise, create tasks in Basecamp, or query my Obsidian vault—all without leaving the conversation. Each MCP server exposes "tools" (like `readwise_daily_review()` or `basecamp_create_todo()`) that Claude can call.
When an MCP server breaks, Claude loses access to that data source. No daily highlights. No recent tweets. The integration goes dark.
Which is exactly what happened.
---
## The Breakage
```
Failed to reconnect to readwise.
```
That error appeared after a system update. The `readwise-mcp-enhanced` Node.js server was trying to run `npx` at `/run/current-system/sw/bin/npx`—a path that exists on NixOS but not macOS.
This wasn't just a broken tool. It was Claude's connection to 1,044 imported documents and daily reading workflows.
I could have debugged the path resolution. Traced through the dependency chain. Filed an issue upstream. Instead, I asked a different question: **what's the minimum I need to replace this?**
---
## The Inventory
Before rebuilding, I inventoried what I already had:
**Working infrastructure:**
- A Basecamp MCP server in Python using FastMCP (proof that Python MCPs work fine in my setup)
- A `readwise-backfill.py` script with 400+ lines of battle-tested logic for pagination, deduplication, and state management
- 1,044 imported documents in my Obsidian vault
- A state file tracking synced ranges and import timestamps
**What the broken server provided:**
- 13+ tools, many of which I never used
- Daily review fetching
- Document import with various filters
- Highlight searching
**What I actually needed:**
- Import recent tweets (my primary use case)
- Fetch daily review highlights
- Search highlights
- Backfill to specific dates
- State management for deduplication
The gap between "what existed" and "what I needed" was smaller than expected.
---
## The 38-Minute Rebuild
The implementation followed a simple pattern: **wrap proven logic in MCP tools**.
### Step 1: Copy What Works
From `readwise-backfill.py`, I extracted:
- `load_state()` / `write_state()` — State persistence
- `optimize_backfill()` — Synced range optimization to skip already-imported content
- `scan_existing_documents()` — Filesystem deduplication
- `sanitize_filename()` — Safe file naming
- `extract_id_from_url()` — ID extraction for deduplication
These functions had processed 614+ documents without issues. I didn't change them.
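The state helpers are simple enough to sketch. This is an illustration, not the server's exact code: the file name, default keys, and write-then-replace strategy here are assumptions based on the state format described in this post.

```python
import json
from pathlib import Path

# Hypothetical location; the real server resolves its own state path.
STATE_FILE = Path("readwise-state.json")

DEFAULT_STATE = {
    "last_import_timestamp": None,
    "synced_ranges": [],
    "backfill_in_progress": False,
}

def load_state() -> dict:
    """Return persisted state, falling back to defaults on first run."""
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
        # Merge over defaults so newly added keys need no migration step.
        return {**DEFAULT_STATE, **state}
    return dict(DEFAULT_STATE)

def write_state(state: dict) -> None:
    """Persist state via write-then-replace to avoid a torn file."""
    tmp = STATE_FILE.with_suffix(".tmp")
    tmp.write_text(json.dumps(state, indent=2))
    tmp.replace(STATE_FILE)
```

Merging loaded state over defaults is what lets a new server version read an old state file without a migration.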
### Step 2: Define 8 Essential Tools
```python
@mcp.tool()
def readwise_daily_review() -> str:
    """Fetch today's highlights and save to Daily Reviews/"""

@mcp.tool()
def readwise_import_recent(category: str = "tweet", limit: int = 20) -> str:
    """Import recent documents since last import"""

@mcp.tool()
def readwise_backfill(target_date: str, category: str = "tweet") -> str:
    """Paginate backward to target date with optimization"""

@mcp.tool()
def readwise_book_highlights(title: str | None = None, book_id: str | None = None) -> str:
    """Get all highlights from a specific book"""

@mcp.tool()
def readwise_search_highlights(query: str, limit: int = 50) -> str:
    """Search highlights by query"""

@mcp.tool()
def readwise_state_info() -> str:
    """Show current state: timestamps, synced ranges, document count"""

@mcp.tool()
def readwise_init_ranges() -> str:
    """Scan filesystem to build synced_ranges from existing documents"""

@mcp.tool()
def readwise_reset_state(clear_ranges: bool = False) -> str:
    """Clear state file for fresh start"""
```
Eight tools. Each maps to something I actually do.
### Step 3: Replace curl with requests
The backfill script used subprocess calls to curl. For the MCP server, I replaced these with Python's `requests` library:
```python
import os
import requests

READWISE_TOKEN = os.environ["READWISE_TOKEN"]

def fetch_reader_documents(updated_after: str | None = None, category: str = "tweet") -> dict:
    """Fetch documents from the Readwise Reader API v3."""
    url = "https://readwise.io/api/v3/list/"
    params = {"category": category}
    if updated_after:
        params["updatedAfter"] = updated_after
    headers = {"Authorization": f"Token {READWISE_TOKEN}"}
    response = requests.get(url, params=params, headers=headers, timeout=30)
    response.raise_for_status()  # surface API errors instead of parsing an error body
    return response.json()
```
No subprocess. No shell escaping. Direct HTTP calls.
### Step 4: Update Configuration
One edit to `.mcp.json`:
```json
{
  "readwise": {
    "type": "stdio",
    "command": "/path/to/readwise-mcp-server/.venv/bin/python",
    "args": ["/path/to/src/readwise-mcp-server/server.py"],
    "env": {
      "READWISE_TOKEN": "${READWISE_TOKEN}",
      "VAULT_PATH": "/path/to/vault"
    }
  }
}
```
A Python path I control. No npx. I still use Nix, but only to manage Python dependencies.
---
## The Numbers
| Metric | Node.js Original | Python Replacement |
|--------|------------------|-------------------|
| Lines of code | 6,749 | 646 |
| Number of tools | 13+ | 8 |
| Dependencies | npm ecosystem | requests, pyyaml, mcp |
| Token usage | High (verbose parameters) | ~60% reduction |
| Test coverage | Unknown | 40 tests*, all passing |
*Started with 20 tests, expanded to 38 after filename sanitization bugs, then 40 after backfill optimization fix. Each bug discovery added regression tests.
90% smaller. 38% fewer tools. Same functionality for my use cases.
---
## Why This Worked
### 1. The Logic Was Already Proven
I didn't write new pagination logic. I didn't invent new deduplication. I copied functions that had processed hundreds of documents successfully. The MCP layer was just plumbing.
### 2. I Knew My Actual Usage
The original server had tools I never used. By building only what I needed, I avoided complexity. Eight tools covering five use cases.
### 3. Python MCP Was Already Working
Basecamp MCP proved the stack. FastMCP, stdio transport, environment variables: all working. The Readwise server was the same pattern with different API calls.
### 4. State Format Was Preserved
```json
{
  "last_import_timestamp": "2026-01-22T04:55:22.529182+00:00",
  "synced_ranges": [
    {"start": "2026-01-01T00:00:00Z", "end": "2026-01-22T04:55:22Z"}
  ],
  "backfill_in_progress": false
}
```
Same state file. The new server read existing state without migration. Continuity preserved.
### 5. But Edge Cases Still Surfaced
The initial rebuild took 38 minutes. Making it production-ready took 28 hours and 3 bug fixes:
**Timestamp format bug (Jan 24, 04:36):**
Appending `Z` to `.isoformat()` output that already included `+00:00` produced malformed timestamps like `2026-01-23T02:16:59+00:00Z`. The Readwise API correctly rejected these with 400 Bad Request. Impact: all import operations blocked. Fix: remove the `+ "Z"` suffix from 6 locations.
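The failure mode is easy to reproduce with a timezone-aware datetime; `isoformat()` already emits the UTC offset, so a hand-appended `Z` doubles the suffix:

```python
from datetime import datetime, timezone

now = datetime(2026, 1, 23, 2, 16, 59, tzinfo=timezone.utc)

# Buggy: isoformat() on an aware datetime already includes "+00:00",
# so appending "Z" produces a malformed timestamp the API rejects.
bad = now.isoformat() + "Z"   # "2026-01-23T02:16:59+00:00Z"

# Fixed: keep the offset isoformat() emits...
good_offset = now.isoformat()  # "2026-01-23T02:16:59+00:00"
# ...or format explicitly with a trailing "Z" and no numeric offset.
good_z = now.strftime("%Y-%m-%dT%H:%M:%SZ")  # "2026-01-23T02:16:59Z"
```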
**Filename sanitization bug (Jan 24, 05:24):**
Documents with emoji-only titles ("🍿🍿") or ellipsis ("…") created invalid filenames. The qmd indexer requires alphanumeric content and threw "handelize: path has no valid filename content." Impact: 3 documents failed to import. Fix: fallback to `{Category} by {Author} - {Date}.md` when title has no alphanumeric characters. Added 13 regression tests.
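The fallback logic can be sketched in a few lines. This is a simplified illustration, not the server's actual `sanitize_filename`, and the parameter names are assumptions:

```python
import re

def sanitize_filename(title: str, category: str, author: str, date: str) -> str:
    """Use the title unless it has no alphanumeric content after cleaning."""
    # Drop characters that are unsafe in filenames.
    cleaned = re.sub(r'[<>:"/\\|?*]', "", title).strip()
    if not re.search(r"[A-Za-z0-9]", cleaned):
        # Emoji-only or ellipsis-only titles fall back to a descriptive name.
        return f"{category} by {author} - {date}.md"
    return f"{cleaned}.md"
```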
**Backfill optimization bug (Jan 24, 08:48):**
When target date was before synced ranges, the optimization logic incorrectly used `range.end` as `updatedAfter`, filtering out documents in the gap. Impact: backfill to Aug 2025 only imported 13 docs from Jan 21-23 instead of filling Aug-Nov gap. Fix: return `None` for `updatedAfter` when filling gaps, let deduplication handle overlap.
Each bug revealed itself in production use. Each got its own test suite. The test count grew from 20 → 38 → 40 as edge cases surfaced.
---
## The Pattern
When an integration breaks:
1. **Inventory what you have**: What's already working? What logic exists?
2. **Define what you actually need**: Not what the tool provided. What you use.
3. **Wrap proven logic**: Don't rewrite. Reuse.
4. **Minimize surface area**: Fewer tools = fewer failure points = lower token costs
The Node.js server broke because of a path assumption I didn't control. The Python server works because every path is explicit and every dependency is local.
Sometimes the fastest debugging is not debugging at all.
---
## So What?
This isn't just about Readwise. It's about integration ownership in the AI tooling era.
**The MCP ecosystem is young.** Servers break. Maintainers disappear. Dependencies shift. The Node.js server I replaced had 13+ tools, but it was maintained by one person juggling 4+ MCP servers. When it broke, I had three options:
1. **Wait for upstream fix**: No timeline, dependency on someone else's priorities
2. **Debug the Node.js dependency chain**: High complexity, low control
3. **Rebuild with what I already had**: 38 minutes to working replacement
I chose #3 because I'd already invested in understanding the domain. The backfill script wasn't just a utility. It was insurance against integration fragility.
**The lesson for AI integrations**: When you depend on an MCP server, you're betting on that maintainer's availability. Your mitigation is domain understanding and reusable logic. The best time to extract that logic is before the integration breaks.
---
## The Lesson
Integration fragility is real. When you depend on tools you don't control, breakage is a matter of when, not if.
The mitigation isn't avoiding integrations. It's knowing your dependencies well enough to rebuild the parts you need. That requires:
- Understanding what the integration actually does for you
- Having proven logic available to reuse
- Controlling your environment (paths, dependencies, configuration)
My Readwise MCP broke. 38 minutes later, I had a replacement. Not because I'm fast, but because I'd already done the hard work in a backfill script that sat in `.claude/scripts/` waiting to be useful in a different form.
In the MCP ecosystem's early days, this pattern matters even more. Servers will break. Maintainers will move on. Your backfill scripts and domain logic are the hedge against that fragility.
The best code to write is code you've already written. The best insurance is understanding you've already earned.
**The code:** [github.com/ngpestelos/readwise-mcp-server](https://github.com/ngpestelos/readwise-mcp-server)