…Or how I learned to stop vibe-coding and love the modular bomb
Honestly, it’s been a while.
Like many of you, I’ve been deep in the weeds — testing AI limits, hitting context walls, and rediscovering that the very thing that makes AI development powerful (context) is also what makes it fragile.
A recent — and increasingly common — Reddit thread snapped it into focus. The developer cycle looks like this:
Vibe-code → context fades → docs bloat → token limits hit → modular fixes → more docs → repeat.
It’s not just annoying. It’s systemic. If you’re building with AI tools like Claude, Cursor, or Copilot, this “context rot” is the quiet killer of momentum, accuracy, and scalability.
The Real Problem: Context Rot and Architectural Drift
“Vibe-coding”—the joyful chaos of just diving in—works at small scale. But as projects grow, LLMs choke on sprawling histories. They forget relationships, misapply logic, and start reinventing what you already built.
Three things make this worse:
- LLM Degradation at Scale: Chroma’s “Context Rot” study and benchmarks like LongICLBench confirm what we’ve all felt: as context length increases, performance falls. Even models like Gemini 1.5 Pro (with a 1M-token window) start stumbling over long-form reasoning.
- Human Churn: Our own docs spiral out of date. We iterate fast and forget to anchor intent. .prod.main.final.final-v2 is funny the first time it happens… just not the 27th time at 2 am with a deadline.
- Architectural Blindness: LLMs are excellent implementers but poor architects. Without modular framing or persistent context, they flail. As one dev put it: “Claude’s like a junior with infinite typing speed and no memory. You still need to be the brain.”
How I Navigated the Cycle: From Chaos to Clauses
I’m a business and product architect, but I often end up wearing every hat — producer, game designer, systems thinker, and yes, sometimes even the game dev. I love working on game projects because they force clarity. They’re brutally honest. Any design flaw? You’ll feel it fast.
One night, deep into a procedural, atmospheric roguelite I was building to sharpen my thinking, I hit the same wall every AI-assisted developer eventually crashes into: context disappeared, re-prompts started failing, and the output drifted hard. My AI companion turned into a bit of a wildcard — spawning new files, reinventing functions, even retrying ideas we’d already ruled out for good reason.
Early on, I followed the path many developers are now embracing:
- Start vibe-coding
- Lose context
- Create a single architectural document (e.g., claude.md)
- That bloats
- Break it into modular prompt files (e.g., claude.md plus a directory of command modules)
- That eventually bloats too
The cycle doesn’t end. It just upgrades. But each step forward buys clarity—and that’s what makes this process worth it.
claude.md: Not My Invention, But a Damn Good Habit
I didn’t invent claude.md. It’s a community practice—a persistent markdown file that functions like a screenplay for your workspace. You can use any document format that helps your AI stay anchored. The name is just shorthand for a living architectural spec.
```markdown
# claude.md

> Persistent context for Claude/Cursor. Keep open during sessions.

## Project Overview
- **Name**: Dreamscape
- **Engine**: Unity 2022+
- **Core Loop**: Dreamlike exploration with modular storytelling

## Key Scripts
- `GameManager.cs`: Handles global state
- `EffectRegistry.cs`: Connects power-ups and logic
- `SceneLoader.cs`: Transitions with async logic
```

TIP: Reference this in prompts: `// See claude.md`
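Injecting the anchor file can be done mechanically too. Here is a minimal sketch (the file name follows the article's convention; the prompt layout and trailing marker are illustrative assumptions, not any vendor's API):

```python
from pathlib import Path

def build_prompt(task: str, anchor_file: str = "claude.md") -> str:
    """Prepend the persistent context file to a task prompt.

    Assumes `claude.md` sits in the working directory; adjust the
    path for your project layout.
    """
    anchor = Path(anchor_file).read_text(encoding="utf-8")
    return f"{anchor}\n\n---\n\n# Task\n{task}\n\n// See claude.md"

# Example: write a tiny anchor file, then build a prompt from it.
Path("claude.md").write_text(
    "# claude.md\n## Project Overview\n- **Name**: Dreamscape\n",
    encoding="utf-8",
)
prompt = build_prompt("Add a fade-out transition to SceneLoader.cs")
```

The point is that the anchor travels with every request automatically, instead of depending on you remembering to paste it.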
But even this anchor file bloats over time—which is where modular prompt definitions come in.
claude.md + Module Files: Teaching Commands Like Functions
My architecture evolved. I needed a way to scope instructions—to teach the AI how to handle repeated requests, like creating new weapon effects or enemy logic. So I made a modular pattern using claude.md + command prompts:
```markdown
# claude.md

## /create_effect
> Creates a new status effect for the roguelike.
- Inherits from `BaseEffect`
- Registers in `EffectRegistry.cs`
- Sample: `/create_effect BurnEffect that does damage over time`
```
This triggers the AI to pull a scoped module file:
```markdown
# create_effect.module.md

## Create New Effect (worked example: a poison effect)
1. Generate `PoisonEffect.cs` inheriting from `BaseEffect`
2. Override `ApplyEffect()`
   - Reduce enemy HP over time
   - Slow movement for 3s
3. Register in `EffectRegistry.cs`
4. Add icon: `poison_icon.png` in `Resources/`
5. Update `PlayerBullet.cs` to attach effect
```
The AI now acts with purpose, not guesswork. But here’s the truth: Even modularity has entropy. After 20 modules, you’ll need sub-modules. After that, indexing. The bloat shifts—not vanishes.
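The command-to-module lookup above can be sketched as a tiny dispatcher. This is a hypothetical helper, not part of any tool: the `prompts/` directory and the `<name>.module.md` naming are assumptions that mirror the article's convention.

```python
from pathlib import Path

MODULE_DIR = Path("prompts")  # hypothetical home for *.module.md files

def resolve_command(message: str) -> str:
    """Map '/create_effect BurnEffect ...' to the contents of
    prompts/create_effect.module.md, followed by the user's arguments."""
    if not message.startswith("/"):
        raise ValueError("not a slash command")
    command, _, args = message[1:].partition(" ")
    module = (MODULE_DIR / f"{command}.module.md").read_text(encoding="utf-8")
    return f"{module}\n\n# Request\n{args}"

# Example: register one module on disk, then resolve a command against it.
MODULE_DIR.mkdir(exist_ok=True)
(MODULE_DIR / "create_effect.module.md").write_text(
    "## Create New Effect\n1. Inherit from `BaseEffect`\n",
    encoding="utf-8",
)
scoped = resolve_command("/create_effect BurnEffect that does damage over time")
```

Only the module relevant to the request enters the context window; everything else stays on disk.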
Modularity Is Just the Next Plateau
The Reddit conversations reflect it clearly—this is an iterative struggle:
- Vibe-coding is fast, until it fragments.
- Documentation helps, until it balloons.
- Modularity is clean, until it multiplies.
So don’t look for a silver bullet. Look for altitude.
Every level of architectural thinking gets you further before collapse. You’re not defeating context entropy—you’re just outpacing it.
Actionable Takeaways for Technical Leaders
- Design Before Code: Start every feature with a plain-English .md file. Force clarity before implementation.
- Modularize Prompt Context: Keep a /prompts/ directory of modular markdown files. Load only what’s needed per task.
- Feature-by-Feature Git Discipline: Develop in small branches. Commit early, often. Update specs with every change.
- Own the Architecture: LLMs build well—but only from your blueprints. Don’t delegate the structure.
Bonus: in my own token-usage tests, this method reduced prompt size by 2–10x and cut debugging time by up to 25%, because each request becomes surgically precise.
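To make the "load only what's needed" idea concrete, here is a rough sketch. The 4-characters-per-token estimate is a common heuristic for English prose, not an exact tokenizer, and the module files are hypothetical:

```python
from pathlib import Path

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English prose."""
    return len(text) // 4

def load_context(prompt_dir: Path, needed: list[str]) -> str:
    """Concatenate only the modules a task needs, not every file on disk."""
    parts = [(prompt_dir / f"{name}.module.md").read_text(encoding="utf-8")
             for name in needed]
    return "\n\n".join(parts)

# Example: three modules exist, but the task at hand only needs one.
d = Path("prompts")
d.mkdir(exist_ok=True)
for name in ["create_effect", "create_enemy", "scene_setup"]:
    (d / f"{name}.module.md").write_text(
        f"## {name}\n" + "instructions\n" * 20, encoding="utf-8"
    )

everything = load_context(d, ["create_effect", "create_enemy", "scene_setup"])
minimal = load_context(d, ["create_effect"])
ratio = estimate_tokens(everything) / max(estimate_tokens(minimal), 1)
```

With modules of similar size, loading one of three cuts the prompt to roughly a third; your actual savings depend on how your modules are scoped.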
This Will Happen to You — and That’s the Point
If you’re building anything complex—a game system, a CRM, a finance tool—this will happen to you. This isn’t hyperbole. It will.
Not because your AI model is weak. But because the problem isn’t model size—it’s architectural load. Even with 2 million tokens of context, you can’t brute force clarity. You have to design for it.
That’s why I believe the era of AI-assisted development isn’t about being better developers. It’s about becoming better architects.
What’s Your Approach?
How are you managing AI context in real projects? Have a prompt ritual, toolchain trick, or mental model that works? Drop it in the comments. I’m collecting patterns.
Sources:
- Chroma Research: "Context Rot: How Increasing Input Tokens Impacts LLM Performance" (https://research.trychroma.com/context-rot). Research defining and demonstrating "Context Rot," where LLM performance degrades significantly as input context length increases, across a range of models.
- "LongICLBench: Long-context LLMs Struggle with Long In-context Learning" (https://arxiv.org/abs/2404.02060). An academic benchmark showing a notable decline in even advanced LLMs' performance as task complexity and context length increase.
- Google: "What is a long context window? Google DeepMind engineers explain" (https://blog.google/technology/ai/long-context-window-ai-models/). Google's explanation of long context windows, including Gemini 1.5 Pro's 1-million-token capacity and internal research on even larger contexts.
- Anthropic: "Context windows" (https://docs.anthropic.com/en/docs/build-with-claude/context-windows). Anthropic's official guide to understanding and managing Claude's context window, including token accumulation and capacity.