Parallel AI Agents Need Isolation. I Learned This the Hard Way.
Four agents writing code in the same git checkout. Ten stashes and 45 minutes of recovery later, the rule wasn't the lesson — the announcement that enforces it was.
Within minutes, git branches were flipping mid-tool-call. Files appeared and disappeared between edits. Stashes piled up — one of them was literally named "others-work-again." One agent gave up entirely, its partial work sitting uncommitted. Another self-isolated into a temporary directory without being asked. By the time I noticed the pattern, we had ten stashes and a workspace that could only be reset, not recovered.
The parameter existed. I didn't use it.
The Failure Was Architectural, Not Tactical
The agent framework I use has a worktree-isolation option that creates a temporary git worktree so each agent operates on its own copy of the repo. It's right there in the tool specification. I'd used it before. On this particular parallel dispatch — four agents implementing four independent modules — I dispatched all four without setting the isolation parameter on any of them.
The failure wasn't subtle. It was architectural.
The moment you have two or more agents writing to the same filesystem, you have a race condition on every git operation. Not sometimes. Always.
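What worktree isolation buys you can be sketched in a few lines: each agent gets its own checkout on its own branch via `git worktree add`, so concurrent branch switches and edits can't collide. This is a minimal sketch, not the framework's actual implementation; the branch naming, temp-directory layout, and `make_isolated_worktree` helper are all illustrative.

```python
import subprocess
import tempfile
from pathlib import Path

def make_isolated_worktree(repo: Path, agent_id: str) -> Path:
    """Create a throwaway git worktree so one agent gets a private checkout.

    A real framework would also clean up with `git worktree remove`
    when the agent finishes; that part is omitted here.
    """
    parent = Path(tempfile.mkdtemp(prefix="agent-worktrees-"))
    wt_dir = parent / agent_id  # does not exist yet; git creates it
    branch = f"agent/{agent_id}"
    # -b creates a fresh branch per agent, so no two agents fight over HEAD.
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(wt_dir)],
        check=True,
    )
    return wt_dir
```

Each returned path is a full working copy sharing the same object store, so the agents' commits all land in one repository without ever sharing a checkout.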
The rule after the recovery:
Multi-agent parallel dispatch on shared filesystem state is structurally unsafe without explicit isolation. Every parallel dispatch of two or more code-writing agents must set the isolation parameter to worktree.
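The rule is mechanical enough to live in code rather than memory. A hedged sketch, assuming a hypothetical `Dispatch` spec with an `isolation` field standing in for whatever your framework's parameter is called:

```python
from dataclasses import dataclass

@dataclass
class Dispatch:
    # Hypothetical stand-in for a framework's dispatch spec.
    agent_id: str
    writes_code: bool
    isolation: str = "none"

def enforce_isolation(dispatches: list[Dispatch]) -> list[Dispatch]:
    """Refuse to fire a parallel batch unless every code-writing
    dispatch is isolated in its own worktree."""
    writers = [d for d in dispatches if d.writes_code]
    if len(writers) >= 2:
        bad = [d.agent_id for d in writers if d.isolation != "worktree"]
        if bad:
            raise ValueError(f"unisolated code-writing dispatches: {bad}")
    return dispatches
```

The check raises before anything fires, which is the point: the unsafe batch never reaches the filesystem.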
But that rule alone doesn't enforce itself. Rules stored as "I'll remember next time" don't survive the next pressure window.
Announcement-as-Enforcement
What survives is the announcement:
"All N dispatches use isolation: worktree — verified"
That exact sentence, said explicitly before firing any parallel dispatch. If I can't say it, I don't fire. Single agent? "Not applicable because N=1." Read-only agents? "Not applicable because no code-writing." The announcement happens every time, or the dispatch doesn't.
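Those three cases reduce to a tiny pre-flight function. A sketch, assuming you can count total dispatches and code-writing dispatches before firing:

```python
def announcement(n_dispatches: int, n_writers: int) -> str:
    """Produce the pre-dispatch announcement, or the explicit
    not-applicable variant, before any parallel dispatch fires."""
    if n_dispatches == 1:
        return "Not applicable because N=1"
    if n_writers == 0:
        return "Not applicable because no code-writing"
    return f"All {n_dispatches} dispatches use isolation: worktree — verified"
```

The value is not the string; it is that producing the string forces the count and the verification to happen at dispatch time.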
Why the announcement works when the memory doesn't: internal reminders fail under pressure because there's no external checkpoint that forces them to surface. The announcement is the checkpoint. You can't skip saying it without noticing you skipped it.
Announcement-as-enforcement beats memory-as-intent because it surfaces the decision at the moment of action. Internal reminders fail. External commitments don't.
The Math on Parallelism
Parallelism feels faster. Collision recovery is always slower than serial dispatch would have been, and the math is worse than you think — recovery isn't just "re-run the agents." It's untangling stashes, reconstructing which work belongs to which agent, re-dispatching with corrected framing, and hoping you caught all the corruption.
The incident cost 45 minutes. The discipline that came out of it won't cost 45 minutes again.
The Transfer
Any multi-agent LLM orchestration where agents write to shared state. Code, documents, config, anything. If you dispatch two or more agents in parallel and they both mutate state, you need isolation. If the tool has it, use it. If the tool doesn't have it, serialize.
The deeper point: orchestrating AI agents is a filesystem-hygiene problem as much as it's a prompting problem. The prompting side gets most of the attention. The filesystem side is where the catastrophic failures live.
The lesson: write the announcement into your dispatch habit, not into your memory. "All N dispatches use isolation: worktree — verified" is not a private thought. Say it out loud, every time, or don't fire.

New to Claude Code? The Getting Started track takes you from zero to your first project in 8 lessons.
Start the Getting Started with Claude Code track →
