Engineering

Parallel AI Agents Need Isolation. I Learned This the Hard Way.

Four agents writing code in the same git checkout. Ten stashes and 45 minutes of recovery later, the rule wasn't the lesson — the announcement that enforces it was.

April 20, 2026
7 min read
#agentic-coding#engineering#ai-agents

Four AI agents writing code in the same git checkout.

Within minutes, git branches were flipping mid-tool-call. Files appeared and disappeared between edits. Stashes piled up — one of them was literally named "others-work-again." One agent gave up entirely, its partial work sitting uncommitted. Another self-isolated into a temporary directory without being asked. By the time I noticed the pattern, we had ten stashes and a workspace that could only be reset, not recovered.

Recovery Cost
45 min
drop 10 stashes · delete 4 branches · archive 1 self-isolated worktree · redispatch

The parameter existed. I didn't use it.


The Failure Was Architectural, Not Tactical

The agent framework I use has a worktree-isolation option that creates a temporary git worktree so each agent operates on its own copy of the repo. It's right there in the tool specification. I'd used it before. On this particular parallel dispatch — four agents implementing four independent modules — I dispatched all four without setting the isolation parameter on any of them.

The failure wasn't subtle. It was architectural.

The moment you have two or more agents writing to the same filesystem, you have a race condition on every git operation. Not sometimes. Always.
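Outside any particular framework, the isolation the parameter buys is plain `git worktree`. Here's a minimal sketch of what per-agent isolation looks like; the repo layout, branch naming, and the `isolate` helper are illustrative, not the framework's actual API:

```python
import subprocess
from pathlib import Path

def worktree_args(repo: Path, agent_id: str, base: str = "main") -> list[str]:
    """Build the `git worktree add` command that gives one agent its own
    checkout and its own branch, both cut from a shared base branch."""
    path = repo.parent / f"{repo.name}-wt-{agent_id}"
    branch = f"agent/{agent_id}"
    return ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(path), base]

def isolate(repo: Path, agent_ids: list[str]) -> list[Path]:
    """One worktree per agent: no shared files, no racing git operations."""
    paths = []
    for agent_id in agent_ids:
        args = worktree_args(repo, agent_id)
        subprocess.run(args, check=True)  # fails loudly if the branch already exists
        paths.append(Path(args[-2]))
    return paths
```

Each agent then runs with its own worktree as the working directory. Merging back becomes an explicit, serialized step at the end instead of four agents fighting over one checkout.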

[Diagram: a shared checkout puts A1–A4 on a collision course (stashes, reverts, lost work); isolated worktrees give A1–A4 clean parallelism — each agent with its own filesystem and its own branch.]

The rule after the recovery:

Multi-agent parallel dispatch on shared filesystem state is structurally unsafe without explicit isolation. Every parallel dispatch of two or more code-writing agents must set the isolation parameter to worktree.

But that rule alone doesn't enforce itself. Rules stored as "I'll remember next time" don't survive the next pressure window.


Announcement-as-Enforcement

What survives is the announcement:

"All N dispatches use isolation: worktree — verified"

That exact sentence, said explicitly before firing any parallel dispatch. If I can't say it, I don't fire. Single agent? "Not applicable because N=1." Read-only agents? "Not applicable because no code-writing." The announcement happens every time, or the dispatch doesn't.

INSIGHT

Why the announcement works when the memory doesn't: internal reminders fail under pressure because there's no external checkpoint that forces them to surface. The announcement is the checkpoint. You can't skip saying it without noticing you skipped it.

Announcement-as-enforcement beats memory-as-intent because it surfaces the decision at the moment of action. Internal reminders fail. External commitments don't.


The Math on Parallelism

Serial dispatch: ~20 min · Parallel isolated: ~5 min · Parallel collision: +45 min recovery

Parallelism feels faster. Collision recovery is always slower than serial dispatch would have been, and the math is worse than you think — recovery isn't just "re-run the agents." It's untangling stashes, reconstructing which work belongs to which agent, re-dispatching with corrected framing, and hoping you caught all the corruption.

The incident cost 45 minutes. The discipline that came out of it won't cost 45 minutes again.


The Transfer

The lesson transfers to any multi-agent LLM orchestration where agents write to shared state: code, documents, config, anything. If you dispatch two or more agents in parallel and they both mutate that state, you need isolation. If the tool has it, use it. If the tool doesn't, serialize.
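That decision rule fits in a few lines. A sketch, with illustrative names:

```python
def dispatch_plan(n_agents: int, agents_mutate_state: bool, tool_has_isolation: bool) -> str:
    """Decide how to dispatch: parallel is only safe when agents don't
    share mutable state, or when each gets its own isolated copy."""
    if n_agents <= 1 or not agents_mutate_state:
        return "parallel"            # no shared-mutation race to worry about
    if tool_has_isolation:
        return "parallel-isolated"   # e.g. isolation: worktree per agent
    return "serial"                  # no isolation available: serialize
```

Note that read-only agents fall through to plain parallelism; the discipline only bites when writes are involved.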

The deeper point: orchestrating AI agents is a filesystem-hygiene problem as much as it's a prompting problem. The prompting side gets most of the attention. The filesystem side is where the catastrophic failures live.

The lesson: write the announcement into your dispatch habit, not into your memory. "All N dispatches use isolation: worktree — verified" is not a private thought. Say it out loud, every time, or don't fire.

Go deeper in the Academy (free)

New to Claude Code? The Getting Started track takes you from zero to your first project in 8 lessons.

Start the Getting Started with Claude Code track →


Follow the Signal

If this was useful, follow along. Daily intelligence across AI, crypto, and strategy — before the mainstream catches on.

No spam. Unsubscribe anytime.
