You type 4 sentences.
The system deploys 60 agents.
CommandCC is a coordination system that lets one human command dozens of AI agents working in parallel on the same codebase. No message passing. No databases. Just files, phases, and a doctrine that scales.
Here's exactly how it works.
AI agents are fast. Humans are the bottleneck.
A single AI coding agent can implement a feature in minutes. But a developer working with agents one at a time hits a wall: they become the serializer. They define each task, review each output, integrate each result. For every agent added, the human coordination cost grows linearly.
The agent is not the bottleneck. The human is.
Without CommandCC
4 features = 4-6 hours.
With CommandCC
4 features = 13 minutes.
The question we answered: Can one person effectively command dozens of agents on a shared codebase without the coordination overhead destroying the gains?
Yes. With three ideas.
Hierarchy. Decomposition. Files.
Match the model to the job
Not every task needs the most expensive model. Strategic thinking (what to build, how to design it, whether it's good enough) needs the smartest model. Implementation (writing the code, running the tests) needs a fast, capable one. Scanning the codebase needs the cheapest, fastest one. Three tiers, three cost profiles, one clear hierarchy.
Think
Strategize, decompose, design, review. Never writes code.
Build
Implement, wire, test. The workhorse. Deep execution.
Scan
Fast recon. Quick in, quick out. Cheap and expendable.
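The tier assignment can be sketched as a simple lookup table. This is an illustrative sketch, not CommandCC's actual configuration: the tier labels, model names, and role-to-tier assignments below are assumptions based on the descriptions above.

```python
# Hypothetical role-to-tier mapping; names are illustrative, not CommandCC's real config.
MODEL_TIERS = {
    "think": {"model": "opus-class",   "roles": ["strategist", "decomposer", "architect", "reviewer"]},
    "build": {"model": "sonnet-class", "roles": ["builder", "wirer", "tester"]},
    "scan":  {"model": "haiku-class",  "roles": ["scout"]},
}

def tier_for(role: str) -> str:
    """Return the tier whose role list contains `role`."""
    for tier, spec in MODEL_TIERS.items():
        if role in spec["roles"]:
            return tier
    raise KeyError(role)

print(tier_for("builder"))  # build
print(tier_for("scout"))    # scan
```

The point of the table is cost control: a scout never burns opus-class tokens, and a strategist never gets a model too weak to plan.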
Automate the task breakdown
The biggest human bottleneck is decomposition: breaking "deploy moltbook engine" into specific, independent sub-tasks. The Decomposer is an Opus-class agent that does this automatically. It reads your high-level objectives and splits each into 2-4 independent sub-objectives. Independence is the key constraint: no sub-objective can depend on another's output, because they all run in parallel.
Coordinate through files, not messages
Agents don't talk to each other. They read files from the previous phase and write files for the next phase. The strategist writes OPERATION-PLAN.md. The decomposer reads it and writes DECOMPOSITION.md. Architects read that and write ARCHITECTURE-A1.md. No chat, no APIs, no message queues. The filesystem is the protocol.
The multiplication layer
This is the core innovation. Without it, the human manually breaks down every objective. With it, you type 4 sentences and the system generates 12+ independent work streams automatically.
The Decomposer enforces one hard rule: no sub-objective can touch the same file as another sub-objective. If two pieces of work share a dependency, they stay in the same sub-objective. This prevents merge conflicts when 12 builders are writing code simultaneously.
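The isolation rule amounts to a pairwise disjointness check over the file sets each sub-objective claims. Here is a minimal sketch of that check; the function name, plan structure, and sub-objective labels are invented for illustration.

```python
# Sketch of the Decomposer's file-isolation rule: no two sub-objectives
# may claim the same file, so parallel builders never collide.
def check_isolation(sub_objectives: dict[str, set[str]]) -> list[tuple[str, str, set[str]]]:
    """Return every pair of sub-objectives whose claimed file sets overlap."""
    conflicts = []
    items = list(sub_objectives.items())
    for i, (name_a, files_a) in enumerate(items):
        for name_b, files_b in items[i + 1:]:
            shared = files_a & files_b
            if shared:
                conflicts.append((name_a, name_b, shared))
    return conflicts

plan = {
    "A1": {"empire1d.py", "terrain.py"},
    "A2": {"empire1d.py", "units.py"},  # overlaps A1 -> must be merged into A1
    "B1": {"lobby.py"},
}
print(check_isolation(plan))  # [('A1', 'A2', {'empire1d.py'})]
```

Any non-empty result means the decomposition is rejected and the overlapping pieces get folded into one sub-objective.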
Lesson from the field
In our first operation (OPBLITZ3), two builders both edited the same file, empire1d.py. The edits were logically independent but created a merge conflict. This incident directly led to the Decomposer's file isolation rule. The system now prevents this class of error by design.
The octopus
Every operation follows nine sequential phases. Within each phase, agents fan out in parallel (the arms of the octopus), then collapse their work into shared files before the next phase begins (the spine).
Recon
Strategy
OPERATION-PLAN.md with objectives, phasing, risks.
Decompose
Architecture
Build
Wire
Test
Review
Integration
BATTLE-MAP.md, the consolidated result.
From sentences to agents
The human types 4 objectives. Here's how the system expands that into a coordinated fleet:
12 sub-objectives × 5 agents each (architect + builder + wirer + tester + reviewer)
+ 8 scouts + 1 strategist + 1 decomposer + 1 integrator
The time scales sublinearly: doubling objectives doesn't double time because the extra agents run in parallel. The sequential phases (strategy, decompose, integrate) are fixed overhead regardless of objective count.
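The fleet arithmetic above works out as follows (counts taken from the text; the split of 3 sub-objectives per objective is one point inside the Decomposer's 2-4 range):

```python
# Worked fleet-size arithmetic using the counts stated in the text.
OBJECTIVES = 4
SUBS_PER_OBJECTIVE = 3          # Decomposer splits each into 2-4; 3 assumed here
AGENTS_PER_SUB = 5              # architect + builder + wirer + tester + reviewer
FIXED_OVERHEAD = 8 + 1 + 1 + 1  # scouts + strategist + decomposer + integrator

subs = OBJECTIVES * SUBS_PER_OBJECTIVE       # 12 sub-objectives
fleet = subs * AGENTS_PER_SUB + FIXED_OVERHEAD
print(subs, fleet)  # 12 71
```

Note where the sublinearity comes from: doubling OBJECTIVES doubles only the parallel per-sub agents, while FIXED_OVERHEAD and the sequential phases stay constant.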
Operation OPBLITZ3
The first field deployment. One operator typed four objectives for the empire1.io web application. The system deployed 16 agents across 4 phases. This was before the Decomposer existed, so decomposition was manual.
The operator typed one command and read one battle map. The equivalent sequential process would need ~50 context switches (defining tasks, reviewing outputs, debugging integration, re-running tests). That's the 200x cognitive load compression.
With the Decomposer now in place, the same 4 objectives would produce 12 sub-objectives and deploy ~71 agents instead of 16. Same human input, 4x more parallelism.
The filesystem is the protocol
Traditional multi-agent systems use message passing: agents send requests to each other and wait for responses. This creates O(n²) communication channels for n agents, ordering dependencies, and cascading failures when one agent is slow.
CommandCC eliminates all inter-agent communication. The filesystem is the only coordination mechanism:
Phase 1 writes OPERATION-PLAN.md.
Phase 2 reads it, writes DECOMPOSITION.md.
Phase 3 reads that, writes ARCHITECTURE-*.md.
And so on. No agent ever reads a file that hasn't been written by a prior phase. The phase sequence guarantees ordering without explicit synchronization.
This means agents don't need to know about each other. A builder doesn't know how many other builders are running. It just reads its architecture file, writes its code, and produces a build report. The system handles everything else.
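The read-then-write discipline above can be sketched in a few lines. The file names come from the text; the phase payloads and the `run_phase` helper are invented for illustration.

```python
# Minimal sketch of filesystem-as-protocol coordination: each phase reads
# only artifacts a prior phase has already written, then writes its own.
# Phase contents here are placeholders, not real CommandCC output.
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())

def run_phase(reads: list[str], writes: str, work) -> None:
    """Read prior-phase artifacts, compute, write this phase's artifact."""
    inputs = {name: (workdir / name).read_text() for name in reads}
    (workdir / writes).write_text(work(inputs))

# Phases run strictly in sequence; agents within a phase fan out in parallel.
run_phase([], "OPERATION-PLAN.md", lambda _: "objectives, phasing, risks")
run_phase(["OPERATION-PLAN.md"], "DECOMPOSITION.md",
          lambda inp: "sub-objectives from: " + inp["OPERATION-PLAN.md"])
run_phase(["DECOMPOSITION.md"], "ARCHITECTURE-A1.md",
          lambda inp: "design for A1 from: " + inp["DECOMPOSITION.md"])

print((workdir / "ARCHITECTURE-A1.md").read_text())
```

Because each phase only ever reads files the previous phase finished writing, ordering falls out of the phase sequence itself: no locks, no queues, no handshakes.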
Four steps
Copy the agents
Copy agents/ from the CommandCC repo to your project's .claude/agents/ folder. These are the 27 pre-built agent definitions.
Pick an operation
Copy an operation template from operations/ to .claude/commands/. Start with feature-deploy.md, the flagship template.
Write your objectives
Edit the template. Replace the example objectives with yours. Four sentences is all you need.
Deploy
Run claude, then type /feature-deploy. Watch the waves execute. Read the battle map when it's done.
Ready to command?
Open source. Apache 2.0. Zero dependencies beyond Claude Code.