vt-c-dispatching-parallel-agents¶
Use when facing 3+ independent failures that can be investigated without shared state or dependencies; dispatches multiple Claude agents to investigate and fix independent problems concurrently
Plugin: core-standards
Category: Other
Command: /vt-c-dispatching-parallel-agents
Dispatching Parallel Agents¶
Overview¶
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
When to Use¶
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
Use when:
- 3+ test files are failing with different root causes
- Multiple subsystems are broken independently
- Each problem can be understood without context from others
- No shared state between investigations
Don't use when:
- Failures are related (fixing one might fix others)
- You need to understand full system state
- Agents would interfere with each other
The Pattern¶
1. Identify Independent Domains¶
Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality
Each domain is independent - fixing tool approval doesn't affect abort tests.
2. Create Focused Agent Tasks¶
Each agent gets:
- Specific scope: one test file or subsystem
- Clear goal: make these tests pass
- Constraints: don't change other code
- Expected output: a summary of what it found and fixed
3. Dispatch in Parallel¶
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
4. Review and Integrate¶
When agents return:
- Read each summary
- Verify fixes don't conflict
- Run the full test suite
- Integrate all changes
Agent Prompt Structure¶
Good agent prompts are:
1. Focused - one clear problem domain
2. Self-contained - all context needed to understand the problem
3. Specific about output - what should the agent return?
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
Common Mistakes¶
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope

❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: paste the error messages and test names

❌ No constraints: agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"

❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return a summary of root cause and changes"
Worktree Isolation¶
When dispatched agents will modify files (not just investigate), use worktree isolation to prevent conflicts.
Decision tree:
- Will agents modify files? → Use isolation: worktree
- Read-only investigation? → Shared workspace is fine
Using isolation: worktree in Agent Definitions¶
Agents with isolation: worktree in their frontmatter automatically get their own git worktree:
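A minimal sketch of such a definition, assuming the usual frontmatter layout (the `name` and `description` values here are illustrative):

```yaml
---
name: fix-abort-tests
description: Fix the failing tests in agent-tool-abort.test.ts
isolation: worktree   # agent runs in its own git worktree on its own branch
---
```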
Using --worktree from CLI¶
Launch isolated sessions with the native flag:
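For example (the exact invocation is an assumption; check `claude --help` for the syntax in your CLI version):

```shell
# Assumed invocation of the native flag named above
claude --worktree
```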
Dispatch Pattern with Worktrees¶
Agent 1 (worktree: feature/dark-mode) → Build dark mode
Agent 2 (worktree: fix/payment-bug) → Fix payment flow
Agent 3 (worktree: feature/edit-todos) → Add inline editing
Each agent works in isolated files on its own branch. When one finishes, merge or review independently without waiting for the others.
Without worktrees: Agents editing the same codebase cause file conflicts and branch contamination.
Wave-Aware Dispatch from tasks.md¶
When the input is a tasks.md file (not individual problem domains), use wave-based dispatch to execute plan tasks in dependency order with worktree isolation.
This mode is triggered when: the user provides a tasks.md file, an explicit dependency graph, or a spec directory containing tasks.md. The existing problem-domain dispatch (3+ independent failures) remains unchanged.
Pre-flight: Orphan worktree detection
Before dispatching, check for orphaned worktrees from a previous run:
1. Run `git worktree list`
2. Check for worktrees matching `.worktrees/wave-*`
3. If found and no active `wave-state.yaml` references them, prompt the user:
   Found orphaned worktrees from a previous run:
     .worktrees/wave-2-task-2.1
   Clean up? [Yes / Keep for inspection]
4. On confirmation, remove each with `git worktree remove <path> --force`
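The orphan check above can be sketched by parsing `git worktree list --porcelain` output; the `find_orphans` helper and the `active` set (paths still referenced by `wave-state.yaml`) are illustrative names, not part of any real API:

```python
import re

def find_orphans(porcelain_output: str, active: set) -> list:
    """Return wave worktree paths not referenced by the active wave state."""
    paths = [
        line.split(" ", 1)[1]
        for line in porcelain_output.splitlines()
        if line.startswith("worktree ")
    ]
    # Keep only wave worktrees (.worktrees/wave-*) that no active state references
    return [p for p in paths if re.search(r"\.worktrees/wave-", p) and p not in active]

output = """\
worktree /repo
worktree /repo/.worktrees/wave-2-task-2.1
worktree /repo/.worktrees/wave-2-task-2.2
"""
print(find_orphans(output, active={"/repo/.worktrees/wave-2-task-2.2"}))
# → ['/repo/.worktrees/wave-2-task-2.1']
```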
1. Parse Task Dependencies¶
Extract structure from tasks.md:
- Phase headings: ## Phase N: Name (optionally with [P] marker for parallelizable phases)
- Task IDs: ### N.M Task Name
- Explicit dependency annotations: **Depends on**: X.Y or **Depends on**: Phase N
- Inline references: prose like "reuses algorithm from 1.1" or "extends the parser in 2.3" — extract the referenced task ID as a dependency
- File annotations: **File**: path/to/file
Build dependency graph:
| Pattern | Rule |
|---|---|
| No annotations, no `[P]` | Default: Task N.M depends on ALL tasks in Phase N-1 |
| `[P]` on phase heading | Tasks in that phase have NO implicit phase dependency |
| `**Depends on**: X.Y` | Explicit dependency on task X.Y (overrides the phase default for that task) |
| `**Depends on**: Phase N` | Explicit dependency on ALL tasks in Phase N |
| Inline reference to a task ID (e.g., "reuses X.Y") | Implicit dependency on the referenced task |
| No dependencies found anywhere | All tasks are independent (single wave) |
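A sketch of extracting the explicit-annotation rule from the table above; it handles only `### N.M` task headings and `**Depends on**: X.Y` lines, and the simplified tasks.md sample is illustrative, not the full format:

```python
import re

def parse_deps(tasks_md: str) -> dict:
    """Map each task ID to its explicit **Depends on** task dependencies."""
    deps = {}
    current = None
    for line in tasks_md.splitlines():
        if m := re.match(r"### (\d+\.\d+) ", line):
            current = m.group(1)          # entering a new task section
            deps[current] = set()
        elif current and (m := re.match(r"\*\*Depends on\*\*: (\d+\.\d+)", line)):
            deps[current].add(m.group(1))  # explicit task dependency
    return deps

tasks = """\
## Phase 1: Setup
### 1.1 Setup environment
## Phase 2: Build [P]
### 2.1 Implement parser
### 2.2 Implement validator
**Depends on**: 2.1
"""
print(parse_deps(tasks))
# → {'1.1': set(), '2.1': set(), '2.2': {'2.1'}}
```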
Detect circular dependencies before execution:
⛔ Circular dependency detected:
Task 2.1 → depends on 3.1 → depends on 2.1
Resolve the cycle in tasks.md before re-running.
2. Compute Wave Groupings¶
Topological sort with level assignment:
- Let `resolved` = the set of tasks already completed (from `wave-state.yaml` if resuming, otherwise empty)
- Repeat until all tasks are assigned:
  a. `wave` = tasks whose dependencies are ALL in `resolved` (or that have no dependencies)
  b. If `wave` is empty and tasks remain → circular dependency
  c. Sort the wave by priority (P0 > P1 > P2), then by task ID
  d. If `len(wave) > 5`, split it into sub-waves of 3–5 tasks each
  e. Add the wave to the wave list; add its tasks to `resolved`
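Assuming dependencies are already resolved into a `deps` map (task ID → set of prerequisite IDs), the wave loop might look like the sketch below. It simplifies the sort step to task-ID order (no priorities) and splits waves at a fixed maximum:

```python
def compute_waves(deps, resolved=frozenset(), max_wave=5):
    """Group tasks into waves; each wave's dependencies lie in earlier waves."""
    resolved = set(resolved)
    pending = {t for t in deps if t not in resolved}
    waves = []
    while pending:
        wave = sorted(t for t in pending if deps[t] <= resolved)
        if not wave:  # step (b): no task is ready but tasks remain
            raise ValueError(f"Circular dependency among: {sorted(pending)}")
        for i in range(0, len(wave), max_wave):  # step (d): split oversized waves
            waves.append(wave[i:i + max_wave])
        resolved |= set(wave)
        pending -= set(wave)
    return waves

deps = {"1.1": set(), "2.1": {"1.1"}, "2.2": {"1.1"}, "2.3": {"1.1"}, "3.1": {"2.3"}}
print(compute_waves(deps))
# → [['1.1'], ['2.1', '2.2', '2.3'], ['3.1']]
```

Seeding `resolved` from `wave-state.yaml` on resume simply removes completed tasks from the pending set, so earlier waves drop out of the plan.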
3. File-Overlap Detection¶
For each wave with 2+ tasks, check if any tasks modify the same files:
⚠ File overlap detected in Wave 2:
Task 2.1 and Task 2.3 both modify: src/parser.ts
Options:
(a) Serialize: move Task 2.3 to a new sub-wave after 2.1 completes
(b) Proceed anyway (risk merge conflicts)
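The overlap check can be sketched as a pairwise intersection over a task → files map, assuming each task's `**File**` annotations have already been collected:

```python
from itertools import combinations

def file_overlaps(wave_files: dict) -> list:
    """Return (task_a, task_b, shared_files) for every overlapping pair in a wave."""
    return [
        (a, b, wave_files[a] & wave_files[b])
        for a, b in combinations(sorted(wave_files), 2)
        if wave_files[a] & wave_files[b]
    ]

wave = {
    "2.1": {"src/parser.ts"},
    "2.2": {"src/validator.ts"},
    "2.3": {"src/parser.ts", "tests/parser.test.ts"},
}
print(file_overlaps(wave))
# → [('2.1', '2.3', {'src/parser.ts'})]
```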
4. Display Wave Plan and Confirm¶
Wave Execution Plan
═══════════════════════════════════════════
Wave 1 (sequential — 1 task):
1.1 Setup environment
Wave 2 (parallel — 3 tasks):
2.1 Implement parser
2.2 Implement validator
2.3 Write unit tests
Wave 3 (sequential — 1 task):
3.1 Integration testing
Total: 5 tasks in 3 waves
═══════════════════════════════════════════
Ask user to confirm before starting execution.
5. Dispatch Agents Wave-by-Wave¶
For each wave (sequential):
Single-task wave: Execute directly in current directory (no worktree overhead).
Multi-task wave: Dispatch one Agent per task with isolation: "worktree":
- Pass task description, file paths, and verification steps
- Dispatch ALL agents in a single message (parallel execution)
- Wait for all agents to complete
6. Merge and Checkpoint¶
After each wave completes:
- Merge successful worktree branches: git merge <branch> --no-ff -m "Wave N: Task M.N — <name>"
- Handle merge conflicts: pause, display, ask user to resolve
- Clean up worktrees
- Record failed tasks in wave-state.yaml, mark dependents as blocked
- Display checkpoint summary:
Wave 2 Complete
─────────────────────────────────────
✓ 2.1 Implement parser completed
✓ 2.2 Implement validator completed
✗ 2.3 Write unit tests failed
Impact: 3.1 is blocked (depends on 2.3)
─────────────────────────────────────
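The merge-and-cleanup loop after each wave can be sketched as a command sequence; the branch and worktree naming scheme and the injected `run` callable are illustrative assumptions, not a fixed convention:

```python
def merge_wave(branches: dict, wave: int, run) -> None:
    """Merge each successful task branch with --no-ff, then remove its worktree."""
    for task_id, branch in branches.items():
        run(["git", "merge", branch, "--no-ff", "-m", f"Wave {wave}: Task {task_id}"])
        run(["git", "worktree", "remove", f".worktrees/wave-{wave}-task-{task_id}"])

# Record the commands instead of executing them, to show the sequence
calls = []
merge_wave({"2.1": "wave-2-task-2.1"}, wave=2, run=calls.append)
print(calls)
```

Injecting `run` (e.g. `subprocess.run` in real use) keeps the sequencing testable and makes it easy to pause on a failed merge and ask the user to resolve conflicts.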
When NOT to Use¶
- Related failures: fixing one might fix others - investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing same files, using same resources)
Real Example from Session¶
Scenario: 6 test failures across 3 files after major refactoring
Failures:
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
- Agent 1: replaced timeouts with event-based waiting
- Agent 2: fixed an event structure bug (threadId in the wrong place)
- Agent 3: added a wait for async tool execution to complete
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
Key Benefits¶
- Parallelization - Multiple investigations happen simultaneously
- Focus - Each agent has narrow scope, less context to track
- Independence - Agents don't interfere with each other
- Speed - 3 problems solved in the time of one
Verification¶
After agents return:
1. Review each summary - understand what changed
2. Check for conflicts - did agents edit the same code?
3. Run the full suite - verify all fixes work together
4. Spot check - agents can make systematic errors
Real-World Impact¶
From a debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes