Multi-Agent Orchestrator
Execution Patterns
Declarative and imperative execution patterns for coordinating multiple agents.
Patterns define how agents run together. Register named patterns for reuse, or call imperative methods for one-off execution. All patterns respect agent concurrency limits, timeouts, and guardrails.
Quick Start
import { createMultiAgentOrchestrator, parallel, sequential, concatResults } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
researcher: {
agent: researcher,
maxConcurrent: 3,
},
writer: { agent: writer },
reviewer: { agent: reviewer },
},
patterns: {
research: parallel(
['researcher', 'researcher'],
(results) => concatResults(results),
),
pipeline: sequential(['researcher', 'writer', 'reviewer']),
},
});
// Run a named pattern
const result = await orchestrator.runPattern('pipeline', 'Write about WASM');
Pattern Overview
| Pattern | Use Case | Agents | Result |
|---|---|---|---|
| parallel | Fan-out, redundancy | Same or different | Merged via callback |
| sequential | Pipelines, chains | Different roles | Last agent's output |
| supervisor | Dynamic delegation | Manager + workers | Supervisor's final answer |
| dag | Complex dependencies | Any topology | Merged leaf outputs |
| race | Fastest wins | Competing agents | Winner's output |
| reflect | Self-improvement | Agent + evaluator | Best iteration |
| debate | Adversarial refinement | Multiple + judge | Winner per round |
| goal | Desired-state goal resolution | Any topology | Achieved facts |
Parallel
Run multiple agents simultaneously and merge their results.
Named Pattern
import { parallel, concatResults } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
researcher: {
agent: researcher,
maxConcurrent: 3,
},
},
patterns: {
research: parallel(
['researcher', 'researcher', 'researcher'],
(results) => concatResults(results, '\n\n---\n\n'),
{ minSuccess: 2 }
),
},
});
const output = await orchestrator.runPattern<string>('research', 'Explain WASM');
Imperative
// Same input to all agents
const combined = await orchestrator.runParallel(
['researcher', 'researcher'],
'What are WebSockets?',
(results) => concatResults(results)
);
// Different inputs per agent
const answers = await orchestrator.runParallel(
['researcher', 'researcher', 'researcher'],
['Explain REST', 'Explain GraphQL', 'Explain gRPC'],
(results) => collectOutputs(results)
);
When passing an array of inputs, the count must match the agent count.
Options
| Option | Type | Default | Description |
|---|---|---|---|
| minSuccess | number | all | Minimum successful results. When set, agent failures are caught silently as long as this many succeed |
| timeout | number | – | Overall timeout for the batch (ms) |
Sequential
Chain agents so each one's output feeds into the next.
Named Pattern
import { sequential } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
researcher: { agent: researcher },
writer: { agent: writer },
reviewer: { agent: reviewer },
},
patterns: {
pipeline: sequential(['researcher', 'writer', 'reviewer'], {
transform: (output, agentId) => {
if (agentId === 'researcher') {
return `Write based on this research:\n\n${output}`;
}
if (agentId === 'writer') {
return `Review this draft:\n\n${output}`;
}
return String(output);
},
}),
},
});
Imperative
const results = await orchestrator.runSequential<string>(
['researcher', 'writer', 'reviewer'],
'Create a blog post about AI safety',
{
transform: (output, agentId, index) => {
if (agentId === 'researcher') {
return `Write based on this research:\n\n${output}`;
}
return String(output);
},
}
);
const finalReview = results[results.length - 1].output;
const totalTokens = aggregateTokens(results);
Options
| Option | Type | Default | Description |
|---|---|---|---|
| transform | (output, agentId, index) => string | auto-stringify | Shape each agent's output for the next |
| extract | (output) => T | identity | Extract final result (named patterns only) |
| continueOnError | boolean | false | Skip failed agents instead of aborting |
Supervisor
A supervisor agent delegates work to workers in a loop until it declares the task complete.
Named Pattern
import { createMultiAgentOrchestrator, supervisor, collectOutputs, aggregateTokens } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
manager: {
agent: manager,
maxConcurrent: 1,
},
researcher: {
agent: researcher,
maxConcurrent: 3,
},
writer: {
agent: writer,
maxConcurrent: 1,
},
},
patterns: {
managed: supervisor('manager', ['researcher', 'writer'], {
maxRounds: 5,
extract: (supervisorOutput, workerResults) => ({
answer: supervisorOutput,
sources: collectOutputs(workerResults),
tokens: aggregateTokens(workerResults),
}),
}),
},
});
const result = await orchestrator.runPattern('managed', 'Research and write about WASM');
Imperative
const result = await orchestrator.runSupervisor(
'manager',
['researcher', 'writer'],
'Research and write about WASM',
{
maxRounds: 5,
extract: (supervisorOutput, workerResults) => ({
answer: supervisorOutput,
sources: collectOutputs(workerResults),
tokens: aggregateTokens(workerResults),
}),
}
);
How the Loop Works
- Runs the supervisor with the initial input
- Parses the supervisor's output as JSON
- If the output is { action: "delegate", worker: "researcher", workerInput: "..." }, runs that worker
- Feeds the worker result back: "Worker researcher completed with result: ..."
- Repeats until { action: "complete" } or maxRounds is reached
The loop validates worker names: delegating to an unregistered worker throws immediately.
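The delegation protocol above can be sketched as a discriminated union plus the validation rule. The type and function names here are hypothetical; only the JSON shapes come from the loop description:

```typescript
// Hypothetical types mirroring the JSON protocol described above.
type SupervisorDecision =
  | { action: 'delegate'; worker: string; workerInput: string }
  | { action: 'complete'; result?: string };

// Parse one supervisor turn; unregistered workers throw immediately,
// matching the validation rule described above.
function parseDecision(raw: string, workers: string[]): SupervisorDecision {
  const decision = JSON.parse(raw) as SupervisorDecision;
  if (decision.action === 'delegate' && !workers.includes(decision.worker)) {
    throw new Error(`Unknown worker: ${decision.worker}`);
  }
  return decision;
}
```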
Options
| Option | Type | Default | Description |
|---|---|---|---|
| maxRounds | number | 5 | Maximum delegation rounds |
| extract | (output, workerResults) => T | identity | Extract final result |
DAG (Directed Acyclic Graph)
Define complex dependency graphs where agents run as soon as their dependencies complete.
Named Pattern
import { createMultiAgentOrchestrator, dag, concatResults } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
researcher: { agent: researcher },
analyst: { agent: analyst },
writer: { agent: writer },
editor: { agent: editor },
},
patterns: {
pipeline: dag(
{
researcher: { agent: 'researcher' },
analyst: { agent: 'analyst', deps: ['researcher'] },
writer: { agent: 'writer', deps: ['researcher'] },
editor: { agent: 'editor', deps: ['analyst', 'writer'], priority: 10 },
},
(context) => concatResults(Object.values(context.results).map((r) => String(r.output))),
{ timeout: 60000, maxConcurrent: 3 }
),
},
});
const result = await orchestrator.runPattern('pipeline', 'Research, analyze, and write about WASM');
Imperative
const result = await orchestrator.runDag(
{
researcher: { agent: 'researcher' },
analyst: { agent: 'analyst', deps: ['researcher'] },
writer: { agent: 'writer', deps: ['researcher'] },
editor: { agent: 'editor', deps: ['analyst', 'writer'] },
},
'Research, analyze, and write about WASM',
(context) => concatResults(Object.values(context.results).map((r) => String(r.output))),
{ timeout: 60000 }
);
DagNode
| Field | Type | Default | Description |
|---|---|---|---|
| agent | string | required | Agent ID |
| deps | string[] | [] | Upstream node IDs that must complete first |
| when | (context: DagExecutionContext) => boolean | – | Conditional edge – evaluated when deps are met |
| transform | (context: DagExecutionContext) => string | – | Build input from dependency results |
| timeout | number | – | Per-node timeout (ms) |
| priority | number | 0 | Tiebreaker when multiple nodes are ready (higher = first) |
DagExecutionContext
interface DagExecutionContext {
input: string; // Original input to the DAG
outputs: Record<string, unknown>; // Outputs keyed by node ID
statuses: Record<string, DagNodeStatus>; // Statuses keyed by node ID
errors: Record<string, string>; // Error messages keyed by node ID
results: Record<string, RunResult<unknown>>; // Full RunResult keyed by node ID
}
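The when and transform node options both receive this context. Here is a sketch of how a node like the editor from the example above might use it, with a locally defined stand-in for DagExecutionContext (field names taken from the interface above):

```typescript
// Stand-in for DagExecutionContext, trimmed to the fields used here.
interface Ctx {
  input: string;
  outputs: Record<string, unknown>;
  statuses: Record<string, string>;
  errors: Record<string, string>;
}

// Conditional edge: run the editor only if the analyst succeeded.
const editorWhen = (ctx: Ctx): boolean =>
  ctx.errors['analyst'] === undefined && ctx.outputs['analyst'] != null;

// Input builder: combine both upstream outputs into one prompt.
const editorTransform = (ctx: Ctx): string =>
  `Edit this draft:\n${String(ctx.outputs['writer'])}\n\nAnalysis:\n${String(ctx.outputs['analyst'])}`;
```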
Options
| Option | Type | Default | Description |
|---|---|---|---|
| timeout | number | – | Overall DAG timeout (ms) |
| maxConcurrent | number | – | Max parallel nodes |
| onNodeError | "fail" \| "skip-downstream" \| "continue" | "fail" | Error handling strategy |
Race
Run multiple agents in parallel – the first successful result wins. Remaining agents are cancelled.
Named Pattern
import { createMultiAgentOrchestrator, race } from '@directive-run/ai';
import type { RaceResult } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
'gpt4-agent': { agent: gpt4Agent },
'claude-agent': { agent: claudeAgent },
'gemini-agent': { agent: geminiAgent },
},
patterns: {
fastest: race<string>(
['gpt4-agent', 'claude-agent', 'gemini-agent'],
{
extract: (output) => String(output),
timeout: 10000,
minSuccess: 1,
}
),
},
});
const result: RaceResult<string> = await orchestrator.runPattern('fastest', 'Summarize this');
console.log(result.winnerId); // 'claude-agent'
console.log(result.result); // The winning output
Imperative
const result = await orchestrator.runRace<string>(
['gpt4-agent', 'claude-agent', 'gemini-agent'],
'Summarize this',
{
extract: (output) => String(output),
timeout: 10000,
}
);
console.log(result.winnerId);
console.log(result.result);
RaceResult
interface RaceResult<T> {
winnerId: string;
result: T;
allResults?: RunResult<unknown>[];
}
Options
| Option | Type | Default | Description |
|---|---|---|---|
| extract | (output) => T | identity | Extract result from winner's output |
| timeout | number | – | Overall timeout (ms) |
| minSuccess | number | 1 | Minimum successful results before declaring a winner |
| signal | AbortSignal | – | External cancellation signal |
Timeline events: race_start, race_winner, race_cancelled.
Reflect
An agent produces output, an evaluator scores it, and the agent retries with feedback until the score passes a threshold.
Named Pattern
import { createMultiAgentOrchestrator, reflect } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
writer: { agent: writer },
evaluator: { agent: evaluator },
},
patterns: {
selfImprove: reflect<string>('writer', 'evaluator', {
maxIterations: 3,
threshold: 0.8,
onExhausted: 'accept-best',
onIteration: ({ iteration, score, feedback }) => {
console.log(`Iteration ${iteration}: score=${score}, feedback=${feedback}`);
},
}),
},
});
const result = await orchestrator.runPattern('selfImprove', 'Write a technical blog post');
console.log(result.result); // Best output
console.log(result.iterations); // Number of iterations run
console.log(result.exhausted); // true if maxIterations reached without passing
Imperative
const result = await orchestrator.runReflect<string>(
'writer',
'evaluator',
'Write a technical blog post',
{
maxIterations: 3,
threshold: 0.8,
onExhausted: 'accept-best',
}
);
console.log(result.result);
console.log(result.iterations);
The evaluator agent must return JSON with score (0–1) and optional feedback:
{ "score": 0.6, "feedback": "Needs more technical depth in section 2" }
Options
| Option | Type | Default | Description |
|---|---|---|---|
| maxIterations | number | 2 | Maximum improvement attempts |
| threshold | number \| ((iteration: number) => number) | – | Score threshold to pass (0–1), or a function for dynamic thresholds |
| parseEvaluation | (output) => { score, feedback? } | JSON.parse | Custom evaluation parser |
| buildRetryInput | (input, feedback, iteration) => string | – | Custom retry input builder (iteration is a number) |
| extract | (output) => T | identity | Extract final result |
| onExhausted | "accept-last" \| "accept-best" \| "throw" | "accept-last" | What to do when max iterations are reached |
| onIteration | (record) => void | – | Callback per iteration |
| signal | AbortSignal | – | Cancellation signal |
| timeout | number | – | Overall timeout (ms) |
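The threshold and parseEvaluation options can be filled in like this. Both are sketches matching the option signatures above; the decay schedule and the regex-based parser are illustrative choices, not library defaults:

```typescript
// Dynamic threshold: loosen the bar on later iterations, floored at 0.6.
const threshold = (iteration: number): number =>
  Math.max(0.6, 0.9 - 0.1 * iteration);

// Lenient parser: tolerate evaluators that wrap their JSON verdict in prose.
const parseEvaluation = (output: unknown): { score: number; feedback?: string } => {
  const match = String(output).match(/\{[\s\S]*\}/);
  if (!match) return { score: 0, feedback: 'unparseable evaluation' };
  return JSON.parse(match[0]) as { score: number; feedback?: string };
};
```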
Return Shape
interface ReflectResult<T> {
result: T;
iterations: number;
history: ReflectIterationRecord[];
exhausted: boolean;
}
Timeline events: reflection_iteration with score, feedback, and durationMs.
withReflection Middleware
Wrap any runner with reflection so that every call goes through evaluate-and-retry:
import { withReflection } from '@directive-run/ai';
const reflectingRunner = withReflection(runner, {
evaluator: evaluatorAgent,
evaluatorRunner: runner,
maxIterations: 3,
parseEvaluation: (output) => JSON.parse(String(output)),
buildRetryInput: (input, feedback, iteration) =>
`Attempt ${iteration}: ${feedback}\n\nOriginal: ${input}`,
onExhausted: 'accept-best',
});
// Every call now auto-reflects
const result = await reflectingRunner(agent, 'Write a blog post');
Debate
Multiple agents propose solutions, a judge evaluates each round, and the process repeats.
Named Pattern
import { createMultiAgentOrchestrator, debate } from '@directive-run/ai';
import type { DebateResult } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
optimist: { agent: optimist },
pessimist: { agent: pessimist },
realist: { agent: realist },
judge: { agent: judge },
},
patterns: {
adversarial: debate<string>({
agents: ['optimist', 'pessimist', 'realist'],
evaluator: 'judge',
maxRounds: 3,
extract: (output) => String(output),
}),
},
});
const result: DebateResult<string> = await orchestrator.runPattern(
'adversarial',
'Should we use microservices?'
);
console.log(result.winnerId);
console.log(result.rounds.length);
for (const round of result.rounds) {
console.log(round.proposals.map((p) => p.agentId));
console.log(round.judgement.winnerId, round.judgement.score);
}
Imperative
const result = await orchestrator.runDebate<string>(
{
agents: ['optimist', 'pessimist', 'realist'],
evaluator: 'judge',
maxRounds: 2,
},
'Should we use microservices?'
);
console.log(result.winnerId);
console.log(result.result);
DebateResult
interface DebateResult<T> {
winnerId: string;
result: T;
rounds: Array<{
proposals: Array<{ agentId: string; output: unknown }>;
judgement: { winnerId: string; feedback?: string; score?: number };
}>;
}
Options
| Option | Type | Default | Description |
|---|---|---|---|
| agents | string[] | required | Competing agent IDs |
| evaluator | string | required | Judge agent ID |
| maxRounds | number | 2 | Number of debate rounds |
| extract | (output) => T | identity | Extract final result |
| parseJudgement | (output) => { winnerId, feedback?, score? } | JSON.parse | Custom judge parser |
| signal | AbortSignal | – | Cancellation signal |
| timeout | number | – | Overall timeout (ms) |
Timeline events: debate_round with round number, winnerId, score, and agentCount.
Goal
Declare the desired end-state and let the runtime figure out which agents to run. Nodes declare what they produce and require — the runtime resolves the dependency graph and drives agents to goal achievement.
Goal vs DAG
DAG requires you to wire the execution graph manually with explicit deps edges — it's a static topology. Goal infers the graph from produces/requires declarations and drives toward a when() condition — it's dynamic, adaptive pursuit. Use DAG when you know the exact execution order upfront. Use Goal when you want the runtime to figure out ordering, handle stalls with relaxation, and track satisfaction progress toward a desired end-state.
Standalone utilities
Need goal planning without an orchestrator? Use planGoal(), validateGoal(), and getDependencyGraph() from @directive-run/ai. These work with the same produces/requires declarations. All 6 multi-step patterns support checkpointing for fault tolerance.
Quick Start
const result = await orchestrator.runGoal(
{
fetcher: {
agent: 'fetcher',
produces: ['data'],
extractOutput: (r) => ({ data: r.output }),
},
analyzer: {
agent: 'analyzer',
produces: ['analysis'],
requires: ['data'],
extractOutput: (r) => ({ analysis: r.output }),
},
},
{ query: 'market trends' },
(facts) => facts.analysis != null,
{ maxSteps: 5, extract: (facts) => facts.analysis },
);
Each node declares produces (fact keys it writes) and requires (fact keys it needs). The when callback defines the goal condition. The runtime iterates: find ready nodes, run them in parallel, merge output facts, check goal achievement.
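The iteration just described can be sketched as a plain loop. This is an illustration of the mechanics, not the library's code; nodes here run one at a time for simplicity, whereas the runtime runs ready nodes in parallel:

```typescript
type Facts = Record<string, unknown>;

// Resolution loop sketch: find ready nodes, run them, merge produced facts,
// check goal achievement. Returns whether the goal was achieved.
function resolveGoal(
  nodes: Record<string, { produces: string[]; requires?: string[] }>,
  facts: Facts,
  when: (facts: Facts) => boolean,
  run: (nodeId: string) => Facts, // stand-in for an agent run returning facts
  maxSteps = 10,
): boolean {
  const done = new Set<string>();
  for (let step = 0; step < maxSteps && !when(facts); step++) {
    const ready = Object.keys(nodes).filter(
      (id) => !done.has(id) && (nodes[id].requires ?? []).every((k) => facts[k] != null),
    );
    if (ready.length === 0) return false; // stalled: this is where relaxation kicks in
    for (const id of ready) {
      Object.assign(facts, run(id)); // merge output facts
      done.add(id);
    }
  }
  return when(facts);
}
```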
Named Pattern
Register a goal pattern for reuse:
import { goal } from '@directive-run/ai';
const orchestrator = createMultiAgentOrchestrator({
runner,
agents: {
researcher: { agent: researcher },
writer: { agent: writer },
reviewer: { agent: reviewer },
},
patterns: {
articlePipeline: goal(
{
researcher: {
agent: 'researcher',
produces: ['research.findings'],
requires: ['research.topic'],
extractOutput: (r) => ({ 'research.findings': r.output }),
},
writer: {
agent: 'writer',
produces: ['article.draft'],
requires: ['research.findings'],
buildInput: (facts) => `Write about: ${facts['research.findings']}`,
extractOutput: (r) => ({ 'article.draft': r.output }),
},
reviewer: {
agent: 'reviewer',
produces: ['article.approved'],
requires: ['article.draft'],
allowRerun: true,
extractOutput: (r) => ({
'article.approved': String(r.output).includes('APPROVED'),
}),
},
},
(facts) => facts['article.approved'] === true,
{ maxSteps: 10, extract: (facts) => facts['article.draft'] },
),
},
});
const result = await orchestrator.runPattern('articlePipeline', 'AI Safety');
Selection Strategies
Control which ready nodes run each step:
import { allReadyStrategy, highestImpactStrategy, costEfficientStrategy } from '@directive-run/ai';
// Run all ready nodes (default)
goal(nodes, when, { selectionStrategy: allReadyStrategy() });
// Pick top N by historical satisfaction impact
goal(nodes, when, { selectionStrategy: highestImpactStrategy({ topN: 2 }) });
// Prefer agents with lower token cost per satisfaction delta
goal(nodes, when, { selectionStrategy: costEfficientStrategy() });
Relaxation Tiers
When goal resolution stalls, progressively apply recovery strategies:
goal(nodes, when, {
relaxation: [
{ label: 'retry-reviewer', afterStallSteps: 3, strategy: { type: 'allow_rerun', nodes: ['reviewer'] } },
{ label: 'inject-defaults', afterStallSteps: 5, strategy: { type: 'inject_facts', facts: { 'article.approved': true } } },
{ label: 'accept-partial', afterStallSteps: 8, strategy: { type: 'accept_partial' } },
],
});
| Strategy | Effect |
|---|---|
| allow_rerun | Re-enable completed nodes for another run |
| inject_facts | Inject fact values to unblock dependencies |
| accept_partial | Return current facts as partial result |
| alternative_nodes | Add new nodes to the graph |
| custom | Run arbitrary async logic |
GoalResult
runGoal() returns a GoalResult<T> with goal achievement metadata:
| Field | Type | Description |
|---|---|---|
| achieved | boolean | Whether when() was satisfied |
| result | T | Extracted result (from extract, or raw facts) |
| facts | Record<string, unknown> | Final facts state |
| executionOrder | string[] | Nodes that ran, in order |
| steps | number | Total goal resolution steps |
| totalTokens | number | Tokens consumed |
| stepMetrics | GoalStepMetrics[] | Per-step satisfaction and timing |
| relaxations | RelaxationRecord[] | Applied relaxation events |
Explaining Results
explainGoal() converts a GoalResult into a human-readable step-by-step summary — useful for logging, LLM context, or debugging:
import { explainGoal } from '@directive-run/ai';
const result = await orchestrator.runGoal(nodes, input, when);
const explanation = explainGoal(result);
console.log(explanation.summary);
// "Goal achieved in 3 steps (550 tokens, 5200ms, final satisfaction: 1.000)"
for (const step of explanation.steps) {
console.log(step.text);
// "Step 1: Ran fetcher → satisfaction 0.000 → 0.330 (+0.330), produced: data (1800ms, 150 tokens)"
}
Checkpoint & Resume
Save goal resolution state at intervals for fault tolerance in long-running workflows:
const result = await orchestrator.runGoal(nodes, input, when, {
checkpoint: {
everyN: 5,
store: myCheckpointStore, // or uses orchestrator's store
labelPrefix: 'article-pipeline',
},
});
Resume from a saved checkpoint:
// Load the checkpoint (stored as systemExport JSON)
const checkpoint = await store.load(checkpointId);
const state = JSON.parse(checkpoint.systemExport) as GoalCheckpointState;
// Resume with the same pattern definition
const result = await orchestrator.resumeGoal(state, pattern);
The checkpoint captures facts, completed nodes, failure counts, step metrics, and relaxation state — everything needed to continue exactly where you left off.
All patterns support checkpoints
Checkpointing works with all multi-step patterns (sequential, supervisor, reflect, debate, DAG, goal). See the Pattern Checkpoints page for per-pattern examples, progress tracking, diffing, forking, and the full API reference.
Result Merging
Four built-in helpers for combining results from parallel runs:
import {
concatResults,
collectOutputs,
pickBestResult,
aggregateTokens,
} from '@directive-run/ai';
// Join string outputs with a separator (default: '\n\n')
const merged = concatResults(results, '\n\n---\n\n');
// Gather outputs into a typed array
const outputs = collectOutputs<string>(results);
// Select the best result by a scoring function
const best = pickBestResult(results, (r) =>
typeof r.output === 'string' ? r.output.length : 0
);
// Sum token usage
const totalTokens = aggregateTokens(results);
| Helper | Signature | Description |
|---|---|---|
| concatResults | (results, separator?) => string | Concatenate outputs. Non-strings are JSON.stringify'd |
| collectOutputs | (results) => T[] | Collect outputs into an array |
| pickBestResult | (results, scoreFn) => RunResult<T> | Highest-scoring result. Throws if empty |
| aggregateTokens | (results) => number | Sum totalTokens across results |
Agent Selection Helpers
Route work to agents based on runtime state using Directive constraints.
selectAgent
import { selectAgent } from '@directive-run/ai';
const routeToExpert = selectAgent(
(facts) => facts.complexity > 0.8,
'expert',
(facts) => String(facts.query),
100 // priority
);
// Dynamic agent selection
const dynamicRoute = selectAgent(
(facts) => facts.needsProcessing === true,
(facts) => facts.preferredAgent as string,
(facts) => `Process this: ${facts.data}`
);
runAgentRequirement
Create RUN_AGENT requirements for constraint definitions:
import { runAgentRequirement } from '@directive-run/ai';
const constraints = {
needsResearch: {
when: (facts) => facts.hasUnknowns,
require: runAgentRequirement('researcher', 'Find relevant data', {
priority: 'high',
}),
},
};
findAgentsByCapability
import { findAgentsByCapability } from '@directive-run/ai';
const matches = findAgentsByCapability(agents, ['search', 'summarize']);
// Returns agent IDs where capabilities include ALL required ones
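The ALL-match semantics noted in the comment can be sketched over a minimal registry shape (the real agent config has more fields; the names here are stand-ins):

```typescript
type MinimalRegistry = Record<string, { capabilities?: string[] }>;

// An agent matches only if it has every required capability.
function matchByCapability(agents: MinimalRegistry, required: string[]): string[] {
  return Object.entries(agents)
    .filter(([, cfg]) => required.every((c) => (cfg.capabilities ?? []).includes(c)))
    .map(([id]) => id);
}
```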
capabilityRoute
Create a constraint that routes by capability match:
import { capabilityRoute } from '@directive-run/ai';
const route = capabilityRoute(
agents,
(facts) => facts.requiredCapabilities as string[],
(facts) => facts.query as string,
{
priority: 50,
select: (matches, registry) => matches[0], // Custom tiebreaker
}
);
spawnOnCondition
Spawn an agent when a condition becomes true:
import { spawnOnCondition } from '@directive-run/ai';
const spawn = spawnOnCondition({
when: (facts) => facts.errorCount > 3,
agent: 'debugger',
input: 'Investigate recurring errors',
priority: 90,
});
spawnPool
Spawn multiple instances of an agent:
import { spawnPool } from '@directive-run/ai';
const pool = spawnPool(
(facts) => facts.batchReady === true,
{ agent: 'processor', input: 'Process batch item', count: 5 }
);
derivedConstraint
Trigger agent runs based on derived state:
import { derivedConstraint } from '@directive-run/ai';
const onHighCost = derivedConstraint(
'totalCost',
(value) => (value as number) > 100,
{ agent: 'cost-optimizer', input: 'Reduce costs', priority: 80 }
);
Pattern Composition
Compose multiple patterns into a pipeline where each pattern's output feeds as input to the next:
import { composePatterns, parallel, sequential, concatResults } from '@directive-run/ai';
const workflow = composePatterns(
parallel(['researcher', 'researcher'], (results) => concatResults(results)),
sequential(['writer', 'reviewer']),
);
const result = await workflow(orchestrator, 'Research and write about AI safety');
Between patterns, output is automatically stringified (string passes through; objects are JSON.stringify'd).
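That coercion rule amounts to the following sketch (an illustration of the described behavior, not the library's internals):

```typescript
// Strings pass through untouched; everything else is JSON-stringified.
const toNextInput = (output: unknown): string =>
  typeof output === 'string' ? output : JSON.stringify(output);
```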
Pattern Serialization
Save and restore pattern definitions:
import { patternToJSON, patternFromJSON } from '@directive-run/ai';
const json = patternToJSON(myPattern);
const restored = patternFromJSON<string>(json, {
merge: (results) => concatResults(results),
});
Pattern Visualization
Convert any pattern to a Mermaid diagram:
import { patternToMermaid, dag } from '@directive-run/ai';
const pipeline = dag({
fetch: { agent: 'fetcher' },
analyze: { agent: 'analyzer', deps: ['fetch'] },
report: { agent: 'reporter', deps: ['analyze'] },
});
console.log(patternToMermaid(pipeline, { direction: 'TD' }));
Works with serialized patterns too:
const json = patternToJSON(myPattern);
const diagram = patternToMermaid(json);
| Option | Type | Default | Description |
|---|---|---|---|
| direction | "LR" \| "TD" \| "TB" \| "RL" \| "BT" | "LR" | Graph flow direction |
| theme | "default" \| "dark" \| "forest" \| "neutral" | — | Mermaid theme hint |
| shapes.agent | "square" \| "round" \| "stadium" \| "hexagon" | "square" | Agent node shape |
| shapes.virtual | "circle" \| "square" \| "round" \| "stadium" | "circle" | Virtual node shape |
Next Steps
- Multi-Agent Orchestrator – Setup, configuration, and agent management
- Pattern Checkpoints – Save, resume, fork, and track progress
- Communication – Message bus and agent network
- Cross-Agent State – Shared derivations and scratchpad

