# AI & Agents Overview
The AI adapter brings Directive's constraint system to AI agent orchestration. Wrap any LLM framework with safety guardrails, approval workflows, token budgets, and state persistence.
## Architecture
Directive doesn't replace your agent framework – it wraps it:
```
Your Agent Framework (OpenAI, Anthropic, LangChain, etc.)
        ↕
Directive AI Adapter (guardrails, constraints, state)
        ↕
Your Application
```
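The wrapping pattern can be sketched in a few lines. The `AgentRunner` interface below is a simplified stand-in for the library's runner contract, and `withGuardrails` is an illustrative helper, not the adapter's actual implementation:

```typescript
// Minimal runner contract: an agent takes input text and produces output text.
interface AgentRunner {
  run(input: string): Promise<string>;
}

// A framework-specific runner (an echo stub standing in for
// OpenAI, Anthropic, LangChain, etc.).
const frameworkRunner: AgentRunner = {
  run: async (input) => `echo: ${input}`,
};

// The adapter layer wraps the runner without replacing it: it validates
// input first, then delegates to the underlying framework.
function withGuardrails(
  runner: AgentRunner,
  validate: (s: string) => boolean,
): AgentRunner {
  return {
    run: async (input) => {
      if (!validate(input)) throw new Error('input blocked by guardrail');
      return runner.run(input);
    },
  };
}

const guarded = withGuardrails(frameworkRunner, (s) => !s.includes('ssn'));
guarded.run('Hello!').then(console.log); // prints "echo: Hello!"
```

Because the wrapper satisfies the same interface as the runner it wraps, layers compose freely and the application never needs to know which framework sits underneath.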
## Learning Path
Build up from simple to complex:
| Level | Page | What You Learn |
|---|---|---|
| 1 | Running Agents | End-to-end examples and deployment patterns |
| 2 | Resilience & Routing | Retry, fallback, budgets, model selection, structured outputs |
| 3 | Orchestrator | Single-agent runs with guardrails and constraints |
| 4 | Agent Stack | Composable agent pipelines with .run() / .stream() / .structured() |
| 5 | Guardrails | Input/output/tool-call validation, PII detection, moderation |
| 6 | Streaming | Real-time token streaming with backpressure and stream guardrails |
| 7 | Multi-Agent | Parallel, sequential, and supervisor execution patterns |
| 8 | MCP Integration | Model Context Protocol tool servers |
| 9 | SSE Transport | Server-Sent Events streaming for HTTP endpoints |
| 10 | RAG Enricher | Embedding-based retrieval-augmented generation |
## Key Concepts
| Concept | Description |
|---|---|
| Orchestrator | Wraps an AgentRunner with constraints, guardrails, and state tracking |
| Agent Stack | Composable .run() / .stream() / .structured() API |
| Guardrails | Input, output, and tool-call validators that block or transform data |
| Constraints | Declarative rules (e.g., "if confidence < 0.7, escalate to expert") |
| Memory | Sliding window, token-based, or hybrid conversation management |
| Resilience | Intelligent retry, provider fallback chains, and cost budget guards |
| Circuit Breaker | Automatic fault isolation for failing agent calls |
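The constraint example from the table, "if confidence < 0.7, escalate to expert", can be sketched as a declarative when/require rule. The `Facts`, `Constraint`, and `evaluate` shapes below are illustrative assumptions modeled on the quick example, not the library's exact API:

```typescript
// Facts describe the current agent state that rules are evaluated against.
type Facts = { agent: { confidence: number; tokenUsage: number } };

// A constraint pairs a predicate over facts with a required action.
type Constraint = {
  when: (facts: Facts) => boolean;
  require: { type: string };
};

const escalate: Constraint = {
  when: (facts) => facts.agent.confidence < 0.7,
  require: { type: 'ESCALATE_TO_EXPERT' },
};

// A minimal evaluator: collect the actions required by the matching rules.
function evaluate(constraints: Constraint[], facts: Facts): string[] {
  return constraints.filter((c) => c.when(facts)).map((c) => c.require.type);
}

const actions = evaluate([escalate], {
  agent: { confidence: 0.5, tokenUsage: 0 },
});
// actions === ['ESCALATE_TO_EXPERT']
```

Keeping rules declarative like this means they can be inspected, logged, and tested independently of the agent code that triggers them.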
## Quick Example
```typescript
import { createAgentOrchestrator, createPIIGuardrail } from '@directive-run/ai';

const orchestrator = createAgentOrchestrator({
  runner: myAgentRunner,
  // Block any user input that contains personal information
  guardrails: {
    input: [createPIIGuardrail({ action: 'block' })],
  },
  // Pause agents automatically when token spend exceeds the budget
  constraints: {
    budgetLimit: {
      when: (facts) => facts.agent.tokenUsage > 10000,
      require: { type: 'PAUSE_AGENTS' },
    },
  },
  maxTokenBudget: 10000,
});

// Run the agent – guardrails and constraints are applied automatically
const result = await orchestrator.run(myAgent, 'Hello!');
```
## Safety & Compliance
Directive provides security guardrails and compliance tooling for AI agent systems; see the Security & Compliance section for full details. A typical deployment applies multiple layers of protection:
```
User Input
  → Prompt Injection Detection (block attacks before they reach agents)
  → PII Detection (redact sensitive data from input)
  → Agent Execution (safe to process after filtering)
  → Output PII Scan (catch any data leaks in responses)
  → Audit Trail (log every operation for compliance)
```
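The layered pipeline above can be modeled as a chain of input filters, where each layer either blocks, redacts, or passes the text through. The detection logic here is a deliberately naive stand-in for the library's detectors, shown only to illustrate the composition:

```typescript
// A filter throws to block, or returns (possibly redacted) text.
type Filter = (input: string) => string;

// Layer 1: block obvious instruction-override attempts (toy heuristic).
const injectionCheck: Filter = (input) => {
  if (/ignore previous instructions/i.test(input)) {
    throw new Error('prompt injection blocked');
  }
  return input;
};

// Layer 2: redact PII before it reaches the agent (naive SSN pattern).
const piiRedact: Filter = (input) =>
  input.replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[REDACTED]');

// Compose the layers into a single sanitizer, applied in order.
const pipeline = (filters: Filter[]) => (input: string) =>
  filters.reduce((text, f) => f(text), input);

const sanitize = pipeline([injectionCheck, piiRedact]);
console.log(sanitize('My SSN is 123-45-6789'));
// prints "My SSN is [REDACTED]"
```

Ordering matters: injection checks run first so that attack payloads are rejected outright rather than merely redacted, and the same chain shape applies to output scanning on the response path.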
| Feature | Mechanism | Threat Addressed |
|---|---|---|
| PII Detection | Input/output scanning | Personally identifiable information leaking to/from agents |
| Prompt Injection | Input validation | Jailbreaks, instruction overrides, encoding evasion |
| Audit Trail | Observability | Tamper-evident logging of every system operation |
| GDPR/CCPA | Data governance | Right to erasure, data export, consent tracking, retention |
Recommended feature combinations by scenario:

| Scenario | Features |
|---|---|
| User-facing chatbot | PII detection + prompt injection + audit trail |
| Internal tool | Audit trail + GDPR compliance |
| Healthcare/finance | All four features |
| Development/testing | Audit trail only |
## Next Steps
- New to the AI adapter? Start with Running Agents
- Need resilience? See Resilience & Routing for retry, fallback, and budgets
- Want streaming? See Streaming
- Need safety? See Guardrails and Security & Compliance

