Running Agents

Run a single AI agent in three lines.

This is the minimal path. No orchestrator, no guardrails, no memory – just a runner function, an agent, and an input. When you need more, layer in the orchestrator or the full agent stack.


What Is a Runner?

A runner is an async function that sends a prompt to an LLM provider and returns a standardized result. It handles the HTTP call, authentication, and response parsing for a specific provider (OpenAI, Anthropic, Ollama, etc.) so your application code stays provider-agnostic. Think of it as a thin adapter: (agent, input) => RunResult.

Directive ships pre-built runners (createOpenAIRunner, createAnthropicRunner, createOllamaRunner) and a createRunner helper for custom providers. Every runner returns the same RunResult shape – swap providers by changing one line.
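
Because every runner shares that signature, swapping providers really is a one-line change. A minimal sketch, using the factory options shown later on this page:

import { createOpenAIRunner } from '@directive-run/ai/openai';
import { createOllamaRunner } from '@directive-run/ai/ollama';

// Two interchangeable runners – both are (agent, input) => RunResult
const cloud = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });
const local = createOllamaRunner({ model: 'llama3' });

// The rest of the application never needs to know which one it got
const runner = process.env.USE_LOCAL ? local : cloud;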


Quick Start

import { createOpenAIRunner } from '@directive-run/ai/openai';

// Create a runner for OpenAI (just needs an API key)
const runner = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });

// Run an agent – pass the agent definition and the user input
const result = await runner(
  { name: 'assistant', instructions: 'You are helpful.', model: 'gpt-4o' },
  'What is WebAssembly?'
);

console.log(result.output);      // "WebAssembly is..."
console.log(result.totalTokens); // 142
console.log(result.tokenUsage);  // { inputTokens: 42, outputTokens: 100 }

That's it. runner is a plain async function – no framework, no state, no setup.


Choose a Provider

Directive ships pre-built runners for common providers. Each returns a standard AgentRunner:

OpenAI

import { createOpenAIRunner } from '@directive-run/ai/openai';

const runner = createOpenAIRunner({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',                               // Default model (agent can override)
  baseURL: 'https://api.openai.com/v1',           // Works with Azure, Together, etc.
});
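
Because baseURL is configurable, the same factory can point at any OpenAI-compatible endpoint. A sketch against Together AI (the endpoint and model name are illustrative, not verified values):

const together = createOpenAIRunner({
  apiKey: process.env.TOGETHER_API_KEY!,
  baseURL: 'https://api.together.xyz/v1',
  model: 'meta-llama/Llama-3-70b-chat-hf',
});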

Anthropic (Claude)

import { createAnthropicRunner } from '@directive-run/ai/anthropic';

const runner = createAnthropicRunner({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-sonnet-4-5-20250929',     // Default Claude model
  maxTokens: 4096,                         // Max output tokens per request
});

Ollama (Local)

import { createOllamaRunner } from '@directive-run/ai/ollama';

// Connect to a locally running Ollama instance – no API key needed
const runner = createOllamaRunner({
  model: 'llama3',
  baseURL: 'http://localhost:11434',  // Default Ollama address
});

Define an Agent

An agent is a plain object with name, instructions, and model:

import type { AgentLike } from '@directive-run/ai';

// An agent is a plain object – name, instructions, and optional model
const agent: AgentLike = {
  name: 'code-reviewer',
  instructions: 'You review code for bugs, security issues, and style.',
  model: 'gpt-4o',  // Optional – falls back to the runner's default model
};

The model field is optional – if omitted, the runner's default model is used.
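
For example, with a runner-level default in place, an agent that omits model simply inherits it (a small sketch):

import { createOpenAIRunner } from '@directive-run/ai/openai';

const miniRunner = createOpenAIRunner({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',  // Runner-level default
});

// No model on the agent – this call runs on gpt-4o-mini
await miniRunner({ name: 'note-taker', instructions: 'Take brief notes.' }, 'WebAssembly basics');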


Run Result

Every runner() call returns a RunResult:

const result = await runner(agent, 'Review this function: function add(a, b) { return a + b; }');

// Every RunResult includes these fields
result.output;        // string – the agent's response
result.messages;      // Message[] – full conversation (user + assistant turns)
result.toolCalls;     // ToolCall[] – any tool calls made (empty for basic runs)
result.totalTokens;   // number – total tokens consumed
result.tokenUsage;    // { inputTokens, outputTokens } – breakdown by direction
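
The messages array is handy for transcripts and logging. A sketch, assuming the { role, content } message shape shown in the custom-runner example below:

for (const message of result.messages) {
  console.log(`[${message.role}] ${message.content}`);
}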

Cost Tracking

Every adapter returns a tokenUsage breakdown alongside totalTokens. Pair it with the pricing constants each adapter exports:

import { estimateCost } from '@directive-run/ai';
import { createOpenAIRunner, OPENAI_PRICING } from '@directive-run/ai/openai';

const runner = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });
const result = await runner(agent, 'Summarize this document...');

const { inputTokens, outputTokens } = result.tokenUsage!;
const cost =
  estimateCost(inputTokens, OPENAI_PRICING['gpt-4o'].input) +
  estimateCost(outputTokens, OPENAI_PRICING['gpt-4o'].output);

console.log(`$${cost.toFixed(6)}`); // "$0.001025"
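
If you price runs in more than one place, the pattern folds into a small helper. This wrapper is not part of the library, just a sketch over estimateCost:

// Combine the input and output legs of a run into one dollar figure
function runCost(
  usage: { inputTokens: number; outputTokens: number },
  pricing: { input: number; output: number },
): number {
  return (
    estimateCost(usage.inputTokens, pricing.input) +
    estimateCost(usage.outputTokens, pricing.output)
  );
}

console.log(runCost(result.tokenUsage!, OPENAI_PRICING['gpt-4o']));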

Available pricing constants:

Import                                             | Models
OPENAI_PRICING from @directive-run/ai/openai       | gpt-4o, gpt-4o-mini, gpt-4-turbo, o3-mini
ANTHROPIC_PRICING from @directive-run/ai/anthropic | claude-sonnet-4-5-20250929, claude-haiku-3-5-20241022, claude-opus-4-20250514

Pricing disclaimer

Pricing changes over time. The constants are provided as a convenience and may not reflect the latest rates. Always verify at your provider's pricing page.


Lifecycle Hooks

Attach hooks to any adapter for tracing, logging, and metrics without modifying application code:

import { createAnthropicRunner } from '@directive-run/ai/anthropic';
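
// `metrics` and `Sentry` below stand in for whatever observability
// clients your application already uses – they are not part of Directive.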

const runner = createAnthropicRunner({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  hooks: {
    onBeforeCall: ({ agent, input }) => {
      console.log(`${agent.name}`, input.slice(0, 50));
    },
    onAfterCall: ({ durationMs, tokenUsage, totalTokens }) => {
      metrics.track('llm_call', {
        durationMs,
        inputTokens: tokenUsage.inputTokens,
        outputTokens: tokenUsage.outputTokens,
        totalTokens,
      });
    },
    onError: ({ error, durationMs }) => {
      Sentry.captureException(error, { extra: { durationMs } });
    },
  },
});

Hook         | Fires                       | Payload
onBeforeCall | Before each LLM API call    | agent, input, timestamp
onAfterCall  | After a successful response | agent, input, output, totalTokens, tokenUsage, durationMs, timestamp
onError      | When a call fails           | agent, input, error, durationMs, timestamp

Hooks work on both standard runners (createOpenAIRunner, createAnthropicRunner, createOllamaRunner) and streaming runners (createOpenAIStreamingRunner, createAnthropicStreamingRunner).
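
Because hooks are plain callbacks, they compose with ordinary variables. For instance, a running token total across every call (a sketch using the OpenAI runner; any runner that accepts hooks works the same way):

import { createOpenAIRunner } from '@directive-run/ai/openai';

let tokensSpent = 0;

const tracked = createOpenAIRunner({
  apiKey: process.env.OPENAI_API_KEY!,
  hooks: {
    // Accumulate the total after each successful call
    onAfterCall: ({ totalTokens }) => {
      tokensSpent += totalTokens;
    },
  },
});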


Custom Runner

For providers without a pre-built helper, use createRunner:

import { createRunner } from '@directive-run/ai';

const runner = createRunner({
  // Build the HTTP request from the agent definition and user input
  buildRequest: (agent, input) => ({
    url: 'https://my-llm.example.com/chat',
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ...' },
      body: JSON.stringify({
        model: agent.model ?? 'default-model',
        system: agent.instructions ?? '',
        messages: [{ role: 'user', content: input }],
      }),
    },
  }),

  // Extract the text and token count from the raw HTTP response
  parseResponse: async (res) => {
    const data = await res.json();

    return {
      text: data.output ?? '',
      totalTokens: data.usage?.total ?? 0,
    };
  },
});
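
The resulting runner is called exactly like the built-in ones and returns the same RunResult shape:

const result = await runner(
  { name: 'helper', instructions: 'Answer briefly.' },
  'Ping?'
);
console.log(result.output);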

Or write an AgentRunner from scratch:

import type { AgentRunner } from '@directive-run/ai';

// Implement the AgentRunner interface from scratch
const runner: AgentRunner = async (agent, input, options) => {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    signal: options?.signal,         // Support cancellation via AbortSignal
    body: JSON.stringify({ model: agent.model, prompt: input }),
  });
  const data = await response.json();

  // Return a standard RunResult so it works with the orchestrator and stack
  return {
    output: data.text,
    messages: [
      { role: 'user', content: input },
      { role: 'assistant', content: data.text },
    ],
    toolCalls: [],
    totalTokens: data.tokens ?? 0,
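    // If your endpoint reports a breakdown, a tokenUsage field
    // ({ inputTokens, outputTokens }) can be returned here too –
    // the cost-tracking example above relies on it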
  };
};

When to Add More

The raw runner is perfect for scripts, one-off calls, and simple integrations. Layer in more features as your needs grow:

Need                                        | Solution
Retry with backoff, fallback providers      | Resilience & Routing
Cost budget limits, model routing           | Resilience & Routing
Typed JSON output from LLMs                 | Resilience & Routing
Guardrails (input/output validation)        | Orchestrator
Approval workflows                          | Orchestrator
Token budgets                               | Orchestrator
Reactive UI state                           | Orchestrator + Framework hooks
Memory / conversation context               | Agent Stack
Caching, circuit breakers, observability    | Agent Stack
Parallel / sequential / supervisor patterns | Multi-Agent
