Anchor monitors agent behavior against your original intent and intervenes before drift causes damage. A single monitoring layer, for consumers and developers.
Every agent framework suffers from the same bug: over multiple steps, agents gradually lose fidelity to your original intent.
"Create a project in /new-dir." By step 4, the agent is writing files to /current-dir. Each step made local sense — the aggregate violated your intent.
A bot trained to respond as you starts fabricating opinions you never held, using phrasing patterns that don't match your style. Your persona drifted.
No system is watching the gap between original intent and current behavior. Anchor fills that gap — in real time, before damage occurs.
[Diagram: the agent loses context after 3 steps; Anchor detects the pivot and re-anchors it to the original intent.]
Anchor introduces three core concepts that work together to keep AI aligned with your intent.
Describe your task in plain language. Anchor extracts a structured contract: goal, scope, constraints, and context. Every agent action is checked against it.
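For concreteness, here is one plausible shape for the extracted contract. The four fields come straight from the description above; the TypeScript types, the IntentContract name, and the sample values are illustrative assumptions, not Anchor's documented schema.

// Sketch of an extracted contract. The field names match the four parts above;
// everything else (types, sample values) is assumed for illustration.
interface IntentContract {
  goal: string          // what the task should achieve
  scope: string[]       // where the agent may act
  constraints: string[] // what the agent must not do
  context: string       // background the agent should keep in mind
}

const extracted: IntentContract = {
  goal: 'Create a project in /new-dir',
  scope: ['/new-dir'],
  constraints: ["Don't touch /src"],
  context: 'Fresh project; the existing source tree stays untouched'
}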
Upload your notes, documents, and writing samples. Anchor builds a grounded RAG system from your data — your AI can only say what you actually said.
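The grounding idea in miniature: before the persona asserts something, check it against the corpus you uploaded and decline when nothing supports it. The sketch below is a deliberately tiny stand-in (substring match instead of real retrieval) and is not Anchor's implementation; Passage, supported, and groundedReply are hypothetical names.

// Toy grounding check: only assert claims that appear in the uploaded corpus.
// A real system would use embedding retrieval; this is purely illustrative.
type Passage = { source: string; text: string }

function supported(claim: string, corpus: Passage[]): boolean {
  const needle = claim.toLowerCase()
  return corpus.some(p => p.text.toLowerCase().includes(needle))
}

function groundedReply(claim: string, corpus: Passage[]): string {
  return supported(claim, corpus)
    ? claim
    : "I don't have a record of you saying that." // decline rather than fabricate
}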
Every action is scored in real time across three dimensions: scope, constraint, and intent alignment. Intervention is configurable: block, warn, or log.
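Read alongside the quick-start snippet below, a score result might look like the following. Only passed and intervention.message appear in that snippet; the per-dimension scores and the mode field are assumptions added for illustration.

// Plausible shape of a score result. Only `passed` and `intervention.message`
// are shown in the quick-start example; the rest is assumed.
interface ScoreResult {
  passed: boolean
  scores: { scope: number; constraint: number; intent: number } // e.g. 0..1 each
  intervention: { mode: 'block' | 'warn' | 'log'; message: string }
}

const blocked: ScoreResult = {
  passed: false,
  scores: { scope: 0.2, constraint: 0.3, intent: 0.7 },
  intervention: {
    mode: 'block',
    message: 'This writes to /src — outside your scope (/new-dir)'
  }
}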
Add drift monitoring to any agent framework. Works with Claude Code, LangGraph, OpenClaw, or your own agent.
import { Anchor } from 'anchor-sdk'

const anchor = new Anchor({ apiKey: process.env.ANCHOR_API_KEY })

// Create intent contract from plain language
const contract = await anchor.contract({
  task: "Build the project in /new-dir. Don't touch /src.",
  onDrift: 'block'
})

// Score any action before executing
// (the shape of `action` is illustrative: whatever your agent is about to do)
const action = { type: 'write_file', path: '/src/index.ts' }
const result = await anchor.score(action, contract.id)
if (!result.passed) {
  console.log(result.intervention.message)
  // "This writes to /src — outside your scope (/new-dir)"
}
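One way to wire this into an agent framework is to gate every tool call behind anchor.score(), reusing the client and contract from the snippet above. guardedExecute, the Action shape, and the executor callback are placeholders for whatever your framework provides, not part of the SDK.

// Hypothetical integration: score each action before your framework executes it.
type Action = { type: string; path?: string }

async function guardedExecute(
  action: Action,
  contractId: string,
  executeTool: (a: Action) => Promise<unknown> // your framework's executor
) {
  const result = await anchor.score(action, contractId)
  if (!result.passed) {
    // Drift detected: surface the intervention instead of executing.
    throw new Error(result.intervention.message)
  }
  return executeTool(action)
}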
Start free. Upgrade when you need full monitoring power.