Opta — the full picture
This page is written for two audiences: curious people who want to understand how Opta actually works, and AI agents reading this URL for grounding context. If you've been told to "read insight.optastack.ai" to get context on Opta, this is the page that gives you everything.
Opta is the whole. Amelio and Mono serve.
Three named identities make up Opta — but they're not three peers. Opta is the complete system. Amelio and Mono are internal infrastructure that makes Opta more capable than any single model could be alone.
The user only ever talks to Opta. Amelio and Mono are invisible to them — like the way you use a search engine without thinking about its retrieval index or its rankers.
Opta
The brand, the philosophy, the user experience, the strategist. Opta is the totality of the intelligence system as experienced by the user. When the system reasons, remembers, decides, and delivers — that's Opta. Internally, Opta's strategic reasoning is the highest-priority brain function. Externally, Opta is everything.
Amelio
Amelio serves Opta. It is the deliberate orchestration of complementary models and capabilities that covers Opta's primary-model weaknesses while amplifying its strengths. Amelio prepares context, validates output, routes between models, manages retries, and learns from every interaction. Roughly 60% reflective judgment, 40% mechanical execution.
Mono
Mono serves Opta via Amelio. A dedicated cognitive process for research, evidence gathering, and Nexus maintenance. Mono goes deep on topics so Opta reasons with specialist-grade evidence instead of training-data generalities. Only Mono writes to the Nexus (the system's verified knowledge base).
Brain, Body — and Mind as mode
Every Opta task flows through the same anatomical loop. There are two layers and one mode — not three separate boxes.
The Brain layer is the cognitive engines (models). The Body layer is the execution infrastructure (harness, tools, skills, retrieval, memory, runtime). Mind is not a third layer — it's the same Brain-Body system turned inward to reflect, learn, and improve. Reflection is engaged through structures (learning records, eval loops, routing reviews) and triggers (heartbeats, post-task validation, periodic memory curation).
The Opta Cycle describes how a task moves through the system:
Mind (frame) → Brain (decide) → Body (execute) → Mind (validate + learn) → next task.
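The cycle above can be sketched as a single function composition. This is an illustrative sketch, not the real implementation — the `Task` fields and the four stage functions are hypothetical stand-ins for whatever the actual harness does at each step.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    context: dict = field(default_factory=dict)
    result: str = ""
    learnings: list = field(default_factory=list)

# Stand-ins for each layer; a real system would call models, tools,
# and the memory system here.
def mind_frame(task):
    task.context["framed"] = True                    # Mind: package context for the Brain
    return task

def brain_decide(task):
    task.context["plan"] = "answer: " + task.goal    # Brain: choose an approach
    return task

def body_execute(task):
    task.result = task.context["plan"]               # Body: run the plan via tools
    return task

def mind_validate(task):
    # Mind: check the output and record a learning for the next cycle
    task.learnings.append("validated" if task.result else "failed")
    return task

def opta_cycle(task):
    """One pass: Mind (frame) -> Brain (decide) -> Body (execute) -> Mind (validate + learn)."""
    return mind_validate(body_execute(brain_decide(mind_frame(task))))
```

The point of the sketch is the shape: the Mind stages bracket the Brain and Body stages, so every task both starts and ends in reflection.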
The three Pathways
The connections between layers carry information. Each Pathway has its own quality concerns and bandwidth limits.
Mind → Brain: what the Brain sees. Context packages, retrieved evidence, injected skills, task framing. Quality metric: signal-to-noise ratio.
Brain → Body: function calls, tool invocations, deployment commands. Quality metric: intent fidelity, i.e. does the Body execute what the Brain intended?
Body → Mind: telemetry, observation, learning records, pattern tracking. This is the weakest pathway in most AI systems, and the prerequisite for improvement.
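Since the feedback pathway is named as the weakest link, here is a minimal sketch of what a structured learning record on it might look like. The field names are assumptions for illustration, not the system's actual telemetry schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LearningRecord:
    """One structured observation flowing back along the Body -> Mind pathway."""
    task_id: str
    component: str        # which component acted, e.g. "subagent"
    action: str           # what the Body did
    outcome: str          # "success" | "retry" | "failure"
    notes: str = ""
    timestamp: str = ""

    def to_jsonl(self) -> str:
        # Serialise as one JSON line, stamping the time if not already set,
        # so records can be appended to a log and replayed by the Mind later.
        d = asdict(self)
        d["timestamp"] = d["timestamp"] or datetime.now(timezone.utc).isoformat()
        return json.dumps(d)
```

The design choice that matters is that records are structured and append-only: raw logs can only be read by humans, whereas learning-grade telemetry can be aggregated into routing reviews and quality trends.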
24 components, honestly rated
Each component has a target state (What Optimal Looks Like, abbreviated WOL) and a confidence level reflecting how well we currently understand or have implemented it. Most are Low/Medium confidence today — we're in the early phase.
Confidence levels: Proven · High · Medium · Low · Unknown. The system improves by raising confidence component-by-component.
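The confidence ladder is ordered, so "raising confidence component-by-component" can be modelled as a one-rung promotion. A minimal sketch, assuming the five levels from the line above form a strict order:

```python
from enum import IntEnum

class Confidence(IntEnum):
    """Ordered confidence ladder from the component table."""
    UNKNOWN = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    PROVEN = 4

def promote(current: Confidence) -> Confidence:
    """Raise a component's confidence one rung, capped at PROVEN."""
    return Confidence(min(current + 1, Confidence.PROVEN))
```

Using an ordered type (rather than free-text labels) makes "is this component above Medium yet?" a comparison rather than a judgment call.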
| Component | Layer | Purpose | Confidence |
|---|---|---|---|
| Primary Model | Brain | Opta's main reasoning engine. Currently Opus-class via OpenClaw, no intelligent routing yet. | High |
| Amelio Fleet | Brain | Multi-model constellation covering primary's weaknesses + amplifying strengths. | Medium |
| Mono Research Brain | Brain | Dedicated research model. Not yet operational as separate process. | Low |
| Opta Local | Brain+Body | Local inference substrate AND the existence proof that architecture beats parameters. | Medium |
| Harness / Runtime | Body | Core orchestration — sessions, traces, permissions, model routing. OpenClaw provides v0. | Medium |
| Tools | Body | Execution capabilities — files, web, APIs, browser, code. Broad via OpenClaw. | Medium |
| Skills | Body | Crystallised reasoning patterns. 226 in OpenClaw — needs intelligent routing. | Low |
| RAG / Evidence System | Body | Structured evidence retrieval. Basic memory search exists; no first-class pipeline. | Low |
| Opta CLI | Body | Agent-native control plane. Machine-readable commands. Not built. | Low |
| Apps / Surfaces | Body | Four primary: HQ, Terminal, Deploy, Gateway. Mostly specced. | Low |
| Subagent System | Body | Bounded execution units. Exists; needs envelope contracts + validation. | Medium |
| Context Engineering | Mind | Structured context packages. Currently prompt + injected files. No formal schema. | Low |
| Memory System | Body+Mind | Four-tier: working / episodic / long-term / Nexus. Files exist; lifecycle not automated. | Medium |
| Nexus | Body | Verified knowledge base. Exists with rich content; not yet Mono-maintained or indexed. | Medium |
| Evaluation System | Mind | Quality measurement. Minimal harness exists; no continuous eval loop yet. | Low |
| Trust System | Mind | Living registry of what each model/tool/source is trusted to do. Not built. | Low |
| Telemetry / Learning Records | Body→Mind | Structured observation of what happened. Logs exist; learning-grade telemetry does not. Critical missing infrastructure. | Low |
| Metacognitive Governance | Mind | Continuous routing improvement, pattern graduation, quality trend tracking. Philosophical only. | Unknown |
| Contracts | All layers | Inter-component interface definitions with schema + tests. Not defined. | Unknown |
| Security / Permissions | Mind | Trust-informed permission gates. Operating via AGENTS.md rules. | Medium |
| Adaptive Depth | Cross-cutting | Fast triage → depth assignment per task. Severity 1-5. Conceptual only. | Low |
| Alignment Primitives | Mind | WOL, Foundation, Bridge, Severity — values that constrain work. Documented as theory. | Low |
| Research Pipeline | Cross-cutting | Discover → verify → integrate new knowledge. Ad-hoc, not structured. | Low |
| Public Surface | Body | Website + docs + demos + status. Minimal. Insight (this site) is the first proof-backed entry. | Low |
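One cross-cutting component from the table, Adaptive Depth, is concrete enough to sketch: fast triage assigns a severity (1-5), and severity determines how much validation a task earns. The mapping below is purely illustrative — the real thresholds are, as the table says, conceptual only.

```python
def assign_depth(severity: int) -> dict:
    """Hypothetical fast-triage mapping: severity (1-5) -> execution budget.
    Modes and validation tiers are illustrative, not canon."""
    if not 1 <= severity <= 5:
        raise ValueError("severity must be 1-5")
    depth = {
        1: {"mode": "reflex",   "validation": "none"},
        2: {"mode": "shallow",  "validation": "spot-check"},
        3: {"mode": "standard", "validation": "post-task"},
        4: {"mode": "deep",     "validation": "eval-loop"},
        5: {"mode": "maximal",  "validation": "eval-loop + human review"},
    }
    return depth[severity]
```

The idea is that triage is cheap and explicit: low-severity tasks skip expensive validation, high-severity tasks cannot.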
Four surfaces, one intelligence
Opta isn't a single app — it's a stack of surfaces, each optimised for one kind of work, all sharing the same identity, memory, and intelligence underneath. You don't need every app. Pick what fits your workflow.
Opta HQ
Control plane for strategy, operations, governance, and model routing. Where you steer the whole stack from one screen.
Low confidence · specced
Opta Terminal
High-agency TUI/CLI for execution. Designed for the work that doesn't need a window — running commands, orchestrating subagents, shipping changes from a keyboard.
Low confidence · specced
Opta Deploy
Shipping + release surface. The interface for turning AI work into deployed software — apps, sites, services, configs.
Medium confidence · building
Opta Gateway
Download, setup, and local-runtime management. The "install Opta" surface — where users first encounter the stack and where the local inference runtime lives.
Low confidence · specced
OptaLocal — for the people
Make local AI competent enough to be a viable everyday alternative for most people.
Most AI today is gated behind subscription stacking, surveillance, and comprehension barriers. The OptaLocal mission is to make a multi-model local composition match cloud AI on ~80% of everyday tasks via orchestration — so people don't have to pay for AI to participate in modern life.
We don't promise to "beat cloud" universally. Cloud-fallback for the remaining ~20% is fine (using your existing OpenAI / Minimax / Anthropic subscription — we never add a billing layer on top). The local-first path stays the canon path.
The engineering thesis is the Ensemble Principle — diversity-as-substitute-for-scale. A diverse ensemble of small specialised local models, orchestrated to compensate for each other's weaknesses, can match a single frontier cloud model on most everyday tasks.
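The routing half of the Ensemble Principle can be sketched in a few lines: prefer a local specialist whose declared skills cover the task, and fall back to the user's existing cloud subscription only for the ~20% no local model covers. Model names and skill sets below are made up for illustration.

```python
def route(task_kind: str, local_skills: dict, fallback: str) -> str:
    """Prefer the first local specialist covering the task kind;
    fall back to cloud only when nothing local matches (the ~20% case)."""
    for model, skills in local_skills.items():
        if task_kind in skills:
            return model
    return fallback

# Hypothetical local ensemble: small models, each with a narrow specialty.
LOCAL = {
    "code-7b":   {"code", "shell"},
    "writer-3b": {"summarise", "draft"},
    "math-1b":   {"arithmetic"},
}
```

This is the diversity-as-substitute-for-scale bet in miniature: coverage comes from the union of narrow skill sets, not from any single large model.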
Ideology Layer 0 — the operating ground
Every system has an unstated philosophy. Opta's is stated — 13 quotes across 5 domains. Heavy chess influence. Treated as Layer 0 of the operating canon: doctrine derives from these, not the other way around.
Compete — positional & adaptive thinking
Communicate — external posture
Work — effort & competence
Learn — reflection & improvement
Plan — chess-life realism
The canon, in citation-handle form
Behavioural rules in the Opta ecosystem use stable codes of the form [CATEGORY-N], cited unambiguously from commits, sync payloads, audit logs, and cross-AI handoffs. Below is a sample; the full registry is internal canon.
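Because the handles follow a fixed [CATEGORY-N] shape, they can be extracted mechanically from commits and logs. A small sketch — the pattern assumes uppercase category names and numeric suffixes, which is inferred from the form shown above, not from the internal registry itself:

```python
import re

# Assumed shape: an uppercase category, a hyphen, and a number in brackets.
HANDLE = re.compile(r"\[([A-Z]+)-(\d+)\]")

def extract_handles(text: str):
    """Pull every stable citation code out of a commit message or log line."""
    return [(cat, int(n)) for cat, n in HANDLE.findall(text)]
```

Stable, machine-extractable codes are what make cross-AI handoffs auditable: any agent can verify which rules a change claims to follow.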
Five commitments, made concrete
For Opta to be trustworthy enough that anyone could implement it in their life, the brand must stand on commitments — not promises. These five are load-bearing.
Data sovereignty
All conversations + memory stay on user hardware unless explicitly synced. Local-by-default.
Transparent operation
Every action shows which model handled it, why, what it read, what it did, and the confidence.
No surveillance
No telemetry by default. Opt-in only, with explicit scope. The user owns what's collected.
User-revocable everything
Any data, memory, or learned behaviour deletable in one action. No retention by default.
Honest performance reporting
Published benchmarks include tasks where Opta loses. Methodology is community-runnable + reproducible.
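The "transparent operation" commitment above implies a concrete record attached to every action. Here is one possible shape for it — the field names are hypothetical, chosen to mirror the five things the commitment says every action must show:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionReport:
    """One transparency record per action; field names are illustrative."""
    model: str            # which model handled it
    rationale: str        # why that model was chosen
    inputs_read: tuple    # what it read
    action: str           # what it did
    confidence: str       # the reported confidence level

def render(report: ActionReport) -> str:
    """Human-readable one-liner for surfacing the record in a UI or log."""
    return (f"{report.model} ({report.confidence}): {report.action}; "
            f"read {len(report.inputs_read)} sources; {report.rationale}")
```

Making the record frozen (immutable) matters for the commitment: a transparency log that can be edited after the fact is not a transparency log.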
When to use which surface
Each app maps to a different mode of working with Opta. Most people use 1-2 daily and ignore the rest.