March 30, 2026 · OpenClaw Ecosystem

AI Agent Memory Systems:
A Practical Comparison

From basic compaction to personality emergence — what exists, what works together, and what to choose.


The Baseline — What OpenClaw Gives You

Out of the box, OpenClaw ships with a solid memory foundation: the default memory-core backend stores conversation memory and surfaces relevant chunks as context. Most agents can go a long way with just this.

Built-in Alternatives

OpenClaw uses a plugin slot (plugins.slots.memory) for its memory backend — meaning the whole memory system can be swapped out. Three options exist within the OpenClaw ecosystem: the default memory-core, the community memory-lancedb plugin, and QMD.

Important: memory-core, memory-lancedb, and QMD all occupy the same plugin slot — only one can be active at a time. Lossless Claw, by contrast, sits in the contextEngine slot and is fully combinable with any of the above.
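As a rough sketch of what this slot layout implies (the key names beyond plugins.slots.memory are assumptions for illustration; check your OpenClaw version's documentation for the actual schema):

```yaml
# Hypothetical config sketch — exact keys may differ from your OpenClaw version.
plugins:
  slots:
    # memory slot: exactly one of memory-core (default), memory-lancedb, or qmd
    memory: "@noncelogic/memory-lancedb"
    # contextEngine is a separate slot — combinable with any memory backend
    contextEngine: "@martian-engineering/lossless-claw"
```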

What's missing

  • Compaction discards context — the default compaction replaces earlier turns with a summary. Detail is lost permanently.
  • No reasoning over memory — retrieval surfaces relevant chunks, but the system doesn't interpret patterns or draw conclusions about the user.
  • No personality layer — there's no mechanism for tracking who the user is over time, how they've changed, or what they care about at a deeper level.

These gaps are the reason third-party memory systems exist. But they solve different problems — which is the key thing to understand.

Two Layers of Memory

Before comparing tools, there's a fundamental distinction worth naming clearly.

Memory systems for AI agents operate on two separate layers. Mixing them up leads to picking the wrong tool:

🗄️

Infrastructure Layer

Ensures nothing is lost and retrieval works well. Solves the "where did that go?" problem.

"What was said?"

🧠

Meaning Layer

Reasons about conversations to build models of the user — their preferences, patterns, and how they've changed over time.

"What does it mean?"

These are complementary, not competing. Infrastructure makes sure all the raw material exists. The meaning layer makes sense of it. You can use one without the other — but both together is more powerful than either alone.

The Four Candidates

Four systems are worth knowing about (plus memory-lancedb, the community slot swap introduced above). They sit at different positions on the infrastructure–meaning spectrum and vary widely in integration complexity.

A two-by-two matrix with integration complexity on the horizontal axis (low to high) and memory layer on the vertical axis (infrastructure at bottom, meaning and personality at top). Lossless Claw appears at bottom-left, QMD at bottom-right, Honcho at top-left, and cortex-engine at top-right.

Four systems mapped by memory layer and integration effort. Bottom = infrastructure. Top = meaning & personality.

memory-lancedb · OpenClaw ecosystem

Community plugin that replaces memory-core. Conversation-based memory using LanceDB — automatically extracts and stores facts, preferences, and decisions after each turn, and injects relevant context before every response.

Tags: Auto-capture · Auto-recall · LanceDB · Plugin-slot swap
Integration: one command (plugin-slot swap)
Strength: auto-capture plus auto-recall; no manual memory management; works immediately.
Weakness: community plugin (not official); requires an embedding provider (OpenAI/Gemini); autoRecall defaults to false.
License: Community / Open Source
Install: openclaw plugins install @noncelogic/memory-lancedb
Lossless Claw · Infrastructure

OpenClaw plugin that replaces the default compaction routine with a DAG-tree of rolling summaries. Original session transcripts are preserved in SQLite — nothing is ever thrown away.

Tags: DAG summaries · SQLite retention · Auto-compaction
Integration: one command
Strength: no context ever lost; zero config.
Weakness: no personality system; infrastructure only.
License: MIT · OpenClaw-only
Install: openclaw plugins install @martian-engineering/lossless-claw
QMD · Infrastructure

Local hybrid search engine with BM25 + vector retrieval, LLM reranking, and query expansion. Uses ~2 GB GGUF models. Already integrated into OpenClaw — switch via config.

Tags: BM25 + vector · LLM reranking · Query expansion · Fully local
Integration: one config switch
Strength: noticeably better retrieval; runs locally.
Weakness: overlaps with Lossless Claw; requires Bun.
License: Open Source
Config: memory.backend = "qmd"
Honcho · Meaning Layer

External memory service that reasons over conversations and builds "Peer Representations" — living models of each participant that evolve with every interaction. Injects summaries automatically via Peer Cards.

Tags: Peer representations · Peer Cards · Self-hostable · REST SDK
Integration: SDK wrapper plus REST calls
Strength: reasoning over memory, not just retrieval.
Weakness: needs its own setup; no native OpenClaw plugin slot.
Privacy: when self-hosted, conversation data leaves your infrastructure only for calls to your LLM provider.
License: AGPL-3.0
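Because there is no native plugin slot, integrating Honcho means adding a small bridge to your agent loop. A minimal sketch, assuming a self-hosted instance and a hypothetical /peers/{id}/card route — the real Honcho SDK/REST surface may differ:

```python
# Hypothetical bridge: inject a Honcho "Peer Card" into the agent's system prompt.
# HONCHO_URL, the route, and the response shape are assumptions for illustration.
import json
from urllib import request

HONCHO_URL = "http://localhost:8000"  # assumed self-hosted Honcho instance


def fetch_peer_card(peer_id: str) -> str:
    """Fetch the current Peer Card summary for a user (hypothetical route)."""
    with request.urlopen(f"{HONCHO_URL}/peers/{peer_id}/card") as resp:
        return json.load(resp).get("card", "")


def with_peer_card(system_prompt: str, peer_card: str) -> str:
    """Prepend the Peer Card so the model sees who it is talking to."""
    if not peer_card:
        return system_prompt
    return f"{system_prompt}\n\n[Peer Card]\n{peer_card}"
```

The design point: the infrastructure layer never changes — only the prompt assembly step gains one extra lookup before each response.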
cortex-engine · Meaning Layer

MCP server with 57 cognitive tools — believe, contradict, evolve, dream, goal_set, and more. Models biologically inspired dream consolidation (NREM and REM phases). Built explicitly for personality development.

Tags: 57 cognitive tools · Dream consolidation · SQLite local · MCP server
Integration: REST bridge or MCP proxy (highest effort)
Strength: the only system optimized for personality development.
Weakness: no OpenClaw adapter; 57 tools make for a steep learning curve.
License: MIT · self-hostable
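For a feel of what driving cortex-engine over MCP involves, here is a sketch of a raw tools/call request. The JSON-RPC 2.0 envelope and the "tools/call" method come from the MCP specification; the argument shape for the believe tool is purely a guess:

```python
# Sketch of a raw MCP "tools/call" request to an MCP server such as cortex-engine.
# Envelope per the MCP spec (JSON-RPC 2.0); the `believe` arguments are assumed.
import json


def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a single MCP tool invocation as a JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Example: record a belief about the user (argument names are hypothetical).
req = mcp_tool_call(1, "believe", {"statement": "User prefers terse replies"})
```

An OpenClaw bridge would send such requests over the server's transport (stdio or HTTP) and route the 57 tools' results back into the agent loop — which is exactly why this integration is the highest-effort option.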

What Works Together?

Not all combinations make sense. Here's what we know about compatibility.

Diagram showing two boxes connected by a double arrow labeled 'complementary, no conflict'. Left box: Lossless Claw with subtitle 'Context Engine: nothing is lost'. Right box: Honcho with subtitle 'Meaning Layer: who are you?'. Below both: a foundation bar labeled OpenClaw.

Lossless Claw and Honcho operate on different layers — no conflict, maximum coverage.

Combination · Compatible? · Why
Lossless Claw + Honcho · Yes · Different layers, no conflict. Recommended starting point.
Lossless Claw + QMD · ⚠️ Unclear · Both target the infrastructure layer. They occupy different slots (contextEngine vs. memory), but their use cases overlap, so the benefit may be redundant.
Lossless Claw + cortex-engine · Yes · Complementary by design: one preserves everything, one gives it meaning.
QMD + cortex-engine · Yes · Independent layers, no known conflicts.
All four · Not recommended · Too much complexity; potential conflicts and diminishing returns.
memory-lancedb + Lossless Claw · Yes · Different slots (memory vs. contextEngine), fully compatible.
memory-lancedb + Honcho · Yes · Complementary: lancedb handles auto-capture, Honcho adds reasoning over memory.
memory-lancedb + QMD · No · Both occupy the memory plugin slot; only one can be active at a time.

Decision Guide

Start with a concrete problem, not a technology. Mapping the problems above to the right tool:

  • Compaction keeps destroying detail → Lossless Claw. One command, zero config, nothing is ever lost.
  • Retrieval quality is the bottleneck → QMD. Hybrid BM25 + vector search, fully local; requires Bun.
  • You want memory without managing it → memory-lancedb. Auto-capture and auto-recall after a one-command install; needs an embedding provider.
  • The agent should understand the user, not just recall facts → Honcho. Peer Representations that evolve with every interaction; needs its own setup.
  • You are building long-term personality → cortex-engine. The only system optimized for it, at the highest integration cost.

Current Status & What We Haven't Tested

This is a research summary, not a production review. Here's where things actually stand.

📋 Where things stand

  • Researched: all four systems documented, compared, and mapped. Sources verified.
  • Next: Lossless Claw installation and long-term test in production.
  • Next: Honcho self-hosted with single-provider config (Anthropic-only) — evaluate privacy guarantees in practice.
  • 🔬 Not tested yet: Lossless Claw + Honcho combination running in production. Compatibility is theoretical.
  • 📋 Watchlist: cortex-engine — following closely. Will revisit if a native OpenClaw adapter appears.
  • 🔧 QMD clarification: potentially more useful as a document search backend (Obsidian, notes, external docs) than as a drop-in memory backend replacement.