From basic compaction to personality emergence — what exists, what works together, and what to choose.
Section 1: What ships out of the box
Out of the box, OpenClaw ships with a solid memory foundation. Most agents can go a long way with just this.
OpenClaw's built-in memory system includes:
MEMORY.md, daily notes, and a working buffer for in-session state.

OpenClaw uses a plugin slot (plugins.slots.memory) for its memory backend — meaning the whole memory system can be swapped out. Three options exist within the OpenClaw ecosystem:
To install the LanceDB-backed option:

`openclaw plugins install @noncelogic/memory-lancedb`

Note that autoRecall defaults to false and must be explicitly enabled.

Important: memory-core, memory-lancedb, and QMD all occupy the same plugin slot — only one can be active at a time. Lossless Claw, by contrast, sits in the contextEngine slot and is fully combinable with any of the above.
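The slot mechanism might look something like this in configuration. This is a hedged sketch: the article only names the plugins.slots.memory key, so the exact file layout and the location of the autoRecall flag are assumptions for illustration.

```json
{
  "plugins": {
    "slots": {
      "memory": "@noncelogic/memory-lancedb"
    }
  },
  "memory": {
    "autoRecall": true
  }
}
```

Because autoRecall defaults to false, omitting that flag would leave recall as a manual step.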
What's missing
These gaps are the reason third-party memory systems exist. But they solve different problems — which is the key thing to understand.
Section 2: Two layers of memory
Before comparing tools, there's a fundamental distinction worth naming clearly.
Memory systems for AI agents operate on two separate layers. Mixing them up leads to picking the wrong tool:
**Infrastructure layer** ("What was said?"): ensures nothing is lost and retrieval works well. Solves the "where did that go?" problem.

**Meaning layer** ("What does it mean?"): reasons about conversations to build models of the user — their preferences, patterns, and how they've changed over time.
These are complementary, not competing. Infrastructure makes sure all the raw material exists. The meaning layer makes sense of it. You can use one without the other — but both together is more powerful than either alone.
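The two-layer split can be made concrete with a toy sketch. None of this is any tool's actual API; the class names and the string-matching "reasoning" are placeholders (a real meaning layer would use an LLM), but it shows how the infrastructure layer keeps raw turns while the meaning layer derives a durable user model from them.

```python
from dataclasses import dataclass, field


@dataclass
class TranscriptStore:
    """Infrastructure layer: keep every raw turn ("what was said?")."""
    turns: list = field(default_factory=list)

    def append(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))


@dataclass
class UserModel:
    """Meaning layer: derive durable facts from raw turns ("what does it mean?")."""
    preferences: dict = field(default_factory=dict)

    def ingest(self, store: TranscriptStore) -> None:
        # Toy heuristic standing in for LLM-based reasoning.
        for speaker, text in store.turns:
            if speaker == "user" and text.lower().startswith("i prefer "):
                self.preferences["stated"] = text[len("i prefer "):]


store = TranscriptStore()
store.append("user", "I prefer short answers")
store.append("assistant", "Noted.")

model = UserModel()
model.ingest(store)
```

The point of the separation: the store never discards anything, so the model can always be rebuilt, refined, or replaced without data loss.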
Section 3: The systems
Four systems are worth knowing about. They sit at different positions on the infrastructure–meaning spectrum and vary widely in integration complexity.
Four systems mapped by memory layer and integration effort. Bottom = infrastructure. Top = meaning & personality.
**memory-lancedb**: Community plugin that replaces memory-core. Conversation-based memory using LanceDB — automatically extracts and stores facts, preferences, and decisions after each turn, and injects relevant context before every response.
**Lossless Claw**: OpenClaw plugin that replaces the default compaction routine with a DAG-tree of rolling summaries. Original session transcripts are preserved in SQLite — nothing is ever thrown away.
**QMD**: Local hybrid search engine with BM25 + vector retrieval, LLM reranking, and query expansion. Uses ~2 GB GGUF models. Already integrated into OpenClaw — switch via config.
**Honcho**: External memory service that reasons over conversations and builds "Peer Representations" — living models of each participant that evolve with every interaction. Injects summaries automatically via Peer Cards.
**cortex-engine**: MCP server with 57 cognitive tools — believe, contradict, evolve, dream, goal_set, and more. Models biologically-inspired dream consolidation (NREM + REM phases). Built explicitly for personality development.
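QMD-style hybrid retrieval fuses a keyword ranking (BM25) with a vector ranking. One standard way to combine such rankings is reciprocal rank fusion (RRF); the sketch below is illustrative only, not QMD's actual implementation, and the document IDs are made up.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each document scores 1 / (k + rank + 1) per list it appears in;
    the constant k dampens the influence of any single ranker.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=lambda d: scores[d], reverse=True)


bm25_ranking = ["doc_a", "doc_b", "doc_c"]    # keyword-match order
vector_ranking = ["doc_b", "doc_c", "doc_a"]  # semantic-similarity order
fused = rrf([bm25_ranking, vector_ranking])
```

Here doc_b wins because it ranks consistently high in both lists, even though neither ranker placed it strictly first in a combined sense — which is exactly the behavior hybrid search wants.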
Section 4: Compatibility
Not all combinations make sense. Here's what we know about compatibility.
Lossless Claw and Honcho operate on different layers — no conflict, maximum coverage.
| Combination | Compatible? | Why |
|---|---|---|
| Lossless Claw + Honcho | Yes | Different layers, no conflict. Recommended starting point. |
| Lossless Claw + QMD | Yes, but redundant | Different slots (contextEngine vs. memory), so no slot conflict — but both target the infrastructure layer, so expect functional overlap. |
| Lossless Claw + cortex-engine | Yes | Complementary by design: one preserves everything, one gives it meaning. |
| QMD + cortex-engine | Yes | Independent layers, no known conflicts. |
| All four | Not recommended | Too much complexity; potential conflicts and diminishing returns. |
| memory-lancedb + Lossless Claw | Yes | Different slots (memory vs contextEngine) — fully compatible. |
| memory-lancedb + Honcho | Yes | Complementary — lancedb handles auto-capture, Honcho adds reasoning over memory. |
| memory-lancedb + QMD | No | Both occupy the same plugin slot — only one can be active at a time. |
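The memory-lancedb + Lossless Claw row works because the two plugins occupy different slots. A hedged config sketch: the slot names follow the article, but the exact file layout and the "lossless-claw" package identifier are assumptions for illustration.

```json
{
  "plugins": {
    "slots": {
      "memory": "@noncelogic/memory-lancedb",
      "contextEngine": "lossless-claw"
    }
  }
}
```

Swapping the memory value to QMD's plugin would be legal too; setting both in the same slot is what the table's last row rules out.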
Section 5: Choosing a tool
Start with a concrete problem, not a technology. Here's how to map your situation to the right tool.
If you want fully automatic capture, for example, memory-lancedb with autoRecall enabled runs without any manual memory calls.
Section 6: Caveats
This is a research summary, not a production review. Here's where things actually stand.