# OpenClaw Memory
How OpenClaw memory actually works, why it can seem forgetful, and how to make memory durable without confusing sessions and groups.
## Why OpenClaw can look forgetful
OpenClaw does not “remember” because a chat felt important. It remembers when information is written into the workspace and can be loaded again later.
That is why people often think memory is broken when the real issue is one of these:
- nothing was written to disk
- the fact was stored in the wrong file
- the conversation happened in a group context
- the session compacted before durable notes were saved
If you keep that model in mind, OpenClaw memory becomes much easier to reason about.
## The source of truth is Markdown on disk
The official docs are very explicit here: OpenClaw memory lives in plain Markdown files inside the workspace. The files are the source of truth, not the model.
That design has two practical consequences:
- memory stays inspectable and editable by humans
- “remember this” only matters if it turns into a write to disk
This is also why install and workspace setup matter so much. If the workspace is not healthy, memory will not be healthy either.
## The two default memory layers

### memory/YYYY-MM-DD.md
This is the daily memory log.
- append-only by default
- good for running context, short-lived notes, and recent work
- OpenClaw reads today and yesterday at session start
Think of it as operational memory, not a polished knowledge base.
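The "reads today and yesterday at session start" rule can be sketched as a small path helper. This is a minimal illustration, assuming the default workspace layout described in this document; the function name and signature are hypothetical, not OpenClaw internals:

```python
from datetime import date, timedelta
from pathlib import Path

def daily_memory_files(workspace: Path, today: date) -> list[Path]:
    """Return the daily log paths loaded at session start:
    yesterday's file and today's file, in that order."""
    days = [today - timedelta(days=1), today]
    return [workspace / "memory" / f"{d.isoformat()}.md" for d in days]

files = daily_memory_files(Path.home() / ".openclaw" / "workspace", date(2025, 6, 2))
print([f.name for f in files])  # ['2025-06-01.md', '2025-06-02.md']
```

Note that either file may not exist yet; loading has to tolerate that, which is also why graceful handling of missing files matters later.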
### MEMORY.md
This is the curated long-term layer.
- intended for durable preferences, decisions, and facts
- should stay much cleaner than the daily log
- only loaded in the main private session, not in group contexts
That last point explains a lot of “why did it remember this in DM but not in a shared space?” confusion.
## What should go in each file
Use this rule:
- durable preferences and long-lived facts go to `MEMORY.md`
- temporary notes and ongoing work go to `memory/YYYY-MM-DD.md`
- if you want something to stick, ask the assistant to write it down
Do not expect a raw transcript to become memory automatically. OpenClaw can help with recall, but the durable layer is still file-backed.
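The routing rule above amounts to a simple append to one of two files. A minimal sketch, assuming the default layout; the `remember` helper is hypothetical, standing in for whatever write the assistant performs:

```python
from datetime import date
from pathlib import Path

def remember(workspace: Path, note: str, durable: bool = False) -> Path:
    """Append one note to the right layer: MEMORY.md for durable facts,
    the dated daily log for transient context."""
    if durable:
        target = workspace / "MEMORY.md"
    else:
        target = workspace / "memory" / f"{date.today().isoformat()}.md"
    target.parent.mkdir(parents=True, exist_ok=True)  # keep the layout healthy
    with target.open("a", encoding="utf-8") as fh:
        fh.write(f"- {note.rstrip()}\n")  # append-only Markdown bullet
    return target
```

The important property is that nothing is remembered until a call like this actually runs; a conversation alone leaves no file behind.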
## How recall works
OpenClaw exposes two memory-facing tools:
- `memory_search` for semantic recall across indexed snippets
- `memory_get` for reading a specific file or file range
This is important because “memory” is not one giant prompt blob. The agent uses retrieval to pull the relevant piece back into the active turn.
The current docs also note that memory_get degrades gracefully when a file does not exist yet. That helps with normal cases like “today’s memory file has not been created yet.”
## Vector memory search is optional, but useful
OpenClaw can index MEMORY.md and memory/*.md for semantic search. That makes recall much better when the wording changes between the original note and the later question.
But this is not magic either:
- remote embeddings need a real embedding-capable provider and API key
- local embeddings need local model setup
- if memory search is misconfigured, you can still have file-based memory but weaker recall
So the stable order is:
- get the workspace and file layout right
- confirm the assistant writes durable notes correctly
- only then tune vector search
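The core idea behind semantic recall is ranking indexed snippets by embedding similarity rather than exact wording. A toy sketch with hand-made 2-d vectors standing in for a real embedding provider; none of these names come from OpenClaw itself:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def memory_search(query_vec: list[float], index: list[dict]) -> list[str]:
    """Rank indexed snippets by similarity to the query embedding,
    so differently worded notes can still be recalled."""
    ranked = sorted(index, key=lambda s: cosine(query_vec, s["vec"]), reverse=True)
    return [s["text"] for s in ranked]

index = [
    {"text": "user prefers tabs over spaces", "vec": [0.9, 0.1]},
    {"text": "deploy runs every Friday",      "vec": [0.1, 0.9]},
]
print(memory_search([0.8, 0.2], index)[0])  # user prefers tabs over spaces
```

If the embedding provider is misconfigured, this ranking layer simply is not available, but the underlying files and exact-text recall still are.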
## What automatic memory flush actually does
OpenClaw has a pre-compaction memory flush. When a session is getting close to compaction, it can trigger a silent internal turn that reminds the model to store durable notes before context is compressed.
This helps, but it is not a substitute for clean operator habits.
Important boundaries:
- it usually stays silent with `NO_REPLY`
- it only runs once per compaction cycle
- it depends on a writable workspace
- it cannot rescue facts that were never recognized as worth saving
If the agent runs in read-only mode, this safety net is skipped.
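The boundaries above reduce to a small gating decision. A sketch under assumptions: the 90% threshold and parameter names are illustrative, not documented OpenClaw values:

```python
def should_flush(tokens_used: int, context_limit: int,
                 already_flushed: bool, workspace_writable: bool,
                 threshold: float = 0.9) -> bool:
    """Gate the silent pre-compaction flush: only near the compaction
    point, at most once per cycle, and only if notes can be written."""
    near_compaction = tokens_used >= threshold * context_limit
    return near_compaction and not already_flushed and workspace_writable

print(should_flush(95_000, 100_000, False, True))   # True
print(should_flush(95_000, 100_000, True, True))    # False: once per cycle
print(should_flush(95_000, 100_000, False, False))  # False: read-only workspace
```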
## Why private chat and group chat feel different
Memory in OpenClaw is intentionally scoped. Private and group contexts are not treated the same way.
That is a safety feature, not a bug.
- long-term `MEMORY.md` is for the main private session
- group sessions should not casually inherit private user memory
- search scope can also be restricted by session policy
So if you are testing memory in a group, do not assume it will behave like your direct dashboard or DM session.
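The scoping rule can be expressed as a tiny policy function. This is a sketch of the boundary described above, not OpenClaw's actual session model; whether group sessions load daily logs at all is an assumption here:

```python
def memory_layers(session_kind: str) -> list[str]:
    """Which memory layers a session loads. Only the main private
    session gets the curated long-term file."""
    layers = ["memory/<yesterday>.md", "memory/<today>.md"]
    if session_kind == "main-private":
        layers.append("MEMORY.md")
    return layers

print("MEMORY.md" in memory_layers("main-private"))  # True
print("MEMORY.md" in memory_layers("group"))         # False
```

This is why a fact recalled instantly in a DM can appear "forgotten" in a shared space: the curated layer was never loaded there.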
## Common memory mistakes

### Assuming the transcript is the memory
A conversation happened. That does not mean a durable memory was created.
### Treating MEMORY.md like a dumping ground
If everything becomes “long-term memory,” nothing stays easy to retrieve or maintain.
### Expecting group contexts to inherit private memory
They usually should not. OpenClaw keeps those boundaries on purpose.
### Skipping embeddings but expecting semantic recall
File-backed memory still works without embeddings, but search quality will be lower.
### Running with a read-only workspace
If the workspace cannot be written, durable memory and pre-compaction flush both lose value.
## A practical memory setup that stays stable
For most operators, the cleanest path is:
- let onboarding create a normal workspace
- keep one private main session for durable memory
- use direct chat or the dashboard when you want something remembered
- review `MEMORY.md` occasionally so it stays curated
- keep daily logs for transient context
- add vector search or QMD only after the basics are already working
This is much more reliable than trying to make every chat surface behave like one global memory pool.
## FAQ

### Where is OpenClaw memory stored?
By default it lives as Markdown files in the workspace, usually under ~/.openclaw/workspace, including MEMORY.md and memory/YYYY-MM-DD.md.
### Why does OpenClaw remember something in a DM but not in a group?
Because long-term curated memory is intended for the main private session. Group contexts are intentionally more isolated.
### Do I need vector search for memory to work?
No. The file-based system works without it. Vector search improves semantic recall, but it is an enhancement, not the foundation.
### What is the fastest way to improve memory quality?
Make sure the assistant actually writes important facts to disk, keep MEMORY.md curated, and test memory in a private direct context before debugging group behavior.