Compaction
Every model has a context window — the maximum number of tokens it can process. When a conversation approaches that limit, OpenClaw compacts older messages into a summary so the chat can continue.
How it works
- Older conversation turns are summarized into a compact entry.
- The summary is saved in the session transcript.
- Recent messages are kept intact.
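The steps above can be sketched as follows. This is an illustrative model of compaction, not OpenClaw's actual implementation; the cutoff and the `summarize` stand-in are assumptions for the sketch.

```python
KEEP_RECENT = 4  # assumed cutoff for this sketch

def summarize(messages):
    # Stand-in for the model call that writes the summary.
    return f"[summary of {len(messages)} earlier messages]"

def compact(transcript, keep_recent=KEEP_RECENT):
    """Replace older turns with one compact summary entry; keep recent turns intact."""
    if len(transcript) <= keep_recent:
        return transcript
    older, recent = transcript[:-keep_recent], transcript[-keep_recent:]
    summary_entry = {"role": "system", "content": summarize(older)}
    # The summary entry is persisted as part of the session transcript.
    return [summary_entry] + recent

transcript = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compacted = compact(transcript)
print(len(compacted))  # 5: one summary entry + 4 recent messages
```

The key property is that only the older turns are lossy: the summary replaces them in the transcript, while the most recent messages survive verbatim.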
Auto-compaction
Auto-compaction is on by default. It runs when the session nears the context limit, or when the model returns a context-overflow error (in which case OpenClaw compacts and retries).
Before compacting, OpenClaw automatically reminds the agent to save important notes to memory files, so that information is not lost during summarization.
Manual compaction
Type /compact in any chat to force a compaction. Add instructions to guide the summary:
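For example (the topic here is illustrative):

```
/compact Focus on the database migration decisions
```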
Using a different model
By default, compaction uses your agent’s primary model. You can configure a more capable model to produce better summaries:
Compaction vs pruning
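A minimal config sketch of the idea, assuming a JSON-style config file; the key names and model identifier here are illustrative, so check your OpenClaw configuration reference for the exact schema:

```json
{
  "compaction": {
    "model": "your-larger-model-id"
  }
}
```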
| | Compaction | Pruning |
|---|---|---|
| What it does | Summarizes older conversation | Trims old tool results |
| Saved? | Yes (in session transcript) | No (in-memory only, per request) |
| Scope | Entire conversation | Tool results only |
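The persistence difference in the table can be sketched in code. This is an illustrative contrast, not OpenClaw's API: pruning trims tool results in the outgoing request only, leaving the stored transcript untouched, whereas compaction (sketched earlier) rewrites the transcript itself.

```python
def prune_for_request(messages, max_tool_chars=200):
    """Pruning sketch: truncate long tool results for this request only.
    Returns a new list; the stored transcript is never modified."""
    pruned = []
    for m in messages:
        if m["role"] == "tool" and len(m["content"]) > max_tool_chars:
            m = {**m, "content": m["content"][:max_tool_chars] + " [truncated]"}
        pruned.append(m)
    return pruned

transcript = [
    {"role": "user", "content": "run the tests"},
    {"role": "tool", "content": "x" * 1000},  # a large tool result
]
request = prune_for_request(transcript)
print(request[1]["content"].endswith("[truncated]"))  # True: trimmed in the request
print(len(transcript[1]["content"]))                  # 1000: transcript unchanged
```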
Troubleshooting
Compacting too often? The model’s context window may be small, or tool outputs may be large. Try enabling session pruning.
Context feels stale after compaction? Use /compact Focus on <topic> to guide the summary, or enable the memory flush so notes survive.
Need a clean slate? /new starts a fresh session without compacting.
For advanced configuration (reserve tokens, identifier preservation, custom
context engines, OpenAI server-side compaction), see the
Session Management Deep Dive.
Related
- Session — session management and lifecycle
- Session Pruning — trimming tool results
- Context — how context is built for agent turns
- Hooks — compaction lifecycle hooks (before_compaction, after_compaction)