Your agent ran for 12 hours straight. The conversation context grew to 200,000 tokens. Every new turn was processing the full context window—and your bill spiked by $50 in a single session.
## The Problem: Unbounded Context Growth
When an OpenClaw agent runs a long continuous session—a multi-hour coding task, a research marathon, or an overnight automation—the conversation context grows with every turn. Without a safeguard:
- Turn 1: 5K context tokens
- Turn 50: 50K context tokens
- Turn 200: 200K context tokens — every new input now includes 200K tokens of history
The cost compounds: each turn is more expensive than the last because the context is larger, so per-turn cost grows linearly while cumulative spend grows quadratically. By the time you notice, the session has already consumed many times a normal day's budget.
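The growth pattern above can be sketched with a quick back-of-the-envelope calculation. The numbers here are assumptions, not OpenClaw measurements: context grows by roughly 1K tokens per turn, and input tokens cost $3 per million (Claude Sonnet-class input pricing).

```python
# Sketch: cumulative cost when every turn resends the full history.
# Assumed figures (not from OpenClaw): ~1K new tokens per turn,
# $3 per million input tokens.
PRICE_PER_TOKEN = 3.0 / 1_000_000
TOKENS_PER_TURN = 1_000

def session_cost(turns: int) -> float:
    """Total input cost of a session with unbounded context growth."""
    total = 0.0
    for turn in range(1, turns + 1):
        context = turn * TOKENS_PER_TURN       # context grows linearly...
        total += context * PRICE_PER_TOKEN     # ...so cumulative cost grows quadratically
    return total

print(f" 50 turns: ${session_cost(50):.2f}")
print(f"200 turns: ${session_cost(200):.2f}")
```

Running a session 4x longer costs roughly 16x more under these assumptions, which is why a 12-hour run can quietly reach the $50–60 range.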
## How ClawBridge Detects This (Diagnostic A07)
The Cost Control Center checks whether your OpenClaw configuration has safeguard compaction enabled. This is a built-in OpenClaw feature that automatically summarizes and compresses the conversation when it exceeds a threshold—but many users have it disabled or don't know it exists.
The diagnostic triggers when:
- `compaction.safeguard` is not enabled
- No `contextTokens` limit is configured
- Recent sessions have exceeded 100K context tokens
## One-Tap Fix
Tap Apply to enable safeguard compaction with sensible defaults:
- `contextTokens: 100000` sets the maximum context window before compaction triggers
- Compaction summarizes older conversation history into a compressed format
- New turns only process the summary + recent messages, not the full history
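Put together, the resulting settings look something like the fragment below. The two key names (`compaction.safeguard`, `contextTokens`) are the ones this diagnostic checks; the surrounding structure is illustrative, so verify the exact layout against your own OpenClaw config file.

```json
{
  "compaction": {
    "safeguard": true,
    "contextTokens": 100000
  }
}
```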
## How Compaction Works
When the context exceeds the threshold:
1. OpenClaw summarizes older conversation turns into a compact format.
2. The summary replaces the full history, dramatically reducing the token count.
3. The agent continues with awareness of past context, just in compressed form.
This is not "forgetting"—it's intelligent compression. The agent retains the key facts and decisions from earlier in the session.
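The three steps above follow a common threshold-triggered compaction pattern, sketched below. This is a hypothetical illustration, not OpenClaw's actual internals: `count_tokens` is a crude stand-in for a real tokenizer, and `summarize` stubs out what would be a model call.

```python
# Hypothetical sketch of threshold-triggered compaction (not OpenClaw's real code).

def count_tokens(messages):
    # Crude stand-in for a real tokenizer: roughly 1 token per 4 characters.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stand-in for a model call that condenses old turns into one summary message.
    return {"role": "system", "content": f"[summary of {len(messages)} earlier messages]"}

def compact_if_needed(history, threshold=100_000, keep_recent=10):
    """If the history exceeds the token threshold, replace everything except
    the most recent messages with a single summary message."""
    if count_tokens(history) <= threshold:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

Keeping the most recent messages verbatim is what preserves the agent's short-term working state, while the summary carries the long-term facts forward.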
## Trade-offs
- Information loss: Compaction is lossy by design. Fine-grained details from early in the conversation may be lost. For sessions that require perfect recall of every line, compaction may not be the right fit.
- Compaction quality: The quality of the summary depends on the model. Better models produce better summaries.
- Threshold tuning: 100K tokens is a sensible default for most users. If your agent regularly needs deep context (e.g., reviewing a large codebase), you may want to set it higher.
## Real Numbers
A 12-hour coding session without compaction on Claude Sonnet:
| Turn Range | Avg Context Size | Cost per Turn | Turns | Subtotal |
|---|---|---|---|---|
| 1–50 | 25K tokens | $0.075 | 50 | $3.75 |
| 51–100 | 75K tokens | $0.225 | 50 | $11.25 |
| 101–200 | 150K tokens | $0.450 | 100 | $45.00 |
| **Total** | | | 200 | $60.00 |
With compaction at 100K tokens:
| Turn Range | Avg Context Size | Cost per Turn | Turns | Subtotal |
|---|---|---|---|---|
| 1–50 | 25K tokens | $0.075 | 50 | $3.75 |
| 51–100 | 75K tokens | $0.225 | 50 | $11.25 |
| 101+ | ~50K (compacted) | $0.150 | 100 | $15.00 |
| **Total** | | | 200 | $30.00 |
Savings: $30 on a single session.
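The per-turn figures in both tables are consistent with an input price of about $3 per million tokens (Claude Sonnet-class input pricing; the article does not state the rate, so this is an inferred assumption). A quick check of the arithmetic:

```python
PRICE_PER_M_INPUT = 3.0  # assumed $ per 1M input tokens

def subtotal(avg_context_tokens, turns):
    # Cost of a block of turns at a given average context size.
    return avg_context_tokens / 1_000_000 * PRICE_PER_M_INPUT * turns

without = subtotal(25_000, 50) + subtotal(75_000, 50) + subtotal(150_000, 100)
with_compaction = subtotal(25_000, 50) + subtotal(75_000, 50) + subtotal(50_000, 100)

print(f"without compaction: ${without:.2f}")         # matches the $60.00 table total
print(f"with compaction:    ${with_compaction:.2f}") # matches the $30.00 table total
print(f"savings:            ${without - with_compaction:.2f}")
```

All the savings come from the 101+ range: once compaction caps the context near 50K tokens, the most expensive stretch of the session costs a third of what it would otherwise.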
## FAQ
Q: Will my agent "forget" things from earlier in the conversation?
A: It keeps a summary of key facts and decisions. Think of it as "notes" rather than "transcript." Critical details are preserved; verbose back-and-forth is compressed.
Q: Can I adjust the threshold?
A: Yes. ClawBridge applies a sensible default, but you can manually tune `contextTokens` in your OpenClaw config.
ClawBridge is free and open source (MIT License) — install it in seconds, own it forever. Get ClawBridge Free →