OpenClaw + neuromem.cloud
Give your OpenClaw agent deep behavioral memory — not just facts, but patterns, emotions, and evolving traits that make it truly understand you.
The Problem with OpenClaw's Native Memory
OpenClaw's Markdown memory is powerful for simple use cases, but has fundamental limitations that grow with usage.
Markdown files loaded in full — high token cost, slow responses
Every conversation, OpenClaw loads MEMORY.md and memory/**/*.md files into the context window (default limit 150,000 chars, ~50K tokens). More memories mean bigger files, bloated context — token costs grow linearly and LLM responses get noticeably slower.
neuromem doesn't load files. It uses vector semantic retrieval to recall only the most relevant memory fragments on demand. For equivalent information, injected context is 1/10 the size of raw Markdown — faster responses, lower token cost.
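The difference can be sketched in a few lines. This is a toy illustration of top-k retrieval by cosine similarity, not neuromem's actual implementation — the embeddings here are hand-made 3-dimensional vectors, where a real deployment would use a model's embeddings.

```typescript
// Toy sketch: recall only the top-k relevant fragments instead of
// loading every memory file into context.

type MemoryFragment = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function recall(query: number[], store: MemoryFragment[], topK: number): MemoryFragment[] {
  // Sort a copy by descending similarity, keep only the best topK.
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}

const store: MemoryFragment[] = [
  { text: "User prefers TypeScript", embedding: [0.9, 0.1, 0.0] },
  { text: "User is learning Rust", embedding: [0.8, 0.2, 0.1] },
  { text: "User's cat is named Mochi", embedding: [0.0, 0.1, 0.9] },
];

// A query about programming languages surfaces the two language memories;
// the unrelated memory never enters the context window.
const hits = recall([1, 0, 0], store, 2);
```

The full-file approach injects all three memories every turn; semantic recall injects two — and the gap widens as the store grows.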
Can only remember, can't understand
OpenClaw's memory is flat text — you tell it "I like TypeScript" and "I'm learning Rust", it stores both. But it will never discover "this user tries a new language every 6 months, always starting from system-level languages."
neuromem's Digest periodically reflects on all memories, extracting Traits (behavioral patterns) with confidence scores, reinforcement counts, and first-observed timestamps. Memory evolves from passive storage to active insight.
Context compaction degrades memory
In long conversations, OpenClaw compresses and discards old context — a hard architectural limitation. External memory stores can fill that gap, but typical ones rely solely on vector similarity for retrieval and often miss critical context.
neuromem provides triple-path recall: vector semantic search + knowledge graph relationship traversal + contextual inference. When the user asks "how did my project go last time", it doesn't just match keywords — it chains through entities to find related people, tech stack, and timeline. Retrieval quality is fundamentally higher.
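The graph-traversal leg can be illustrated with a small sketch (the function names and traversal rule here are assumptions for illustration, not neuromem's internal API): starting from an entity matched in the query, follow relationship edges outward to pull in connected entities that similarity search alone would miss.

```typescript
// Toy knowledge graph of subject --predicate--> object triples.
type Triple = { subject: string; predicate: string; object: string };

const graph: Triple[] = [
  { subject: "User", predicate: "works-on", object: "AuthModule" },
  { subject: "AuthModule", predicate: "uses", object: "OAuth2" },
  { subject: "User", predicate: "collaborates-with", object: "Alice" },
];

// Breadth-first expansion up to `depth` hops from a seed entity.
function expand(seed: string, depth: number): Set<string> {
  const seen = new Set<string>([seed]);
  let frontier = [seed];
  for (let d = 0; d < depth; d++) {
    const next: string[] = [];
    for (const t of graph) {
      if (frontier.includes(t.subject) && !seen.has(t.object)) {
        seen.add(t.object);
        next.push(t.object);
      }
    }
    frontier = next;
  }
  return seen;
}

// From "User", two hops reach AuthModule, Alice, and OAuth2 — so a query
// about "my project" can surface the tech stack and collaborators too.
const related = expand("User", 2);
```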
Memories are isolated data points with no structure
OpenClaw's .md files are flat text. Most memory solutions only extract facts, which are still flat. neuromem organizes memories into a 4-layer cognitive structure:
Facts (persistent attributes): "User is a backend engineer, skilled in Python"
Episodes (time-bound events): "Refactored auth module on March 5, felt accomplished"
Triples (entity relationships): User --works-on--> AuthModule --uses--> OAuth2
Traits (behavioral patterns): "User performs best on refactoring tasks, prefers thinking at the architecture level"
This isn't storage optimization — it's a different cognitive level of memory.
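To make the four layers concrete, here is one way they could be modeled in TypeScript. The field names are assumptions for illustration — the actual neuromem schema may differ — but each type reflects the properties described above (emotion metadata on Episodes; confidence, reinforcement counts, and first-observed timestamps on Traits).

```typescript
// Hypothetical shapes for the 4-layer cognitive structure.

interface Fact {
  statement: string;            // "User is a backend engineer, skilled in Python"
}

interface Episode {
  summary: string;              // "Refactored auth module"
  occurredAt: string;           // "2025-03-05"
  emotion: { valence: number; arousal: number; label: string };
}

interface Triple {
  subject: string;              // "User"
  predicate: string;            // "works-on"
  object: string;               // "AuthModule"
}

interface Trait {
  pattern: string;              // "Performs best on refactoring tasks"
  confidence: number;           // 0..1, decays without reinforcement
  reinforcements: number;       // times the pattern was re-observed
  firstObserved: string;        // timestamp of first observation
}

const sampleTrait: Trait = {
  pattern: "Prefers thinking at the architecture level",
  confidence: 0.82,
  reinforcements: 5,
  firstObserved: "2025-03-05",
};
```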
What neuromem Brings to Your OpenClaw Agent
Not just better storage — a fundamentally different relationship between your agent and its memory.
Your agent gets smarter over time
A regular OpenClaw agent's "understanding depth" is the same on the 100th conversation as on the 1st — it depends only on which .md files get loaded. After connecting neuromem, the agent accumulates not just facts but behavioral patterns refined from those facts. The longer you use it, the deeper the understanding and the more precise the responses.
Coherent memory across sessions and agents
You might mention "I'm stressed lately" in your coding agent, and frequently cancel meetings in your calendar agent. Individually, these are isolated events. neuromem's digest can cross these memories to discover: "User has been under high pressure for the past two weeks, with increasing social avoidance tendency." This cross-context insight is impossible with Markdown files.
Answers "who you are", not just "what you said"
Without neuromem: recall("my coding habits") returns specific facts you mentioned. With neuromem: the same query also returns Traits — persistent behavioral patterns like "prefers functional style, writes tests before code, tends to draw architecture diagrams for complex problems." This transforms the agent from "a tool with memory" to "an assistant that understands you."
Self-maintaining memory quality
OpenClaw users frequently complain about MEMORY.md bloating, information becoming stale, requiring manual cleanup. neuromem's trait system has built-in lifecycle management — confidence decay, contradiction counting, importance thresholds. Outdated patterns naturally lose weight, validated patterns strengthen. Memory is self-maintaining, no manual cleanup needed.
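One plausible form of such lifecycle rules — an assumption for illustration, not neuromem's published algorithm — is exponential confidence decay over idle time plus a contradiction-weighted pruning threshold:

```typescript
// Confidence halves every `halfLifeDays` without reinforcement.
function decayedConfidence(confidence: number, idleDays: number, halfLifeDays = 90): number {
  return confidence * Math.pow(0.5, idleDays / halfLifeDays);
}

// A trait is pruned when its contradiction-adjusted confidence falls
// below the importance threshold.
function shouldPrune(confidence: number, contradictions: number, threshold = 0.2): boolean {
  return confidence / (1 + contradictions) < threshold;
}

// A trait unreinforced for 180 days (two half-lives) drops from 0.9 to 0.225;
// one observed contradiction then pushes it below the pruning threshold.
const stale = decayedConfidence(0.9, 180);
const pruned = shouldPrune(stale, 1);
```

Under rules like these, no manual cleanup pass is ever needed: stale patterns fade out numerically while reinforced ones keep their weight.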
Emotional and temporal dimensions
OpenClaw's memory has no emotion tagging or time awareness. neuromem's Episodes carry emotion metadata (valence, arousal, emotion labels) and temporal expressions. The agent knows not just "what the user did" but "how the user felt at the time." This is a critical differentiator for companion and coaching agents.
Role-Specific Memory Spaces
Pre-configured memory strategies for different OpenClaw roles. Pick a role template, create your memory space in 30 seconds.
Family Butler
Family tastes, children's academics, daily routines, shopping lists, health reminders
Work Assistant
Colleague relationships, project status, meeting decisions, deadlines, communication styles
Coding Companion
Tech stack, architecture decisions, code conventions, debugging experience, deployment configs
Content Creator
Writing style, publishing frequency, audience profiles, past content, trending preferences
Shopping Assistant
Budget range, brand preferences, price sensitivity, transaction history, comparison strategies
Life Coach
Emotional patterns, stress factors, exercise habits, sleep patterns, personal goals
Custom Role
Start from scratch, fully customize your memory strategy
Memory Cognitive Architecture
This isn't storage optimization — it's a different cognitive level of memory.
Facts
Persistent attributes: "User is a backend engineer, skilled in Python"
Episodes
Time-bound events + emotions: "Refactored auth module on March 5, felt accomplished"
Knowledge Graph
Entity relationships: User --works-on--> AuthModule --uses--> OAuth2
Traits
Behavioral patterns: "User performs best on refactoring tasks, prefers thinking at the architecture level"
Facts relate to Episodes → Episodes generate Knowledge Graph → All layers converge into Traits (reflection)
Best For These Scenarios
neuromem shines when your agent needs to understand you — not just remember what you said.
Long-term companion agent
Personal AI assistant that grows with you over months and years.
Needs traits and behavioral patterns, not just facts
Coding coach / mentor
Tracks your skill growth, discovers learning patterns, adapts teaching style.
Needs to detect skill progression and preferred learning approaches
Multi-agent team memory
Multiple agents sharing memory with proper isolation and knowledge graph.
Needs multi-tenant isolation and entity-relation reasoning
Wellness / journaling agent
Tracks emotional patterns, discovers stress triggers, suggests interventions.
Needs emotion metadata and sleep-time reflection on behavioral trends
Why neuromem?
Deep memory gives agents warmth.
Auto-Digest Traits
Discovers behavioral patterns from your memories — habits, preferences, emotional tendencies. Your agent gets smarter over time.
ONE LLM Mode
Reuses OpenClaw's own model for memory extraction. Zero extra LLM cost, consistent extraction quality.
Three-Layer Memory
Facts + Episodes (with emotion metadata) + Knowledge Graph. Not flat storage — structured understanding.
Quick Start
Two ways to connect — choose what fits your workflow.
1. Install the plugin:
openclaw plugins install @neuromem/openclaw-neuromem
2. Configure your API Key:
openclaw config set plugins.entries.openclaw-neuromem.config.apiKey "nm_sk_your_key_here"
Replace nm_sk_your_key_here with your API Key (get it from the neuromem.cloud dashboard).
3. Enable the memory plugin (set neuromem as OpenClaw's memory backend):
openclaw config set plugins.slots.memory "openclaw-neuromem"
4. Restart the OpenClaw Gateway:
openclaw gateway restart
Done. Auto-recall, auto-capture, and auto-digest are enabled by default.
How It Works
Three automatic hooks run during every conversation — no manual effort needed.
Auto-Recall
Before responding, the agent searches your memories for relevant facts, episodes, and behavioral traits. Context is injected into the prompt automatically.
Auto-Capture
After responding, new information is stored. In ONE LLM mode, OpenClaw's own model extracts facts, episodes, and knowledge graph triples — no extra API call.
Auto-Digest
At session end, accumulated memories are analyzed to discover behavioral patterns. These become traits — persistent insights that improve future recall.
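The three hooks can be pictured as a thin wrapper around each conversation turn. This sketch uses stand-in implementations (substring matching instead of semantic search, a counter instead of trait extraction) — the function names are illustrative, not the plugin's real API:

```typescript
type Memory = { text: string };

const stored: Memory[] = [];

// Auto-Recall: find memories relevant to the incoming message.
// The real plugin does semantic search; here we substring-match the first word.
function autoRecall(userMessage: string): Memory[] {
  const keyword = userMessage.split(" ")[0];
  return stored.filter(m => m.text.includes(keyword));
}

// Auto-Capture: persist new information after the response.
function autoCapture(userMessage: string): void {
  stored.push({ text: userMessage });
}

// Auto-Digest: at session end, analyze accumulated memories.
// Stand-in: just report how many memories were analyzed.
function autoDigest(): string[] {
  return [`digested ${stored.length} memories`];
}

// One simulated session: capture a memory, then recall it next turn.
autoCapture("Rust is my current learning focus");
const recalled = autoRecall("Rust ownership question");
const traits = autoDigest();
```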
ONE LLM Mode — Zero Extra Cost
neuromem doesn't need a separate LLM for memory extraction — your OpenClaw agent's own model handles it directly, at zero extra cost.
OpenClaw (Claude/GPT)
↓
Same LLM extracts memories
↓
Zero extra LLM cost
Agent Tools
Five tools available for explicit memory operations:
neuromem_recall: Search memories with natural language — returns facts, episodes, and behavioral traits
neuromem_store: Explicitly store information to long-term memory
neuromem_digest: Trigger behavioral pattern analysis on recent memories
neuromem_list: Browse stored memories, optionally filtered by type
neuromem_forget: Delete a specific memory by ID
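As a rough picture of what an explicit tool invocation might carry, here are two hypothetical payloads. The tool names come from the list above; the argument shapes are assumptions (though `topK` and `includeTraits` mirror the plugin's configuration parameters), and the memory ID is a made-up placeholder:

```typescript
// Hypothetical tool-call payloads for explicit memory operations.
const recallCall = {
  tool: "neuromem_recall",
  args: { query: "my coding habits", topK: 5, includeTraits: true },
};

const forgetCall = {
  tool: "neuromem_forget",
  args: { memoryId: "mem_123" },  // hypothetical ID format
};
```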
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| apiKey | string | - | Your neuromem.cloud API key (nm_sk_...) |
| baseUrl | string | https://api.neuromem.cloud | API base URL |
| autoRecall | boolean | true | Inject memories before each response |
| autoCapture | boolean | true | Store memories after each response |
| autoDigest | string | session-end | When to discover behavioral patterns ("off" or "session-end") |
| topK | integer | 10 | Max memories recalled per turn |
| includeTraits | boolean | true | Include behavioral traits in recall results |
neuromem Core Capabilities
A complete overview of memory capabilities for OpenClaw.
From remembering to understanding — one step away.
Ready to upgrade your OpenClaw's memory?
Free to start. Get your API key in 30 seconds.
Create Memory Space