
Giving OpenClaw a memory that actually works

Glauber Costa

I recently wrote about memelord, a persistent memory system for coding agents that I built with Turso. That post focused on Claude Code, but memelord is agent-agnostic: the MCP server doesn't care who's calling it. So now that everybody is talking about OpenClaw and speculating about how many billions of dollars it was acquired for, the natural question was: does it work there too?

It does. And it took about five minutes to set up. Here's what we did.

#Why OpenClaw Needs This

OpenClaw stores memory in Markdown files under ~/.openclaw/workspace/. It chunks them into ~400-token segments and then retrieves the relevant context. That's a reasonable starting point, but it has a fundamental limitation: all memories are treated equally.

The correction that saved you from a 30-minute detour last session has the same weight as a stale observation from three weeks ago. There's no feedback loop. The agent can't learn which of its memories are actually useful, and memories that become outdated never go away on their own.

For the first few weeks this is fine. But as your memory files grow, retrieval quality degrades. You get fragments that are semantically similar to your query but not actually helpful for the task at hand.

#What Memelord Adds

Memelord is a per-project memory system powered by Turso's native vector search. Every memory has a weight that changes over time based on actual usefulness:

  • When the agent retrieves a memory and reports it was directly useful (score 3/3), that memory's weight goes up.
  • When a memory gets retrieved but ignored (score 0/3), it decays.
  • When a memory is actively wrong, the agent deletes it on the spot with memory_contradict and optionally stores a correction.
  • Memories that go unused across sessions gradually lose weight through time decay.
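The update rule behind those bullets can be sketched in a few lines. This is my own illustration, not memelord's actual implementation: the learning rate, decay constant, and function names are all assumptions.

```python
# Hypothetical sketch of memelord-style weight updates.
# LEARNING_RATE and TIME_DECAY are illustrative constants, not memelord's.

LEARNING_RATE = 0.2
TIME_DECAY = 0.995  # per-day multiplier applied to unused memories


def update_weight(weight: float, usefulness: int, max_score: int = 3) -> float:
    """Nudge a memory's weight toward its reported usefulness (0..max_score)."""
    target = usefulness / max_score          # 3/3 -> 1.0, 0/3 -> 0.0
    return weight + LEARNING_RATE * (target - weight)


def decay_weight(weight: float, idle_days: float) -> float:
    """Apply time decay to a memory that went unretrieved across sessions."""
    return weight * (TIME_DECAY ** idle_days)


w = 0.5
w = update_weight(w, 3)   # reported directly useful: weight rises
print(round(w, 3))        # 0.6
w = update_weight(w, 0)   # retrieved but ignored: weight decays
print(round(w, 3))        # 0.48
```

The key property is that the update is proportional to the gap between the weight and the outcome, so a memory that keeps proving useful converges upward while one that keeps getting ignored bleeds out.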

This creates a self-improving system. Corrections that consistently help survive. Stale insights fade. Wrong memories get cleaned up. The agent stops making the same mistake twice.

There are different categories of memories:

  • Corrections: the agent tried the wrong approach and found the right one ("config is in .env.local, not config.json")
  • Insights: codebase knowledge discovered during exploration ("auth middleware is in src/middleware/auth.rs")
  • User input: things the user explicitly told the agent ("we use pnpm, not npm")

Each category gets a different initial weight. User corrections start highest, because if the user had to tell the agent something, it really should remember.
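The post doesn't spell out the numbers, so treat these values as placeholders, but the shape of the category-to-weight mapping is simple:

```python
# Hypothetical initial weights per memory category.
# The ordering matches the post; the actual values are assumptions.
INITIAL_WEIGHTS = {
    "user_input": 1.0,   # user told the agent directly: starts highest
    "correction": 0.8,   # hard-won fix after a failed approach
    "insight": 0.5,      # passive observation made during exploration
}


def initial_weight(category: str) -> float:
    """Look up the starting weight for a new memory, with a neutral default."""
    return INITIAL_WEIGHTS.get(category, 0.5)


print(initial_weight("user_input"))  # 1.0
```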

The storage is a single SQLite file per project (.memelord/memory.db), with embeddings generated locally using all-MiniLM-L6-v2. Everything runs on your machine.

#Setting It Up

OpenClaw connects to external tool servers via mcporter. Memelord has first-class OpenClaw support, and memelord init generates the mcporter config automatically.

1) Install memelord and mcporter:

npm install -g memelord mcporter

2) Initialize memelord in your project:

cd your-project
memelord init

This creates a .memelord/ directory with a local SQLite database and writes config/mcporter.json with the memelord MCP server configuration. It also sets up configs for Claude Code, Codex, and OpenCode if you use those.

3) Verify it works:

mcporter list memelord

You should see 5 tools: memory_start_task, memory_report, memory_end_task, memory_contradict, and memory_status.

That's it. OpenClaw's mcporter skill will discover and use these tools during agent sessions.

#How It Works In Practice

The memory lifecycle follows the agent's workflow. When a task starts, the agent calls memory_start_task with a description:

mcporter call memelord.memory_start_task description="Fix the auth middleware bug"

Memelord searches its vector index and returns memories ranked by a combination of cosine similarity and weight. If you fixed an auth bug last week and stored a correction about the middleware location, that memory surfaces now.

As the agent works, it can store new memories:

mcporter call memelord.memory_report type=correction \
  lesson="Auth config is in src/middleware/auth.rs, not src/auth/config.rs" \
  what_failed="Looked in src/auth/config.rs" \
  what_worked="Found it in src/middleware/auth.rs"

When done, the agent ends the task with outcome metrics:

mcporter call memelord.memory_end_task \
  task_id="<id>" \
  tokens_used=12000 \
  tool_calls=35 \
  errors=2 \
  user_corrections=0 \
  completed=true

These metrics feed into the reinforcement learning loop. Tasks that used fewer tokens and had fewer errors score higher, and the memories retrieved for those tasks get credit. Over time, the system learns which memories actually help.
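One way to picture that credit assignment is below. The formulas and constants are my own guesses at the shape of the loop, not memelord's actual scoring:

```python
# Hypothetical sketch: turn end-of-task metrics into a score, then
# credit every memory that was retrieved for the task.


def task_score(tokens_used: int, errors: int, user_corrections: int,
               completed: bool, token_budget: int = 50_000) -> float:
    """Higher is better: cheap, error-free, completed tasks score near 1.0."""
    if not completed:
        return 0.0
    efficiency = max(0.0, 1.0 - tokens_used / token_budget)
    penalty = 0.1 * errors + 0.2 * user_corrections
    return max(0.0, efficiency - penalty)


def credit_memories(weights: dict[str, float], score: float,
                    lr: float = 0.2) -> dict[str, float]:
    """Nudge each retrieved memory's weight toward the task's score."""
    return {mem_id: w + lr * (score - w) for mem_id, w in weights.items()}


# The metrics from the memory_end_task call above:
s = task_score(tokens_used=12_000, errors=2, user_corrections=0, completed=True)
print(round(s, 3))  # 0.56
```

A good task lifts every memory that contributed to it; a task that burned tokens or needed user corrections drags them down.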

#The Storage Layer

Under the hood, memelord uses Turso's native vector search. Turso is a full rewrite of SQLite that adds, among other things, a vector32 column type and vector_distance_cos() function for cosine similarity search.

The retrieval query combines similarity, the learned weight, and a recency decay:

SELECT id, content, category, weight
FROM memories
WHERE vector_distance_cos(embedding, vector32(?)) < 0.8
ORDER BY (1.0 - vector_distance_cos(embedding, vector32(?)))
       * weight
       * POWER(0.995, julianday('now') - julianday(created_at)) DESC
LIMIT 10;

Recent, high-weight memories that are semantically similar to the current task bubble to the top. Old memories that haven't been useful gradually fade, even if they were once highly rated.
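To get a feel for that 0.995 daily decay factor, here's the math from the query's POWER() term in isolation:

```python
# The per-day recency multiplier from the retrieval query:
# POWER(0.995, age_days).
DECAY = 0.995

for days in (7, 30, 90):
    print(days, round(DECAY ** days, 3))
# 7 days  -> ~0.97 of the original score
# 30 days -> ~0.86
# 90 days -> ~0.64
```

The decay is gentle on the scale of a week but meaningful on the scale of months, which matches the goal: don't punish last week's fix, do let last quarter's stale observation sink.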

Just a SQLite file you can ls -la and reason about.

#Summary

Memelord gives OpenClaw weighted memories with reinforcement learning, automatic correction detection, and a self-cleaning feedback loop. Thanks to mcporter, the setup is three commands.

If you're using OpenClaw and want your agent to stop repeating itself, give it a try: github.com/glommer/memelord. And if you want to build your own memory system with native vector search in SQLite, check out Turso.