Compare
Agent memory solutions differ in where memories live, how they’re shared, and what you have to run.
Landscape #
| Approach | Examples | How it works | Strengths | Limitations |
|---|---|---|---|---|
| Built-in memory | Claude Code auto-memory, Windsurf Memories, Cursor .cursorrules | Markdown files on disk, scoped to one client | Zero setup, works out of the box | Siloed per client. No semantic search, just text injection into context |
| Session capture | claude-mem, claude-memory | Records tool calls into local SQLite, replays context next session | Rich session history, automatic capture | Local to one machine. Sharing requires exporting/syncing the database |
| Vault-based | Nexus, Obsidian MCP | Markdown files in an Obsidian vault, with optional local embeddings | Human-readable graph of notes, Obsidian’s linking UI | Requires Obsidian running as a bridge. Sync needs Obsidian Sync or file-sync |
| Agent-native | OpenClaw | SQLite + Markdown bundled with the agent runtime | Self-contained, no external services | Tied to one runtime. Each agent has its own database |
| Learning agent | Agent Zero | FAISS vector search with LLM-powered memory extraction. Four memory areas | Automatic extraction without explicit calls. LLM consolidation prevents bloat | Tied to Agent Zero framework. Needs an LLM for extraction. Local files only |
| Memory platform | Mem0 / OpenMemory | MCP server backed by Qdrant + PostgreSQL + LLM | Automatic memory extraction, knowledge graphs (Neo4j optional), managed cloud | Three containers minimum plus an LLM. Can struggle with context isolation |
| Knowledge engine | Cognee | LLM pipeline that extracts entities and relationships, builds a knowledge graph | Builds knowledge graphs from unstructured data, self-improving | Needs an LLM (OpenAI default). Cloud from $35/month. More infrastructure than a memory store |
| Cognitive database | MuninnDB | Single-binary database with ACT-R decay, Hebbian co-activation, Bayesian confidence | Purpose-built for cognitive memory. Sub-20ms queries. No LLM needed | New project, smaller ecosystem. Dedicated binary to run |
| Shared database | Ogham MCP | MCP server backed by Supabase PostgreSQL + pgvector | Lightweight — one server, one database, local embeddings. No LLM needed. Cognitive scoring + auto-linking in SQL | Requires Supabase + Ollama. No automatic memory extraction — you store what you choose to |
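Since Ogham's memory creation is explicit rather than LLM-extracted, storing a memory is a deliberate tool call. As a sketch, such a call over MCP might look like the request below — the `tools/call` envelope comes from the MCP specification, but the argument fields shown for `store_memory` are illustrative assumptions, not Ogham's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "User prefers pytest over unittest",
      "tags": ["preferences", "testing"]
    }
  }
}
```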
Ogham vs Mem0 vs Cognee vs Agent Zero #
Four ways to do agent memory, each with different infrastructure and portability trade-offs.
Mem0 uses an LLM to extract and deduplicate memories from conversations, supports knowledge graphs via Neo4j, and has a managed cloud option. A full deployment runs three containers (API, Qdrant, Postgres) plus an LLM.
Cognee adds a knowledge graph on top — an LLM pipeline extracts entities and relationships from your data, then refines the graph over time. An LLM runs on every ingestion step, self-hosting recommends 32B+ models, and cloud has a free tier (paid plans from $35/month).
Agent Zero bakes memory into the agent framework itself. It extracts conversation fragments and problem-solving patterns via LLM, then consolidates them to prevent bloat. Four memory areas organize what gets stored, and FAISS handles local vector search. The trade-off: it only works inside Agent Zero, and memories live in local project directories.
Ogham MCP skips the LLM entirely for memory processing. It embeds and indexes what you give it, ranks results with cognitive scoring (recency, frequency, confidence) on top of hybrid search, and discovers relationships by embedding similarity. Weighted edges in PostgreSQL, traversable via recursive CTEs.
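A weighted-edge traversal of the kind described can be sketched with a recursive CTE. The schema and column names below are illustrative, not Ogham's actual tables, and SQLite stands in for PostgreSQL here only so the sketch is self-contained — both support `WITH RECURSIVE`:

```python
import sqlite3

# Illustrative schema: memories plus similarity-weighted links between them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT);
CREATE TABLE memory_links (
    source INTEGER, target INTEGER, weight REAL  -- embedding-similarity weight
);
INSERT INTO memories VALUES
    (1, 'prefers dark mode'),
    (2, 'uses VS Code'),
    (3, 'theme: Gruvbox'),
    (4, 'unrelated note');
INSERT INTO memory_links VALUES (1, 2, 0.81), (2, 3, 0.74), (3, 4, 0.30);
""")

# Walk outward from memory 1, multiplying edge weights along each path and
# pruning paths whose combined weight drops below a threshold.
rows = conn.execute("""
WITH RECURSIVE related(id, path_weight, depth) AS (
    SELECT 1, 1.0, 0
    UNION ALL
    SELECT l.target, r.path_weight * l.weight, r.depth + 1
    FROM memory_links l JOIN related r ON l.source = r.id
    WHERE r.path_weight * l.weight > 0.5 AND r.depth < 3
)
SELECT m.id, m.content, round(related.path_weight, 2)
FROM related JOIN memories m ON m.id = related.id
WHERE related.id != 1
ORDER BY related.depth
""").fetchall()

for row in rows:
    print(row)  # the weak 0.30 link prunes 'unrelated note' from the results
```

The threshold-on-product pruning is one plausible design choice; it lets closely linked memories surface together while distant, weakly linked ones fall away.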
| | Mem0 / OpenMemory | Cognee | Agent Zero | Ogham MCP |
|---|---|---|---|---|
| Architecture | MCP server + 3 containers | MCP server + graph/vector backends | Built into agent framework | MCP server + Supabase |
| Vector store | Qdrant | Qdrant, LanceDB, Milvus, pgvector, or others | FAISS (local files) | pgvector (inside Supabase) |
| Graph store | Neo4j (optional) | Neo4j, Kuzu, FalkorDB, or NetworkX | None | PostgreSQL (recursive CTEs) |
| LLM required | Yes, for memory extraction | Yes, for entity/relationship extraction | Yes, for extraction + consolidation | No |
| Embeddings | OpenAI (default) or self-hosted | OpenAI, Ollama, or others | 100+ providers via LiteLLM | Ollama local (default) or OpenAI |
| Memory creation | Automatic (LLM extracts) | Automatic (LLM builds graph) | Automatic (LLM extracts) | Explicit (store_memory, or hooks/skills) |
| Ranking | Semantic similarity | Graph traversal + vector search | Cosine similarity + metadata | Hybrid search + cognitive scoring (ACT-R + confidence) |
| Graph building | LLM entity extraction (optional) | LLM pipeline (required) | None | Embedding similarity — auto-linked, no LLM |
| Cross-client sharing | Yes (MCP server) | Yes (MCP server) | No (framework-bound) | Yes (MCP server, shared database) |
| Cross-machine sharing | Yes (cloud or self-hosted) | Yes (cloud or self-hosted) | No (sync manually) | Yes (Supabase cloud or self-hosted PostgreSQL) |
| Managed cloud | Yes (mem0.ai) | Yes (free tier, paid from $35/month) | No | No dedicated service (runs on Supabase's cloud, which has a free tier) |
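The "cognitive scoring (ACT-R + confidence)" entry can be made concrete. The base-level activation equation below is the classic ACT-R formula; the sigmoid squash, blend weights, and function names are assumptions for illustration, not Ogham's actual scoring code:

```python
import math

def activation(retrieval_times, now, decay=0.5):
    # ACT-R base-level activation: B = ln( sum_j (now - t_j)^-d ).
    # Activation decays as a power law of time since each retrieval, so
    # memories that are both recent and frequently used score higher.
    return math.log(sum((now - t) ** -decay for t in retrieval_times))

def score(similarity, act, confidence, w_sim=0.7, w_act=0.3):
    # Hypothetical blend: squash activation into (0, 1) with a sigmoid so
    # it composes with cosine similarity, then scale by confidence.
    recency = 1.0 / (1.0 + math.exp(-act))
    return confidence * (w_sim * similarity + w_act * recency)

now = 1_000.0
fresh = activation([990.0, 950.0, 900.0], now)  # retrieved often, recently
stale = activation([100.0], now)                # retrieved once, long ago
print(fresh > stale)                            # recency + frequency win
print(score(0.8, fresh, confidence=1.0) > score(0.8, stale, confidence=1.0))
```

The point of such a blend is that two memories with identical semantic similarity to the query can still rank differently based on how recently and how often each has been used.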
Why a shared database matters #
Most solutions store memories locally and then face a sync problem when you want to share across machines or clients. Ogham puts the database in the cloud from the start. The MCP server is stateless; add it to a new machine and your memories are already there.
The trade-off is real: local-only solutions are simpler to set up and keep all data on your machine. Ogham needs a database and an embedding service, and your memories live in Supabase (you can self-host PostgreSQL if that matters). But for multi-device workflows, a shared database beats cobbling together file sync for Markdown memories.