Ancient Ogham standing stones connected by luminous threads

Ogham MCP ᚛ᚑᚌᚆᚐᚋ᚜

Pronounced "OH-um" in modern Irish

Shared persistent memory for AI agents. Store a memory in Claude Desktop, recall it from Cursor or Claude Code. One database, every client.

᚛ᚑᚌᚆᚐᚋ᚜
A single Ogham stone with glowing inscriptions

Hybrid Search

Vector similarity + keyword matching, merged with Reciprocal Rank Fusion. Find memories by meaning or exact terms, in a single SQL query.
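The server does this fusion inside Postgres; as a plain-Python sketch of what Reciprocal Rank Fusion computes (assuming the conventional k=60 constant — Ogham's exact parameter isn't stated here), each memory's fused score is the sum of 1/(k + rank) over every result list it appears in:

```python
# Sketch of Reciprocal Rank Fusion (RRF). Each input list holds memory IDs
# ordered best-first; a memory's fused score sums 1/(k + rank) across lists,
# so items ranked well by BOTH retrievers rise to the top. k=60 is the
# conventional default, assumed here.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)  # best first

vector_hits  = ["m3", "m1", "m7"]   # ranked by embedding similarity
keyword_hits = ["m1", "m9", "m3"]   # ranked by full-text relevance
print(rrf_fuse([vector_hits, keyword_hits]))  # → ['m1', 'm3', 'm9', 'm7']
```

Note that m1, present near the top of both lists, beats m3, which topped only one — that mutual-agreement bias is the point of RRF.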

Ogham stones connected by glowing root networks

Knowledge Graph

Memories auto-link by embedding similarity. Traverse relationships with recursive CTEs. No graph database, no LLM in the write path.
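In Postgres the traversal is a recursive CTE; the equivalent logic, sketched in plain Python over an in-memory link table (the table shape and depth limit are illustrative assumptions, not Ogham's schema), is a depth-bounded breadth-first walk:

```python
from collections import deque

# Plain-Python equivalent of a recursive-CTE graph walk: breadth-first
# traversal over an adjacency table of auto-created memory links, bounded
# by max_depth. The link data and depth cap are hypothetical.
links = {
    "m1": ["m2", "m3"],
    "m2": ["m4"],
    "m3": [],
    "m4": [],
}

def related(start, max_depth=2):
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # depth cap reached; don't expand further
        for neighbor in links.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                found.append(neighbor)
                queue.append((neighbor, depth + 1))
    return found

print(related("m1"))  # → ['m2', 'm3', 'm4']  (two hops from m1)
```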

Aerial view of a forest with luminous threads connecting trees

Cognitive Scoring

ACT-R inspired ranking boosts frequently and recently accessed memories. Bayesian confidence lets agents verify or dispute facts. All computed in SQL.
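The ACT-R base-level activation behind that ranking can be sketched as ln(Σ t_j^-d) over the times since each past access — frequent and recent use raises the score, disuse decays it. The decay d=0.5 below is the conventional ACT-R default; Ogham's actual parameters may differ:

```python
import math

# ACT-R base-level activation, a minimal sketch: a memory accessed at times
# t_1..t_n ago (here, in hours) gets activation ln(sum(t_j ** -d)).
# d=0.5 is the standard ACT-R decay; the unit and parameter are assumptions.
def activation(hours_since_accesses, d=0.5):
    return math.log(sum(t ** -d for t in hours_since_accesses))

fresh = activation([1, 2, 5])   # accessed often and recently
stale = activation([500])       # touched once, long ago
print(fresh > stale)            # → True: recency and frequency both boost rank
```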

Split workspace — two profiles, one system

Profile Isolation

Partition memories by context — work, personal, per-project. Switch profiles instantly. Inspired by Severance: what one profile knows, the others don't.

Three standing stones connected by a single glowing thread

Cross-Client Memory

Store a memory in Claude Desktop, recall it from Cursor or Claude Code. Every client hits the same Supabase database — no sync, no export, no intermediary.

An Ogham stone sheltered within an ancient oak tree

Private by Design

Embeddings run locally with Ollama by default — your data goes from your machine to your database and nowhere else. OpenAI embeddings available as an opt-in alternative.

᚛ᚑᚌᚆᚐᚋ᚜

How it works

graph LR
    CD[Claude Desktop] --> S
    CU[Cursor] --> S
    CC[Claude Code] --> S
    CX[Codex] --> S
    subgraph S["Ogham MCP Server"]
        MCP["MCP (stdio)"]
        Tools[Tools]
    end
    Tools --> DB["Supabase PostgreSQL\n+ pgvector"]
    Tools --> OL["Ollama\n(local embeddings)"]

When you store a memory, Ogham generates a 768-dimensional embedding with Ollama, saves the text and vector to Supabase, and automatically links it to similar memories already in the database.
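The store path can be sketched in miniature — embed, save, auto-link anything above a similarity threshold. Everything here is a stand-in: the toy `embed()` replaces Ollama's 768-dimensional model, the dict replaces Supabase, and the 0.8 threshold and function names are assumptions, not Ogham's actual API:

```python
import math

# Hypothetical sketch of the write path: embed the text, link it to existing
# memories whose cosine similarity clears a threshold, then save. No LLM is
# involved at write time — linking is pure vector math.
def embed(text):
    # Toy 3-dim "embedding" from word lengths (real server: 768 dims via Ollama)
    vec = [float(len(w)) for w in (text.split() + ["", "", ""])[:3]]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # unit vectors, so dot = cosine

db = {}  # memory_id -> (text, vector); stands in for the Supabase table

def store(memory_id, text, link_threshold=0.8):
    vec = embed(text)
    links = [mid for mid, (_, v) in db.items() if cosine(vec, v) >= link_threshold]
    db[memory_id] = (text, vec)
    return links  # auto-created links to similar existing memories
```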

Searching uses hybrid retrieval: vector similarity and keyword matching run together. A search for us-east-1 finds the exact match via full-text search, while “which AWS region do we use” finds it via semantic understanding. Both happen in the same query.

Results get re-ranked by three signals:

  • Fused semantic + keyword ranking via RRF
  • An ACT-R formula that weights how often and how recently each memory was accessed
  • A Bayesian confidence score that agents can raise (verified) or lower (disputed)

Memories you use often stay sharp. Rarely accessed ones fade. Disputed ones drop in ranking without being deleted.
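The verify/dispute signal fits a Beta-Bernoulli model — one plausible realization, sketched below (Ogham's actual prior and update rule aren't specified here). Confidence is the posterior mean α/(α+β): verifications push it toward 1, disputes toward 0, and neither deletes the memory:

```python
# Bayesian confidence as a Beta posterior mean, a sketch: start from an
# uninformative Beta(1, 1) prior, add one to alpha per verification and one
# to beta per dispute. The prior values are assumptions.
def confidence(verified, disputed, prior_alpha=1.0, prior_beta=1.0):
    alpha = prior_alpha + verified
    beta = prior_beta + disputed
    return alpha / (alpha + beta)

print(confidence(0, 0))  # → 0.5   fresh memory, no evidence either way
print(confidence(4, 0))  # → 0.833... repeatedly verified
print(confidence(1, 5))  # → 0.25  disputed: ranked lower, never deleted
```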

᚛ᚑᚌᚆᚐᚋ᚜

Try it

# In any AI client with Ogham connected:
> "Remember that our Azure resource groups use {env}-{service}"

# Later, in the same client or a different one:
> "What do you know about our Azure resource naming?"

It works across clients because they all hit the same Supabase database.