Real long-term memory for AI agents. Not RAG. Not a vector DB. Self-hosted, Python + Node.
OpenMemory is a cognitive memory engine for LLMs and agents.
- 🧠 Real long-term memory (not just embeddings in a table)
- 💾 Self-hosted, local-first (SQLite / Postgres)
- 🐍 Python + 🟦 Node SDKs
- 🧩 Integrations: LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code
- 📥 Sources: GitHub, Notion, Google Drive, OneDrive, Web Crawler
- 🔍 Explainable traces (see why something was recalled)
Your model stays stateless. Your app stops being amnesiac.
OpenMemory runs two ways. Use the SDKs when you want embedded, local memory inside your app. Spin up a shared OpenMemory backend (HTTP API + MCP + dashboard) when you want multi‑user, org‑wide memory.
Install:
```bash
pip install openmemory-py
```

Use:
```python
from openmemory.client import Memory

mem = Memory()
mem.add("user prefers dark mode", user_id="u1")
results = mem.search("preferences", user_id="u1")
```

Note:
`add`, `search`, `get`, and `delete` are async. Use `await` in async contexts.
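For example, a minimal sketch in an async context (assuming the same `Memory` API as above):

```python
import asyncio

from openmemory.client import Memory

async def main():
    mem = Memory()
    await mem.add("user prefers dark mode", user_id="u1")
    results = await mem.search("preferences", user_id="u1")
    print(results)

asyncio.run(main())
```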
OpenAI wrapper:

```python
from openai import OpenAI

from openmemory.client import Memory

mem = Memory()
client = mem.openai.register(OpenAI(), user_id="u1")
resp = client.chat.completions.create(...)
```

LangChain:

```python
from openmemory.integrations.langchain import OpenMemoryChatMessageHistory

history = OpenMemoryChatMessageHistory(memory=mem, user_id="u1")
```
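A hedged usage sketch, assuming `OpenMemoryChatMessageHistory` implements LangChain's standard `BaseChatMessageHistory` interface (an assumption; check the integration docs):

```python
# Assumes the standard BaseChatMessageHistory methods are available.
history.add_user_message("I prefer dark mode")
history.add_ai_message("Noted, dark mode it is.")
print(history.messages)  # persisted in OpenMemory instead of process memory
```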
OpenMemory is designed to sit behind agent frameworks and UIs:

- Crew-style agents: use `Memory` as a shared long-term store
- AutoGen-style orchestrations: store dialog + tool calls as episodic memory
- Streamlit apps: give each user a persistent memory keyed by `user_id` (see the sketch below)
See the integrations section in the docs for concrete patterns.
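As one example of the Streamlit pattern, a minimal sketch (assumes the synchronous `Memory` API shown above and that `search` returns an iterable of results; how you derive `user_id` depends on your auth setup):

```python
import streamlit as st

from openmemory.client import Memory

mem = Memory()  # one store; memories are partitioned per user via user_id
user_id = st.session_state.get("user_id", "u1")  # placeholder identity

note = st.text_input("Remember something")
if note:
    mem.add(note, user_id=user_id)

query = st.text_input("Recall")
if query:
    for hit in mem.search(query, user_id=user_id):
        st.write(hit)
```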
Install:
```bash
npm install openmemory-js
```

Use:
```javascript
import { Memory } from "openmemory-js"

const mem = new Memory()
await mem.add("user likes spicy food", { user_id: "u1" })
const results = await mem.search("food?", { user_id: "u1" })
```

Drop this into:
- Node backends
- CLIs
- local tools
- anything that needs durable memory without running a separate service.
Ingest data from external sources directly into memory:
```python
# python
github = mem.source("github")
await github.connect(token="ghp_...")
await github.ingest_all(repo="owner/repo")
```

```javascript
// javascript
const github = await mem.source("github")
await github.connect({ token: "ghp_..." })
await github.ingest_all({ repo: "owner/repo" })
```

Available connectors: `github`, `notion`, `google_drive`, `google_sheets`, `google_slides`, `onedrive`, `web_crawler`
OpenMemory can run inside your app or as a central service.
- ✅ Local SQLite by default
- ✅ Supports external DBs (via config; see the sketch below)
- ✅ Great fit for LangChain / LangGraph / CrewAI / notebooks
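For instance, pointing the SDK at Postgres instead of the default SQLite might look like this (the `db_url` parameter name is an assumption; check the Python SDK docs for the real config option):

```python
from openmemory.client import Memory

# Hypothetical parameter name; see the SDK docs for the actual config.
mem = Memory(db_url="postgresql://user:pass@localhost:5432/openmemory")
```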
Docs: https://openmemory.cavira.app/docs/sdks/python
- Same cognitive model as Python
- Ideal for JS/TS applications
- Can either run fully local or talk to a central backend
Docs: https://openmemory.cavira.app/docs/sdks/javascript
Use when you want:
- org‑wide memory
- HTTP API
- dashboard
- MCP server for Claude / Cursor / Windsurf
Run from source:
```bash
git clone https://github.com/CaviraOSS/OpenMemory.git
cd OpenMemory
cp .env.example .env
cd backend
npm install
npm run dev   # default :8080
```

Or with Docker:

```bash
docker compose up --build -d
```

The backend exposes:

- `/api/memory/*` – memory operations
- `/api/temporal/*` – temporal knowledge graph
- `/mcp` – MCP server
- dashboard UI
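As a hedged illustration of talking to the HTTP API directly (the route and payload shape below are assumptions, not the documented contract; check the API docs):

```python
import requests

# Hypothetical endpoint and fields; the real /api/memory/* routes may differ.
resp = requests.post(
    "http://localhost:8080/api/memory/add",
    json={"content": "user prefers dark mode", "user_id": "u1"},
)
print(resp.json())
```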
LLMs forget everything between messages.
Most “memory” solutions are really just RAG pipelines:
- text is chunked
- embedded into a vector store
- retrieved by similarity
They don’t understand:
- whether something is a fact, event, preference, or feeling
- how recent / important it is
- how it links to other memories
- what was true at a specific time
Cloud memory APIs add:
- vendor lock‑in
- latency
- opaque behavior
- privacy problems
OpenMemory gives you an actual memory system:
- 🧠 Multi‑sector memory (episodic, semantic, procedural, emotional, reflective)
- ⏱ Temporal reasoning (what was true when)
- 📉 Decay & reinforcement instead of dumb TTLs
- 🕸 Waypoint graph (associative, traversable links)
- 🔍 Explainable traces (see which nodes were recalled and why)
- 🏠 Self‑hosted, local‑first, you own the DB
- 🔌 SDKs + server + VS Code + MCP
It behaves like a memory module, not a “vector DB with marketing copy”.
Vector DB + LangChain (cloud-heavy, lots of ceremony):

```python
import os
import time

from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain_community.vectorstores import Pinecone
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

os.environ["PINECONE_API_KEY"] = "sk-..."
os.environ["OPENAI_API_KEY"] = "sk-..."

time.sleep(3)  # cloud warmup

embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_existing_index(index_name="my-memory", embedding=embeddings)
retriever = pinecone.as_retriever(search_kwargs={"k": 2})
memory = VectorStoreRetrieverMemory(retriever=retriever)
conversation = ConversationChain(llm=ChatOpenAI(), memory=memory)
conversation.predict(input="I'm allergic to peanuts")
```

OpenMemory (3 lines, local file, no vendor lock-in):
```python
from openmemory.client import Memory

mem = Memory()
mem.add("user allergic to peanuts", user_id="user123")
results = mem.search("allergies", user_id="user123")
```

✅ Zero cloud config • ✅ Local SQLite • ✅ Offline‑friendly • ✅ Your DB, your schema
- **Multi-sector memory** – episodic (events), semantic (facts), procedural (skills), emotional (feelings), reflective (insights).
- **Temporal knowledge graph** – `valid_from` / `valid_to`, point‑in‑time truth, evolution over time.
- **Composite scoring** – salience + recency + coactivation, not just cosine distance (see the sketch after this list).
- **Decay engine** – adaptive forgetting per sector instead of hard TTLs.
- **Explainable recall** – “waypoint” traces that show exactly which nodes were used in context.
- **Embeddings** – OpenAI, Gemini, Ollama, AWS, synthetic fallback.
- **Integrations** – LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code, IDEs.
- **Connectors** – import from GitHub, Notion, Google Drive, Google Sheets/Slides, OneDrive, Web Crawler.
- **Migration tool** – import memories from Mem0, Zep, Supermemory and more.
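To make composite scoring concrete, here is an illustrative sketch; the weights, decay shapes, and names below are assumptions for explanation, not OpenMemory's actual scorer:

```python
import math
import time

def composite_score(cosine_sim: float, salience: float,
                    last_access_ts: float, coactivations: int,
                    half_life_s: float = 7 * 24 * 3600) -> float:
    """Illustrative only: blends similarity with memory dynamics.

    OpenMemory's real scorer may weight and shape these differently.
    """
    age_s = time.time() - last_access_ts
    recency = math.exp(-age_s * math.log(2) / half_life_s)  # halves every week
    coactivation = 1 - math.exp(-0.5 * coactivations)       # saturating boost
    return 0.5 * cosine_sim + 0.2 * salience + 0.2 * recency + 0.1 * coactivation
```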
If you’re building agents, copilots, journaling systems, knowledge workers, or coding assistants, OpenMemory is the piece that turns them from “goldfish” into something that actually remembers.
OpenMemory ships a native MCP server, so any MCP‑aware client can treat it as a tool.
```bash
claude mcp add --transport http openmemory http://localhost:8080/mcp
```

Or in `.mcp.json`:
```json
{
  "mcpServers": {
    "openmemory": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Available tools include:

- `openmemory_query`
- `openmemory_store`
- `openmemory_list`
- `openmemory_get`
- `openmemory_reinforce`
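Under the hood these are ordinary MCP tool calls; a hedged illustration of what a client sends over JSON-RPC (the `arguments` schema here is an assumption, not the documented tool contract):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "openmemory_query",
    "arguments": { "query": "preferences", "user_id": "u1" }
  }
}
```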
Your IDE assistant can query, store, list, and reinforce memories without you wiring every call manually.
OpenMemory treats time as a first‑class dimension.
- `valid_from` / `valid_to` – truth windows
- auto‑evolution – new facts close previous ones
- confidence decay – old facts fade gracefully
- point‑in‑time queries – “what was true on X?”
- timelines – reconstruct an entity’s history
- change detection – see when something flipped
```http
POST /api/temporal/fact
{
  "subject": "CompanyX",
  "predicate": "has_CEO",
  "object": "Alice",
  "valid_from": "2021-01-01"
}
```

Then later:
```http
POST /api/temporal/fact
{
  "subject": "CompanyX",
  "predicate": "has_CEO",
  "object": "Bob",
  "valid_from": "2024-04-10"
}
```

Alice’s term is automatically closed; timeline queries stay sane.
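A point‑in‑time lookup would then resolve against these truth windows. A hedged sketch (the route and parameters below are assumptions; see the temporal API docs for the real shape):

```python
import requests

# Hypothetical query endpoint; check the temporal API docs for the real route.
resp = requests.get(
    "http://localhost:8080/api/temporal/at",
    params={"subject": "CompanyX", "predicate": "has_CEO", "date": "2023-06-01"},
)
print(resp.json())  # expected: Alice, since Bob only takes over on 2024-04-10
```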
The `opm` CLI talks directly to the engine / server.

```bash
cd backend
npm install
npm link   # adds `opm` to your PATH
```

```bash
opm add "user prefers dark mode" --user u1 --tags prefs
opm query "preferences" --user u1 --limit 5
opm list --user u1
opm reinforce <id>
opm stats
```

Useful for scripting, debugging, and non‑LLM pipelines that still want memory.
OpenMemory uses Hierarchical Memory Decomposition with a temporal graph on top.
```mermaid
graph TB
classDef inputStyle fill:#eceff1,stroke:#546e7a,stroke-width:2px,color:#37474f
classDef processStyle fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#0d47a1
classDef sectorStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#e65100
classDef storageStyle fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#880e4f
classDef engineStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#4a148c
classDef outputStyle fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#1b5e20
classDef graphStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#01579b
INPUT[Input / Query]:::inputStyle
CLASSIFIER[Sector Classifier]:::processStyle
EPISODIC[Episodic]:::sectorStyle
SEMANTIC[Semantic]:::sectorStyle
PROCEDURAL[Procedural]:::sectorStyle
EMOTIONAL[Emotional]:::sectorStyle
REFLECTIVE[Reflective]:::sectorStyle
EMBED[Embedding Engine]:::processStyle
SQLITE[(SQLite/Postgres<br/>Memories / Vectors / Waypoints)]:::storageStyle
TEMPORAL[(Temporal Graph)]:::storageStyle
subgraph RECALL_ENGINE["Recall Engine"]
VECTOR[Vector Search]:::engineStyle
WAYPOINT[Waypoint Graph]:::engineStyle
SCORING[Composite Scoring]:::engineStyle
DECAY[Decay Engine]:::engineStyle
end
subgraph TKG["Temporal KG"]
FACTS[Facts]:::graphStyle
TIMELINE[Timeline]:::graphStyle
end
CONSOLIDATE[Consolidation]:::processStyle
REFLECT[Reflection]:::processStyle
OUTPUT[Recall + Trace]:::outputStyle
INPUT --> CLASSIFIER
CLASSIFIER --> EPISODIC
CLASSIFIER --> SEMANTIC
CLASSIFIER --> PROCEDURAL
CLASSIFIER --> EMOTIONAL
CLASSIFIER --> REFLECTIVE
EPISODIC --> EMBED
SEMANTIC --> EMBED
PROCEDURAL --> EMBED
EMOTIONAL --> EMBED
REFLECTIVE --> EMBED
EMBED --> SQLITE
EMBED --> TEMPORAL
SQLITE --> VECTOR
SQLITE --> WAYPOINT
SQLITE --> DECAY
TEMPORAL --> FACTS
FACTS --> TIMELINE
VECTOR --> SCORING
WAYPOINT --> SCORING
DECAY --> SCORING
TIMELINE --> SCORING
SCORING --> CONSOLIDATE
CONSOLIDATE --> REFLECT
REFLECT --> OUTPUT
OUTPUT -.->|Reinforce| WAYPOINT
OUTPUT -.->|Salience| DECAY
```
OpenMemory ships a migration tool to import data from other memory systems.
Supported:
- Mem0
- Zep
- Supermemory
Example:
```bash
cd migrate
python -m migrate --from zep --api-key ZEP_KEY --verify
```

(See `migrate/` and the docs for detailed commands per provider.)
- 🧬 Learned sector classifier (trainable on your data)
- 🕸 Federated / clustered memory nodes
- 🤝 Deeper LangGraph / CrewAI / AutoGen integrations
- 🔭 Memory visualizer 2.0
- 🔐 Pluggable encryption at rest
Star the repo to follow along.
Issues and PRs are welcome.
- Bugs: https://github.com/CaviraOSS/OpenMemory/issues
- Feature requests: use the GitHub issue templates
- Before large changes, open a discussion or small design PR
OpenMemory is licensed under Apache 2.0. See LICENSE for details.
