Local-first AI memory — runs offline on any machine with 8 GB+ RAM (SBC, mini PC, laptop, workstation). Zero-loss verbatim archive, knowledge graph, hybrid retrieval. Framework-agnostic, no cloud.
Agent memory without the retrieval tax. Fidelity-preserving memory for Claude Code and AI agents — local-first, fast, and with no LLM in the default retrieval path. 83.2% R@1 on LongMemEval-S, $0/query retrieval.
Your AI forgets everything between sessions. This fixes that — 98%+ retrieval accuracy, 100% on LongMemEval, 99% token savings. 44 MCP tools. Fully local, zero cost.
Persistent memory for AI agents. Single Rust CLI, hybrid Gemini + FTS5 + RRF retrieval. R@5 = 0.99 on LongMemEval-S (beats MemPalace). Agent-native: no MCP, no server, just shell out.
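Several of these projects fuse a semantic ranking with an FTS5 keyword ranking via Reciprocal Rank Fusion (RRF). A minimal sketch of RRF, assuming the standard k = 60 constant and toy document IDs (neither is taken from any of the projects above):

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of doc IDs into a single ordering.

    Each document's fused score is the sum over lists of 1 / (k + rank),
    so a document ranked highly in any list floats toward the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["a", "b", "c"]  # hypothetical embedding-search order
keyword = ["b", "d", "a"]   # hypothetical FTS5/BM25 order
fused = rrf([semantic, keyword])  # → ["b", "a", "d", "c"]
```

Because RRF only needs rank positions, not comparable scores, it combines heterogeneous retrievers (dense, lexical) without score normalization.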
Official Python SDK for RecallrAI – a contextual memory system that lets AI assistants form meaningful connections across conversations, much like human memory.
Public, reproducible benchmarks for Agent Brain on LongMemEval-M. 71.7% accuracy (Test 0). Companion code to https://doi.org/10.5281/zenodo.19673132 (Concept DOI → latest version, currently v3).
Anti-RAG dual-whitebox memory for LLM agents. 2.72 MB SQLite + Markdown kernel, no vector DB, no embeddings. Lifts qwen2.5:7b from 1.79% to 60.71% on NoLiMa-32k (+58.9pp), 88.71% on LV-Eval EN 256k, 84.8% on LongMemEval-S. Restart-safe, safe under concurrent access, 100% transparent.
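The "no vector DB, no embeddings" approach above can be illustrated with SQLite's built-in FTS5 full-text index; the table name and sample notes here are invented for the sketch and are not this project's actual schema:

```python
import sqlite3

# In-memory DB for the example; a restart-safe store would use a file path.
con = sqlite3.connect(":memory:")

# FTS5 virtual table: a plain full-text index, no embeddings involved.
con.execute("CREATE VIRTUAL TABLE memory USING fts5(note)")
con.executemany(
    "INSERT INTO memory(note) VALUES (?)",
    [
        ("restart-safe state lives in SQLite",),
        ("the markdown kernel holds curated facts",),
    ],
)

# MATCH runs a full-text query; ORDER BY rank sorts by FTS5's BM25 score.
rows = con.execute(
    "SELECT note FROM memory WHERE memory MATCH ? ORDER BY rank",
    ("markdown",),
).fetchall()
```

Keeping retrieval in plain SQL like this is what makes such a memory "whitebox": every stored fact and every match is inspectable with an ordinary SQLite client.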