hermes-memory-stack

Agent Hermes Memory Stack — organized collection of memory components for AI agents: Holographic Memory (enhanced) + LCM (Lossless Context Management).

What

hermes-memory-stack provides persistent, searchable, and context-aware memory for autonomous AI agents. It solves the core problem of context window limitations by storing facts, experiences, and knowledge outside the LLM's working memory, then retrieving them on demand.

Key benefits:

  • No context loss — facts persist across sessions
  • Semantic search — find by meaning, not just keywords
  • Trust scoring — facts gain/lose credibility based on usefulness
  • Zero OAuth — local SQLite + configurable backends

Features

Holographic Memory (Enhanced)

  • Fact extraction — automatically capture key information from conversations
  • Entity resolution — link facts about the same person/thing across time
  • Trust scoring — dynamic credibility based on helpfulness ratings
  • FTS5 full-text search — fast keyword queries
  • HRR vector encoding — semantic similarity (embeddings)
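
The FTS5 side of this can be sketched in a few lines of plain sqlite3. The table and column names below are illustrative only, not the library's actual schema:

```python
import sqlite3

# Illustrative only: a minimal FTS5-backed fact table, not the actual
# hermes-memory-stack schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE facts USING fts5(content, category)")
db.execute(
    "INSERT INTO facts VALUES (?, ?)",
    ("Vanya prefers standalone repos over ecosystem badges", "user_pref"),
)
db.execute(
    "INSERT INTO facts VALUES (?, ?)",
    ("The build pipeline runs nightly at 02:00 UTC", "ops"),
)

# FTS5 MATCH gives fast keyword search over all stored facts.
rows = db.execute(
    "SELECT content FROM facts WHERE facts MATCH ?", ("repos",)
).fetchall()
print(rows[0][0])
```

FTS5 handles tokenization and ranking; the semantic (HRR/embedding) layer sits on top of this keyword index.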

LCM (Lossless Context Management)

  • DAG summarization — conversation history compressed into nodes
  • Pre-compression extraction — facts saved before summarization
  • lcm_grep/lcm_expand — retrieve any past context instantly
  • Session continuity — pick up where you left off, weeks later
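
The DAG idea can be sketched with a toy node type: each summary node points at the children it compressed, so expanding back to full context is just a depth-first walk. `Node` and `expand` here are hypothetical stand-ins, not the library's classes:

```python
from dataclasses import dataclass, field

# Illustrative sketch of DAG-based summarization: a node keeps a short
# summary plus references to the children it compressed, so any summary
# can be expanded back to the underlying turns.

@dataclass
class Node:
    node_id: int
    summary: str
    children: list = field(default_factory=list)  # child Node objects

def expand(node):
    """Depth-first walk: recover the full text behind a summary node."""
    if not node.children:
        return node.summary
    return "\n".join(expand(c) for c in node.children)

turns = [Node(1, "User: deploy failed"), Node(2, "Agent: fixed the env var")]
root = Node(3, "Resolved a deploy failure", children=turns)
print(expand(root))
```

Because leaves are never discarded, summarization is lossless: the compact summary is what enters the context window, but the full turns remain reachable.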

Integration Layer

  • voice.py hooks — automatic fact extraction on every LLM call
  • fact_store API — explicit save/query/probe/reason operations
  • Plugin-friendly — extend with custom memory backends
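
The hook mechanism can be sketched as a simple registry that runs every registered function before the model call. `register_pre_hook`, `llm_call`, and the toy extraction heuristic are hypothetical names for illustration, not the real voice.py API:

```python
# Illustrative hook pattern, assuming voice.py exposes something like a
# pre_llm_call registry; all names here are hypothetical.

_pre_hooks = []

def register_pre_hook(fn):
    _pre_hooks.append(fn)
    return fn

def llm_call(messages):
    # Every hook sees the messages before the model does, so a fact
    # extractor can persist information as a side effect.
    for hook in _pre_hooks:
        hook(messages)
    return "model-response"

extracted = []

@register_pre_hook
def extract_facts(messages):
    for m in messages:
        if "prefers" in m:  # toy heuristic standing in for real extraction
            extracted.append(m)

llm_call(["Vanya prefers standalone repos"])
print(extracted)
```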

Architecture

┌─────────────────┐    ┌────────────────────┐
│ LLM Conversation│───▶│  voice.py pre_hook │
└─────────────────┘    └────────────────────┘
         │                       │
         ▼                       ▼
┌─────────────────┐    ┌────────────────────┐
│  Fact Extraction│◀───│  LCM Summarization │
└─────────────────┘    └────────────────────┘
         │                       │
         └───────────┬───────────┘
                     ▼
          ┌────────────────────┐
          │  Holographic Memory│
          │  (SQLite + FTS5)   │
          └────────────────────┘
                     │
                     ▼
          ┌────────────────────┐
          │  fact_store API    │
          │  (probe, reason)   │
          └────────────────────┘

Components:

  1. Holographic Memory — persistent fact store with trust scoring and semantic search
  2. LCM (Lossless Context Management) — DAG-based conversation summarization with pre-compression fact extraction
  3. voice.py — injects fact extraction into every LLM call via pre_llm_call hook
  4. fact_store — unified API for saving and querying facts

Quick Start

# Install
pip install hermes-memory-stack

# Configure (optional)
hermes config set memory.backend=sqlite

# Verify installation
hermes memory verify

Configuration

See CONFIG.md for all options. Key settings:

memory:
  backend: sqlite  # or 'redis', 'postgres'
  holographic:
    enable_trust_scoring: true
    embedding_model: sentence-transformers/all-MiniLM-L6-v2
  lcm:
    compression_ratio: 0.3  # keep 30% of tokens
    max_node_tokens: 500
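
As a back-of-envelope check on these settings, `compression_ratio` and `max_node_tokens` together set the summarized budget: a 10,000-token history at ratio 0.3 compresses to about 3,000 tokens across roughly 6 DAG nodes. `lcm_budget` below is a hypothetical helper for this arithmetic, not part of the library:

```python
import math

# Hypothetical helper: estimate the token budget implied by the LCM
# settings above (compression_ratio 0.3, max_node_tokens 500).
def lcm_budget(history_tokens, compression_ratio=0.3, max_node_tokens=500):
    kept = round(history_tokens * compression_ratio)
    nodes = math.ceil(kept / max_node_tokens)
    return kept, nodes

kept, nodes = lcm_budget(10_000)
print(kept, nodes)
```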

Usage Examples

Save a fact

from hermes.fact_store import save

save(
    content="Vanya prefers standalone repos over ecosystem badges",
    category="user_pref",
    tags=["preference", "github"]
)

Query facts

from hermes.fact_store import probe

# Find all facts about Vanya
facts = probe(entity='vanya')
for fact in facts:
    print(fact.content)

Retrieve past context

from hermes.lcm import expand

# Get full context of conversation from 2026-05-07
context = expand(node_id=123, max_tokens=4000)
print(context.text)

Security & Best Practices

⚠️ Important:

The memory stack stores all facts from your agent's conversations. Treat it as sensitive data.

  • Encryption at rest — enable if storing sensitive info: memory.encryption.enabled: true
  • Backup regularly — hermes memory backup ~/backups/
  • Rotate keys — if using external embeddings API, rotate keys quarterly
  • Access control — restrict file permissions on SQLite DB (chmod 600)
  • Never commit real data — use .env.example with fake facts for examples
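
The chmod 600 advice can also be applied from Python at startup; the temp-file path below is a stand-in for the real DB location:

```python
import os
import stat
import tempfile

# Stand-in path for the real SQLite DB file.
db_path = tempfile.NamedTemporaryFile(suffix=".db", delete=False).name

# Restrict the DB to owner read/write only (chmod 600).
os.chmod(db_path, 0o600)

mode = stat.S_IMODE(os.stat(db_path).st_mode)
print(oct(mode))

os.remove(db_path)  # cleanup for this demo
```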

Performance

| Metric                     | Built-in Memory | hermes-memory-stack      |
|----------------------------|-----------------|--------------------------|
| Context retention (7 days) | ~15%            | ~95%                     |
| Fact retrieval latency     | N/A             | <50 ms (SQLite)          |
| Semantic search quality    | N/A             | High (embeddings)        |
| Storage overhead           | 0 MB            | ~2–10 MB per 1000 facts  |

Benchmarks run on: M2 MacBook Air, 16GB RAM, SQLite with FTS5.

Troubleshooting

Memory not saving facts

Symptom: fact_store.save() returns success but facts disappear after restart.
Fix: Check memory.backend in config — it must be sqlite or another persistent backend. Verify write permissions on the DB file.

LCM retrieval returns empty

Symptom: lcm_expand() returns no context.
Fix: Ensure the on_pre_compress hook is enabled in hermes.config.yaml, and verify the LCM DAG exists: lcm_grep(query='*').

Trust scores all zero

Symptom: Facts have trust=0.0 and never update.
Fix: Enable holographic.enable_trust_scoring, and close the feedback loop: when a fact is used, call fact_store.feedback(helpful=True).
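
A feedback loop like this is commonly implemented as an exponential moving average that moves trust toward 1.0 on helpful uses and toward 0.0 otherwise. `update_trust` is an illustrative sketch of that mechanic, not the library's actual scoring function:

```python
# Illustrative trust update: nudge the score toward 1.0 on helpful
# feedback and toward 0.0 otherwise; alpha controls how fast it moves.
def update_trust(trust, helpful, alpha=0.2):
    target = 1.0 if helpful else 0.0
    return (1 - alpha) * trust + alpha * target

trust = 0.5
for helpful in (True, True, False, True):
    trust = update_trust(trust, helpful)
print(round(trust, 3))
```

With this shape, a fact that is repeatedly helpful converges toward full trust, while one bad use only dents the score rather than zeroing it.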

License

MIT © 2026 Hermes Agent Contributors

Author

Built by Lisa Carter — hermes agent.
