eagleeyethinker/enterprise-agentic-ai-memory-lab
Enterprise Agentic AI Memory Lab (Open Source)

Author: Satish Gopinathan
Web: https://www.eagleeyethinker.com

This project demonstrates five memory types in one agentic HR Policy Assistant.

Memory Types

  1. Short-Term Memory -> Redis (session)
  2. Retrieval Memory -> ChromaDB + Ollama embeddings
  3. Semantic Memory -> NetworkX graph
  4. Procedural Memory -> LangGraph workflow
  5. Durable Memory -> JSON audit log

Requirements

  • Python virtual environment with dependencies:
    • pip install -r requirements.txt
  • Ollama running locally with model:
    • ollama pull llama3
  • Redis-compatible server running on localhost:6379:
    • Redis or Memurai

Install Redis

Windows (Memurai, Redis-compatible)

choco install memurai-developer -y
Start-Service Memurai

Optional connectivity check:

& "C:\Program Files\Memurai\memurai-cli.exe" ping

macOS (Redis via Homebrew)

brew install redis
brew services start redis

Optional connectivity check:

redis-cli ping

Run

python app/main.py

What a New Developer Should Expect

1) Short-Term Memory (Redis session)

  • Each prompt you type is appended to a Redis list under the session key default.
  • The session key TTL is set to 1800 seconds (30 minutes) on each new input.
  • Expected behavior:
    • Recent user turns are available during the active session.
    • If Redis is down, app calls to short-term memory will fail.

Quick check (if using Memurai):

& "C:\Program Files\Memurai\memurai-cli.exe" LRANGE default 0 -1
& "C:\Program Files\Memurai\memurai-cli.exe" TTL default

2) Retrieval Memory (ChromaDB + Ollama)

  • On startup, data/policies.txt is loaded and split into chunks.
  • Chunks are embedded with llama3 embeddings and indexed in ChromaDB.
  • For each query, top k=2 similar chunks are retrieved and used as LLM context.
  • Expected behavior:
    • Answers should be grounded in policy text when relevant.
    • If Ollama is not reachable, app startup/query will fail.
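The chunk-and-retrieve step can be illustrated with a stdlib-only sketch. The real app embeds chunks with Ollama's llama3 model and ranks them by vector similarity in ChromaDB; here, word overlap stands in for embedding similarity, and the chunk size, sample text, and function names are illustrative.

```python
def split_chunks(text: str, size: int = 60) -> list:
    """Split policy text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_k(query: str, chunks: list, k: int = 2) -> list:
    """Rank chunks by word overlap with the query and keep the top k.
    The real app ranks by embedding similarity in ChromaDB instead."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

policies = (
    "Vacation: employees accrue 1.5 days per month. "
    "Remote work: approved by the manager. "
    "Expenses: submit receipts within 30 days."
)
chunks = split_chunks(policies)
context = top_k("How many vacation days do employees accrue", chunks, k=2)
```

The two returned chunks then become the LLM's grounding context, exactly as the k=2 retrieval described above.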

3) Semantic Memory (NetworkX)

  • A small in-memory concept graph is created with fixed relationships:
    • Employee <-> Manager
    • Manager <-> Director
    • Policy <-> Compliance
  • For every query, neighbors of Policy are fetched.
  • Expected behavior:
    • Semantic relation list currently resolves to ["Compliance"] and is logged.
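The fixed concept graph and the neighbor lookup can be sketched with a plain adjacency dict; the repo builds the same relationships with a NetworkX Graph (add_edge / neighbors), for which this stdlib version is a stand-in.

```python
# Undirected concept graph as an adjacency dict; the repo builds these
# fixed relationships with a NetworkX Graph instead.
EDGES = [
    ("Employee", "Manager"),
    ("Manager", "Director"),
    ("Policy", "Compliance"),
]

graph = {}
for a, b in EDGES:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)  # undirected: add both directions

def neighbors(node: str) -> list:
    """Concepts directly related to the given node."""
    return sorted(graph.get(node, set()))

relations = neighbors("Policy")  # → ['Compliance'], the list logged per query
```

Since Policy's only edge is to Compliance, the lookup always resolves to ["Compliance"], matching the logged value noted above.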

4) Procedural Memory (LangGraph)

  • The workflow is a single-node LangGraph pipeline:
    • input -> agent node -> END
  • Agent node order:
    1. Store input in short-term memory
    2. Retrieve relevant policy chunks
    3. Read semantic relationships
    4. Build prompt and call LLM
    5. Write durable audit record
  • Expected behavior:
    • Every user prompt consistently follows the same execution path.
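The fixed step order of the agent node can be sketched in plain Python with stubbed memory calls; in the repo this single node is wired into a LangGraph StateGraph (input -> agent node -> END). The stub names and the trace field are illustrative, not the repo's actual state schema.

```python
def store_short_term(s):  # 1. append input to the Redis session list
    return {**s, "trace": s["trace"] + ["short_term"]}

def retrieve_chunks(s):   # 2. fetch top-k policy chunks from ChromaDB
    return {**s, "trace": s["trace"] + ["retrieval"]}

def read_semantic(s):     # 3. look up graph neighbors of "Policy"
    return {**s, "trace": s["trace"] + ["semantic"]}

def call_llm(s):          # 4. build the prompt and call the LLM
    return {**s, "trace": s["trace"] + ["llm"]}

def write_audit(s):       # 5. append the turn to audit_log.json
    return {**s, "trace": s["trace"] + ["durable"]}

def agent_node(state):
    """The single workflow node: every prompt follows this exact path."""
    for step in (store_short_term, retrieve_chunks, read_semantic,
                 call_llm, write_audit):
        state = step(state)
    return state

result = agent_node({"input": "What is the parental leave policy?", "trace": []})
```

Because the pipeline has one node and a fixed internal order, the trace is identical for every prompt, which is the "same execution path" guarantee stated above.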

5) Durable Memory (JSON audit log)

  • Every turn is appended to audit_log.json.
  • Each record contains:
    • input
    • retrieval (retrieved chunks)
    • semantic (graph relationships)
    • output (model response)
  • Expected behavior:
    • audit_log.json grows over time and preserves past runs for inspection.
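The append-to-JSON record can be sketched as below. The record fields follow this README; the helper name is illustrative, and the log is written to a temporary directory here so the example is self-contained (the app writes audit_log.json in its working directory).

```python
import json
import os
import tempfile

def append_audit(path: str, record: dict) -> None:
    """Load the existing log (if any), append one record, write it back."""
    log = []
    if os.path.exists(path):
        with open(path) as f:
            log = json.load(f)
    log.append(record)
    with open(path, "w") as f:
        json.dump(log, f, indent=2)

log_path = os.path.join(tempfile.mkdtemp(), "audit_log.json")
append_audit(log_path, {
    "input": "What is the vacation policy?",
    "retrieval": ["Vacation: employees accrue 1.5 days per month."],
    "semantic": ["Compliance"],
    "output": "Employees accrue 1.5 vacation days per month.",
})
with open(log_path) as f:
    records = json.load(f)
```

Each call appends one record, so the file grows monotonically and preserves every past turn for inspection.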

Use Case

HR Policy Assistant

About

This repository illustrates how enterprise AI agents should manage memory across distinct layers rather than collapsing everything into a single store.
