TravelMate - Travel Agent with Redis and Claude Agent SDK

A travel planning agent demo showcasing Redis Vector Library, Redis Agent Memory Server, and Claude Agent SDK.

Features

  • Persistent Memory: Cross-session memory recall using Redis Agent Memory Server
  • Semantic Caching: Speed up responses with RedisVL semantic cache for similar queries
  • Automatic Extraction: Memories are automatically extracted after 30s of inactivity
  • MCP Protocol: Uses Model Context Protocol (SSE) for agent-server communication
  • Travel-Focused: Personalized travel recommendations based on stored preferences
  • Open-Source Embeddings: Uses Ollama with nomic-embed-text (runs locally)

Prerequisites

  • Docker and Docker Compose (to run Redis, Ollama, and the Memory Server)
  • uv (Python package manager)
  • An Anthropic API key

Quick Start

1. Setup Environment

cd travel-agent-memory-claude-sdk

# Copy environment template
cp .env.example .env

# Add your Anthropic API key to .env
# Edit the file and set: ANTHROPIC_API_KEY=sk-ant-...

2. Start Infrastructure

# Start Redis, Ollama, and Memory Server
docker-compose up -d

# Wait ~30-60 seconds for Ollama to download the embedding model (~274MB)
# You can check progress with:
docker-compose logs -f ollama-pull

3. Install Dependencies

uv sync

4. Run the Agent

Option A: Command Line Interface (CLI)

# Basic usage
uv run travel-agent

# With custom user/session
uv run travel-agent --user-id Nitin --session-id vacation-planning

# Debug mode (shows tool calls)
uv run travel-agent --debug-tools

Option B: Web Interface (Gradio)

# Launch the web UI
uv run travel-agent-web

# Opens at http://localhost:7860

The web interface includes:

  • Chat interface with streaming responses
  • Configurable User ID and Session ID
  • Debug panel showing MCP tool calls and results

5. Verify Everything is Running

# Check all containers are healthy
docker-compose ps

# Expected output:
# redis        running  0.0.0.0:6379->6379/tcp
# ollama       running  0.0.0.0:11434->11434/tcp
# memory_mcp   running  0.0.0.0:9000->9000/tcp

Usage Examples

> Hi, I'm planning a trip to Europe this summer. My budget is around $3000.

> I prefer beach destinations but I also love historical sites.

> What destinations would you recommend based on my preferences?

The agent will:

  1. Store your preferences in long-term memory
  2. Recall them in future sessions
  3. Provide personalized recommendations

Command Line Options

| Option | Default | Description |
|---|---|---|
| `--user-id` | `Nitin` | User ID for memory context |
| `--session-id` | `travel_session_1` | Session ID for conversation tracking |
| `--mcp-url` | `http://localhost:9000/sse` | Memory Server SSE endpoint |
| `--mcp-token` | `None` | Auth token (if auth enabled) |
| `--debug-tools` | `False` | Show tool calls and results |
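The options above map onto a standard argument parser. The sketch below mirrors the documented flags and defaults; `build_parser` is a hypothetical helper name, not necessarily the project's actual entry point.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI parser mirroring the documented travel-agent options."""
    parser = argparse.ArgumentParser(prog="travel-agent")
    parser.add_argument("--user-id", default="Nitin",
                        help="User ID for memory context")
    parser.add_argument("--session-id", default="travel_session_1",
                        help="Session ID for conversation tracking")
    parser.add_argument("--mcp-url", default="http://localhost:9000/sse",
                        help="Memory Server SSE endpoint")
    parser.add_argument("--mcp-token", default=None,
                        help="Auth token (if auth enabled)")
    parser.add_argument("--debug-tools", action="store_true",
                        help="Show tool calls and results")
    return parser

args = build_parser().parse_args(["--user-id", "Nitin", "--debug-tools"])
print(args.user_id, args.session_id, args.debug_tools)  # → Nitin travel_session_1 True
```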

Architecture

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   TravelMate    │────▶│  Memory Server  │────▶│     Redis       │
│   (Claude SDK)  │ MCP │   (Port 9000)   │     │   (Port 6379)   │
└─────────────────┘ SSE └────────┬────────┘     └─────────────────┘
                                 │
                        ┌────────▼────────┐
                        │     Ollama      │
                        │  (Port 11434)   │
                        │ nomic-embed-text│
                        └─────────────────┘

Open-Source Embeddings: This project uses Ollama with the nomic-embed-text model (137M parameters, 768 dimensions) for vector embeddings.
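For reference, Ollama exposes embeddings over a simple HTTP endpoint. The sketch below shows one way to call it with only the standard library; the helper names are illustrative, and the request requires the Ollama container from `docker-compose` to be running.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default Ollama port

def build_embedding_request(text: str, model: str = "nomic-embed-text") -> dict:
    """Payload for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed(text: str) -> list[float]:
    """POST the text to Ollama and return the embedding vector."""
    payload = json.dumps(build_embedding_request(text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Usage (requires the Ollama container to be up):
#     vec = embed("beach destinations in Europe")
#     len(vec)  # 768 dimensions for nomic-embed-text
```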

Memory Tools

The agent has access to these MCP tools:

| Tool | Purpose |
|---|---|
| `memory_prompt` | Preload memory context before responding |
| `set_working_memory` | Update session memory |
| `search_long_term_memory` | Find past preferences and memories |
| `get_long_term_memory` | Retrieve a specific memory by ID |
| `create_long_term_memories` | Store new preferences |
| `edit_long_term_memory` | Update existing memories |
| `delete_long_term_memories` | Remove incorrect data |

For detailed documentation on each tool, see Memory Tools Reference.
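As a rough illustration, a `create_long_term_memories` call carries a list of memory records scoped to a user and session. The field names below are hypothetical; the authoritative schema is in the Memory Tools Reference.

```python
# Hypothetical request body for create_long_term_memories.
# Field names are illustrative only; consult the Memory Server
# documentation for the actual schema.
memory_payload = {
    "memories": [
        {
            "text": "Prefers beach destinations but also loves historical sites",
            "user_id": "Nitin",
            "session_id": "vacation-planning",
        }
    ]
}
print(memory_payload["memories"][0]["text"])
```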

Semantic Caching

TravelMate uses RedisVL's semantic cache to speed up responses for similar queries. When a user asks a question semantically similar to a previous one, the cached response is returned instantly instead of calling Claude.

How it works:

  1. User queries are embedded using sentence-transformers/all-MiniLM-L6-v2 (384 dimensions)
  2. The cache checks for semantically similar queries using cosine distance
  3. If a match is found (distance < threshold), the cached response is returned
  4. Cache entries are scoped by user_id to prevent cross-user pollution
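The lookup in steps 2-3 can be sketched in a few lines. This is a toy version of the distance check only: it uses 2-dimensional vectors in place of the real 384-dimensional all-MiniLM-L6-v2 embeddings, and an in-memory list in place of Redis.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0 = identical direction, 2 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def cache_lookup(query_vec, cache, threshold=0.1):
    """Return the cached response whose stored vector falls within the
    distance threshold, or None on a cache miss."""
    best = min(cache, default=None,
               key=lambda entry: cosine_distance(query_vec, entry["vector"]))
    if best and cosine_distance(query_vec, best["vector"]) < threshold:
        return best["response"]
    return None

cache = [{"vector": [1.0, 0.0], "response": "Try the Algarve coast."}]
print(cache_lookup([0.99, 0.01], cache))  # near-identical query: cache hit
print(cache_lookup([0.0, 1.0], cache))    # unrelated query: None (miss)
```

In the real system, step 4's per-user scoping means each `user_id` effectively searches its own cache partition, so one user's cached answers never leak to another.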

Configuration (in .env):

| Variable | Default | Description |
|---|---|---|
| `ENABLE_SEMANTIC_CACHE` | `true` | Enable/disable caching |
| `CACHE_DISTANCE_THRESHOLD` | `0.1` | Cosine distance threshold (0-2, lower = stricter) |
| `CACHE_TTL` | `3600` | Cache entry TTL in seconds |

Debug Panel: The web UI shows cache hit/miss status with timing information.
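Reading those settings from `.env` values amounts to a few `os.getenv` calls with the documented defaults. `load_cache_config` is a hypothetical helper, not the project's actual loader.

```python
import os

def load_cache_config() -> dict:
    """Read semantic-cache settings from the environment, falling back
    to the documented defaults."""
    return {
        "enabled": os.getenv("ENABLE_SEMANTIC_CACHE", "true").lower() == "true",
        "distance_threshold": float(os.getenv("CACHE_DISTANCE_THRESHOLD", "0.1")),
        "ttl": int(os.getenv("CACHE_TTL", "3600")),
    }

config = load_cache_config()
print(config)
```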

Troubleshooting

| Issue | Solution |
|---|---|
| "Connection refused" on port 9000 | Run `docker-compose up -d` and wait 30-60s for Ollama to download the model |
| Agent hangs on startup | Check Memory Server logs: `docker-compose logs memory_mcp` |
| "No module named..." | Run `uv sync` to install dependencies |
| Memory not persisting | Check that Redis is running: `docker-compose ps redis` |
| Embedding errors | Check that the Ollama model downloaded: `docker-compose logs ollama-pull` |

Reset Everything

# Stop and remove all containers and data
docker-compose down -v

# Start fresh
docker-compose up -d

Development

# Install with dev dependencies
uv sync --dev

# Run linting
uv run ruff check src/

# Format code
uv run ruff format src/

License

MIT
