
reDB Node Docker Deployment

This document describes how to deploy reDB Node Open Source using Docker.

Overview

The Docker setup includes:

  • Single Container Architecture: All services run in one container
  • Built-in Databases: PostgreSQL 17 and Redis included
  • Auto-Initialization: Headless setup with --autoinitialize flag
  • CLI Access: the redb-cli binary is included in the container and can be invoked via docker exec
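
The Quick Start below assumes a compose file at the repository root. If you need to adapt it, a minimal sketch looks roughly like this (port mappings and volume names are inferred from commands elsewhere in this document, so treat them as assumptions, not the authoritative file):

```yaml
# Minimal sketch of docker-compose.yml -- service name, ports, and volume
# names are assumptions based on commands in this document.
services:
  redb-node:
    build: .
    ports:
      - "8080:8080"   # Client API (HTTP)
      - "8081:8081"   # setup/health endpoint used by the curl examples
    volumes:
      - redb_data:/opt/redb/data
      - redb_logs:/opt/redb/logs
    environment:
      REDB_KEYRING_PATH: /opt/redb/data/keyring.json

volumes:
  redb_data:
  redb_logs:
```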

Quick Start

Using Docker Compose (Recommended)

  1. Clone the repository:

    git clone https://github.com/redbco/redb-open
    cd redb-open
  2. Build and start the services:

    docker compose up -d
  3. Check the logs:

    docker compose logs -f redb-node
  4. Access the CLI:

    docker compose exec redb-node redb-cli --help

Using Docker Directly

  1. Build the image:

    docker build -t redb-node .
  2. Run the container:

    docker run -d \
      --name redb-node \
      -p 8080:8080 \
      -p 8081:8081 \
      -p 8082:8082 \
      -v redb_data:/opt/redb/data \
      -e REDB_KEYRING_PATH=/opt/redb/data/keyring.json \
      redb-node

Container Architecture

Main Container (redb-node)

Services Included:

  • PostgreSQL 17 (database)
  • Redis (caching)
  • Supervisor (orchestrator)
  • All microservices (security, core, mesh, etc.)

Ports Exposed:

HTTP API Ports (External Access):

  • 3000: Client Dashboard (HTTP)
  • 8080: Client API (HTTP)

Internal gRPC Ports (Service Communication):

  • 50000: Supervisor (internal)
  • 50051: Security service (internal)
  • 50052: Unified Model service (internal)
  • 50053: Webhook service (internal)
  • 50054: Transformation service (internal)
  • 50055: Core service (internal)
  • 50056: Mesh service (internal)
  • 50057: Anchor service (internal)
  • 50058: Integration service (internal)
  • 50059: Client API gRPC (internal)
  • 50060: MCP Server (internal)

Environment Variables

Database Configuration

Note: In the Docker container, PostgreSQL is managed internally by the container. The database credentials are set up automatically during container initialization. No external database configuration is needed.

For external database connections (if needed), you can use these environment variables:

Variable                 Default     Description
REDB_POSTGRES_USER       postgres    PostgreSQL username (for external DB)
REDB_POSTGRES_PASSWORD   postgres    PostgreSQL password (for external DB)
REDB_POSTGRES_HOST       localhost   PostgreSQL host (for external DB)
REDB_POSTGRES_PORT       5432        PostgreSQL port (for external DB)
REDB_POSTGRES_DATABASE   postgres    PostgreSQL database (for external DB)
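
If you do opt for an external database, these variables can be supplied in a compose override file rather than exported by hand. A sketch, where the host and credential values are placeholders:

```yaml
# docker-compose.override.yml -- hypothetical external-database fragment.
# Variable names come from the table above; values are placeholders.
services:
  redb-node:
    environment:
      REDB_POSTGRES_HOST: db.example.internal
      REDB_POSTGRES_PORT: "5432"
      REDB_POSTGRES_USER: redb
      REDB_POSTGRES_PASSWORD: change-me
      REDB_POSTGRES_DATABASE: redb
```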

Keyring Configuration

The Docker container uses a file-based keyring for secure storage of credentials and encryption keys. The keyring is stored in a persistent volume to survive container restarts.

Variable                Default                             Description
REDB_KEYRING_PATH       /opt/redb/data/keyring.json         Path to the keyring file in the container
REDB_KEYRING_PASSWORD   default-master-password-change-me   Master password for encrypting the keyring (change in production!)

Initial Setup

Note: The initial tenant, user, and workspace are now created via the Client API endpoint /api/v1/setup. No environment variables are needed for this process.


Initialization Process

Auto-Initialization

The container automatically runs initialization on first startup:

  1. PostgreSQL Setup:

    • Initialize data directory
    • Configure for container environment
    • Set up internal database with default credentials (postgres/postgres)
    • No external database configuration needed
  2. Redis Setup:

    • Configure Redis for container
    • Start Redis service
  3. reDB Initialization:

    • Run --autoinitialize flag (fully idempotent)
    • Connect to internal PostgreSQL using default credentials
    • Create production database (redb)
    • Generate and store node keys (preserves existing keys)
    • Create database schema (skips if already exists)
    • Create local node (skips if already exists)
    • No default tenant/user creation (this is done afterwards via the API)
    • Supervisor starts automatically after initialization

Idempotent Features:

  • ✅ Safe to run multiple times
  • ✅ Preserves existing node keys and passwords
  • ✅ Skips schema creation if already exists
  • ✅ Reuses existing local node if present
  • ✅ Graceful handling of "already exists" scenarios
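
The pattern behind these guarantees is check-before-create: each step looks for existing state and only creates what is missing. A rough illustration (the file name here is hypothetical; the real checks happen inside the binary):

```shell
#!/bin/sh
# Illustrative check-before-create sketch of idempotent initialization.
# The node key is only generated when absent, so re-running is safe.
DATA_DIR="${DATA_DIR:-./redb-demo-data}"
mkdir -p "$DATA_DIR"

ensure_node_key() {
  if [ -f "$DATA_DIR/node.key" ]; then
    echo "node key exists, preserving it"
  else
    # Generate 32 random bytes, hex-encoded (64 characters)
    head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$DATA_DIR/node.key"
    echo "generated new node key"
  fi
}

ensure_node_key   # first run: generates the key
ensure_node_key   # second run: preserves the existing key
```

Schema and local-node creation follow the same shape, which is why repeated runs of the flag are harmless.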

Manual Tenant/User Setup

After auto-initialization completes, you need to create the initial tenant, user, and workspace:

  1. Check System Status:

    curl http://localhost:8081/health
  2. Create Initial Setup (using CLI - recommended):

    docker compose exec redb-node redb-cli setup

    The CLI will prompt you for:

    • Tenant Name (e.g., "My Company")
    • Tenant URL (e.g., "mycompany")
    • Tenant Description (optional)
    • Admin User Email (e.g., "admin@mycompany.com")
    • Admin User Password
    • Workspace Name (defaults to "default")
  3. Or Create Initial Setup (using API directly):

    curl -X POST http://localhost:8081/api/v1/setup \
      -H "Content-Type: application/json" \
      -d '{
        "tenant_name": "my-company",
        "tenant_url": "mycompany",
        "tenant_description": "My Company Tenant",
        "user_email": "admin@mycompany.com",
        "user_password": "your-secure-password",
        "workspace_name": "default"
      }'
  4. Verify Setup:

    curl http://localhost:8081/api/v1/tenants
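
When scripting step 3, it can help to assemble the JSON body in a file first and post it with `-d @setup.json`; the values below are examples, not defaults:

```shell
#!/bin/sh
# Build the /api/v1/setup request body from variables (example values).
# Post it afterwards with:
#   curl -X POST http://localhost:8081/api/v1/setup \
#     -H "Content-Type: application/json" -d @setup.json
TENANT_NAME="my-company"
TENANT_URL="mycompany"
TENANT_DESCRIPTION="My Company Tenant"
USER_EMAIL="admin@mycompany.com"
USER_PASSWORD="your-secure-password"
WORKSPACE_NAME="default"

cat > setup.json <<EOF
{
  "tenant_name": "$TENANT_NAME",
  "tenant_url": "$TENANT_URL",
  "tenant_description": "$TENANT_DESCRIPTION",
  "user_email": "$USER_EMAIL",
  "user_password": "$USER_PASSWORD",
  "workspace_name": "$WORKSPACE_NAME"
}
EOF
echo "wrote setup.json"
```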

Manual Initialization

If you need to re-initialize:

# Stop the container
docker compose down

# Remove volumes to start fresh
docker volume rm redb-open_postgres_data redb-open_redb_data

# Start again
docker compose up -d

Data Persistence

Volumes

The following data is persisted:

  • postgres_data: PostgreSQL database files
  • redis_data: Redis database files
  • redb_data: Application data
  • redb_logs: Application logs

Backup

To backup your data:

# Backup PostgreSQL
docker exec redb-node pg_dump -U postgres redb > backup.sql

# Backup Redis
docker exec redb-node redis-cli BGSAVE
docker cp redb-node:/var/lib/redis/dump.rdb ./redis-backup.rdb

Monitoring and Health Checks

Health Check

The container includes a health check that monitors the supervisor service:

# Check container health
docker ps

# View health check logs
docker inspect redb-node | grep -A 10 Health

Logs

# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f redb-node

# Tail the event log file inside the container
docker exec redb-node tail -f /opt/redb/logs/redb-node-event.log

Password Management

Initial Setup Password

When creating the initial tenant and user via the API, you specify the password directly:

  1. API Setup (Recommended):

    curl -X POST http://localhost:8081/api/v1/setup \
      -H "Content-Type: application/json" \
      -d '{
        "tenant_name": "my-company",
        "tenant_url": "mycompany",
        "user_email": "admin@mycompany.com",
        "user_password": "your-secure-password",
        "workspace_name": "default"
      }'
  2. Security: The password is set during the initial setup and stored securely in the database

Using the Password

Once you've created the initial setup, you can use the credentials with the CLI:

# Create a profile for your reDB instance
docker compose exec redb-node redb-cli profiles create default --hostname localhost:8080 --tenant-url http://localhost:8080/mycompany

# Login with the profile
docker compose exec redb-node redb-cli auth login --profile default

# Enter the email: (the email you set during setup)
# Enter the password: (the password you set during setup)

# Select workspace if needed
docker compose exec redb-node redb-cli select workspace default

CLI Usage

Using Docker Compose

# Show CLI help
docker compose exec redb-node redb-cli --help

# Initial setup (create first tenant, user, and workspace)
docker compose exec redb-node redb-cli setup

# Create a profile for your instance
docker compose exec redb-node redb-cli profiles create default --hostname localhost:8080 --tenant-url http://localhost:8080/mycompany

# Authenticate using the profile
docker compose exec redb-node redb-cli auth login --profile default

# List tenants
docker compose exec redb-node redb-cli tenants list

# Create a database
docker compose exec redb-node redb-cli databases create

Using Docker Directly

# Run CLI commands directly in the container
docker exec redb-node redb-cli --help
docker exec redb-node redb-cli tenants list

Troubleshooting

Common Issues

  1. PostgreSQL Connection Failed:

    # Check if PostgreSQL is running
    docker exec redb-node pg_isready -U postgres
    
    # Check PostgreSQL logs
    docker exec redb-node sh -c 'tail -f /var/lib/postgresql/data/log/*'
    
    # Note: In Docker, PostgreSQL is managed internally
    # No external database configuration is needed
  2. Initialization Failed:

    # Check initialization logs
    docker compose logs redb-node | grep -i "auto-initialization"
    
    # Re-run initialization (safe to run multiple times)
    docker exec redb-node /opt/redb/bin/redb-node --autoinitialize
  3. Service Not Starting:

    # Check service status
    docker exec redb-node ps aux
    
    # Check supervisor logs
    docker exec redb-node tail -f /opt/redb/logs/redb-node-event.log
  4. Keyring Issues:

    # Check if keyring file exists and has proper permissions
    docker exec redb-node ls -la /opt/redb/data/keyring.json
    
    # Check keyring file contents (if it exists)
    docker exec redb-node cat /opt/redb/data/keyring.json
    
    # Check keyring environment variables
    docker exec redb-node env | grep REDB_KEYRING
    
    # If keyring is corrupted, you can remove it and re-initialize
    docker exec redb-node rm -f /opt/redb/data/keyring.json
    docker compose restart redb-node

Debug Mode

To run in debug mode with more verbose logging:

# Set debug logging (the variable must be forwarded to the container
# via the environment section of docker-compose.yml)
export REDB_LOG_LEVEL=debug

# Start with debug
docker compose up
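
Note that a host-side export only reaches the container if the compose file forwards it. A sketch using compose variable interpolation:

```yaml
# Forward the host's REDB_LOG_LEVEL into the container, defaulting to info.
services:
  redb-node:
    environment:
      REDB_LOG_LEVEL: ${REDB_LOG_LEVEL:-info}
```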

Production Deployment

Security Considerations

  1. Use External Databases (Optional):

    # Point to external PostgreSQL instead of container's internal DB
    export REDB_POSTGRES_HOST=your-postgres-host
    export REDB_POSTGRES_PASSWORD=your-password
    docker compose up -d

    Note: By default, the container uses its internal PostgreSQL. Only set these variables if you want to use an external database.

  2. Secure Keyring Configuration:

    # Set a strong master password for the keyring
    export REDB_KEYRING_PASSWORD="your-strong-master-password"
    
    # Optionally use a custom keyring path
    export REDB_KEYRING_PATH="/secure/path/to/keyring.json"
    
    docker compose up -d

    Important: Change the default keyring master password in production environments.

  3. Network Security:

    # docker compose has no --network flag; create the network, then declare
    # it as external under the top-level networks: key in docker-compose.yml
    docker network create redb-network
    docker compose up -d

Resource Limits

# In docker-compose.yml
services:
  redb-node:
    deploy:
      resources:
        limits:
          memory: 4G
          cpus: '2.0'
        reservations:
          memory: 2G
          cpus: '1.0'

High Availability

For production deployments, consider:

  • Using external PostgreSQL cluster
  • Using external Redis cluster
  • Running multiple instances behind a load balancer
  • Implementing proper backup strategies

Development

Building for Development

# Build with development flags
docker build --build-arg GOOS=linux --build-arg GOARCH=amd64 -t redb-node:dev .

# Run with development config
docker compose -f docker-compose.dev.yml up -d

Debugging

# Attach to running container
docker exec -it redb-node bash

# View real-time logs
docker exec redb-node sh -c 'tail -f /opt/redb/logs/*.log'

# Check service processes
docker exec redb-node ps aux

Support

For issues and questions:

  • Check the logs: docker compose logs -f
  • Review this documentation
  • Check the main project README
  • Open an issue on GitHub