dash0hq/opentelemetry-for-dummies

This tutorial is courtesy of Dash0

OpenTelemetry for Dummies - Demo Application

This repository contains the companion demo application for the book "OpenTelemetry for Dummies". It demonstrates OpenTelemetry in action - showing how to move from theory to practice with real code, real services, and real observability.

🎯 What This Demo Shows

This application brings Chapter 5 to life, demonstrating:

  • 🔧 Auto-Instrumentation: Zero-code-change observability with the Java Agent
  • 📊 Business Context: Enriching telemetry with meaningful attributes and metrics
  • 🔍 Distributed Tracing: Following requests across multiple services and external APIs
  • 🎛️ Manual Instrumentation: Adding custom spans when auto-instrumentation isn't enough
  • 🌐 Multi-Language Support: Java backend services + React frontend instrumentation

🏗️ Architecture Overview

The demo consists of a realistic e-commerce-style todo application with three main services:

┌─────────────┐    ┌─────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Frontend  │───▶│ Todo Service│───▶│Validation Service│───▶│ JSONPlaceholder │
│   (React)   │    │   (Java)    │    │      (Java)      │    │   (External)    │
└─────────────┘    └─────────────┘    └──────────────────┘    └─────────────────┘
                           │
                           ▼
                   ┌─────────────┐
                   │   MySQL     │
                   │ (Database)  │
                   └─────────────┘

Services

1. Frontend Service (Port 3002)

  • Technology: React with OpenTelemetry Web SDK
  • Purpose: User interface for todo management
  • Instrumentation: Automatic fetch/XHR tracing + manual spans for user actions
  • Features: Create, view, and delete todos with real-time error handling


2. Todo Service (Port 3000)

  • Technology: Spring Boot with OpenTelemetry Java Agent
  • Purpose: Main REST API for todo operations
  • Instrumentation: Automatic HTTP, database, and service-to-service call tracing
  • Database: MySQL with JPA/Hibernate auto-instrumentation

3. Validation Service (Port 3001)

  • Technology: Spring Boot microservice with OpenTelemetry Java Agent
  • Purpose: Content validation and external API integration
  • Instrumentation: HTTP client calls to external services
  • External Integration: Calls JSONPlaceholder API for realistic distributed traces

🛠️ Observability Stack

The complete observability infrastructure includes:

  • 📊 Jaeger (Port 16686) - Distributed tracing visualization
  • 🔍 OpenTelemetry Collector (Ports 4317/4318) - Telemetry data collection and routing
  • 📈 Prometheus (Port 9090) - Metrics collection and storage
  • 🗄️ OpenSearch (Port 9200) - Log storage and search
  • 📋 OpenSearch Dashboards (Port 5601) - Log visualization and analysis

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose
  • 8GB+ RAM recommended (for all services)
  • Ports 3000-3002, 4317-4318, 5601, 9090, 9200, 16686 available

Running the Demo

  1. Clone the repository:

    git clone <repository-url>
    cd opentelemetry-for-dummies
  2. Start all services:

    docker-compose up --build
  3. Wait for services to start (2-3 minutes for all health checks to pass)

  4. Access the application:

     • Frontend: http://localhost:3002
     • Jaeger UI: http://localhost:16686
     • Prometheus: http://localhost:9090
     • OpenSearch Dashboards: http://localhost:5601

🎮 Demo Scenarios

Scenario 1: Successful Todo Creation

  1. Open http://localhost:3002
  2. Enter "Learn OpenTelemetry" in the todo field
  3. Click "Add Todo"
  4. View the distributed trace in Jaeger showing the complete flow

Scenario 2: Validation Failure

  1. Click the "Try Invalid Todo" button (contains "bad" keyword)
  2. Observe the validation failure in the UI
  3. Check Jaeger to see the trace with validation error details

Scenario 3: External API Integration

  1. Create any todo - this triggers a call to JSONPlaceholder API
  2. In Jaeger, observe spans showing external HTTP calls
  3. Notice how the trace includes both your services and external dependencies

🔍 What to Look For in Jaeger

When viewing traces in Jaeger (http://localhost:16686), you'll see:

Service Map

  • Complete service topology and dependencies
  • Request rates and error percentages
  • Service health indicators

Trace Details

  • Frontend spans: User interactions, fetch requests
  • Todo Service spans: REST endpoints, database queries
  • Validation Service spans: External API calls, business logic
  • Database spans: SQL queries with automatic instrumentation
  • HTTP Client spans: Service-to-service communication

Key Trace Attributes

  • service.name: Service identification
  • http.method, http.url: HTTP request details
  • db.statement: SQL queries (when enabled)
  • user.action: Custom business events
  • validation.result: Business logic outcomes

📊 Code Examples from the Demo

This repository demonstrates the instrumentation patterns covered in Chapter 5 of the book.

Auto-Instrumentation with the Java Agent

Both Java services use the OpenTelemetry Java Agent for zero-code-change instrumentation:

java -javaagent:opentelemetry-javaagent.jar -jar todo.jar

The agent automatically captures HTTP requests, database queries, and service-to-service calls - exactly as described in the book.

Adding Business Context to Spans

Just like in Chapter 5, we enrich spans with business attributes:

// From TodoController.java and ValidationController.java
Span current = Span.current();
current.setAttribute("todo.name", todoName);
current.setAttribute("todo.name.length", todoName.length());

Custom Metrics with Business Context

Following the book's examples, we track business metrics:

// From ValidationController.java
LongCounter validationCounter = openTelemetry.getMeter("validation-service")
    .counterBuilder("validations.performed") 
    .setDescription("Number of validations performed")
    .build();

// Usage with attributes:
validationCounter.add(1, Attributes.of(
    AttributeKey.stringKey("validation.type"), "todo_name",
    AttributeKey.booleanKey("validation.passed"), isValid
));

Manual Span Creation for Business Logic

When auto-instrumentation isn't enough, we add custom spans:

// From TodoController.java
Span span = tracer.spanBuilder("create_todo").startSpan();
try (Scope scope = span.makeCurrent()) {
    span.setAttribute("todo.name", todoName);
    span.addEvent("Starting validation");
    
    // Business logic here
    Todo savedTodo = repository.save(todo);
    
    span.addEvent("Todo created successfully");
    return ResponseEntity.status(HttpStatus.CREATED).body(savedTodo);
} catch (Exception e) {
    span.recordException(e);
    throw e;
} finally {
    span.end();
}

Frontend Instrumentation (React)

// From App.js
const tracer = trace.getTracer('todo-frontend', '1.0.0');

const createTodo = async (e) => {
    const span = tracer.startSpan('create_todo');
    span.setAttributes({
        'user.action': 'create_todo',
        'todo.name': newTodo,
        'todo.name.length': newTodo.length
    });
    
    try {
        const response = await fetch('/todos', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ name: newTodo }),
        });
        
        if (!response.ok) {
            throw new Error(`Request failed with status ${response.status}`);
        }
        span.addEvent('Todo created successfully');
    } catch (err) {
        // SpanStatusCode is imported from @opentelemetry/api
        span.recordException(err);
        span.setStatus({ code: SpanStatusCode.ERROR });
        throw err;
    } finally {
        span.end();
    }
};

🛠️ Configuration Details

OpenTelemetry Java Agent

Both Java services use the OpenTelemetry Java Agent for automatic instrumentation:

# From Dockerfile
RUN curl -L -o opentelemetry-javaagent.jar \
    https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

CMD ["java", "-javaagent:opentelemetry-javaagent.jar", "-jar", "app.jar"]

Environment Variables

# From docker-compose.yml
environment:
  - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
  - OTEL_EXPORTER_OTLP_PROTOCOL=grpc
  - OTEL_SERVICE_NAME=todo-service
  - OTEL_RESOURCE_ATTRIBUTES=service.name=todo-service,service.version=1.0.0

🧪 Testing the Setup

Health Checks

# Check if all services are running
docker-compose ps

# Test the API directly
curl http://localhost:3000/todos

# Test validation service
curl http://localhost:3001/validate/health

Generate Test Data

# Create a valid todo
curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"name": "Learn distributed tracing"}'

# Try an invalid todo (triggers validation)
curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"name": "This is bad content"}'

🎓 Learning Objectives

After exploring this demo, you'll understand the key concepts from Chapter 5:

  1. Auto-Instrumentation: How the Java Agent captures telemetry without code changes
  2. Business Context: Why adding meaningful attributes makes telemetry actionable
  3. Manual vs Automatic: When auto-instrumentation isn't enough and custom spans are needed
  4. Cross-Service Tracing: How trace context propagates through HTTP calls and databases
  5. Multi-Language Telemetry: Correlating traces from Java services and React frontend
  6. Production Patterns: Real-world instrumentation that scales beyond toy examples

This isn't just a "hello world" - it's a realistic microservices application that shows how OpenTelemetry works when systems get complex.
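Objective 4 (cross-service tracing) is visible in the raw HTTP traffic itself: the Java Agent and the Web SDK inject a W3C traceparent header into every outgoing request, and the receiving service continues the same trace. As a minimal sketch of what travels over the wire (plain Java with no OpenTelemetry dependency; the trace and span IDs below are illustrative, not from the demo):

```java
// Sketch of the W3C Trace Context header injected by the tracecontext propagator.
public class TraceparentDemo {
    public static void main(String[] args) {
        // Format: version-traceid-parentspanid-flags (IDs here are made up)
        String traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";

        String[] parts = traceparent.split("-");
        String version = parts[0]; // "00": current spec version
        String traceId = parts[1]; // 32 hex chars, shared by every span in the trace
        String spanId  = parts[2]; // 16 hex chars, identifies the caller's span
        String flags   = parts[3]; // "01": the trace was sampled upstream

        System.out.println("traceId=" + traceId + " sampled=" + flags.equals("01"));
    }
}
```

Because every service forwards the same trace ID, Jaeger can stitch the frontend, Todo Service, and Validation Service spans into one end-to-end trace.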

🛑 Stopping the Demo

# Stop all services
docker-compose down

# Remove volumes (clears database data)
docker-compose down -v

🔧 Troubleshooting

Common Issues

Services won't start: Ensure Docker has enough memory (8GB+) and ports are available

No traces in Jaeger: Check that services are sending to the collector:

docker-compose logs otel-collector

Frontend can't connect: Verify nginx is properly proxying requests:

docker-compose logs frontend

Useful Commands

# View service logs
docker-compose logs [service-name]

# Restart a specific service
docker-compose restart [service-name]

# Rebuild without cache
docker-compose build --no-cache [service-name]

🚢 Kubernetes Deployment (No-Touch Instrumentation)

In addition to the Docker Compose setup, this demo showcases Chapter 5's no-touch instrumentation approach using the OpenTelemetry Operator in Kubernetes. This demonstrates how platform teams can enable observability at scale without requiring code changes from development teams.

Prerequisites

  • Kubernetes cluster (kind, minikube, or cloud cluster)
  • Docker for building images
  • Helm for installing observability stack
  • kubectl configured for your cluster
  • 8GB+ RAM recommended for full observability stack

What No-Touch Instrumentation Provides

Following the principles outlined in Chapter 5, the OpenTelemetry Operator delivers:

  • Zero Code Changes: Applications get instrumented automatically via Kubernetes annotations
  • Consistent Configuration: Centralized instrumentation policy across all services
  • Platform Team Control: Enable observability standards without developer intervention
  • Runtime Injection: OpenTelemetry agents are injected at pod startup, not build time

This approach is particularly powerful for organizations wanting to enforce observability standards while minimizing developer friction.

Quick Start with Kubernetes

  1. Create the cluster and infrastructure:

    make cluster

    This creates a kind cluster with multi-node configuration and sets up the opentelemetry namespace.

  2. Deploy the complete observability stack:

    make deploy-all

    This single command installs:

    • MySQL database
    • Jaeger for distributed tracing
    • Prometheus for metrics
    • OpenSearch and OpenSearch Dashboards for logs
    • OpenTelemetry Operator for no-touch instrumentation
    • OpenTelemetry Collector for telemetry pipeline
    • Demo application services
  3. Access the application:

    kubectl port-forward svc/frontend 3000:80

    Then open http://localhost:3000 to access the Dash0 Todo Demo.

  4. Create some todos to generate telemetry data:

    • Add valid todos using the main form
    • Click "Try Invalid Todo" to generate validation errors and see distributed error traces
  5. Explore the observability stack:

    Jaeger (Distributed Tracing):

    kubectl port-forward svc/jaeger-query 16686:16686

    View at http://localhost:16686 to see complete request traces across all services.

    Prometheus (Metrics):

    kubectl port-forward svc/prometheus-server 9090:80

    View at http://localhost:9090 to explore metrics and create custom queries.

    OpenSearch Dashboards (Logs):

    kubectl port-forward svc/opensearch-dashboards 5601:5601

    View at http://localhost:5601 to search and analyze application logs.

How the No-Touch Instrumentation Works

The magic happens in the Instrumentation resource (located in kubernetes/instrumentations/instrumentation.yaml):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation
  namespace: opentelemetry
spec:
  exporter:
    endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: always_on
  resource:
    addK8sUIDAttributes: true
  java:
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp
      - name: OTEL_INSTRUMENTATION_LOGBACK_APPENDER_ENABLED
        value: "true"

Services opt into instrumentation via a simple annotation:

# In deployment manifests
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-java: "opentelemetry/instrumentation"

When a pod starts, the OpenTelemetry Operator:

  1. Injects the OpenTelemetry Java agent as an init container
  2. Sets the JAVA_TOOL_OPTIONS environment variable to load the agent
  3. Configures OTLP exporters, sampling, and resource attributes
  4. Enables automatic log capture from Logback/Log4j frameworks
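Conceptually, the mutated pod spec ends up looking roughly like this (an illustrative sketch, not the operator's exact output; image tag and container names vary by operator version):

```
# Sketch of what the operator adds to an instrumented pod
spec:
  initContainers:
    - name: opentelemetry-auto-instrumentation
      # Copies the Java agent jar into a shared emptyDir volume
      image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:<version>
  containers:
    - name: todo-service
      env:
        - name: JAVA_TOOL_OPTIONS
          value: "-javaagent:/otel-auto-instrumentation/javaagent.jar"
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector.opentelemetry.svc.cluster.local:4317
```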

Key Differences from Docker Compose

Aspect              Docker Compose                          Kubernetes
Instrumentation     Java agent downloaded in Dockerfile     Agent injected by operator
Configuration       Environment variables in compose file   Centralized Instrumentation resource
Service Discovery   Docker service names                    Kubernetes DNS (FQDN)
Scaling             Manual service scaling                  Kubernetes deployment scaling
Updates             Rebuild images                          Update Instrumentation resource

Kubernetes-Specific Features

Resource Attribution: Automatic Kubernetes metadata is added to all telemetry:

  • k8s.pod.name, k8s.namespace.name
  • k8s.deployment.name, k8s.node.name
  • service.instance.id with pod information

Multi-Environment Support: Different Instrumentation resources can be created for different environments (dev, staging, prod) with appropriate sampling rates and exporters.

Zero Downtime Updates: Instrumentation configuration changes are applied on pod restart without requiring image rebuilds.
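For example, a staging variant of the Instrumentation resource might dial sampling down to 10% (a hypothetical sketch; only the name and sampler differ from the resource shown above):

```
# Hypothetical staging Instrumentation with ratio-based head sampling
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation-staging
  namespace: opentelemetry
spec:
  exporter:
    endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.1"
```

Deployments then pick the policy for their environment simply by pointing the inject annotation at the matching resource.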

Observability Stack Architecture

┌─────────────┐    ┌──────────────┐    ┌──────────────────┐
│   Frontend  │───▶│ Todo Service │───▶│Validation Service│
│   (React)   │    │   (Java)     │    │      (Java)      │
└─────────────┘    └──────────────┘    └──────────────────┘
       │                   │                      │
       │                   ▼                      │
       │           ┌──────────────┐               │
       │           │    MySQL     │               │
       │           └──────────────┘               │
       └──────────────────┬───────────────────────┘
                          ▼
              ┌────────────────────────┐
              │ OpenTelemetry Collector│
              └───┬────────┬─────────┬─┘
                  │        │         │
       ┌──────────▼─┐  ┌───▼──────┐ ┌▼───────────┐
       │   Jaeger   │  │Prometheus│ │ OpenSearch │
       │  (Traces)  │  │(Metrics) │ │   (Logs)   │
       └────────────┘  └──────────┘ └────────────┘

Cleanup

# Remove the entire cluster
make delete-cluster

# Or stop individual components
kubectl delete -f ./frontend/manifests/
kubectl delete -f ./todo-service/manifests/
kubectl delete -f ./validation-service/manifests/

🚀 Forwarding Telemetry to Dash0

This demo is pre-configured to work with Dash0, a modern observability platform built for OpenTelemetry. You can easily forward all telemetry data (traces, metrics, and logs) to Dash0 alongside or instead of the local observability stack.

Why Forward to Dash0?

  • Production-Ready: Built for enterprise-scale observability with high availability
  • OpenTelemetry Native: Designed specifically for OpenTelemetry data with zero vendor lock-in
  • Advanced Analytics: AI-powered insights, anomaly detection, and intelligent alerting
  • Team Collaboration: Share dashboards, insights, and investigations across your team
  • Cost Effective: Pay only for what you use with intelligent data sampling

Docker Compose Setup

To forward telemetry to Dash0 in your Docker Compose environment:

  1. Get your Dash0 authorization token from your Dash0 dashboard

  2. Update docker-compose.yml to add the token:

    otel-collector:
      # ... existing configuration
      environment:
        - DASH0_AUTHORIZATION_TOKEN=<your-dash0-token>
  3. Update otel-collector-config.yaml to enable Dash0 exporters by uncommenting the relevant sections:

    extensions:
      # Uncomment this section:
      bearertokenauth/dash0:
        scheme: Bearer
        token: ${env:DASH0_AUTHORIZATION_TOKEN}
    
    exporters:
      # Uncomment this section:
      otlp/dash0:
        auth:
          authenticator: bearertokenauth/dash0
        endpoint: ingress.eu-west-1.aws.dash0.com:4317
    
    service:
      # Update to include bearertokenauth/dash0:
      extensions: [basicauth/client, bearertokenauth/dash0]
      pipelines:
        metrics:
          # Update exporters (choose local, Dash0, or both):
          exporters: [prometheus, otlp/dash0]
        traces:
          # Update exporters (choose local, Dash0, or both):
          exporters: [otlp/jaeger, otlp/dash0]
        logs:
          # Update exporters (choose local, Dash0, or both):
          exporters: [opensearch/log, otlp/dash0]
  4. Restart the collector:

    docker-compose restart otel-collector

Kubernetes Setup

To forward telemetry to Dash0 in your Kubernetes environment:

  1. Get your Dash0 authorization token from your Dash0 dashboard

  2. Create a Kubernetes secret with your token:

    export DASH0_AUTH_TOKEN="your-dash0-token"
    kubectl create secret generic dash0-secrets \
      --from-literal=dash0-authorization-token=${DASH0_AUTH_TOKEN} \
      --namespace opentelemetry
  3. Update kubernetes/collector/values.yaml to enable Dash0 configuration by uncommenting the relevant sections:

    extraEnvs:
      # Uncomment these lines:
      - name: DASH0_AUTHORIZATION_TOKEN
        valueFrom:
          secretKeyRef:
            name: dash0-secrets
            key: dash0-authorization-token
    
    config:
      exporters:
        # Uncomment this section:
        otlp/dash0:
          auth:
            authenticator: bearertokenauth/dash0
          endpoint: ingress.eu-west-1.aws.dash0.com:4317
      
      extensions:
        # Uncomment this section:
        bearertokenauth/dash0:
          scheme: Bearer
          token: ${env:DASH0_AUTHORIZATION_TOKEN}
      
      service:
        extensions:
          # Add bearertokenauth/dash0:
          - basicauth/client
          - health_check
          - bearertokenauth/dash0
        pipelines:
          metrics:
            # Update exporters (choose local, Dash0, or both):
            exporters: [otlphttp/prometheus, otlp/dash0]
          traces:
            # Update exporters (choose local, Dash0, or both):
            exporters: [otlp/jaeger, otlp/dash0]
          logs:
            # Update exporters (choose local, Dash0, or both):
            exporters: [opensearch/log, otlp/dash0]
  4. Update the collector deployment:

    helm upgrade otel-collector-deployment open-telemetry/opentelemetry-collector \
      --namespace opentelemetry -f ./kubernetes/collector/values.yaml

Happy Tracing! 🔍✨

This demo shows OpenTelemetry's power in action - from a single user click to distributed traces spanning multiple services, databases, and external APIs. Whether you choose Docker Compose for local development, Kubernetes for production-like no-touch instrumentation, or forward everything to Dash0 for enterprise observability, you'll see how Chapter 5's principles work in practice.
