This repository contains the companion demo application for the book "OpenTelemetry for Dummies". It demonstrates OpenTelemetry in action - showing how to move from theory to practice with real code, real services, and real observability.
This application brings Chapter 5 to life, demonstrating:
- 🔧 Auto-Instrumentation: Zero-code-change observability with the Java Agent
- 📊 Business Context: Enriching telemetry with meaningful attributes and metrics
- 🔍 Distributed Tracing: Following requests across multiple services and external APIs
- 🎛️ Manual Instrumentation: Adding custom spans when auto-instrumentation isn't enough
- 🌐 Multi-Language Support: Java backend services + React frontend instrumentation
The demo consists of a realistic e-commerce-style todo application with three main services:
```
┌──────────┐    ┌──────────────┐    ┌────────────────────┐    ┌─────────────────┐
│ Frontend │───▶│ Todo Service │───▶│ Validation Service │───▶│ JSONPlaceholder │
│ (React)  │    │    (Java)    │    │       (Java)       │    │   (External)    │
└──────────┘    └──────────────┘    └────────────────────┘    └─────────────────┘
                       │
                       ▼
                ┌──────────────┐
                │    MySQL     │
                │  (Database)  │
                └──────────────┘
```
**Frontend**
- Technology: React with OpenTelemetry Web SDK
- Purpose: User interface for todo management
- Instrumentation: Automatic fetch/XHR tracing + manual spans for user actions
- Features: Create, view, and delete todos with real-time error handling
**Todo Service**
- Technology: Spring Boot with OpenTelemetry Java Agent
- Purpose: Main REST API for todo operations
- Instrumentation: Automatic HTTP, database, and service-to-service call tracing
- Database: MySQL with JPA/Hibernate auto-instrumentation
**Validation Service**
- Technology: Spring Boot microservice with OpenTelemetry Java Agent
- Purpose: Content validation and external API integration
- Instrumentation: HTTP client calls to external services
- External Integration: Calls JSONPlaceholder API for realistic distributed traces
The complete observability infrastructure includes:
- 📊 Jaeger (Port 16686) - Distributed tracing visualization
- 🔍 OpenTelemetry Collector (Ports 4317/4318) - Telemetry data collection and routing
- 📈 Prometheus (Port 9090) - Metrics collection and storage
- 🗄️ OpenSearch (Port 9200) - Log storage and search
- 📋 OpenSearch Dashboards (Port 5601) - Log visualization and analysis
- Docker and Docker Compose
- 8GB+ RAM recommended (for all services)
- Ports 3000-3002, 4317-4318, 5601, 9090, 9200, 16686 available
- Clone the repository:

  ```sh
  git clone <repository-url>
  cd opentelemetry-for-dummies
  ```

- Start all services:

  ```sh
  docker-compose up --build
  ```

- Wait for services to start (2-3 minutes for all health checks to pass)

- Access the application:
  - Frontend UI: http://localhost:3002
  - Jaeger Tracing: http://localhost:16686
  - Prometheus Metrics: http://localhost:9090
  - OpenSearch Dashboards: http://localhost:5601
- Open http://localhost:3002
- Enter "Learn OpenTelemetry" in the todo field
- Click "Add Todo"
- View the distributed trace in Jaeger showing the complete flow
- Click the "Try Invalid Todo" button (contains "bad" keyword)
- Observe the validation failure in the UI
- Check Jaeger to see the trace with validation error details
- Create any todo - this triggers a call to JSONPlaceholder API
- In Jaeger, observe spans showing external HTTP calls
- Notice how the trace includes both your services and external dependencies
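The validation rule itself isn't shown in this README, so here is a hedged sketch of what the Validation Service plausibly checks, based only on the "bad" keyword mentioned above; the method name and exact rules are illustrative assumptions, not the service's actual code:

```java
// Hypothetical reconstruction of the demo's validation rule. The README only
// states that todos containing the keyword "bad" fail validation, so the
// checks below are illustrative assumptions.
public class TodoValidation {
    static boolean isValidTodoName(String name) {
        if (name == null || name.isBlank()) {
            return false; // reject empty todos
        }
        // The "Try Invalid Todo" button sends content containing "bad".
        return !name.toLowerCase().contains("bad");
    }
}
```

Under these assumptions, `"Learn OpenTelemetry"` passes while `"This is bad content"` fails, matching the curl examples later in this README.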
When viewing traces in Jaeger (http://localhost:16686), you'll see:
- Complete service topology and dependencies
- Request rates and error percentages
- Service health indicators
- Frontend spans: User interactions, fetch requests
- Todo Service spans: REST endpoints, database queries
- Validation Service spans: External API calls, business logic
- Database spans: SQL queries with automatic instrumentation
- HTTP Client spans: Service-to-service communication
- `service.name`: Service identification
- `http.method`, `http.url`: HTTP request details
- `db.statement`: SQL queries (when enabled)
- `user.action`: Custom business events
- `validation.result`: Business logic outcomes
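To make concrete why these attributes matter, here is a small dependency-free sketch (plain Java, no OpenTelemetry types) of the kind of tag filter a tracing backend applies when you search spans by attribute; the `RecordedSpan` model is invented purely for illustration:

```java
import java.util.List;
import java.util.Map;

// Toy, dependency-free model of recorded spans, used only to illustrate how
// business attributes make telemetry queryable in a backend like Jaeger.
public class AttributeQuery {
    record RecordedSpan(String name, Map<String, Object> attributes) {}

    // Return the spans whose attribute `key` equals `expected` -- the kind of
    // filter a tracing backend runs when you search by tag.
    static List<RecordedSpan> withAttribute(List<RecordedSpan> spans, String key, Object expected) {
        return spans.stream()
                .filter(s -> expected.equals(s.attributes().get(key)))
                .toList();
    }
}
```

With `validation.result` on every span, "show me all failed validations" becomes a one-line query instead of a log-grepping exercise.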
This repository demonstrates the instrumentation patterns covered in Chapter 5 of the book.
Both Java services use the OpenTelemetry Java Agent for zero-code-change instrumentation:
```sh
java -javaagent:opentelemetry-javaagent.jar -jar todo.jar
```

The agent automatically captures HTTP requests, database queries, and service-to-service calls - exactly as described in the book.
Just like in Chapter 5, we enrich spans with business attributes:
```java
// From TodoController.java and ValidationController.java
Span current = Span.current();
current.setAttribute("todo.name", todoName);
current.setAttribute("todo.name.length", todoName.length());
```

Following the book's examples, we track business metrics:

```java
// From ValidationController.java
LongCounter validationCounter = openTelemetry.getMeter("validation-service")
    .counterBuilder("validations.performed")
    .setDescription("Number of validations performed")
    .build();

// Usage with attributes:
validationCounter.add(1, Attributes.of(
    AttributeKey.stringKey("validation.type"), "todo_name",
    AttributeKey.booleanKey("validation.passed"), isValid
));
```

When auto-instrumentation isn't enough, we add custom spans:
```java
// From TodoController.java
Span span = tracer.spanBuilder("create_todo").startSpan();
try (Scope scope = span.makeCurrent()) {
    span.setAttribute("todo.name", todoName);
    span.addEvent("Starting validation");
    // Business logic here
    Todo savedTodo = repository.save(todo);
    span.addEvent("Todo created successfully");
    return ResponseEntity.status(HttpStatus.CREATED).body(savedTodo);
} catch (Exception e) {
    span.recordException(e);
    throw e;
} finally {
    span.end();
}
```

The React frontend adds its own manual spans around user actions:

```javascript
// From App.js
const tracer = trace.getTracer('todo-frontend', '1.0.0');

const createTodo = async (e) => {
  const span = tracer.startSpan('create_todo');
  span.setAttributes({
    'user.action': 'create_todo',
    'todo.name': newTodo,
    'todo.name.length': newTodo.length
  });
  try {
    const response = await fetch('/todos', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: newTodo }),
    });
    span.addEvent('Todo created successfully');
  } finally {
    span.end();
  }
};
```

Both Java services use the OpenTelemetry Java Agent for automatic instrumentation:
```dockerfile
# From Dockerfile
RUN curl -L -o opentelemetry-javaagent.jar \
    https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

CMD ["java", "-javaagent:opentelemetry-javaagent.jar", "-jar", "app.jar"]
```

```yaml
# From docker-compose.yml
environment:
  - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
  - OTEL_EXPORTER_OTLP_PROTOCOL=grpc
  - OTEL_SERVICE_NAME=todo-service
  - OTEL_RESOURCE_ATTRIBUTES=service.name=todo-service,service.version=1.0.0
```

```sh
# Check if all services are running
docker-compose ps

# Test the API directly
curl http://localhost:3000/todos

# Test validation service
curl http://localhost:3001/validate/health

# Create a valid todo
curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"name": "Learn distributed tracing"}'

# Try an invalid todo (triggers validation)
curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"name": "This is bad content"}'
```

After exploring this demo, you'll understand the key concepts from Chapter 5:
- Auto-Instrumentation: How the Java Agent captures telemetry without code changes
- Business Context: Why adding meaningful attributes makes telemetry actionable
- Manual vs Automatic: When auto-instrumentation isn't enough and custom spans are needed
- Cross-Service Tracing: How trace context propagates through HTTP calls and databases
- Multi-Language Telemetry: Correlating traces from Java services and React frontend
- Production Patterns: Real-world instrumentation that scales beyond toy examples
This isn't just a "hello world" - it's a realistic microservices application that shows how OpenTelemetry works when systems get complex.
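Cross-service tracing works because each outgoing HTTP request carries a `traceparent` header in the W3C Trace Context format. As a sketch of the plumbing, here is a minimal parser for that header, assuming nothing beyond the published field layout:

```java
// Minimal parser for the W3C `traceparent` header that the Java agent and
// Web SDK attach to outgoing requests. Layout (hex, dash-separated):
//   version(2) - trace-id(32) - parent-id(16) - trace-flags(2)
public class Traceparent {
    final String traceId;   // shared by every span in the distributed trace
    final String parentId;  // span id of the caller's currently active span
    final boolean sampled;  // bit 0 of the trace-flags byte

    Traceparent(String traceId, String parentId, boolean sampled) {
        this.traceId = traceId;
        this.parentId = parentId;
        this.sampled = sampled;
    }

    static Traceparent parse(String header) {
        String[] parts = header.split("-");
        if (parts.length != 4 || parts[1].length() != 32 || parts[2].length() != 16) {
            throw new IllegalArgumentException("malformed traceparent: " + header);
        }
        boolean sampled = (Integer.parseInt(parts[3], 16) & 1) == 1;
        return new Traceparent(parts[1], parts[2], sampled);
    }
}
```

When the Todo Service calls the Validation Service, both report spans carrying the same trace-id, which is how Jaeger stitches the flow into a single trace.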
```sh
# Stop all services
docker-compose down

# Remove volumes (clears database data)
docker-compose down -v
```

Services won't start: Ensure Docker has enough memory (8GB+) and that the required ports are available.

No traces in Jaeger: Check that services are sending to the collector:

```sh
docker-compose logs otel-collector
```

Frontend can't connect: Verify nginx is properly proxying requests:

```sh
docker-compose logs frontend
```

```sh
# View service logs
docker-compose logs [service-name]

# Restart a specific service
docker-compose restart [service-name]

# Rebuild without cache
docker-compose build --no-cache [service-name]
```

In addition to the Docker Compose setup, this demo showcases Chapter 5's no-touch instrumentation approach using the OpenTelemetry Operator in Kubernetes. This demonstrates how platform teams can enable observability at scale without requiring code changes from development teams.
- Kubernetes cluster (kind, minikube, or cloud cluster)
- Docker for building images
- Helm for installing observability stack
- kubectl configured for your cluster
- 8GB+ RAM recommended for full observability stack
Following the principles outlined in Chapter 5, the OpenTelemetry Operator delivers:
- Zero Code Changes: Applications get instrumented automatically via Kubernetes annotations
- Consistent Configuration: Centralized instrumentation policy across all services
- Platform Team Control: Enable observability standards without developer intervention
- Runtime Injection: OpenTelemetry agents are injected at pod startup, not build time
This approach is particularly powerful for organizations wanting to enforce observability standards while minimizing developer friction.
- Create the cluster and infrastructure:

  ```sh
  make cluster
  ```

  This creates a kind cluster with a multi-node configuration and sets up the `opentelemetry` namespace.

- Deploy the complete observability stack:

  ```sh
  make deploy-all
  ```
This single command installs:
- MySQL database
- Jaeger for distributed tracing
- Prometheus for metrics
- OpenSearch and OpenSearch Dashboards for logs
- OpenTelemetry Operator for no-touch instrumentation
- OpenTelemetry Collector for telemetry pipeline
- Demo application services
- Access the application:

  ```sh
  kubectl port-forward svc/frontend 3000:80
  ```
Then open http://localhost:3000 to access the Dash0 Todo Demo.
- Create some todos to generate telemetry data:
  - Add valid todos using the main form
  - Click "Try Invalid Todo" to generate validation errors and see distributed error traces
- Explore the observability stack:

  Jaeger (Distributed Tracing):

  ```sh
  kubectl port-forward svc/jaeger-query 16686:16686
  ```

  View at http://localhost:16686 to see complete request traces across all services.

  Prometheus (Metrics):

  ```sh
  kubectl port-forward svc/prometheus-server 9090:80
  ```

  View at http://localhost:9090 to explore metrics and create custom queries.

  OpenSearch Dashboards (Logs):

  ```sh
  kubectl port-forward svc/opensearch-dashboards 5601:5601
  ```

  View at http://localhost:5601 to search and analyze application logs.
The magic happens in the Instrumentation resource (located in `kubernetes/instrumentations/instrumentation.yaml`):
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation
  namespace: opentelemetry
spec:
  exporter:
    endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: always_on
  resource:
    addK8sUIDAttributes: true
  java:
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp
      - name: OTEL_INSTRUMENTATION_LOGBACK_APPENDER_ENABLED
        value: "true"
```

Services opt into instrumentation via a simple annotation:
```yaml
# In deployment manifests
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-java: "opentelemetry/instrumentation"
```

When a pod starts, the OpenTelemetry Operator:
- Injects the OpenTelemetry Java agent as an init container
- Sets the `JAVA_TOOL_OPTIONS` environment variable to load the agent
- Configures OTLP exporters, sampling, and resource attributes
- Enables automatic log capture from Logback/Log4j frameworks
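In effect, the second step boils down to string manipulation of `JAVA_TOOL_OPTIONS`. A sketch of the idea, where the agent path is illustrative rather than the operator's exact mount path:

```java
// Sketch of what the operator's JAVA_TOOL_OPTIONS injection amounts to:
// appending a -javaagent flag to whatever options the container already set.
// The agent jar path passed in is illustrative; the real operator mounts the
// agent from its init container at its own well-known path.
public class AgentInjection {
    static String withJavaAgent(String existingOptions, String agentJarPath) {
        String flag = "-javaagent:" + agentJarPath;
        if (existingOptions == null || existingOptions.isBlank()) {
            return flag;
        }
        return existingOptions + " " + flag; // preserve the pod's own JVM options
    }
}
```

Because the JVM reads `JAVA_TOOL_OPTIONS` at startup, the agent loads without any change to the image's `CMD` or the application code.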
| Aspect | Docker Compose | Kubernetes |
|---|---|---|
| Instrumentation | Java agent downloaded in Dockerfile | Agent injected by operator |
| Configuration | Environment variables in compose file | Centralized Instrumentation resource |
| Service Discovery | Docker service names | Kubernetes DNS (FQDN) |
| Scaling | Manual service scaling | Kubernetes deployment scaling |
| Updates | Rebuild images | Update Instrumentation resource |
Resource Attribution: Automatic Kubernetes metadata is added to all telemetry:
- `k8s.pod.name`, `k8s.namespace.name`
- `k8s.deployment.name`, `k8s.node.name`
- `service.instance.id` with pod information
Multi-Environment Support: Different Instrumentation resources can be created for different environments (dev, staging, prod) with appropriate sampling rates and exporters.
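For those per-environment sampling rates, the usual replacement for `always_on` is a trace-id ratio sampler. A sketch of the decision it makes, modeled on (but not identical to) the OpenTelemetry Java SDK's `TraceIdRatioBased` sampler:

```java
// Sketch of a trace-id ratio sampling decision: derive a number from the
// trace id and sample when it falls under a bound proportional to the ratio.
// Every service sees the same trace id, so they all make the same decision
// and traces are kept or dropped whole. Modeled on, but not identical to,
// the OpenTelemetry Java SDK's TraceIdRatioBased sampler.
public class RatioSampler {
    static boolean shouldSample(String traceIdHex, double ratio) {
        long bound = (long) (ratio * Long.MAX_VALUE);
        // Use the low 64 bits of the 128-bit trace id as the random input.
        long low = Long.parseUnsignedLong(traceIdHex.substring(16), 16);
        return Math.abs(low) < bound;
    }
}
```

A dev `Instrumentation` resource might set `type: always_on` while prod uses a small ratio; because the decision is a pure function of the trace id and the ratio, scaling a deployment does not skew which traces are kept.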
Zero Downtime Updates: Instrumentation configuration changes are applied on pod restart without requiring image rebuilds.
```
┌─────────────┐    ┌──────────────┐    ┌──────────────────┐
│  Frontend   │───▶│ Todo Service │───▶│Validation Service│
│   (React)   │    │    (Java)    │    │      (Java)      │
└─────────────┘    └──────────────┘    └──────────────────┘
       │                  │                     │
       │                  ▼                     │
       │           ┌──────────────┐             │
       │           │    MySQL     │             │
       │           └──────────────┘             │
       │                                        │
       └───────────────────┬────────────────────┘
                           ▼
               ┌──────────────────────┐
               │    OpenTelemetry     │
               │      Collector       │
               └───────────┬──────────┘
                           │
           ┌───────────────┼───────────────┐
           ▼               ▼               ▼
    ┌────────────┐  ┌────────────┐  ┌────────────┐
    │   Jaeger   │  │ Prometheus │  │ OpenSearch │
    │  (Traces)  │  │ (Metrics)  │  │   (Logs)   │
    └────────────┘  └────────────┘  └────────────┘
```
```sh
# Remove the entire cluster
make delete-cluster

# Or stop individual components
kubectl delete -f ./frontend/manifests/
kubectl delete -f ./todo-service/manifests/
kubectl delete -f ./validation-service/manifests/
```

This demo is pre-configured to work with Dash0, a modern observability platform built for OpenTelemetry. You can easily forward all telemetry data (traces, metrics, and logs) to Dash0 alongside or instead of the local observability stack.
- Production-Ready: Built for enterprise-scale observability with high availability
- OpenTelemetry Native: Designed specifically for OpenTelemetry data with zero vendor lock-in
- Advanced Analytics: AI-powered insights, anomaly detection, and intelligent alerting
- Team Collaboration: Share dashboards, insights, and investigations across your team
- Cost Effective: Pay only for what you use with intelligent data sampling
To forward telemetry to Dash0 in your Docker Compose environment:
- Get your Dash0 authorization token from your Dash0 dashboard

- Update `docker-compose.yml` to add the token:

  ```yaml
  otel-collector:
    # ... existing configuration
    environment:
      - DASH0_AUTHORIZATION_TOKEN=<your-dash0-token>
  ```

- Update `otel-collector-config.yaml` to enable the Dash0 exporters by uncommenting the relevant sections:

  ```yaml
  extensions:
    # Uncomment this section:
    bearertokenauth/dash0:
      scheme: Bearer
      token: ${env:DASH0_AUTHORIZATION_TOKEN}

  exporters:
    # Uncomment this section:
    otlp/dash0:
      auth:
        authenticator: bearertokenauth/dash0
      endpoint: ingress.eu-west-1.aws.dash0.com:4317

  service:
    # Update to include bearertokenauth/dash0:
    extensions: [basicauth/client, bearertokenauth/dash0]
    pipelines:
      metrics:
        # Update exporters (choose local, Dash0, or both):
        exporters: [prometheus, otlp/dash0]
      traces:
        # Update exporters (choose local, Dash0, or both):
        exporters: [otlp/jaeger, otlp/dash0]
      logs:
        # Update exporters (choose local, Dash0, or both):
        exporters: [opensearch/log, otlp/dash0]
  ```

- Restart the collector:

  ```sh
  docker-compose restart otel-collector
  ```
To forward telemetry to Dash0 in your Kubernetes environment:
- Get your Dash0 authorization token from your Dash0 dashboard

- Create a Kubernetes secret with your token:

  ```sh
  export DASH0_AUTH_TOKEN="your-dash0-token"
  kubectl create secret generic dash0-secrets \
    --from-literal=dash0-authorization-token=${DASH0_AUTH_TOKEN} \
    --namespace opentelemetry
  ```

- Update `kubernetes/collector/values.yaml` to enable the Dash0 configuration by uncommenting the relevant sections:

  ```yaml
  extraEnvs:
    # Uncomment these lines:
    - name: DASH0_AUTHORIZATION_TOKEN
      valueFrom:
        secretKeyRef:
          name: dash0-secrets
          key: dash0-authorization-token

  config:
    exporters:
      # Uncomment this section:
      otlp/dash0:
        auth:
          authenticator: bearertokenauth/dash0
        endpoint: ingress.eu-west-1.aws.dash0.com:4317
    extensions:
      # Uncomment this section:
      bearertokenauth/dash0:
        scheme: Bearer
        token: ${env:DASH0_AUTHORIZATION_TOKEN}
    service:
      extensions:
        # Add bearertokenauth/dash0:
        - basicauth/client
        - health_check
        - bearertokenauth/dash0
      pipelines:
        metrics:
          # Update exporters (choose local, Dash0, or both):
          exporters: [otlphttp/prometheus, otlp/dash0]
        traces:
          # Update exporters (choose local, Dash0, or both):
          exporters: [otlp/jaeger, otlp/dash0]
        logs:
          # Update exporters (choose local, Dash0, or both):
          exporters: [opensearch/log, otlp/dash0]
  ```

- Update the collector deployment:

  ```sh
  helm upgrade otel-collector-deployment open-telemetry/opentelemetry-collector \
    --namespace opentelemetry -f ./kubernetes/collector/values.yaml
  ```
Happy Tracing! 🔍✨
This demo shows OpenTelemetry's power in action - from a single user click to distributed traces spanning multiple services, databases, and external APIs. Whether you choose Docker Compose for local development, Kubernetes for production-like no-touch instrumentation, or forward everything to Dash0 for enterprise observability, you'll see how Chapter 5's principles work in practice.

