A demonstration of a modern platform engineering workflow featuring a Backstage developer portal with OpenTelemetry-instrumented microservices. It showcases how platform teams can provide self-service capabilities for developers to create new services with built-in observability, complete with an automatic telemetry pipeline that exports metrics, traces, and logs to both local observability tools and Dash0.
- Docker
- Kind
- kubectl
- Helm
- Node.js and Yarn (for Backstage)
- Dash0 account and authorization token
Copy the template and configure your Dash0 settings:
```bash
cp .env.template .env
```
Edit `.env` and add your Dash0 credentials:
```bash
DASH0_AUTH_TOKEN="your-dash0-token"
DASH0_DATASET="default"
DASH0_ENDPOINT_OTLP_GRPC_HOSTNAME="ingress.eu-west-1.aws.dash0.com"
DASH0_ENDPOINT_OTLP_GRPC_PORT="4317"
```
Execute the main orchestration script:
```bash
./00_run.sh
```
Start the Backstage developer portal:
```bash
./01_start-demo.sh
# Visit: http://localhost:3000
```
After deployment completes, access the frontend:
```bash
# Frontend
kubectl port-forward -n default svc/frontend 3001:80
# Visit: http://localhost:3001

# Jaeger UI
kubectl port-forward -n default svc/jaeger-query 16686:16686
# Visit: http://localhost:16686

# Prometheus
kubectl port-forward -n default svc/prometheus 9090:9090
# Visit: http://localhost:9090

# OpenSearch Dashboards
kubectl port-forward -n default svc/opensearch-dashboards 5601:5601
# Visit: http://localhost:5601
# Username: admin, Password: SecureP@ssw0rd123
```
All telemetry data (metrics, traces, and logs) is also exported to Dash0. Visit your Dash0 dashboard at https://app.dash0.com to see:
- Distributed traces from all services
- Application metrics
- Structured logs with correlation
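For orientation, the export side of such a pipeline typically looks something like the OpenTelemetry Collector configuration below. This is a minimal sketch, not the demo's actual configuration: the exporter name, receiver setup, and the `Dash0-Dataset` header are assumptions, and the endpoint values come from `.env`.

```yaml
# Minimal sketch of an OpenTelemetry Collector pipeline exporting to Dash0.
# Names and wiring are illustrative; the demo's generated config may differ.
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

exporters:
  otlp/dash0:
    # DASH0_ENDPOINT_OTLP_GRPC_HOSTNAME:DASH0_ENDPOINT_OTLP_GRPC_PORT
    endpoint: ingress.eu-west-1.aws.dash0.com:4317
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"
      Dash0-Dataset: "${env:DASH0_DATASET}"   # assumption: dataset selected via header

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/dash0]
    metrics:
      receivers: [otlp]
      exporters: [otlp/dash0]
    logs:
      receivers: [otlp]
      exporters: [otlp/dash0]
```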
Delete the entire cluster and all resources:
```bash
./02_cleanup.sh
```
This demo showcases how platform teams can enable developers to create fully observable services without writing instrumentation code. Using Backstage templates and OpenTelemetry auto-instrumentation, developers get distributed tracing, metrics, and logs automatically.
First, show what's running in the cluster:
```bash
kubectl get pods -A
```
You'll see the running services: frontend, todo-service, notification-service, and the observability stack.
Open the frontend application:
```bash
# Already port-forwarded from setup
# Visit: http://localhost:3001
```
Create a new todo (e.g., "Buy groceries") and show that it works.
Open Jaeger UI:
```bash
# Already port-forwarded from setup
# Visit: http://localhost:16686
```
- Select service: `todo-service`
- Find traces and click on one
- Show the distributed trace spanning:
  - HTTP request to todo-service
  - Database insert operation
  - Publisher span - todo-service publishing event to RabbitMQ
  - Consumer span - notification-service consuming the event from RabbitMQ
- Highlight that the entire flow is automatically traced with no code changes
- Show span attributes including `user.email`; note that it has been hashed by the collector to protect sensitive data (a collector-side sketch follows below)
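One way to do this kind of masking in the collector (not necessarily how this demo's pipeline is built) is the `attributes` processor from the OpenTelemetry Collector contrib distribution, which can hash an attribute value in place. The receiver, exporter, and processor names below are assumptions:

```yaml
# Sketch only: hash the user.email span attribute before export so the raw
# address never leaves the collector. Pipeline wiring is assumed.
processors:
  attributes/hash-pii:
    actions:
      - key: user.email
        action: hash   # replaces the value with its hash (SHA-1 in the contrib processor)

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/hash-pii]
      exporters: [otlp/dash0]
```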
Now we'll add a new service that validates todo names, showcasing the self-service platform.
Open Backstage:
```bash
# Visit: http://localhost:3000
```
Create the validation service:
- Click "Create" in the sidebar
- Select "Node.js Validation Service" template
- Fill in the details:
  - Service Name: `validation-service`
  - Port: `3001`
  - Description: `Validates todo names for inappropriate content`
- Click "Create"
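Behind that form is a Backstage scaffolder template that declares these inputs as `parameters`. The excerpt below is a hypothetical sketch of what the "Node.js Validation Service" template might look like; field names, defaults, and steps are assumptions, and the real template in this repo likely fetches a fuller skeleton and registers the component in the catalog.

```yaml
# Hypothetical excerpt of a Backstage scaffolder template; not copied from this repo.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: nodejs-validation-service
  title: Node.js Validation Service
spec:
  parameters:
    - title: Service details
      required: [name, port]
      properties:
        name:
          type: string
          description: Service Name (e.g. validation-service)
        port:
          type: integer
          default: 3001
        description:
          type: string
          description: Short description of the service
  steps:
    - id: fetch
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          port: ${{ parameters.port }}
          description: ${{ parameters.description }}
```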
Follow the instructions shown in Backstage:
```bash
# Navigate to the service directory
cd validation-service

# Build the Docker image
docker build -t validation-service:v1 .

# Load the image into the Kind cluster
kind load docker-image --name otel-platform-demo validation-service:v1

# Deploy to Kubernetes
kubectl apply -f manifests/

# Enable validation in todo-service
kubectl set env deployment/todo-service \
  VALIDATION_SERVICE_ENABLED=true \
  VALIDATION_SERVICE_URL=http://validation-service:3001

# Verify the deployment
kubectl get pods -n default | grep validation-service
```
The pod should be running with OpenTelemetry auto-instrumentation enabled.
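The "auto-instrumentation enabled" part comes from the generated manifests rather than from application code: the workload is annotated so the OpenTelemetry Operator injects the Node.js agent when the pod starts. The Deployment excerpt below is an illustrative sketch; the manifest generated into `manifests/` may set additional labels, probes, and resources.

```yaml
# Illustrative Deployment excerpt; the annotation is the key part.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: validation-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: validation-service
  template:
    metadata:
      labels:
        app: validation-service
      annotations:
        # Ask the OpenTelemetry Operator to inject Node.js auto-instrumentation
        instrumentation.opentelemetry.io/inject-nodejs: "true"
    spec:
      containers:
        - name: validation-service
          image: validation-service:v1
          ports:
            - containerPort: 3001
```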
Go back to the frontend (http://localhost:3001) and create a todo with "bad" in the name (e.g., "This is bad"):
- The request should fail with a validation error
- Show the error message in the UI
Return to Jaeger (http://localhost:16686):
- Refresh and find the latest trace with an error
- Click on it and expand the full trace
- Highlight the new service appearing in the trace:
  - HTTP request to todo-service
  - Call to validation-service (new!)
  - Validation service processing the request
  - Error returned to todo-service
  - No database insert (validation failed)
Key point: The validation-service is now automatically part of the distributed trace with:
- No instrumentation code written
- Context propagation working automatically
- HTTP spans captured automatically
- All enabled by the OpenTelemetry operator annotation
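That annotation points the Operator at an `Instrumentation` resource, which defines which agent to inject and where the injected SDK should send its data. Below is a sketch of such a resource, with an assumed name and collector endpoint that may not match what the demo installs:

```yaml
# Hypothetical Instrumentation resource; name, namespace, and endpoint are assumptions.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: nodejs-instrumentation
  namespace: default
spec:
  exporter:
    # In-cluster OTLP endpoint of the collector (assumed service name)
    endpoint: http://otel-collector.default.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  nodejs: {}
```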
The Backstage template also generated a Perses dashboard for monitoring the service:
```bash
kubectl apply -f ./validation-service/dashboards/validation-service-dashboard.yaml
```
Open Perses (http://localhost:8080) and view real-time metrics:
- HTTP Request Rate
- Validation Results (valid vs invalid)
- Response Time (p95)
- Event Loop & CPU utilization
Note: In a production setup, this dashboard would be automatically applied by a GitOps tool (ArgoCD, Flux, etc.) as part of the deployment pipeline.
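For illustration only, that GitOps variant could be an Argo CD `Application` roughly like the one below; the repository URL and path are placeholders, not part of this demo.

```yaml
# Hypothetical Argo CD Application; repoURL and path are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: validation-service-dashboards
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/otel-platform-demo.git   # placeholder
    targetRevision: main
    path: validation-service/dashboards
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```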
Open your Dash0 dashboard at https://app.dash0.com to show:
- All services including validation-service appearing automatically
- Distributed traces with full context
- Service map showing the relationships
- Metrics and logs correlated together
