This project demonstrates how to instrument a Celery application with OpenTelemetry for distributed tracing and monitoring. It includes multiple task generators, workers, and a monitoring setup.
The project consists of the following components:
- Celery Workers: Process tasks from 4 different queues
- Task Generators: 4 separate services generating different mathematical operations
- RabbitMQ: Message broker for Celery
- OpenTelemetry Collector: Collects and exports telemetry data
- Flower: Web-based tool for monitoring Celery tasks
Each queue is dedicated to a single operation:
- Queue1: addition
- Queue2: multiplication
- Queue3: subtraction
- Queue4: division
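The four operations above can be sketched as plain Python functions. These are illustrative only; the actual task definitions live in `tasks/tasks.py` and may differ in name and signature:

```python
# Illustrative sketches of the four queue operations (hypothetical names;
# the real Celery tasks are defined in tasks/tasks.py).

def add(x, y):
    """queue1: addition."""
    return x + y

def multiply(x, y):
    """queue2: multiplication."""
    return x * y

def subtract(x, y):
    """queue3: subtraction."""
    return x - y

def divide(x, y):
    """queue4: division (raises ZeroDivisionError when y == 0)."""
    return x / y
```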
Prerequisites:
- Docker
- Docker Compose
- Python 3.9+ (for local development)
The services are configured through the following environment variables:

- `CELERY_BROKER_URL`: RabbitMQ connection URL (default: `amqp://guest:guest@rabbitmq:5672//`)
- `TASK_DELAY`: delay between task generation, in seconds
  - Generator1: 10 seconds (default)
  - Generator2: 15 seconds (default)
  - Generator3: 20 seconds (default)
  - Generator4: 25 seconds (default)
- `OTEL_EXPORTER_OTLP_PROTOCOL`: protocol for OpenTelemetry export
- `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT`: endpoint for logs
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`: endpoint for metrics
- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`: endpoint for traces
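A generator reads these variables at startup and falls back to the documented defaults. A minimal sketch using the standard library (the variable names come from this README; the snippet itself is illustrative, not the repository's code):

```python
import os

# Broker URL with the documented default.
BROKER_URL = os.environ.get(
    "CELERY_BROKER_URL", "amqp://guest:guest@rabbitmq:5672//"
)

# Per-generator delay in seconds; "10" is Generator1's default.
TASK_DELAY = float(os.environ.get("TASK_DELAY", "10"))
```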
- Clone the repository:

  ```shell
  git clone https://github.com/SigNoz/celery-opentelemetry-instrumentation.git
  cd celery-opentelemetry-instrumentation
  ```

- Build and start the services:

  ```shell
  docker-compose up --build
  ```

- Access the monitoring interfaces:
  - Flower Dashboard: http://localhost:5555
```
.
├── celery_app/
│   ├── celery.py        # Celery application configuration
│   └── Dockerfile       # Celery worker Dockerfile
├── task_generators/
│   ├── generator1.py    # Addition task generator
│   ├── generator2.py    # Multiplication task generator
│   ├── generator3.py    # Subtraction task generator
│   ├── generator4.py    # Division task generator
│   └── Dockerfile       # Task generators Dockerfile
├── tasks/
│   └── tasks.py         # Task definitions
├── opentelemetry-collector/
│   ├── config.yaml      # OpenTelemetry collector configuration
│   └── Dockerfile       # OpenTelemetry collector Dockerfile
└── docker-compose.yml   # Docker services configuration
```
The OpenTelemetry collector is configured to:
- Receive traces and metrics via OTLP (HTTP and gRPC)
- Process data using the batch processor
- Export detailed debug information
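A collector config matching those three requirements follows the standard `receivers`/`processors`/`exporters` layout. This is an illustrative sketch, not the repository's actual `config.yaml` (ports shown are the conventional OTLP defaults):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]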
The Celery application is configured with:
- 4 separate queues (queue1, queue2, queue3, queue4)
- RPC result backend
- Task result expiration: 3600 seconds
- Default queue: queue1
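The settings listed above can be expressed as the mapping one would pass to Celery's `app.conf.update()`. The task names under `task_routes` are hypothetical; the real ones are defined in `tasks/tasks.py`:

```python
# The Celery settings described above, as a plain mapping suitable for
# app.conf.update(**CELERY_SETTINGS). Task names are hypothetical.
CELERY_SETTINGS = {
    "task_default_queue": "queue1",  # default queue
    "result_backend": "rpc://",      # RPC result backend
    "result_expires": 3600,          # results expire after 3600 seconds
    "task_routes": {
        "tasks.add": {"queue": "queue1"},
        "tasks.multiply": {"queue": "queue2"},
        "tasks.subtract": {"queue": "queue3"},
        "tasks.divide": {"queue": "queue4"},
    },
}
```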
To set up the development environment:
- Create a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```shell
  pip install -r celery_app/requirements.txt
  pip install -r task_generators/requirements.txt
  ```
- Use Flower dashboard to monitor task execution
- Check OpenTelemetry collector logs for trace information
- Monitor RabbitMQ queues for message flow
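Queue depth can also be checked over RabbitMQ's HTTP management API (port 15672, with the management plugin enabled, as in the standard `rabbitmq:management` images). The helper below only builds the request URL and is illustrative; GET-ing it with the broker's credentials returns JSON that includes the queue's message count:

```python
from urllib.parse import quote

def queue_api_url(queue, host="localhost", port=15672, vhost="/"):
    """Build the RabbitMQ management-API URL for inspecting one queue."""
    # The default vhost "/" must be percent-encoded as %2F in the path.
    return f"http://{host}:{port}/api/queues/{quote(vhost, safe='')}/{queue}"
```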
This project is licensed under the MIT License.
To contribute:
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request