Add Otel collector to our tracing exports to Tempo #5049
Open
ndr-ds wants to merge 1 commit into 12-03-limit_max_pending_message_bundles_in_benchmarks from 11-25-add_otel_collector_to_our_tracing_exports_to_tempo
Conversation
deuszx reviewed Jan 8, 2026
Comment on lines +138 to +147
```rust
// Configure batch processor for high-throughput scenarios.
// Larger queue (16k instead of the 2k default) to handle benchmark load.
// Faster export (100ms instead of the 5s default) to prevent queue buildup.
let batch_config = opentelemetry_sdk::trace::BatchConfigBuilder::default()
    .with_max_queue_size(16384) // 8x default, enough for 8 shards under load
    .with_max_export_batch_size(2048) // Larger batches for efficiency
    .with_scheduled_delay(std::time::Duration::from_millis(100)) // Fast export to prevent queue buildup
    .build();

let batch_processor = BatchSpanProcessor::new(exporter, batch_config);
```
Would it make sense to have different configs for running with the benchmark feature versus in production?
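One way to do that (a sketch, not code from this PR): select the batch settings based on a compile-time flag, keeping the aggressive values for benchmarks and falling back to the OTel SDK defaults (2048 queue, 512 batch, 5 s delay) in production. The helper below is hypothetical; it only picks the three numeric settings, which would then be fed into `BatchConfigBuilder` as in the diff above.

```rust
use std::time::Duration;

/// Hypothetical helper: choose batch-processor settings depending on whether
/// benchmark mode is enabled (in real code this could be
/// `cfg!(feature = "benchmark")`). Returns
/// (max_queue_size, max_export_batch_size, scheduled_delay).
fn batch_settings(benchmark: bool) -> (usize, usize, Duration) {
    if benchmark {
        // High-throughput: large queue, large batches, fast export,
        // mirroring the values in this diff.
        (16384, 2048, Duration::from_millis(100))
    } else {
        // OTel SDK defaults, adequate for steady production traffic.
        (2048, 512, Duration::from_secs(5))
    }
}
```

The production branch simply restates the SDK defaults so the choice is explicit and documented in one place.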

Motivation
Exporting traces directly to Tempo from every pod, with no sampling, generates a very high volume of data. This can backpressure the proxy and shards, fill their logs with export errors, and degrade their performance.
A two-tier collector architecture (routers receiving from pods, samplers doing tail-based sampling) is more efficient.
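For context, a minimal sketch of what the two tiers could look like in OTel Collector configuration: the router tier uses the loadbalancing exporter keyed on trace ID (so every span of a trace reaches the same sampler), and the sampler tier runs the tail_sampling processor before exporting to Tempo. The service hostname, sampling rate, and policy set below are illustrative assumptions, not the actual templates from this PR.

```yaml
# Router tier: fan spans out by trace ID so each trace lands on one sampler.
exporters:
  loadbalancing:
    routing_key: traceID
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      dns:
        hostname: otel-sampler-headless  # hypothetical headless Service name
---
# Sampler tier: tail-based sampling, then export onward (e.g. to Tempo).
processors:
  tail_sampling:
    decision_wait: 10s            # buffer spans before making a decision
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]   # always keep traces containing errors
      - name: sample-the-rest
        type: probabilistic
        probabilistic:
          sampling_percentage: 10 # illustrative rate for everything else
```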
Proposal
Add Kubernetes templates for the OTel collector infrastructure:
Test Plan
Release Plan