Describe the bug
The Fluentd metric `fluentd_router_records_total` is missing the `worker_id` label on the `aggregated_metrics` endpoint, which leads to problems when it is scraped by Prometheus:

`{"time":"2025-09-24T09:53:43.708254288Z","level":"WARN","source":"scrape.go:1884","msg":"Error on ingesting samples with different value but same timestamp","component":"scrape manager","scrape_pool":"serviceMonitor/logging/logging-fluentd-metrics/0","target":{},"num_dropped":243}`
To Reproduce
I deployed the latest logging-operator chart version and enabled Fluentd metrics with a ServiceMonitor:
```yaml
fluentd:
  metrics:
    prometheusRules: false
    serviceMonitor: true
```
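For context, these chart values should translate to roughly the following Logging resource (a minimal sketch; the resource name and `controlNamespace` are assumptions):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: logging              # assumed name
spec:
  controlNamespace: logging  # assumed namespace, matching the scrape_pool in the log above
  fluentd:
    metrics:
      prometheusRules: false
      serviceMonitor: true
```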
Expected behavior
I would expect the `fluentd_router_records_total` metric to have a `worker_id` label like all the other metrics do. Currently it is exposed without one:

`fluentd_router_records_total{flow="@d15dc1e90fc1d60534f40b874312ada1",id="flow:foo-bar:logs"} 1.0`
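For illustration, with the label present each worker would expose its own series, something like (values and worker IDs made up):

```
fluentd_router_records_total{flow="@d15dc1e90fc1d60534f40b874312ada1",id="flow:foo-bar:logs",worker_id="0"} 1.0
fluentd_router_records_total{flow="@d15dc1e90fc1d60534f40b874312ada1",id="flow:foo-bar:logs",worker_id="1"} 3.0
```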
Your Environment
- Fluentd version: 1.17.1
- Logging Operator (Fluentd image): ghcr.io/kube-logging/logging-operator/fluentd:5.2.0-full
- Logging Operator chart version: 5.2.0
- Package version: using container image
- Operating system: Linux
- Kernel version: 6.8.0-60-generic
- Kubernetes version: v1.32.8+rke2r1
- Deployment method: BanzaiCloud Logging Operator Helm chart
- Configuration: Generated by Logging Operator (custom `Flow` and `Output` CRDs)

Your Configuration
-

Your Error Log
`{"time":"2025-09-24T09:53:43.708254288Z","level":"WARN","source":"scrape.go:1884","msg":"Error` on ingesting samples with different value but same timestamp","component":"scrape manager","scrape_pool":"serviceMonitor/logging/logging-fluentd-metrics/0","target":{},"num_dropped":243}`
`fluentd_router_records_total{flow="@d15dc1e90fc1d60534f40b874312ada1",id="flow:foo-bar:logs"} 1.0`

Additional context
No response