Describe the bug
We collect pod logs with fluentd in a Kubernetes environment and use <buffer> sections to improve the stability of log collection.
Watching fluentd's memory on our Grafana dashboard, we see usage grow continuously the longer fluentd runs.
To resolve this, I upgraded fluentd to 1.18, but the issue persisted.
I ran several tests, and the only configuration that did not leak memory was the one without a buffer section.
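For context, here is a minimal sketch of how such a buffer section is wired into an output block; the tag pattern, output plugin, host, and buffer path are placeholders, not our real configuration (the actual buffer parameters are listed under Your Configuration below):

# Minimal wiring sketch; all values here are placeholders
<match kubernetes.**>
  @type forward
  <server>
    # placeholder destination
    host log-aggregator.example
    port 24224
  </server>
  <buffer>
    @type file
    # placeholder buffer path
    path /var/log/fluentd-buffers/kubernetes.buffer
    flush_interval 1
  </buffer>
</match>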
To Reproduce
Suspecting that buffer chunk files were being opened but not closed properly, I modified some of the file-handling code in buffer.rb, but that did not resolve the leak.
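For reference, the pattern I tried to enforce looks roughly like the sketch below. The method names are hypothetical illustrations, not the actual fluentd internals:

# Hypothetical illustration of the attempted fix, not actual buffer.rb code.
# Goal: guarantee the chunk file handle is released even if reading raises.
def read_chunk(path)
  # Block form of File.open closes the handle automatically when the block exits.
  File.open(path, 'rb') { |io| io.read }
end

# Equivalent explicit form using ensure:
def read_chunk_explicit(path)
  io = File.open(path, 'rb')
  begin
    io.read
  ensure
    io.close # always release the file descriptor
  end
end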
Expected behavior
Memory usage should stay stable over time instead of growing continuously; the memory leak should be resolved.
Your Environment
- Fluentd version: fluentd-kubernetes-daemonset:v1.14.6

Your Configuration
<buffer>
@type file
path <path>
flush_at_shutdown true
flush_mode interval
flush_thread_count 16
flush_interval 1
chunk_limit_size 10m
total_limit_size 65g
queued_chunks_limit_size 4096
</buffer>

Your Error Log
not shown

Additional context
Fluentd is deployed as a DaemonSet on the Kubernetes nodes, and the volume of logs collected per day exceeds 50 GB.
I also have an environment that collects less than 50 GB per day, and the behavior was the same there.