the elastic output plugin sigsegvs trying to call strftime #10339

Open · tkennedy1-godaddy opened this issue May 15, 2025 · 1 comment · May be fixed by #10353 or #10356

@tkennedy1-godaddy
Contributor

Bug Report

Describe the bug
Given a set of logs with varying formatting (these are the output of Docker's container runtime, but the problem was originally observed when using AWS Firelens to forward container logs to fluent-bit running as a sidecar task), the Elasticsearch output plugin will SIGSEGV when trying to call strftime:

17:40:32 [2025/05/15 21:40:32] [engine] caught signal (SIGSEGV)
17:40:32 #0  0xffffa73013db      in  ???() at ???:0
17:40:32 #1  0xaaaabf4f2fcf      in  elasticsearch_format() at plugins/out_es/es.c:362
17:40:32 #2  0xaaaabf4f3703      in  cb_es_flush() at plugins/out_es/es.c:828
17:40:32 #3  0xaaaabf9b97d7      in  co_switch() at lib/monkey/deps/flb_libco/aarch64.c:133
17:40:32 #4  0xffffffffffffffff  in  ???() at ???:0

To Reproduce

  • Example log messages:
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":""}
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":"> [email protected] start:instrumented"}
{"container_name":"template-builder","source":"stdout","log":"> node --import ./instrumentation.mjs server.js","container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709"}
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":""}
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":"Instrumentation started for template-builder-svc version template-builder-svc"}
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":"Loading config for environment dev..."}
{"container_id":"d51a6872fa3d41a9bf3ac7af0f2756b9-0507563709","container_name":"template-builder","source":"stdout","log":"{\\"@timestamp\\":\\"2025-03-26T21:17:11.234Z\\",\\"ecs.version\\":\\"8.10.0\\",\\"log.level\\":\\"info\\",\\"message\\":\\"Logging loaded. LOGGING.SIMPLE: false, LOGGING.LEVEL: info\\",\\"service\\":{\\"_deployed_at\\":\\"2025-03-04T21:06:39Z\\",\\"_deployed_branch\\":\\"main\\",\\"_sha\\":\\"a97697d7\\",\\"environment\\":\\"dev\\",\\"name\\":\\"template-builder-svc\\",\\"version\\":\\"1.1.0\\"}}"}
  • Steps to reproduce the problem:

Use fluent-cat to send these messages to a fluent-bit process with the following config:

```yaml
service:
  flush: 1
  daemon: off
  log_level: ${FLUENT_BIT_LOG_LEVEL}

includes:
  - /opt/fluent-bit/conf/*yaml

parsers:
  - name: json
    format: json
    decode_field: json log
    time_key: "@timestamp"

pipeline:
  inputs:
    - name: forward
      unix_path: /var/run/fluent.sock
      tag: app_aws_firelens

      processors:
        logs:
          - name: parser
            parser: json
            key_name: log

  filters:
    - name: nest
      match: app_aws_firelens
      operation: lift
      nested_under: log
    # if an app emits a plain-text log, the lift operation will not work, and the log message will remain plaintext.
    # if we lift and log still exists, we rename it so that log remains an object.
    - name: modify
      match: app_aws_firelens
      condition: A_key_matches log
      hard_rename:
        - log log.message
    - name: modify
      match: app_aws_firelens
      condition: A_key_matches fields.timestamp
      hard_copy:
        - fields.timestamp "@timestamp"
    - name: modify
      match: app_aws_firelens
      condition: A_key_matches fields.message
      hard_copy:
        - fields.message "message"

  outputs:
    - name: es
      match: app_aws_firelens
      host: ${INGESTION_HOST}
      port: ${INGESTION_PORT}
      index: ${APP_LOGS_INDEX}
      pipeline: ${DEFAULT_PIPELINE}
      http_user: ${INGESTION_USER}
      http_passwd: ${INGESTION_USER_PASSWORD}
      suppress_type_name: true
      trace_error: true
      tls: true
```

Expected behavior
fluent-bit should continue operating correctly and perhaps log an error, rather than crashing.

Your Environment
  • Version used: 4.0.1
  • Configuration: see details above
  • Environment name and version: Docker, AWS Fargate/ECS, and "native" macOS 14.7.5
  • Server type and version: macOS, ECS/Fargate
  • Operating System and version:
  • Filters and plugins: Elasticsearch, forward, nest, modify

Additional context
This seems to happen only when we have mixed message types AND are outputting to Elasticsearch (which makes sense, since the error is in that output plugin). Testing the same pipeline locally with the `stdout` output plugin works fine.

@tkennedy1-godaddy
Contributor Author

This happens when the index passed into the configuration is empty; in this case, APP_LOGS_INDEX is set to "".

This should probably be guarded against. I'll open a PR to propose a solution.
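
For the sake of discussion, here is a minimal standalone sketch of the kind of guard I mean. It is not the actual plugins/out_es/es.c code, and build_index_name() and its messages are hypothetical; the idea is simply to reject a NULL or empty index pattern and log an error before it ever reaches the strftime()-based index expansion, so the process keeps running instead of segfaulting:

```c
/*
 * Minimal standalone sketch, NOT the actual plugins/out_es/es.c code.
 * It only illustrates the kind of guard being proposed: refuse a NULL or
 * empty index pattern instead of passing it on to strftime()-based index
 * expansion. build_index_name() and its messages are hypothetical.
 */
#include <stdio.h>
#include <time.h>

/* Expand an index pattern for the given time. Returns 0 on success,
 * -1 if the configured index is unusable or expansion fails. */
static int build_index_name(const char *index_pattern,
                            char *out, size_t out_size, time_t now)
{
    struct tm *tm_info;

    if (index_pattern == NULL || index_pattern[0] == '\0') {
        /* e.g. APP_LOGS_INDEX="" -> report the problem instead of crashing */
        fprintf(stderr, "[out_es] configured index is empty, record dropped\n");
        return -1;
    }

    tm_info = localtime(&now);
    if (tm_info == NULL) {
        return -1;
    }

    /* strftime() returns 0 if the result does not fit (or expands to
     * nothing); treat that as a failure rather than using garbage. */
    if (strftime(out, out_size, index_pattern, tm_info) == 0) {
        return -1;
    }
    return 0;
}

int main(void)
{
    char index[256];

    /* Simulates APP_LOGS_INDEX="" from this report: guarded, no SIGSEGV. */
    if (build_index_name("", index, sizeof(index), time(NULL)) != 0) {
        fprintf(stderr, "refusing to flush with an empty index\n");
        return 1;
    }
    printf("index: %s\n", index);
    return 0;
}
```

Compiled as-is, the empty-index case prints an error and returns instead of crashing, which is the behavior described under "Expected behavior" above.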
