---
description: Concatenate multiline or stack trace log messages. Available on Fluent Bit >= v1.8.2.
---
# Multiline

The Multiline filter helps concatenate messages that originally belonged to one context but were split across multiple records or log lines. Common examples are stack traces or applications that print logs in multiple lines.

Along with multiline filters, you can enable one of the built-in Fluent Bit parsers with auto detection and multi-format support. Keep the following in mind:

- The usage of this filter depends on a previous configuration of a [multiline parser](../../administration/configuring-fluent-bit/multiline-parsing.md) definition.
- To concatenate messages read from a log file, it's highly recommended to use the multiline support in the [Tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support) itself, because performing concatenation while reading the log file is more performant. Concatenating messages originally split by Docker or CRI container engines is also supported in the Tail plugin.

{% hint style="warning" %}
This filter only performs buffering that persists across different chunks when `Buffer` is enabled. Otherwise, the filter processes one chunk at a time and isn't suitable for most inputs, which might send multiline messages in separate chunks.

When buffering is enabled, the filter doesn't immediately emit messages it receives. It uses the `in_emitter` plugin, similar to the [Rewrite Tag filter](pipeline/filters/rewrite-tag.md), and emits messages once they're fully concatenated, or a timeout is reached.
{% endhint %}
{% hint style="warning" %}
Because concatenated records are re-emitted to the head of the Fluent Bit log pipeline, you can't configure multiple multiline filter definitions that match the same tags. Doing so causes an infinite loop in the Fluent Bit pipeline. To use multiple parsers on the same logs, configure a single filter definition with a comma-separated list of parsers for `multiline.parser`. For more information, see issue [#5235](https://github.com/fluent/fluent-bit/issues/5235).

For the same reason, the multiline filter should be the first filter. Logs are re-emitted by the multiline filter to the head of the pipeline. The filter ignores its own re-emitted records, but other filters won't. If there are filters before the multiline filter, they will be applied twice.
{% endhint %}
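
For example, rather than two filter definitions matching the same tag, a single definition can list several parsers. The following is a minimal sketch: `go` is a built-in multiline parser, while `multiline-regex-test` stands in for a custom parser definition of your own:

```python
[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    multiline.parser      go, multiline-regex-test
```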

## Configuration parameters

The plugin supports the following configuration parameters:

| Property | Description |
| -------- | ----------- |
| `multiline.parser` | Specify one or multiple [multiline parser definitions](../../administration/configuring-fluent-bit/multiline-parsing.md) to apply to the content. You can specify multiple multiline parsers to detect different formats by separating them with a comma. |
| `multiline.key_content` | Key name that holds the content to process. A multiline parser definition can already specify the `key_content` to use, but this option allows overwriting that value for the purpose of the filter. |
| `mode` | Mode can be `parser` for regular expression concatenation, or `partial_message` to concatenate split Docker logs. |
| `buffer` | Enable buffered mode. In buffered mode, the filter can concatenate multiple lines from inputs that ingest records one by one (like Forward), rather than in chunks, re-emitting them into the beginning of the pipeline (with the same tag) using the `in_emitter` instance. With buffer off, this filter won't work with most inputs, except Tail. |
| `flush_ms` | Flush time for pending multiline records. Default: `2000`. |
| `emitter_name` | Name for the emitter input instance which re-emits the completed records at the beginning of the pipeline. |
| `emitter_storage.type` | The storage type for the emitter input instance. This option supports the values `memory` (default) and `filesystem`. |
| `emitter_mem_buf_limit` | Set a limit on the amount of memory the emitter can consume if the outputs provide backpressure. The default for this limit is `10M`. The pipeline will pause once the buffer exceeds the value of this setting. For example, if the value is set to `10M`, the pipeline pauses if the buffer exceeds `10M`. The pipeline remains paused until the output drains the buffer below the `10M` limit. |
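
Putting several of these parameters together, a buffered configuration for records that arrive one at a time over Forward might look like the following sketch (the parser name and emitter settings are illustrative, not required values):

```python
[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    multiline.parser      java
    buffer                on
    flush_ms              2000
    emitter_name          multiline_emitter
    emitter_storage.type  memory
```

With `buffer` enabled, completed records are re-emitted to the head of the pipeline under the same tag once they're fully concatenated or `flush_ms` elapses.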

## Configuration example

The following example aims to parse a log file called `test.log` that contains some full lines, a custom Java stack trace, and a Go stack trace.

The example files can be located [in the Fluent Bit repository](https://github.com/fluent/fluent-bit/tree/master/documentation/examples/multiline/filter_multiline).

Example files content:

{% tabs %}
{% tab title="fluent-bit.conf" %}
This is the primary Fluent Bit configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log`, applying the multiline parsers `multiline-regex-test` and `go`. It then sends the processed records to the standard output.

```python
[SERVICE]
    flush        1
    log_level    info
    ...

[OUTPUT]
    name         stdout
    match        *
```
{% endtab %}
{% tab title="parsers_multiline.conf" %}
This second file defines a multiline parser for the example. A second multiline parser called `go` is used in `fluent-bit.conf`, but that one is a built-in parser.

```python
[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    ...
```
{% endtab %}
{% endtabs %}

Running Fluent Bit with this configuration produces output like the following, where the matching lines have been concatenated:

```text
[1] tail.0: [1626736433.143570538, {"log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
at com.myproject.module.MyProject.badMethod(MyProject.java:22)
...
```

Lines that don't match a pattern aren't considered part of the multiline message, while lines that match the rules are concatenated properly.

## Docker partial message use case

When Fluent Bit is consuming logs from a container runtime, such as Docker, these logs will be split when larger than a certain limit, usually 16KB. If your application emits a 100K log line, it will be split into seven partial messages. If you use the [Fluentd Docker Log Driver](https://docs.docker.com/config/containers/logging/fluentd/) to send the logs to Fluent Bit, they might look like this:
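
As an illustrative sketch only (the field values here are invented, and the exact record shape depends on the log driver version), each split record carries the driver's partial-message metadata alongside the log fragment:

```text
{"source": "stdout", "log": "...first 16KB of the line...", "partial_message": "true", "partial_id": "dc37eb08b4...", "partial_ordinal": "1", "partial_last": "false"}
```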

Fluent Bit can re-combine these logs that were split by the runtime and remove the partial message fields. The following filter example is for this use case.

```python
[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    mode                  partial_message
```

The two options for `mode` are mutually exclusive in the filter. If you set `mode` to `partial_message`, then the `multiline.parser` option isn't allowed.