
Commit 3ef1d7c

Merge pull request #1448 from fluent/add-workers-info
Add workers info
2 parents: af279ac + be9e5c0


41 files changed: +81 −122 lines

pipeline/outputs/azure.md

Lines changed: 1 addition & 1 deletion
@@ -20,6 +20,7 @@ To get more details about how to setup Azure Log Analytics, please refer to the
 | Log_Type_Key | If included, the value for this key will be looked upon in the record and if present, will over-write the `log_type`. If not found then the `log_type` value will be used. | |
 | Time\_Key | Optional parameter to specify the key name where the timestamp will be stored. | @timestamp |
 | Time\_Generated | If enabled, the HTTP request header 'time-generated-field' will be included so Azure can override the timestamp with the key specified by 'time_key' option. | off |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ## Getting Started

@@ -61,4 +62,3 @@ Another example using the `Log_Type_Key` with [record-accessor](https://docs.flu
 Customer_ID abc
 Shared_Key def
 ```
-

pipeline/outputs/azure_blob.md

Lines changed: 1 addition & 1 deletion
@@ -31,6 +31,7 @@ We expose different configuration properties. The following table lists all the
 | emulator\_mode | If you want to send data to an Azure emulator service like [Azurite](https://github.com/Azure/Azurite), enable this option so the plugin will format the requests to the expected format. | off |
 | endpoint | If you are using an emulator, this option allows you to specify the absolute HTTP address of such service. e.g: [http://127.0.0.1:10000](http://127.0.0.1:10000). | |
 | tls | Enable or disable TLS encryption. Note that Azure service requires this to be turned on. | off |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ## Getting Started

@@ -128,4 +129,3 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
 127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log HTTP/1.1" 201 -
 127.0.0.1 - - [03/Sep/2020:17:40:04 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 201 -
 ```
-

pipeline/outputs/azure_kusto.md

Lines changed: 1 addition & 0 deletions
@@ -63,6 +63,7 @@ By default, Kusto will insert incoming ingestions into a table by inferring the
 | tag_key | The key name of tag. If `include_tag_key` is false, This property is ignored. | `tag` |
 | include_time_key | If enabled, a timestamp is appended to output. The key name is used `time_key` property. | `On` |
 | time_key | The key name of time. If `include_time_key` is false, This property is ignored. | `timestamp` |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ### Configuration File

pipeline/outputs/azure_logs_ingestion.md

Lines changed: 2 additions & 1 deletion
@@ -37,6 +37,7 @@ To get more details about how to setup these components, please refer to the fol
 | time\_key | _Optional_ - Specify the key name where the timestamp will be stored. | `@timestamp` |
 | time\_generated | _Optional_ - If enabled, will generate a timestamp and append it to JSON. The key name is set by the 'time_key' parameter. | `true` |
 | compress | _Optional_ - Enable HTTP payload gzip compression. | `true` |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ## Getting Started

@@ -58,7 +59,7 @@ Use this configuration to quickly get started:
 Name tail
 Path /path/to/your/sample.log
 Tag sample
-Key RawData
+Key RawData
 # Or use other plugins Plugin
 # [INPUT]
 # Name cpu

pipeline/outputs/bigquery.md

Lines changed: 1 addition & 1 deletion
@@ -59,6 +59,7 @@ You must configure workload identity federation in GCP before using it with Flue
 | pool\_id | GCP workload identity pool where the identity provider was created. Used to construct the full resource name of the identity provider. | |
 | provider\_id | GCP workload identity provider. Used to construct the full resource name of the identity provider. Currently only AWS accounts are supported. | |
 | google\_service\_account | Email address of the Google service account to impersonate. The workload identity provider must have permissions to impersonate this service account, and the service account must have permissions to access Google BigQuery resources (e.g. `write` access to tables) | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 See Google's [official documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll) for further details.

@@ -77,4 +78,3 @@ If you are using a _Google Cloud Credentials File_, the following configuration
 dataset_id my_dataset
 table_id dummy_table
 ```
-

pipeline/outputs/chronicle.md

Lines changed: 1 addition & 0 deletions
@@ -34,6 +34,7 @@ Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentica
 | log\_type | The log type to parse logs as. Google Chronicle supports parsing for [specific log types only](https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers). | |
 | region | The GCP region in which to store security logs. Currently, there are several supported regions: `US`, `EU`, `UK`, `ASIA`. Blank is handled as `US`. | |
 | log\_key | By default, the whole log record will be sent to Google Chronicle. If you specify a key name with this option, then only the value of that key will be sent to Google Chronicle. | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 See Google's [official documentation](https://cloud.google.com/chronicle/docs/reference/ingestion-api) for further details.

pipeline/outputs/cloudwatch.md

Lines changed: 1 addition & 22 deletions
@@ -34,6 +34,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
 | profile | Option to specify an AWS Profile for credentials. Defaults to `default` |
 | auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
 | external\_id | Specify an external ID for the STS API, can be used with the role\_arn parameter if your role requires an external ID. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |

 ## Getting Started

@@ -80,28 +81,6 @@ The following AWS IAM permissions are required to use this plugin:
 }
 ```

-### Worker support
-
-Fluent Bit 1.7 adds a new feature called `workers` which enables outputs to have dedicated threads. This `cloudwatch_logs` plugin has partial support for workers in Fluent Bit 2.1.11 and prior. **2.1.11 and prior, the plugin can support a single worker; enabling multiple workers will lead to errors/indeterminate behavior.**
-Starting from Fluent Bit 2.1.12, the `cloudwatch_logs` plugin added full support for workers, meaning that more than one worker can be configured.
-
-Example:
-
-```
-[OUTPUT]
-    Name cloudwatch_logs
-    Match *
-    region us-east-1
-    log_group_name fluent-bit-cloudwatch
-    log_stream_prefix from-fluent-bit-
-    auto_create_group On
-    workers 1
-```
-
-If you enable workers, you are enabling one or more dedicated threads for your CloudWatch output.
-We recommend starting with 1 worker, evaluating the performance, and then enabling more workers if needed.
-For most users, the plugin can provide sufficient throughput with 0 or 1 workers.
-
 ### Log Stream and Group Name templating using record\_accessor syntax

 Sometimes, you may want the log group or stream name to be based on the contents of the log record itself. This plugin supports templating log group and stream names using Fluent Bit [record\_accessor](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor) syntax.
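
For reference outside the diff: the `workers` property this commit documents is set per output section in classic mode. A minimal sketch built from the values in the `cloudwatch_logs` example removed above (all values illustrative):

```
[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group On
    workers           1
```

Setting `workers` to a nonzero value enables that many dedicated flush threads for the output; the removed prose recommended starting with 1 worker and scaling up only if measurements justify it.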

pipeline/outputs/datadog.md

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ Before you begin, you need a [Datadog account](https://app.datadoghq.com/signup)
 | dd_source | _Recommended_ - A human readable name for the underlying technology of your service (e.g. `postgres` or `nginx`). If unset, Datadog will look for the source in the [`ddsource` attribute](https://docs.datadoghq.com/logs/log_configuration/pipelines/?tab=source#source-attribute). | |
 | dd_tags | _Optional_ - The [tags](https://docs.datadoghq.com/tagging/) you want to assign to your logs in Datadog. If unset, Datadog will look for the tags in the [`ddtags` attribute](https://docs.datadoghq.com/api/latest/logs/#send-logs). | |
 | dd_message_key | By default, the plugin searches for the key 'log' and remap the value to the key 'message'. If the property is set, the plugin will search the property name key. | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ### Configuration File

pipeline/outputs/elasticsearch.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ The **es** output plugin, allows to ingest your records into an [Elasticsearch](
 | Trace\_Error | If elasticsearch return an error, print the elasticsearch API request and response \(for diag only\) | Off |
 | Current\_Time\_Index | Use current time for index generation instead of message record | Off |
 | Suppress\_Type\_Name | When enabled, mapping types is removed and `Type` option is ignored. If using Elasticsearch 8.0.0 or higher - it [no longer supports mapping types](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html), so it shall be set to On. | Off |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |

 > The parameters _index_ and _type_ can be confusing if you are new to Elastic, if you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](elasticsearch.md#faq)

pipeline/outputs/file.md

Lines changed: 1 addition & 2 deletions
@@ -12,7 +12,7 @@ The plugin supports the following configuration parameters:
 | File | Set file name to store the records. If not set, the file name will be the _tag_ associated with the records. |
 | Format | The format of the file content. See also Format section. Default: out\_file. |
 | Mkdir | Recursively create output directory if it does not exist. Permissions set to 0755. |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 1 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |

 ## Format

@@ -111,4 +111,3 @@ In your main configuration file append the following Input & Output sections:
 Match *
 Path output_dir
 ```
-

pipeline/outputs/firehose.md

Lines changed: 1 addition & 1 deletion
@@ -28,6 +28,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
 | auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
 | external\_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. |
 | profile | AWS profile name to use. Defaults to `default`. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |

 ## Getting Started

@@ -132,4 +133,3 @@ aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
 ```

 For more see [the AWS for Fluent Bit github repo](https://github.com/aws/aws-for-fluent-bit#public-images).
-

pipeline/outputs/flowcounter.md

Lines changed: 2 additions & 2 deletions
@@ -9,6 +9,7 @@ The plugin supports the following configuration parameters:
 | Key | Description | Default |
 | :--- | :--- | :--- |
 | Unit | The unit of duration. \(second/minute/hour/day\) | minute |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ## Getting Started

@@ -42,7 +43,7 @@ In your main configuration file append the following Input & Output sections:
 Once Fluent Bit is running, you will see the reports in the output interface similar to this:

 ```bash
-$ fluent-bit -i cpu -o flowcounter
+$ fluent-bit -i cpu -o flowcounter
 Fluent Bit v1.x.x
 * Copyright (C) 2019-2020 The Fluent Bit Authors
 * Copyright (C) 2015-2018 Treasure Data

@@ -52,4 +53,3 @@ Fluent Bit v1.x.x
 [2016/12/23 11:01:20] [ info] [engine] started
 [out_flowcounter] cpu.0:[1482458540, {"counts":60, "bytes":7560, "counts/minute":1, "bytes/minute":126 }]
 ```
-

pipeline/outputs/forward.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ The following parameters are mandatory for either Forward for Secure Forward mod
 | Send_options | Always send options (with "size"=count of messages) | False |
 | Require_ack_response | Send "chunk"-option and wait for "ack" response from server. Enables at-least-once and receiving server can control rate of traffic. (Requires Fluentd v0.14.0+ server) | False |
 | Compress | Set to 'gzip' to enable gzip compression. Incompatible with `Time_as_Integer=True` and tags set dynamically using the [Rewrite Tag](../filters/rewrite-tag.md) filter. Requires Fluentd server v0.14.7 or later. | _none_ |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |

 ## Secure Forward Mode Configuration Parameters

pipeline/outputs/gelf.md

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ According to [GELF Payload Specification](https://go2docs.graylog.org/5-0/gettin
 | Gelf_Level_Key | Key to be used as the log level. Its value must be in [standard syslog levels](https://en.wikipedia.org/wiki/Syslog#Severity_level) (between 0 and 7). (_Optional in GELF_) | level |
 | Packet_Size | If transport protocol is `udp`, you can set the size of packets to be sent. | 1420 |
 | Compress | If transport protocol is `udp`, you can set this if you want your UDP packets to be compressed. | true |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

 ### TLS / SSL

pipeline/outputs/http.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ The **http** output plugin allows to flush your records into a HTTP endpoint. Fo
 | gelf\_level\_key | Specify the key to use for the `level` in _gelf_ format | |
 | body\_key | Specify the key to use as the body of the request (must prefix with "$"). The key must contain either a binary or raw string, and the content type can be specified using headers\_key (which must be passed whenever body\_key is present). When this option is present, each msgpack record will create a separate request. | |
 | headers\_key | Specify the key to use as the headers of the request (must prefix with "$"). The key must contain a map, which will have the contents merged on the request headers. This can be used for many purposes, such as specifying the content-type of the data contained in body\_key. | |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |

 ### TLS / SSL
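
In the tables shown here, the default worker count differs by plugin: most outputs default to `0`, the AWS outputs (CloudWatch Logs, Firehose) to `1`, and `es`, `forward`, and `http` to `2`. Overriding a default is a one-line change in the output section; a minimal sketch for the `http` output (host and port values are illustrative):

```
[OUTPUT]
    Name    http
    Match   *
    host    127.0.0.1
    port    9880
    workers 4
```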
