diff --git a/pipeline/outputs/s3.md b/pipeline/outputs/s3.md index 38c6643d5..9aca877a4 100644 --- a/pipeline/outputs/s3.md +++ b/pipeline/outputs/s3.md @@ -6,39 +6,24 @@ description: Send logs, data, and metrics to Amazon S3 ![AWS logo](<../../.gitbook/assets/image (9).png>) -The Amazon S3 output plugin lets you ingest records into the -[S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) -cloud object store. - -The plugin can upload data to S3 using the -[multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) -or [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). -Multipart is the default and is recommended. Fluent Bit will stream data in a series -of _parts_. This limits the amount of data buffered on disk at any point in time. -By default, every time 5 MiB of data have been received, a new part will be uploaded. -The plugin can create files up to gigabytes in size from many small chunks or parts -using the multipart API. All aspects of the upload process are configurable. - -The plugin lets you specify a maximum file size, and a timeout for uploads. A -file will be created in S3 when the maximum size or the timeout is reached, whichever -comes first. +The _Amazon S3_ output plugin lets you ingest records into the [S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) cloud object store. + +The plugin can upload data to S3 using the [multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) or [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). Multipart is the default and is recommended. Fluent Bit will stream data in a series of _parts_. This limits the amount of data buffered on disk at any point in time. By default, every time 5 MiB of data has been received, a new part will be uploaded. The plugin can create files up to gigabytes in size from many small chunks or parts using the multipart API. All aspects of the upload process are configurable. + +The plugin lets you specify a maximum file size and a timeout for uploads. A file will be created in S3 when the maximum size or the timeout is reached, whichever comes first. Records are stored in files in S3 as newline delimited JSON. -See [AWS -Credentials](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b0edb2f9acd7cdfdbc3/administration/aws-credentials.md) +See [AWS Credentials](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b0edb2f9acd7cdfdbc3/administration/aws-credentials.md) for details about fetching AWS credentials. {% hint style="warning" %} -The [Prometheus success/retry/error metrics values](administration/monitoring.md) -output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has -its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers apologize -for this feature gap; you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141). +The [Prometheus success/retry/error metrics values](administration/monitoring.md) output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers acknowledge this feature gap, and you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141). 
{% endhint %} -## Configuration Parameters +## Configuration parameters | Key | Description | Default | |--------------------| --------------------------------- | ----------- | @@ -51,7 +36,7 @@ | `upload_timeout` | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to `60m` to upload a new file every hour. | `10m`| | `store_dir` | Directory to locally buffer data before sending. When using multipart uploads, data buffers until reaching the `upload_chunk_size`. S3 stores metadata about in progress multipart uploads in this directory, allowing pending uploads to be completed if Fluent Bit stops and restarts. It stores the current `$INDEX` value if enabled in the S3 key format so the `$INDEX` keeps incrementing from its previous value after Fluent Bit restarts. | `/tmp/fluent-bit/s3` | | `store_dir_limit_size` | Size limit for disk usage in S3. Limit the S3 buffers in the `store_dir` to limit disk usage. Use `store_dir_limit_size` instead of `storage.total_limit_size`, which can be used for other plugins. | `0` (unlimited) | -| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, a syntax for selecting parts of the Fluent log tag using a syntax inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in theS3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#allowing-a-file-extension-in-the-s3-key-format-with-usduuid). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` | +| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag, inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#allowing-a-file-extension-in-the-amazon-s3-key-format-with-usduuid). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` | | `s3_key_format_tag_delimiters` | A series of characters used to split the tag into parts for use with the `s3_key_format` option. | `.` | | `static_file_path` | Disables behavior where a UUID string appends to the end of the S3 key name when `$UUID` isn't provided in `s3_key_format`. 
`$UUID`, time formatters, `$TAG`, and other dynamic key formatters all work as expected while this feature is set to true. | `false` | | `use_put_object` | Use the S3 `PutObject` API instead of the multipart upload API. When enabled, the key extension is only available when `$UUID` is specified in `s3_key_format`. If `$UUID` isn't included, a random string appends format string and the key extension can't be customized. | `false` | @@ -73,9 +58,7 @@ for this feature gap; you can [track issue progress on GitHub](https://github.co ## TLS / SSL -To skip TLS verification, set `tls.verify` as `false`. For more details about the -properties available and general configuration, refer to -[TLS/SSL](../../administration/transport-security.md). +To skip TLS verification, set `tls.verify` as `false`. For more details about the properties available and general configuration, refer to [TLS/SSL](../../administration/transport-security.md). ## Permissions @@ -96,66 +79,37 @@ The plugin requires the following AWS IAM permissions: ## Differences between S3 and other Fluent Bit outputs -The S3 output plugin is used to upload large files to an Amazon S3 bucket, while -most other outputs which send many requests to upload data in batches of a few -megabytes or less. - -When Fluent Bit receives logs, it stores them in chunks, either in memory or the -filesystem depending on your settings. Chunks are usually around 2 MB in size. -Fluent Bit sends chunks, in order, to each output that matches their tag. Most outputs -then send the chunk immediately to their destination. A chunk is sent to the output's -`flush` callback function, which must return one of `FLB_OK`, `FLB_RETRY`, or -`FLB_ERROR`. Fluent Bit keeps count of the return values from each output's -`flush` callback function. These counters are the data source for Fluent Bit error, retry, -and success metrics available in Prometheus format through its monitoring interface. - -The S3 output plugin conforms to the Fluent Bit output plugin specification. -Since S3's use case is to upload large files (over 2 MB), its behavior is different. -S3's `flush` callback function buffers the incoming chunk to the filesystem, and -returns an `FLB_OK`. This means Prometheus metrics available from the Fluent -Bit HTTP server are meaningless for S3. In addition, the `storage.total_limit_size` -parameter isn't meaningful for S3 since it has its own buffering system in the -`store_dir`. Instead, use `store_dir_limit_size`. S3 requires a writeable filesystem. -Running Fluent Bit on a read-only filesystem won't work with the S3 output. - -S3 uploads primarily initiate using the S3 -[`timer`](https://docs.aws.amazon.com/iotevents/latest/apireference/API_iotevents-data_Timer.html) -callback function, which runs separately from its `flush`. - -S3 has its own buffering system and its own callback to upload data, so the normal -sequential data ordering of chunks provided by the Fluent Bit engine can be -compromised. S3 has the `presevere_data_ordering` option which ensures data is -uploaded in the original order it was collected by Fluent Bit. - -### Summary: Uniqueness in S3 Plugin - -- The HTTP Monitoring interface output metrics aren't meaningful for S3. AWS - understands that this is non-ideal. See the - [open issue and design](https://github.com/fluent/fluent-bit/issues/6141) - to allow S3 to manage its own output metrics. 
+The S3 output plugin is used to upload large files to an Amazon S3 bucket, while most other outputs send many requests to upload data in batches of a few megabytes or less. + +When Fluent Bit receives logs, it stores them in chunks, either in memory or the filesystem depending on your settings. Chunks are usually around 2 MB in size. Fluent Bit sends chunks, in order, to each output that matches their tag. Most outputs then send the chunk immediately to their destination. A chunk is sent to the output's `flush` callback function, which must return one of `FLB_OK`, `FLB_RETRY`, or `FLB_ERROR`. Fluent Bit keeps count of the return values from each output's `flush` callback function. These counters are the data source for Fluent Bit error, retry, and success metrics available in Prometheus format through its monitoring interface. + +The S3 output plugin conforms to the Fluent Bit output plugin specification. Because S3's use case is to upload large files (over 2 MB), its behavior is different. S3's `flush` callback function buffers the incoming chunk to the filesystem, and returns `FLB_OK`. This means Prometheus metrics available from the Fluent Bit HTTP server are meaningless for S3. In addition, the `storage.total_limit_size` parameter isn't meaningful for S3 since it has its own buffering system in the `store_dir`. Instead, use `store_dir_limit_size`. S3 requires a writeable filesystem. Running Fluent Bit on a read-only filesystem won't work with the S3 output. + +S3 uploads are primarily initiated by the S3 [`timer`](https://docs.aws.amazon.com/iotevents/latest/apireference/API_iotevents-data_Timer.html) callback function, which runs separately from its `flush`. + +S3 has its own buffering system and its own callback to upload data, so the normal sequential data ordering of chunks provided by the Fluent Bit engine can be compromised. S3 has the `preserve_data_ordering` option, which ensures data is uploaded in the original order it was collected by Fluent Bit. + +### Summary: uniqueness in the Amazon S3 plugin + +- The HTTP Monitoring interface output metrics aren't meaningful for S3. AWS understands that this is non-ideal. See the [open issue and design](https://github.com/fluent/fluent-bit/issues/6141) to allow S3 to manage its own output metrics. - You must use `store_dir_limit_size` to limit the space on disk used by S3 buffer files. -- The original ordering of data inputted to Fluent Bit might not be preserved unless you enable -`preserve_data_ordering On`. +- The original ordering of data input to Fluent Bit might not be preserved unless you enable `preserve_data_ordering On`. -## S3 Key Format and Tag Delimiters +## S3 key format and tag delimiters -In Fluent Bit, all logs have an associated tag. The `s3_key_format` option lets you -inject the tag into the S3 key using the following syntax: +In Fluent Bit, all logs have an associated tag. The `s3_key_format` option lets you inject the tag into the S3 key using the following syntax: - `$TAG`: The full tag. -- `$TAG[n]`: The nth part of the tag (index starting at zero). This syntax is copied - from the rewrite tag filter. By default, tag parts are separated with - dots, but you can change this with `s3_key_format_tag_delimiters`. +- `$TAG[n]`: The nth part of the tag (index starting at zero). This syntax is copied from the rewrite tag filter. By default, tag parts are separated with dots, but you can change this with `s3_key_format_tag_delimiters`, as shown in the sketch and example that follow. 
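+The plugin performs this substitution internally. As a rough illustration only, and not the plugin's actual implementation, the tag splitting behind `$TAG[n]` behaves like the following Python sketch. The tag and delimiter values here are hypothetical and mirror the example that follows:
+
+```python
+import re
+
+def split_tag(tag, delimiters="."):
+    # Split a tag into its $TAG[n] parts using a set of single-character
+    # delimiters, mirroring the s3_key_format_tag_delimiters option.
+    return re.split("[" + re.escape(delimiters) + "]", tag)
+
+parts = split_tag("my_app_name-logs.prod", delimiters=".-")
+print(parts)               # ['my_app_name', 'logs', 'prod']
+print(parts[2], parts[0])  # prod my_app_name
+```
+
+Here `parts[2]` and `parts[0]` are the values that a key format such as `/$TAG[2]/$TAG[0]/...` selects, as the next example shows.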
-In the following example, assume the date is `January 1st, 2020 00:00:00` and the tag -associated with the logs in question is `my_app_name-logs.prod`. +In the following example, assume the date is `January 1st, 2020 00:00:00` and the tag associated with the logs in question is `my_app_name-logs.prod`: {% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -191,34 +145,19 @@ With the delimiters as `.` and `-`, the tag splits into parts as follows: The key in S3 will be `/prod/my_app_name/2020/01/01/00/00/00/bgdHN1NM.gz`. -### Allowing a file extension in the S3 Key Format with $UUID +### Allowing a file extension in the Amazon S3 key format with `$UUID` -The Fluent Bit S3 output was designed to ensure that previous uploads will never be -overwritten by a subsequent upload. The `s3_key_format` supports time formatters, -`$UUID`, and `$INDEX`. `$INDEX` is special because it's saved in the `store_dir`. If -you restart Fluent Bit with the same disk, it can continue incrementing the -index from its last value in the previous run. +The Fluent Bit S3 output was designed to ensure that previous uploads will never be overwritten by a subsequent upload. The `s3_key_format` supports time formatters, `$UUID`, and `$INDEX`. `$INDEX` is special because it's saved in the `store_dir`. If you restart Fluent Bit with the same disk, it can continue incrementing the index from its last value in the previous run. -For files uploaded with the `PutObject` API, the S3 output requires that a unique -random string be present in the S3 key. Many of the use cases for -`PutObject` uploads involve a short time period between uploads, so a timestamp -in the S3 key might not be unique enough between uploads. For example, if you only -specify minute granularity timestamps in the S3 key, with a small upload size, it's -possible to have two uploads that have timestamps set in the same minute. This -requirement can be disabled with `static_file_path On`. +For files uploaded with the `PutObject` API, the S3 output requires that a unique random string be present in the S3 key. Many of the use cases for `PutObject` uploads involve a short time period between uploads, so a timestamp in the S3 key might not be unique enough between uploads. For example, if you only specify minute granularity timestamps in the S3 key, with a small upload size, it's possible to have two uploads that have timestamps set in the same minute. This requirement can be disabled with `static_file_path On`. The `PutObject` API is used in these cases: - When you explicitly set `use_put_object On`. -- On startup when the S3 output finds old buffer files in the `store_dir` from - a previous run and attempts to send all of them at once. -- On shutdown. To prevent data loss the S3 output attempts to send all currently - buffered data at once. +- On startup when the S3 output finds old buffer files in the `store_dir` from a previous run and attempts to send all of them at once. +- On shutdown. To prevent data loss the S3 output attempts to send all currently buffered data at once. -You should always specify `$UUID` somewhere in your S3 key format. Otherwise, if the -`PutObject` API is used, S3 appends a random eight-character UUID to the end of your -S3 key. This means that a file extension set at the end of an S3 key will have the -random UUID appended to it. Disabled this with `static_file_path On`. +You should always specify `$UUID` somewhere in your S3 key format. 
Otherwise, if the `PutObject` API is used, S3 appends a random eight-character UUID to the end of your S3 key. This means that a file extension set at the end of an S3 key will have the random UUID appended to it. Disable this with `static_file_path On`. This example attempts to set a `.gz` extension without specifying `$UUID`: @@ -227,7 +166,7 @@ This example attempts to set a `.gz` extension without specifying `$UUID`: ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -257,26 +196,24 @@ pipeline: {% endtab %} {% endtabs %} -In the case where pending data is uploaded on shutdown, if the tag was `app`, the S3 -key in the S3 bucket might be: +In the case where pending data is uploaded on shutdown, if the tag was `app`, the S3 key in the S3 bucket might be: ```text /app/2022/12/25/00_00_00.gz-apwgylqg ``` -The S3 output appended a random string to the file extension, since this upload -on shutdown used the `PutObject` API. +The S3 output appended a random string to the file extension, since this upload on shutdown used the `PutObject` API. -There are two ways of disabling this behavior: +To disable this behavior, use one of the following methods: - Use `static_file_path`: - + {% tabs %} {% tab title="fluent-bit.yaml" %} - + ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -288,10 +225,10 @@ There are two ways of disabling this behavior: s3_key_format: '/$TAG/%Y/%m/%d/%H_%M_%S.gz' static_file_path: on ``` - + {% endtab %} {% tab title="fluent-bit.conf" %} - + ```text [OUTPUT] Name s3 @@ -304,7 +241,7 @@ There are two ways of disabling this behavior: s3_key_format /$TAG/%Y/%m/%d/%H_%M_%S.gz static_file_path On ``` - + {% endtab %} {% endtabs %} @@ -315,7 +252,7 @@ There are two ways of disabling this behavior: ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -347,44 +284,24 @@ There are two ways of disabling this behavior: ## Reliability -The `store_dir` is used to temporarily store data before upload. If Fluent Bit -stops suddenly, it will try to send all data and complete all uploads before it -shuts down. If it can not send some data, on restart it will look in the `store_dir` -for existing data and try to send it. - -Multipart uploads are ideal for most use cases because they allow the plugin to -upload data in small chunks over time. For example, 1 GB file can be created -from 200 5 MB chunks. While the file size in S3 will be 1 GB, only -5 MB will be buffered on disk at any one point in time. - -One drawback to multipart uploads is that the file and data aren't visible in S3 -until the upload is completed with a -[CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) -call. The plugin attempts to make this call whenever Fluent Bit is shut down to -ensure your data is available in S3. It also stores metadata about each upload in -the `store_dir`, ensuring that uploads can be completed when Fluent Bit restarts -(assuming it has access to persistent disk and the `store_dir` files will still be -present on restart). +The `store_dir` is used to temporarily store data before upload. If Fluent Bit stops suddenly, it will try to send all data and complete all uploads before it shuts down. If it can not send some data, on restart it will look in the `store_dir` for existing data and try to send it. -### Using S3 without persisted disk +Multipart uploads are ideal for most use cases because they allow the plugin to upload data in small chunks over time. For example, 1 GB file can be created from 200 5 MB chunks. 
Although the file size in S3 will be 1 GB, only 5 MB will be buffered on disk at any one point in time. + +One drawback to multipart uploads is that the file and data aren't visible in S3 until the upload is completed with a [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) call. The plugin attempts to make this call whenever Fluent Bit is shut down to ensure your data is available in S3. It also stores metadata about each upload in the `store_dir`, ensuring that uploads can be completed when Fluent Bit restarts (assuming it has access to persistent disk and the `store_dir` files will still be present on restart). -If you run Fluent Bit in an environment without persistent disk, or without the -ability to restart Fluent Bit and give it access to the data stored in the -`store_dir` from previous executions, some considerations apply. This might occur if -you run Fluent Bit on [AWS Fargate](https://aws.amazon.com/fargate/). +### Using S3 without persisted disk -In these situations, Fluent Bits recommend using the `PutObject` API and sending data -frequently, to avoid local buffering as much as possible. This will limit data loss -in the event Fluent Bit is killed unexpectedly. +If you run Fluent Bit in an environment without persistent disk, or without the ability to restart Fluent Bit and give it access to the data stored in the `store_dir` from previous executions, some considerations apply. This might occur if you run Fluent Bit on [AWS Fargate](https://aws.amazon.com/fargate/). -The following settings are recommended for this use case: +In these situations, use the `PutObject` API and send data frequently to avoid local buffering as much as possible. This will limit data loss in the event Fluent Bit is killed unexpectedly. The following settings are recommended for this use case: {% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -412,97 +329,46 @@ {% endtab %} {% endtabs %} -## S3 Multipart Uploads +## S3 multipart uploads -With `use_put_object Off` (default), S3 will attempt to send files using multipart -uploads. For each file, S3 first calls -[CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html), -then a series of calls to -[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) for -each fragment (targeted to be `upload_chunk_size` bytes), and finally -[CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) -to create the final file in S3. +With `use_put_object Off` (default), S3 will attempt to send files using multipart uploads. For each file, S3 first calls [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html), then a series of calls to [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) for each fragment (targeted to be `upload_chunk_size` bytes), and finally [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) to create the final file in S3. ### Fallback to `PutObject` -S3 [requires](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html) each -[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) -fragment to be at least 5,242,880 bytes, otherwise the upload is rejected. 
+S3 [requires](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html) each [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) fragment to be at least 5,242,880 bytes; otherwise, the upload is rejected. -The S3 output must sometimes fallback to the [`PutObject` -API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_`PutObject`.html). +The S3 output must sometimes fall back to the [`PutObject` API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). Uploads are triggered by these settings: -- `total_file_size` and `upload_chunk_size`: When S3 has buffered data in the - `store_dir` that meets the desired `total_file_size` (for `use_put_object On`) or - the `upload_chunk_size` (for Multipart), it will trigger an upload operation. -- `upload_timeout`: Whenever locally buffered data has been present on the filesystem - in the `store_dir` longer than the configured `upload_timeout`, it will be sent - even when the desired byte size hasn't been reached. - If you configure a small `upload_timeout`, your files can be smaller - than the `total_file_size`. The timeout is evaluated against the time at which S3 - started buffering data for each unique tag (that is, the time when new data was - buffered for the unique tag after the last upload). The timeout is also evaluated - against the - [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) - time, so a multipart upload will be completed after `upload_timeout` has elapsed, - even if the desired size hasn't yet been reached. - -If your `upload_timeout` triggers an upload before the pending buffered data reaches -the `upload_chunk_size`, it might be too small for a multipart upload. S3 will -fallback to use the [`PutObject` API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). - -When you enable compression, S3 applies the compression algorithm at send time. The -size settings trigger uploads based on the size of buffered data, not the -final compressed size. It's possible that after compression, buffered data no longer -meets the required minimum S3 -[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) -size. If this occurs, you will see a log message like: +- `total_file_size` and `upload_chunk_size`: When S3 has buffered data in the `store_dir` that meets the desired `total_file_size` (for `use_put_object On`) or the `upload_chunk_size` (for multipart uploads), it will trigger an upload operation. +- `upload_timeout`: Whenever locally buffered data has been present on the filesystem in the `store_dir` longer than the configured `upload_timeout`, it will be sent even when the desired byte size hasn't been reached. If you configure a small `upload_timeout`, your files can be smaller than the `total_file_size`. The timeout is evaluated against the time at which S3 started buffering data for each unique tag (the time when new data was buffered for the unique tag after the last upload). The timeout is also evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time, so a multipart upload will be completed after `upload_timeout` has elapsed, even if the desired size hasn't yet been reached. + +If your `upload_timeout` triggers an upload before the pending buffered data reaches the `upload_chunk_size`, it might be too small for a multipart upload. 
S3 will fall back to the [`PutObject` API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). +When you enable compression, S3 applies the compression algorithm at send time. The size settings trigger uploads based on the size of buffered data, not the final compressed size. It's possible that after compression, buffered data no longer meets the required minimum S3 [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) size. If this occurs, you will see a log message like: ```text -[ info] [output:s3:s3.0] Pre-compression upload_chunk_size= 5630650, After -compression, chunk is only 1063320 bytes, the chunk was too small, using PutObject to upload +[ info] [output:s3:s3.0] Pre-compression upload_chunk_size= 5630650, After compression, chunk is only 1063320 bytes, the chunk was too small, using PutObject to upload ``` -If you encounter this frequently, use the numbers in the messages to guess your -compression factor. In this example, the buffered data was reduced from -5,630,650 bytes to 1,063,320 bytes. The compressed size is one-fifth the actual data size. -Configuring `upload_chunk_size 30M` should ensure each part is large enough after -compression to be over the minimum required part size of 5,242,880 bytes. +If you encounter this frequently, use the numbers in the messages to guess your compression factor. In this example, the buffered data was reduced from 5,630,650 bytes to 1,063,320 bytes. The compressed size is one-fifth the actual data size. Configuring `upload_chunk_size 30M` should ensure each part is large enough after compression to be over the minimum required part size of 5,242,880 bytes. -The S3 API allows the last part in an upload to be less than the 5,242,880 byte -minimum. If a part is too small for an existing upload, the S3 output will -upload that part and then complete the upload. +The S3 API allows the last part in an upload to be less than the 5,242,880 byte minimum. If a part is too small for an existing upload, the S3 output will upload that part and then complete the upload. ### `upload_timeout` constrains total multipart upload time for a single file -The `upload_timeout` evaluated against the -[CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) -time. A multipart upload will be completed after `upload_timeout` elapses, even if -the desired size has not yet been reached. +The `upload_timeout` is evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time. A multipart upload will be completed after `upload_timeout` elapses, even if the desired size hasn't yet been reached. ### Completing uploads -When -[CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) -is called, an `UploadID` is returned. S3 stores these IDs for active uploads in the -`store_dir`. Until -[CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) -is called, the uploaded data isn't visible in S3. - -On shutdown, S3 output attempts to complete all pending uploads. If an upload fails -to complete, the ID remains buffered in the `store_dir` in a directory called -`multipart_upload_metadata`. If you restart the S3 output with the same `store_dir` -it will discover the old UploadIDs and complete the pending uploads. 
The [S3 -documentation](https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/) -has suggestions on discovering and deleting or completing dangling uploads in your -buckets. +When [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) is called, an `UploadID` is returned. S3 stores these IDs for active uploads in the `store_dir`. Until [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) is called, the uploaded data isn't visible in S3. + +On shutdown, S3 output attempts to complete all pending uploads. If an upload fails to complete, the ID remains buffered in the `store_dir` in a directory called `multipart_upload_metadata`. If you restart the S3 output with the same `store_dir` it will discover the old UploadIDs and complete the pending uploads. The [S3 documentation](https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/) has suggestions on discovering and deleting or completing dangling uploads in your buckets. ## Usage with MinIO -[MinIO](https://min.io/) is a high-performance, S3 compatible object storage and you -can build your app with S3 functionality without S3. +[MinIO](https://min.io/) is a high-performance, S3 compatible object storage and you can build your app with S3 capability without S3. The following example runs [a MinIO server](https://docs.min.io/docs/minio-quickstart-guide.html) at `localhost:9000`, and create a bucket of `your-bucket`. @@ -514,7 +380,7 @@ Example: ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -540,8 +406,7 @@ The records store in the MinIO server. ## Usage with Google Cloud -You can send your S3 output to Google. You must generate HMAC keys on GCS and use -those keys for `access-key` and `access-secret`. +You can send your S3 output to Google. You must generate HMAC keys on GCS and use those keys for `access-key` and `access-secret`. Example: @@ -550,7 +415,7 @@ Example: ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -572,12 +437,11 @@ pipeline: {% endtab %} {% endtabs %} -## Get Started +## Get started -To send records into Amazon S3, you can run the plugin from the command line or -through the configuration file. +To send records into Amazon S3, run the plugin from the command line or through the configuration file. -### Command Line +### Command line The S3 plugin reads parameters from the command line through the `-p` argument: @@ -585,16 +449,16 @@ The S3 plugin reads parameters from the command line through the `-p` argument: fluent-bit -i cpu -o s3 -p bucket=my-bucket -p region=us-west-2 -p -m '*' -f 1 ``` -### Configuration File +### Configuration file -In your main configuration file append the following `Output` section: +In your main configuration file append the following section: {% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -629,7 +493,7 @@ An example using `PutObject` instead of multipart: ```yaml pipeline: - + outputs: - name: s3 match: '*' @@ -715,12 +579,9 @@ For more information, see the ### Use Apache Arrow for in-memory data processing -With Fluent Bit v1.8 or greater, the Amazon S3 plugin includes the support for -[Apache Arrow](https://arrow.apache.org/). Support isn't enabled by -default, and has a dependency on a shared version of `libarrow`. 
+With Fluent Bit v1.8 or later, the Amazon S3 plugin includes the support for [Apache Arrow](https://arrow.apache.org/). Support isn't enabled by default, and has a dependency on a shared version of `libarrow`. -To use this feature, `FLB_ARROW` must be turned on at compile time. Use the following -commands: +To use this feature, `FLB_ARROW` must be turned on at compile time. Use the following commands: ```shell cd build/ @@ -738,8 +599,8 @@ For example: ```yaml pipeline: inputs: - - name: cpu - + - name: cpu + outputs: - name: s3 bucket: your-bucket-name @@ -770,8 +631,7 @@ pipeline: Setting `Compression` to `arrow` makes Fluent Bit convert payload into Apache Arrow format. -Load, analyze, and process stored data using popular data -processing tools such as Python pandas, Apache Spark and Tensorflow. +Load, analyze, and process stored data using popular data processing tools such as Python pandas, Apache Spark and Tensorflow. The following example uses `pyarrow` to analyze the uploaded data: @@ -789,4 +649,4 @@ The following example uses `pyarrow` to analyze the uploaded data: 2 2021-04-27T09:33:55.539305Z 1.0 0.0 1.0 1.0 0.0 1.0 3 2021-04-27T09:33:56.539430Z 0.0 0.0 0.0 0.0 0.0 0.0 4 2021-04-27T09:33:57.539803Z 0.0 0.0 0.0 0.0 0.0 0.0 -``` \ No newline at end of file +``` diff --git a/vale-styles/FluentBit/Acronyms.yml b/vale-styles/FluentBit/Acronyms.yml index d6198e1fc..372413768 100644 --- a/vale-styles/FluentBit/Acronyms.yml +++ b/vale-styles/FluentBit/Acronyms.yml @@ -36,6 +36,7 @@ exceptions: - FIPS - GCC - GCP + - GCS - GDB - GET - GNU @@ -45,6 +46,7 @@ exceptions: - GUI - GZIP - HEC + - HMAC - HPA - HTML - HTTP @@ -62,6 +64,7 @@ exceptions: - LLVM - LTS - LTSV + - MIME - MAC - MQTT - MSK diff --git a/vale-styles/FluentBit/Headings.yml b/vale-styles/FluentBit/Headings.yml index ce56edc0f..ee35400ce 100644 --- a/vale-styles/FluentBit/Headings.yml +++ b/vale-styles/FluentBit/Headings.yml @@ -9,14 +9,18 @@ indicators: exceptions: - Amazon - Amazon CloudWatch + - Amazon ECR - Amazon ECR Public Gallery - Amazon Kinesis Data Firehose - Amazon Kinesis Data Streams - Amazon Kinesis Firehose - Amazon Kinesis Streams - Amazon OpenSearch Service + - Amazon S3 + - Apache Arrow - API - APIs + - AWS - AWS MSK IAM - AWS IAM - Azure diff --git a/vale-styles/FluentBit/Spelling-exceptions.txt b/vale-styles/FluentBit/Spelling-exceptions.txt index 6463cdb2f..ad4f4b1e3 100644 --- a/vale-styles/FluentBit/Spelling-exceptions.txt +++ b/vale-styles/FluentBit/Spelling-exceptions.txt @@ -73,6 +73,8 @@ Fargate Firehose FluentBit Fluentd +formatter +formatters github glibc Golang