 class FirehoseClient extends AbstractApi
 {
     /**
-     * Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
-     * stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+     * Writes a single data record into a Firehose stream. To write multiple data records into a Firehose stream, use
+     * PutRecordBatch. Applications using these operations are referred to as producers.
      *
-     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
+     * By default, each Firehose stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
      * per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
-     * delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
+     * Firehose stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
      *
      * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
      * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
-     * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+     * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
      * file, geographic location data, website clickstream data, and so on.
      *
+     * For multi-record de-aggregation, you cannot put more than 500 records even if the data blob length is less than 1,000
+     * KiB. If you include more than 500 records, the request succeeds, but record de-aggregation does not work as
+     * expected and the transformation Lambda is invoked with the complete base64-encoded data blob instead of de-aggregated
+     * base64-decoded records.
+     *
      * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
      * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -45,13 +50,13 @@ class FirehoseClient extends AbstractApi
      * applications can use this ID for purposes such as auditability and investigation.
      *
      * If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
-     * times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+     * times. If the exception persists, it is possible that the throughput limits have been exceeded for the Firehose
      * stream.
      *
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it tries
      * to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
      * longer available.
      *
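The docblock above walks through the single-record path: name the stream, supply a data blob, and delimit blobs yourself if the consumer needs to split them apart later. A minimal sketch of such a producer follows; the stream name `my-firehose-stream` is hypothetical, and the array input shape assumes async-aws mirrors the underlying AWS `PutRecord` API (`DeliveryStreamName` plus a `Record` carrying `Data`).

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

// Append a newline delimiter so the consumer can split individual
// items back out of the buffered blob at the destination.
$payload = json_encode(['event' => 'page_view', 'path' => '/home']) . "\n";

$result = $firehose->putRecord([
    'DeliveryStreamName' => 'my-firehose-stream', // hypothetical stream name
    'Record' => ['Data' => $payload],
]);

// Each successful call returns a RecordId that is unique within the
// stream; log it if you need an audit trail for this record.
echo $result->getRecordId(), "\n";
```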
@@ -90,23 +95,28 @@ public function putRecord($input): PutRecordOutput
     }
 
     /**
-     * Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
-     * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
+     * Writes multiple data records into a Firehose stream in a single call, which can achieve higher throughput per
+     * producer than when writing single records. To write single data records into a Firehose stream, use PutRecord.
      * Applications using these operations are referred to as producers.
      *
      * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
      * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
      * For information about service quota, see Amazon Firehose Quota [^1].
      *
      * Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
      * (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
      *
-     * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+     * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
      * log file, geographic location data, website clickstream data, and so on.
      *
+     * For multi-record de-aggregation, you cannot put more than 500 records even if the data blob length is less than 1,000
+     * KiB. If you include more than 500 records, the request succeeds, but record de-aggregation does not work as
+     * expected and the transformation Lambda is invoked with the complete base64-encoded data blob instead of de-aggregated
+     * base64-decoded records.
+     *
      * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
      * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -132,12 +142,12 @@ public function putRecord($input): PutRecordOutput
      * charges). We recommend that you handle any duplicates at the destination.
      *
      * If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
-     * exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+     * exception persists, it is possible that the throughput limits have been exceeded for the Firehose stream.
      *
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it
      * attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
      * is no longer available.
      *
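The batch docblock above has two practical consequences for producers: requests are capped at 500 records and 4 MB, and a successful response can still contain per-record failures that only `FailedPutCount` reveals. The sketch below chunks input accordingly and resubmits just the failed entries; the stream name and helper are illustrative, and the output getters (`getFailedPutCount()`, `getRequestResponses()`) assume async-aws exposes the AWS `PutRecordBatch` response fields as same-named accessors.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

/**
 * Illustrative helper: send newline-delimited blobs in batches of at
 * most 500 records, retrying only the entries that Firehose rejected.
 *
 * @param string[] $blobs
 */
function putBatchWithRetry(FirehoseClient $firehose, array $blobs): void
{
    foreach (array_chunk($blobs, 500) as $chunk) { // 500-record cap per request
        $records = array_map(
            static function (string $data): array { return ['Data' => $data]; },
            $chunk
        );

        while ($records !== []) {
            $result = $firehose->putRecordBatch([
                'DeliveryStreamName' => 'my-firehose-stream', // hypothetical
                'Records' => $records,
            ]);

            if ($result->getFailedPutCount() === 0) {
                break; // every record in this chunk was accepted
            }

            // Responses are ordered like the submitted records, so keep
            // only the entries that carry an error code. Blindly retrying
            // already-accepted records would create duplicates, which the
            // docblock says must be handled at the destination.
            $failed = [];
            foreach ($result->getRequestResponses() as $i => $entry) {
                if (null !== $entry->getErrorCode()) {
                    $failed[] = $records[$i];
                }
            }
            $records = $failed;

            usleep(250000); // brief backoff before resubmitting
        }
    }
}
```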