/AWS1/CL_FRH=>PUTRECORD()¶
About PutRecord¶
Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers.
By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits.
Firehose accumulates and publishes a particular metric for a customer account in one-minute intervals. Bursts of incoming bytes or records ingested into a delivery stream may last only a few seconds, so the actual spikes in traffic might not be fully visible in the customer's one-minute CloudWatch metrics.
You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
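The flow above can be sketched in ABAP. This is a minimal, illustrative example; the profile name 'DEMO' and the stream name 'my-delivery-stream' are placeholder assumptions, and production code should add exception handling.

```abap
" Create a session and a Firehose client (profile name is an assumption).
DATA(go_session) = /aws1/cl_rt_session_aws=>create( 'DEMO' ).
DATA(go_frh)     = /aws1/cl_frh_factory=>create( go_session ).

" Append a newline delimiter so the consumer can split records
" at the destination, then convert the string to a binary blob.
DATA(lv_payload) = |\{ "event": "click", "page": "/home" \}\n|.
DATA(lv_xdata)   = cl_abap_codepage=>convert_to( lv_payload ).

" Write a single record to the delivery stream.
DATA(oo_output) = go_frh->putrecord(
  iv_deliverystreamname = 'my-delivery-stream'
  io_record             = NEW /aws1/cl_frhrecord( iv_data = lv_xdata ) ).

" Keep the RecordId for auditability and investigation.
DATA(lv_recordid) = oo_output->get_recordid( ).
```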
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer timeout before retrying Put API operations.
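A hedged sketch of handling a persistent failure after the SDK's automatic retries. The SDK's generic service exception class is used here because the exact exception class for ServiceUnavailableException is an assumption; go_frh and go_record are assumed to be an existing client and record.

```abap
TRY.
    go_frh->putrecord(
      iv_deliverystreamname = 'my-delivery-stream'
      io_record             = go_record ).
  CATCH /aws1/cx_rt_service_generic INTO DATA(lo_ex).
    " The SDK has already retried 3 times; a persistent failure here
    " may mean the stream's throughput limits are exceeded.
    " Back off before re-invoking, and remember that re-invoking
    " can produce duplicate records at the destination.
    WAIT UP TO 5 SECONDS.
ENDTRY.
```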
Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.
Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding.
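In the ABAP SDK this means concatenating the raw payloads before converting them to the binary blob passed in IV_DATA; the SDK then performs the base64 encoding of the request once. A minimal sketch:

```abap
" Wrong: concatenating two base64 strings does not produce valid
" base64 for the combined payload.
" Right: concatenate the raw data first, encode once.
DATA(lv_raw)   = |first line\n| && |second line\n|.
DATA(lv_xdata) = cl_abap_codepage=>convert_to( lv_raw ).
" Pass lv_xdata as IV_DATA; the SDK handles the base64 encoding.
```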
Method Signature¶
IMPORTING¶
Required arguments:¶
IV_DELIVERYSTREAMNAME TYPE /AWS1/FRHDELIVERYSTREAMNAME
The name of the delivery stream.
IO_RECORD TYPE REF TO /AWS1/CL_FRHRECORD
The record.