Amazon Kinesis Data Firehose
Developer Guide


Amazon Kinesis Data Firehose Limits

Amazon Kinesis Data Firehose has the following limits.

  • By default, each account can have up to 50 Kinesis Data Firehose delivery streams per Region. If you exceed this limit, a call to CreateDeliveryStream raises a LimitExceededException. You can request an increase to this limit using the Amazon Kinesis Data Firehose Limits form.

  • When Direct PUT is configured as the data source, each Kinesis Data Firehose delivery stream is subject to the following limits:

    • For US East (N. Virginia), US West (Oregon), and EU (Ireland): 5,000 records/second, 2,000 transactions/second, and 5 MiB/second.

    • For EU (Paris), Asia Pacific (Mumbai), US East (Ohio), EU (Frankfurt), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Seoul), EU (London), Asia Pacific (Tokyo), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), AWS GovCloud (US-West), AWS GovCloud (US-East), EU (Stockholm), and Canada (Central): 1,000 records/second, 1,000 transactions/second, and 1 MiB/second.

    You can submit a limit increase request using the Amazon Kinesis Data Firehose Limits form. The three limits scale proportionally. For example, if you increase the throughput limit in US East (N. Virginia), US West (Oregon), or EU (Ireland) to 10 MiB/second, the other two limits increase to 4,000 transactions/second and 10,000 records/second.


    If the increased limit is much higher than the running traffic, it results in small delivery batches being sent to destinations. This is inefficient and can result in higher costs at the destination services. Be sure to increase the limit only to match current running traffic, and increase the limit further if traffic grows.


    When Kinesis Data Streams is configured as the data source, these limits don't apply, and Kinesis Data Firehose scales up and down without limit.
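
    The proportional scaling described above can be sketched as a small helper. The ratios below are derived from the default quotas for US East (N. Virginia) — 5,000 records/second and 2,000 transactions/second per 5 MiB/second — and the helper itself is illustrative, not part of any AWS SDK:

    ```python
    # Sketch: derive the two companion Direct PUT limits implied by a given
    # throughput limit. Ratios come from the defaults above
    # (5,000 rec/s : 2,000 tx/s : 5 MiB/s), i.e. 1,000 records/s and
    # 400 transactions/s per MiB/s of throughput.

    RECORDS_PER_MIB = 1_000
    TRANSACTIONS_PER_MIB = 400

    def scaled_limits(throughput_mib_per_s: int) -> dict:
        """Return the record and transaction limits implied by a throughput limit."""
        return {
            "mib_per_second": throughput_mib_per_s,
            "records_per_second": throughput_mib_per_s * RECORDS_PER_MIB,
            "transactions_per_second": throughput_mib_per_s * TRANSACTIONS_PER_MIB,
        }

    # Matches the worked example in the text: a 10 MiB/s limit implies
    # 10,000 records/s and 4,000 transactions/s.
    print(scaled_limits(10))
    ```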

  • Each Kinesis Data Firehose delivery stream stores data records for up to 24 hours in case the delivery destination is unavailable.

  • The maximum size of a record sent to Kinesis Data Firehose, before base64-encoding, is 1,000 KiB.

  • The PutRecordBatch operation can take up to 500 records per call or 4 MiB per call, whichever is smaller. This limit cannot be changed.
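
    A producer has to split its records to stay under both caps. A minimal batching sketch in plain Python (no AWS SDK; the byte-string records are hypothetical payloads):

    ```python
    # Sketch: split records into batches that respect the PutRecordBatch caps
    # (at most 500 records and 4 MiB per call). Individual records must be no
    # larger than 1,000 KiB before base64 encoding.

    MAX_RECORDS_PER_BATCH = 500
    MAX_BATCH_BYTES = 4 * 1024 * 1024    # 4 MiB per call
    MAX_RECORD_BYTES = 1_000 * 1024      # 1,000 KiB per record

    def batch_records(records: list[bytes]) -> list[list[bytes]]:
        """Group records into PutRecordBatch-sized batches."""
        batches, current, current_bytes = [], [], 0
        for record in records:
            if len(record) > MAX_RECORD_BYTES:
                raise ValueError("record exceeds the 1,000 KiB limit")
            if (len(current) == MAX_RECORDS_PER_BATCH
                    or current_bytes + len(record) > MAX_BATCH_BYTES):
                batches.append(current)
                current, current_bytes = [], 0
            current.append(record)
            current_bytes += len(record)
        if current:
            batches.append(current)
        return batches

    # 1,200 one-byte records split on the record-count cap: 500 + 500 + 200.
    print([len(b) for b in batch_records([b"x"] * 1200)])
    ```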

  • The following operations can provide up to five transactions per second: CreateDeliveryStream, DeleteDeliveryStream, DescribeDeliveryStream, ListDeliveryStreams, UpdateDestination, TagDeliveryStream, UntagDeliveryStream, ListTagsForDeliveryStream, StartDeliveryStreamEncryption, StopDeliveryStreamEncryption.
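
    Scripts that automate these control-plane calls may want to stay under the five-transactions-per-second ceiling on the client side. A minimal sliding-window throttle sketch (this helper is an assumption, not an SDK feature):

    ```python
    import time
    from collections import deque

    # Sketch: a client-side throttle to stay under a calls-per-second ceiling,
    # such as the 5 TPS allowed for control-plane operations like
    # DescribeDeliveryStream. Hypothetical helper, not part of any AWS SDK.

    class Throttle:
        def __init__(self, max_calls: int, per_seconds: float = 1.0):
            self.max_calls = max_calls
            self.per_seconds = per_seconds
            self.calls = deque()  # monotonic timestamps of recent calls

        def acquire(self) -> None:
            """Block until another call is allowed inside the window."""
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            while self.calls and now - self.calls[0] >= self.per_seconds:
                self.calls.popleft()
            if len(self.calls) >= self.max_calls:
                time.sleep(self.per_seconds - (now - self.calls[0]))
                return self.acquire()
            self.calls.append(now)

    throttle = Throttle(max_calls=5)
    for _ in range(5):
        throttle.acquire()   # five calls pass immediately; a sixth would wait
    ```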

  • The buffer size hints range from 1 MiB to 128 MiB for Amazon S3 delivery. For Amazon Elasticsearch Service (Amazon ES) delivery, they range from 1 MiB to 100 MiB. For AWS Lambda processing, you can set a buffering hint between 1 MiB and 3 MiB using the BufferSizeInMBs processor parameter. The size threshold is applied to the buffer before compression. These options are treated as hints: Kinesis Data Firehose might choose to use different values when doing so is optimal.

  • The buffer interval hints range from 60 seconds to 900 seconds.
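
    The size and interval ranges above can be checked client-side before configuring a delivery stream. A sketch of a hypothetical validator (the ranges come from this guide; the helper is not part of any SDK):

    ```python
    # Sketch: validate buffering hints against the documented ranges before
    # calling CreateDeliveryStream or UpdateDestination.

    SIZE_RANGES_MIB = {        # destination -> (min, max) buffer size hint in MiB
        "s3": (1, 128),
        "elasticsearch": (1, 100),
        "lambda": (1, 3),      # BufferSizeInMBs processor parameter
    }
    INTERVAL_RANGE_S = (60, 900)

    def validate_hints(destination: str, size_mib: int, interval_s: int) -> bool:
        """Return True if both hints fall inside the documented ranges."""
        lo, hi = SIZE_RANGES_MIB[destination]
        return (lo <= size_mib <= hi
                and INTERVAL_RANGE_S[0] <= interval_s <= INTERVAL_RANGE_S[1])

    print(validate_hints("s3", 64, 300))      # within range
    print(validate_hints("lambda", 5, 300))   # rejected: Lambda hint caps at 3 MiB
    ```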

  • For delivery from Kinesis Data Firehose to Amazon Redshift, only publicly accessible Amazon Redshift clusters are supported.

  • The retry duration range is from 0 seconds to 7,200 seconds for Amazon Redshift and Amazon ES delivery.
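
    One way to reason about the retry window is as a budget for a backoff schedule. The doubling schedule below is purely illustrative (Firehose's internal retry policy is not documented here); only the 7,200-second cap comes from this guide:

    ```python
    # Sketch: an exponential backoff schedule that fits inside a configurable
    # retry duration (0-7,200 seconds for Amazon Redshift and Amazon ES
    # delivery). The doubling schedule is an assumption for illustration.

    def backoff_schedule(retry_duration_s: int, base_s: int = 1) -> list[int]:
        """Delays (in seconds) of doubling retries until the window is spent."""
        delays, elapsed, delay = [], 0, base_s
        while elapsed + delay <= retry_duration_s:
            delays.append(delay)
            elapsed += delay
            delay *= 2
        return delays

    # With the maximum 7,200-second window: delays 1, 2, 4, ..., 2048.
    print(backoff_schedule(7200))
    ```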

  • Kinesis Data Firehose supports Elasticsearch versions 1.5, 2.3, 5.1, 5.3, 5.5, 5.6, as well as all 6.* versions.

  • Kinesis Data Firehose doesn't support delivery to Elasticsearch domains in a virtual private cloud (VPC).

  • When the destination is Amazon S3, Amazon Redshift, or Amazon ES, Kinesis Data Firehose allows up to 5 outstanding Lambda invocations per shard. For Splunk, the limit is 10 outstanding Lambda invocations per shard.