Provisioning storage throughput
Amazon MSK brokers persist data on storage volumes. Storage I/O is consumed when producers write to the cluster, when data is replicated between brokers, and when consumers read data that isn't in memory. Volume storage throughput is the rate at which data can be written to and read from a storage volume. Provisioned storage throughput is the ability to specify that rate for the brokers in your cluster.
You can specify the provisioned throughput rate in MiB per second for clusters whose brokers are of size kafka.m5.4xlarge or larger and whose storage volumes are 10 GiB or greater. You can specify provisioned throughput during cluster creation, and you can also enable or disable provisioned throughput for a cluster that is in the ACTIVE state.
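For example, to turn on provisioned throughput for a cluster that is already in the ACTIVE state, you can use the UpdateStorage operation. The following AWS CLI sketch uses placeholder values (ClusterArn and CurrentClusterVersion) that you would replace with values from your account.

# Enable 250 MiB/s of provisioned throughput on an existing ACTIVE cluster.
# ClusterArn and CurrentClusterVersion are placeholders; both are returned
# by: aws kafka describe-cluster-v2
aws kafka update-storage \
    --cluster-arn ClusterArn \
    --current-version CurrentClusterVersion \
    --provisioned-throughput '{"Enabled": true, "VolumeThroughput": 250}'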
Throughput bottlenecks
There are multiple possible causes of bottlenecks in broker throughput: volume throughput, Amazon EC2 to Amazon EBS network throughput, and Amazon EC2 egress throughput. You can enable provisioned storage throughput to adjust volume throughput, but broker throughput can still be limited by Amazon EC2 to Amazon EBS network throughput and by Amazon EC2 egress throughput.
Amazon EC2 egress throughput is affected by the number of consumer groups and the number of consumers per consumer group. Both Amazon EC2 to Amazon EBS network throughput and Amazon EC2 egress throughput are higher for larger broker sizes.
For volume sizes of 10 GiB or larger, you can provision storage throughput of 250 MiB per second or greater; 250 MiB per second is the default. To provision storage throughput, you must choose a broker size of kafka.m5.4xlarge or larger (or kafka.m7g.2xlarge or larger). You can then specify throughput up to the maximum shown in the following table.
| Broker size | Maximum storage throughput (MiB/second) |
|---|---|
| kafka.m5.4xlarge | 593 |
| kafka.m5.8xlarge | 850 |
| kafka.m5.12xlarge | 1000 |
| kafka.m5.16xlarge | 1000 |
| kafka.m5.24xlarge | 1000 |
| kafka.m7g.2xlarge | 312.5 |
| kafka.m7g.4xlarge | 625 |
| kafka.m7g.8xlarge | 1000 |
| kafka.m7g.12xlarge | 1000 |
| kafka.m7g.16xlarge | 1000 |
Measuring storage throughput
You can use the VolumeReadBytes and VolumeWriteBytes metrics to measure the average storage throughput of a cluster. The sum of these two metrics gives the average storage throughput in bytes. To get the average storage throughput for a cluster, set these two metrics to SUM and the period to 1 minute, then use the following formula.
Average storage throughput in MiB/s = (Sum(VolumeReadBytes) + Sum(VolumeWriteBytes)) / (60 * 1024 * 1024)
For information about the VolumeReadBytes and VolumeWriteBytes metrics, see PER_BROKER Level monitoring.
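As a concrete illustration, the following AWS CLI sketch retrieves one minute of the VolumeReadBytes metric for a single broker. The cluster name example-cluster, the broker ID, and the time range are assumptions; substitute your own values.

# Sum of bytes read from the storage volume of broker 1 over one minute.
# "example-cluster", the broker ID, and the time range are placeholders.
aws cloudwatch get-metric-statistics \
    --namespace AWS/Kafka \
    --metric-name VolumeReadBytes \
    --dimensions Name="Cluster Name",Value="example-cluster" Name="Broker ID",Value="1" \
    --statistics Sum \
    --period 60 \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T00:01:00Z

Run the same command with --metric-name VolumeWriteBytes, add the two Sum values, and divide by (60 * 1024 * 1024) to get the average storage throughput in MiB/s.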
Configuration update
You can update your Amazon MSK configuration either before or after you turn on provisioned throughput. However, you won't see the desired throughput until you perform both actions: update the num.replica.fetchers configuration parameter and turn on provisioned throughput.
In the default Amazon MSK configuration, num.replica.fetchers has a value of 2. To update num.replica.fetchers, you can use the suggested values from the following table. These values are for guidance purposes; we recommend that you adjust them based on your use case.
| Broker size | num.replica.fetchers |
|---|---|
| kafka.m5.4xlarge | 4 |
| kafka.m5.8xlarge | 8 |
| kafka.m5.12xlarge | 14 |
| kafka.m5.16xlarge | 16 |
| kafka.m5.24xlarge | 16 |
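As a sketch of how such an update might look with the AWS CLI, the following commands create a custom configuration that sets num.replica.fetchers to 14 (the suggested value for kafka.m5.12xlarge) and apply it to an existing cluster. The properties file name, configuration name, cluster ARN, and current version shown here are assumptions; substitute your own values.

# msk-config.properties (assumed file name) contains the single line:
#   num.replica.fetchers=14
aws kafka create-configuration \
    --name "increased-replica-fetchers" \
    --server-properties fileb://msk-config.properties

# Apply the configuration, using the Arn and Revision returned by the
# previous command; CurrentClusterVersion comes from describe-cluster-v2.
aws kafka update-cluster-configuration \
    --cluster-arn ClusterArn \
    --configuration-info '{"Arn": "ConfigurationArn", "Revision": 1}' \
    --current-version CurrentClusterVersion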
Your updated configuration may not take effect for up to 24 hours, and may take longer when a source volume is not fully utilized. However, transitional volume performance during the migration period is at least equal to the performance of the source storage volumes. A fully utilized 1 TiB volume typically takes about six hours to migrate to an updated configuration.
Provisioning storage throughput using the AWS Management Console
1. Sign in to the AWS Management Console, and open the Amazon MSK console at https://console.aws.amazon.com/msk/home?region=us-east-1#/home/.
2. Choose Create cluster.
3. Choose Custom create.
4. Specify a name for the cluster.
5. In the Storage section, choose Enable.
6. Choose a value for storage throughput per broker.
7. Choose a VPC, zones and subnets, and a security group.
8. Choose Next.
9. At the bottom of the Security step, choose Next.
10. At the bottom of the Monitoring and tags step, choose Next.
11. Review the cluster settings, then choose Create cluster.
Provisioning storage throughput using the AWS CLI
This section shows an example of how you can use the AWS CLI to create a cluster with provisioned throughput enabled.
Copy the following JSON and paste it into a file. Replace the subnet IDs and security group ID placeholders with values from your account. Name the file cluster-creation.json and save it.

{
    "Provisioned": {
        "BrokerNodeGroupInfo": {
            "InstanceType": "kafka.m5.4xlarge",
            "ClientSubnets": [
                "Subnet-1-ID",
                "Subnet-2-ID"
            ],
            "SecurityGroups": [
                "Security-Group-ID"
            ],
            "StorageInfo": {
                "EbsStorageInfo": {
                    "VolumeSize": 10,
                    "ProvisionedThroughput": {
                        "Enabled": true,
                        "VolumeThroughput": 250
                    }
                }
            }
        },
        "EncryptionInfo": {
            "EncryptionInTransit": {
                "InCluster": false,
                "ClientBroker": "PLAINTEXT"
            }
        },
        "KafkaVersion": "2.8.1",
        "NumberOfBrokerNodes": 2
    },
    "ClusterName": "provisioned-throughput-example"
}

Run the following AWS CLI command from the directory where you saved the JSON file in the previous step.
aws kafka create-cluster-v2 --cli-input-json file://cluster-creation.json
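When the cluster reaches the ACTIVE state, you can confirm the storage settings, including the ProvisionedThroughput values, in the output of a describe call. ClusterArn below is a placeholder for the ARN returned by create-cluster-v2.

# ClusterArn is a placeholder; use the ARN from the create-cluster-v2 output
aws kafka describe-cluster-v2 --cluster-arn ClusterArn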
Provisioning storage throughput using the API
To configure provisioned storage throughput while creating a cluster, use CreateClusterV2.