max_concurrent_requests - AWS SDKs and Tools


This setting applies only to AWS CLI commands in the s3 namespace.

Specifies the maximum number of Amazon S3 transfer requests that can run at the same time.


The aws s3 transfer commands are multithreaded. At any given time, multiple Amazon S3 requests can be running. For example, when you use the command:

$ aws s3 cp some/local/dir s3://bucket/ --recursive

to upload files to an Amazon S3 bucket, the AWS CLI can upload the files some/local/dir/file1, some/local/dir/file2, and some/local/dir/file3 in parallel. This setting limits the number of transfer operations that can run at the same time.
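The behavior described above can be sketched as a bounded worker pool. The following is only an illustration of the concurrency cap, not the AWS CLI's actual implementation; the file names and the `upload` function are hypothetical stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

MAX_CONCURRENT_REQUESTS = 10  # mirrors the setting's default value

# Track the peak number of simultaneously running "transfers".
lock = threading.Lock()
running = 0
peak = 0

def upload(name):
    """Stand-in for a single S3 transfer request (hypothetical)."""
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.01)  # simulate network I/O
    with lock:
        running -= 1
    return name

files = [f"some/local/dir/file{i}" for i in range(50)]

# The pool caps how many uploads run at the same time, just as
# max_concurrent_requests caps concurrent S3 transfer requests.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
    results = list(pool.map(upload, files))
```

No matter how many files are queued, at most `MAX_CONCURRENT_REQUESTS` of them are in flight at once; the rest wait for a free worker.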

Default value: 10

You might need to change this value for a few reasons:

  • Decreasing this value – In some environments, the default of 10 concurrent requests can overwhelm a system. This can cause connection timeouts or slow the responsiveness of the system. Lowering this value makes the Amazon S3 transfer commands less resource intensive. The tradeoff is that Amazon S3 transfers can take longer to complete. Lowering this value might be necessary if your environment includes a tool that limits your available bandwidth.

  • Increasing this value – In some scenarios, you might want the Amazon S3 transfers to complete as quickly as possible, using as much network bandwidth as necessary. In this scenario, the default number of concurrent requests might not be enough to use all of the available network bandwidth. Increasing this value can reduce the time it takes to complete an Amazon S3 transfer.
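For example, to allow 20 concurrent requests, add the setting to the shared AWS config file (the default profile shown here is just an example):

```ini
# ~/.aws/config
[default]
s3 =
  max_concurrent_requests = 20
```

You can also write this entry from the command line with aws configure set s3.max_concurrent_requests 20.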

Ways to set this value

Location              Supported   Example
config file           Yes         s3 =
                                    max_concurrent_requests = 20
credentials file      No
Environment variable  No
AWS CLI parameter     No

Compatibility with AWS SDKs and tools