public class TransferManagerConfiguration extends Object
Configuration options for how TransferManager processes requests.
The best configuration settings depend on network configuration, latency and bandwidth. The default configuration settings are suitable for most applications, but this class enables developers to experiment with different configurations and tune transfer manager performance.
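For illustration, a minimal sketch of applying a tuned configuration through the v1 TransferManager(AmazonS3) constructor and its setConfiguration setter (both since deprecated in favor of TransferManagerBuilder, which exposes the same settings as builder options); the 32 MiB threshold and 8 MiB part size are arbitrary example values, not recommendations:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

public class TransferTuningExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Start from the defaults and override only the settings under test.
        TransferManagerConfiguration config = new TransferManagerConfiguration();
        config.setMultipartUploadThreshold(32L * 1024 * 1024); // example: 32 MiB
        config.setMinimumUploadPartSize(8L * 1024 * 1024);     // example: 8 MiB parts

        TransferManager tm = new TransferManager(s3);
        tm.setConfiguration(config);
    }
}
```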
| Constructor and Description |
| --- |
| TransferManagerConfiguration() |
| Modifier and Type | Method and Description |
| --- | --- |
| long | getMinimumUploadPartSize() - Returns the minimum part size for upload parts. |
| long | getMultipartCopyPartSize() - Returns the minimum size in bytes of each part in a multi-part copy request. |
| long | getMultipartCopyThreshold() - Returns the maximum threshold size of an Amazon S3 object after which the copy operation is carried out using a multi-part request. |
| long | getMultipartUploadThreshold() - Returns the size threshold in bytes for when to use multipart uploads. |
| boolean | isAlwaysCalculateMultipartMd5() - Returns true if Transfer Manager should calculate MD5 for multipart uploads. |
| boolean | isDisableParallelDownloads() - Returns whether parallel downloads are disabled. |
| void | setAlwaysCalculateMultipartMd5(boolean alwaysCalculateMultipartMd5) - Set to true if Transfer Manager should calculate MD5 for multipart uploads. |
| void | setDisableParallelDownloads(boolean disableParallelDownloads) - Sets the option to disable parallel downloads. |
| void | setMinimumUploadPartSize(long minimumUploadPartSize) - Sets the minimum part size for upload parts. |
| void | setMultipartCopyPartSize(long multipartCopyPartSize) - Sets the minimum part size in bytes for each part in a multi-part copy request. |
| void | setMultipartCopyThreshold(long multipartCopyThreshold) - Sets the size threshold in bytes for when to use multi-part copy requests. |
| void | setMultipartUploadThreshold(int multipartUploadThreshold) - Deprecated. Replaced by setMultipartUploadThreshold(long). |
| void | setMultipartUploadThreshold(long multipartUploadThreshold) - Sets the size threshold in bytes for when to use multipart uploads. |
public long getMinimumUploadPartSize()
Returns the minimum part size for upload parts.

public void setMinimumUploadPartSize(long minimumUploadPartSize)
Sets the minimum part size for upload parts.
Parameters:
minimumUploadPartSize - The minimum part size for upload parts.

public long getMultipartUploadThreshold()
Returns the size threshold in bytes for when to use multipart uploads.
Multipart uploads are easier to recover from and potentially faster than single part uploads, especially when the upload parts can be uploaded in parallel as with files. Due to additional network communication, small uploads should use a single connection for the upload.
public void setMultipartUploadThreshold(long multipartUploadThreshold)
Multipart uploads are easier to recover from and potentially faster than single part uploads, especially when the upload parts can be uploaded in parallel as with files. Due to additional network communication, small uploads should use a single connection for the upload.
Parameters:
multipartUploadThreshold - The size threshold in bytes for when to use multipart uploads.
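As a sketch of the trade-off described above, the snippet below raises the threshold so that mid-sized files go over a single connection while larger files still switch to multipart uploads; the bucket, key, file path, and 100 MiB value are hypothetical, and "tm" is a TransferManager built as in the earlier example:

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import com.amazonaws.services.s3.transfer.Upload;

public class ThresholdExample {
    static void uploadWithThreshold(TransferManager tm) throws InterruptedException {
        TransferManagerConfiguration config = new TransferManagerConfiguration();
        config.setMultipartUploadThreshold(100L * 1024 * 1024); // hypothetical: 100 MiB
        tm.setConfiguration(config);

        // Files at or above the threshold are split into parts and uploaded in
        // parallel; smaller files go up in a single PutObject request.
        Upload upload = tm.upload("example-bucket", "example-key", new File("/tmp/example.dat"));
        upload.waitForCompletion();
    }
}
```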
public long getMultipartCopyPartSize()
Returns the minimum size in bytes of each part in a multi-part copy request.
public void setMultipartCopyPartSize(long multipartCopyPartSize)
Sets the minimum part size in bytes for each part in a multi-part copy request.
Parameters:
multipartCopyPartSize - The minimum size in bytes for each part in a multi-part copy request.

public long getMultipartCopyThreshold()
Returns the maximum threshold size of an Amazon S3 object after which the copy operation is carried out using a multi-part request.
public void setMultipartCopyThreshold(long multipartCopyThreshold)
Sets the size threshold in bytes for when to use multi-part copy requests.
Parameters:
multipartCopyThreshold - The size threshold in bytes for when to use multi-part copy requests.
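A sketch of tuning server-side copies under the same assumptions; the bucket and key names are hypothetical and the sizes are arbitrary examples:

```java
import com.amazonaws.services.s3.transfer.Copy;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

public class CopyTuningExample {
    static void copyWithTunedParts(TransferManager tm) throws InterruptedException {
        TransferManagerConfiguration config = new TransferManagerConfiguration();
        config.setMultipartCopyThreshold(1024L * 1024 * 1024); // hypothetical: 1 GiB
        config.setMultipartCopyPartSize(128L * 1024 * 1024);   // hypothetical: 128 MiB parts
        tm.setConfiguration(config);

        // Objects above the threshold are copied server-side as multiple parts.
        Copy copy = tm.copy("source-bucket", "source-key", "dest-bucket", "dest-key");
        copy.waitForCompletion();
    }
}
```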
@Deprecated
public void setMultipartUploadThreshold(int multipartUploadThreshold)
Deprecated. Replaced by setMultipartUploadThreshold(long).
Multipart uploads are easier to recover from and potentially faster than single part uploads, especially when the upload parts can be uploaded in parallel as with files. Due to additional network communication, small uploads should use a single connection for the upload.

This overload reverses the backward incompatibility with Hadoop 2.7 and the S3A filesystem introduced in Amazon Web Services SDK v1.7.6 by this pull request: https://github.com/aws/aws-sdk-java/pull/201

See details (on the error message, and the fix targeted for Hadoop 2.8) here:
- https://issues.apache.org/jira/browse/HADOOP-12420
- https://issues.apache.org/jira/browse/HADOOP-12496
- https://issues.apache.org/jira/browse/HADOOP-12269

Once Hadoop 2.8 (which uses aws-sdk 1.10.6 or later) is in common use, this overload may be removed.
Parameters:
multipartUploadThreshold - The size threshold in bytes for when to use multipart uploads.

public boolean isDisableParallelDownloads()
Returns whether parallel downloads are disabled.
TransferManager automatically detects and downloads a multipart object in parallel. Setting this option to true will disable parallel downloads.
Disabling parallel downloads might reduce performance for large files.
public void setDisableParallelDownloads(boolean disableParallelDownloads)
TransferManager automatically detects and downloads a multipart object in parallel. Setting this option to true will disable parallel downloads.
Disabling parallel downloads might reduce performance for large files.
Parameters:
disableParallelDownloads - Boolean value to disable parallel downloads.
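A sketch of turning off parallel downloads, for example to limit concurrent connections; the names are hypothetical, and per the note above this can reduce performance for large files:

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.Download;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

public class SerialDownloadExample {
    static void downloadSerially(TransferManager tm) throws InterruptedException {
        TransferManagerConfiguration config = new TransferManagerConfiguration();
        config.setDisableParallelDownloads(true); // fetch the object over one connection
        tm.setConfiguration(config);

        Download download = tm.download("example-bucket", "example-key", new File("/tmp/example.dat"));
        download.waitForCompletion();
    }
}
```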
public boolean isAlwaysCalculateMultipartMd5()
Returns true if Transfer Manager should calculate MD5 for multipart uploads.

For instance, if a bucket is enabled for Object Locking, put requests for objects and object parts must contain an MD5 digest. Since Transfer Manager operates on a whole object, the user cannot supply the MD5 digest directly when multipart uploads are in effect.

Supplying any object locking parameter also instructs Transfer Manager to calculate MD5 for parts; this flag should be used in cases where those parameters are not present.
public void setAlwaysCalculateMultipartMd5(boolean alwaysCalculateMultipartMd5)
For instance, if a bucket is enabled for Object Locking, put requests for objects and object parts must contain an MD5 digest. Since Transfer Manager operates on a whole object, the user cannot supply the MD5 digest directly when multipart uploads are in effect.

Supplying any object locking parameter also instructs Transfer Manager to calculate MD5 for parts; this flag should be used in cases where those parameters are not present.
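A sketch of enabling this flag for an upload to a bucket with Object Locking when no object locking parameters are supplied on the request; the bucket, key, and path are hypothetical:

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import com.amazonaws.services.s3.transfer.Upload;

public class ObjectLockUploadExample {
    static void uploadToLockedBucket(TransferManager tm) throws InterruptedException {
        TransferManagerConfiguration config = new TransferManagerConfiguration();
        // No object locking parameters are set on the request itself, so ask
        // Transfer Manager to compute an MD5 digest for every part it uploads.
        config.setAlwaysCalculateMultipartMd5(true);
        tm.setConfiguration(config);

        Upload upload = tm.upload("locked-bucket", "example-key", new File("/tmp/example.dat"));
        upload.waitForCompletion();
    }
}
```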