Implementation for accessing S3
Namespace: Amazon.S3
Assembly: AWSSDK.S3.dll
Version: 3.x.y.z
public class AmazonS3Client : AmazonServiceClient, IAmazonS3, IAmazonService, ICoreAmazonS3, IDisposable

The AmazonS3Client type exposes the following members.
Name | Description
---|---
AmazonS3Client() | Constructs AmazonS3Client with the credentials loaded from the application's default configuration, and if unsuccessful, from the Instance Profile service on an EC2 instance. Example App.config with credentials set: `<?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration>`
AmazonS3Client(RegionEndpoint) | Same credential resolution as the default constructor, targeting the specified region endpoint.
AmazonS3Client(AmazonS3Config) | Same credential resolution as the default constructor, using the supplied AmazonS3Config configuration object.
AmazonS3Client(AWSCredentials) | Constructs AmazonS3Client with AWS credentials.
AmazonS3Client(AWSCredentials, RegionEndpoint) | Constructs AmazonS3Client with AWS credentials and a region endpoint.
AmazonS3Client(AWSCredentials, AmazonS3Config) | Constructs AmazonS3Client with AWS credentials and an AmazonS3Config configuration object.
AmazonS3Client(string, string) | Constructs AmazonS3Client with an AWS access key ID and AWS secret key.
AmazonS3Client(string, string, RegionEndpoint) | Constructs AmazonS3Client with an AWS access key ID, AWS secret key, and region endpoint.
AmazonS3Client(string, string, AmazonS3Config) | Constructs AmazonS3Client with an AWS access key ID, AWS secret key, and an AmazonS3Config configuration object.
AmazonS3Client(string, string, string) | Constructs AmazonS3Client with an AWS access key ID, AWS secret key, and session token.
AmazonS3Client(string, string, string, RegionEndpoint) | Constructs AmazonS3Client with an AWS access key ID, AWS secret key, session token, and region endpoint.
AmazonS3Client(string, string, string, AmazonS3Config) | Constructs AmazonS3Client with an AWS access key ID, AWS secret key, session token, and an AmazonS3Config configuration object.
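As a sketch of how these overloads combine (the profile, key values, and region are placeholders, not real credentials):

```csharp
using Amazon;
using Amazon.Runtime;
using Amazon.S3;

// Default constructor: credentials come from the application's default
// configuration (e.g. the AWSProfileName app setting shown above),
// falling back to the EC2 instance profile.
var defaultClient = new AmazonS3Client();

// Explicit region endpoint:
var regionalClient = new AmazonS3Client(RegionEndpoint.EUWest1);

// Explicit credentials plus a configuration object:
var config = new AmazonS3Config { RegionEndpoint = RegionEndpoint.USEast1 };
var credentialClient = new AmazonS3Client(
    new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_KEY"), // placeholders
    config);
```

Prefer the default constructor with a configured profile over embedding key strings; the string-based overloads exist mainly for scenarios where credentials arrive at runtime.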
Name | Type | Description
---|---|---
Config | Amazon.Runtime.IClientConfig | Inherited from Amazon.Runtime.AmazonServiceClient.
Paginators | Amazon.S3.Model.IS3PaginatorFactory | Paginators for the service.
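A minimal sketch of the Paginators property, which hides continuation-token handling behind an async stream (the bucket name is a placeholder):

```csharp
using System;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// List every object in the bucket without manually threading
// ContinuationToken between ListObjectsV2 calls.
var paginator = client.Paginators.ListObjectsV2(
    new ListObjectsV2Request { BucketName = "example-bucket" });

await foreach (var response in paginator.Responses)
{
    foreach (var s3Object in response.S3Objects)
        Console.WriteLine(s3Object.Key);
}
```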
AbortMultipartUpload(string, string, string)

This action aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

To verify that all parts have been removed, so you don't get charged for the part storage, you should call the ListParts action and ensure that the parts list is empty.

For information about permissions required to use the multipart upload, see Multipart Upload and Permissions.

The following operations are related to AbortMultipartUpload: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, ListMultipartUploads.

AbortMultipartUpload(AbortMultipartUploadRequest)

See AbortMultipartUpload(string, string, string) above; this overload takes a fully populated AbortMultipartUploadRequest.

AbortMultipartUploadAsync(string, string, string, CancellationToken)

Asynchronous variant of AbortMultipartUpload(string, string, string).

AbortMultipartUploadAsync(AbortMultipartUploadRequest, CancellationToken)

Asynchronous variant of AbortMultipartUpload(AbortMultipartUploadRequest).
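A sketch of aborting an upload and then verifying via ListParts that no part storage remains (bucket, key, and uploadId are placeholders; the uploadId comes from the earlier InitiateMultipartUpload call):

```csharp
using System;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();
string uploadId = "example-upload-id"; // returned by InitiateMultipartUpload

// Abort the upload; in-flight part uploads may still land, so this
// may need to be repeated until the parts list is empty.
await client.AbortMultipartUploadAsync(
    "example-bucket", "large-object.bin", uploadId);

// Verify that no parts remain, so no storage is still being billed.
var parts = await client.ListPartsAsync(new ListPartsRequest
{
    BucketName = "example-bucket",
    Key = "large-object.bin",
    UploadId = uploadId
});
if (parts.Parts.Count == 0)
    Console.WriteLine("All parts freed.");
```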
CompleteMultipartUpload(CompleteMultipartUploadRequest)

Completes a multipart upload by assembling previously uploaded parts.

You first initiate the multipart upload and then upload all parts using the UploadPart operation. After successfully uploading all relevant parts of an upload, you call this action to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the Complete Multipart Upload request, you must provide the parts list. You must ensure that the parts list is complete. This action concatenates the parts that you provide in the list. For each part in the list, you must provide the part number and the ETag value, returned after that part was uploaded.

Processing of a Complete Multipart Upload request could take several minutes to complete. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white space characters to keep the connection from timing out. Because a request could fail after the initial 200 OK response has been sent, it is important that you check the response body to determine whether the request succeeded.

Note that if CompleteMultipartUpload fails, applications should be prepared to retry the failed requests.

You cannot use Content-Type: application/x-www-form-urlencoded with Complete Multipart Upload requests. Also, if you do not provide a Content-Type header, CompleteMultipartUpload returns a 200 OK response.

For more information about multipart uploads, see Uploading Objects Using Multipart Upload. For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions.

The following operations are related to CompleteMultipartUpload: CreateMultipartUpload, UploadPart, AbortMultipartUpload, ListParts, ListMultipartUploads.

CompleteMultipartUploadAsync(CompleteMultipartUploadRequest, CancellationToken)

Asynchronous variant of CompleteMultipartUpload(CompleteMultipartUploadRequest).
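The initiate/upload/complete sequence described above can be sketched as follows (bucket, key, and file names are placeholders; a real upload would loop over parts of at least 5 MB each, except the last):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// 1. Initiate the multipart upload and keep the returned upload ID.
var init = await client.InitiateMultipartUploadAsync(
    new InitiateMultipartUploadRequest
    {
        BucketName = "example-bucket",
        Key = "large-object.bin"
    });

// 2. Upload each part, recording its part number and ETag.
var part = await client.UploadPartAsync(new UploadPartRequest
{
    BucketName = "example-bucket",
    Key = "large-object.bin",
    UploadId = init.UploadId,
    PartNumber = 1,
    FilePath = "large-object.bin"
});

// 3. Complete: the parts list must name every part's number and ETag,
//    in ascending part-number order.
await client.CompleteMultipartUploadAsync(new CompleteMultipartUploadRequest
{
    BucketName = "example-bucket",
    Key = "large-object.bin",
    UploadId = init.UploadId,
    PartETags = { new PartETag(1, part.ETag) }
});
```

If any step fails permanently, call AbortMultipartUpload so the already-uploaded parts stop accruing storage charges.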
CopyObject(string, string, string, string)

Creates a copy of an object that is already stored in Amazon S3.

You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error, so you must parse the response body to determine the outcome.

If the copy is successful, you receive a response with information about the copied object. If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

Metadata

When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header.

x-amz-copy-source-if Headers

To only copy an object under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the x-amz-copy-source-if-match, x-amz-copy-source-if-none-match, x-amz-copy-source-if-unmodified-since, and x-amz-copy-source-if-modified-since request headers.

If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request, Amazon S3 returns 200 OK and copies the data when the if-match condition evaluates to true, even if the if-unmodified-since condition evaluates to false.

If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request, Amazon S3 returns the 412 Precondition Failed response code when the if-none-match condition evaluates to false, even if the if-modified-since condition evaluates to true.

All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

Server-side encryption

When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption. If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

Access Control List (ACL)-Specific Request Headers

When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide. If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

Checksums

When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

Storage Class Options

You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by specifying a storage class for the destination object. For more information, see Storage Classes in the Amazon S3 User Guide.

Versioning

By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted.

If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header. If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

The following operations are related to CopyObject: PutObject, GetObject. For more information, see Copying Objects.
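A sketch of both a simple copy and a request-object copy that replaces metadata and changes the storage class (all bucket and key names are placeholders):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Simple copy between buckets, keeping metadata (the default).
await client.CopyObjectAsync(
    "source-bucket", "report.csv",
    "destination-bucket", "report.csv");

// Request-object form: replace metadata and move the destination
// object to the Standard-IA storage class.
await client.CopyObjectAsync(new CopyObjectRequest
{
    SourceBucket = "source-bucket",
    SourceKey = "report.csv",
    DestinationBucket = "destination-bucket",
    DestinationKey = "archived/report.csv",
    StorageClass = S3StorageClass.StandardInfrequentAccess,
    MetadataDirective = S3MetadataDirective.REPLACE
});
```

Remember that a 200 OK from CopyObject can still carry an embedded error for large copies, so production code should inspect the response rather than rely on the status code alone.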
CopyObject(string, string, string, string, string)

See CopyObject(string, string, string, string) above; this overload additionally takes the version ID of the source object, so you can copy a specific version rather than the current one.
CopyObject(CopyObjectRequest)

See CopyObject(string, string, string, string) above; this overload takes a fully populated CopyObjectRequest, which exposes the metadata, ACL, encryption, checksum, and storage-class options described there.
CopyObjectAsync(string, string, string, string, CancellationToken)

Asynchronous variant of CopyObject(string, string, string, string).
![]() |
CopyObjectAsync(string, string, string, string, string, CancellationToken) |
Creates a copy of an object that is already stored in Amazon S3.
You can store individual objects of up to 5 TB in Amazon S3. You create a copy of
your object up to 5 GB in size in a single atomic action using this API. However,
to copy an object greater than 5 GB, you must use the multipart upload Upload Part
- Copy (UploadPartCopy) API. For more information, see Copy
Object Using the REST Multipart Upload API.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
A copy request might return an error when Amazon S3 receives the copy request or while
Amazon S3 is copying the files. If the error occurs before the copy action starts,
you receive a standard Amazon S3 error. If the error occurs during the copy operation,
the error response is embedded in the If the copy is successful, you receive a response with information about the copied object. If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body. The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing. Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Requesterror. For more information, see Transfer Acceleration. Metadata When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.
To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header.

x-amz-copy-source-if Headers

To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the following request parameters: x-amz-copy-source-if-match, x-amz-copy-source-if-none-match, x-amz-copy-source-if-unmodified-since, and x-amz-copy-source-if-modified-since.

If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data: the x-amz-copy-source-if-match condition evaluates to true, and the x-amz-copy-source-if-unmodified-since condition evaluates to false.

If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code: the x-amz-copy-source-if-none-match condition evaluates to false, and the x-amz-copy-source-if-modified-since condition evaluates to true.

All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

Server-side encryption

When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

Access Control List (ACL)-Specific Request Headers

When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.
If the bucket that you're copying objects to uses the bucket owner enforced setting
for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets
that use this setting accept only PUT requests that don't specify an ACL or PUT requests
that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

Checksums
When copying an object, if the source object has a checksum, that checksum value will be copied to the new object by default. When you copy the object over, you can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

Storage Class Options

You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

Versioning

By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted.

If you enable versioning on the target bucket, Amazon S3 generates a unique version
ID for the object being copied. This version ID is different from the version ID of
the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header. If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

The following operations are related to CopyObject: PutObject, GetObject. For more information, see Copying Objects. |
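The x-amz-copy-source-if precondition behavior described in this entry can be modeled as follows (a hypothetical Python sketch of the evaluation rules, not SDK code; the ETag conditions take precedence over their paired date conditions):

```python
def copy_precondition_status(source_etag, source_mtime, if_match=None,
                             if_none_match=None, if_unmodified_since=None,
                             if_modified_since=None):
    """Model of x-amz-copy-source-if header evaluation. Returns the HTTP
    status: 200 when the copy proceeds, 412 Precondition Failed otherwise.
    When if-match is present and true, if-unmodified-since is ignored;
    likewise if-none-match (when false) makes if-modified-since moot."""
    if if_match is not None and source_etag != if_match:
        return 412                      # if-match condition failed
    if if_none_match is not None and source_etag == if_none_match:
        return 412                      # if-none-match condition failed
    # Date conditions apply only when the paired ETag condition is absent.
    if (if_match is None and if_unmodified_since is not None
            and source_mtime > if_unmodified_since):
        return 412
    if (if_none_match is None and if_modified_since is not None
            and source_mtime <= if_modified_since):
        return 412
    return 200
```

For example, the combined rule stated above: when if-match is true and if-unmodified-since is false, the copy still returns 200 OK.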
![]() |
CopyObjectAsync(CopyObjectRequest, CancellationToken) |
Creates a copy of an object that is already stored in Amazon S3.
You can store individual objects of up to 5 TB in Amazon S3. You can copy objects up to
5 GB in size in a single atomic action using this API. However,
to copy an object greater than 5 GB, you must use the multipart upload Upload Part
- Copy (UploadPartCopy) API. For more information, see Copy
Object Using the REST Multipart Upload API.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
A copy request might return an error when Amazon S3 receives the copy request or while
Amazon S3 is copying the files. If the error occurs before the copy action starts,
you receive a standard Amazon S3 error. If the error occurs during the copy operation,
the error response is embedded in the 200 OK response; this means that a 200 OK response can contain either a success or an error. If the copy is successful, you receive a response with information about the copied object.

If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the Content-Length header, and you would need to read the entire body.

The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

Amazon S3 Transfer Acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

Metadata

When copying an object, you can preserve all metadata (the default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.
To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header.

x-amz-copy-source-if Headers

To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the following request parameters: x-amz-copy-source-if-match, x-amz-copy-source-if-none-match, x-amz-copy-source-if-unmodified-since, and x-amz-copy-source-if-modified-since.

If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data: the x-amz-copy-source-if-match condition evaluates to true, and the x-amz-copy-source-if-unmodified-since condition evaluates to false.

If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code: the x-amz-copy-source-if-none-match condition evaluates to false, and the x-amz-copy-source-if-modified-since condition evaluates to true.

All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

Server-side encryption

When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

Access Control List (ACL)-Specific Request Headers

When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.
If the bucket that you're copying objects to uses the bucket owner enforced setting
for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets
that use this setting accept only PUT requests that don't specify an ACL or PUT requests
that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

Checksums
When copying an object, if the source object has a checksum, that checksum value will be copied to the new object by default. When you copy the object over, you can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

Storage Class Options

You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

Versioning

By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted.

If you enable versioning on the target bucket, Amazon S3 generates a unique version
ID for the object being copied. This version ID is different from the version ID of
the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header. If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

The following operations are related to CopyObject: PutObject, GetObject. For more information, see Copying Objects. |
![]() |
CopyPart(string, string, string, string, string) | |
![]() |
CopyPart(string, string, string, string, string, string) | |
![]() |
CopyPart(CopyPartRequest) | |
![]() |
CopyPartAsync(string, string, string, string, string, CancellationToken) | |
![]() |
CopyPartAsync(string, string, string, string, string, string, CancellationToken) | |
![]() |
CopyPartAsync(CopyPartRequest, CancellationToken) | |
![]() |
DeleteBucket(string) | |
![]() |
DeleteBucket(DeleteBucketRequest) | |
![]() |
DeleteBucketAnalyticsConfiguration(DeleteBucketAnalyticsConfigurationRequest) |
Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).
To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.

The following operations are related to DeleteBucketAnalyticsConfiguration: GetBucketAnalyticsConfiguration, ListBucketAnalyticsConfigurations, PutBucketAnalyticsConfiguration. |
![]() |
DeleteBucketAnalyticsConfigurationAsync(DeleteBucketAnalyticsConfigurationRequest, CancellationToken) |
Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).
To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.

The following operations are related to DeleteBucketAnalyticsConfiguration: GetBucketAnalyticsConfiguration, ListBucketAnalyticsConfigurations, PutBucketAnalyticsConfiguration. |
![]() |
DeleteBucketAsync(string, CancellationToken) | |
![]() |
DeleteBucketAsync(DeleteBucketRequest, CancellationToken) | |
![]() |
DeleteBucketEncryption(DeleteBucketEncryptionRequest) | |
![]() |
DeleteBucketEncryptionAsync(DeleteBucketEncryptionRequest, CancellationToken) | |
![]() |
DeleteBucketIntelligentTieringConfiguration(DeleteBucketIntelligentTieringConfigurationRequest) |
Deletes the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to DeleteBucketIntelligentTieringConfiguration include: GetBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, ListBucketIntelligentTieringConfigurations. |
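The 128 KB monitoring threshold mentioned above can be expressed as a trivial check (illustrative Python; the constant and function names are hypothetical):

```python
AUTO_TIERING_MINIMUM = 128 * 1024   # objects below 128 KB are not monitored

def is_auto_tiering_eligible(object_size_bytes):
    """Objects smaller than 128 KB can still be stored in S3
    Intelligent-Tiering, but they are never monitored or auto-tiered and
    are always billed at Frequent Access tier rates."""
    return object_size_bytes >= AUTO_TIERING_MINIMUM
```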
![]() |
DeleteBucketIntelligentTieringConfigurationAsync(DeleteBucketIntelligentTieringConfigurationRequest, CancellationToken) |
Deletes the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to DeleteBucketIntelligentTieringConfiguration include: GetBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, ListBucketIntelligentTieringConfigurations. |
![]() |
DeleteBucketInventoryConfiguration(DeleteBucketInventoryConfigurationRequest) |
Deletes an inventory configuration (identified by the inventory ID) from the bucket.
To use this operation, you must have permissions to perform the s3:PutInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

Operations related to DeleteBucketInventoryConfiguration include: GetBucketInventoryConfiguration, PutBucketInventoryConfiguration, ListBucketInventoryConfigurations. |
![]() |
DeleteBucketInventoryConfigurationAsync(DeleteBucketInventoryConfigurationRequest, CancellationToken) |
Deletes an inventory configuration (identified by the inventory ID) from the bucket.
To use this operation, you must have permissions to perform the s3:PutInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

Operations related to DeleteBucketInventoryConfiguration include: GetBucketInventoryConfiguration, PutBucketInventoryConfiguration, ListBucketInventoryConfigurations. |
![]() |
DeleteBucketMetricsConfiguration(DeleteBucketMetricsConfigurationRequest) |
Deletes a metrics configuration for the Amazon CloudWatch request metrics (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to DeleteBucketMetricsConfiguration: GetBucketMetricsConfiguration, PutBucketMetricsConfiguration, ListBucketMetricsConfigurations. |
![]() |
DeleteBucketMetricsConfigurationAsync(DeleteBucketMetricsConfigurationRequest, CancellationToken) |
Deletes a metrics configuration for the Amazon CloudWatch request metrics (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to DeleteBucketMetricsConfiguration: GetBucketMetricsConfiguration, PutBucketMetricsConfiguration, ListBucketMetricsConfigurations. |
![]() |
DeleteBucketOwnershipControls(DeleteBucketOwnershipControlsRequest) |
Removes OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to DeleteBucketOwnershipControls: GetBucketOwnershipControls, PutBucketOwnershipControls. |
![]() |
DeleteBucketOwnershipControlsAsync(DeleteBucketOwnershipControlsRequest, CancellationToken) |
Removes OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to DeleteBucketOwnershipControls: GetBucketOwnershipControls, PutBucketOwnershipControls. |
![]() |
DeleteBucketPolicy(string) |
This implementation of the DELETE action uses the policy subresource to delete the
policy of a specified bucket. If you are using an identity other than the root user
of the Amazon Web Services account that owns the bucket, the calling identity must
have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.

The following operations are related to DeleteBucketPolicy: CreateBucket, DeleteObject. |
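The access outcomes described for DeleteBucketPolicy can be summarized in a small decision sketch (hypothetical Python, assuming a 204 No Content success status; not SDK code):

```python
def delete_bucket_policy_status(has_permission, in_owner_account,
                                is_root_of_owner_account=False):
    """Simplified model: the owner account's root user always succeeds;
    otherwise the caller needs DeleteBucketPolicy permission (else 403
    Access Denied) and must belong to the bucket owner's account (else
    405 Method Not Allowed)."""
    if is_root_of_owner_account:
        return 204
    if not has_permission:
        return 403
    if not in_owner_account:
        return 405
    return 204
```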
![]() |
DeleteBucketPolicy(DeleteBucketPolicyRequest) |
This implementation of the DELETE action uses the policy subresource to delete the
policy of a specified bucket. If you are using an identity other than the root user
of the Amazon Web Services account that owns the bucket, the calling identity must
have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.

The following operations are related to DeleteBucketPolicy: CreateBucket, DeleteObject. |
![]() |
DeleteBucketPolicyAsync(string, CancellationToken) |
This implementation of the DELETE action uses the policy subresource to delete the
policy of a specified bucket. If you are using an identity other than the root user
of the Amazon Web Services account that owns the bucket, the calling identity must
have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.

The following operations are related to DeleteBucketPolicy: CreateBucket, DeleteObject. |
![]() |
DeleteBucketPolicyAsync(DeleteBucketPolicyRequest, CancellationToken) |
This implementation of the DELETE action uses the policy subresource to delete the
policy of a specified bucket. If you are using an identity other than the root user
of the Amazon Web Services account that owns the bucket, the calling identity must
have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.

The following operations are related to DeleteBucketPolicy: CreateBucket, DeleteObject. |
![]() |
DeleteBucketReplication(DeleteBucketReplicationRequest) |
Deletes the replication configuration from the bucket.
To use this operation, you must have permissions to perform the s3:PutReplicationConfiguration action. The bucket owner has these permissions by default and can grant them to others.

It can take a while for the deletion of a replication configuration to fully propagate.

For information about replication configuration, see Replication in the Amazon S3 User Guide.

The following operations are related to DeleteBucketReplication: PutBucketReplication, GetBucketReplication. |
![]() |
DeleteBucketReplicationAsync(DeleteBucketReplicationRequest, CancellationToken) |
Deletes the replication configuration from the bucket.
To use this operation, you must have permissions to perform the s3:PutReplicationConfiguration action. The bucket owner has these permissions by default and can grant them to others.

It can take a while for the deletion of a replication configuration to fully propagate.

For information about replication configuration, see Replication in the Amazon S3 User Guide.

The following operations are related to DeleteBucketReplication: PutBucketReplication, GetBucketReplication. |
![]() |
DeleteBucketTagging(string) |
Deletes the tags from the bucket.
To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging: GetBucketTagging, PutBucketTagging. |
![]() |
DeleteBucketTagging(DeleteBucketTaggingRequest) |
Deletes the tags from the bucket.
To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging: GetBucketTagging, PutBucketTagging. |
![]() |
DeleteBucketTaggingAsync(string, CancellationToken) |
Deletes the tags from the bucket.
To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging: GetBucketTagging, PutBucketTagging. |
![]() |
DeleteBucketTaggingAsync(DeleteBucketTaggingRequest, CancellationToken) |
Deletes the tags from the bucket.
To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging: GetBucketTagging, PutBucketTagging. |
![]() |
DeleteBucketWebsite(string) |
This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will also get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite: GetBucketWebsite, PutBucketWebsite. |
![]() |
DeleteBucketWebsite(DeleteBucketWebsiteRequest) |
This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will also get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite: GetBucketWebsite, PutBucketWebsite. |
![]() |
DeleteBucketWebsiteAsync(string, CancellationToken) |
This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will also get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite: GetBucketWebsite, PutBucketWebsite. |
![]() |
DeleteBucketWebsiteAsync(DeleteBucketWebsiteRequest, CancellationToken) |
This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will also get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite: GetBucketWebsite, PutBucketWebsite. |
![]() |
DeleteCORSConfiguration(string) | |
![]() |
DeleteCORSConfiguration(DeleteCORSConfigurationRequest) | |
![]() |
DeleteCORSConfigurationAsync(string, CancellationToken) | |
![]() |
DeleteCORSConfigurationAsync(DeleteCORSConfigurationRequest, CancellationToken) | |
![]() |
DeleteLifecycleConfiguration(string) |
Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.
To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

For more information about the object expiration, see Elements to Describe Lifecycle Actions.

Related actions include: PutBucketLifecycleConfiguration, GetBucketLifecycleConfiguration. |
![]() |
DeleteLifecycleConfiguration(DeleteLifecycleConfigurationRequest) |
Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.
To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

For more information about the object expiration, see Elements to Describe Lifecycle Actions.

Related actions include: PutBucketLifecycleConfiguration, GetBucketLifecycleConfiguration. |
![]() |
DeleteLifecycleConfigurationAsync(string, CancellationToken) |
Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.
To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

For more information about the object expiration, see Elements to Describe Lifecycle Actions.

Related actions include: PutBucketLifecycleConfiguration, GetBucketLifecycleConfiguration. |
![]() |
DeleteLifecycleConfigurationAsync(DeleteLifecycleConfigurationRequest, CancellationToken) |
Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.
To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

For more information about the object expiration, see Elements to Describe Lifecycle Actions.

Related actions include: PutBucketLifecycleConfiguration, GetBucketLifecycleConfiguration. |
![]() |
DeleteObject(string, string) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
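The versioning behavior described above can be modeled as a toy decision function (hypothetical Python sketch, not SDK code):

```python
def model_delete(versioning_enabled, version_id=None):
    """Toy model of DELETE Object: without a versionId, a
    versioning-enabled bucket gets a delete marker as its new latest
    version (no data is removed); with a versionId, that specific version
    is permanently deleted; without versioning, the object is removed."""
    if version_id is not None:
        return "permanently-delete-version"
    if versioning_enabled:
        return "insert-delete-marker"
    return "permanently-delete"
```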
![]() |
DeleteObject(string, string, string) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
![]() |
DeleteObject(DeleteObjectRequest) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
![]() |
DeleteObjectAsync(string, string, CancellationToken) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
![]() |
DeleteObjectAsync(string, string, string, CancellationToken) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
![]() |
DeleteObjectAsync(DeleteObjectRequest, CancellationToken) |
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

The following action is related to DeleteObject: PutObject. |
![]() |
DeleteObjects(DeleteObjectsRequest) |
This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead. The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete (success or failure) in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted. The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body. When performing this action on an MFA Delete enabled bucket that attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete. Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.
The following operations are related to DeleteObjects: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload |
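As an illustrative sketch (the bucket and key names below are placeholders, and newer .NET targets expose only the DeleteObjectsAsync variant), a Multi-Object Delete in verbose mode might look like:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

// Verbose mode (Quiet = false): the response lists the result for every key.
var request = new DeleteObjectsRequest
{
    BucketName = "amzn-s3-demo-bucket", // placeholder
    Quiet = false
};
request.AddKey("photos/2006/February/sample.jpg"); // placeholder keys
request.AddKey("photos/2006/March/sample.jpg");

DeleteObjectsResponse response = client.DeleteObjects(request);
Console.WriteLine($"Deleted {response.DeletedObjects.Count} objects, " +
                  $"{response.DeleteErrors.Count} errors.");
```

Setting Quiet = true would instead return only the keys whose delete encountered an error.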
![]() |
DeleteObjectsAsync(DeleteObjectsRequest, CancellationToken) |
This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead. The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete (success or failure) in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted. The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body. When performing this action on an MFA Delete enabled bucket that attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete. Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.
The following operations are related to DeleteObjects: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload |
![]() |
DeleteObjectTagging(DeleteObjectTaggingRequest) |
Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.
To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.
To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.
The following operations are related to DeleteObjectTagging: PutObjectTagging, GetObjectTagging |
![]() |
DeleteObjectTaggingAsync(DeleteObjectTaggingRequest, CancellationToken) |
Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.
To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.
To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.
The following operations are related to DeleteObjectTagging: PutObjectTagging, GetObjectTagging |
![]() |
DeletePublicAccessBlock(DeletePublicAccessBlockRequest) |
Removes the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission.
The following operations are related to DeletePublicAccessBlock: GetPublicAccessBlock, PutPublicAccessBlock, GetBucketPolicyStatus |
![]() |
DeletePublicAccessBlockAsync(DeletePublicAccessBlockRequest, CancellationToken) |
Removes the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission.
The following operations are related to DeletePublicAccessBlock: GetPublicAccessBlock, PutPublicAccessBlock, GetBucketPolicyStatus |
![]() |
Dispose() | Inherited from Amazon.Runtime.AmazonServiceClient. |
![]() |
GetACL(string) | |
![]() |
GetACL(GetACLRequest) | |
![]() |
GetACLAsync(string, CancellationToken) | |
![]() |
GetACLAsync(GetACLRequest, CancellationToken) | |
![]() |
GetBucketAccelerateConfiguration(string) | |
![]() |
GetBucketAccelerateConfiguration(GetBucketAccelerateConfigurationRequest) | |
![]() |
GetBucketAccelerateConfigurationAsync(string, CancellationToken) | |
![]() |
GetBucketAccelerateConfigurationAsync(GetBucketAccelerateConfigurationRequest, CancellationToken) | |
![]() |
GetBucketAnalyticsConfiguration(GetBucketAnalyticsConfigurationRequest) | |
![]() |
GetBucketAnalyticsConfigurationAsync(GetBucketAnalyticsConfigurationRequest, CancellationToken) | |
![]() |
GetBucketEncryption(GetBucketEncryptionRequest) |
Returns the default encryption configuration for an Amazon S3 bucket. If the bucket
does not have a default encryption configuration, GetBucketEncryption returns ServerSideEncryptionConfigurationNotFoundError. For information about the Amazon S3 default encryption feature, see Amazon S3 Default Bucket Encryption.
To use this operation, you must have permission to perform the s3:GetEncryptionConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetBucketEncryption: PutBucketEncryption, DeleteBucketEncryption |
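As a minimal sketch (the bucket name is a placeholder; newer .NET targets expose only the GetBucketEncryptionAsync variant), reading the default encryption rules might look like:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

GetBucketEncryptionResponse response = client.GetBucketEncryption(
    new GetBucketEncryptionRequest { BucketName = "amzn-s3-demo-bucket" }); // placeholder

// Each rule carries the default algorithm applied to new objects (e.g. AES256 or aws:kms).
foreach (var rule in response.ServerSideEncryptionConfiguration.ServerSideEncryptionRules)
{
    Console.WriteLine(rule.ServerSideEncryptionByDefault.ServerSideEncryptionAlgorithm);
}
```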
![]() |
GetBucketEncryptionAsync(GetBucketEncryptionRequest, CancellationToken) |
Returns the default encryption configuration for an Amazon S3 bucket. If the bucket
does not have a default encryption configuration, GetBucketEncryption returns ServerSideEncryptionConfigurationNotFoundError. For information about the Amazon S3 default encryption feature, see Amazon S3 Default Bucket Encryption.
To use this operation, you must have permission to perform the s3:GetEncryptionConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetBucketEncryption: PutBucketEncryption, DeleteBucketEncryption |
![]() |
GetBucketIntelligentTieringConfiguration(GetBucketIntelligentTieringConfigurationRequest) |
Gets the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to GetBucketIntelligentTieringConfiguration include: DeleteBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, ListBucketIntelligentTieringConfigurations |
![]() |
GetBucketIntelligentTieringConfigurationAsync(GetBucketIntelligentTieringConfigurationRequest, CancellationToken) |
Gets the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to GetBucketIntelligentTieringConfiguration include: DeleteBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, ListBucketIntelligentTieringConfigurations |
![]() |
GetBucketInventoryConfiguration(GetBucketInventoryConfigurationRequest) |
Returns an inventory configuration (identified by the inventory configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default and can grant it to others. For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to GetBucketInventoryConfiguration: DeleteBucketInventoryConfiguration, ListBucketInventoryConfigurations, PutBucketInventoryConfiguration |
![]() |
GetBucketInventoryConfigurationAsync(GetBucketInventoryConfigurationRequest, CancellationToken) |
Returns an inventory configuration (identified by the inventory configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default and can grant it to others. For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to GetBucketInventoryConfiguration: DeleteBucketInventoryConfiguration, ListBucketInventoryConfigurations, PutBucketInventoryConfiguration |
![]() |
GetBucketLocation(string) |
Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. To use this implementation of the operation, you must be the bucket owner. To use this API against an access point, provide the alias of the access point in place of the bucket name.
The following operations are related to GetBucketLocation: GetObject, CreateBucket |
![]() |
GetBucketLocation(GetBucketLocationRequest) |
Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. To use this implementation of the operation, you must be the bucket owner. To use this API against an access point, provide the alias of the access point in place of the bucket name.
The following operations are related to GetBucketLocation: GetObject, CreateBucket |
![]() |
GetBucketLocationAsync(string, CancellationToken) |
Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. To use this implementation of the operation, you must be the bucket owner. To use this API against an access point, provide the alias of the access point in place of the bucket name.
The following operations are related to GetBucketLocation: GetObject, CreateBucket |
![]() |
GetBucketLocationAsync(GetBucketLocationRequest, CancellationToken) |
Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. To use this implementation of the operation, you must be the bucket owner. To use this API against an access point, provide the alias of the access point in place of the bucket name.
The following operations are related to GetBucketLocation: GetObject, CreateBucket |
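A short sketch (placeholder bucket name; newer .NET targets expose only the GetBucketLocationAsync variant) of checking where a bucket lives:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

GetBucketLocationResponse response =
    client.GetBucketLocation("amzn-s3-demo-bucket"); // placeholder

// Note: buckets created in us-east-1 report an empty Location value.
Console.WriteLine(response.Location);
```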
![]() |
GetBucketLogging(string) |
Returns the logging status of a bucket and the permissions users have to view and modify that status. To use GET, you must be the bucket owner.
The following operations are related to GetBucketLogging: CreateBucket, PutBucketLogging |
![]() |
GetBucketLogging(GetBucketLoggingRequest) |
Returns the logging status of a bucket and the permissions users have to view and modify that status. To use GET, you must be the bucket owner.
The following operations are related to GetBucketLogging: CreateBucket, PutBucketLogging |
![]() |
GetBucketLoggingAsync(string, CancellationToken) |
Returns the logging status of a bucket and the permissions users have to view and modify that status. To use GET, you must be the bucket owner.
The following operations are related to GetBucketLogging: CreateBucket, PutBucketLogging |
![]() |
GetBucketLoggingAsync(GetBucketLoggingRequest, CancellationToken) |
Returns the logging status of a bucket and the permissions users have to view and modify that status. To use GET, you must be the bucket owner.
The following operations are related to GetBucketLogging: CreateBucket, PutBucketLogging |
![]() |
GetBucketMetricsConfiguration(GetBucketMetricsConfigurationRequest) |
Gets a metrics configuration (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default and can grant it to others. For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to GetBucketMetricsConfiguration: PutBucketMetricsConfiguration, DeleteBucketMetricsConfiguration, ListBucketMetricsConfigurations |
![]() |
GetBucketMetricsConfigurationAsync(GetBucketMetricsConfigurationRequest, CancellationToken) |
Gets a metrics configuration (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default and can grant it to others. For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to GetBucketMetricsConfiguration: PutBucketMetricsConfiguration, DeleteBucketMetricsConfiguration, ListBucketMetricsConfigurations |
![]() |
GetBucketNotification(string) |
Returns the notification configuration of a bucket.
If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.
By default, you must be the bucket owner to read the notification configuration of
a bucket. However, the bucket owner can use a bucket policy to grant permission to
other users to read this configuration with the s3:GetBucketNotification permission. For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.
The following action is related to GetBucketNotification: PutBucketNotification |
![]() |
GetBucketNotification(GetBucketNotificationRequest) |
Returns the notification configuration of a bucket.
If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.
By default, you must be the bucket owner to read the notification configuration of
a bucket. However, the bucket owner can use a bucket policy to grant permission to
other users to read this configuration with the s3:GetBucketNotification permission. For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.
The following action is related to GetBucketNotification: PutBucketNotification |
![]() |
GetBucketNotificationAsync(string, CancellationToken) |
Returns the notification configuration of a bucket.
If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.
By default, you must be the bucket owner to read the notification configuration of
a bucket. However, the bucket owner can use a bucket policy to grant permission to
other users to read this configuration with the s3:GetBucketNotification permission. For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.
The following action is related to GetBucketNotification: PutBucketNotification |
![]() |
GetBucketNotificationAsync(GetBucketNotificationRequest, CancellationToken) |
Returns the notification configuration of a bucket.
If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.
By default, you must be the bucket owner to read the notification configuration of
a bucket. However, the bucket owner can use a bucket policy to grant permission to
other users to read this configuration with the s3:GetBucketNotification permission. For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.
The following action is related to GetBucketNotification: PutBucketNotification |
![]() |
GetBucketOwnershipControls(GetBucketOwnershipControlsRequest) |
Retrieves OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketOwnershipControls permission. For information about Amazon S3 Object Ownership, see Using Object Ownership.
The following operations are related to GetBucketOwnershipControls: PutBucketOwnershipControls, DeleteBucketOwnershipControls |
![]() |
GetBucketOwnershipControlsAsync(GetBucketOwnershipControlsRequest, CancellationToken) |
Retrieves OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketOwnershipControls permission. For information about Amazon S3 Object Ownership, see Using Object Ownership.
The following operations are related to GetBucketOwnershipControls: PutBucketOwnershipControls, DeleteBucketOwnershipControls |
![]() |
GetBucketPolicy(string) |
Returns the policy of a specified bucket. If you are using an identity other than
the root user of the Amazon Web Services account that owns the bucket, the calling
identity must have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.
The following action is related to GetBucketPolicy: GetObject |
![]() |
GetBucketPolicy(GetBucketPolicyRequest) |
Returns the policy of a specified bucket. If you are using an identity other than
the root user of the Amazon Web Services account that owns the bucket, the calling
identity must have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.
The following action is related to GetBucketPolicy: GetObject |
![]() |
GetBucketPolicyAsync(string, CancellationToken) |
Returns the policy of a specified bucket. If you are using an identity other than
the root user of the Amazon Web Services account that owns the bucket, the calling
identity must have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.
The following action is related to GetBucketPolicy: GetObject |
![]() |
GetBucketPolicyAsync(GetBucketPolicyRequest, CancellationToken) |
Returns the policy of a specified bucket. If you are using an identity other than
the root user of the Amazon Web Services account that owns the bucket, the calling
identity must have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information about bucket policies, see Using Bucket Policies and User Policies.
The following action is related to GetBucketPolicy: GetObject |
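A brief sketch of the async variant (placeholder bucket name; requires an identity in the bucket owner's account with GetBucketPolicy permissions, as described above):

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

GetBucketPolicyResponse response = await client.GetBucketPolicyAsync(
    new GetBucketPolicyRequest { BucketName = "amzn-s3-demo-bucket" }); // placeholder

// The bucket policy is returned as a JSON document string.
Console.WriteLine(response.Policy);
```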
![]() |
GetBucketPolicyStatus(GetBucketPolicyStatusRequest) |
Retrieves the policy status for an Amazon S3 bucket, indicating whether the bucket
is public. In order to use this operation, you must have the s3:GetBucketPolicyStatus permission. For more information about when Amazon S3 considers a bucket public, see The Meaning of "Public".
The following operations are related to GetBucketPolicyStatus: GetPublicAccessBlock, PutPublicAccessBlock, DeletePublicAccessBlock |
![]() |
GetBucketPolicyStatusAsync(GetBucketPolicyStatusRequest, CancellationToken) |
Retrieves the policy status for an Amazon S3 bucket, indicating whether the bucket
is public. In order to use this operation, you must have the s3:GetBucketPolicyStatus permission. For more information about when Amazon S3 considers a bucket public, see The Meaning of "Public".
The following operations are related to GetBucketPolicyStatus: GetPublicAccessBlock, PutPublicAccessBlock, DeletePublicAccessBlock |
![]() |
GetBucketReplication(GetBucketReplicationRequest) |
Retrieves the replication configuration for the given Amazon S3 bucket. |
![]() |
GetBucketReplicationAsync(GetBucketReplicationRequest, CancellationToken) |
Retrieves the replication configuration for the given Amazon S3 bucket. |
![]() |
GetBucketRequestPayment(string) |
Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.
The following operations are related to GetBucketRequestPayment: ListObjects |
![]() |
GetBucketRequestPayment(GetBucketRequestPaymentRequest) |
Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.
The following operations are related to GetBucketRequestPayment: ListObjects |
![]() |
GetBucketRequestPaymentAsync(string, CancellationToken) |
Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.
The following operations are related to GetBucketRequestPayment: ListObjects |
![]() |
GetBucketRequestPaymentAsync(GetBucketRequestPaymentRequest, CancellationToken) |
Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.
The following operations are related to GetBucketRequestPayment: ListObjects |
![]() |
GetBucketTagging(GetBucketTaggingRequest) |
Returns the tag set associated with the bucket.
To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant it to others.
The following operations are related to GetBucketTagging: PutBucketTagging, DeleteBucketTagging |
![]() |
GetBucketTaggingAsync(GetBucketTaggingRequest, CancellationToken) |
Returns the tag set associated with the bucket.
To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant it to others.
The following operations are related to GetBucketTagging: PutBucketTagging, DeleteBucketTagging |
![]() |
GetBucketVersioning(string) |
Returns the versioning state of a bucket. To retrieve the versioning state of a bucket, you must be the bucket owner.
This implementation also returns the MFA Delete status of the versioning state. If
the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
The following operations are related to GetBucketVersioning: GetObject, PutObject, DeleteObject |
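A minimal sketch (placeholder bucket name; newer .NET targets expose only the GetBucketVersioningAsync variant) of inspecting the versioning state:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

GetBucketVersioningResponse response =
    client.GetBucketVersioning("amzn-s3-demo-bucket"); // placeholder

// Status is Off for a bucket that has never had versioning enabled,
// otherwise Enabled or Suspended.
Console.WriteLine(response.VersioningConfig.Status);
```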
![]() |
GetBucketVersioning(GetBucketVersioningRequest) |
Returns the versioning state of a bucket. To retrieve the versioning state of a bucket, you must be the bucket owner.
This implementation also returns the MFA Delete status of the versioning state. If
the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
The following operations are related to GetBucketVersioning: GetObject, PutObject, DeleteObject |
![]() |
GetBucketVersioningAsync(string, CancellationToken) |
Returns the versioning state of a bucket. To retrieve the versioning state of a bucket, you must be the bucket owner.
This implementation also returns the MFA Delete status of the versioning state. If
the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
The following operations are related to GetBucketVersioning: GetObject, PutObject, DeleteObject |
![]() |
GetBucketVersioningAsync(GetBucketVersioningRequest, CancellationToken) |
Returns the versioning state of a bucket. To retrieve the versioning state of a bucket, you must be the bucket owner.
This implementation also returns the MFA Delete status of the versioning state. If
the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
The following operations are related to GetBucketVersioning: GetObject, PutObject, DeleteObject |
![]() |
GetBucketWebsite(string) |
Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.
This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.
The following operations are related to GetBucketWebsite: DeleteBucketWebsite, PutBucketWebsite |
![]() |
GetBucketWebsite(GetBucketWebsiteRequest) |
Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.
This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.
The following operations are related to GetBucketWebsite: DeleteBucketWebsite, PutBucketWebsite |
![]() |
GetBucketWebsiteAsync(string, CancellationToken) |
Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.
This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.
The following operations are related to GetBucketWebsite: DeleteBucketWebsite, PutBucketWebsite |
![]() |
GetBucketWebsiteAsync(GetBucketWebsiteRequest, CancellationToken) |
Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.
This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.
The following operations are related to GetBucketWebsite: DeleteBucketWebsite, PutBucketWebsite |
![]() |
GetCORSConfiguration(string) |
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others. For more information about CORS, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetCORSConfiguration: PutCORSConfiguration, DeleteCORSConfiguration |
![]() |
GetCORSConfiguration(GetCORSConfigurationRequest) |
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others. For more information about CORS, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetCORSConfiguration: PutCORSConfiguration, DeleteCORSConfiguration |
![]() |
GetCORSConfigurationAsync(string, CancellationToken) |
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others. For more information about CORS, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetCORSConfiguration: PutCORSConfiguration, DeleteCORSConfiguration |
![]() |
GetCORSConfigurationAsync(GetCORSConfigurationRequest, CancellationToken) |
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others. For more information about CORS, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetCORSConfiguration: PutCORSConfiguration, DeleteCORSConfiguration |
![]() |
GetLifecycleConfiguration(string) |
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The response describes the new filter element that
you can use to specify a filter to select a subset of objects to which the rule applies.
If you are using a previous version of the lifecycle configuration, it still works.
For the earlier action, see GetBucketLifecycle.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetLifecycleConfiguration: GetBucketLifecycle, PutLifecycleConfiguration, DeleteLifecycleConfiguration |
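As a short sketch (placeholder bucket name; newer .NET targets expose only the GetLifecycleConfigurationAsync variant), enumerating lifecycle rules might look like:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

GetLifecycleConfigurationResponse response =
    client.GetLifecycleConfiguration("amzn-s3-demo-bucket"); // placeholder

// Each rule reports its identifier and whether it is Enabled or Disabled.
foreach (LifecycleRule rule in response.Configuration.Rules)
{
    Console.WriteLine($"{rule.Id}: {rule.Status}");
}
```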
![]() |
GetLifecycleConfiguration(GetLifecycleConfigurationRequest) |
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The response describes the new filter element that
you can use to specify a filter to select a subset of objects to which the rule applies.
If you are using a previous version of the lifecycle configuration, it still works.
For the earlier action, see GetBucketLifecycle.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetLifecycleConfiguration: GetBucketLifecycle, PutLifecycleConfiguration, DeleteLifecycleConfiguration |
![]() |
GetLifecycleConfigurationAsync(string, CancellationToken) |
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The response describes the new filter element that
you can use to specify a filter to select a subset of objects to which the rule applies.
If you are using a previous version of the lifecycle configuration, it still works.
For the earlier action, see GetBucketLifecycle.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetLifecycleConfiguration: GetBucketLifecycle, PutLifecycleConfiguration, DeleteLifecycleConfiguration |
![]() |
GetLifecycleConfigurationAsync(GetLifecycleConfigurationRequest, CancellationToken) |
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The response describes the new filter element that
you can use to specify a filter to select a subset of objects to which the rule applies.
If you are using a previous version of the lifecycle configuration, it still works.
For the earlier action, see GetBucketLifecycle.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant it to others.
The following operations are related to GetLifecycleConfiguration: GetBucketLifecycle, PutLifecycleConfiguration, DeleteLifecycleConfiguration |
![]() |
GetObject(string, string) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, before you can retrieve the object you must first restore a copy using RestoreObject.
Otherwise, this action returns an InvalidObjectState error. For information about restoring archived objects, see Restoring Archived Objects.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header that provides the count of tags associated with the object. Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying
Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning. Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both of the If-Match and If-Unmodified-Since headers are present in the request as follows: If-Match condition evaluates to true, and If-Unmodified-Since condition evaluates to false; then Amazon S3 returns 200 OK and the data requested.
If both of the If-None-Match and If-Modified-Since headers are present in the request as follows: If-None-Match condition evaluates to false, and If-Modified-Since condition evaluates to true; then Amazon S3 returns a 304 Not Modified response code. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl |
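As a hedged sketch (bucket and key names are placeholders; newer .NET targets expose only the GetObjectAsync variant), a basic download might look like:

```csharp
using System;
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client(RegionEndpoint.USEast1);

// Dispose the response to release the underlying HTTP stream.
using (GetObjectResponse response =
           client.GetObject("amzn-s3-demo-bucket",               // placeholder
                            "photos/2006/February/sample.jpg"))  // placeholder
using (var reader = new StreamReader(response.ResponseStream))
{
    string contents = reader.ReadToEnd();
    Console.WriteLine($"Read {contents.Length} characters.");
}
```

For binary objects, response.WriteResponseStreamToFile(path) avoids reading the stream as text.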
![]() |
GetObject(string, string, string) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, before you can retrieve the object you must first restore a copy using RestoreObject.
Otherwise, this action returns an InvalidObjectState error. For information about restoring archived objects, see Restoring Archived Objects.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header that provides the count of tags associated with the object. Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying
Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning. Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both of the If-Match and If-Unmodified-Since headers are present in the request as follows: If-Match condition evaluates to true, and If-Unmodified-Since condition evaluates to false; then Amazon S3 returns 200 OK and the data requested.
If both of the If-None-Match and If-Modified-Since headers are present in the request as follows: If-None-Match condition evaluates to false, and If-Modified-Since condition evaluates to true; then Amazon S3 returns a 304 Not Modified response code. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl |
![]() |
GetObject(GetObjectRequest) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, you must first restore a copy using RestoreObject before you can retrieve the object.
Otherwise, this action returns an InvalidObjectState error.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or Amazon S3 managed encryption keys (SSE-S3); if your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header, which provides the count of tags associated with the object. You can use GetObjectTagging to retrieve the tag set.
Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning.
Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters:
response-content-type, response-content-language, response-expires, response-cache-control,
response-content-disposition, and response-content-encoding.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both the If-Match and If-Unmodified-Since headers are present in the request, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested.
If both the If-None-Match and If-Modified-Since headers are present in the request, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl. |
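As a minimal sketch of how this operation is typically called from the SDK (the bucket name, key, and local file path below are hypothetical, not taken from this reference):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class GetObjectSketch
{
    static async Task Main()
    {
        // Region is an assumption; pick the region that hosts your bucket.
        var client = new AmazonS3Client(RegionEndpoint.USWest2);

        var request = new GetObjectRequest
        {
            BucketName = "amzn-s3-demo-bucket",       // hypothetical bucket
            Key = "photos/2006/February/sample.jpg"   // full key name in the logical hierarchy
        };

        using (GetObjectResponse response = await client.GetObjectAsync(request))
        {
            // Stream the object body to a local file without buffering it in memory.
            await response.WriteResponseStreamToFileAsync(
                "sample.jpg", append: false, CancellationToken.None);
        }
    }
}
```

The request's VersionId property corresponds to the versionId subresource described above, and ResponseHeaderOverrides corresponds to the response-* query parameters.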
![]() |
GetObjectAsync(string, string, CancellationToken) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, you must first restore a copy using RestoreObject before you can retrieve the object.
Otherwise, this action returns an InvalidObjectState error.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or Amazon S3 managed encryption keys (SSE-S3); if your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header, which provides the count of tags associated with the object. You can use GetObjectTagging to retrieve the tag set.
Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning.
Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters:
response-content-type, response-content-language, response-expires, response-cache-control,
response-content-disposition, and response-content-encoding.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both the If-Match and If-Unmodified-Since headers are present in the request, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested.
If both the If-None-Match and If-Modified-Since headers are present in the request, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl. |
![]() |
GetObjectAsync(string, string, string, CancellationToken) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, you must first restore a copy using RestoreObject before you can retrieve the object.
Otherwise, this action returns an InvalidObjectState error.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or Amazon S3 managed encryption keys (SSE-S3); if your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header, which provides the count of tags associated with the object. You can use GetObjectTagging to retrieve the tag set.
Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning.
Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters:
response-content-type, response-content-language, response-expires, response-cache-control,
response-content-disposition, and response-content-encoding.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both the If-Match and If-Unmodified-Since headers are present in the request, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested.
If both the If-None-Match and If-Modified-Since headers are present in the request, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl. |
![]() |
GetObjectAsync(GetObjectRequest, CancellationToken) |
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical
computer file system. You can, however, create a logical hierarchy by using object
key names that imply a folder structure. For example, instead of naming an object
sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the
object in the GET operation. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive
storage class, or the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive
tiers, you must first restore a copy using RestoreObject before you can retrieve the object.
Otherwise, this action returns an InvalidObjectState error.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with KMS keys (SSE-KMS) or Amazon S3 managed encryption keys (SSE-S3); if your object does use these types of keys, you'll get an HTTP 400 Bad Request error. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns
the x-amz-tagging-count header, which provides the count of tags associated with the object. You can use GetObjectTagging to retrieve the tag set.
Permissions
You need the relevant read object (or version) permission for this operation. For
more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon
S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET action returns the current version of an object. To return a different
version, use the versionId subresource.
For more information about versioning, see PutBucketVersioning.
Overriding Response Header Values
There are times when you want to override certain response header values in a GET
response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters:
response-content-type, response-content-language, response-expires, response-cache-control,
response-content-disposition, and response-content-encoding.
These response header values are sent only on a successful request, that is, when
status code 200 OK is returned. The set of headers you can override using these parameters
is a subset of the headers that Amazon S3 accepts when you create an object. The response
headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
Additional Considerations about Request Headers
If both the If-Match and If-Unmodified-Since headers are present in the request, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested.
If both the If-None-Match and If-Modified-Since headers are present in the request, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response. For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject: ListBuckets, GetObjectAcl. |
![]() |
GetObjectAttributes(GetObjectAttributesRequest) |
Retrieves all the metadata from an object without returning the object itself. This
action is useful if you're interested only in an object's metadata. To use GetObjectAttributes, you must have READ access to the object.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions
The permissions that you need to use this operation depend on whether the bucket is
versioned. If the bucket is versioned, you need both the s3:GetObjectVersion and s3:GetObjectVersionAttributes permissions for this operation. If the bucket is not versioned, you need the s3:GetObject and s3:GetObjectAttributes permissions.
The following actions are related to GetObjectAttributes: GetObject, GetObjectAcl, GetObjectLegalHold, GetObjectLockConfiguration, GetObjectRetention, GetObjectTagging, HeadObject, ListParts. |
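A short sketch of requesting a subset of attributes with this operation (bucket and key names are hypothetical; only the attributes listed in the request are returned):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class GetObjectAttributesSketch
{
    static async Task Run(IAmazonS3 client)
    {
        var response = await client.GetObjectAttributesAsync(new GetObjectAttributesRequest
        {
            BucketName = "amzn-s3-demo-bucket", // hypothetical bucket
            Key = "sample.jpg",                 // hypothetical key
            // Only the attributes named here are populated in the response.
            ObjectAttributes = new List<ObjectAttributes>
            {
                ObjectAttributes.ETag,
                ObjectAttributes.ObjectSize,
                ObjectAttributes.StorageClass
            }
        });

        System.Console.WriteLine($"{response.ETag} {response.ObjectSize} {response.StorageClass}");
    }
}
```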
![]() |
GetObjectAttributesAsync(GetObjectAttributesRequest, CancellationToken) |
Retrieves all the metadata from an object without returning the object itself. This
action is useful if you're interested only in an object's metadata. To use GetObjectAttributes, you must have READ access to the object.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions
The permissions that you need to use this operation depend on whether the bucket is
versioned. If the bucket is versioned, you need both the s3:GetObjectVersion and s3:GetObjectVersionAttributes permissions for this operation. If the bucket is not versioned, you need the s3:GetObject and s3:GetObjectAttributes permissions.
The following actions are related to GetObjectAttributes: GetObject, GetObjectAcl, GetObjectLegalHold, GetObjectLockConfiguration, GetObjectRetention, GetObjectTagging, HeadObject, ListParts. |
![]() |
GetObjectLegalHold(GetObjectLegalHoldRequest) |
Gets an object's current legal hold status. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectLegalHold: GetObjectAttributes. |
![]() |
GetObjectLegalHoldAsync(GetObjectLegalHoldRequest, CancellationToken) |
Gets an object's current legal hold status. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectLegalHold: GetObjectAttributes. |
![]() |
GetObjectLockConfiguration(GetObjectLockConfigurationRequest) |
Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.
The following action is related to GetObjectLockConfiguration: GetObjectAttributes. |
![]() |
GetObjectLockConfigurationAsync(GetObjectLockConfigurationRequest, CancellationToken) |
Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.
The following action is related to GetObjectLockConfiguration: GetObjectAttributes. |
![]() |
GetObjectMetadata(string, string) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
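A minimal sketch of a HEAD-style metadata read through the SDK (bucket and key are hypothetical; no object body is downloaded):

```csharp
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class HeadObjectSketch
{
    static async Task Run(IAmazonS3 client)
    {
        // GetObjectMetadata issues a HEAD request, so only headers come back.
        GetObjectMetadataResponse meta = await client.GetObjectMetadataAsync(
            "amzn-s3-demo-bucket", "sample.jpg"); // hypothetical bucket and key

        System.Console.WriteLine(meta.Headers.ContentType);
        System.Console.WriteLine(meta.Headers.ContentLength);
        System.Console.WriteLine(meta.LastModified);
        System.Console.WriteLine(meta.ETag);
    }
}
```

Because HEAD has no response body, a missing object surfaces here only as a generic 404/403 exception, matching the behavior described above.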
![]() |
GetObjectMetadata(string, string, string) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
![]() |
GetObjectMetadata(GetObjectMetadataRequest) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
![]() |
GetObjectMetadataAsync(string, string, CancellationToken) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
![]() |
GetObjectMetadataAsync(string, string, string, CancellationToken) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
![]() |
GetObjectMetadataAsync(GetObjectMetadataRequest, CancellationToken) |
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body; because of this, if the HEAD request generates an error, it returns a generic 404 Not Found or 403 Forbidden code, and it is not possible to retrieve the exact exception beyond these error codes. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Consider the following when using request headers: if both the If-Match and If-Unmodified-Since headers are present, and If-Match evaluates to true while If-Unmodified-Since evaluates to false, Amazon S3 returns 200 OK and the data requested; if both the If-None-Match and If-Modified-Since headers are present, and If-None-Match evaluates to false while If-Modified-Since evaluates to true, Amazon S3 returns a 304 Not Modified response.
For more information about conditional requests, see RFC 7232. Permissions You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission: if you have it, Amazon S3 returns an HTTP status code 404 ("no such key") error; if you do not, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following actions are related to HeadObject: GetObject, GetObjectAttributes. |
![]() |
GetObjectRetention(GetObjectRetentionRequest) |
Retrieves an object's retention settings. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectRetention: GetObjectAttributes. |
![]() |
GetObjectRetentionAsync(GetObjectRetentionRequest, CancellationToken) |
Retrieves an object's retention settings. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectRetention: GetObjectAttributes. |
![]() |
GetObjectTagging(GetObjectTaggingRequest) |
Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.
To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others. For information about the Amazon S3 object tagging feature, see Object Tagging.
The following actions are related to GetObjectTagging: DeleteObjectTagging, GetObjectAttributes, PutObjectTagging. |
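A short sketch of reading an object's tag-set through the SDK (bucket and key names are hypothetical):

```csharp
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class GetObjectTaggingSketch
{
    static async Task Run(IAmazonS3 client)
    {
        var response = await client.GetObjectTaggingAsync(new GetObjectTaggingRequest
        {
            BucketName = "amzn-s3-demo-bucket", // hypothetical bucket
            Key = "sample.jpg"                  // hypothetical key
            // Set VersionId here to read the tag-set of a specific object version.
        });

        foreach (Tag tag in response.Tagging)
            System.Console.WriteLine($"{tag.Key} = {tag.Value}");
    }
}
```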
![]() |
GetObjectTaggingAsync(GetObjectTaggingRequest, CancellationToken) |
Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.
To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others. For information about the Amazon S3 object tagging feature, see Object Tagging.
The following actions are related to GetObjectTagging: DeleteObjectTagging, GetObjectAttributes, PutObjectTagging. |
![]() |
GetObjectTorrent(string, string) |
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're
distributing large files. For more information about BitTorrent, see Using
BitTorrent with Amazon S3.
You can get torrent only for objects that are less than 5 GB in size, and that are
not encrypted using server-side encryption with a customer-provided encryption key.
To use GET, you must have READ access to the object. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectTorrent: GetObject. |
![]() |
GetObjectTorrent(GetObjectTorrentRequest) |
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're
distributing large files. For more information about BitTorrent, see Using
BitTorrent with Amazon S3.
You can get torrent only for objects that are less than 5 GB in size, and that are
not encrypted using server-side encryption with a customer-provided encryption key.
To use GET, you must have READ access to the object. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectTorrent: GetObject. |
![]() |
GetObjectTorrentAsync(string, string, CancellationToken) |
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're
distributing large files. For more information about BitTorrent, see Using
BitTorrent with Amazon S3.
You can get torrent only for objects that are less than 5 GB in size, and that are
not encrypted using server-side encryption with a customer-provided encryption key.
To use GET, you must have READ access to the object. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectTorrent: GetObject. |
![]() |
GetObjectTorrentAsync(GetObjectTorrentRequest, CancellationToken) |
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're
distributing large files. For more information about BitTorrent, see Using
BitTorrent with Amazon S3.
You can get torrent only for objects that are less than 5 GB in size, and that are
not encrypted using server-side encryption with a customer-provided encryption key.
To use GET, you must have READ access to the object. This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectTorrent: GetObject. |
![]() |
GetPreSignedURL(GetPreSignedUrlRequest) |
Creates a signed URL that allows access to a resource that would usually require authentication. |
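A minimal sketch of generating a presigned GET URL (bucket, key, region, and expiry are all hypothetical choices). Note that GetPreSignedURL signs the URL locally without calling S3:

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class PreSignedUrlSketch
{
    static void Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USWest2); // region is an assumption

        var request = new GetPreSignedUrlRequest
        {
            BucketName = "amzn-s3-demo-bucket",       // hypothetical bucket
            Key = "sample.jpg",                       // hypothetical key
            Verb = HttpVerb.GET,
            Expires = DateTime.UtcNow.AddMinutes(15), // URL stops working after this time
            // Response-header overrides are baked into the signed URL.
            ResponseHeaderOverrides = new ResponseHeaderOverrides
            {
                ContentDisposition = "attachment; filename=sample.jpg"
            }
        };

        string url = client.GetPreSignedURL(request);
        Console.WriteLine(url);
    }
}
```

Because response-header override parameters must be signed, a presigned URL like this is the usual way to hand them to an otherwise anonymous caller.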
![]() |
GetPublicAccessBlock(GetPublicAccessBlockRequest) |
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings differ between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings. For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".
The following operations are related to GetPublicAccessBlock: Using Amazon S3 Block Public Access, PutPublicAccessBlock, GetPublicAccessBlock, DeletePublicAccessBlock. |
![]() |
GetPublicAccessBlockAsync(GetPublicAccessBlockRequest, CancellationToken) |
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings differ between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings. For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".
The following operations are related to GetPublicAccessBlock: Using Amazon S3 Block Public Access, PutPublicAccessBlock, GetPublicAccessBlock, DeletePublicAccessBlock. |
![]() |
InitiateMultipartUpload(string, string) |
This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview.
If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.
For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.
For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually; there is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).
After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon
S3 encrypts your data as it writes it to disks in its data centers and decrypts it
when you access it. You can provide your own encryption key, or use Amazon Web Services
KMS keys or Amazon S3 managed encryption keys. If you choose to provide your own encryption
key, the request headers you provide in UploadPart
and UploadPartCopy
requests must match the headers you used in the request to initiate the upload by
using InitiateMultipartUpload.
To perform a multipart upload with encryption using an Amazon Web Services KMS key,
the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role. For more information, see Protecting Data Using Server-Side Encryption.
The following operations are related to InitiateMultipartUpload: UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads. |
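The initiate / upload-parts / complete (or abort) lifecycle described above can be sketched as follows; the bucket, key, and local file are hypothetical, and a real upload would loop over many parts:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class MultipartSketch
{
    static async Task Run(IAmazonS3 client)
    {
        // 1. Initiate: the returned UploadId ties all later part uploads together.
        InitiateMultipartUploadResponse init = await client.InitiateMultipartUploadAsync(
            new InitiateMultipartUploadRequest
            {
                BucketName = "amzn-s3-demo-bucket", // hypothetical bucket
                Key = "big-file.bin"                // hypothetical key
            });

        try
        {
            // 2. Upload parts, quoting the same UploadId in each request.
            UploadPartResponse part1 = await client.UploadPartAsync(new UploadPartRequest
            {
                BucketName = "amzn-s3-demo-bucket",
                Key = "big-file.bin",
                UploadId = init.UploadId,
                PartNumber = 1,
                FilePath = "big-file.bin" // hypothetical local file
            });

            // 3. Complete the upload so that S3 assembles the parts and stops
            //    charging for the stored part data.
            await client.CompleteMultipartUploadAsync(new CompleteMultipartUploadRequest
            {
                BucketName = "amzn-s3-demo-bucket",
                Key = "big-file.bin",
                UploadId = init.UploadId,
                PartETags = new List<PartETag> { new PartETag(1, part1.ETag) }
            });
        }
        catch
        {
            // Abort on failure; otherwise the uploaded parts keep accruing charges.
            await client.AbortMultipartUploadAsync(
                "amzn-s3-demo-bucket", "big-file.bin", init.UploadId);
            throw;
        }
    }
}
```

For most applications, the higher-level Amazon.S3.Transfer.TransferUtility class handles this part bookkeeping automatically.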
![]() |
InitiateMultipartUpload(InitiateMultipartUploadRequest) |
This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview.
If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.
For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.
For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually; there is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).
After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon
S3 encrypts your data as it writes it to disks in its data centers and decrypts it
when you access it. You can provide your own encryption key, or use Amazon Web Services
KMS keys or Amazon S3 managed encryption keys. If you choose to provide your own encryption
key, the request headers you provide in UploadPart
and UploadPartCopy
requests must match the headers you used in the request to initiate the upload by
using InitiateMultipartUpload.
To perform a multipart upload with encryption using an Amazon Web Services KMS key,
the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role. For more information, see Protecting Data Using Server-Side Encryption.
The following operations are related to InitiateMultipartUpload: UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads. |
![]() |
InitiateMultipartUploadAsync(string, string, CancellationToken) |
This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview. If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy. For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions. For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4). After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stop charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon
S3 encrypts your data as it writes it to disks in its data centers and decrypts it
when you access it. You can provide your own encryption key, or use Amazon Web Services
KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption
key, the request headers you provide in UploadPart
and UploadPartCopy
requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.
To perform a multipart upload with encryption using an Amazon Web Services KMS key,
the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role. For more information, see Protecting Data Using Server-Side Encryption.
The following operations are related to InitiateMultipartUpload: UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads. |
![]() |
InitiateMultipartUploadAsync(InitiateMultipartUploadRequest, CancellationToken) |
This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview. If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy. For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions. For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4). After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon
S3 encrypts your data as it writes it to disks in its data centers and decrypts it
when you access it. You can provide your own encryption key, or use Amazon Web Services
KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption
key, the request headers you provide in UploadPart
and UploadPartCopy
requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.
To perform a multipart upload with encryption using an Amazon Web Services KMS key,
the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role. For more information, see Protecting Data Using Server-Side Encryption.
The following operations are related to InitiateMultipartUpload: UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads. |
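Putting the initiate/upload-parts/complete steps above together, the flow might be sketched as the following console program. This is an illustration rather than SDK documentation: the bucket name, key, and file path are placeholders, a single 5 MB part stands in for a real part loop, and error handling plus the AbortMultipartUpload fallback are omitted.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class MultipartUploadSketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);

        // 1. Initiate the upload and capture the upload ID.
        var initResponse = await client.InitiateMultipartUploadAsync(
            new InitiateMultipartUploadRequest
            {
                BucketName = "amzn-s3-demo-bucket", // placeholder
                Key = "big-file.dat"                // placeholder
            });

        // 2. Upload parts; every part except the last must be at least 5 MB.
        var partResponses = new List<UploadPartResponse>();
        partResponses.Add(await client.UploadPartAsync(new UploadPartRequest
        {
            BucketName = "amzn-s3-demo-bucket",
            Key = "big-file.dat",
            UploadId = initResponse.UploadId,
            PartNumber = 1,
            FilePath = "big-file.dat",              // placeholder local file
            FilePosition = 0,
            PartSize = 5 * 1024 * 1024
        }));

        // 3. Complete (or abort) so Amazon S3 stops charging for stored parts.
        var completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = "amzn-s3-demo-bucket",
            Key = "big-file.dat",
            UploadId = initResponse.UploadId
        };
        completeRequest.AddPartETags(partResponses); // collects each part's ETag
        await client.CompleteMultipartUploadAsync(completeRequest);
    }
}
```

Note that the same client methods listed in this table (InitiateMultipartUploadAsync, CompleteMultipartUploadAsync) are used; only the sample names and sizes are invented for the sketch.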
![]() |
ListBucketAnalyticsConfigurations(ListBucketAnalyticsConfigurationsRequest) |
Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. You should always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to ListBucketAnalyticsConfigurations: GetBucketAnalyticsConfiguration, DeleteBucketAnalyticsConfiguration, PutBucketAnalyticsConfiguration. |
![]() |
ListBucketAnalyticsConfigurationsAsync(ListBucketAnalyticsConfigurationsRequest, CancellationToken) |
Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. You should always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to ListBucketAnalyticsConfigurations: GetBucketAnalyticsConfiguration, DeleteBucketAnalyticsConfiguration, PutBucketAnalyticsConfiguration. |
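The IsTruncated/NextContinuationToken pagination pattern described above can be sketched as a simple loop. This is a hypothetical example, not SDK documentation: the bucket name is a placeholder, and the response property names (AnalyticsConfigurationList, AnalyticsId) are quoted from memory of AWSSDK.S3 3.x and should be checked against the reference.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class ListAnalyticsConfigsSketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        string token = null;
        do
        {
            var response = await client.ListBucketAnalyticsConfigurationsAsync(
                new ListBucketAnalyticsConfigurationsRequest
                {
                    BucketName = "amzn-s3-demo-bucket", // placeholder
                    ContinuationToken = token           // null on the first page
                },
                CancellationToken.None);

            foreach (var config in response.AnalyticsConfigurationList)
                Console.WriteLine(config.AnalyticsId);

            // Continue only while the listing is truncated (at most 100 per page).
            token = response.IsTruncated ? response.NextContinuationToken : null;
        } while (token != null);
    }
}
```

The same loop shape applies to the inventory and metrics configuration listings below, which paginate the same way.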
![]() |
ListBucketIntelligentTieringConfigurations(ListBucketIntelligentTieringConfigurationsRequest) |
Lists the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to ListBucketIntelligentTieringConfigurations include: DeleteBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, GetBucketIntelligentTieringConfiguration. |
![]() |
ListBucketIntelligentTieringConfigurationsAsync(ListBucketIntelligentTieringConfigurationsRequest, CancellationToken) |
Lists the S3 Intelligent-Tiering configuration from the specified bucket. The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities. The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to ListBucketIntelligentTieringConfigurations include: DeleteBucketIntelligentTieringConfiguration, PutBucketIntelligentTieringConfiguration, GetBucketIntelligentTieringConfiguration. |
![]() |
ListBucketInventoryConfigurations(ListBucketInventoryConfigurationsRequest) |
Returns a list of inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. Always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to ListBucketInventoryConfigurations: GetBucketInventoryConfiguration, DeleteBucketInventoryConfiguration, PutBucketInventoryConfiguration. |
![]() |
ListBucketInventoryConfigurationsAsync(ListBucketInventoryConfigurationsRequest, CancellationToken) |
Returns a list of inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. Always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to ListBucketInventoryConfigurations: GetBucketInventoryConfiguration, DeleteBucketInventoryConfiguration, PutBucketInventoryConfiguration. |
![]() |
ListBucketMetricsConfigurations(ListBucketMetricsConfigurationsRequest) |
Lists the metrics configurations for the bucket. The metrics configurations are only for the request metrics of the bucket and do not provide information on daily storage metrics. You can have up to 1,000 configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. Always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about metrics configurations and CloudWatch request metrics, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to ListBucketMetricsConfigurations: PutBucketMetricsConfiguration, GetBucketMetricsConfiguration, DeleteBucketMetricsConfiguration. |
![]() |
ListBucketMetricsConfigurationsAsync(ListBucketMetricsConfigurationsRequest, CancellationToken) |
Lists the metrics configurations for the bucket. The metrics configurations are only for the request metrics of the bucket and do not provide information on daily storage metrics. You can have up to 1,000 configurations per bucket.
This action supports list pagination and does not return more than 100 configurations
at a time. Always check the IsTruncated element in the response. If there are more configurations to list, IsTruncated is set to true and NextContinuationToken contains a value; pass that value as the ContinuationToken in the next request to retrieve the next page.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about metrics configurations and CloudWatch request metrics, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to ListBucketMetricsConfigurations: PutBucketMetricsConfiguration, GetBucketMetricsConfiguration, DeleteBucketMetricsConfiguration. |
![]() |
ListBuckets() |
Returns a list of all buckets owned by the authenticated sender of the request. To
use this operation, you must have the s3:ListAllMyBuckets permission. |
![]() |
ListBuckets(ListBucketsRequest) |
Returns a list of all buckets owned by the authenticated sender of the request. To
use this operation, you must have the s3:ListAllMyBuckets permission. |
![]() |
ListBucketsAsync(CancellationToken) |
Returns a list of all buckets owned by the authenticated sender of the request. To
use this operation, you must have the s3:ListAllMyBuckets permission. |
![]() |
ListBucketsAsync(ListBucketsRequest, CancellationToken) |
Returns a list of all buckets owned by the authenticated sender of the request. To
use this operation, you must have the s3:ListAllMyBuckets permission. |
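Listing buckets is the simplest call in this table and can be sketched in a few lines. This is a minimal illustration; it assumes default credentials are configured as described in the constructor notes above.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;

class ListBucketsSketch
{
    static async Task Main()
    {
        // Credentials come from the default configuration chain (App.config,
        // environment, or an EC2 instance profile).
        var client = new AmazonS3Client(RegionEndpoint.USEast1);

        var response = await client.ListBucketsAsync();
        foreach (var bucket in response.Buckets)
            Console.WriteLine($"{bucket.BucketName} (created {bucket.CreationDate})");
    }
}
```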
![]() |
ListMultipartUploads(string) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
![]() |
ListMultipartUploads(string, string) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
![]() |
ListMultipartUploads(ListMultipartUploadsRequest) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
![]() |
ListMultipartUploadsAsync(string, string, CancellationToken) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
![]() |
ListMultipartUploadsAsync(ListMultipartUploadsRequest, CancellationToken) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
![]() |
ListMultipartUploadsAsync(string, CancellationToken) |
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart
uploads is the maximum number of uploads a response can include, which is also the
default value. You can further limit the number of uploads in a response by specifying
the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response contains an IsTruncated element with the value true; to list the additional uploads, use the key-marker and upload-id-marker request parameters. In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, AbortMultipartUpload. |
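The key-marker/upload-id-marker pagination described above can be sketched as a loop over in-progress uploads. This is a hypothetical example with a placeholder bucket name; the response property names (MultipartUploads, NextKeyMarker, NextUploadIdMarker) are quoted from memory of AWSSDK.S3 3.x and worth verifying against the reference.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class ListInProgressUploadsSketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        var request = new ListMultipartUploadsRequest
        {
            BucketName = "amzn-s3-demo-bucket" // placeholder
        };

        ListMultipartUploadsResponse response;
        do
        {
            response = await client.ListMultipartUploadsAsync(request);
            foreach (var upload in response.MultipartUploads)
                Console.WriteLine($"{upload.Key} {upload.UploadId} started {upload.Initiated}");

            // When truncated, continue from the returned key/upload-id markers.
            request.KeyMarker = response.NextKeyMarker;
            request.UploadIdMarker = response.NextUploadIdMarker;
        } while (response.IsTruncated);
    }
}
```

A loop like this is also a practical way to find stale uploads to pass to AbortMultipartUpload, complementing a lifecycle rule.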
![]() |
ListObjects(string) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjects(string, string) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjects(ListObjectsRequest) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjectsAsync(string, CancellationToken) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjectsAsync(string, string, CancellationToken) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjectsAsync(ListObjectsRequest, CancellationToken) |
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request
parameters as selection criteria to return a subset of the objects in a bucket. A
200 OK response can contain valid or invalid XML. Be sure to design your application
to parse the contents of the response and handle it appropriately.
This action has been revised. We recommend that you use the newer version, ListObjectsV2,
when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.
The following operations are related to ListObjects: ListObjectsV2, GetObject, PutObject, CreateBucket. |
![]() |
ListObjectsV2(ListObjectsV2Request) |
Returns some or all (up to 1,000) of the objects in a bucket with each request. You
can use the request parameters as selection criteria to return a subset of the objects
in a bucket. A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately. To use this operation, you must have READ access to the bucket.
To use this action in an Identity and Access Management (IAM) policy, you must have
permissions to perform the s3:ListBucket action. The bucket owner has this permission by default and can grant this permission to others. This section describes the latest revision of this action. We recommend that you use this revised API for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API, ListObjects. To get a list of your buckets, see ListBuckets.
The following operations are related to ListObjectsV2: GetObject, PutObject, CreateBucket. |
![]() |
ListObjectsV2Async(ListObjectsV2Request, CancellationToken) |
Returns some or all (up to 1,000) of the objects in a bucket with each request. You
can use the request parameters as selection criteria to return a subset of the objects
in a bucket. A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately. To use this operation, you must have READ access to the bucket.
To use this action in an Identity and Access Management (IAM) policy, you must have
permissions to perform the s3:ListBucket action. The bucket owner has this permission by default and can grant this permission to others. This section describes the latest revision of this action. We recommend that you use this revised API for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API, ListObjects. To get a list of your buckets, see ListBuckets.
The following operations are related to ListObjectsV2: GetObject, PutObject, CreateBucket. |
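The recommended ListObjectsV2 call with continuation-token pagination can be sketched as follows. This is an illustration with placeholder bucket and prefix names, not a definitive implementation.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class ListObjectsV2Sketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        var request = new ListObjectsV2Request
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            Prefix = "logs/",                   // placeholder selection criterion
            MaxKeys = 100                       // up to 1,000 per request
        };

        ListObjectsV2Response response;
        do
        {
            response = await client.ListObjectsV2Async(request);
            foreach (var obj in response.S3Objects)
                Console.WriteLine($"{obj.Key} ({obj.Size} bytes)");

            // Continue from the token Amazon S3 returns for the next page.
            request.ContinuationToken = response.NextContinuationToken;
        } while (response.IsTruncated);
    }
}
```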
![]() |
ListParts(string, string, string) |
Lists the parts that have been uploaded for a specific multipart upload. This operation
must include the upload ID, which you obtain by sending the initiate multipart upload
request (see CreateMultipartUpload).
This request returns a maximum of 1,000 uploaded parts. The default number of parts
returned is 1,000 parts. You can restrict the number of parts returned by specifying
the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value true and provides a NextPartNumberMarker element. In subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker value from the previous response.
If the upload was created using a checksum algorithm, you will need to have permission to the kms:Decrypt action for the request to succeed. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListParts: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, GetObjectAttributes, ListMultipartUploads. |
![]() |
ListParts(ListPartsRequest) |
Lists the parts that have been uploaded for a specific multipart upload. This operation
must include the upload ID, which you obtain by sending the initiate multipart upload
request (see CreateMultipartUpload).
This request returns a maximum of 1,000 uploaded parts. The default number of parts
returned is 1,000 parts. You can restrict the number of parts returned by specifying
the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value true and provides a NextPartNumberMarker element. In subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker value from the previous response.
If the upload was created using a checksum algorithm, you will need to have permission to the kms:Decrypt action for the request to succeed. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListParts: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, GetObjectAttributes, ListMultipartUploads. |
![]() |
ListPartsAsync(string, string, string, CancellationToken) |
Lists the parts that have been uploaded for a specific multipart upload. This operation
must include the upload ID, which you obtain by sending the initiate multipart upload
request (see CreateMultipartUpload).
This request returns a maximum of 1,000 uploaded parts. The default number of parts
returned is 1,000 parts. You can restrict the number of parts returned by specifying
the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value true and provides a NextPartNumberMarker element. In subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker value from the previous response.
If the upload was created using a checksum algorithm, you will need to have permission to the kms:Decrypt action for the request to succeed. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListParts: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, GetObjectAttributes, ListMultipartUploads. |
![]() |
ListPartsAsync(ListPartsRequest, CancellationToken) |
Lists the parts that have been uploaded for a specific multipart upload. This operation
must include the upload ID, which you obtain by sending the initiate multipart upload
request (see CreateMultipartUpload).
This request returns a maximum of 1,000 uploaded parts. The default number of parts
returned is 1,000 parts. You can restrict the number of parts returned by specifying
the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value true and provides a NextPartNumberMarker element. In subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker value from the previous response.
If the upload was created using a checksum algorithm, you will need to have permission to the kms:Decrypt action for the request to succeed. For more information on multipart uploads, see Uploading Objects Using Multipart Upload. For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListParts: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, GetObjectAttributes, ListMultipartUploads. |
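Inspecting the parts of an in-progress upload can be sketched as a single ListParts call. This is a hypothetical example: the bucket, key, and upload ID are placeholders (a real upload ID comes from InitiateMultipartUpload), and pagination via NextPartNumberMarker is noted in a comment rather than implemented.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class ListPartsSketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        string uploadId = "EXAMPLE-UPLOAD-ID"; // returned by InitiateMultipartUpload

        var response = await client.ListPartsAsync(new ListPartsRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            Key = "big-file.dat",               // placeholder
            UploadId = uploadId,
            MaxParts = 100                      // up to 1,000 parts per request
        });

        foreach (var part in response.Parts)
            Console.WriteLine($"part {part.PartNumber}: {part.Size} bytes, ETag {part.ETag}");

        // If response.IsTruncated is true, repeat the request with the
        // part-number marker set from response.NextPartNumberMarker.
    }
}
```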
![]() |
ListVersions(string) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
![]() |
ListVersions(string, string) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
![]() |
ListVersions(ListVersionsRequest) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
![]() |
ListVersionsAsync(string, CancellationToken) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
![]() |
ListVersionsAsync(string, string, CancellationToken) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
![]() |
ListVersionsAsync(ListVersionsRequest, CancellationToken) |
Returns metadata about all versions of the objects in a bucket. You can also use request
parameters as selection criteria to return metadata about a subset of all the object
versions.
To use this operation, you must have permissions to perform the s3:ListBucketVersions action.
A 200 OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket. This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListVersions: ListObjectsV2, GetObject, PutObject, DeleteObject. |
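Listing object versions for a versioned bucket can be sketched as follows. This is an illustration with placeholder names; the S3ObjectVersion property names (IsLatest, IsDeleteMarker) are from memory of AWSSDK.S3 3.x and should be checked against the reference.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class ListVersionsSketch
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);
        var response = await client.ListVersionsAsync(new ListVersionsRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            Prefix = "reports/"                 // placeholder selection criterion
        });

        foreach (var version in response.Versions)
            Console.WriteLine(
                $"{version.Key} {version.VersionId} " +
                $"latest={version.IsLatest} deleteMarker={version.IsDeleteMarker}");

        // If response.IsTruncated is true, repeat with KeyMarker and
        // VersionIdMarker set from NextKeyMarker / NextVersionIdMarker.
    }
}
```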
![]() |
PutACL(PutACLRequest) | |
![]() |
PutACLAsync(PutACLRequest, CancellationToken) | |
![]() |
PutBucket(string) |
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules. If you want to create an Amazon S3 on Outposts bucket, see Create Bucket. By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.
If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirect. For more information, see Virtual hosting of buckets. Access control lists (ACLs) When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.
If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and
specifies a bucket ACL that provides access to an external Amazon Web Services account,
your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. There are two ways to grant the appropriate permissions using the request headers.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both. Permissions
In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers: s3:PutBucketAcl when access control lists (ACLs) are specified, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning when Object Lock is enabled, and s3:PutBucketOwnershipControls when S3 Object Ownership is specified.
The following operations are related to PutBucket: PutObject, DeleteBucket. |
![]() |
PutBucket(PutBucketRequest) |
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules. If you want to create an Amazon S3 on Outposts bucket, see Create Bucket. By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.
If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirect. For more information, see Virtual hosting of buckets. Access control lists (ACLs) When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.
If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and
specifies a bucket ACL that provides access to an external Amazon Web Services account,
your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. There are two ways to grant the appropriate permissions using the request headers.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both. Permissions
In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers: s3:PutBucketAcl when access control lists (ACLs) are specified, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning when Object Lock is enabled, and s3:PutBucketOwnershipControls when S3 Object Ownership is specified.
The following operations are related to PutBucket: PutObject, DeleteBucket. |
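Creating a bucket in the client's own Region (rather than defaulting to US East (N. Virginia)) can be sketched as follows. This is an illustration: the bucket name is a placeholder and must satisfy the bucket naming rules, and the UseClientRegion flag is quoted from memory of AWSSDK.S3 3.x.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class PutBucketSketch
{
    static async Task Main()
    {
        // The client's Region determines where the bucket is created below.
        var client = new AmazonS3Client(RegionEndpoint.EUWest1);

        await client.PutBucketAsync(new PutBucketRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder; must follow naming rules
            UseClientRegion = true              // send the Region as the location constraint
        });

        Console.WriteLine("Bucket created in the client's Region.");
    }
}
```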
![]() |
PutBucketAccelerateConfiguration(PutBucketAccelerateConfigurationRequest) |
Sets the accelerate configuration of an existing bucket. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to Amazon S3.
To use this operation, you must have permission to perform the s3:PutAccelerateConfiguration action. The bucket owner has this permission by default; the bucket owner can grant this permission to others. The Transfer Acceleration state of a bucket can be set to one of the following two values: Enabled and Suspended.
The GetBucketAccelerateConfiguration action returns the transfer acceleration state of a bucket. After setting the Transfer Acceleration state of a bucket to Enabled, it might take up to thirty minutes before the data transfer rates to the bucket increase. The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods ("."). For more information about transfer acceleration, see Transfer Acceleration.
The following operations are related to PutBucketAccelerateConfiguration: GetBucketAccelerateConfiguration, CreateBucket. |
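A minimal sketch of enabling Transfer Acceleration with this operation (the bucket name is a placeholder):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Enable Transfer Acceleration; the bucket name must be DNS-compliant with no periods.
await client.PutBucketAccelerateConfigurationAsync(new PutBucketAccelerateConfigurationRequest
{
    BucketName = "amzn-s3-demo-bucket",
    AccelerateConfiguration = new AccelerateConfiguration
    {
        Status = BucketAccelerateStatus.Enabled
    }
});
```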
![]() |
PutBucketAccelerateConfigurationAsync(PutBucketAccelerateConfigurationRequest, CancellationToken) |
Sets the accelerate configuration of an existing bucket. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to Amazon S3.
To use this operation, you must have permission to perform the s3:PutAccelerateConfiguration action. The bucket owner has this permission by default; the bucket owner can grant this permission to others. The Transfer Acceleration state of a bucket can be set to one of the following two values: Enabled and Suspended.
The GetBucketAccelerateConfiguration action returns the transfer acceleration state of a bucket. After setting the Transfer Acceleration state of a bucket to Enabled, it might take up to thirty minutes before the data transfer rates to the bucket increase. The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods ("."). For more information about transfer acceleration, see Transfer Acceleration.
The following operations are related to PutBucketAccelerateConfiguration: GetBucketAccelerateConfiguration, CreateBucket. |
![]() |
PutBucketAnalyticsConfiguration(PutBucketAnalyticsConfigurationRequest) | |
![]() |
PutBucketAnalyticsConfigurationAsync(PutBucketAnalyticsConfigurationRequest, CancellationToken) | |
![]() |
PutBucketAsync(string, CancellationToken) |
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules. If you want to create an Amazon S3 on Outposts bucket, see Create Bucket. By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.
If you send your create bucket request to the Access control lists (ACLs) When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.
If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and
specifies a bucket ACL that provides access to an external Amazon Web Services account,
your request fails with a There are two ways to grant the appropriate permissions using the request headers.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both. Permissions
In addition to
The following operations are related to |
![]() |
PutBucketAsync(PutBucketRequest, CancellationToken) |
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules. If you want to create an Amazon S3 on Outposts bucket, see Create Bucket. By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.
If you send your create bucket request to the Access control lists (ACLs) When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.
If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and
specifies a bucket ACL that provides access to an external Amazon Web Services account,
your request fails with a There are two ways to grant the appropriate permissions using the request headers.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both. Permissions
In addition to
The following operations are related to |
![]() |
PutBucketEncryption(PutBucketEncryptionRequest) | |
![]() |
PutBucketEncryptionAsync(PutBucketEncryptionRequest, CancellationToken) | |
![]() |
PutBucketIntelligentTieringConfiguration(PutBucketIntelligentTieringConfigurationRequest) | |
![]() |
PutBucketIntelligentTieringConfigurationAsync(PutBucketIntelligentTieringConfigurationRequest, CancellationToken) | |
![]() |
PutBucketInventoryConfiguration(PutBucketInventoryConfigurationRequest) | |
![]() |
PutBucketInventoryConfigurationAsync(PutBucketInventoryConfigurationRequest, CancellationToken) | |
![]() |
PutBucketLogging(PutBucketLoggingRequest) |
Set the logging parameters for a bucket and specify permissions for who can view and modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as the source bucket. To set the logging status of a bucket, you must be the bucket owner.
The bucket owner is automatically granted FULL_CONTROL to all logs. You use the Grantee request element to grant access to other people, and the Permissions request element to specify the kind of access the grantee has to the logs.
If the target bucket for log delivery uses the bucket owner enforced setting for S3
Object Ownership, you can't use the Grantee request element to grant access to others; permissions can only be granted using policies. Grantee Values You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:
To enable logging, you use LoggingEnabled and its children request elements. To disable logging, you use an empty BucketLoggingStatus request element: For more information about server access logging, see Server Access Logging in the Amazon S3 User Guide. For more information about creating a bucket, see CreateBucket. For more information about returning the logging status of a bucket, see GetBucketLogging.
The following operations are related to PutBucketLogging: PutObject, DeleteBucket, CreateBucket, GetBucketLogging. |
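A sketch of enabling server access logging for a source bucket (bucket names and prefix are placeholders):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Deliver access logs for the source bucket into the log bucket under "access-logs/".
await client.PutBucketLoggingAsync(new PutBucketLoggingRequest
{
    BucketName = "amzn-s3-demo-source-bucket",
    LoggingConfig = new S3BucketLoggingConfig
    {
        TargetBucketName = "amzn-s3-demo-log-bucket",
        TargetPrefix = "access-logs/"
    }
});
```

Sending a request with an empty LoggingConfig disables logging, mirroring the empty BucketLoggingStatus element described above.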
![]() |
PutBucketLoggingAsync(PutBucketLoggingRequest, CancellationToken) |
Set the logging parameters for a bucket and specify permissions for who can view and modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as the source bucket. To set the logging status of a bucket, you must be the bucket owner.
The bucket owner is automatically granted FULL_CONTROL to all logs. You use the Grantee request element to grant access to other people, and the Permissions request element to specify the kind of access the grantee has to the logs.
If the target bucket for log delivery uses the bucket owner enforced setting for S3
Object Ownership, you can't use the Grantee request element to grant access to others; permissions can only be granted using policies. Grantee Values You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:
To enable logging, you use LoggingEnabled and its children request elements. To disable logging, you use an empty BucketLoggingStatus request element: For more information about server access logging, see Server Access Logging in the Amazon S3 User Guide. For more information about creating a bucket, see CreateBucket. For more information about returning the logging status of a bucket, see GetBucketLogging.
The following operations are related to PutBucketLogging: PutObject, DeleteBucket, CreateBucket, GetBucketLogging. |
![]() |
PutBucketMetricsConfiguration(PutBucketMetricsConfigurationRequest) |
Sets a metrics configuration (specified by the metrics configuration ID) for the bucket. You can have up to 1,000 metrics configurations per bucket. If you're updating an existing metrics configuration, note that this is a full replacement of the existing metrics configuration. If you don't include the elements you want to keep, they are erased.
To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default; the bucket owner can grant this permission to others. For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to PutBucketMetricsConfiguration: DeleteBucketMetricsConfiguration, GetBucketMetricsConfiguration, ListBucketMetricsConfigurations. |
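A sketch of adding a metrics configuration that covers the whole bucket (the bucket name and the ID "EntireBucket" are illustrative):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Publish CloudWatch request metrics for all objects, under a caller-chosen ID.
await client.PutBucketMetricsConfigurationAsync(new PutBucketMetricsConfigurationRequest
{
    BucketName = "amzn-s3-demo-bucket",
    MetricsId = "EntireBucket",
    MetricsConfiguration = new MetricsConfiguration { MetricsId = "EntireBucket" }
});
```

Because this is a full replacement, re-sending the same ID with a different filter erases the previous configuration for that ID.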
![]() |
PutBucketMetricsConfigurationAsync(PutBucketMetricsConfigurationRequest, CancellationToken) |
Sets a metrics configuration (specified by the metrics configuration ID) for the bucket. You can have up to 1,000 metrics configurations per bucket. If you're updating an existing metrics configuration, note that this is a full replacement of the existing metrics configuration. If you don't include the elements you want to keep, they are erased.
To use this operation, you must have permissions to perform the For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to
|
![]() |
PutBucketNotification(PutBucketNotificationRequest) |
Enables notifications of specified events for a bucket. For more information about event notifications, see Configuring Event Notifications. Using this API, you can replace an existing notification configuration. The configuration is an XML file that defines the event types that you want Amazon S3 to publish and the destination where you want Amazon S3 to publish an event notification when it detects an event of the specified type.
By default, your bucket has no event notifications configured. That is, the notification
configuration will be an empty NotificationConfiguration element. This action replaces the existing notification configuration with the configuration you include in the request body. After Amazon S3 receives this request, it first verifies that any Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS) destination exists, and that the bucket owner has permission to publish to it by sending a test notification. In the case of Lambda destinations, Amazon S3 verifies that the Lambda function permissions grant Amazon S3 permission to invoke the function from the Amazon S3 bucket. For more information, see Configuring Notifications for Amazon S3 Events. You can disable notifications by adding the empty NotificationConfiguration element. For more information about the number of event notification configurations that you can create per bucket, see Amazon S3 service quotas in Amazon Web Services General Reference.
By default, only the bucket owner can configure notifications on a bucket. However,
bucket owners can use a bucket policy to grant permission to other users to set this
configuration with the required s3:PutBucketNotification permission. The PUT notification is an atomic operation. For example, suppose your notification configuration includes SNS topic, SQS queue, and Lambda function configurations. When you send a PUT request with this configuration, Amazon S3 sends test messages to your SNS topic. If the message fails, the entire PUT action will fail, and Amazon S3 will not add the configuration to your bucket. Responses
If the configuration in the request body includes only one TopicConfiguration specifying only the s3:ReducedRedundancyLostObject event type, the response will also include the x-amz-sns-test-message-id header containing the message ID of the test notification sent to the topic.
The following action is related to PutBucketNotificationConfiguration: GetBucketNotificationConfiguration. |
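A sketch of replacing a bucket's notification configuration with a single SNS destination (the bucket name and topic ARN are placeholders; the topic policy must already allow Amazon S3 to publish, or the test notification fails):

```csharp
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Notify an SNS topic on every object-created event.
await client.PutBucketNotificationAsync(new PutBucketNotificationRequest
{
    BucketName = "amzn-s3-demo-bucket",
    TopicConfigurations = new List<TopicConfiguration>
    {
        new TopicConfiguration
        {
            Topic = "arn:aws:sns:us-east-1:111122223333:example-topic", // hypothetical ARN
            Events = new List<EventType> { EventType.ObjectCreatedAll }
        }
    }
});
```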
![]() |
PutBucketNotificationAsync(PutBucketNotificationRequest, CancellationToken) |
Enables notifications of specified events for a bucket. For more information about event notifications, see Configuring Event Notifications. Using this API, you can replace an existing notification configuration. The configuration is an XML file that defines the event types that you want Amazon S3 to publish and the destination where you want Amazon S3 to publish an event notification when it detects an event of the specified type.
By default, your bucket has no event notifications configured. That is, the notification
configuration will be an empty NotificationConfiguration element. This action replaces the existing notification configuration with the configuration you include in the request body. After Amazon S3 receives this request, it first verifies that any Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS) destination exists, and that the bucket owner has permission to publish to it by sending a test notification. In the case of Lambda destinations, Amazon S3 verifies that the Lambda function permissions grant Amazon S3 permission to invoke the function from the Amazon S3 bucket. For more information, see Configuring Notifications for Amazon S3 Events. You can disable notifications by adding the empty NotificationConfiguration element. For more information about the number of event notification configurations that you can create per bucket, see Amazon S3 service quotas in Amazon Web Services General Reference.
By default, only the bucket owner can configure notifications on a bucket. However,
bucket owners can use a bucket policy to grant permission to other users to set this
configuration with the required s3:PutBucketNotification permission. The PUT notification is an atomic operation. For example, suppose your notification configuration includes SNS topic, SQS queue, and Lambda function configurations. When you send a PUT request with this configuration, Amazon S3 sends test messages to your SNS topic. If the message fails, the entire PUT action will fail, and Amazon S3 will not add the configuration to your bucket. Responses
If the configuration in the request body includes only one TopicConfiguration specifying only the s3:ReducedRedundancyLostObject event type, the response will also include the x-amz-sns-test-message-id header containing the message ID of the test notification sent to the topic.
The following action is related to PutBucketNotificationConfiguration: GetBucketNotificationConfiguration. |
![]() |
PutBucketOwnershipControls(PutBucketOwnershipControlsRequest) |
Creates or modifies OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For information about Amazon S3 Object Ownership, see Using object ownership.
The following operations are related to PutBucketOwnershipControls: GetBucketOwnershipControls, DeleteBucketOwnershipControls. |
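A sketch of enforcing bucket-owner ownership (which also disables ACLs on the bucket); the bucket name is a placeholder and the model type names here follow the SDK's usual pattern but should be checked against the current AWSSDK.S3 release:

```csharp
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Apply a single ownership rule: the bucket owner owns every object.
await client.PutBucketOwnershipControlsAsync(new PutBucketOwnershipControlsRequest
{
    BucketName = "amzn-s3-demo-bucket",
    OwnershipControls = new OwnershipControls
    {
        Rules = new List<OwnershipControlsRule>
        {
            new OwnershipControlsRule { ObjectOwnership = ObjectOwnership.BucketOwnerEnforced }
        }
    }
});
```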
![]() |
PutBucketOwnershipControlsAsync(PutBucketOwnershipControlsRequest, CancellationToken) |
Creates or modifies OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For information about Amazon S3 Object Ownership, see Using object ownership.
The following operations are related to PutBucketOwnershipControls: GetBucketOwnershipControls, DeleteBucketOwnershipControls. |
![]() |
PutBucketPolicy(string, string) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
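A sketch of attaching a policy using the (string, string) overload; the account ID, bucket name, and policy statement are placeholders:

```csharp
using Amazon.S3;

var client = new AmazonS3Client();

// A minimal policy document granting read access to one account.
const string policy = @"{
  ""Version"": ""2012-10-17"",
  ""Statement"": [{
    ""Effect"": ""Allow"",
    ""Principal"": { ""AWS"": ""arn:aws:iam::111122223333:root"" },
    ""Action"": ""s3:GetObject"",
    ""Resource"": ""arn:aws:s3:::amzn-s3-demo-bucket/*""
  }]
}";

await client.PutBucketPolicyAsync("amzn-s3-demo-bucket", policy);
```

A malformed policy document is rejected by the service, so validating the JSON before the call avoids a round trip.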
![]() |
PutBucketPolicy(string, string, string) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
![]() |
PutBucketPolicy(PutBucketPolicyRequest) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
![]() |
PutBucketPolicyAsync(string, string, CancellationToken) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
![]() |
PutBucketPolicyAsync(string, string, string, CancellationToken) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
![]() |
PutBucketPolicyAsync(PutBucketPolicyRequest, CancellationToken) |
Applies an Amazon S3 bucket policy to an Amazon S3 bucket. If you are using an identity
other than the root user of the Amazon Web Services account that owns the bucket,
the calling identity must have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error. As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action. For more information, see Bucket policy examples.
The following operations are related to PutBucketPolicy: CreateBucket, DeleteBucket. |
![]() |
PutBucketReplication(PutBucketReplicationRequest) |
Creates a replication configuration or replaces an existing one. For more information, see Replication in the Amazon S3 User Guide. Specify the replication configuration in the request body. In the replication configuration, you provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your behalf, and other relevant information. A replication configuration must include at least one rule, and can contain a maximum of 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in the source bucket. To choose additional subsets of objects to replicate, add a rule for each subset.
To specify a subset of the objects in the source bucket to apply a replication rule
to, add the Filter element as a child of the Rule element. You can filter objects
based on an object key prefix, one or more object tags, or both. When you add the
Filter element in the configuration, you must also add the following elements: DeleteMarkerReplication, Status, and Priority. If you are using an earlier version of the replication configuration, Amazon S3 handles replication of delete markers differently. For more information, see Backward Compatibility. For information about enabling versioning on a bucket, see Using Versioning. Handling Replication of Encrypted Objects
By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side
encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects,
add the following: SourceSelectionCriteria, SseKmsEncryptedObjects, Status, EncryptionConfiguration, and ReplicaKmsKeyID.
For information on replication configuration for objects encrypted with KMS keys, see Replicating Objects Created with SSE Using KMS keys. Permissions
To create a PutBucketReplication request, you must have s3:PutReplicationConfiguration permissions for the bucket. By default, a resource owner, in this case the Amazon Web Services account that created the bucket, can perform this operation. The resource owner can also grant others permissions to perform the operation. For more information about permissions, see Specifying Permissions in a Policy and Managing Access Permissions to Your Amazon S3 Resources. To perform this operation, the user or role performing the action must have the iam:PassRole permission.
The following operations are related to PutBucketReplication: GetBucketReplication, DeleteBucketReplication. |
![]() |
PutBucketReplicationAsync(PutBucketReplicationRequest, CancellationToken) |
Creates a replication configuration or replaces an existing one. For more information, see Replication in the Amazon S3 User Guide. Specify the replication configuration in the request body. In the replication configuration, you provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your behalf, and other relevant information. A replication configuration must include at least one rule, and can contain a maximum of 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in the source bucket. To choose additional subsets of objects to replicate, add a rule for each subset.
To specify a subset of the objects in the source bucket to apply a replication rule
to, add the Filter element as a child of the Rule element. You can filter objects
based on an object key prefix, one or more object tags, or both. When you add the
Filter element in the configuration, you must also add the following elements: DeleteMarkerReplication, Status, and Priority. If you are using an earlier version of the replication configuration, Amazon S3 handles replication of delete markers differently. For more information, see Backward Compatibility. For information about enabling versioning on a bucket, see Using Versioning. Handling Replication of Encrypted Objects
By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side
encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects,
add the following: SourceSelectionCriteria, SseKmsEncryptedObjects, Status, EncryptionConfiguration, and ReplicaKmsKeyID.
For information on replication configuration for objects encrypted with KMS keys, see Replicating Objects Created with SSE Using KMS keys. Permissions
To create a PutBucketReplication request, you must have s3:PutReplicationConfiguration permissions for the bucket. By default, a resource owner, in this case the Amazon Web Services account that created the bucket, can perform this operation. The resource owner can also grant others permissions to perform the operation. For more information about permissions, see Specifying Permissions in a Policy and Managing Access Permissions to Your Amazon S3 Resources. To perform this operation, the user or role performing the action must have the iam:PassRole permission.
The following operations are related to PutBucketReplication: GetBucketReplication, DeleteBucketReplication. |
![]() |
PutBucketRequestPayment(string, RequestPaymentConfiguration) |
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.
The following operations are related to PutBucketRequestPayment: CreateBucket, GetBucketRequestPayment. |
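A sketch of switching a bucket to Requester Pays with this overload (the bucket name is a placeholder):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Charge the requester, rather than the bucket owner, for downloads.
await client.PutBucketRequestPaymentAsync("amzn-s3-demo-bucket",
    new RequestPaymentConfiguration { Payer = "Requester" });
```

Setting Payer back to "BucketOwner" restores the default billing behavior.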
![]() |
PutBucketRequestPayment(PutBucketRequestPaymentRequest) |
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.
The following operations are related to PutBucketRequestPayment: CreateBucket, GetBucketRequestPayment. |
![]() |
PutBucketRequestPaymentAsync(string, RequestPaymentConfiguration, CancellationToken) |
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.
The following operations are related to PutBucketRequestPayment: CreateBucket, GetBucketRequestPayment. |
![]() |
PutBucketRequestPaymentAsync(PutBucketRequestPaymentRequest, CancellationToken) |
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.
The following operations are related to PutBucketRequestPayment: CreateBucket, GetBucketRequestPayment. |
![]() |
PutBucketTagging(string, List<Tag>) |
Sets the tags for a bucket. Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags. When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.
To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others.
The following operations are related to PutBucketTagging: GetBucketTagging, DeleteBucketTagging. |
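A sketch of tagging a bucket with this overload; the bucket name and tag values are illustrative. Because the call replaces the whole tag set, include every tag you want to keep:

```csharp
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Overwrites any existing tag set on the bucket with these two tags.
await client.PutBucketTaggingAsync("amzn-s3-demo-bucket", new List<Tag>
{
    new Tag { Key = "project", Value = "alpha" },
    new Tag { Key = "cost-center", Value = "1234" }
});
```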
![]() |
PutBucketTagging(PutBucketTaggingRequest) |
Sets the tags for a bucket. Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags. When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.
To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others.
The following operations are related to PutBucketTagging: GetBucketTagging, DeleteBucketTagging. |
![]() |
PutBucketTaggingAsync(string, List<Tag>, CancellationToken) |
Sets the tags for a bucket. Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags. When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.
To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others.
The following operations are related to PutBucketTagging: GetBucketTagging, DeleteBucketTagging. |
![]() |
PutBucketTaggingAsync(PutBucketTaggingRequest, CancellationToken) |
Sets the tags for a bucket. Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags. When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.
To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others.
The following operations are related to PutBucketTagging: GetBucketTagging, DeleteBucketTagging. |
![]() |
PutBucketVersioning(PutBucketVersioningRequest) | |
![]() |
PutBucketVersioningAsync(PutBucketVersioningRequest, CancellationToken) | |
![]() |
PutBucketWebsite(string, WebsiteConfiguration) |
Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission. To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket.
If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide. |
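A sketch of the simplest website configuration, an index document plus an error document (the bucket name and file names are placeholders):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

var client = new AmazonS3Client();

// Serve index.html as the index document and error.html for 4xx errors.
await client.PutBucketWebsiteAsync(new PutBucketWebsiteRequest
{
    BucketName = "amzn-s3-demo-bucket",
    WebsiteConfiguration = new WebsiteConfiguration
    {
        IndexDocumentSuffix = "index.html",
        ErrorDocument = "error.html"
    }
});
```

Routing rules, when needed, are added through the configuration's routing-rules collection, subject to the 50-rule limit noted above.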
![]() |
PutBucketWebsite(PutBucketWebsiteRequest) |
Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission. To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket.
If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide. |
![]() |
PutBucketWebsiteAsync(string, WebsiteConfiguration, CancellationToken) |
Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission. To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket.
If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide. |
![]() |
PutBucketWebsiteAsync(PutBucketWebsiteRequest, CancellationToken) |
Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission. To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket.
If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.
Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide. |
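The redirect-all case described above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the bucket name, hostname, and region are placeholders.

```csharp
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class RedirectExample
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

        // Redirect every request sent to the bucket's website endpoint to example.com.
        // No index document is needed because all requests are redirected.
        await client.PutBucketWebsiteAsync(new PutBucketWebsiteRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            WebsiteConfiguration = new WebsiteConfiguration
            {
                RedirectAllRequestsTo = new RoutingRuleRedirect
                {
                    HostName = "example.com", // placeholder
                    Protocol = "https"
                }
            }
        });
    }
}
```

For granular redirects, you would instead set WebsiteConfiguration.IndexDocumentSuffix and populate WebsiteConfiguration.RoutingRules, keeping in mind the 50-rule limit noted above.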
![]() |
PutCORSConfiguration(string, CORSConfiguration) | |
![]() |
PutCORSConfiguration(PutCORSConfigurationRequest) | |
![]() |
PutCORSConfigurationAsync(string, CORSConfiguration, CancellationToken) | |
![]() |
PutCORSConfigurationAsync(PutCORSConfigurationRequest, CancellationToken) | |
![]() |
PutLifecycleConfiguration(string, LifecycleConfiguration) |
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle
configuration. Keep in mind that this will overwrite an existing lifecycle configuration,
so if you want to retain any configuration details, they must be included in the new
lifecycle configuration. For information about lifecycle configuration, see Managing
your storage lifecycle.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The previous version of the API supported filtering
based only on an object key name prefix, which is supported for backward compatibility.
For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists of the following:
- A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
- A status indicating whether the rule is in effect.
- One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related
subresources (for example, lifecycle configuration and website configuration). Only
the resource owner (that is, the Amazon Web Services account that created it) can
access the resource. The resource owner can optionally grant access permissions to
others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions; an explicit deny supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration.
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to PutLifecycleConfiguration: GetBucketLifecycleConfiguration and DeleteBucketLifecycle. |
![]() |
PutLifecycleConfiguration(PutLifecycleConfigurationRequest) |
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle
configuration. Keep in mind that this will overwrite an existing lifecycle configuration,
so if you want to retain any configuration details, they must be included in the new
lifecycle configuration. For information about lifecycle configuration, see Managing
your storage lifecycle.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The previous version of the API supported filtering
based only on an object key name prefix, which is supported for backward compatibility.
For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists of the following:
- A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
- A status indicating whether the rule is in effect.
- One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related
subresources (for example, lifecycle configuration and website configuration). Only
the resource owner (that is, the Amazon Web Services account that created it) can
access the resource. The resource owner can optionally grant access permissions to
others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions; an explicit deny supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration.
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to PutLifecycleConfiguration: GetBucketLifecycleConfiguration and DeleteBucketLifecycle. |
![]() |
PutLifecycleConfigurationAsync(string, LifecycleConfiguration, CancellationToken) |
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle
configuration. Keep in mind that this will overwrite an existing lifecycle configuration,
so if you want to retain any configuration details, they must be included in the new
lifecycle configuration. For information about lifecycle configuration, see Managing
your storage lifecycle.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The previous version of the API supported filtering
based only on an object key name prefix, which is supported for backward compatibility.
For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists of the following:
- A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
- A status indicating whether the rule is in effect.
- One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related
subresources (for example, lifecycle configuration and website configuration). Only
the resource owner (that is, the Amazon Web Services account that created it) can
access the resource. The resource owner can optionally grant access permissions to
others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions; an explicit deny supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration.
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to PutLifecycleConfiguration: GetBucketLifecycleConfiguration and DeleteBucketLifecycle. |
![]() |
PutLifecycleConfigurationAsync(PutLifecycleConfigurationRequest, CancellationToken) |
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle
configuration. Keep in mind that this will overwrite an existing lifecycle configuration,
so if you want to retain any configuration details, they must be included in the new
lifecycle configuration. For information about lifecycle configuration, see Managing
your storage lifecycle.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object
key name prefix, one or more object tags, or a combination of both. Accordingly, this
section describes the latest API. The previous version of the API supported filtering
based only on an object key name prefix, which is supported for backward compatibility.
For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists of the following:
- A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
- A status indicating whether the rule is in effect.
- One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related
subresources (for example, lifecycle configuration and website configuration). Only
the resource owner (that is, the Amazon Web Services account that created it) can
access the resource. The resource owner can optionally grant access permissions to
others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions; an explicit deny supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration.
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to PutLifecycleConfiguration: GetBucketLifecycleConfiguration and DeleteBucketLifecycle. |
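The rule structure described above can be sketched with the request types in this assembly. This is a minimal illustration (bucket name, rule ID, prefix, and expiration period are placeholders), not a definitive implementation:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class LifecycleExample
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

        // One rule: expire objects under the "logs/" prefix after 365 days.
        // The rule carries a filter, a status, and an action, as described above.
        await client.PutLifecycleConfigurationAsync(new PutLifecycleConfigurationRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            Configuration = new LifecycleConfiguration
            {
                Rules = new List<LifecycleRule>
                {
                    new LifecycleRule
                    {
                        Id = "ExpireLogsAfterOneYear", // placeholder
                        Status = LifecycleRuleStatus.Enabled,
                        Filter = new LifecycleFilter
                        {
                            LifecycleFilterPredicate = new LifecyclePrefixPredicate { Prefix = "logs/" }
                        },
                        Expiration = new LifecycleRuleExpiration { Days = 365 }
                    }
                }
            }
        });
    }
}
```

Because this call replaces the bucket's entire lifecycle configuration, include every rule you want to keep, not just the new one.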
![]() |
PutObject(PutObjectRequest) | |
![]() |
PutObjectAsync(PutObjectRequest, CancellationToken) | |
![]() |
PutObjectLegalHold(PutObjectLegalHoldRequest) |
Applies a legal hold configuration to the specified object. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts. |
![]() |
PutObjectLegalHoldAsync(PutObjectLegalHoldRequest, CancellationToken) |
Applies a legal hold configuration to the specified object. For more information, see Locking Objects. This action is not supported by Amazon S3 on Outposts. |
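A minimal sketch of placing a legal hold (bucket name and key are placeholders; the bucket is assumed to have Object Lock enabled):

```csharp
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class LegalHoldExample
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

        // Turn on a legal hold for one object; the object cannot be deleted
        // or overwritten until the hold status is set back to Off.
        await client.PutObjectLegalHoldAsync(new PutObjectLegalHoldRequest
        {
            BucketName = "amzn-s3-demo-bucket",  // placeholder
            Key = "contracts/agreement.pdf",     // placeholder
            LegalHold = new ObjectLockLegalHold
            {
                Status = ObjectLockLegalHoldStatus.On
            }
        });
    }
}
```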
![]() |
PutObjectLockConfiguration(PutObjectLockConfigurationRequest) |
Places an Object Lock configuration on the specified bucket. The rule specified in
the Object Lock configuration will be applied by default to every new object placed
in the specified bucket. For more information, see Locking
Objects.
The DefaultRetention settings require both a mode and a period.
The DefaultRetention period can be either Days or Years, but you must select one; you cannot specify Days and Years at the same time.
You can only enable Object Lock for new buckets. If you want to turn on Object Lock
for an existing bucket, contact Amazon Web Services Support.
|
![]() |
PutObjectLockConfigurationAsync(PutObjectLockConfigurationRequest, CancellationToken) |
Places an Object Lock configuration on the specified bucket. The rule specified in
the Object Lock configuration will be applied by default to every new object placed
in the specified bucket. For more information, see Locking
Objects.
The DefaultRetention settings require both a mode and a period.
The DefaultRetention period can be either Days or Years, but you must select one; you cannot specify Days and Years at the same time.
You can only enable Object Lock for new buckets. If you want to turn on Object Lock
for an existing bucket, contact Amazon Web Services Support.
|
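The default-retention rule described above can be sketched as follows (bucket name, mode, and period are placeholders; the bucket must have been created with Object Lock enabled):

```csharp
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class LockConfigExample
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

        // Default every new object in the bucket to 30 days of Governance-mode
        // retention. Note: specify Days or Years, never both.
        await client.PutObjectLockConfigurationAsync(new PutObjectLockConfigurationRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            ObjectLockConfiguration = new ObjectLockConfiguration
            {
                ObjectLockEnabled = ObjectLockEnabled.Enabled,
                Rule = new ObjectLockRule
                {
                    DefaultRetention = new DefaultRetention
                    {
                        Mode = ObjectLockRetentionMode.Governance,
                        Days = 30
                    }
                }
            }
        });
    }
}
```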
![]() |
PutObjectRetention(PutObjectRetentionRequest) |
Places an Object Retention configuration on an object. For more information, see Locking Objects.
Users or accounts require the s3:PutObjectRetention permission in order to place an Object Retention configuration on an object. Bypassing a Governance-mode retention setting additionally requires the s3:BypassGovernanceRetention permission. This action is not supported by Amazon S3 on Outposts. |
![]() |
PutObjectRetentionAsync(PutObjectRetentionRequest, CancellationToken) |
Places an Object Retention configuration on an object. For more information, see Locking Objects.
Users or accounts require the s3:PutObjectRetention permission in order to place an Object Retention configuration on an object. Bypassing a Governance-mode retention setting additionally requires the s3:BypassGovernanceRetention permission. This action is not supported by Amazon S3 on Outposts. |
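A minimal sketch of setting a retain-until date on a single object (bucket name, key, mode, and date are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class RetentionExample
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // region is a placeholder

        // Protect one object from deletion or overwrite for 90 days.
        // Compliance mode cannot be shortened or removed once set;
        // Governance mode can be bypassed with s3:BypassGovernanceRetention.
        await client.PutObjectRetentionAsync(new PutObjectRetentionRequest
        {
            BucketName = "amzn-s3-demo-bucket", // placeholder
            Key = "audit/report.pdf",           // placeholder
            Retention = new ObjectLockRetention
            {
                Mode = ObjectLockRetentionMode.Governance,
                RetainUntilDate = DateTime.UtcNow.AddDays(90)
            }
        });
    }
}
```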
![]() |
PutObjectTagging(PutObjectTaggingRequest) | |
![]() |
PutObjectTaggingAsync(PutObjectTaggingRequest, CancellationToken) | |
![]() |
PutPublicAccessBlock(PutPublicAccessBlockRequest) | |
![]() |
PutPublicAccessBlockAsync(PutPublicAccessBlockRequest, CancellationToken) | |
![]() |
RestoreObject(string, string) | |
![]() |
RestoreObject(string, string, int) | |
![]() |
RestoreObject(string, string, string) | |
![]() |
RestoreObject(string, string, string, int) | |
![]() |
RestoreObject(RestoreObjectRequest) | |
![]() |
RestoreObjectAsync(string, string, CancellationToken) | |
![]() |
RestoreObjectAsync(string, string, int, CancellationToken) | |
![]() |
RestoreObjectAsync(string, string, string, CancellationToken) | |
![]() |
RestoreObjectAsync(string, string, string, int, CancellationToken) | |
![]() |
RestoreObjectAsync(RestoreObjectRequest, CancellationToken) | |
![]() |
SelectObjectContent(SelectObjectContentRequest) | |
![]() |
SelectObjectContentAsync(SelectObjectContentRequest, CancellationToken) | |
![]() |
UploadPart(UploadPartRequest) | |
![]() |
UploadPartAsync(UploadPartRequest, CancellationToken) | |
![]() |
WriteGetObjectResponse(WriteGetObjectResponseRequest) |
Passes transformed objects to a GetObject operation when using Object Lambda access points. For information about Object Lambda access points, see Transforming objects with Object Lambda access points in the Amazon S3 User Guide.
This operation supports metadata that can be returned by GetObject, in addition to RequestRoute, RequestToken, StatusCode, ErrorCode, and ErrorMessage. The GetObject response metadata is supported so that the WriteGetObjectResponse caller, typically a Lambda function, can provide the same metadata when it internally invokes WriteGetObjectResponse.
You can include any number of metadata headers. When including a metadata header, it should be prefaced with x-amz-meta; for example, x-amz-meta-my-custom-header: MyCustomValue. The primary use case for this is to forward GetObject metadata.
Amazon Web Services provides some prebuilt Lambda functions that you can use with S3 Object Lambda to detect and redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions are available in the Amazon Web Services Serverless Application Repository and can be selected through the Amazon Web Services Management Console when you create your Object Lambda access point.
Example 1: PII Access Control - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. It automatically detects personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers in documents in your Amazon S3 bucket.
Example 2: PII Redaction - This Lambda function also uses Amazon Comprehend, but automatically redacts the detected PII from documents in your Amazon S3 bucket rather than only detecting it.
Example 3: Decompression - The Lambda function S3ObjectLambdaDecompression can decompress objects stored in S3 in one of six compressed file formats: bzip2, gzip, snappy, zlib, zstandard, and ZIP.
For information on how to view and use these functions, see Using Amazon Web Services built Lambda functions in the Amazon S3 User Guide. |
![]() |
WriteGetObjectResponseAsync(WriteGetObjectResponseRequest, CancellationToken) |
Passes transformed objects to a GetObject operation when using Object Lambda access points. For information about Object Lambda access points, see Transforming objects with Object Lambda access points in the Amazon S3 User Guide.
This operation supports metadata that can be returned by GetObject, in addition to RequestRoute, RequestToken, StatusCode, ErrorCode, and ErrorMessage. The GetObject response metadata is supported so that the WriteGetObjectResponse caller, typically a Lambda function, can provide the same metadata when it internally invokes WriteGetObjectResponse.
You can include any number of metadata headers. When including a metadata header, it should be prefaced with x-amz-meta; for example, x-amz-meta-my-custom-header: MyCustomValue. The primary use case for this is to forward GetObject metadata.
Amazon Web Services provides some prebuilt Lambda functions that you can use with S3 Object Lambda to detect and redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions are available in the Amazon Web Services Serverless Application Repository and can be selected through the Amazon Web Services Management Console when you create your Object Lambda access point.
Example 1: PII Access Control - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. It automatically detects personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers in documents in your Amazon S3 bucket.
Example 2: PII Redaction - This Lambda function also uses Amazon Comprehend, but automatically redacts the detected PII from documents in your Amazon S3 bucket rather than only detecting it.
Example 3: Decompression - The Lambda function S3ObjectLambdaDecompression can decompress objects stored in S3 in one of six compressed file formats: bzip2, gzip, snappy, zlib, zstandard, and ZIP.
For information on how to view and use these functions, see Using Amazon Web Services built Lambda functions in the Amazon S3 User Guide. |
Name | Description | |
---|---|---|
![]() |
AfterResponseEvent | Inherited from Amazon.Runtime.AmazonServiceClient. |
![]() |
BeforeRequestEvent | Inherited from Amazon.Runtime.AmazonServiceClient. |
![]() |
ExceptionEvent | Inherited from Amazon.Runtime.AmazonServiceClient. |
.NET Core App:
Supported in: 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5, 4.0, 3.5