Class S3

Constructors

constructor

Properties

Readonly config

The resolved configuration of the S3Client class, normalized from the configuration interface passed to the constructor.

middlewareStack
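
A minimal sketch of constructing the aggregated client and inspecting these two properties (the region value is a placeholder):

    import { S3 } from "@aws-sdk/client-s3";

    // The constructor input is resolved and normalized into `config`.
    const s3 = new S3({ region: "us-east-1" });

    // In the resolved configuration, `region` is an async provider function.
    s3.config.region().then((region) => console.log(region)); // "us-east-1"

    // Every request sent through the client passes through `middlewareStack`,
    // which can be inspected or extended.
    console.log(typeof s3.middlewareStack.add); // "function"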

Methods

abortMultipartUpload

  • This action aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

    To verify that all parts have been removed, so you don't get charged for the part storage, you should call the ListParts action and ensure that the parts list is empty.

    For information about permissions required to use the multipart upload, see Multipart Upload and Permissions.

    The following operations are related to AbortMultipartUpload:

    Parameters

    • args: AbortMultipartUploadCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<AbortMultipartUploadCommandOutput>

    Overloads of this method that accept a callback as the final argument return void instead of a Promise.
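
    As a minimal sketch of the Promise form (bucket, key, and upload ID are placeholders), an abort can be followed by the ListParts check described above:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      async function abortAndVerify(bucket: string, key: string, uploadId: string): Promise<void> {
        await s3.abortMultipartUpload({ Bucket: bucket, Key: key, UploadId: uploadId });

        // Verify that no parts remain, so you are not charged for part storage.
        try {
          const { Parts } = await s3.listParts({ Bucket: bucket, Key: key, UploadId: uploadId });
          if (Parts && Parts.length > 0) {
            // Parts that were in flight finished after the abort; abort again to free them.
            await s3.abortMultipartUpload({ Bucket: bucket, Key: key, UploadId: uploadId });
          }
        } catch (err) {
          // A NoSuchUpload error means the upload is already fully removed, which is the goal here.
          if ((err as Error).name !== "NoSuchUpload") throw err;
        }
      }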

completeMultipartUpload

  • Completes a multipart upload by assembling previously uploaded parts.

    You first initiate the multipart upload and then upload all parts using the UploadPart operation. After successfully uploading all relevant parts of an upload, you call this action to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the Complete Multipart Upload request, you must provide the parts list. You must ensure that the parts list is complete. This action concatenates the parts that you provide in the list. For each part in the list, you must provide the part number and the ETag value, returned after that part was uploaded.

    Processing of a Complete Multipart Upload request could take several minutes to complete. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white space characters to keep the connection from timing out. Because a request could fail after the initial 200 OK response has been sent, it is important that you check the response body to determine whether the request succeeded.

    Note that if CompleteMultipartUpload fails, applications should be prepared to retry the failed requests. For more information, see Amazon S3 Error Best Practices.

    You cannot use Content-Type: application/x-www-form-urlencoded with Complete Multipart Upload requests. Also, if you do not provide a Content-Type header, CompleteMultipartUpload returns a 200 OK response.

    For more information about multipart uploads, see Uploading Objects Using Multipart Upload.

    For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    CompleteMultipartUpload has the following special errors:

    • Error code: EntityTooSmall

      • Description: Your proposed upload is smaller than the minimum allowed object size. Each part must be at least 5 MB in size, except the last part.

      • 400 Bad Request

    • Error code: InvalidPart

      • Description: One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag.

      • 400 Bad Request

    • Error code: InvalidPartOrder

      • Description: The list of parts was not in ascending order. The parts list must be specified in order by part number.

      • 400 Bad Request

    • Error code: NoSuchUpload

      • Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.

      • 404 Not Found

    The following operations are related to CompleteMultipartUpload:

    • CreateMultipartUpload

    • UploadPart

    • AbortMultipartUpload

    • ListParts

    • ListMultipartUploads

    Parameters

    • args: CompleteMultipartUploadCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<CompleteMultipartUploadCommandOutput>

    Overloads of this method that accept a callback as the final argument return void instead of a Promise.
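
    A minimal sketch of the full flow this method completes, using the Promise overloads (bucket, key, and data are placeholders); a failed upload is aborted so its parts do not keep consuming storage:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      async function multipartPut(bucket: string, key: string, chunks: Uint8Array[]): Promise<void> {
        // Each chunk except the last must be at least 5 MB, or CompleteMultipartUpload
        // rejects the request with EntityTooSmall.
        const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key });
        try {
          const parts = [];
          for (let i = 0; i < chunks.length; i++) {
            const { ETag } = await s3.uploadPart({
              Bucket: bucket,
              Key: key,
              UploadId,
              PartNumber: i + 1, // part numbers start at 1
              Body: chunks[i],
            });
            parts.push({ ETag, PartNumber: i + 1 });
          }
          // The parts list must be complete and in ascending order by part number.
          await s3.completeMultipartUpload({
            Bucket: bucket,
            Key: key,
            UploadId,
            MultipartUpload: { Parts: parts },
          });
        } catch (err) {
          // Free the storage consumed by any uploaded parts if the upload cannot be completed.
          await s3.abortMultipartUpload({ Bucket: bucket, Key: key, UploadId });
          throw err;
        }
      }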

copyObject

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the Etag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

         <p>If both the <code>x-amz-copy-source-if-none-match</code> and
            <code>x-amz-copy-source-if-modified-since</code> headers are present in the request and
         evaluate as follows, Amazon S3 returns the <code>412 Precondition Failed</code> response
         code:</p>
         <ul>
            <li>
               <p>
                  <code>x-amz-copy-source-if-none-match</code> condition evaluates to false</p>
            </li>
            <li>
               <p>
                  <code>x-amz-copy-source-if-modified-since</code> condition evaluates to
               true</p>
            </li>
         </ul>
    
         <note>
            <p>All headers with the <code>x-amz-</code> prefix, including
               <code>x-amz-copy-source</code>, must be signed.</p>
         </note>
         <p>
            <b>Server-side encryption</b>
         </p>
         <p>When you perform a CopyObject operation, you can optionally use the appropriate encryption-related
         headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys
         (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3
         encrypts your data as it writes it to disks in its data centers and decrypts the data when
         you access it. For more information about server-side encryption, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html">Using
         Server-Side Encryption</a>.</p>
         <p>If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more
         information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html">Amazon S3 Bucket Keys</a> in the <i>Amazon S3 User Guide</i>.</p>
         <p>
            <b>Access Control List (ACL)-Specific Request
         Headers</b>
         </p>
         <p>When copying an object, you can optionally use headers to grant ACL-based permissions.
         By default, all objects are private. Only the owner has full access control. When adding a
         new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups
         defined by Amazon S3. These permissions are then added to the ACL on the object. For more
         information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html">Access Control List (ACL) Overview</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-using-rest-api.html">Managing ACLs Using the REST
            API</a>. </p>
         <p>If the bucket that you're copying objects to uses the bucket owner enforced setting for
         S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that
         use this setting only accept PUT requests that don't specify an ACL or PUT requests that
         specify bucket owner full control ACLs, such as the <code>bucket-owner-full-control</code> canned
         ACL or an equivalent form of this ACL expressed in the XML format.</p>
         <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html"> Controlling ownership of
         objects and disabling ACLs</a> in the <i>Amazon S3 User Guide</i>.</p>
         <note>
            <p>If your bucket uses the bucket owner enforced setting for Object Ownership,
            all objects written to the bucket by any account will be owned by the bucket owner.</p>
         </note>
         <p>
            <b>Checksums</b>
         </p>
         <p>When copying an object, if it has a checksum, that checksum will be copied to the new object
           by default. When you copy the object over, you may optionally specify a different checksum
           algorithm to use with the <code>x-amz-checksum-algorithm</code> header.</p>
         <p>
            <b>Storage Class Options</b>
         </p>
         <p>You can use the <code>CopyObject</code> action to change the storage class of an
         object that is already stored in Amazon S3 using the <code>StorageClass</code> parameter. For
         more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html">Storage
            Classes</a> in the <i>Amazon S3 User Guide</i>.</p>
         <p>
            <b>Versioning</b>
         </p>
         <p>By default, <code>x-amz-copy-source</code> identifies the current version of an object
         to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was
         deleted. To copy a different version, use the <code>versionId</code> subresource.</p>
         <p>If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for
         the object being copied. This version ID is different from the version ID of the source
         object. Amazon S3 returns the version ID of the copied object in the
            <code>x-amz-version-id</code> response header in the response.</p>
         <p>If you do not enable versioning or suspend it on the target bucket, the version ID that
         Amazon S3 generates is always null.</p>
         <p>If the source object's storage class is GLACIER, you must restore a copy of this object
         before you can use it as a source object for the copy operation. For more information, see
            <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html">RestoreObject</a>.</p>
         <p>The following operations are related to <code>CopyObject</code>:</p>
         <ul>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html">PutObject</a>
               </p>
            </li>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html">GetObject</a>
               </p>
            </li>
         </ul>
         <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectsExamples.html">Copying
            Objects</a>.</p>
    

    Parameters

    Returns Promise<CopyObjectCommandOutput>

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the Etag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

    All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

    Server-side encryption

    When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

    If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Access Control List (ACL)-Specific Request Headers

    When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

    If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.

    For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    Checksums

    When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

    Storage Class Options

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

    Versioning

    By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.

    If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

    If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

    The following operations are related to CopyObject:

    For more information, see Copying Objects.

    Parameters

    Returns void

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the Etag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

    All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

    Server-side encryption

    When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

    If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Access Control List (ACL)-Specific Request Headers

    When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

    If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.

    For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    Checksums

    When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

    Storage Class Options

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

    Versioning

    By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.

    If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

    If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

    The following operations are related to CopyObject:

    For more information, see Copying Objects.

    Parameters

    Returns void

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the Etag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

         <p>If both the <code>x-amz-copy-source-if-none-match</code> and
            <code>x-amz-copy-source-if-modified-since</code> headers are present in the request and
         evaluate as follows, Amazon S3 returns the <code>412 Precondition Failed</code> response
         code:</p>
         <ul>
            <li>
               <p>
                  <code>x-amz-copy-source-if-none-match</code> condition evaluates to false</p>
            </li>
            <li>
               <p>
                  <code>x-amz-copy-source-if-modified-since</code> condition evaluates to
               true</p>
            </li>
         </ul>
    
         <note>
            <p>All headers with the <code>x-amz-</code> prefix, including
               <code>x-amz-copy-source</code>, must be signed.</p>
         </note>
    Server-side encryption

    When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

    If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Access Control List (ACL)-Specific Request Headers

    When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

    If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.

    For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    Checksums

    When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.
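    One possible way to combine these options is sketched below (bucket, key, and the KMS key ARN are placeholders); ServerSideEncryption, SSEKMSKeyId, BucketKeyEnabled, and ChecksumAlgorithm correspond to the encryption and checksum headers described above:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });

      await s3.copyObject({
        Bucket: "amzn-s3-demo-bucket",
        Key: "encrypted/report.csv",
        CopySource: "amzn-s3-demo-bucket/report.csv",
        ServerSideEncryption: "aws:kms",   // maps to x-amz-server-side-encryption
        SSEKMSKeyId: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID", // placeholder key ARN
        BucketKeyEnabled: true,            // enable an S3 Bucket Key for the target object
        ChecksumAlgorithm: "SHA256",       // maps to x-amz-checksum-algorithm
      });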
    Storage Class Options

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.
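    For example, a storage-class transition can be performed by copying an object onto itself and setting StorageClass; this is a sketch with placeholder names, not the only way to change an object's storage class:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });

      // Copy the object onto itself, changing only its storage class.
      await s3.copyObject({
        Bucket: "amzn-s3-demo-bucket",
        Key: "logs/archive-2022.log",
        CopySource: "amzn-s3-demo-bucket/logs/archive-2022.log",
        StorageClass: "STANDARD_IA",   // maps to x-amz-storage-class
        MetadataDirective: "COPY",     // keep the existing metadata
      });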
    Versioning

    By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.

    If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

    If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.
    The following operations are related to CopyObject:

    • PutObject

    • GetObject

    For more information, see Copying Objects.

    Parameters

    • args: CopyObjectCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<CopyObjectCommandOutput>

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

    All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

    Server-side encryption

    When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

    If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Access Control List (ACL)-Specific Request Headers

    When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

    If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.

    For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    Checksums

    When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

    Storage Class Options

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

    Versioning

    By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.

    If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

    If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

    The following operations are related to CopyObject:

    • PutObject

    • GetObject

    For more information, see Copying Objects.

    Parameters

    Returns void

  • Creates a copy of an object that is already stored in Amazon S3.

    You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.

    All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.

    A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.

    If the copy is successful, you receive a response with information about the copied object.

    If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.

    The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing.

    Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

    Metadata

    When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.

    To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.

    x-amz-copy-source-if Headers

    To only copy an object under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the following request parameters:

    • x-amz-copy-source-if-match

    • x-amz-copy-source-if-none-match

    • x-amz-copy-source-if-unmodified-since

    • x-amz-copy-source-if-modified-since

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

    All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.

    Server-side encryption

    When you perform a CopyObject operation, you can optionally use the appropriate encryption-related headers to encrypt the object using server-side encryption with Amazon Web Services managed encryption keys (SSE-S3 or SSE-KMS) or a customer-provided encryption key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption.

    If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Access Control List (ACL)-Specific Request Headers

    When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.

    If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.

    For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    Checksums

    When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.

    Storage Class Options

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.

    Versioning

    By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.

    If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.

    If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject.

    The following operations are related to CopyObject:

    • PutObject

    • GetObject

    For more information, see Copying Objects.

    Parameters

    Returns void

createBucket

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.
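    As an illustration, the following sketch creates a bucket outside us-east-1 by supplying a LocationConstraint and also enables S3 Object Lock, which requires the additional permissions listed under Permissions below (the bucket name and Region are placeholders):

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "eu-west-1" });

      const { Location } = await s3.createBucket({
        Bucket: "amzn-s3-demo-bucket",   // placeholder; must satisfy the bucket naming rules
        // Maps to the LocationConstraint in the request body; omit it for us-east-1.
        CreateBucketConfiguration: { LocationConstraint: "eu-west-1" },
        // Requires s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning (see Permissions).
        ObjectLockEnabledForBucket: true,
      });

      console.log(Location);   // the new bucket's path, as returned by Amazon S3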

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
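    In the SDK, the x-amz-grant-read header shown above can be expressed through the GrantRead parameter; the following sketch reuses the same placeholder IDs and is illustrative only:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });

      await s3.createBucket({
        Bucket: "amzn-s3-demo-bucket",                     // placeholder bucket name
        // Maps to the x-amz-grant-read header; grantees are placeholder account IDs.
        GrantRead: 'id="11112222333", id="444455556666"',
      });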

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    Returns Promise<CreateBucketCommandOutput>

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    Returns void

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    Returns void

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    • args: CreateBucketCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<CreateBucketCommandOutput>

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    Returns void

  • Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

    Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.

    If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.

    By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket.

    If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets.

    Access control lists (ACLs)

    When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket.

    If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.

    There are two ways to grant the appropriate permissions using the request headers.

    • Specify a canned ACL using the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Permissions

    In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:

    • ACLs - If your CreateBucket request specifies ACL permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or doesn't specify any ACLs, only s3:CreateBucket permission is needed.

    • Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, s3:PutBucketOwnershipControls permission is required.

    The following operations are related to CreateBucket:

    • PutObject

    • DeleteBucket

    Parameters

    Returns void

createMultipartUpload

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.
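    A rough sketch of that lifecycle with this client (bucket, key, and part data are placeholders, simplified to a single part) is shown below; on failure the upload is aborted so the stored parts stop accruing charges:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });
      const Bucket = "amzn-s3-demo-bucket";    // placeholder bucket
      const Key = "videos/large-upload.mp4";   // placeholder key

      // 1. Initiate the upload and capture the upload ID.
      const { UploadId } = await s3.createMultipartUpload({ Bucket, Key });

      try {
        // 2. Upload each part with the same upload ID (a single 5 MB part shown here).
        const part = await s3.uploadPart({
          Bucket,
          Key,
          UploadId,
          PartNumber: 1,
          Body: Buffer.alloc(5 * 1024 * 1024),   // placeholder part data
        });

        // 3. Complete the upload, listing every part's ETag and part number.
        await s3.completeMultipartUpload({
          Bucket,
          Key,
          UploadId,
          MultipartUpload: { Parts: [{ ETag: part.ETag, PartNumber: 1 }] },
        });
      } catch (err) {
        // 4. Abort on failure so Amazon S3 frees the space used by the uploaded parts.
        await s3.abortMultipartUpload({ Bucket, Key, UploadId });
        throw err;
      }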

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed keys stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.
            <dt>Access-Control-List (ACL)-Specific Request Headers</dt>
            <dd>
               <p>You also can use the following access control–related headers with this
                  operation. By default, all objects are private. Only the owner has full access
                  control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added
                  to the access control list (ACL) on the object. For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/S3_ACLs_UsingACLs.html">Using ACLs</a>. With this
                  operation, you can grant access permissions using one of the following two
                  methods:</p>
               <ul>
                  <li>
                     <p>Specify a canned ACL (<code>x-amz-acl</code>) — Amazon S3 supports a set of
                        predefined ACLs, known as <i>canned ACLs</i>. Each canned ACL
                        has a predefined set of grantees and permissions. For more information, see
                           <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#CannedACL">Canned
                        ACL</a>.</p>
                  </li>
                  <li>
                     <p>Specify access permissions explicitly — To explicitly grant access
                        permissions to specific Amazon Web Services accounts or groups, use the following headers.
                        Each header maps to specific permissions that Amazon S3 supports in an ACL. For
                        more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html">Access
                           Control List (ACL) Overview</a>. In the header, you specify a list of
                        grantees who get the specific permission. To grant permissions explicitly,
                        use:</p>
                     <ul>
                        <li>
                           <p>
                              <code>x-amz-grant-read</code>
                           </p>
                        </li>
                        <li>
                           <p>
                              <code>x-amz-grant-write</code>
                           </p>
                        </li>
                        <li>
                           <p>
                              <code>x-amz-grant-read-acp</code>
                           </p>
                        </li>
                        <li>
                           <p>
                              <code>x-amz-grant-write-acp</code>
                           </p>
                        </li>
                        <li>
                           <p>
                              <code>x-amz-grant-full-control</code>
                           </p>
                        </li>
                     </ul>
                     <p>You specify each grantee as a type=value pair, where the type is one of
                        the following:</p>
                     <ul>
                        <li>
                           <p>
                              <code>id</code> – if the value specified is the canonical user ID
                              of an Amazon Web Services account</p>
                        </li>
                        <li>
                           <p>
                              <code>uri</code> – if you are granting permissions to a predefined
                              group</p>
                        </li>
                        <li>
                           <p>
                              <code>emailAddress</code> – if the value specified is the email
                              address of an Amazon Web Services account</p>
                           <note>
                              <p>Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions: </p>
                              <ul>
                                 <li>
                                    <p>US East (N. Virginia)</p>
                                 </li>
                                 <li>
                                    <p>US West (N. California)</p>
                                 </li>
                                 <li>
                                    <p> US West (Oregon)</p>
                                 </li>
                                 <li>
                                    <p> Asia Pacific (Singapore)</p>
                                 </li>
                                 <li>
                                    <p>Asia Pacific (Sydney)</p>
                                 </li>
                                 <li>
                                    <p>Asia Pacific (Tokyo)</p>
                                 </li>
                                 <li>
                                    <p>Europe (Ireland)</p>
                                 </li>
                                 <li>
                                    <p>South America (São Paulo)</p>
                                 </li>
                              </ul>
                              <p>For a list of all the Amazon S3 supported Regions and endpoints, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region">Regions and Endpoints</a> in the Amazon Web Services General Reference.</p>
                           </note>
                        </li>
                     </ul>
                     <p>For example, the following <code>x-amz-grant-read</code> header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:</p>
                     <p>
                        <code>x-amz-grant-read: id="11112222333", id="444455556666" </code>
                     </p>
                  </li>
               </ul>
    
            </dd>
         </dl>
    
         <p>The following operations are related to <code>CreateMultipartUpload</code>:</p>
         <ul>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html">UploadPart</a>
               </p>
            </li>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html">CompleteMultipartUpload</a>
               </p>
            </li>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html">AbortMultipartUpload</a>
               </p>
            </li>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html">ListParts</a>
               </p>
            </li>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html">ListMultipartUploads</a>
               </p>
            </li>
         </ul>
    

    Parameters

    Returns Promise<CreateMultipartUploadCommandOutput>

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.
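
    As a rough sketch of such a rule configured with this client (the bucket name, rule ID, and seven-day window below are illustrative placeholders, not values taken from this page):

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({});

      // Abort any multipart upload that is still incomplete 7 days after initiation.
      // "my-bucket" and the rule ID are placeholder values.
      await s3.putBucketLifecycleConfiguration({
        Bucket: "my-bucket",
        LifecycleConfiguration: {
          Rules: [
            {
              ID: "abort-incomplete-multipart-uploads",
              Status: "Enabled",
              Filter: { Prefix: "" }, // apply the rule to the whole bucket
              AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
            },
          ],
        },
      });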

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
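
    A minimal sketch of that lifecycle using this client's method form, with placeholder bucket, key, and part data, and an abort in the error path so orphaned parts do not keep accruing charges:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({});

      async function uploadInParts(bucket: string, key: string, parts: Uint8Array[]) {
        // Initiate the upload; the returned UploadId ties all later requests together.
        const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key });
        if (!UploadId) throw new Error("CreateMultipartUpload returned no UploadId");

        try {
          const completed: { ETag?: string; PartNumber: number }[] = [];
          for (let i = 0; i < parts.length; i++) {
            // Part numbers start at 1; every part except the last must be at least 5 MiB.
            const { ETag } = await s3.uploadPart({
              Bucket: bucket,
              Key: key,
              UploadId,
              PartNumber: i + 1,
              Body: parts[i],
            });
            completed.push({ ETag, PartNumber: i + 1 });
          }
          // Completing (or aborting) is what stops the per-part storage charges.
          return await s3.completeMultipartUpload({
            Bucket: bucket,
            Key: key,
            UploadId,
            MultipartUpload: { Parts: completed },
          });
        } catch (err) {
          // Abort on failure so the storage used by already-uploaded parts is freed.
          await s3.abortMultipartUpload({ Bucket: bucket, Key: key, UploadId });
          throw err;
        }
      }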

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed key stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS key (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.
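
    In this client, those headers surface as input parameters rather than raw headers. A sketch of the SSE-KMS case follows; the bucket, key, KMS key ARN, and encryption context are placeholder values. For SSE-C, the corresponding parameters are SSECustomerAlgorithm, SSECustomerKey, and SSECustomerKeyMD5, and the same key material must be supplied again on every uploadPart and uploadPartCopy call for that upload.

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({});

      // SSE-KMS: these parameters are sent as the x-amz-server-side-encryption,
      // x-amz-server-side-encryption-aws-kms-key-id, and
      // x-amz-server-side-encryption-context headers.
      await s3.createMultipartUpload({
        Bucket: "my-bucket",
        Key: "reports/2023.csv",
        ServerSideEncryption: "aws:kms",
        // Omit SSEKMSKeyId to fall back to the Amazon Web Services managed key in KMS.
        SSEKMSKeyId: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        // The encryption context is a base64-encoded JSON document.
        SSEKMSEncryptionContext: Buffer.from(JSON.stringify({ team: "analytics" })).toString("base64"),
      });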

    Access-Control-List (ACL)-Specific Request Headers

    You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:

    • Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:

      • x-amz-grant-read

      • x-amz-grant-write

      • x-amz-grant-read-acp

      • x-amz-grant-write-acp

      • x-amz-grant-full-control

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"
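
    With this client, the canned ACL maps to the ACL parameter and the x-amz-grant-* headers map to the Grant* parameters, using the same type=value grantee syntax shown above. A sketch with placeholder bucket, key, and canonical user IDs:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({});

      // Canned ACL: sent as the x-amz-acl header.
      await s3.createMultipartUpload({
        Bucket: "my-bucket",
        Key: "shared/report.csv",
        ACL: "bucket-owner-full-control",
      });

      // Explicit grants: sent as the x-amz-grant-* headers. A request should use
      // either ACL or the Grant* parameters, not both. The canonical user IDs
      // below are placeholder values.
      await s3.createMultipartUpload({
        Bucket: "my-bucket",
        Key: "shared/report.csv",
        GrantRead: 'id="EXAMPLE-CANONICAL-USER-ID-1", id="EXAMPLE-CANONICAL-USER-ID-2"',
        GrantReadACP: 'id="EXAMPLE-CANONICAL-USER-ID-1"',
      });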

    The following operations are related to CreateMultipartUpload:

    Parameters

    Returns void

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed key stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS key (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    Access-Control-List (ACL)-Specific Request Headers

    You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:

    • Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:

      • x-amz-grant-read

      • x-amz-grant-write

      • x-amz-grant-read-acp

      • x-amz-grant-write-acp

      • x-amz-grant-full-control

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    The following operations are related to CreateMultipartUpload:

    Parameters

    Returns void

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed key stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS key (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    Access-Control-List (ACL)-Specific Request Headers

    You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:

    • Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:

      • x-amz-grant-read

      • x-amz-grant-write

      • x-amz-grant-read-acp

      • x-amz-grant-write-acp

      • x-amz-grant-full-control

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    The following operations are related to CreateMultipartUpload:

    • UploadPart

    • CompleteMultipartUpload

    • AbortMultipartUpload

    • ListParts

    • ListMultipartUploads

    Parameters

    • args: CreateMultipartUploadCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<CreateMultipartUploadCommandOutput>

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed key stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS key (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    Access-Control-List (ACL)-Specific Request Headers

    You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:

    • Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:

      • x-amz-grant-read

      • x-amz-grant-write

      • x-amz-grant-read-acp

      • x-amz-grant-write-acp

      • x-amz-grant-full-control

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    The following operations are related to CreateMultipartUpload:

    Parameters

    Returns void

  • This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.

    For more information about multipart uploads, see Multipart Upload Overview.

    If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.

    For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.

    For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).

    After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

    You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use Amazon Web Services KMS keys or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload.

    To perform a multipart upload with encryption using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions in the Amazon S3 User Guide.

    If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.

    For more information, see Protecting Data Using Server-Side Encryption.

    Access Permissions

    When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

    • Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.

    • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

    Server-Side-Encryption-Specific Request Headers

    You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use Amazon Web Services managed encryption keys or provide your own encryption key.

    • Use encryption keys managed by Amazon S3 or customer managed key stored in Amazon Web Services Key Management Service (Amazon Web Services KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

      If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key in Amazon Web Services KMS to protect the data.

      All GET and PUT requests for an object protected by Amazon Web Services KMS fail if you don't make them with SSL or by using SigV4.

      For more information about server-side encryption with KMS key (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.

    • Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.

      • x-amz-server-side-encryption-customer-algorithm

      • x-amz-server-side-encryption-customer-key

      • x-amz-server-side-encryption-customer-key-MD5

      For more information about server-side encryption with customer-provided encryption keys (SSE-C), see Protecting Data Using Server-Side Encryption with Customer-Provided Encryption Keys.

    Access-Control-List (ACL)-Specific Request Headers

    You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:

    • Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.

    • Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:

      • x-amz-grant-read

      • x-amz-grant-write

      • x-amz-grant-read-acp

      • x-amz-grant-write-acp

      • x-amz-grant-full-control

      You specify each grantee as a type=value pair, where the type is one of the following:

      • id – if the value specified is the canonical user ID of an Amazon Web Services account

      • uri – if you are granting permissions to a predefined group

      • emailAddress – if the value specified is the email address of an Amazon Web Services account

        Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

        • US East (N. Virginia)

        • US West (N. California)

        • US West (Oregon)

        • Asia Pacific (Singapore)

        • Asia Pacific (Sydney)

        • Asia Pacific (Tokyo)

        • Europe (Ireland)

        • South America (São Paulo)

        For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

      x-amz-grant-read: id="11112222333", id="444455556666"

    The following operations are related to CreateMultipartUpload:

    Parameters

    Returns void
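
    As an illustrative sketch (not part of the generated reference), the call below initiates a multipart upload through the aggregated client with SSE-KMS and an explicit read grant; the region, bucket, key, KMS key alias, and account ID are placeholder values, and the input fields map to the request headers described above:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // The returned UploadId is passed to UploadPart, CompleteMultipartUpload,
      // and AbortMultipartUpload for the rest of the multipart flow.
      const { UploadId } = await s3.createMultipartUpload({
        Bucket: "example-bucket",         // placeholder
        Key: "example-object",            // placeholder
        ServerSideEncryption: "aws:kms",  // x-amz-server-side-encryption
        SSEKMSKeyId: "alias/example-key", // x-amz-server-side-encryption-aws-kms-key-id (placeholder)
        GrantRead: 'id="111122223333"',   // x-amz-grant-read (placeholder account ID)
      });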

deleteBucket

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    • CreateBucket

    • DeleteObject

    Parameters

    Returns Promise<DeleteBucketCommandOutput>

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    Parameters

    Returns void

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    Parameters

    Returns void

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    • CreateBucket

    • DeleteObject

    Parameters

    • args: DeleteBucketCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteBucketCommandOutput>

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    Parameters

    Returns void

  • Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

    Related Resources

    Parameters

    Returns void
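
    A minimal sketch (not part of the generated reference) of calling this operation through the aggregated client; the region and bucket name are placeholders, and the bucket must already be empty:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // Fails unless all object versions and delete markers have been removed first.
      await s3.deleteBucket({ Bucket: "example-bucket" }); // placeholder bucket name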

deleteBucketAnalyticsConfiguration

deleteBucketCors

deleteBucketEncryption

deleteBucketIntelligentTieringConfiguration

deleteBucketInventoryConfiguration

deleteBucketLifecycle

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    • PutBucketLifecycleConfiguration

    • GetBucketLifecycleConfiguration

    Parameters

    Returns Promise<DeleteBucketLifecycleCommandOutput>

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    Parameters

    Returns void

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    Parameters

    Returns void

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    • PutBucketLifecycleConfiguration

    • GetBucketLifecycleConfiguration

    Parameters

    • args: DeleteBucketLifecycleCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteBucketLifecycleCommandOutput>

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    Parameters

    Returns void

  • Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

    To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others.

    There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.

    For more information about the object expiration, see Elements to Describe Lifecycle Actions.

    Related actions include:

    Parameters

    Returns void
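
    A minimal sketch (not part of the generated reference), with placeholder region and bucket name:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // Removes every rule in the bucket's lifecycle configuration; the deletion
      // may take some time to propagate across Amazon S3 systems.
      await s3.deleteBucketLifecycle({ Bucket: "example-bucket" }); // placeholder bucket name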

deleteBucketMetricsConfiguration

deleteBucketOwnershipControls

deleteBucketPolicy

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    • CreateBucket

    • DeleteObject

    Parameters

    Returns Promise<DeleteBucketPolicyCommandOutput>

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    Parameters

    Returns void

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    Parameters

    Returns void

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    • CreateBucket

    • DeleteObject

    Parameters

    • args: DeleteBucketPolicyCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteBucketPolicyCommandOutput>

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    Parameters

    Returns void

  • This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account to use this operation.

    If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

    As a security precaution, the root user of the Amazon Web Services account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

    For more information about bucket policies, see Using Bucket Policies and User Policies.

    The following operations are related to DeleteBucketPolicy:

    Parameters

    Returns void
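
    A minimal sketch (not part of the generated reference), with placeholder region and bucket name:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // The calling identity must have DeleteBucketPolicy permission and belong to
      // the bucket owner's account; otherwise S3 returns 403 or 405 as noted above.
      await s3.deleteBucketPolicy({ Bucket: "example-bucket" }); // placeholder bucket name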

deleteBucketReplication

deleteBucketTagging

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    • GetBucketTagging

    • PutBucketTagging

    Parameters

    Returns Promise<DeleteBucketTaggingCommandOutput>

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    Parameters

    Returns void

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    Parameters

    Returns void

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    • GetBucketTagging

    • PutBucketTagging

    Parameters

    • args: DeleteBucketTaggingCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteBucketTaggingCommandOutput>

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    Parameters

    Returns void

  • Deletes the tags from the bucket.

    To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

    The following operations are related to DeleteBucketTagging:

    Parameters

    Returns void
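
    A minimal sketch (not part of the generated reference), with placeholder region and bucket name:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // Requires the s3:PutBucketTagging permission on the bucket.
      await s3.deleteBucketTagging({ Bucket: "example-bucket" }); // placeholder bucket name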

deleteBucketWebsite

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    • GetBucketWebsite

    • PutBucketWebsite

    Parameters

    Returns Promise<DeleteBucketWebsiteCommandOutput>

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    Parameters

    Returns void

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    Parameters

    Returns void

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    • GetBucketWebsite

    • PutBucketWebsite

    Parameters

    • args: DeleteBucketWebsiteCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteBucketWebsiteCommandOutput>

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    Parameters

    Returns void

  • This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

    This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

    For more information about hosting websites, see Hosting Websites on Amazon S3.

    The following operations are related to DeleteBucketWebsite:

    Parameters

    Returns void
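
    A minimal sketch (not part of the generated reference), with placeholder region and bucket name:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // Returns 200 OK even if the bucket has no website configuration to delete.
      await s3.deleteBucketWebsite({ Bucket: "example-bucket" }); // placeholder bucket name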

deleteObject

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    • PutObject

    Parameters

    Returns Promise<DeleteObjectCommandOutput>

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    Parameters

    Returns void

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    Parameters

    Returns void

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    • PutObject

    Parameters

    • args: DeleteObjectCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteObjectCommandOutput>

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    Parameters

    Returns void

  • Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.

    To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

    For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

    You can delete objects by explicitly calling DELETE Object or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

    The following action is related to DeleteObject:

    Parameters

    Returns void
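
    A minimal sketch (not part of the generated reference) of both forms of the call; the region, bucket, key, and version ID are placeholders:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // On a versioning-enabled bucket, omitting VersionId inserts a delete marker.
      await s3.deleteObject({ Bucket: "example-bucket", Key: "example-object" });

      // Supplying VersionId permanently deletes that specific version; DeleteMarker
      // in the response is true when the deleted version was a delete marker.
      const { DeleteMarker } = await s3.deleteObject({
        Bucket: "example-bucket",
        Key: "example-object",
        VersionId: "example-version-id", // placeholder
      });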

deleteObjectTagging

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    • PutObjectTagging

    • GetObjectTagging

    Parameters

    Returns Promise<DeleteObjectTaggingCommandOutput>

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    Parameters

    Returns void

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    Parameters

    Returns void

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    • PutObjectTagging

    • GetObjectTagging

    Parameters

    • args: DeleteObjectTaggingCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteObjectTaggingCommandOutput>

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    Parameters

    Returns void

  • Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

    To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

    To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

    The following operations are related to DeleteObjectTagging:

    Parameters

    Returns void
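
    A minimal sketch (not part of the generated reference); the region, bucket, key, and version ID are placeholders:

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" }); // placeholder region

      // Omit VersionId to clear the tag set of the current version; supplying it
      // targets a specific version and requires s3:DeleteObjectVersionTagging.
      await s3.deleteObjectTagging({
        Bucket: "example-bucket",        // placeholder
        Key: "example-object",           // placeholder
        VersionId: "example-version-id", // placeholder
      });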

deleteObjects

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    If you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    • CreateMultipartUpload

    • UploadPart

    • CompleteMultipartUpload

    • ListParts

    • AbortMultipartUpload

    Parameters

    Returns Promise<DeleteObjectsCommandOutput>

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    If you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    Parameters

    Returns void

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    If you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    Parameters

    Returns void

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    If you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    • CreateMultipartUpload

    • UploadPart

    • CompleteMultipartUpload

    • ListParts

    • AbortMultipartUpload

    Parameters

    • args: DeleteObjectsCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<DeleteObjectsCommandOutput>

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    If you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    Parameters

    Returns void

  • This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

    The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names and, optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete (success or failure) in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.

    When performing this action on an MFA Delete-enabled bucket and attempting to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, the entire Multi-Object Delete request will fail, whether or not the request contains versioned keys. For information about MFA Delete, see MFA Delete.

    Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

    The following operations are related to DeleteObjects:

    Parameters

    Returns void
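
    The sketch below is a minimal illustration of the promise-style overload described above, using the aggregated S3 client from @aws-sdk/client-s3. The region, bucket name, object keys, and version ID are placeholders, not values taken from this reference.

      import { S3 } from "@aws-sdk/client-s3";

      // All identifiers below are placeholders for illustration only.
      const s3 = new S3({ region: "us-east-1" });

      const result = await s3.deleteObjects({
        Bucket: "example-bucket",
        Delete: {
          Objects: [
            { Key: "logs/2021-01-01.log" },
            // Supplying VersionId targets a specific version in a versioning-enabled bucket.
            { Key: "reports/q1.csv", VersionId: "EXAMPLE-VERSION-ID" },
          ],
          // Quiet mode: the response reports only keys whose deletion failed.
          Quiet: true,
        },
      });

      // With Quiet enabled, successful deletions are omitted; failures appear in Errors.
      console.log(result.Errors ?? "all requested objects were deleted");

    In the JavaScript SDK the required request-body checksum is normally added by client middleware, so Content-MD5 does not appear in the input shape.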

deletePublicAccessBlock

destroy

  • destroy(): void
  • Destroy underlying resources, like sockets. It's usually not necessary to do this. However, in Node.js, it's best to explicitly shut down the client's agent when it is no longer needed. Otherwise, sockets might stay open for quite a long time before the server terminates them.

    Returns void
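
    For example (a minimal sketch; the region is a placeholder):

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });

      // ... issue requests with s3 ...

      // When the client is no longer needed (for example, at process shutdown in Node.js),
      // release its underlying sockets explicitly.
      s3.destroy();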

getBucketAccelerateConfiguration

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    • PutBucketAccelerateConfiguration

    A usage sketch appears after the overload listings for this method.

    Parameters

    Returns Promise<GetBucketAccelerateConfigurationCommandOutput>

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    • PutBucketAccelerateConfiguration

    Parameters

    • args: GetBucketAccelerateConfigurationCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<GetBucketAccelerateConfigurationCommandOutput>

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

    To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

    You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

    A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

    For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void
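
    The sketch referenced above assumes the aggregated S3 client from @aws-sdk/client-s3; the region and bucket name are placeholders, and the caller is assumed to hold the s3:GetAccelerateConfiguration permission.

      import { S3 } from "@aws-sdk/client-s3";

      const s3 = new S3({ region: "us-east-1" });

      const { Status } = await s3.getBucketAccelerateConfiguration({
        Bucket: "example-bucket",
      });

      // Status is "Enabled" or "Suspended"; it is undefined when no Transfer Acceleration
      // state has ever been set on the bucket.
      console.log(Status ?? "no Transfer Acceleration state set");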

getBucketAcl

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

         <p class="title">
            <b>Related Resources</b>
         </p>
         <ul>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html">ListObjects</a>
               </p>
            </li>
         </ul>
    

    Parameters

    Returns Promise<GetBucketAclCommandOutput>

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

         <p class="title">
            <b>Related Resources</b>
         </p>
         <ul>
            <li>
               <p>
                  <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html">ListObjects</a>
               </p>
            </li>
         </ul>
    

    Parameters

    • args: GetBucketAclCommandInput
    • Optional options: __HttpHandlerOptions

    Returns Promise<GetBucketAclCommandOutput>

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

    Related Resources

    Parameters

    Returns void

  • This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

    If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

    Related Resources