Class: Aws::S3::Client
- Inherits: Seahorse::Client::Base
  - Object
  - Seahorse::Client::Base
  - Aws::S3::Client
- Includes: ClientStubs
- Defined in: gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb
Overview
An API client for S3. To construct a client, you need to configure a :region and :credentials.
client = Aws::S3::Client.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the developer guide.
See #initialize for a full list of supported configuration options.
Instance Attribute Summary
Attributes inherited from Seahorse::Client::Base
API Operations collapse
-
#abort_multipart_upload(params = {}) ⇒ Types::AbortMultipartUploadOutput
This action aborts a multipart upload.
-
#complete_multipart_upload(params = {}) ⇒ Types::CompleteMultipartUploadOutput
Completes a multipart upload by assembling previously uploaded parts.
-
#copy_object(params = {}) ⇒ Types::CopyObjectOutput
Creates a copy of an object that is already stored in Amazon S3.
-
#create_bucket(params = {}) ⇒ Types::CreateBucketOutput
Creates a new S3 bucket.
-
#create_multipart_upload(params = {}) ⇒ Types::CreateMultipartUploadOutput
This action initiates a multipart upload and returns an upload ID.
-
#delete_bucket(params = {}) ⇒ Struct
Deletes the S3 bucket.
-
#delete_bucket_analytics_configuration(params = {}) ⇒ Struct
Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).
-
#delete_bucket_cors(params = {}) ⇒ Struct
Deletes the cors configuration information set for the bucket.
-
#delete_bucket_encryption(params = {}) ⇒ Struct
This implementation of the DELETE action resets the default encryption for the bucket as server-side encryption with Amazon S3 managed keys (SSE-S3).
-
#delete_bucket_intelligent_tiering_configuration(params = {}) ⇒ Struct
Deletes the S3 Intelligent-Tiering configuration from the specified bucket.
-
#delete_bucket_inventory_configuration(params = {}) ⇒ Struct
Deletes an inventory configuration (identified by the inventory ID) from the bucket.
-
#delete_bucket_lifecycle(params = {}) ⇒ Struct
Deletes the lifecycle configuration from the specified bucket.
-
#delete_bucket_metrics_configuration(params = {}) ⇒ Struct
Deletes a metrics configuration for the Amazon CloudWatch request metrics (specified by the metrics configuration ID) from the bucket.
-
#delete_bucket_ownership_controls(params = {}) ⇒ Struct
Removes OwnershipControls for an Amazon S3 bucket.
-
#delete_bucket_policy(params = {}) ⇒ Struct
This implementation of the DELETE action uses the policy subresource to delete the policy of a specified bucket.
-
#delete_bucket_replication(params = {}) ⇒ Struct
Deletes the replication configuration from the bucket.
-
#delete_bucket_tagging(params = {}) ⇒ Struct
Deletes the tags from the bucket.
-
#delete_bucket_website(params = {}) ⇒ Struct
This action removes the website configuration for a bucket.
-
#delete_object(params = {}) ⇒ Types::DeleteObjectOutput
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object.
-
#delete_object_tagging(params = {}) ⇒ Types::DeleteObjectTaggingOutput
Removes the entire tag set from the specified object.
-
#delete_objects(params = {}) ⇒ Types::DeleteObjectsOutput
This action enables you to delete multiple objects from a bucket using a single HTTP request.
-
#delete_public_access_block(params = {}) ⇒ Struct
Removes the PublicAccessBlock configuration for an Amazon S3 bucket.
-
#get_bucket_accelerate_configuration(params = {}) ⇒ Types::GetBucketAccelerateConfigurationOutput
This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended.
-
#get_bucket_acl(params = {}) ⇒ Types::GetBucketAclOutput
This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket.
-
#get_bucket_analytics_configuration(params = {}) ⇒ Types::GetBucketAnalyticsConfigurationOutput
This implementation of the GET action returns an analytics configuration (identified by the analytics configuration ID) from the bucket.
-
#get_bucket_cors(params = {}) ⇒ Types::GetBucketCorsOutput
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
-
#get_bucket_encryption(params = {}) ⇒ Types::GetBucketEncryptionOutput
Returns the default encryption configuration for an Amazon S3 bucket.
-
#get_bucket_intelligent_tiering_configuration(params = {}) ⇒ Types::GetBucketIntelligentTieringConfigurationOutput
Gets the S3 Intelligent-Tiering configuration from the specified bucket.
-
#get_bucket_inventory_configuration(params = {}) ⇒ Types::GetBucketInventoryConfigurationOutput
Returns an inventory configuration (identified by the inventory configuration ID) from the bucket.
-
#get_bucket_lifecycle(params = {}) ⇒ Types::GetBucketLifecycleOutput
For an updated version of this API, see [GetBucketLifecycleConfiguration][1].
-
#get_bucket_lifecycle_configuration(params = {}) ⇒ Types::GetBucketLifecycleConfigurationOutput
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, or a combination of both.
-
#get_bucket_location(params = {}) ⇒ Types::GetBucketLocationOutput
Returns the Region the bucket resides in.
-
#get_bucket_logging(params = {}) ⇒ Types::GetBucketLoggingOutput
Returns the logging status of a bucket and the permissions users have to view and modify that status.
-
#get_bucket_metrics_configuration(params = {}) ⇒ Types::GetBucketMetricsConfigurationOutput
Gets a metrics configuration (specified by the metrics configuration ID) from the bucket.
-
#get_bucket_notification(params = {}) ⇒ Types::NotificationConfigurationDeprecated
No longer used, see [GetBucketNotificationConfiguration][1].
-
#get_bucket_notification_configuration(params = {}) ⇒ Types::NotificationConfiguration
Returns the notification configuration of a bucket.
-
#get_bucket_ownership_controls(params = {}) ⇒ Types::GetBucketOwnershipControlsOutput
Retrieves OwnershipControls for an Amazon S3 bucket.
-
#get_bucket_policy(params = {}) ⇒ Types::GetBucketPolicyOutput
Returns the policy of a specified bucket.
-
#get_bucket_policy_status(params = {}) ⇒ Types::GetBucketPolicyStatusOutput
Retrieves the policy status for an Amazon S3 bucket, indicating whether the bucket is public.
-
#get_bucket_replication(params = {}) ⇒ Types::GetBucketReplicationOutput
Returns the replication configuration of a bucket.
-
#get_bucket_request_payment(params = {}) ⇒ Types::GetBucketRequestPaymentOutput
Returns the request payment configuration of a bucket.
-
#get_bucket_tagging(params = {}) ⇒ Types::GetBucketTaggingOutput
Returns the tag set associated with the bucket.
-
#get_bucket_versioning(params = {}) ⇒ Types::GetBucketVersioningOutput
Returns the versioning state of a bucket.
-
#get_bucket_website(params = {}) ⇒ Types::GetBucketWebsiteOutput
Returns the website configuration for a bucket.
-
#get_object(params = {}) ⇒ Types::GetObjectOutput
Retrieves objects from Amazon S3.
-
#get_object_acl(params = {}) ⇒ Types::GetObjectAclOutput
Returns the access control list (ACL) of an object.
-
#get_object_attributes(params = {}) ⇒ Types::GetObjectAttributesOutput
Retrieves all the metadata from an object without returning the object itself.
-
#get_object_legal_hold(params = {}) ⇒ Types::GetObjectLegalHoldOutput
Gets an object's current legal hold status.
-
#get_object_lock_configuration(params = {}) ⇒ Types::GetObjectLockConfigurationOutput
Gets the Object Lock configuration for a bucket.
-
#get_object_retention(params = {}) ⇒ Types::GetObjectRetentionOutput
Retrieves an object's retention settings.
-
#get_object_tagging(params = {}) ⇒ Types::GetObjectTaggingOutput
Returns the tag-set of an object.
-
#get_object_torrent(params = {}) ⇒ Types::GetObjectTorrentOutput
Returns torrent files from a bucket.
-
#get_public_access_block(params = {}) ⇒ Types::GetPublicAccessBlockOutput
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket.
-
#head_bucket(params = {}) ⇒ Struct
This action is useful to determine if a bucket exists and you have permission to access it.
-
#head_object(params = {}) ⇒ Types::HeadObjectOutput
The HEAD action retrieves metadata from an object without returning the object itself.
-
#list_bucket_analytics_configurations(params = {}) ⇒ Types::ListBucketAnalyticsConfigurationsOutput
Lists the analytics configurations for the bucket.
-
#list_bucket_intelligent_tiering_configurations(params = {}) ⇒ Types::ListBucketIntelligentTieringConfigurationsOutput
Lists the S3 Intelligent-Tiering configuration from the specified bucket.
-
#list_bucket_inventory_configurations(params = {}) ⇒ Types::ListBucketInventoryConfigurationsOutput
Returns a list of inventory configurations for the bucket.
-
#list_bucket_metrics_configurations(params = {}) ⇒ Types::ListBucketMetricsConfigurationsOutput
Lists the metrics configurations for the bucket.
-
#list_buckets(params = {}) ⇒ Types::ListBucketsOutput
Returns a list of all buckets owned by the authenticated sender of the request.
-
#list_multipart_uploads(params = {}) ⇒ Types::ListMultipartUploadsOutput
This action lists in-progress multipart uploads.
-
#list_object_versions(params = {}) ⇒ Types::ListObjectVersionsOutput
Returns metadata about all versions of the objects in a bucket.
-
#list_objects(params = {}) ⇒ Types::ListObjectsOutput
Returns some or all (up to 1,000) of the objects in a bucket.
-
#list_objects_v2(params = {}) ⇒ Types::ListObjectsV2Output
Returns some or all (up to 1,000) of the objects in a bucket with each request.
-
#list_parts(params = {}) ⇒ Types::ListPartsOutput
Lists the parts that have been uploaded for a specific multipart upload.
-
#put_bucket_accelerate_configuration(params = {}) ⇒ Struct
Sets the accelerate configuration of an existing bucket.
-
#put_bucket_acl(params = {}) ⇒ Struct
Sets the permissions on an existing bucket using access control lists (ACL).
-
#put_bucket_analytics_configuration(params = {}) ⇒ Struct
Sets an analytics configuration for the bucket (specified by the analytics configuration ID).
-
#put_bucket_cors(params = {}) ⇒ Struct
Sets the cors configuration for your bucket.
-
#put_bucket_encryption(params = {}) ⇒ Struct
This action uses the encryption subresource to configure default encryption and Amazon S3 Bucket Keys for an existing bucket.
-
#put_bucket_intelligent_tiering_configuration(params = {}) ⇒ Struct
Puts an S3 Intelligent-Tiering configuration to the specified bucket.
-
#put_bucket_inventory_configuration(params = {}) ⇒ Struct
This implementation of the PUT action adds an inventory configuration (identified by the inventory ID) to the bucket.
-
#put_bucket_lifecycle(params = {}) ⇒ Struct
For an updated version of this API, see [PutBucketLifecycleConfiguration][1].
-
#put_bucket_lifecycle_configuration(params = {}) ⇒ Struct
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration.
-
#put_bucket_logging(params = {}) ⇒ Struct
Sets the logging parameters for a bucket and specifies permissions for who can view and modify the logging parameters.
-
#put_bucket_metrics_configuration(params = {}) ⇒ Struct
Sets a metrics configuration (specified by the metrics configuration ID) for the bucket.
-
#put_bucket_notification(params = {}) ⇒ Struct
No longer used, see the [PutBucketNotificationConfiguration][1] operation.
-
#put_bucket_notification_configuration(params = {}) ⇒ Struct
Enables notifications of specified events for a bucket.
-
#put_bucket_ownership_controls(params = {}) ⇒ Struct
Creates or modifies OwnershipControls for an Amazon S3 bucket.
-
#put_bucket_policy(params = {}) ⇒ Struct
Applies an Amazon S3 bucket policy to an Amazon S3 bucket.
-
#put_bucket_replication(params = {}) ⇒ Struct
Creates a replication configuration or replaces an existing one.
-
#put_bucket_request_payment(params = {}) ⇒ Struct
Sets the request payment configuration for a bucket.
-
#put_bucket_tagging(params = {}) ⇒ Struct
Sets the tags for a bucket.
-
#put_bucket_versioning(params = {}) ⇒ Struct
Sets the versioning state of an existing bucket.
-
#put_bucket_website(params = {}) ⇒ Struct
Sets the configuration of the website that is specified in the website subresource.
-
#put_object(params = {}) ⇒ Types::PutObjectOutput
Adds an object to a bucket.
-
#put_object_acl(params = {}) ⇒ Types::PutObjectAclOutput
Uses the acl subresource to set the access control list (ACL) permissions for a new or existing object in an S3 bucket.
-
#put_object_legal_hold(params = {}) ⇒ Types::PutObjectLegalHoldOutput
Applies a legal hold configuration to the specified object.
-
#put_object_lock_configuration(params = {}) ⇒ Types::PutObjectLockConfigurationOutput
Places an Object Lock configuration on the specified bucket.
-
#put_object_retention(params = {}) ⇒ Types::PutObjectRetentionOutput
Places an Object Retention configuration on an object.
-
#put_object_tagging(params = {}) ⇒ Types::PutObjectTaggingOutput
Sets the supplied tag-set to an object that already exists in a bucket.
-
#put_public_access_block(params = {}) ⇒ Struct
Creates or modifies the PublicAccessBlock configuration for an Amazon S3 bucket.
-
#restore_object(params = {}) ⇒ Types::RestoreObjectOutput
Restores an archived copy of an object back into Amazon S3.
-
#select_object_content(params = {}) ⇒ Types::SelectObjectContentOutput
This action filters the contents of an Amazon S3 object based on a simple structured query language (SQL) statement.
-
#upload_part(params = {}) ⇒ Types::UploadPartOutput
Uploads a part in a multipart upload.
-
#upload_part_copy(params = {}) ⇒ Types::UploadPartCopyOutput
Uploads a part by copying data from an existing object as data source.
-
#write_get_object_response(params = {}) ⇒ Struct
Passes transformed objects to a GetObject operation when using Object Lambda access points.
Instance Method Summary collapse
-
#initialize(options) ⇒ Client
constructor
A new instance of Client.
-
#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒ Boolean
Polls an API operation until a resource enters a desired state.
Methods included from ClientStubs
#api_requests, #stub_data, #stub_responses
Methods inherited from Seahorse::Client::Base
add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 471

def initialize(*args)
  super
end
Instance Method Details
#abort_multipart_upload(params = {}) ⇒ Types::AbortMultipartUploadOutput
This action aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.
To verify that all parts have been removed, so you don't get charged for the part storage, you should call the ListParts action and ensure that the parts list is empty.
For information about permissions required to use the multipart upload, see Multipart Upload and Permissions.
The following operations are related to AbortMultipartUpload:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 599

def abort_multipart_upload(params = {}, options = {})
  req = build_request(:abort_multipart_upload, params)
  req.send_request(options)
end
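For example, a minimal call sketch (the bucket, key, and upload ID values below are placeholders):
resp = client.abort_multipart_upload(
  bucket: "my-bucket",
  key: "my-key",
  upload_id: "example-upload-id",
)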
#complete_multipart_upload(params = {}) ⇒ Types::CompleteMultipartUploadOutput
Completes a multipart upload by assembling previously uploaded parts.
You first initiate the multipart upload and then upload all parts
using the UploadPart operation. After successfully uploading all
relevant parts of an upload, you call this action to complete the
upload. Upon receiving this request, Amazon S3 concatenates all the
parts in ascending order by part number to create a new object. In the
Complete Multipart Upload request, you must provide the parts list.
You must ensure that the parts list is complete. This action
concatenates the parts that you provide in the list. For each part in
the list, you must provide the part number and the ETag value,
returned after that part was uploaded.
Processing of a Complete Multipart Upload request could take several
minutes to complete. After Amazon S3 begins processing the request, it
sends an HTTP response header that specifies a 200 OK response. While
processing is in progress, Amazon S3 periodically sends white space
characters to keep the connection from timing out. A request could
fail after the initial 200 OK response has been sent. This means that
a 200 OK response can contain either a success or an error. If you
call the S3 API directly, make sure to design your application to
parse the contents of the response and handle it appropriately. If you
use Amazon Web Services SDKs, SDKs handle this condition. The SDKs
detect the embedded error and apply error handling per your
configuration settings (including automatically retrying the request
as appropriate). If the condition persists, the SDKs throw an
exception (or, for the SDKs that don't use exceptions, they return
the error).
Note that if CompleteMultipartUpload fails, applications should be
prepared to retry the failed requests. For more information, see
Amazon S3 Error Best Practices.
You cannot use Content-Type: application/x-www-form-urlencoded with
Complete Multipart Upload requests. Also, if you do not provide a
Content-Type header, CompleteMultipartUpload returns a 200 OK
response.
For more information about multipart uploads, see Uploading Objects Using Multipart Upload.
For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions.
CompleteMultipartUpload has the following special errors:
Error code: EntityTooSmall
Description: Your proposed upload is smaller than the minimum allowed object size. Each part must be at least 5 MB in size, except the last part.
HTTP Status Code: 400 Bad Request
Error code: InvalidPart
Description: One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag.
HTTP Status Code: 400 Bad Request
Error code: InvalidPartOrder
Description: The list of parts was not in ascending order. The parts list must be specified in order by part number.
HTTP Status Code: 400 Bad Request
Error code: NoSuchUpload
Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.
HTTP Status Code: 404 Not Found
The following operations are related to CompleteMultipartUpload:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 928

def complete_multipart_upload(params = {}, options = {})
  req = build_request(:complete_multipart_upload, params)
  req.send_request(options)
end
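For example, a minimal sketch that completes an upload from previously collected part ETags (the bucket, key, upload ID, and ETag values are placeholders):
resp = client.complete_multipart_upload(
  bucket: "my-bucket",
  key: "my-key",
  upload_id: "example-upload-id",
  multipart_upload: {
    parts: [
      { etag: "\"etag-of-part-1\"", part_number: 1 },
      { etag: "\"etag-of-part-2\"", part_number: 2 },
    ],
  },
)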
#copy_object(params = {}) ⇒ Types::CopyObjectOutput
Creates a copy of an object that is already stored in Amazon S3.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
A copy request might return an error when Amazon S3 receives the copy
request or while Amazon S3 is copying the files. If the error occurs
before the copy action starts, you receive a standard Amazon S3 error.
If the error occurs during the copy operation, the error response is
embedded in the 200 OK response. This means that a 200 OK response
can contain either a success or an error. If you call the S3 API
directly, make sure to design your application to parse the contents
of the response and handle it appropriately. If you use Amazon Web
Services SDKs, SDKs handle this condition. The SDKs detect the
embedded error and apply error handling per your configuration
settings (including automatically retrying the request as
appropriate). If the condition persists, the SDKs throw an exception
(or, for the SDKs that don't use exceptions, they return the error).
If the copy is successful, you receive a response with information about the copied object.
The copy request charge is based on the storage class and Region that you specify for the destination object. The request can also result in a data retrieval charge for the source if the source storage class bills for data retrieval. For pricing information, see Amazon S3 pricing.
Amazon S3 transfer acceleration does not support cross-Region copies.
If you request a cross-Region copy using a transfer acceleration
endpoint, you get a 400 Bad Request error. For more information, see
Transfer Acceleration.
- Metadata
When copying an object, you can preserve all metadata (the default) or specify new metadata. However, the access control list (ACL) is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.
To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.
x-amz-website-redirect-location is unique to each object and must be specified in the request headers to copy the value.
- x-amz-copy-source-if Headers
To only copy an object under certain conditions, such as whether the Etag matches or whether the object was modified before or after a specified date, use the following request parameters:
x-amz-copy-source-if-match
x-amz-copy-source-if-none-match
x-amz-copy-source-if-unmodified-since
x-amz-copy-source-if-modified-since
If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:
x-amz-copy-source-if-match condition evaluates to true
x-amz-copy-source-if-unmodified-since condition evaluates to false
If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:
x-amz-copy-source-if-none-match condition evaluates to false
x-amz-copy-source-if-modified-since condition evaluates to true
All headers with the x-amz- prefix, including x-amz-copy-source, must be signed.
- Server-side encryption
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a default encryption configuration that uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with customer-provided encryption keys (SSE-C), Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.
When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can use other appropriate encryption-related headers to encrypt the target object with a KMS key, an Amazon S3 managed key, or a customer-provided key. With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence. If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying. For more information about server-side encryption, see Using Server-Side Encryption.
If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.
- Access Control List (ACL)-Specific Request Headers
When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups that are defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.
If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.
If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.
- Checksums
When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header.
- Storage Class Options
You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.
If the source object's storage class is GLACIER or DEEP_ARCHIVE, or the object's storage class is INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is Archive Access or Deep Archive Access, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject. For more information, see Copying Objects.
- Versioning
By default, the x-amz-copy-source header identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.
The following operations are related to CopyObject:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 1553

def copy_object(params = {}, options = {})
  req = build_request(:copy_object, params)
  req.send_request(options)
end
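For example, a minimal sketch of a same-account copy (the bucket and key names are placeholders):
resp = client.copy_object(
  bucket: "destination-bucket",
  copy_source: "source-bucket/source-key",
  key: "destination-key",
)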
#create_bucket(params = {}) ⇒ Types::CreateBucketOutput
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.
Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules.
If you want to create an Amazon S3 on Outposts bucket, see Create Bucket.
By default, the bucket is created in the US East (N. Virginia) Region.
You can optionally specify a Region in the request body. To constrain
the bucket creation to a specific Region, you can use the
LocationConstraint condition key. You might choose a Region to
optimize latency, minimize costs, or address regulatory requirements.
For example, if you reside in Europe, you will probably find it
advantageous to create buckets in the Europe (Ireland) Region. For
more information, see Accessing a bucket.
If you send your create bucket request to the s3.amazonaws.com
endpoint, the request goes to the us-east-1 Region. Accordingly, the
signature calculations in Signature Version 4 must use us-east-1 as
the Region, even if the location constraint in the request specifies
another Region where the bucket is to be created. If you create a
bucket in a Region other than US East (N. Virginia), your application
must be able to handle 307 redirect. For more information, see
Virtual hosting of buckets.
- Permissions
In addition to s3:CreateBucket, the following permissions are required when your CreateBucket request includes specific headers:
Access control lists (ACLs) - If your CreateBucket request specifies access control list (ACL) permissions and the ACL is public-read, public-read-write, authenticated-read, or if you specify access permissions explicitly through any other ACL, both s3:CreateBucket and s3:PutBucketAcl permissions are needed. If the ACL for the CreateBucket request is private or if the request doesn't specify any ACLs, only s3:CreateBucket permission is needed.
Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request, s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.
S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, then the s3:PutBucketOwnershipControls permission is required. By default, ObjectOwnership is set to BucketOwnerEnforced and ACLs are disabled. We recommend keeping ACLs disabled, except in uncommon use cases where you must control access for each object individually. If you want to change the ObjectOwnership setting, you can use the x-amz-object-ownership header in your CreateBucket request to set the ObjectOwnership setting of your choice. For more information about S3 Object Ownership, see Controlling object ownership in the Amazon S3 User Guide.
S3 Block Public Access - If your specific use case requires granting public access to your S3 resources, you can disable Block Public Access. You can create a new bucket with Block Public Access enabled, then separately call the DeletePublicAccessBlock API. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. By default, all Block Public Access settings are enabled for new buckets. To avoid inadvertent exposure of your resources, we recommend keeping the S3 Block Public Access settings enabled. For more information about S3 Block Public Access, see Blocking public access to your Amazon S3 storage in the Amazon S3 User Guide.
If your CreateBucket request sets BucketOwnerEnforced for Amazon
S3 Object Ownership and specifies a bucket ACL that provides access to
an external Amazon Web Services account, your request fails with a
400 error and returns the InvalidBucketAclWithObjectOwnership
error code. For more information, see Setting Object Ownership on an
existing bucket in the Amazon S3 User Guide.
The following operations are related to CreateBucket:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 1768

def create_bucket(params = {}, options = {})
  req = build_request(:create_bucket, params)
  req.send_request(options)
end
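For example, a minimal sketch that creates a bucket outside us-east-1 (the bucket name and Region are placeholders):
resp = client.create_bucket(
  bucket: "my-new-bucket",
  create_bucket_configuration: {
    location_constraint: "eu-west-1",
  },
)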
#create_multipart_upload(params = {}) ⇒ Types::CreateMultipartUploadOutput
This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.
For more information about multipart uploads, see Multipart Upload Overview.
If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
For information about the permissions required to use the multipart upload API, see Multipart Upload and Permissions.
For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4).
Server-side encryption is for data encryption at rest. Amazon S3
encrypts your data as it writes it to disks in its data centers and
decrypts it when you access it. Amazon S3 automatically encrypts all
new objects that are uploaded to an S3 bucket. When doing a multipart
upload, if you don't specify encryption information in your request,
the encryption setting of the uploaded parts is set to the default
encryption configuration of the destination bucket. By default, all
buckets have a base level of encryption configuration that uses
server-side encryption with Amazon S3 managed keys (SSE-S3). If the
destination bucket has a default encryption configuration that uses
server-side encryption with a Key Management Service (KMS) key
(SSE-KMS), or a customer-provided encryption key (SSE-C), Amazon S3
uses the corresponding KMS key, or a customer-provided key, to encrypt
the uploaded parts. When you perform a CreateMultipartUpload
operation, if you want to use a different type of encryption setting
for the uploaded parts, you can request that Amazon S3 encrypts the
object with a KMS key, an Amazon S3 managed key, or a
customer-provided key. If the encryption setting in your request is
different from the default encryption configuration of the destination
bucket, the encryption setting in your request takes precedence. If
you choose to provide your own encryption key, the request headers you
provide in UploadPart and UploadPartCopy requests must match
the headers you used in the request to initiate the upload by using
CreateMultipartUpload. You can request that Amazon S3 save the
uploaded parts encrypted with server-side encryption with an Amazon S3
managed key (SSE-S3), a Key Management Service (KMS) key (SSE-KMS),
or a customer-provided encryption key (SSE-C).
To perform a multipart upload with encryption by using an Amazon Web
Services KMS key, the requester must have permission to the
kms:Decrypt and kms:GenerateDataKey* actions on the key. These
permissions are required because Amazon S3 must decrypt and read data
from the encrypted file parts before it completes the multipart
upload. For more information, see Multipart upload API and
permissions and Protecting data using server-side encryption with
Amazon Web Services KMS in the Amazon S3 User Guide.
If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.
For more information, see Protecting Data Using Server-Side Encryption.
- Access Permissions
When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
- Server-Side-Encryption-Specific Request Headers
Amazon S3 encrypts data by using server-side encryption with an Amazon S3 managed key (SSE-S3) by default. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can request that Amazon S3 encrypts data at rest by using server-side encryption with other key options. The option you use depends on whether you want to use KMS keys (SSE-KMS) or provide your own encryption keys (SSE-C).
Use KMS keys (SSE-KMS) that include the Amazon Web Services managed key (aws/s3) and KMS customer managed keys stored in Key Management Service (KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3 key) in KMS to protect the data.
All GET and PUT requests for an object protected by KMS fail if you don't make them by using Secure Sockets Layer (SSL), Transport Layer Security (TLS), or Signature Version 4.
For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys.
Use customer-provided encryption keys (SSE-C) – If you want to manage your own encryption keys, provide all the following headers in the request.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with customer-provided encryption keys (SSE-C), see Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).
- Access-Control-List (ACL)-Specific Request Headers
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific Amazon Web Services accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
id – if the value specified is the canonical user ID of an Amazon Web Services account
uri – if you are granting permissions to a predefined group
emailAddress – if the value specified is the email address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
x-amz-grant-read: id="11112222333", id="444455556666"
The following operations are related to CreateMultipartUpload:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2310

def create_multipart_upload(params = {}, options = {})
  req = build_request(:create_multipart_upload, params)
  req.send_request(options)
end
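For example, a minimal sketch that starts an upload and keeps the upload ID for later UploadPart and CompleteMultipartUpload calls (the bucket and key names are placeholders):
resp = client.create_multipart_upload(
  bucket: "my-bucket",
  key: "my-key",
)
upload_id = resp.upload_id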
#delete_bucket(params = {}) ⇒ Struct
Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.
The following operations are related to DeleteBucket:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2360

def delete_bucket(params = {}, options = {})
  req = build_request(:delete_bucket, params)
  req.send_request(options)
end
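For example, a minimal sketch (the bucket name is a placeholder; the bucket must already be empty):
client.delete_bucket(bucket: "my-bucket")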
#delete_bucket_analytics_configuration(params = {}) ⇒ Struct
Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).
To use this operation, you must have permissions to perform the
s3:PutAnalyticsConfiguration
action. The bucket owner has this
permission by default. The bucket owner can grant this permission to
others. For more information about permissions, see Permissions
Related to Bucket Subresource Operations and Managing Access
Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to DeleteBucketAnalyticsConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2422

def delete_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_analytics_configuration, params)
  req.send_request(options)
end
#delete_bucket_cors(params = {}) ⇒ Struct
Deletes the cors configuration information set for the bucket.
To use this operation, you must have permission to perform the
s3:PutBucketCORS
action. The bucket owner has this permission by
default and can grant this permission to others.
For information about cors, see Enabling Cross-Origin Resource
Sharing in the Amazon S3 User Guide.
Related Resources
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2478

def delete_bucket_cors(params = {}, options = {})
  req = build_request(:delete_bucket_cors, params)
  req.send_request(options)
end
#delete_bucket_encryption(params = {}) ⇒ Struct
This implementation of the DELETE action resets the default encryption for the bucket as server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.
To use this operation, you must have permissions to perform the
s3:PutEncryptionConfiguration
action. The bucket owner has this
permission by default. The bucket owner can grant this permission to
others. For more information about permissions, see Permissions
Related to Bucket Subresource Operations and Managing Access
Permissions to your Amazon S3 Resources in the Amazon S3 User
Guide.
The following operations are related to DeleteBucketEncryption:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2533

def delete_bucket_encryption(params = {}, options = {})
  req = build_request(:delete_bucket_encryption, params)
  req.send_request(options)
end
#delete_bucket_intelligent_tiering_configuration(params = {}) ⇒ Struct
Deletes the S3 Intelligent-Tiering configuration from the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.
The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.
For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to DeleteBucketIntelligentTieringConfiguration include:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2596

def delete_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end
#delete_bucket_inventory_configuration(params = {}) ⇒ Struct
Deletes an inventory configuration (identified by the inventory ID) from the bucket.
To use this operation, you must have permissions to perform the
s3:PutInventoryConfiguration
action. The bucket owner has this
permission by default. The bucket owner can grant this permission to
others. For more information about permissions, see Permissions
Related to Bucket Subresource Operations and Managing Access
Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
Operations related to DeleteBucketInventoryConfiguration include:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2657

def delete_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_inventory_configuration, params)
  req.send_request(options)
end
#delete_bucket_lifecycle(params = {}) ⇒ Struct
Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.
To use this operation, you must have permission to perform the
s3:PutLifecycleConfiguration
action. By default, the bucket owner
has this permission and the bucket owner can grant this permission to
others.
There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems.
For more information about the object expiration, see Elements to Describe Lifecycle Actions.
Related actions include:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2721

def delete_bucket_lifecycle(params = {}, options = {})
  req = build_request(:delete_bucket_lifecycle, params)
  req.send_request(options)
end
#delete_bucket_metrics_configuration(params = {}) ⇒ Struct
Deletes a metrics configuration for the Amazon CloudWatch request metrics (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the
s3:PutMetricsConfiguration
action. The bucket owner has this
permission by default. The bucket owner can grant this permission to
others. For more information about permissions, see Permissions
Related to Bucket Subresource Operations and Managing Access
Permissions to Your Amazon S3 Resources.
For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to DeleteBucketMetricsConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2787

def delete_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metrics_configuration, params)
  req.send_request(options)
end
#delete_bucket_ownership_controls(params = {}) ⇒ Struct
Removes OwnershipControls for an Amazon S3 bucket. To use this
operation, you must have the s3:PutBucketOwnershipControls
permission. For more information about Amazon S3 permissions, see
Specifying Permissions in a Policy.
For information about Amazon S3 Object Ownership, see Using Object Ownership.
The following operations are related to DeleteBucketOwnershipControls:
GetBucketOwnershipControls
PutBucketOwnershipControls
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2833

def delete_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:delete_bucket_ownership_controls, params)
  req.send_request(options)
end
#delete_bucket_policy(params = {}) ⇒ Struct
This implementation of the DELETE action uses the policy subresource
to delete the policy of a specified bucket. If you are using an
identity other than the root user of the Amazon Web Services account
that owns the bucket, the calling identity must have the
DeleteBucketPolicy
permissions on the specified bucket and belong to
the bucket owner's account to use this operation.
If you don't have DeleteBucketPolicy permissions, Amazon S3 returns
a 403 Access Denied error. If you have the correct permissions, but
you're not using an identity that belongs to the bucket owner's
account, Amazon S3 returns a 405 Method Not Allowed error.
To ensure that bucket owners don't inadvertently lock themselves out
of their own buckets, the root principal in a bucket owner's Amazon
Web Services account can perform the GetBucketPolicy,
PutBucketPolicy, and DeleteBucketPolicy API actions, even if their
bucket policy explicitly denies the root principal's access. Bucket
owner root principals can only be blocked from performing these API
actions by VPC endpoint policies and Amazon Web Services Organizations
policies.
For more information about bucket policies, see Using Bucket Policies and UserPolicies.
The following operations are related to DeleteBucketPolicy:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2904

def delete_bucket_policy(params = {}, options = {})
  req = build_request(:delete_bucket_policy, params)
  req.send_request(options)
end
#delete_bucket_replication(params = {}) ⇒ Struct
Deletes the replication configuration from the bucket.
To use this operation, you must have permissions to perform the
s3:PutReplicationConfiguration
action. The bucket owner has these
permissions by default and can grant it to others. For more
information about permissions, see Permissions Related to Bucket
Subresource Operations and Managing Access Permissions to Your
Amazon S3 Resources.
For information about replication configuration, see Replication in the Amazon S3 User Guide.
The following operations are related to DeleteBucketReplication:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2970

def delete_bucket_replication(params = {}, options = {})
  req = build_request(:delete_bucket_replication, params)
  req.send_request(options)
end
#delete_bucket_tagging(params = {}) ⇒ Struct
Deletes the tags from the bucket.
To use this operation, you must have permission to perform the
s3:PutBucketTagging
action. By default, the bucket owner has this
permission and can grant this permission to others.
The following operations are related to DeleteBucketTagging:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3022

def delete_bucket_tagging(params = {}, options = {})
  req = build_request(:delete_bucket_tagging, params)
  req.send_request(options)
end
#delete_bucket_website(params = {}) ⇒ Struct
This action removes the website configuration for a bucket. Amazon S3
returns a 200 OK response upon successfully deleting a website
configuration on the specified bucket. You will get a 200 OK
response if the website configuration you are trying to delete does
not exist on the bucket. Amazon S3 returns a 404 response if the
bucket specified in the request does not exist.
This DELETE action requires the S3:DeleteBucketWebsite permission.
By default, only the bucket owner can delete the website configuration
attached to a bucket. However, bucket owners can grant other users
permission to delete the website configuration by writing a bucket
policy granting them the S3:DeleteBucketWebsite permission.
For more information about hosting websites, see Hosting Websites on Amazon S3.
The following operations are related to DeleteBucketWebsite:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3086

def delete_bucket_website(params = {}, options = {})
  req = build_request(:delete_bucket_website, params)
  req.send_request(options)
end
#delete_object(params = {}) ⇒ Types::DeleteObjectOutput
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
To remove a specific version, you must use the version Id subresource.
Using this subresource permanently deletes the version. If the object
deleted is a delete marker, Amazon S3 sets the response header,
x-amz-delete-marker, to true.
If the object you want to delete is in a bucket where the bucket
versioning configuration is MFA Delete enabled, you must include the
x-amz-mfa request header in the DELETE versionId request. Requests
that include x-amz-mfa must use HTTPS.
For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.
You can delete objects by explicitly calling DELETE Object or
configure its lifecycle (PutBucketLifecycle) to enable Amazon S3
to remove them for you. If you want to block users or accounts from
removing or deleting objects from your bucket, you must deny them the
s3:DeleteObject, s3:DeleteObjectVersion, and
s3:PutLifeCycleConfiguration actions.
The following action is related to DeleteObject:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3240

def delete_object(params = {}, options = {})
  req = build_request(:delete_object, params)
  req.send_request(options)
end
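For example, a minimal sketch that deletes a specific object version (the bucket, key, and version ID values are placeholders; omit version_id to create a delete marker instead):
resp = client.delete_object(
  bucket: "my-bucket",
  key: "my-key",
  version_id: "example-version-id",
)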
#delete_object_tagging(params = {}) ⇒ Types::DeleteObjectTaggingOutput
Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.
To use this operation, you must have permission to perform the
s3:DeleteObjectTagging
action.
To delete tags of a specific object version, add the versionId query
parameter in the request. You will need permission for the
s3:DeleteObjectVersionTagging action.
The following operations are related to DeleteObjectTagging:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3357

def delete_object_tagging(params = {}, options = {})
  req = build_request(:delete_object_tagging, params)
  req.send_request(options)
end
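For example, a minimal sketch that removes the tag set from a specific object version (the bucket, key, and version ID values are placeholders):
resp = client.delete_object_tagging(
  bucket: "my-bucket",
  key: "my-key",
  version_id: "example-version-id",
)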
#delete_objects(params = {}) ⇒ Types::DeleteObjectsOutput
This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.
The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success, or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.
The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete action encountered an error. For a successful deletion, the action does not return any information about the delete in the response body.
When performing this action on an MFA Delete enabled bucket, where the request attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete.
Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.
The following operations are related to DeleteObjects:
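A short sketch of a quiet-mode multi-object delete (bucket and keys are placeholders):

resp = client.delete_objects(
  bucket: "my-example-bucket",
  delete: {
    objects: [
      { key: "logs/2023-01-01.log" },
      { key: "logs/2023-01-02.log" }
    ],
    quiet: true            # quiet mode: only keys that failed to delete are reported
  }
)
resp.errors                # => per-key errors, if any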
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3608
def delete_objects(params = {}, options = {})
  req = build_request(:delete_objects, params)
  req.send_request(options)
end
#delete_public_access_block(params = {}) ⇒ Struct
Removes the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to DeletePublicAccessBlock:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3661
def delete_public_access_block(params = {}, options = {})
  req = build_request(:delete_public_access_block, params)
  req.send_request(options)
end
#get_bucket_accelerate_configuration(params = {}) ⇒ Types::GetBucketAccelerateConfigurationOutput
This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.
To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.
You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.
A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.
For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.
The following operations are related to GetBucketAccelerateConfiguration:
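A minimal sketch (placeholder bucket name):

resp = client.get_bucket_accelerate_configuration(bucket: "my-example-bucket")
resp.status  # => "Enabled", "Suspended", or nil if acceleration was never configured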
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3749
def get_bucket_accelerate_configuration(params = {}, options = {})
  req = build_request(:get_bucket_accelerate_configuration, params)
  req.send_request(options)
end
#get_bucket_acl(params = {}) ⇒ Types::GetBucketAclOutput
This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
If the bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs still return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.
The following operations are related to GetBucketAcl:
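A hedged sketch of reading a bucket ACL (placeholder bucket name):

resp = client.get_bucket_acl(bucket: "my-example-bucket")
resp.owner.display_name
resp.grants.each { |grant| puts "#{grant.grantee.type}: #{grant.permission}" }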
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3841
def get_bucket_acl(params = {}, options = {})
  req = build_request(:get_bucket_acl, params)
  req.send_request(options)
end
#get_bucket_analytics_configuration(params = {}) ⇒ Types::GetBucketAnalyticsConfigurationOutput
This implementation of the GET action returns an analytics configuration (identified by the analytics configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.
For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis in the Amazon S3 User Guide.
The following operations are related to GetBucketAnalyticsConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3923
def get_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_analytics_configuration, params)
  req.send_request(options)
end
#get_bucket_cors(params = {}) ⇒ Types::GetBucketCorsOutput
Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
For more information about CORS, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetBucketCors:
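A minimal sketch (placeholder bucket name):

resp = client.get_bucket_cors(bucket: "my-example-bucket")
resp.cors_rules.each do |rule|
  puts "origins=#{rule.allowed_origins.inspect} methods=#{rule.allowed_methods.inspect}"
end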
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4039
def get_bucket_cors(params = {}, options = {})
  req = build_request(:get_bucket_cors, params)
  req.send_request(options)
end
#get_bucket_encryption(params = {}) ⇒ Types::GetBucketEncryptionOutput
Returns the default encryption configuration for an Amazon S3 bucket. By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.
To use this operation, you must have permission to perform the s3:GetEncryptionConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to GetBucketEncryption:
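A hedged sketch of inspecting the default encryption rule (placeholder bucket name):

resp = client.get_bucket_encryption(bucket: "my-example-bucket")
rule = resp.server_side_encryption_configuration.rules.first
rule.apply_server_side_encryption_by_default.sse_algorithm  # => "AES256" or "aws:kms"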
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4102
def get_bucket_encryption(params = {}, options = {})
  req = build_request(:get_bucket_encryption, params)
  req.send_request(options)
end
#get_bucket_intelligent_tiering_configuration(params = {}) ⇒ Types::GetBucketIntelligentTieringConfigurationOutput
Gets the S3 Intelligent-Tiering configuration from the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.
The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.
For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to GetBucketIntelligentTieringConfiguration include:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4182
def get_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:get_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end
#get_bucket_inventory_configuration(params = {}) ⇒ Types::GetBucketInventoryConfigurationOutput
Returns an inventory configuration (identified by the inventory configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to GetBucketInventoryConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4261
def get_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:get_bucket_inventory_configuration, params)
  req.send_request(options)
end
#get_bucket_lifecycle(params = {}) ⇒ Types::GetBucketLifecycleOutput
For an updated version of this API, see GetBucketLifecycleConfiguration. If you configured a bucket lifecycle using the filter element, you should see the updated version of this topic. This topic is provided for backward compatibility.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
GetBucketLifecycle has the following special error:
- Error code: NoSuchLifecycleConfiguration
- Description: The lifecycle configuration does not exist.
- HTTP Status Code: 404 Not Found
- SOAP Fault Code Prefix: Client
The following operations are related to GetBucketLifecycle:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4375
def get_bucket_lifecycle(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle, params)
  req.send_request(options)
end
#get_bucket_lifecycle_configuration(params = {}) ⇒ Types::GetBucketLifecycleConfigurationOutput
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
GetBucketLifecycleConfiguration has the following special error:
- Error code: NoSuchLifecycleConfiguration
- Description: The lifecycle configuration does not exist.
- HTTP Status Code: 404 Not Found
- SOAP Fault Code Prefix: Client
The following operations are related to GetBucketLifecycleConfiguration:
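A minimal sketch (placeholder bucket name):

resp = client.get_bucket_lifecycle_configuration(bucket: "my-example-bucket")
resp.rules.each { |rule| puts "#{rule.id}: #{rule.status}" }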
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4511
def get_bucket_lifecycle_configuration(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle_configuration, params)
  req.send_request(options)
end
#get_bucket_location(params = {}) ⇒ Types::GetBucketLocationOutput
Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
The following operations are related to GetBucketLocation:
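A minimal sketch (placeholder bucket name):

resp = client.get_bucket_location(bucket: "my-example-bucket")
resp.location_constraint  # e.g. "eu-west-1"; an empty string denotes us-east-1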
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4604
def get_bucket_location(params = {}, options = {})
  req = build_request(:get_bucket_location, params)
  req.send_request(options)
end
#get_bucket_logging(params = {}) ⇒ Types::GetBucketLoggingOutput
Returns the logging status of a bucket and the permissions users have to view and modify that status.
The following operations are related to GetBucketLogging:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4658
def get_bucket_logging(params = {}, options = {})
  req = build_request(:get_bucket_logging, params)
  req.send_request(options)
end
#get_bucket_metrics_configuration(params = {}) ⇒ Types::GetBucketMetricsConfigurationOutput
Gets a metrics configuration (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to GetBucketMetricsConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4740
def get_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metrics_configuration, params)
  req.send_request(options)
end
#get_bucket_notification(params = {}) ⇒ Types::NotificationConfigurationDeprecated
No longer used, see GetBucketNotificationConfiguration.
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4867
def get_bucket_notification(params = {}, options = {})
  req = build_request(:get_bucket_notification, params)
  req.send_request(options)
end
#get_bucket_notification_configuration(params = {}) ⇒ Types::NotificationConfiguration
Returns the notification configuration of a bucket.
If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.
By default, you must be the bucket owner to read the notification configuration of a bucket. However, the bucket owner can use a bucket policy to grant permission to other users to read this configuration with the s3:GetBucketNotification permission.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.
The following action is related to GetBucketNotification:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4978
def get_bucket_notification_configuration(params = {}, options = {})
  req = build_request(:get_bucket_notification_configuration, params)
  req.send_request(options)
end
#get_bucket_ownership_controls(params = {}) ⇒ Types::GetBucketOwnershipControlsOutput
Retrieves OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying permissions in a policy.
For information about Amazon S3 Object Ownership, see Using Object Ownership.
The following operations are related to GetBucketOwnershipControls:
- PutBucketOwnershipControls
- DeleteBucketOwnershipControls
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5031
def get_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:get_bucket_ownership_controls, params)
  req.send_request(options)
end
#get_bucket_policy(params = {}) ⇒ Types::GetBucketPolicyOutput
Returns the policy of a specified bucket. If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.
If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.
To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy, PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
To use this API operation against an access point, provide the alias of the access point in place of the bucket name.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
For more information about bucket policies, see Using Bucket Policies and User Policies.
The following action is related to GetBucketPolicy:
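A hedged sketch (placeholder bucket name); in the Ruby client the policy document comes back as an IO-like object:

resp = client.get_bucket_policy(bucket: "my-example-bucket")
policy_json = resp.policy.read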
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5136
def get_bucket_policy(params = {}, options = {}, &block)
  req = build_request(:get_bucket_policy, params)
  req.send_request(options, &block)
end
#get_bucket_policy_status(params = {}) ⇒ Types::GetBucketPolicyStatusOutput
Retrieves the policy status for an Amazon S3 bucket, indicating whether the bucket is public. In order to use this operation, you must have the s3:GetBucketPolicyStatus permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
For more information about when Amazon S3 considers a bucket public, see The Meaning of "Public".
The following operations are related to GetBucketPolicyStatus:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5197
def get_bucket_policy_status(params = {}, options = {})
  req = build_request(:get_bucket_policy_status, params)
  req.send_request(options)
end
#get_bucket_replication(params = {}) ⇒ Types::GetBucketReplicationOutput
Returns the replication configuration of a bucket.
For information about replication configuration, see Replication in the Amazon S3 User Guide.
This action requires permissions for the s3:GetReplicationConfiguration action. For more information about permissions, see Using Bucket Policies and User Policies.
If you include the Filter element in a replication configuration, you must also include the DeleteMarkerReplication and Priority elements. The response also returns those elements.
For information about GetBucketReplication errors, see List of replication-related error codes.
The following operations are related to GetBucketReplication:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5316
def get_bucket_replication(params = {}, options = {})
  req = build_request(:get_bucket_replication, params)
  req.send_request(options)
end
#get_bucket_request_payment(params = {}) ⇒ Types::GetBucketRequestPaymentOutput
Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.
The following operations are related to GetBucketRequestPayment:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5378
def get_bucket_request_payment(params = {}, options = {})
  req = build_request(:get_bucket_request_payment, params)
  req.send_request(options)
end
#get_bucket_tagging(params = {}) ⇒ Types::GetBucketTaggingOutput
Returns the tag set associated with the bucket.
To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.
GetBucketTagging has the following special error:
- Error code: NoSuchTagSet
- Description: There is no tag set associated with the bucket.
The following operations are related to GetBucketTagging:
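A minimal sketch (placeholder bucket name):

resp = client.get_bucket_tagging(bucket: "my-example-bucket")
resp.tag_set.each { |tag| puts "#{tag.key}=#{tag.value}" }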
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5460
def get_bucket_tagging(params = {}, options = {})
  req = build_request(:get_bucket_tagging, params)
  req.send_request(options)
end
#get_bucket_versioning(params = {}) ⇒ Types::GetBucketVersioningOutput
Returns the versioning state of a bucket.
To retrieve the versioning state of a bucket, you must be the bucket owner.
This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
The following operations are related to GetBucketVersioning:
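A minimal sketch (placeholder bucket name):

resp = client.get_bucket_versioning(bucket: "my-example-bucket")
resp.status      # => "Enabled", "Suspended", or nil if versioning was never configured
resp.mfa_delete  # => "Enabled" or "Disabled"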
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5533
def get_bucket_versioning(params = {}, options = {})
  req = build_request(:get_bucket_versioning, params)
  req.send_request(options)
end
#get_bucket_website(params = {}) ⇒ Types::GetBucketWebsiteOutput
Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.
This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.
The following operations are related to GetBucketWebsite:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5621
def get_bucket_website(params = {}, options = {})
  req = build_request(:get_bucket_website, params)
  req.send_request(options)
end
#get_object(params = {}) ⇒ Types::GetObjectOutput
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical computer file system. You can, however, create a logical hierarchy by using object key names that imply a folder structure. For example, instead of naming an object sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the object in the GET operation. For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the resource as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the resource as /examplebucket/photos/2006/February/sample.jpg. For more information about request types, see HTTP Host Header Bucket Specification.
For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class, or the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers, before you can retrieve the object you must first restore a copy using RestoreObject. Otherwise, this action returns an InvalidObjectState error. For information about restoring archived objects, see Restoring Archived Objects.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 Bad Request error.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Assuming you have the relevant permission to read object tags, the response also returns the x-amz-tagging-count header that provides the number of tags associated with the object. You can use GetObjectTagging to retrieve the tag set associated with an object.
- Permissions
You need the relevant read object (or version) permission for this operation. For more information, see Specifying Permissions in a Policy. If the object that you request doesn't exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
- If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 (Not Found) error.
- If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
- Versioning
By default, the GET action returns the current version of an object. To return a different version, use the versionId subresource.
- If you supply a versionId, you need the s3:GetObjectVersion permission to access a specific version of an object. If you request a specific version, you do not need to have the s3:GetObject permission. If you request the current version without a specific version ID, only the s3:GetObject permission is required; the s3:GetObjectVersion permission won't be required.
- If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.
For more information about versioning, see PutBucketVersioning.
- Overriding Response Header Values
There are times when you want to override certain response header values in a GET response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters. These response header values are sent only on a successful request, that is, when status code 200 OK is returned. The set of headers you can override using these parameters is a subset of the headers that Amazon S3 accepts when you create an object. The response headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. To override these header values in the GET response, you use the following request parameters.
You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
- response-content-type
- response-content-language
- response-expires
- response-cache-control
- response-content-disposition
- response-content-encoding
- Additional Considerations about Request Headers
If both of the If-Match and If-Unmodified-Since headers are present in the request as follows: the If-Match condition evaluates to true, and the If-Unmodified-Since condition evaluates to false; then S3 returns 200 OK and the data requested.
If both of the If-None-Match and If-Modified-Since headers are present in the request as follows: the If-None-Match condition evaluates to false, and the If-Modified-Since condition evaluates to true; then S3 returns the 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
The following operations are related to GetObject:
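A hedged sketch of a basic GET with the Ruby client (bucket and key names are placeholders):

resp = client.get_object(bucket: "my-example-bucket", key: "photos/2006/February/sample.jpg")
resp.body.read      # object payload
resp.content_type
resp.etag

# Stream a large object straight to disk instead of buffering it in memory:
client.get_object(
  response_target: "/tmp/sample.jpg",
  bucket: "my-example-bucket",
  key: "photos/2006/February/sample.jpg"
)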
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6103
def get_object(params = {}, options = {}, &block)
  req = build_request(:get_object, params)
  req.send_request(options, &block)
end
#get_object_acl(params = {}) ⇒ Types::GetObjectAclOutput
Returns the access control list (ACL) of an object. To use this operation, you must have s3:GetObjectAcl permissions or READ_ACP access to the object. For more information, see Mapping of ACL permissions and access policy permissions in the Amazon S3 User Guide.
This action is not supported by Amazon S3 on Outposts.
By default, GET returns ACL information about the current version of an object. To return ACL information about a different version, use the versionId subresource.
If the bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs still return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.
The following operations are related to GetObjectAcl:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6272
def get_object_acl(params = {}, options = {})
  req = build_request(:get_object_acl, params)
  req.send_request(options)
end
#get_object_attributes(params = {}) ⇒ Types::GetObjectAttributesOutput
Retrieves all the metadata from an object without returning the object itself. This action is useful if you're interested only in an object's metadata. To use GetObjectAttributes, you must have READ access to the object.
GetObjectAttributes combines the functionality of HeadObject and ListParts. All of the data returned with each of those individual calls can be returned with a single call to GetObjectAttributes.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with Amazon Web Services KMS keys stored in Amazon Web Services Key Management Service (SSE-KMS) or server-side encryption with Amazon S3 managed keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 Bad Request error.
- The last modified property in this case is the creation date of the object.
Consider the following when using request headers:
If both of the If-Match and If-Unmodified-Since headers are present in the request as follows, then Amazon S3 returns the HTTP status code 200 OK and the data requested:
- If-Match condition evaluates to true.
- If-Unmodified-Since condition evaluates to false.
If both of the If-None-Match and If-Modified-Since headers are present in the request as follows, then Amazon S3 returns the HTTP status code 304 Not Modified:
- If-None-Match condition evaluates to false.
- If-Modified-Since condition evaluates to true.
For more information about conditional requests, see RFC 7232.
- Permissions
The permissions that you need to use this operation depend on whether the bucket is versioned. If the bucket is versioned, you need both the s3:GetObjectVersion and s3:GetObjectVersionAttributes permissions for this operation. If the bucket is not versioned, you need the s3:GetObject and s3:GetObjectAttributes permissions. For more information, see Specifying Permissions in a Policy in the Amazon S3 User Guide. If the object that you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
- If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found ("no such key") error.
- If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden ("access denied") error.
The following actions are related to GetObjectAttributes:
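A hedged sketch that requests a few attributes (bucket and key are placeholders):

resp = client.get_object_attributes(
  bucket: "my-example-bucket",
  key: "photos/2006/February/sample.jpg",
  object_attributes: ["ETag", "StorageClass", "ObjectSize"]
)
resp.object_size
resp.storage_class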
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6519
def get_object_attributes(params = {}, options = {})
  req = build_request(:get_object_attributes, params)
  req.send_request(options)
end
#get_object_legal_hold(params = {}) ⇒ Types::GetObjectLegalHoldOutput
Gets an object's current legal hold status. For more information, see Locking Objects.
This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectLegalHold:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6604
def get_object_legal_hold(params = {}, options = {})
  req = build_request(:get_object_legal_hold, params)
  req.send_request(options)
end
#get_object_lock_configuration(params = {}) ⇒ Types::GetObjectLockConfigurationOutput
Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.
The following action is related to GetObjectLockConfiguration:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6667
def get_object_lock_configuration(params = {}, options = {})
  req = build_request(:get_object_lock_configuration, params)
  req.send_request(options)
end
#get_object_retention(params = {}) ⇒ Types::GetObjectRetentionOutput
Retrieves an object's retention settings. For more information, see Locking Objects.
This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectRetention:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6753
def get_object_retention(params = {}, options = {})
  req = build_request(:get_object_retention, params)
  req.send_request(options)
end
#get_object_tagging(params = {}) ⇒ Types::GetObjectTaggingOutput
Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.
To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the GET action returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter. You also need permission for the s3:GetObjectVersionTagging action.
By default, the bucket owner has this permission and can grant this permission to others.
For information about the Amazon S3 object tagging feature, see Object Tagging.
The following actions are related to GetObjectTagging:
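A minimal sketch (bucket and key are placeholders):

resp = client.get_object_tagging(bucket: "my-example-bucket", key: "reports/annual.pdf")
resp.tag_set.each { |tag| puts "#{tag.key}=#{tag.value}" }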
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6911
def get_object_tagging(params = {}, options = {})
  req = build_request(:get_object_tagging, params)
  req.send_request(options)
end
#get_object_torrent(params = {}) ⇒ Types::GetObjectTorrentOutput
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files.
To use GET, you must have READ access to the object.
This action is not supported by Amazon S3 on Outposts.
The following action is related to GetObjectTorrent:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7004
def get_object_torrent(params = {}, options = {}, &block)
  req = build_request(:get_object_torrent, params)
  req.send_request(options, &block)
end
#get_public_access_block(params = {}) ⇒ Types::GetPublicAccessBlockOutput
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".
The following operations are related to GetPublicAccessBlock:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7075
def get_public_access_block(params = {}, options = {})
  req = build_request(:get_public_access_block, params)
  req.send_request(options)
end
#head_bucket(params = {}) ⇒ Struct
This action is useful to determine if a bucket exists and you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission to access it.
If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. A message body is not included, so you cannot determine the exception beyond these error codes.
To use this operation, you must have permissions to perform the s3:ListBucket action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
To use this API operation against an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using the Amazon Web Services SDKs, you provide the ARN in place of the bucket name. For more information, see Using access points.
To use this API operation against an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.
The following waiters are defined for this operation (see #wait_until for detailed usage):
- bucket_exists
- bucket_not_exists
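A hedged sketch of probing for bucket existence (the bucket name is a placeholder, and the rescued error classes are how the Ruby SDK typically surfaces the 404/403 responses):

begin
  client.head_bucket(bucket: "my-example-bucket")
  puts "bucket exists and is accessible"
rescue Aws::S3::Errors::NotFound
  puts "bucket does not exist"
rescue Aws::S3::Errors::Forbidden
  puts "no permission to access the bucket"
end

# Or block until the bucket exists, using the bucket_exists waiter:
client.wait_until(:bucket_exists, bucket: "my-example-bucket")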
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7186
def head_bucket(params = {}, options = {})
  req = build_request(:head_bucket, params)
  req.send_request(options)
end
#head_object(params = {}) ⇒ Types::HeadObjectOutput
The HEAD action retrieves metadata from an object without returning the object itself. This action is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET action on an object. The response is identical to the GET response except that there is no response body. Because of this, if the HEAD request generates an error, it returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. It is not possible to retrieve the exact exception beyond these error codes.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 Bad Request error.
- The last modified property in this case is the creation date of the object.
Request headers are limited to 8 KB in size. For more information, see Common Request Headers.
Consider the following when using request headers:
Consideration 1 – If both of the If-Match and If-Unmodified-Since headers are present in the request as follows:
- If-Match condition evaluates to true, and
- If-Unmodified-Since condition evaluates to false;
then Amazon S3 returns 200 OK and the data requested.
Consideration 2 – If both of the If-None-Match and If-Modified-Since headers are present in the request as follows:
- If-None-Match condition evaluates to false, and
- If-Modified-Since condition evaluates to true;
then Amazon S3 returns the 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
- Permissions
You need the relevant read object (or version) permission for this operation. For more information, see Actions, resources, and condition keys for Amazon S3. If the object you request doesn't exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
- If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 error.
- If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 error.
The following actions are related to HeadObject:
The following waiters are defined for this operation (see #wait_until for detailed usage):
- object_exists
- object_not_exists
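A minimal sketch (bucket and key are placeholders):

resp = client.head_object(bucket: "my-example-bucket", key: "photos/2006/February/sample.jpg")
resp.content_length
resp.last_modified
resp.etag

# Or wait for the key to appear, using the object_exists waiter:
client.wait_until(:object_exists, bucket: "my-example-bucket", key: "photos/2006/February/sample.jpg")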
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7514
def head_object(params = {}, options = {})
  req = build_request(:head_object, params)
  req.send_request(options)
end
#list_bucket_analytics_configurations(params = {}) ⇒ Types::ListBucketAnalyticsConfigurationsOutput
Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.
This action supports list pagination and does not return more than 100 configurations at a time. You should always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there will be a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to ListBucketAnalyticsConfigurations:
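A hedged sketch of paging through the configurations by hand (bucket name is a placeholder; the field names follow the Ruby client's response shape):

token = nil
loop do
  resp = client.list_bucket_analytics_configurations(
    bucket: "my-example-bucket",
    continuation_token: token
  )
  resp.analytics_configuration_list.each { |config| puts config.id }
  break unless resp.is_truncated
  token = resp.next_continuation_token
end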
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7611
def list_bucket_analytics_configurations(params = {}, options = {})
  req = build_request(:list_bucket_analytics_configurations, params)
  req.send_request(options)
end
#list_bucket_intelligent_tiering_configurations(params = {}) ⇒ Types::ListBucketIntelligentTieringConfigurationsOutput
Lists the S3 Intelligent-Tiering configuration from the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.
The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.
For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to ListBucketIntelligentTieringConfigurations include:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7699
def list_bucket_intelligent_tiering_configurations(params = {}, options = {})
  req = build_request(:list_bucket_intelligent_tiering_configurations, params)
  req.send_request(options)
end
#list_bucket_inventory_configurations(params = {}) ⇒ Types::ListBucketInventoryConfigurationsOutput
Returns a list of inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.
This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to ListBucketInventoryConfigurations:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7797
def list_bucket_inventory_configurations(params = {}, options = {})
  req = build_request(:list_bucket_inventory_configurations, params)
  req.send_request(options)
end
#list_bucket_metrics_configurations(params = {}) ⇒ Types::ListBucketMetricsConfigurationsOutput
Lists the metrics configurations for the bucket. The metrics configurations are only for the request metrics of the bucket and do not provide information on daily storage metrics. You can have up to 1,000 configurations per bucket.
This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.
To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For more information about metrics configurations and CloudWatch request metrics, see Monitoring Metrics with Amazon CloudWatch.
The following operations are related to ListBucketMetricsConfigurations:
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7895
def list_bucket_metrics_configurations(params = {}, options = {})
  req = build_request(:list_bucket_metrics_configurations, params)
  req.send_request(options)
end
#list_buckets(params = {}) ⇒ Types::ListBucketsOutput
Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission.
For information about Amazon S3 buckets, see Creating, configuring, and working with Amazon S3 buckets.
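A minimal sketch:

resp = client.list_buckets
resp.buckets.map(&:name)  # names of every bucket owned by the caller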
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7958
def list_buckets(params = {}, options = {})
  req = build_request(:list_buckets, params)
  req.send_request(options)
end
#list_multipart_uploads(params = {}) ⇒ Types::ListMultipartUploadsOutput
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This action returns at most 1,000 multipart uploads in the response. 1,000 multipart uploads is the maximum number of uploads a response can include, which is also the default value. You can further limit the number of uploads in a response by specifying the max-uploads parameter in the request. If additional multipart uploads satisfy the list criteria, the response will contain an IsTruncated element with the value true. To list the additional multipart uploads, use the key-marker and upload-id-marker request parameters.
In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time.
For more information on multipart uploads, see Uploading Objects Using Multipart Upload.
For information on permissions required to use the multipart upload API, see Multipart Upload and Permissions.
The following operations are related to ListMultipartUploads:
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
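A minimal sketch (placeholder bucket name):

resp = client.list_multipart_uploads(bucket: "my-example-bucket")
resp.uploads.each do |upload|
  puts "#{upload.key} (upload_id: #{upload.upload_id}, initiated: #{upload.initiated})"
end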
# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8262
def list_multipart_uploads(params = {}, options = {})
  req = build_request(:list_multipart_uploads, params)
  req.send_request(options)
end
#list_object_versions(params = {}) ⇒ Types::ListObjectVersionsOutput
Returns metadata about all versions of the objects in a bucket. You can also use request parameters as selection criteria to return metadata about a subset of all the object versions.
To use this operation, you must have permission to perform the s3:ListBucketVersions action. Be aware of the name difference.
A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket.
This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListObjectVersions:
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
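A hedged sketch of listing versions and delete markers (bucket name and prefix are placeholders):

resp = client.list_object_versions(bucket: "my-example-bucket", prefix: "photos/")
resp.versions.each       { |v| puts "#{v.key} #{v.version_id} latest=#{v.is_latest}" }
resp.delete_markers.each { |m| puts "delete marker: #{m.key} #{m.version_id}" }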