Class: Aws::S3::ObjectSummary
- Inherits: Resources::Resource
  - Object
    - Resources::Resource
      - Aws::S3::ObjectSummary
- Defined in: aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb
Instance Attribute Summary
- #bucket_name ⇒ String (readonly)
- #etag ⇒ String (readonly): The entity tag is a hash of the object.
- #key ⇒ String (readonly)
- #last_modified ⇒ Time (readonly): The date the object was last modified.
- #owner ⇒ Types::Owner (readonly): The owner of the object.
- #size ⇒ Integer (readonly, also: #content_length): Size in bytes of the object.
- #storage_class ⇒ String (readonly): The class of storage used to store the object.
Attributes inherited from Resources::Resource
Instance Method Summary
- #acl ⇒ ObjectAcl
- #bucket ⇒ Bucket
- #copy_from(source, options = {}) ⇒ Types::CopyObjectOutput
- #copy_to(target, options = {}) ⇒ Object
- #delete(options = {}) ⇒ Types::DeleteObjectOutput: Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object.
- #download_file(destination, options = {}) ⇒ Boolean: Returns true when the file is downloaded without any errors.
- #exists? ⇒ Boolean: Returns true if this ObjectSummary exists.
- #get(options = {}) ⇒ Types::GetObjectOutput: Retrieves objects from Amazon S3.
- #initialize ⇒ Object (constructor)
- #initiate_multipart_upload(options = {}) ⇒ MultipartUpload
- #move_to(target, options = {}) ⇒ void
- #multipart_upload(id) ⇒ MultipartUpload
- #object ⇒ Object
- #presigned_post(options = {}) ⇒ PresignedPost
- #presigned_url(http_method, params = {}) ⇒ String
- #public_url(options = {}) ⇒ String
- #put(options = {}) ⇒ Types::PutObjectOutput: Adds an object to a bucket.
- #restore_object(options = {}) ⇒ Types::RestoreObjectOutput: Restores an archived copy of an object back into Amazon S3.
- #upload_file(source, options = {}) ⇒ Boolean: Returns true when the object is uploaded without any errors.
- #version(id) ⇒ ObjectVersion
- #wait_until_exists {|waiter| ... } ⇒ ObjectSummary: Waits until this ObjectSummary exists.
- #wait_until_not_exists {|waiter| ... } ⇒ ObjectSummary: Waits until this ObjectSummary no longer exists.
Methods inherited from Resources::Resource
add_data_attribute, add_identifier, #data, data_attributes, #data_loaded?, identifiers, #load, #wait_until
Methods included from Resources::OperationMethods
#add_batch_operation, #add_operation, #batch_operation, #batch_operation_names, #batch_operations, #operation, #operation_names, #operations
Instance Attribute Details
#bucket_name ⇒ String (readonly)
#etag ⇒ String (readonly)
The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag may or may not be an MD5 digest of the object data. Whether or not it is depends on how the object was created and how it is encrypted as described below:
- Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
- Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
- If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.
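As an illustration, for an object uploaded in a single PUT and stored unencrypted or with SSE-S3, the ETag can be checked against a locally computed MD5 digest. A minimal sketch (bucket and prefix names are illustrative):

require 'aws-sdk' # version 2
require 'digest'

s3 = Aws::S3::Resource.new(region: 'us-east-1')

# Bucket#objects yields Aws::S3::ObjectSummary instances.
summary = s3.bucket('my-bucket').objects(prefix: 'photos/').first

local_md5 = Digest::MD5.hexdigest(summary.get.body.read)

# ETags are returned wrapped in double quotes; strip them before comparing.
puts(local_md5 == summary.etag.delete('"') ? 'ETag is the MD5' : 'ETag is not a plain MD5')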
#key ⇒ String (readonly)
#last_modified ⇒ Time (readonly)
The date the object was last modified.
#owner ⇒ Types::Owner (readonly)
The owner of the object.
#size ⇒ Integer (readonly) Also known as: content_length
Size in bytes of the object.
#storage_class ⇒ String (readonly)
The class of storage used to store the object.
Possible values:
- STANDARD
- REDUCED_REDUNDANCY
- GLACIER
- STANDARD_IA
- ONEZONE_IA
- INTELLIGENT_TIERING
- DEEP_ARCHIVE
- OUTPOSTS
Instance Method Details
#acl ⇒ ObjectAcl
#bucket ⇒ Bucket
#copy_from(source, options = {}) ⇒ Types::CopyObjectOutput
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 11

def copy_from(source, options = {})
  object.copy_from(source, options)
end
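For instance (source bucket and key are illustrative), an existing object can be copied onto this summary's key:

# Copy another object onto this object's bucket/key.
summary.copy_from('source-bucket/source-key')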
#copy_to(target, options = {}) ⇒ Object
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 19

def copy_to(target, options = {})
  object.copy_to(target, options)
end
#delete(options = {}) ⇒ Types::DeleteObjectOutput
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects.
To remove a specific version, you must be the bucket owner and you must use the version Id subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker, to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS.

For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.

You can delete objects by explicitly calling the DELETE Object API or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.
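A short sketch of the versioned-delete cases described above (the version ID and MFA values are illustrative):

# Permanently delete one version of a versioned object.
summary.delete(version_id: '3sL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY')

# In an MFA Delete enabled bucket, pass the x-amz-mfa header value via :mfa
# ("device-serial token-code"); the request must use HTTPS.
summary.delete(
  version_id: '3sL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY',
  mfa: 'arn:aws:iam::123456789012:mfa/dev-user 123456'
)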
#download_file(destination, options = {}) ⇒ Boolean
Returns true when the file is downloaded without any errors.
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 67

def download_file(destination, options = {})
  object.download_file(destination, options)
end
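For example (destination path is illustrative):

# Download this object to a local file; returns true on success.
summary.download_file('/tmp/sample.jpg')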
#exists? ⇒ Boolean
Returns true if this ObjectSummary exists. Returns false otherwise.
#get(options = {}) ⇒ Types::GetObjectOutput
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object. If you grant READ access to the anonymous user, you can return the object without using an authorization header.

An Amazon S3 bucket has no directory hierarchy such as you would find in a typical computer file system. You can, however, create a logical hierarchy by using object key names that imply a folder structure. For example, instead of naming an object sample.jpg, you can name it photos/2006/February/sample.jpg.

To get an object from such a logical hierarchy, specify the full key name for the object in the GET operation. For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the resource as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the resource as /examplebucket/photos/2006/February/sample.jpg. For more information about request types, see HTTP Host Header Bucket Specification.
To distribute large files to many people, you can save bandwidth costs by using BitTorrent. For more information, see Amazon S3 Torrent. For more information about returning the ACL of an object, see GetObjectAcl.
If the object you are retrieving is stored in the S3 Glacier or S3 Glacier Deep Archive storage class, or S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers, before you can retrieve the object you must first restore a copy using RestoreObject. Otherwise, this operation returns an InvalidObjectStateError error. For information about restoring archived objects, see Restoring Archived Objects.

Encryption request headers, like x-amz-server-side-encryption, should not be sent for GET requests if your object uses server-side encryption with CMKs stored in AWS KMS (SSE-KMS) or server-side encryption with Amazon S3–managed encryption keys (SSE-S3). If your object does use these types of keys, you'll get an HTTP 400 BadRequest error.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:
- x-amz-server-side-encryption-customer-algorithm
- x-amz-server-side-encryption-customer-key
- x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
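A hedged sketch of an SSE-C GET through the Ruby SDK, which maps these headers to request parameters (the key material here is a placeholder; the SDK is expected to handle encoding and the key MD5 header):

# The same 256-bit key supplied at upload must be supplied again on GET.
customer_key = 'x' * 32 # placeholder key material
resp = summary.get(
  sse_customer_algorithm: 'AES256',
  sse_customer_key: customer_key
)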
Assuming you have permission to read object tags (permission for the s3:GetObjectVersionTagging action), the response also returns the x-amz-tagging-count header, which provides the number of tags associated with the object. You can use GetObjectTagging to retrieve the tag set associated with an object.
Permissions
You need the s3:GetObject permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
- If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
- If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
Versioning
By default, the GET operation returns the current version of an object. To return a different version, use the versionId subresource.

If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.
For more information about versioning, see PutBucketVersioning.
Overriding Response Header Values
There are times when you want to override certain response header values in a GET response. For example, you might override the Content-Disposition response header value in your GET request.
You can override values for a set of response headers using the following query parameters. These response header values are sent only on a successful request, that is, when status code 200 OK is returned. The set of headers you can override using these parameters is a subset of the headers that Amazon S3 accepts when you create an object. The response headers that you can override for the GET response are Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding. To override these header values in the GET response, you use the following request parameters.
You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
- response-content-type
- response-content-language
- response-expires
- response-cache-control
- response-content-disposition
- response-content-encoding
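With the Ruby SDK these query parameters surface as snake_cased request options; for example (values illustrative):

resp = summary.get(
  response_content_type: 'application/json',
  response_content_disposition: 'attachment; filename="sample.json"',
  response_cache_control: 'no-cache'
)
resp.content_type # => "application/json"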
Additional Considerations about Request Headers
If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then S3 returns 200 OK and the data requested.

If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then S3 returns a 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
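As a sketch, a conditional GET through the SDK (the ETag value is illustrative); the client is expected to raise Aws::S3::Errors::NotModified on a 304:

begin
  summary.get(if_none_match: '"9b2cf535f27731c974343645a3985328"')
rescue Aws::S3::Errors::NotModified
  # Cached copy is still current; nothing to fetch.
end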
#initiate_multipart_upload(options = {}) ⇒ MultipartUpload
#move_to(target, options = {}) ⇒ void
This method returns an undefined value.
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 27

def move_to(target, options = {})
  object.move_to(target, options)
end
#multipart_upload(id) ⇒ MultipartUpload
#object ⇒ Object
#presigned_post(options = {}) ⇒ PresignedPost
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 35

def presigned_post(options = {})
  object.presigned_post(options)
end
#presigned_url(http_method, params = {}) ⇒ String
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 43

def presigned_url(http_method, params = {})
  object.presigned_url(http_method, params)
end
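For example, to generate a signed GET URL valid for one hour:

url = summary.presigned_url(:get, expires_in: 3600)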
#public_url(options = {}) ⇒ String
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 51

def public_url(options = {})
  object.public_url(options)
end
#put(options = {}) ⇒ Types::PutObjectOutput
Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it.
Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer or use versioning instead.
To ensure that data is not corrupted traversing the network, use the Content-MD5 header. When you use this header, Amazon S3 checks the object against the provided MD5 value and, if they do not match, returns an error. Additionally, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.

The Content-MD5 header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information about Amazon S3 Object Lock, see Amazon S3 Object Lock Overview in the Amazon Simple Storage Service Developer Guide.
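A minimal sketch of this end-to-end integrity check using Content-MD5 (the body is illustrative):

require 'digest'

data = 'Hello, Amazon S3!'
# S3 rejects the request if the payload does not match this digest.
summary.put(body: data, content_md5: Digest::MD5.base64digest(data))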
Server-side Encryption
You can optionally request server-side encryption. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. You have the option to provide your own encryption key or use AWS managed encryption keys. For more information, see Using Server-Side Encryption.
Access Control List (ACL)-Specific Request Headers
You can use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API.
Storage Class Options
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, see Storage Classes in the Amazon S3 Service Developer Guide.
Versioning
If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object being stored. Amazon S3 returns this ID in the response. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects.
For more information about versioning, see Adding Objects to Versioning Enabled Buckets. For information about returning the versioning state of a bucket, see GetBucketVersioning.
#restore_object(options = {}) ⇒ Types::RestoreObjectOutput
Restores an archived copy of an object back into Amazon S3.
This action is not supported by Amazon S3 on Outposts.
This action performs the following types of requests:
- select: Perform a select query on an archived object
- restore an archive: Restore an archived object

To use this operation, you must have permissions to perform the s3:RestoreObject action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Querying Archives with Select Requests
You use a select type of request to perform SQL queries on archived objects. The archived objects that are being queried by the select request must be formatted as uncompressed comma-separated values (CSV) files. You can run queries and custom analytics on your archived data without having to restore your data to a hotter Amazon S3 tier. For an overview about select requests, see Querying Archived Objects in the Amazon Simple Storage Service Developer Guide.
When making a select request, do the following:
- Define an output location for the select query's output. This must be an Amazon S3 bucket in the same AWS Region as the bucket that contains the archive object that is being queried. The AWS account that initiates the job must have permissions to write to the S3 bucket. You can specify the storage class and encryption for the output objects stored in the bucket. For more information about output, see Querying Archived Objects in the Amazon Simple Storage Service Developer Guide.
  For more information about the S3 structure in the request body, see the following:
  - Managing Access with ACLs in the Amazon Simple Storage Service Developer Guide
  - Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide
- Define the SQL expression for the SELECT type of restoration for your query in the request body's SelectParameters structure. You can use expressions like the following examples.
  - The following expression returns all records from the specified object.
    SELECT * FROM Object
  - Assuming that you are not using any headers for data stored in the object, you can specify columns with positional headers.
    SELECT s._1, s._2 FROM Object s WHERE s._3 > 100
  - If you have headers and you set the fileHeaderInfo in the CSV structure in the request body to USE, you can specify headers in the query. (If you set the fileHeaderInfo field to IGNORE, the first row is skipped for the query.) You cannot mix ordinal positions with header column names.
    SELECT s.Id, s.FirstName, s.SSN FROM S3Object s
- For more information about using SQL with S3 Glacier Select restore, see SQL Reference for Amazon S3 Select and S3 Glacier Select in the Amazon Simple Storage Service Developer Guide.
When making a select request, you can also do the following:
- To expedite your queries, specify the Expedited tier. For more information about tiers, see "Restoring Archives," later in this topic.
- Specify details about the data serialization format of both the input object that is being queried and the serialization of the CSV-encoded query results.
The following are additional important facts about the select feature:
- The output results are new Amazon S3 objects. Unlike archive retrievals, they are stored until explicitly deleted, whether manually or through a lifecycle policy.
- You can issue more than one select request on the same Amazon S3 object. Amazon S3 doesn't deduplicate requests, so avoid issuing duplicate requests.
- Amazon S3 accepts a select request even if the object has already been restored. A select request doesn't return error response 409.
Restoring objects
Objects that you archive to the S3 Glacier or S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers are not accessible in real time. For objects in Archive Access or Deep Archive Access tiers you must first initiate a restore request, and then wait until the object is moved into the Frequent Access tier. For objects in S3 Glacier or S3 Glacier Deep Archive storage classes you must first initiate a restore request, and then wait until a temporary copy of the object is available. To access an archived object, you must restore the object for the duration (number of days) that you specify.
To restore a specific object version, you can provide a version ID. If you don't provide a version ID, Amazon S3 restores the current version.
When restoring an archived object (or using a select request), you can specify one of the following data access tier options in the Tier element of the request body:
- Expedited: Expedited retrievals allow you to quickly access your data stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier when occasional urgent requests for a subset of archives are required. For all but the largest archived objects (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. Expedited retrievals and provisioned capacity are not available for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier.
- Standard: Standard retrievals allow you to access any of your archived objects within several hours. This is the default option for retrieval requests that do not specify the retrieval option. Standard retrievals typically finish within 3–5 hours for objects stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier. They typically finish within 12 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored in S3 Intelligent-Tiering.
- Bulk: Bulk retrievals are the lowest-cost retrieval option in S3 Glacier, enabling you to retrieve large amounts, even petabytes, of data inexpensively. Bulk retrievals typically finish within 5–12 hours for objects stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier. They typically finish within 48 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier. Bulk retrievals are free for objects stored in S3 Intelligent-Tiering.
For more information about archive retrieval options and provisioned capacity for Expedited data access, see Restoring Archived Objects in the Amazon Simple Storage Service Developer Guide.
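As a sketch, restoring an archived object for two days using the Standard retrieval tier (availability of :glacier_job_parameters depends on your SDK release):

summary.restore_object(
  restore_request: {
    days: 2,
    glacier_job_parameters: { tier: 'Standard' }
  }
)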
You can use Amazon S3 restore speed upgrade to change the restore speed to a faster speed while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the Amazon Simple Storage Service Developer Guide.
To get the status of object restoration, you can send a HEAD request. Operations return the x-amz-restore header, which provides information about the restoration status, in the response. You can use Amazon S3 event notifications to notify you when a restore is initiated or completed. For more information, see Configuring Amazon S3 Event Notifications in the Amazon Simple Storage Service Developer Guide.
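A restoration-status check might look like this, reading the x-amz-restore header via Client#head_object:

resp = summary.client.head_object(bucket: summary.bucket_name, key: summary.key)
resp.restore # => 'ongoing-request="true"' while the restore is in progress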
After restoring an archived object, you can update the restoration period by reissuing the request with a new period. Amazon S3 updates the restoration period relative to the current time and charges only for the request; there are no data transfer charges. You cannot update the restoration period when Amazon S3 is actively processing your current restore request for the object.
If your bucket has a lifecycle configuration with a rule that includes an expiration action, the object expiration overrides the life span that you specify in a restore request. For example, if you restore an object copy for 10 days, but the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management in Amazon Simple Storage Service Developer Guide.
Responses
A successful operation returns either the 200 OK or 202 Accepted status code.
- If the object is not previously restored, then Amazon S3 returns 202 Accepted in the response.
- If the object is previously restored, Amazon S3 returns 200 OK in the response.
Special Errors
- Code: RestoreAlreadyInProgress
  - Cause: Object restore is already in progress. (This error does not apply to SELECT type requests.)
  - HTTP Status Code: 409 Conflict
  - SOAP Fault Code Prefix: Client
- Code: GlacierExpeditedRetrievalNotAvailable
  - Cause: Expedited retrievals are currently not available. Try again later. (Returned if there is insufficient capacity to process the Expedited request. This error applies only to Expedited retrievals and not to S3 Standard or Bulk retrievals.)
  - HTTP Status Code: 503
  - SOAP Fault Code Prefix: N/A
Related Resources
- SQL Reference for Amazon S3 Select and S3 Glacier Select in the Amazon Simple Storage Service Developer Guide
#upload_file(source, options = {}) ⇒ Boolean
Returns true when the object is uploaded without any errors.
# File 'aws-sdk-resources/lib/aws-sdk-resources/services/s3/object_summary.rb', line 59

def upload_file(source, options = {})
  object.upload_file(source, options)
end
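For example (source path is illustrative):

# Upload a local file to this object's bucket/key; returns true on success.
summary.upload_file('/path/to/source.jpg')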
#version(id) ⇒ ObjectVersion
#wait_until_exists {|waiter| ... } ⇒ ObjectSummary
Waits until this ObjectSummary exists. This method waits by polling Client#head_object until successful. An error is raised after a configurable number of failed checks.
This waiter uses the following defaults:
Configuration | Default
--- | ---
#delay | 5
#max_attempts | 20
You can modify defaults and register callbacks by passing a block argument.
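For example, to poll less aggressively than the defaults shown above:

summary.wait_until_exists do |waiter|
  waiter.delay = 10        # seconds between polls
  waiter.max_attempts = 6  # give up after roughly one minute
end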
#wait_until_not_exists {|waiter| ... } ⇒ ObjectSummary
Waits until this ObjectSummary no longer exists. This method waits by polling Client#head_object until successful. An error is raised after a configurable number of failed checks.
This waiter uses the following defaults:
Configuration | Default
--- | ---
#delay | 5
#max_attempts | 20
You can modify defaults and register callbacks by passing a block argument.