
Class: AWS::S3::S3Object

Inherits:
Object
Defined in:
lib/aws/s3/s3_object.rb

Overview

Represents an object in S3. Objects live in a bucket and have unique keys.

Getting Objects

You can get an object by its key.

s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made

You can also get objects by enumerating the objects in a bucket.

bucket.objects.each do |obj|
  puts obj.key
end

See ObjectCollection for more information on finding objects.

Creating Objects

You create an object by writing to it. The following two expressions are equivalent.

obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')

Writing Objects

To upload data to S3, you simply need to call #write on an object.

obj.write('Hello World!')
obj.read
#=> 'Hello World!'

Uploading Files

You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:

# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))

# also works this way
obj.write(:file => path_to_file)

# Also accepts an open file object
file = File.open(path_to_file, 'rb')
obj.write(file)

All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.

Streaming Uploads

When you call #write with an IO-like object, it will be streamed to S3 in chunks.

While it is possible to determine the size of many IO objects, you may have to specify the :content_length of your IO object. If the exact size cannot be known, you may provide an :estimated_content_length. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
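The chunked behavior can be illustrated in plain Ruby. The `each_chunk` helper below is only a sketch of the kind of read loop a streaming upload performs, not the SDK's actual implementation:

```ruby
require 'stringio'

# read an IO-like object in fixed-size chunks, the way a streaming
# upload consumes data instead of buffering the whole payload
def each_chunk(io, chunk_size)
  chunks = []
  while (data = io.read(chunk_size))
    chunks << data
  end
  chunks
end

io = StringIO.new('a' * 12)
parts = each_chunk(io, 5)
parts.map(&:size) #=> [5, 5, 2]
```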

You may also stream uploads to S3 using a block:

obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end

Reading Objects

You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.

obj.write('abc')
puts obj.read
#=> abc

Streaming Downloads

If you want to stream an object from S3, you can pass a block to #read.

File.open('output', 'wb') do |file|
  large_object.read do |chunk|
    file.write(chunk)
  end
end

Encryption

Amazon S3 can encrypt objects for you server-side. You can also use client-side encryption.

Server Side Encryption

You can specify to use server side encryption when writing an object.

obj.write('data', :server_side_encryption => :aes256)

You can also make this the default behavior.

AWS.config(:s3_server_side_encryption => :aes256)

s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted

Client Side Encryption

Client side encryption utilizes envelope encryption, so that your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.
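The envelope scheme can be sketched in plain OpenSSL: the data is encrypted with a one-time symmetric data key, and only that key, wrapped with your master key, travels alongside the object. This illustrates the idea only; the SDK's actual storage format differs:

```ruby
require 'openssl'

master = OpenSSL::PKey::RSA.new(2048)

# encrypt the payload with a one-time AES data key
cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
data_key = cipher.random_key
iv = cipher.random_iv
ciphertext = cipher.update('MY TEXT') + cipher.final

# wrap the data key with the master key; only the wrapped key is
# stored with the object -- the plaintext key never leaves the client
wrapped_key = master.public_encrypt(data_key)

# decrypt: unwrap the data key, then decrypt the payload
decipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
decipher.key = master.private_decrypt(wrapped_key)
decipher.iv = iv
plaintext = decipher.update(ciphertext) + decipher.final
plaintext #=> "MY TEXT"
```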

Symmetric Key Encryption

An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits. Start by generating a new key or reading a previously generated key.

# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key

# read an existing key from disk
my_key = File.read("my_key.der")
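The cipher name determines the key length; a quick check in plain OpenSSL confirms the sizes (16, 24, or 32 bytes for AES-128/192/256):

```ruby
require 'openssl'

# random_key returns a key sized for the named cipher
key_128 = OpenSSL::Cipher.new('AES-128-ECB').random_key
key_192 = OpenSSL::Cipher.new('AES-192-ECB').random_key
key_256 = OpenSSL::Cipher.new('AES-256-ECB').random_key

[key_128, key_192, key_256].map(&:bytesize) #=> [16, 24, 32]
```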

Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.

obj = bucket.objects["my-text-object"]

# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)

# try to read the object without decrypting, oops
obj.read
#=> '.....'

Lastly, you can download and decrypt by providing the same key.

obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Asymmetric Key Pair

An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.

my_key = OpenSSL::PKey::RSA.new(1024)

Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.

obj = bucket.objects["my-text-object"]

# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)

# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Configuring storage locations

By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3. This object is stored at the same key with a '.instruction' suffix.

# new object, does not exist yet
obj = bucket.objects["my-text-object"]

# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false

# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
  :encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

bucket.objects['my-text-object.instruction'].exists?
#=> true

If you store the encryption materials in an instruction file, you must tell #read this or it will fail to find your encryption materials.

# reading an encrypted file whose materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

Configuring default behaviors

You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.

# all objects uploaded/downloaded with this s3 object will be
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")

# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")

You can also configure the default storage location for the encryption materials.

AWS.config(:s3_encryption_materials_location => :instruction_file)

Constant Summary

Instance Attribute Summary

Instance Method Summary

Constructor Details

- (S3Object) initialize(bucket, key, opts = {})

Returns a new instance of S3Object

Parameters:

  • bucket (Bucket)

    The bucket this object belongs to.

  • key (String)

    The object's key.



# File 'lib/aws/s3/s3_object.rb', line 244

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end

Instance Attribute Details

- (Bucket) bucket (readonly)

Returns The bucket this object is in.

Returns:

  • (Bucket)

    The bucket this object is in.



# File 'lib/aws/s3/s3_object.rb', line 254

def bucket
  @bucket
end

- (String) key (readonly)

Returns The object's unique key.

Returns:

  • (String)

The object's unique key.



# File 'lib/aws/s3/s3_object.rb', line 251

def key
  @key
end

Instance Method Details

- (Boolean) ==(other) Also known as: eql?

Returns true if the other object belongs to the same bucket and has the same key.

Returns:

  • (Boolean)

    Returns true if the other object belongs to the same bucket and has the same key.



# File 'lib/aws/s3/s3_object.rb', line 263

def == other
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end

- (AccessControlList) acl

Returns the object's access control list. This will be an instance of AccessControlList, plus an additional change method:

object.acl.change do |acl|
  # remove any grants to someone other than the bucket owner
  owner_id = object.bucket.owner.id
  acl.grants.reject! do |g|
    g.grantee.canonical_user_id != owner_id
  end
end

Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it's possible that you may overwrite a concurrent update to the ACL using this method.

Returns:



# File 'lib/aws/s3/s3_object.rb', line 1121

def acl

  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)

  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl

end

- (nil) acl=(acl)

Sets the object's ACL (access control list). You can provide an ACL in a number of different formats.

Parameters:

  • acl (Symbol, String, Hash, AccessControlList)

    Accepts an ACL description in one of the following formats:

    ==== Canned ACL

    S3 supports a number of canned ACLs for buckets and objects. These include:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read (object-only)
    • :bucket_owner_full_control (object-only)
    • :log_delivery_write (bucket-only)

    Here is an example of providing a canned ACL to a bucket:

    s3.buckets['bucket-name'].acl = :public_read
    

    ==== ACL Grant Hash

    You can provide a hash of grants. The hash is composed of grants (keys) and grantees (values). Accepted grant keys are:

    • :grant_read
    • :grant_write
    • :grant_read_acp
    • :grant_write_acp
    • :grant_full_control

    Grantee strings (values) should be formatted like some of the following examples:

    id="8a6925ce4adf588a4532142d3f74dd8c71fa124b1ddee97f21c32aa379004fef"
    uri="http://acs.amazonaws.com/groups/global/AllUsers"
    emailAddress="xyz@amazon.com"
    

    You can provide a comma delimited list of multiple grantees in a single string. Please note the use of quotes inside the grantee string. Here is a simple example:

    { :grant_full_control => "emailAddress=\"foo@bar.com\", id=\"abc..mno\"" }
    

    See the S3 API documentation for more information on formatting grants.
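Because the embedded quotes are easy to get wrong, it can help to build the grantee string from a hash. The `grantee_string` helper below is hypothetical, not part of the SDK:

```ruby
# join grantee type/value pairs into the quoted, comma-delimited
# string format S3 expects for grant values
def grantee_string(pairs)
  pairs.map { |type, value| %(#{type}="#{value}") }.join(', ')
end

grantee_string(:emailAddress => 'foo@bar.com', :id => 'abc..mno')
#=> 'emailAddress="foo@bar.com", id="abc..mno"'
```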

    ==== AccessControlList Object

    You can build an ACL using the AccessControlList class and pass this object.

    acl = AWS::S3::AccessControlList.new
    acl.grant(:full_control).to(:canonical_user_id => "8a6...fef")
    acl #=> this object is acceptable
    

    ==== ACL XML String

    Lastly you can build your own ACL XML document and pass it as a string.

    <<-XML
      <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Owner>
          <ID>8a6...fef</ID>
          <DisplayName>owner-display-name</DisplayName>
        </Owner>
        <AccessControlList>
          <Grant>
            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
              <ID>8a6...fef</ID>
              <DisplayName>owner-display-name</DisplayName>
            </Grantee>
            <Permission>FULL_CONTROL</Permission>
          </Grant>
        </AccessControlList>
      </AccessControlPolicy>
    XML
    

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 1136

def acl=(acl)

  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key

  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil

end

- (Integer) content_length

Returns Size of the object in bytes.

Returns:

  • (Integer)

    Size of the object in bytes.



# File 'lib/aws/s3/s3_object.rb', line 316

def content_length
  head[:content_length]
end

- (String) content_type

Note:

S3 does not compute content-type. It reports the content-type as was reported during the file upload.

Returns the content type as reported by S3, defaults to an empty string when not provided during upload.

Returns:

  • (String)

    Returns the content type as reported by S3, defaults to an empty string when not provided during upload.



# File 'lib/aws/s3/s3_object.rb', line 324

def content_type
  head[:content_type]
end

- (nil) copy_from(source, options = {})

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don't specify any of these options when copying, the object will have the default values as described below.

Copies data from one S3 object to another.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

Parameters:

  • source (Mixed)
  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the source object can be found in. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the source object can be found in. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied. If you set this value, you must set ALL metadata values for the object, as existing values are not preserved.

  • :content_type (String)

    The content type of the copied object. Defaults to the source object's content type.

  • :content_disposition (String)

    The presentational information for the object. Defaults to the source object's content disposition.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :version_id (String) — default: nil

    Causes the copy to read a specific version of the source object.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    Set to true when the object being copied was client-side encrypted. This is important so the encryption metadata will be copied.

  • :use_multipart_copy (Boolean) — default: false

    Set this to true if you need to copy an object that is larger than 5GB.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 864

def copy_from source, options = {}

  options = options.dup

  options[:copy_source] =
    case source
    when S3Object
      "#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      options[:version_id] = source.version_id
      "#{source.object.bucket.name}/#{source.object.key}"
    else
      if options[:bucket]
        "#{options.delete(:bucket).name}/#{source}"
      elsif options[:bucket_name]
        "#{options.delete(:bucket_name)}/#{source}"
      else
        "#{self.bucket.name}/#{source}"
      end
    end

  if [:metadata, :content_disposition, :content_type, :cache_control,
    ].any? {|opt| options.key?(opt) }
  then
    options[:metadata_directive] = 'REPLACE'
  else
    options[:metadata_directive] ||= 'COPY'
  end

  # copies client-side encryption materials (from the metadata or
  # instruction file)
  if options.delete(:client_side_encrypted)
    copy_cse_materials(source, options)
  end

  add_sse_options(options)

  options[:storage_class] = options.delete(:reduced_redundancy) ?
    'REDUCED_REDUNDANCY' : 'STANDARD'

  options[:bucket_name] = bucket.name
  options[:key] = key

  if use_multipart_copy?(options)
    multipart_copy(options)
  else
    resp = client.copy_object(options)
  end

  nil

end

- (S3Object) copy_to(target, options = {})

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don't specify any of these options when copying, the new object will have the default values as described below.

Copies data from the current object to another object in S3.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

Parameters:

  • target (S3Object, String)

    An S3Object, or a string key of an object to copy to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns the copy (target) object.



# File 'lib/aws/s3/s3_object.rb', line 978

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name]
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end

- (nil) delete(options = {})

Deletes the object from its S3 bucket.

Parameters:

  • options (Hash) (defaults to: {})
Options Hash (options):

  • :version_id (String)

    Deletes a specific version of the object.

  • :delete_instruction_file (Boolean)

    When true, the '<key>.instruction' object (created when client-side encryption materials are stored in an instruction file) is also deleted.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 393

def delete options = {}
  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil

end

- (String) etag

Returns the object's ETag.

Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, the ETag is instead computed from the MD5s of the individual parts.
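A commonly observed (though not officially documented) detail: the multipart ETag is the MD5 of the concatenated binary part MD5s, suffixed with a dash and the part count. A sketch under that assumption:

```ruby
require 'digest'

# compute a multipart-style ETag from the individual part bodies
parts = ['a' * 5, 'b' * 3]
part_digests = parts.map { |part| Digest::MD5.digest(part) }
etag = "#{Digest::MD5.hexdigest(part_digests.join)}-#{parts.size}"
# etag is a 32-char hex digest followed by "-2"
```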

Returns:

  • (String)

    Returns the object's ETag



# File 'lib/aws/s3/s3_object.rb', line 304

def etag
  head[:etag]
end

- (Boolean) exists?

Returns true if the object exists in S3.

Returns:

  • (Boolean)

    Returns true if the object exists in S3.



# File 'lib/aws/s3/s3_object.rb', line 269

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end

- (DateTime?) expiration_date

Returns:

  • (DateTime, nil)


# File 'lib/aws/s3/s3_object.rb', line 329

def expiration_date
  head[:expiration_date]
end

- (String?) expiration_rule_id

Returns:

  • (String, nil)


# File 'lib/aws/s3/s3_object.rb', line 334

def expiration_rule_id
  head[:expiration_rule_id]
end

- (Object) head(options = {})

Performs a HEAD request against this object and returns an object with useful information about the object, including:

  • metadata (hash of user-supplied key-value pairs)
  • content_length (integer, number of bytes)
  • content_type (as sent to S3 when uploading the object)
  • etag (typically the object's MD5)
  • server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Which version of this object to make a HEAD request against.

Returns:

  • A head object response with metadata, content_length, content_type, etag and server_side_encryption.



# File 'lib/aws/s3/s3_object.rb', line 292

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end

- (Time) last_modified

Returns the object's last modified time.

Returns:

  • (Time)

    Returns the object's last modified time.



# File 'lib/aws/s3/s3_object.rb', line 311

def last_modified
  head[:last_modified]
end

- (ObjectMetadata) metadata(options = {})

Returns an instance of ObjectMetadata representing the metadata for this object.

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Returns the metadata for a specific version of the object.

Returns:

  • (ObjectMetadata)

    Returns an instance of ObjectMetadata representing the metadata for this object.



# File 'lib/aws/s3/s3_object.rb', line 433

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end

- (S3Object) move_to(target, options = {}) Also known as: rename_to

Moves an object to a new key.

This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.

bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']

# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')

old_obj.key     #=> 'old-key'
old_obj.exists? #=> false

new_obj.key     #=> 'new-key'
new_obj.exists? #=> true

If you need to move an object to a different bucket, pass :bucket or :bucket_name.

obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')

If the copy succeeds but the delete then fails, an error will be raised.

Parameters:

  • target (String)

    The key to move this object to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns a new object with the new key.



# File 'lib/aws/s3/s3_object.rb', line 780

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end

- (S3Object, ObjectVersion) multipart_upload(options = {}) {|upload| ... }

Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.

Note: After you initiate a multipart upload and upload one or more parts, you must either complete or abort the upload to stop being charged for storage of the uploaded parts. Amazon S3 frees the parts and stops billing for them only once the upload is completed or aborted.

Examples:

Uploading an object in two parts


bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("a" * 5242880)
  upload.add_part("b" * 2097152)
end

Uploading parts out of order


bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.add_part("a" * 5242880, :part_number => 1)
end

Aborting an upload after parts have been added


bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.abort
end

Starting an upload and completing it later by ID


upload = bucket.objects.myobject.multipart_upload
upload.add_part("a" * 5242880)
upload.add_part("b" * 2097152)
id = upload.id

# later or in a different process
upload = bucket.objects.myobject.multipart_uploads[id]
upload.complete(:remote_parts)

Parameters:

  • options (Hash) (defaults to: {})

    Options for the upload.

Options Hash (options):

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :reduced_redundancy (Boolean) — default: false

    If true, Reduced Redundancy Storage will be enabled for the uploaded object.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Yield Parameters:

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 715

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end
end

- (ObjectUploadCollection) multipart_uploads

Returns an object representing the collection of uploads that are in progress for this object.

Examples:

Abort any in-progress uploads for the object:


object.multipart_uploads.each(&:abort)

Returns:

  • (ObjectUploadCollection)

    Returns an object representing the collection of uploads that are in progress for this object.



# File 'lib/aws/s3/s3_object.rb', line 741

def multipart_uploads
  ObjectUploadCollection.new(self)
end

- (PresignedPost) presigned_post(options = {})

Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.

Returns:

See Also:



# File 'lib/aws/s3/s3_object.rb', line 1281

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end

- (URI::HTTP, URI::HTTPS) public_url(options = {})

Generates a public (not authenticated) URL for the object.

Parameters:

  • options (Hash) (defaults to: {})

    Options for generating the URL.

Options Hash (options):

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1269

def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end

- (Object) read(options = {}, &read_block)

Note:

The :range option cannot be used with client-side encryption.

Note:

All decryption reads incur at least an extra HEAD operation.

Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.

Read an object from S3 in chunks

When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.

# read an object from S3 to a file
File.open('output.txt', 'wb') do |file|
  bucket.objects['key'].read do |chunk|
    file.write(chunk)
  end
end

Reading an object without a block

When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.

bucket.objects['key'].read
#=> 'object-contents-here'

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Reads data from a specific version of this object.

  • :if_unmodified_since (Time)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object has not been modified since the given time.

  • :if_modified_since (Time)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object has not been modified since the given time.

  • :if_match (String)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object ETag matches the provided value.

  • :if_none_match (String)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object ETag matches the provided value.

  • :range (Range)

    A byte range to read data from

  • :encryption_key (OpenSSL::PKey::RSA, String) — default: nil

    If this option is set, the object will be decrypted using envelope encryption. Valid values are OpenSSL asymmetric keys (OpenSSL::PKey::RSA) or strings representing symmetric AES-128/192/256-ECB keys. This value defaults to the value of s3_encryption_key; for more information, see AWS.config.

    Symmetric Keys:

    cipher = OpenSSL::Cipher.new('AES-256-ECB') key = cipher.random_key

    Asymmetric keys can also be generated as so: key = OpenSSL::PKey::RSA.new(KEY_SIZE)

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if the encryption materials are not stored in the object metadata
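The key material for the :encryption_key option can be generated with Ruby's standard OpenSSL library. A minimal, runnable sketch (no S3 request is made; the 2048-bit key size is illustrative):

```ruby
require 'openssl'

# Symmetric key: an AES-256-ECB cipher yields a random 32-byte key string.
cipher = OpenSSL::Cipher.new('AES-256-ECB')
symmetric_key = cipher.random_key

# Asymmetric key: an RSA keypair.
rsa_key = OpenSSL::PKey::RSA.new(2048)

puts symmetric_key.bytesize  # 32 (bytes, for AES-256)
puts rsa_key.private?        # true
```

Either key could then be passed as :encryption_key; note that only the same key used when writing the object can decrypt it.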



# File 'lib/aws/s3/s3_object.rb', line 1075

def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    resp_data = get_object(options, &read_block)
    block_given? ? resp_data : resp_data[:data]
  end

end
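The :range option accepts a Ruby Range of byte offsets. As a pure-Ruby illustration (no request is made here, and the helper name is hypothetical, not part of the SDK), splitting an object of known size into fixed-size byte ranges suitable for sequential ranged reads:

```ruby
# Build inclusive byte ranges covering object_size bytes in part_size chunks.
# Each resulting Range is a valid value for the :range option to #read.
def byte_ranges(object_size, part_size)
  (0...object_size).step(part_size).map do |start|
    start..([start + part_size, object_size].min - 1)
  end
end

ranges = byte_ranges(1000, 400)
puts ranges.inspect  # [0..399, 400..799, 800..999]
```

Each range could then be fetched with bucket.objects['key'].read(:range => range).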

- (true, false) reduced_redundancy=(value)

Note:

Changing the storage class of an object incurs a COPY operation.

Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).

Parameters:

  • value (true, false)

    If this is true, the object will be copied in place and stored with reduced redundancy at a lower cost. Otherwise, the object will be copied and stored with the standard storage class.

Returns:

  • (true, false)

    The value parameter.



# File 'lib/aws/s3/s3_object.rb', line 1297

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end

- (Boolean) restore(options = {})

Restores a temporary copy of an archived object from the Glacier storage tier. After the specified days, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.

Restoring an object does not occur immediately. Use #restore_in_progress? to check the status of the operation.

Parameters:

  • options (Hash) (defaults to: {})

    a customizable set of options

Options Hash (options):

  • :days (Integer) — default: 1

    The number of days to keep the restored copy available before Amazon S3 deletes it.

Returns:

  • (Boolean)

    true if a restore can be initiated.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 419

def restore options = {}
  options[:days] ||= 1

  client.restore_object(
    :bucket_name => bucket.name,
    :key => key, :days => options[:days])

  true
end

- (DateTime?) restore_expiration_date

Returns:

  • (DateTime)

    the time when the temporarily restored object will be removed from S3. Note that the original object will remain available in Glacier.

  • (nil)

    if the object was not restored from an archived copy

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 365

def restore_expiration_date
  head[:restore_expiration_date]
end

- (Boolean) restore_in_progress?

Returns whether a #restore operation is currently in progress for this object.

Returns:

  • (Boolean)

    whether a #restore operation is currently in progress for this object.

See Also:

  • #restore

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 355

def restore_in_progress?
  head[:restore_in_progress]
end

- (Boolean) restored_object?

Returns whether the object is a temporary copy of an archived object in the Glacier storage class.

Returns:

  • (Boolean)

    whether the object is a temporary copy of an archived object in the Glacier storage class.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 372

def restored_object?
  !!head[:restore_expiration_date]
end

- (Symbol?) server_side_encryption

Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.

Returns:

  • (Symbol, nil)

    Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.



# File 'lib/aws/s3/s3_object.rb', line 341

def server_side_encryption
  head[:server_side_encryption]
end

- (true, false) server_side_encryption?

Returns true if the object was stored using server side encryption.

Returns:

  • (true, false)

    Returns true if the object was stored using server side encryption.



# File 'lib/aws/s3/s3_object.rb', line 347

def server_side_encryption?
  !server_side_encryption.nil?
end

- (URI::HTTP, URI::HTTPS) url_for(method, options = {})

Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.

Examples:

Generate a url to read an object


bucket.objects.myobject.url_for(:read)

Generate a url to delete an object


bucket.objects.myobject.url_for(:delete)

Override response headers for reading an object


object = bucket.objects.myobject
url = object.url_for(:read,
                     :response_content_type => "application/json")

Generate a url that expires in 10 minutes


bucket.objects.myobject.url_for(:read, :expires => 10*60)

Parameters:

  • method (Symbol, String)

    The HTTP verb or object method for which the returned URL will be valid. Valid values:

    • :get or :read
    • :put or :write
    • :delete
  • options (Hash) (defaults to: {})

    Additional options for generating the URL.

Options Hash (options):

  • :expires (Object)

    Sets the expiration time of the URL; after this time S3 will return an error if the URL is used. This can be an integer (to specify the number of seconds after the current time), a string (which is parsed as a date using Time#parse), a Time, or a DateTime object. This option defaults to one hour after the current time.

  • :secure (Boolean) — default: true

    Whether to generate a secure (HTTPS) URL or a plain HTTP URL.

  • :content_type (String)

    Object content type for HTTP PUT. When provided, it must also be sent in the request headers as a 'content-type' field.

  • :content_md5 (String)

    Object MD5 hash for HTTP PUT. When provided, it must also be sent in the request headers as a 'content-md5' field.

  • :endpoint (String)

    Sets the hostname of the endpoint.

  • :port (Integer)

    Sets the port of the endpoint (overrides config.s3_port).

  • :force_path_style (Boolean) — default: false

    Indicates whether the generated URL should place the bucket name in the path (true) or as a subdomain (false).

  • :response_content_type (String)

    Sets the Content-Type header of the response when performing an HTTP GET on the returned URL.

  • :response_content_language (String)

    Sets the Content-Language header of the response when performing an HTTP GET on the returned URL.

  • :response_expires (String)

    Sets the Expires header of the response when performing an HTTP GET on the returned URL.

  • :response_cache_control (String)

    Sets the Cache-Control header of the response when performing an HTTP GET on the returned URL.

  • :response_content_disposition (String)

    Sets the Content-Disposition header of the response when performing an HTTP GET on the returned URL.

  • :acl (String)

    The value to use for the x-amz-acl.

  • :response_content_encoding (String)

    Sets the Content-Encoding header of the response when performing an HTTP GET on the returned URL.

  • :signature_version (:v3, :v4) — default: :v3

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1243

def url_for(method, options = {})

  options = options.dup
  options[:expires] = expiration_timestamp(options[:expires])
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  options[:signature_version] ||= config.s3_signature_version

  case options[:signature_version]
  when :v3 then presign_v3(method, options)
  when :v4 then presign_v4(method, options)
  else
    msg = "invalid signature version, expected :v3 or :v4, got "
    msg << options[:signature_version].inspect
    raise ArgumentError, msg
  end
end
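The :expires option accepts several forms (an integer number of seconds, a parseable string, a Time, or a DateTime). A pure-Ruby sketch of normalizing those forms into a Unix timestamp, mirroring the behavior documented above — the helper here is an illustration, not the SDK's internal expiration_timestamp:

```ruby
require 'time'
require 'date'

# Normalize the supported :expires forms into an integer Unix timestamp.
# Defaults to one hour from now, matching the documented default.
def expires_to_timestamp(expires, now = Time.now)
  case expires
  when Integer  then now.to_i + expires
  when String   then Time.parse(expires).to_i
  when DateTime then expires.to_time.to_i
  when Time     then expires.to_i
  when nil      then now.to_i + 60 * 60
  else raise ArgumentError, "unsupported :expires value"
  end
end

now = Time.at(1_000_000)
puts expires_to_timestamp(10 * 60, now)  # 1000600
puts expires_to_timestamp(nil, now)      # 1003600
```

After the resulting timestamp passes, S3 returns an error for requests made with the URL.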

- (ObjectVersionCollection) versions

Returns a collection representing all the object versions for this object.

Examples:


bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest

Returns:

  • (ObjectVersionCollection)

    a collection representing all the object versions for this object.

# File 'lib/aws/s3/s3_object.rb', line 447

def versions
  ObjectVersionCollection.new(self)
end

- (S3Object, ObjectVersion) write(data, options = {})

Uploads data to the object in S3.

obj = s3.buckets['bucket-name'].objects['key']

# strings
obj.write("HELLO")

# files (by path)
obj.write(Pathname.new('path/to/file.txt'))

# file objects
obj.write(File.open('path/to/file.txt', 'rb'))

# IO objects (must respond to #read and #eof?)
obj.write(io)

Multipart Uploads vs Single Uploads

This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.

# always send the file in a single request
obj.write(file, :single_request => true)

# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)

Parameters:

  • data (String, Pathname, File, IO)

    The data to upload. This may be:

    • a String
    • a Pathname
    • a File
    • an IO
    • any object that responds to #read and #eof?

  • options (Hash) (defaults to: {})

    Additional upload options.

Options Hash (options):

  • :content_length (Integer)

    If provided, this option must match the total number of bytes written to S3. This option is required when it is not possible to automatically determine the size of data.

  • :estimated_content_length (Integer)

    When uploading data of unknown content length, you may specify this option to hint what mode of upload should take place. When :estimated_content_length exceeds the :multipart_threshold, then the data will be uploaded in parts, otherwise it will be read into memory and uploaded via Client#put_object.

  • :single_request (Boolean) — default: false

    When true, this method will always upload the data in a single request (via Client#put_object). When false, this method will choose between Client#put_object and #multipart_upload.

  • :multipart_threshold (Integer) — default: 16777216

    Specifies the maximum size (in bytes) of a single-request upload. If the data exceeds this threshold, it will be uploaded via #multipart_upload. The default threshold is 16MB and can be configured via AWS.config(:s3_multipart_threshold => ...).

  • :multipart_min_part_size (Integer) — default: 5242880

    The minimum size of a part to upload to S3 when using #multipart_upload. S3 will reject parts smaller than 5MB (except the final part). The default is 5MB and can be configured via AWS.config(:s3_multipart_min_part_size => ...).

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol, String) — default: :private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :grant_read (String)
  • :grant_write (String)
  • :grant_read_acp (String)
  • :grant_write_acp (String)
  • :grant_full_control (String)
  • :reduced_redundancy (Boolean) — default: false

    When true, this object will be stored with Reduced Redundancy Storage.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_md5 (String)

    The base64 encoded content md5 of the data.

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :encryption_key (OpenSSL::PKey::RSA, String)

    Set this to encrypt the data client-side using envelope encryption. The key must be an OpenSSL asymmetric key or a symmetric key string (16, 24 or 32 bytes in length).

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if you prefer to store the client-side encryption materials in a separate object in S3 instead of in the object metadata.

  • :expires (String)

    The date and time at which the object is no longer cacheable.
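As one concrete example of the options above, the :content_md5 value is the base64 encoding of the raw (binary) MD5 digest of the data, which can be computed with Ruby's standard library:

```ruby
require 'digest'
require 'base64'

data = "HELLO"

# Base64-encode the 16-byte binary MD5 digest (not the hex string).
content_md5 = Base64.strict_encode64(Digest::MD5.digest(data))
puts content_md5
```

The same value would be passed as obj.write(data, :content_md5 => content_md5), letting S3 reject the upload if the data is corrupted in transit.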

Returns:

  • (S3Object, ObjectVersion)

    Returns the uploaded object, or an ObjectVersion if the bucket has versioning enabled.

# File 'lib/aws/s3/s3_object.rb', line 596

def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end
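The single-request vs. multipart decision described above can be sketched in pure Ruby. The helper below is hypothetical and only mirrors the documented rules (the SDK's actual use_multipart? check is internal):

```ruby
DEFAULT_MULTIPART_THRESHOLD = 16 * 1024 * 1024  # 16MB, the documented default

# Decide whether a write of the given (actual or estimated) size should use
# multipart upload, honoring the :single_request override and the
# :multipart_threshold option.
def multipart_write?(size, options = {})
  return false if options[:single_request]
  threshold = options[:multipart_threshold] || DEFAULT_MULTIPART_THRESHOLD
  !size.nil? && size > threshold
end

puts multipart_write?(100 * 1024 * 1024)                          # true
puts multipart_write?(1024)                                       # false
puts multipart_write?(100 * 1024 * 1024, :single_request => true) # false
```

When no size is known, the documented fallback is :estimated_content_length, compared against the same threshold.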