You are viewing documentation for version 1 of the AWS SDK for Ruby. Version 2 documentation can be found here.

Class: AWS::S3::S3Object

Inherits:
Object
Defined in:
lib/aws/s3/s3_object.rb

Overview

Represents an object in S3. Objects live in a bucket and have unique keys.

Getting Objects

You can get an object by its key.

s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made

You can also get objects by enumerating the objects in a bucket.

bucket.objects.each do |obj|
  puts obj.key
end

See ObjectCollection for more information on finding objects.

Creating Objects

You create an object by writing to it. The following two expressions are equivalent.

obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')

Writing Objects

To upload data to S3, you simply need to call #write on an object.

obj.write('Hello World!')
obj.read
#=> 'Hello World!'

Uploading Files

You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:

# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))

# also works this way
obj.write(:file => path_to_file)

# Also accepts an open file object
file = File.open(path_to_file, 'rb')
obj.write(file)

All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.

Streaming Uploads

When you call #write with an IO-like object, it will be streamed to S3 in chunks.

While it is possible to determine the size of many IO objects, you may have to specify the :content_length of your IO object. If the exact size cannot be known, you may provide an :estimated_content_length instead. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
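
For example, a sketch of passing an IO-like object with a size hint (`obj` is assumed to be an S3Object as above):

```ruby
require 'stringio'

io = StringIO.new("a" * 1024)

# :content_length tells the SDK exactly how many bytes to expect,
# letting it choose between a single request and a multipart upload
obj.write(io, :content_length => 1024)

# if only an approximate size is known, hint it instead
obj.write(io, :estimated_content_length => 1024)
```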

You may also stream uploads to S3 using a block:

obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end

Reading Objects

You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.

obj.write('abc')
puts obj.read
#=> abc

Streaming Downloads

If you want to stream an object from S3, you can pass a block to #read.

File.open('output', 'wb') do |file|
  large_object.read do |chunk|
    file.write(chunk)
  end
end

Encryption

Amazon S3 can encrypt objects for you server-side. You can also use client-side encryption.

Server Side Encryption

You can request server side encryption when writing an object.

obj.write('data', :server_side_encryption => :aes256)

You can also make this the default behavior.

AWS.config(:s3_server_side_encryption => :aes256)

s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted

Client Side Encryption

Client side encryption utilizes envelope encryption, so that your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.

Symmetric Key Encryption

An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits. Start by generating a new key or reading a previously generated key.

# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key

# read an existing key from disk
my_key = File.read("my_key.der")
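
If you generate a fresh key, persist the raw key bytes somewhere safe; the same bytes are required to decrypt the data later. A minimal sketch (the "my_key.der" filename is just an example):

```ruby
require 'openssl'

# generate a new random 256-bit AES key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key

# persist the raw key bytes
File.open("my_key.der", "wb") { |f| f.write(my_key) }

# on a later run, read the same key back
my_key = File.binread("my_key.der")
```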

Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.

obj = bucket.objects["my-text-object"]

# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)

# try to read the object without decrypting, oops
obj.read
#=> '.....'

Lastly, you can download and decrypt by providing the same key.

obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Asymmetric Key Pair

An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.

my_key = OpenSSL::PKey::RSA.new(1024)

Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.

obj = bucket.objects["my-text-object"]

# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)

# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Configuring storage locations

By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3, whose key is the object's key with '.instruction' appended.

# new object, does not exist yet
obj = bucket.objects["my-text-object"]

# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false

# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
  :encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

bucket.objects['my-text-object.instruction'].exists?
#=> true

If you store the encryption materials in an instruction file, you must tell #read this or it will fail to find your encryption materials.

# reading an encrypted file whos materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

Configuring default behaviors

You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.

# all objects uploaded/downloaded with this s3 object will be
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")

# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")

You can also configure the default storage location for the encryption materials.

AWS.config(:s3_encryption_materials_location => :instruction_file)

Constant Summary

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(bucket, key, opts = {}) ⇒ S3Object

Returns a new instance of S3Object

Parameters:

  • bucket (Bucket)

    The bucket this object belongs to.

  • key (String)

    The object's key.



# File 'lib/aws/s3/s3_object.rb', line 244

def initialize(bucket, key, opts = {})
  @content_length = opts.delete(:content_length)
  @etag = opts.delete(:etag)
  @last_modified = opts.delete(:last_modified)
  super
  @key = key
  @bucket = bucket
end

Instance Attribute Details

#bucket ⇒ Bucket (readonly)

Returns The bucket this object is in.

Returns:

  • (Bucket)

    The bucket this object is in.



# File 'lib/aws/s3/s3_object.rb', line 257

def bucket
  @bucket
end

#key ⇒ String (readonly)

Returns The object's unique key

Returns:

  • (String)

    The object's unique key



# File 'lib/aws/s3/s3_object.rb', line 254

def key
  @key
end

Instance Method Details

#==(other) ⇒ Boolean Also known as: eql?

Returns true if the other object belongs to the same bucket and has the same key.

Returns:

  • (Boolean)

    Returns true if the other object belongs to the same bucket and has the same key.



# File 'lib/aws/s3/s3_object.rb', line 266

def == other
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end

#acl ⇒ AccessControlList

Returns the object's access control list. This will be an instance of AccessControlList, plus an additional change method:

object.acl.change do |acl|
  # remove any grants to someone other than the bucket owner
  owner_id = object.bucket.owner.id
  acl.grants.reject! do |g|
    g.grantee.canonical_user_id != owner_id
  end
end

Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it's possible that you may overwrite a concurrent update to the ACL using this method.

Returns:



# File 'lib/aws/s3/s3_object.rb', line 1128

def acl

  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)

  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl

end

#acl=(acl) ⇒ nil

Sets the object's ACL (access control list). You can provide an ACL in a number of different formats.

Parameters:

  • acl (Symbol, String, Hash, AccessControlList)

    Accepts an ACL description in one of the following formats:

    ==== Canned ACL

    S3 supports a number of canned ACLs for buckets and objects. These include:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read (object-only)
    • :bucket_owner_full_control (object-only)
    • :log_delivery_write (bucket-only)

    Here is an example of providing a canned ACL to a bucket:

    s3.buckets['bucket-name'].acl = :public_read
    

    ==== ACL Grant Hash

    You can provide a hash of grants. The hash is composed of grants (keys) and grantees (values). Accepted grant keys are:

    • :grant_read
    • :grant_write
    • :grant_read_acp
    • :grant_write_acp
    • :grant_full_control

    Grantee strings (values) should be formatted like some of the following examples:

    id="8a6925ce4adf588a4532142d3f74dd8c71fa124b1ddee97f21c32aa379004fef"
    uri="http://acs.amazonaws.com/groups/global/AllUsers"
    emailAddress="xyz@amazon.com"
    

    You can provide a comma delimited list of multiple grantees in a single string. Please note the use of quotes inside the grantee string. Here is a simple example:

    { :grant_full_control => "emailAddress=\"foo@bar.com\", id=\"abc..mno\"" }
    

    See the S3 API documentation for more information on formatting grants.

    ==== AccessControlList Object

    You can build an ACL using the AccessControlList class and pass this object.

    acl = AWS::S3::AccessControlList.new
    acl.grant(:full_control).to(:canonical_user_id => "8a6...fef")
    acl #=> this is acceptable
    

    ==== ACL XML String

    Lastly you can build your own ACL XML document and pass it as a string.

    <<-XML
      <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Owner>
          <ID>8a6...fef</ID>
          <DisplayName>owner-display-name</DisplayName>
        </Owner>
        <AccessControlList>
          <Grant>
            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
              <ID>8a6...fef</ID>
              <DisplayName>owner-display-name</DisplayName>
            </Grantee>
            <Permission>FULL_CONTROL</Permission>
          </Grant>
        </AccessControlList>
      </AccessControlPolicy>
    XML
    

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 1143

def acl=(acl)

  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key

  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil

end
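
Applying a canned ACL to an object works the same way as the bucket example above:

```ruby
obj = bucket.objects['key']
obj.acl = :public_read
```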

#content_length ⇒ Integer

Returns Size of the object in bytes.

Returns:

  • (Integer)

    Size of the object in bytes.



# File 'lib/aws/s3/s3_object.rb', line 319

def content_length
  @content_length = config.s3_cache_object_attributes && @content_length || head[:content_length]
end

#content_type ⇒ String

Note:

S3 does not compute content-type. It reports the content-type as it was reported during the file upload.

Returns the content type as reported by S3, defaults to an empty string when not provided during upload.

Returns:

  • (String)

    Returns the content type as reported by S3, defaults to an empty string when not provided during upload.



# File 'lib/aws/s3/s3_object.rb', line 327

def content_type
  head[:content_type]
end

#copy_from(source, options = {}) ⇒ nil

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don't specify any of these options when copying, the object will have the default values as described below.

Copies data from one S3 object to another.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

Parameters:

  • source (Mixed)
  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The slash-prefixed name of the bucket the source object can be found in. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the source object can be found in. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied. If you set this value, you must set ALL metadata values for the object as we do not preserve existing values.

  • :content_type (String)

    The content type of the copied object. Defaults to the source object's content type.

  • :content_disposition (String)

    The presentational information for the object. Defaults to the source object's content disposition.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :version_id (String) — default: nil

    Causes the copy to read a specific version of the source object.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    Set to true when the object being copied was client-side encrypted. This is important so the encryption metadata will be copied.

  • :use_multipart_copy (Boolean) — default: false

    Set this to true if you need to copy an object that is larger than 5GB.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 868

def copy_from source, options = {}

  options = options.dup

  options[:copy_source] =
    case source
    when S3Object
      "/#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      options[:version_id] = source.version_id
      "/#{source.object.bucket.name}/#{source.object.key}"
    else
      if options[:bucket]
        "/#{options.delete(:bucket).name}/#{source}"
      elsif options[:bucket_name]
        # oops, this should be slash-prefixed, but unable to change
        # this without breaking users that already work-around this
        # bug by sending :bucket_name => "/bucket-name"
        "#{options.delete(:bucket_name)}/#{source}"
      else
        "/#{self.bucket.name}/#{source}"
      end
    end

  if [:metadata, :content_disposition, :content_type, :cache_control,
    ].any? {|opt| options.key?(opt) }
  then
    options[:metadata_directive] = 'REPLACE'
  else
    options[:metadata_directive] ||= 'COPY'
  end

  # copies client-side encryption materials (from the metadata or
  # instruction file)
  if options.delete(:client_side_encrypted)
    copy_cse_materials(source, options)
  end

  add_sse_options(options)

  options[:storage_class] = options.delete(:reduced_redundancy) ?
    'REDUCED_REDUNDANCY' : 'STANDARD'

  options[:bucket_name] = bucket.name
  options[:key] = key

  if use_multipart_copy?(options)
    multipart_copy(options)
  else
    resp = client.copy_object(options)
  end

  nil

end
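
For example, a sketch of common #copy_from calls (the keys and bucket names are hypothetical):

```ruby
obj = bucket.objects['reports/2012.csv']

# copy from another key in the same bucket, replacing the metadata
obj.copy_from('reports/2011.csv',
  :metadata => { 'category' => 'reports' })

# copy from an object in a different bucket
obj.copy_from('source-key', :bucket => s3.buckets['source-bucket'])
```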

#copy_to(target, options = {}) ⇒ S3Object

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don't specify any of these options when copying, the new object will have the default values as described below.

Copies data from the current object to another object in S3.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

Parameters:

  • target (S3Object, String)

    An S3Object, or the string key of an object to copy to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The slash-prefixed name of the bucket the object should be copied into. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns the copy (target) object.



# File 'lib/aws/s3/s3_object.rb', line 985

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name]
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end
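
For example, a sketch of copying this object elsewhere (the keys and bucket names are hypothetical):

```ruby
obj = bucket.objects['photo.jpg']

# copy to a new key in the same bucket
copy = obj.copy_to('photos/photo.jpg')

# copy into a different bucket, granting public read access
obj.copy_to('photo.jpg',
  :bucket => s3.buckets['backup-bucket'],
  :acl => :public_read)
```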

#delete(options = {}) ⇒ nil

Deletes the object from its S3 bucket.

Parameters:

  • options (Hash) (defaults to: {})
Options Hash (options):

  • :version_id (String)

    Deletes the specified version of the object instead of the latest version.

  • :delete_instruction_file (Boolean)

    When true, the '.instruction' object containing client-side encryption materials is also deleted.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 396

def delete options = {}
  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil

end
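
For example (the :delete_instruction_file option only matters for client-side encrypted objects whose materials were stored in an instruction file):

```ruby
bucket.objects['old-key'].delete

# also remove the companion '.instruction' object
bucket.objects['secret-key'].delete(:delete_instruction_file => true)
```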

#etag ⇒ String

Returns the object's ETag.

Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, the ETag is instead derived from the MD5s of the individual parts.

Returns:

  • (String)

    Returns the object's ETag



# File 'lib/aws/s3/s3_object.rb', line 307

def etag
  @etag = config.s3_cache_object_attributes && @etag || head[:etag]
end

#exists? ⇒ Boolean

Returns true if the object exists in S3.

Returns:

  • (Boolean)

    Returns true if the object exists in S3.



# File 'lib/aws/s3/s3_object.rb', line 272

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end

#expiration_date ⇒ DateTime?

Returns:

  • (DateTime, nil)


# File 'lib/aws/s3/s3_object.rb', line 332

def expiration_date
  head[:expiration_date]
end

#expiration_rule_id ⇒ String?

Returns:

  • (String, nil)


# File 'lib/aws/s3/s3_object.rb', line 337

def expiration_rule_id
  head[:expiration_rule_id]
end

#head(options = {}) ⇒ Object

Performs a HEAD request against this object and returns an object with useful information about the object, including:

  • metadata (hash of user-supplied key-value pairs)
  • content_length (integer, number of bytes)
  • content_type (as sent to S3 when uploading the object)
  • etag (typically the object's MD5)
  • server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Which version of this object to make a HEAD request against.

Returns:

  • A head object response with metadata, content_length, content_type, etag and server_side_encryption.



# File 'lib/aws/s3/s3_object.rb', line 295

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end
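
The returned response can be indexed by attribute, matching the keys used elsewhere in this class (`version_id` below stands in for a previously obtained version id):

```ruby
resp = obj.head
resp[:content_length]   # size in bytes
resp[:content_type]
resp[:etag]

# inspect a specific version instead of the latest
obj.head(:version_id => version_id)
```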

#last_modified ⇒ Time

Returns the object's last modified time.

Returns:

  • (Time)

    Returns the object's last modified time.



# File 'lib/aws/s3/s3_object.rb', line 314

def last_modified
  @last_modified = config.s3_cache_object_attributes && @last_modified || head[:last_modified]
end

#metadata(options = {}) ⇒ ObjectMetadata

Returns an instance of ObjectMetadata representing the metadata for this object.

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Returns the metadata for a specific version of the object.

Returns:

  • (ObjectMetadata)

    Returns an instance of ObjectMetadata representing the metadata for this object.



# File 'lib/aws/s3/s3_object.rb', line 435

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end

#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to

Moves an object to a new key.

This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.

bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']

# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')

old_obj.key     #=> 'old-key'
old_obj.exists? #=> false

new_obj.key     #=> 'new-key'
new_obj.exists? #=> true

If you need to move an object to a different bucket, pass :bucket or :bucket_name.

obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')

If the copy succeeds but the subsequent delete fails, an error will be raised.

Parameters:

  • target (String)

    The key to move this object to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The slash-prefixed name of the bucket the object should be copied into. Defaults to the current object's bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object's bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns a new object with the new key.



# File 'lib/aws/s3/s3_object.rb', line 784

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end

#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion

Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.

Note: After you initiate a multipart upload and upload one or more parts, you must either complete or abort the upload in order to stop getting charged for storage of the uploaded parts. Amazon S3 frees the parts storage and stops charging you only after the upload is completed or aborted.

Examples:

Uploading an object in two parts


bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("a" * 5242880)
  upload.add_part("b" * 2097152)
end

Uploading parts out of order


bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.add_part("a" * 5242880, :part_number => 1)
end

Aborting an upload after parts have been added


bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.abort
end

Starting an upload and completing it later by ID


upload = bucket.objects['myobject'].multipart_upload
upload.add_part("a" * 5242880)
upload.add_part("b" * 2097152)
id = upload.id

# later or in a different process
upload = bucket.objects['myobject'].multipart_uploads[id]
upload.complete(:remote_parts)

Parameters:

  • options (Hash) (defaults to: {})

    Options for the upload.

Options Hash (options):

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :reduced_redundancy (Boolean) — default: false

    If true, Reduced Redundancy Storage will be enabled for the uploaded object.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Yield Parameters:

  • upload (MultipartUpload)

    The upload to add parts to; see the examples above.

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 719

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end
end

#multipart_uploads ⇒ ObjectUploadCollection

Returns an object representing the collection of uploads that are in progress for this object.

Examples:

Abort any in-progress uploads for the object:


object.multipart_uploads.each(&:abort)

Returns:

  • (ObjectUploadCollection)

    Returns an object representing the collection of uploads that are in progress for this object.



# File 'lib/aws/s3/s3_object.rb', line 745

def multipart_uploads
  ObjectUploadCollection.new(self)
end

#presigned_post(options = {}) ⇒ PresignedPost

Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.

Returns:

See Also:



# File 'lib/aws/s3/s3_object.rb', line 1289

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end
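
For example, a sketch of reading the generated values for use in an HTML upload form (method names follow PresignedPost):

```ruby
post = obj.presigned_post

post.url      # the URL to POST the form data to
post.fields   # a hash of hidden form fields to include in the POST
```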

#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a public (not authenticated) URL for the object.

Parameters:

  • options (Hash) (defaults to: {})

    Options for generating the URL.

Options Hash (options):

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1277

def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end
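
For example:

```ruby
obj = bucket.objects['public/logo.png']

obj.public_url                     # scheme follows config.use_ssl?
obj.public_url(:secure => false)   # force a plain HTTP URL
```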

#read(options = {}, &read_block) ⇒ Object

Note:

:range option cannot be used with client-side encryption

Note:

All decryption reads incur at least an extra HEAD operation.

Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.

Read an object from S3 in chunks

When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.

# read an object from S3 to a file
File.open('output.txt', 'wb') do |file|
  bucket.objects['key'].read do |chunk|
    file.write(chunk)
  end
end

Reading an object without a block

When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.

bucket.objects['key'].read
#=> 'object-contents-here'

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Reads data from a specific version of this object.

  • :if_unmodified_since (Time)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object has not been modified since the given time.

  • :if_modified_since (Time)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object has not been modified since the given time.

  • :if_match (String)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object ETag matches the provided value.

  • :if_none_match (String)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object ETag matches the provided value.

  • :range (Range)

    A byte range to read data from

  • :encryption_key (OpenSSL::PKey::RSA, String) — default: nil

    If this option is set, the object will be decrypted using envelope encryption. The valid values are OpenSSL asymmetric keys (OpenSSL::PKey::RSA) or strings representing symmetric keys for an AES-128/192/256-ECB cipher. This value defaults to the value in s3_encryption_key; for more information, see AWS.config.

    Symmetric Keys:

    cipher = OpenSSL::Cipher.new('AES-256-ECB')
    key = cipher.random_key

    Asymmetric keys can also be generated like so:

    key = OpenSSL::PKey::RSA.new(KEY_SIZE)

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if the encryption materials are not stored in the object metadata



# File 'lib/aws/s3/s3_object.rb', line 1082

def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    resp_data = get_object(options, &read_block)
    block_given? ? resp_data : resp_data[:data]
  end

end
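As a runnable sketch of the streaming behavior, the stand-in object below yields data in fixed-size chunks the way #read yields chunks off the HTTP response; the 4-byte chunk size and the stub itself are illustrative, not part of the SDK:

```ruby
require 'stringio'

# Stand-in for an S3Object; a real #read streams chunks as they
# arrive off the HTTP response rather than slicing a string.
obj = Object.new
def obj.read(options = {})
  "Hello World!".scan(/.{1,4}/m) { |chunk| yield chunk }
end

buffer = StringIO.new
obj.read { |chunk| buffer << chunk }
buffer.string #=> "Hello World!"
```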

#reduced_redundancy=(value) ⇒ true, false

Note:

Changing the storage class of an object incurs a COPY operation.

Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).

Parameters:

  • value (true, false)

    If this is true, the object will be copied in place and stored with reduced redundancy at a lower cost. Otherwise, the object will be copied and stored with the standard storage class.

Returns:

  • (true, false)

    The value parameter.



# File 'lib/aws/s3/s3_object.rb', line 1305

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end

#restore(options = {}) ⇒ Boolean

Restores a temporary copy of an archived object from the Glacier storage tier. After the specified number of days, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.

Restoring an object does not occur immediately. Use #restore_in_progress? to check the status of the operation.

Parameters:

  • options (Hash) (defaults to: {})

    a customizable set of options

Options Hash (options):

  • :days (Integer) — default: 1

    The number of days the temporarily restored copy remains available before Amazon S3 deletes it.

Returns:

  • (Boolean)

    true if a restore can be initiated.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 422

def restore options = {}
  options[:days] ||= 1
  client.restore_object(options.merge({
    :bucket_name => bucket.name,
    :key => key,
  }))
  true
end
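A hypothetical polling helper built on #restore and #restore_in_progress? might look like the following; the helper name and option keys are illustrative, and the stub class stands in for a real S3Object so the sketch runs without AWS:

```ruby
# Hypothetical helper; blocks until the temporary copy is available.
def wait_for_restore(obj, options = {})
  obj.restore(:days => options[:days] || 1)
  sleep(options[:interval] || 60) while obj.restore_in_progress?
  obj.restored_object?
end

# Stand-in for an archived S3Object; pretends the restore
# finishes after two status polls.
class FakeArchivedObject
  def initialize; @polls = 2; end
  def restore(options = {}); true; end
  def restore_in_progress?; (@polls -= 1) >= 0; end
  def restored_object?; true; end
end

wait_for_restore(FakeArchivedObject.new, :interval => 0) #=> true
```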

#restore_expiration_dateDateTime?

Returns:

  • (DateTime)

    the time when the temporarily restored object will be removed from S3. Note that the original object will remain available in Glacier.

  • (nil)

    if the object was not restored from an archived copy

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 368

def restore_expiration_date
  head[:restore_expiration_date]
end

#restore_in_progress?Boolean

Returns whether a #restore operation is currently in progress for this object.

Returns:

  • (Boolean)

    whether a #restore operation is currently in progress for this object.

See Also:

  • #restore

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 358

def restore_in_progress?
  head[:restore_in_progress]
end

#restored_object?Boolean

Returns whether the object is a temporary copy of an archived object in the Glacier storage class.

Returns:

  • (Boolean)

    whether the object is a temporary copy of an archived object in the Glacier storage class.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 375

def restored_object?
  !!head[:restore_expiration_date]
end

#server_side_encryptionSymbol?

Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.

Returns:

  • (Symbol, nil)

    Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.



# File 'lib/aws/s3/s3_object.rb', line 344

def server_side_encryption
  head[:server_side_encryption]
end

#server_side_encryption?true, false

Returns true if the object was stored using server side encryption.

Returns:

  • (true, false)

    Returns true if the object was stored using server side encryption.



# File 'lib/aws/s3/s3_object.rb', line 350

def server_side_encryption?
  !server_side_encryption.nil?
end

#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.

Examples:

Generate a url to read an object


bucket.objects.myobject.url_for(:read)

Generate a url to delete an object


bucket.objects.myobject.url_for(:delete)

Override response headers for reading an object


object = bucket.objects.myobject
url = object.url_for(:read,
                     :response_content_type => "application/json")

Generate a url that expires in 10 minutes


bucket.objects.myobject.url_for(:read, :expires => 10*60)

Parameters:

  • method (Symbol, String)

    The HTTP verb or object method for which the returned URL will be valid. Valid values:

    • :get or :read
    • :put or :write
    • :delete
    • :head
  • options (Hash) (defaults to: {})

    Additional options for generating the URL.

Options Hash (options):

  • :expires (Object)

    Sets the expiration time of the URL; after this time S3 will return an error if the URL is used. This can be an integer (to specify the number of seconds after the current time), a string (which is parsed as a date using Time#parse), a Time, or a DateTime object. This option defaults to one hour after the current time.

  • :secure (Boolean) — default: true

    Whether to generate a secure (HTTPS) URL or a plain HTTP URL.

  • :content_type (String)

    Object content type for HTTP PUT. When provided, it must also be sent in the request headers as a 'content-type' field.

  • :content_md5 (String)

    Object MD5 hash for HTTP PUT. When provided, it must also be sent in the request headers as a 'content-md5' field.

  • :endpoint (String)

    Sets the hostname of the endpoint.

  • :port (Integer)

    Sets the port of the endpoint (overrides config.s3_port).

  • :force_path_style (Boolean) — default: false

    Indicates whether the generated URL should place the bucket name in the path (true) or as a subdomain (false).

  • :response_content_type (String)

    Sets the Content-Type header of the response when performing an HTTP GET on the returned URL.

  • :response_content_language (String)

    Sets the Content-Language header of the response when performing an HTTP GET on the returned URL.

  • :response_expires (String)

    Sets the Expires header of the response when performing an HTTP GET on the returned URL.

  • :response_cache_control (String)

    Sets the Cache-Control header of the response when performing an HTTP GET on the returned URL.

  • :response_content_disposition (String)

    Sets the Content-Disposition header of the response when performing an HTTP GET on the returned URL.

  • :acl (String)

    The value to use for the x-amz-acl header.

  • :response_content_encoding (String)

    Sets the Content-Encoding header of the response when performing an HTTP GET on the returned URL.

  • :signature_version (:v3, :v4) — default: :v3

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1251

def url_for(method, options = {})

  options = options.dup
  options[:expires] = expiration_timestamp(options[:expires])
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  options[:signature_version] ||= config.s3_signature_version

  case options[:signature_version]
  when :v3 then presign_v3(method, options)
  when :v4 then presign_v4(method, options)
  else
    msg = "invalid signature version, expected :v3 or :v4, got "
    msg << options[:signature_version].inspect
    raise ArgumentError, msg
  end
end
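A presigned URL can be consumed by any plain HTTP client. The sketch below parses a made-up presigned URL (the query values are placeholders, not a real signature; a real URL comes from obj.url_for(:read)) and builds the request a client would send:

```ruby
require 'net/http'
require 'uri'

# Placeholder presigned URL; a real one comes from obj.url_for(:read).
url = URI.parse('https://my-bucket.s3.amazonaws.com/key' \
                '?AWSAccessKeyId=AKID&Expires=1500000000&Signature=SIG')

request = Net::HTTP::Get.new(url.request_uri)

# Uncomment to actually perform the GET:
# response = Net::HTTP.start(url.host, url.port, :use_ssl => true) do |http|
#   http.request(request)
# end
```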

#versionsObjectVersionCollection

Returns a collection representing all the object versions for this object.

Examples:


bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest

Returns:

  • (ObjectVersionCollection)

# File 'lib/aws/s3/s3_object.rb', line 449

def versions
  ObjectVersionCollection.new(self)
end

#write(data, options = {}) ⇒ S3Object, ObjectVersion

Uploads data to the object in S3.

obj = s3.buckets['bucket-name'].objects['key']

# strings
obj.write("HELLO")

# files (by path)
obj.write(Pathname.new('path/to/file.txt'))

# file objects
obj.write(File.open('path/to/file.txt', 'rb'))

# IO objects (must respond to #read and #eof?)
obj.write(io)

Multipart Uploads vs Single Uploads

This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.

# always send the file in a single request
obj.write(file, :single_request => true)

# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)

Parameters:

  • data (String, Pathname, File, IO)

    The data to upload. This may be:

    • a String
    • a Pathname
    • a File
    • an IO
    • any object that responds to #read and #eof?

  • options (Hash) (defaults to: {})

    Additional upload options.

Options Hash (options):

  • :content_length (Integer)

    If provided, this option must match the total number of bytes written to S3. This option is required when it is not possible to automatically determine the size of data.

  • :estimated_content_length (Integer)

    When uploading data of unknown content length, you may specify this option to hint what mode of upload should take place. When :estimated_content_length exceeds the :multipart_threshold, then the data will be uploaded in parts, otherwise it will be read into memory and uploaded via Client#put_object.

  • :single_request (Boolean) — default: false

    When true, this method will always upload the data in a single request (via Client#put_object). When false, this method will choose between Client#put_object and #multipart_upload.

  • :multipart_threshold (Integer) — default: 16777216

    Specifies the maximum size (in bytes) of a single-request upload. If the data exceeds this threshold, it will be uploaded via #multipart_upload. The default threshold is 16MB and can be configured via AWS.config(:s3_multipart_threshold => ...).

  • :multipart_min_part_size (Integer) — default: 5242880

    The minimum size of a part to upload to S3 when using #multipart_upload. S3 will reject parts smaller than 5MB (except the final part). The default is 5MB and can be configured via AWS.config(:s3_multipart_min_part_size => ...).

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol, String) — default: :private

    A canned access control policy. Valid values are:

    • :private
    • :public_read
    • :public_read_write
    • :authenticated_read
    • :bucket_owner_read
    • :bucket_owner_full_control
  • :grant_read (String)
  • :grant_write (String)
  • :grant_read_acp (String)
  • :grant_write_acp (String)
  • :grant_full_control (String)
  • :reduced_redundancy (Boolean) — default: false

    When true, this object will be stored with Reduced Redundancy Storage.

  • :cache_control (String)

    Can be used to specify caching behavior. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_md5 (String)

    The base64 encoded content md5 of the data.

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :encryption_key (OpenSSL::PKey::RSA, String)

    Set this to encrypt the data client-side using envelope encryption. The key must be an OpenSSL asymmetric key or a symmetric key string (16, 24 or 32 bytes in length).

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if you prefer to store the client-side encryption materials in a separate object in S3 instead of in the object metadata.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

  • :website_redirect_location (String)

Returns:

  • (S3Object, ObjectVersion)

# File 'lib/aws/s3/s3_object.rb', line 600

def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end
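The choice between a single PUT and a multipart upload can be sketched as follows; the helper name is hypothetical, and the threshold mirrors the documented 16MB default for :multipart_threshold:

```ruby
# Default documented for :multipart_threshold (16MB).
DEFAULT_MULTIPART_THRESHOLD = 16 * 1024 * 1024

# Hypothetical decision helper mirroring the documented behavior of #write.
def upload_mode(content_length, options = {})
  return :put_object if options[:single_request]
  threshold = options[:multipart_threshold] || DEFAULT_MULTIPART_THRESHOLD
  content_length > threshold ? :multipart_upload : :put_object
end

upload_mode(5 * 1024 * 1024)    #=> :put_object
upload_mode(200 * 1024 * 1024)  #=> :multipart_upload
```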