Class: AWS::S3::S3Object
- Inherits: Object
- Defined in: lib/aws/s3/s3_object.rb
Overview
Represents an object in S3. Objects live in a bucket and have unique keys.
Getting Objects
You can get an object by its key.
s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made
You can also get objects by enumerating the objects in a bucket.
bucket.objects.each do |obj|
puts obj.key
end
See ObjectCollection for more information on finding objects.
Creating Objects
You create an object by writing to it. The following two expressions are equivalent.
obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')
Writing Objects
To upload data to S3, call #write on an object.
obj.write('Hello World!')
obj.read
#=> 'Hello World!'
Uploading Files
You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:
# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))
# also works this way
obj.write(:file => path_to_file)
# Also accepts an open file object
file = File.open(path_to_file, 'rb')
obj.write(file)
All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.
Streaming Uploads
When you call #write with an IO-like object, it will be streamed to S3 in chunks.
While the size of many IO objects can be determined automatically, you may have to specify the :content_length of your IO object yourself. If the exact size cannot be known, you may provide an :estimated_content_length instead. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
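For example, when uploading from an IO-like source whose size Ruby cannot determine, you might pass one of these options (the io source and byte counts below are illustrative assumptions):
# exact size known ahead of time
obj.write(io, :content_length => 1024 * 1024)
# only a rough size is known; the SDK uses it to choose between a
# single PUT and a multipart upload
obj.write(io, :estimated_content_length => 10 * 1024 * 1024)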
You may also stream uploads to S3 using a block:
obj.write do |buffer, bytes|
# writing fewer than the requested number of bytes to the buffer
# will cause write to stop yielding to the block
end
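As a concrete sketch (assuming a local file whose size is known), the block form can be combined with :content_length like this:
file = File.open(path_to_file, 'rb')
obj.write(:content_length => file.size) do |buffer, bytes|
  # copy up to the requested number of bytes into the buffer
  buffer.write(file.read(bytes))
end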
Reading Objects
You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.
obj.write('abc')
puts obj.read
#=> abc
Streaming Downloads
If you want to stream an object from S3, you can pass a block to #read.
File.open('output', 'wb') do |file|
large_object.read do |chunk|
file.write(chunk)
end
end
Encryption
Amazon S3 can encrypt objects for you server-side. You can also use client-side encryption.
Server Side Encryption
You can enable server side encryption when writing an object.
obj.write('data', :server_side_encryption => :aes256)
You can also make this the default behavior.
AWS.config(:s3_server_side_encryption => :aes256)
s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted
Client Side Encryption
Client side encryption uses envelope encryption, so that your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.
Symmetric Key Encryption
An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits in size. Start by generating a new key or reading a previously generated key.
# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key
# read an existing key from disk
my_key = File.read("my_key.der")
Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.
obj = bucket.objects["my-text-object"]
# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)
# try to read the object without decrypting, oops
obj.read
#=> '.....'
Lastly, you can download and decrypt by providing the same key.
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
Asymmetric Key Pair
An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.
my_key = OpenSSL::PKey::RSA.new(1024)
Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.
obj = bucket.objects["my-text-object"]
# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)
# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
Configuring storage locations
By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3. That object's key will be this object's key with '.instruction' appended.
# new object, does not exist yet
obj = bucket.objects["my-text-object"]
# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false
# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
bucket.objects['my-text-object.instruction'].exists?
#=> true
If you store the encryption materials in an instruction file, you must tell #read, or it will fail to find your encryption materials.
# reading an encrypted file whose materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
Configuring default behaviors
You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.
# all objects uploaded/downloaded with this s3 object will be
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")
# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")
You can also configure the default storage location for the encryption materials.
AWS.config(:s3_encryption_materials_location => :instruction_file)
Instance Attribute Summary
- #bucket ⇒ Bucket (readonly): The bucket this object is in.
- #key ⇒ String (readonly): The object's unique key.
Instance Method Summary
- #==(other) ⇒ Boolean (also: #eql?): Returns true if the other object belongs to the same bucket and has the same key.
- #acl ⇒ AccessControlList: Returns the object's access control list.
- #acl=(acl) ⇒ nil: Sets the object's ACL (access control list).
- #content_length ⇒ Integer: Size of the object in bytes.
- #content_type ⇒ String: Returns the content type as reported by S3; defaults to an empty string when not provided during upload.
- #copy_from(source, options = {}) ⇒ nil: Copies data from one S3 object to another.
- #copy_to(target, options = {}) ⇒ S3Object: Copies data from the current object to another object in S3.
- #delete(options = {}) ⇒ nil: Deletes the object from its S3 bucket.
- #etag ⇒ String: Returns the object's ETag.
- #exists? ⇒ Boolean: Returns true if the object exists in S3.
- #expiration_date ⇒ DateTime?
- #expiration_rule_id ⇒ String?
- #head(options = {}) ⇒ Object: Performs a HEAD request against this object and returns an object with useful information about the object.
- #initialize(bucket, key, opts = {}) ⇒ S3Object (constructor): A new instance of S3Object.
- #last_modified ⇒ Time: Returns the object's last modified time.
- #metadata(options = {}) ⇒ ObjectMetadata: Returns an instance of ObjectMetadata representing the metadata for this object.
- #move_to(target, options = {}) ⇒ S3Object (also: #rename_to): Moves an object to a new key.
- #multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion: Performs a multipart upload.
- #multipart_uploads ⇒ ObjectUploadCollection: Returns an object representing the collection of uploads that are in progress for this object.
- #presigned_post(options = {}) ⇒ PresignedPost: Generates fields for a presigned POST to this object.
- #public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS: Generates a public (not authenticated) URL for the object.
- #read(options = {}, &read_block) ⇒ Object: Fetches the object data from S3.
- #reduced_redundancy=(value) ⇒ true, false: Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
- #restore(options = {}) ⇒ Boolean: Restores a temporary copy of an archived object from the Glacier storage tier.
- #restore_expiration_date ⇒ DateTime?
- #restore_in_progress? ⇒ Boolean: Whether a #restore operation is currently being performed on the object.
- #restored_object? ⇒ Boolean: Whether the object is a temporary copy of an archived object in the Glacier storage class.
- #server_side_encryption ⇒ Symbol?: Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
- #server_side_encryption? ⇒ true, false: Returns true if the object was stored using server side encryption.
- #url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS: Generates a presigned URL for an operation on this object.
- #versions ⇒ ObjectVersionCollection: Returns a collection representing all the object versions for this object.
- #write(data, options = {}) ⇒ S3Object, ObjectVersion: Uploads data to the object in S3.
Constructor Details
#initialize(bucket, key, opts = {}) ⇒ S3Object
Returns a new instance of S3Object
# File 'lib/aws/s3/s3_object.rb', line 244

def initialize(bucket, key, opts = {})
  @content_length = opts.delete(:content_length)
  @etag = opts.delete(:etag)
  @last_modified = opts.delete(:last_modified)
  super
  @key = key
  @bucket = bucket
end
Instance Attribute Details
#bucket ⇒ Bucket (readonly)
Returns the bucket this object is in.
# File 'lib/aws/s3/s3_object.rb', line 257

def bucket
  @bucket
end
#key ⇒ String (readonly)
Returns the object's unique key.
# File 'lib/aws/s3/s3_object.rb', line 254

def key
  @key
end
Instance Method Details
#==(other) ⇒ Boolean Also known as: eql?
Returns true if the other object belongs to the same bucket and has the same key.
# File 'lib/aws/s3/s3_object.rb', line 266

def == other
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end
#acl ⇒ AccessControlList
Returns the object's access control list. This will be an
instance of AccessControlList, plus an additional change
method:
object.acl.change do |acl|
# remove any grants to someone other than the bucket owner
owner_id = object.bucket.owner.id
acl.grants.reject! do |g|
g.grantee.canonical_user_id != owner_id
end
end
Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it's possible that you may overwrite a concurrent update to the ACL using this method.
# File 'lib/aws/s3/s3_object.rb', line 1128

def acl
  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)
  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl
end
#acl=(acl) ⇒ nil
Sets the object's ACL (access control list). You can provide an ACL in a number of different formats.
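A couple of common forms, sketched here on the assumption that canned ACLs are accepted as symbols:
# a canned ACL given as a symbol
obj.acl = :public_read
# an AccessControlList object that was previously fetched (and
# possibly modified via acl.grants) from this or another object
obj.acl = other_object.acl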
# File 'lib/aws/s3/s3_object.rb', line 1143

def acl=(acl)
  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key
  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil
end
#content_length ⇒ Integer
Returns the size of the object in bytes.
# File 'lib/aws/s3/s3_object.rb', line 319

def content_length
  @content_length = config.s3_cache_object_attributes &&
    @content_length || head[:content_length]
end
#content_type ⇒ String
S3 does not compute content-type. It reports the content-type as was reported during the file upload.
Returns the content type as reported by S3, defaults to an empty string when not provided during upload.
# File 'lib/aws/s3/s3_object.rb', line 327

def content_type
  head[:content_type]
end
#copy_from(source, options = {}) ⇒ nil
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don't specify any of these options when copying, the object will have the default values as described below.
Copies data from one S3 object to another.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
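A few illustrative calls; the keys and bucket names below are placeholders:
obj = bucket.objects['target-key']
# copy from another key in the same bucket
obj.copy_from('source-key')
# copy from a key in a different bucket
obj.copy_from('source-key', :bucket_name => 'source-bucket')
# copy from another S3Object, replacing the metadata on the copy
obj.copy_from(other_object, :metadata => { 'purpose' => 'backup' })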
# File 'lib/aws/s3/s3_object.rb', line 868

def copy_from source, options = {}

  options = options.dup

  options[:copy_source] =
    case source
    when S3Object
      "/#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      options[:version_id] = source.version_id
      "/#{source.object.bucket.name}/#{source.object.key}"
    else
      if options[:bucket]
        "/#{options.delete(:bucket).name}/#{source}"
      elsif options[:bucket_name]
        # oops, this should be slash-prefixed, but unable to change
        # this without breaking users that already work-around this
        # bug by sending :bucket_name => "/bucket-name"
        "#{options.delete(:bucket_name)}/#{source}"
      else
        "/#{self.bucket.name}/#{source}"
      end
    end

  if [:metadata, :content_disposition, :content_type, :cache_control,
  ].any? {|opt| options.key?(opt) }
  then
    options[:metadata_directive] = 'REPLACE'
  else
    options[:metadata_directive] ||= 'COPY'
  end

  # copies client-side encryption materials (from the metadata or
  # instruction file)
  if options.delete(:client_side_encrypted)
    copy_cse_materials(source, options)
  end

  add_sse_options(options)

  options[:storage_class] = options.delete(:reduced_redundancy) ?
    'REDUCED_REDUNDANCY' : 'STANDARD'

  options[:bucket_name] = bucket.name
  options[:key] = key

  if use_multipart_copy?(options)
    multipart_copy(options)
  else
    resp = client.copy_object(options)
  end

  nil

end
#copy_to(target, options = {}) ⇒ S3Object
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don't specify any of these options when copying, the new object will have the default values as described below.
Copies data from the current object to another object in S3.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
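For example (the target key and bucket name are placeholders):
# copy to another key in the same bucket
obj.copy_to('other-key')
# copy to a different bucket and switch to reduced redundancy storage
obj.copy_to('other-key',
  :bucket_name => 'other-bucket',
  :reduced_redundancy => true)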
# File 'lib/aws/s3/s3_object.rb', line 985

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
      when options[:bucket] then options[:bucket]
      when options[:bucket_name]
        Bucket.new(options[:bucket_name], :config => config)
      else self.bucket
      end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
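For example:
obj.delete
# for a client-side encrypted object whose materials were stored in
# an instruction file, remove that file as well
obj.delete(:delete_instruction_file => true)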
# File 'lib/aws/s3/s3_object.rb', line 396

def delete options = {}
  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil
end
#etag ⇒ String
Returns the object's ETag.
Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, then this is the MD5 of all of the upload-part MD5s.
# File 'lib/aws/s3/s3_object.rb', line 307

def etag
  @etag = config.s3_cache_object_attributes && @etag || head[:etag]
end
#exists? ⇒ Boolean
Returns true if the object exists in S3.
# File 'lib/aws/s3/s3_object.rb', line 272

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end
#expiration_date ⇒ DateTime?
# File 'lib/aws/s3/s3_object.rb', line 332

def expiration_date
  head[:expiration_date]
end
#expiration_rule_id ⇒ String?
# File 'lib/aws/s3/s3_object.rb', line 337

def expiration_rule_id
  head[:expiration_rule_id]
end
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object, including:
- metadata (hash of user-supplied key-value pairs)
- content_length (integer, number of bytes)
- content_type (as sent to S3 when uploading the object)
- etag (typically the object's MD5)
- server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)
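A short sketch using the same hash-style access the attribute readers use internally:
h = obj.head
h[:content_length]   # size in bytes
h[:content_type]     # as provided when the object was uploaded
h[:metadata]         # hash of user-supplied metadata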
# File 'lib/aws/s3/s3_object.rb', line 295

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end
#last_modified ⇒ Time
Returns the object's last modified time.
# File 'lib/aws/s3/s3_object.rb', line 314

def last_modified
  @last_modified = config.s3_cache_object_attributes &&
    @last_modified || head[:last_modified]
end
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
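For example (this sketch assumes the object was written with :metadata => { 'color' => 'red' }):
obj.metadata['color']           #=> 'red'
# updating a value rewrites the object's metadata via a copy,
# so it is not a cheap operation
obj.metadata['color'] = 'blue'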
# File 'lib/aws/s3/s3_object.rb', line 435

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end
#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to
Moves an object to a new key.
This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.
bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']
# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')
old_obj.key #=> 'old-key'
old_obj.exists? #=> false
new_obj.key #=> 'new-key'
new_obj.exists? #=> true
If you need to move an object to a different bucket, pass :bucket or :bucket_name.
obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')
If the copy succeeds but the delete fails, an error will be raised.
# File 'lib/aws/s3/s3_object.rb', line 784

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.
Note: After you initiate a multipart upload and upload one or more parts, you must either complete or abort the upload in order to stop being charged for storage of the uploaded parts. Only after you complete or abort the upload does Amazon S3 free up the parts storage and stop charging you for it.
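A minimal sketch of the block form; the part contents are placeholders, and note that every part except the last must be at least 5 MB for S3 to accept the upload:
obj.multipart_upload do |upload|
  upload.add_part("part 1 of the data")
  upload.add_part("part 2 of the data")
end
# the upload is completed when the block returns, or aborted if it raises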
# File 'lib/aws/s3/s3_object.rb', line 719

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end

end
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
# File 'lib/aws/s3/s3_object.rb', line 745

def multipart_uploads
  ObjectUploadCollection.new(self)
end
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
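A sketch of typical usage, assuming the PresignedPost :expires option and its url/fields readers:
post = obj.presigned_post(:expires => Time.now.to_i + 3600)
post.url      # the URL the HTML form should POST to
post.fields   # the hidden form fields to submit with the upload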
# File 'lib/aws/s3/s3_object.rb', line 1289

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
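For example (the exact host shown depends on the bucket name and region):
obj.public_url
#=> https://my-bucket.s3.amazonaws.com/key
obj.public_url(:secure => false)
#=> http://my-bucket.s3.amazonaws.com/key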
# File 'lib/aws/s3/s3_object.rb', line 1277

def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end
#read(options = {}, &read_block) ⇒ Object
Note: the :range option cannot be used with client-side encryption. All decryption reads incur at least an extra HEAD operation.
Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.
Read an object from S3 in chunks
When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.
# read an object from S3 to a file
File.open('output.txt', 'wb') do |file|
bucket.objects['key'].read do |chunk|
file.write(chunk)
end
end
Reading an object without a block
When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.
bucket.objects['key'].read
#=> 'object-contents-here'
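You can also read just part of an object by passing a :range (not available with client-side encryption, as noted above); a sketch:
# read only the first hundred bytes
first_bytes = bucket.objects['key'].read(:range => 0..99)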
# File 'lib/aws/s3/s3_object.rb', line 1082

def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    resp_data = get_object(options, &read_block)
    block_given? ? resp_data : resp_data[:data]
  end

end
#reduced_redundancy=(value) ⇒ true, false
Changing the storage class of an object incurs a COPY operation.
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
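For example:
# switch the object to reduced redundancy storage (issues a copy)
obj.reduced_redundancy = true
# switch back to standard storage
obj.reduced_redundancy = false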
# File 'lib/aws/s3/s3_object.rb', line 1305

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end
#restore(options = {}) ⇒ Boolean
Restores a temporary copy of an archived object from the Glacier storage tier. After the specified number of days, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.
Restoring an object does not occur immediately. Use #restore_in_progress? to check the status of the operation.
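For example (the polling loop is an illustrative sketch):
# ask S3 to keep the temporary copy for two days
obj.restore(:days => 2)
# wait until the restored copy becomes available
sleep 60 while obj.restore_in_progress?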
# File 'lib/aws/s3/s3_object.rb', line 422

def restore options = {}
  options[:days] ||= 1

  client.restore_object(options.merge({
    :bucket_name => bucket.name,
    :key => key,
  }))

  true
end
#restore_expiration_date ⇒ DateTime?
# File 'lib/aws/s3/s3_object.rb', line 368

def restore_expiration_date
  head[:restore_expiration_date]
end
#restore_in_progress? ⇒ Boolean
Returns whether a #restore operation is currently being performed on the object.
# File 'lib/aws/s3/s3_object.rb', line 358

def restore_in_progress?
  head[:restore_in_progress]
end
#restored_object? ⇒ Boolean
Returns whether the object is a temporary copy of an archived object in the Glacier storage class.
# File 'lib/aws/s3/s3_object.rb', line 375

def restored_object?
  !!head[:restore_expiration_date]
end
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt
the object on the server side, or nil
if SSE was not used
when storing the object.
# File 'lib/aws/s3/s3_object.rb', line 344

def server_side_encryption
  head[:server_side_encryption]
end
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
# File 'lib/aws/s3/s3_object.rb', line 350

def server_side_encryption?
  !server_side_encryption.nil?
end
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.
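A sketch; :read and :write name the operation, and :expires is given here as seconds from now:
# a GET URL that expires in ten minutes
url = obj.url_for(:read, :expires => 10 * 60)
# a URL a client can use to upload data to this key
put_url = obj.url_for(:write, :expires => 10 * 60)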
# File 'lib/aws/s3/s3_object.rb', line 1251

def url_for(method, options = {})
  options = options.dup
  options[:expires] = expiration_timestamp(options[:expires])
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  options[:signature_version] ||= config.s3_signature_version

  case options[:signature_version]
  when :v3 then presign_v3(method, options)
  when :v4 then presign_v4(method, options)
  else
    msg = "invalid signature version, expected :v3 or :v4, got "
    msg << options[:signature_version].inspect
    raise ArgumentError, msg
  end
end
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
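A sketch, assuming versioning is enabled on the bucket:
obj.versions.each do |version|
  puts version.version_id
end
obj.versions.latest   # the most recently written version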
# File 'lib/aws/s3/s3_object.rb', line 449

def versions
  ObjectVersionCollection.new(self)
end
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Uploads data to the object in S3.
obj = s3.buckets['bucket-name'].objects['key']
# strings
obj.write("HELLO")
# files (by path)
obj.write(Pathname.new('path/to/file.txt'))
# file objects
obj.write(File.open('path/to/file.txt', 'rb'))
# IO objects (must respond to #read and #eof?)
obj.write(io)
Multipart Uploads vs Single Uploads
This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.
# always send the file in a single request
obj.write(file, :single_request => true)
# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)
# File 'lib/aws/s3/s3_object.rb', line 600

def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end