This guide focuses on the AWS SDK for PHP client for Amazon Simple Storage Service. This guide assumes that you have already downloaded and installed the AWS SDK for PHP. See Installation for more information on getting started.
First you need to create a client object using one of the following techniques.
The easiest way to get up and running quickly is to use the Aws\S3\S3Client::factory() method and provide your credential profile (via the profile option), which identifies the set of credentials you want to use from your ~/.aws/credentials file (see Using the AWS credentials file and credential profiles).
use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'profile' => '<profile in your aws credentials file>'
));
You can provide your credential profile as in the preceding example, specify your access keys directly (via the key and secret options), or choose to omit any credential information entirely if you are using AWS Identity and Access Management (IAM) roles for EC2 instances or credentials sourced from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
Note
The profile option and AWS credential file support are only available in version 2.6.1 of the SDK and higher. We recommend that all users update their copies of the SDK to take advantage of this feature, which is a safer way to specify credentials than explicitly providing key and secret.
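For reference, a client configured with explicit access keys looks like the following minimal sketch (the placeholder values are hypothetical, and this approach is less safe than the profile option described above):
use Aws\S3\S3Client;

// Explicit credentials; prefer the profile option or environment
// variables where possible
$client = S3Client::factory(array(
    'key'    => '<your access key id>',
    'secret' => '<your secret access key>'
));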
A more robust way to connect to Amazon Simple Storage Service is through the service builder. This allows you to specify credentials and other configuration settings in a configuration file. These settings can then be shared across all clients so that you only have to specify your settings once.
use Aws\Common\Aws;

// Create a service builder using a configuration file
$aws = Aws::factory('/path/to/my_config.json');

// Get the client from the builder by namespace
$client = $aws->get('S3');
For more information about configuration files, see Configuring the SDK.
Now that we've created a client object, let's create a bucket. This bucket will be used throughout the remainder of this guide.
$client->createBucket(array('Bucket' => 'mybucket'));
If you run the above code example unaltered, you'll probably trigger the following exception:
PHP Fatal error: Uncaught Aws\S3\Exception\BucketAlreadyExistsException: AWS Error
Code: BucketAlreadyExists, Status Code: 409, AWS Request ID: D94E6394791E98A4,
AWS Error Type: client, AWS Error Message: The requested bucket name is not
available. The bucket namespace is shared by all users of the system. Please select
a different name and try again.
This is because bucket names in Amazon S3 reside in a global namespace. You will need to change the name of the bucket used in this tutorial's examples to something unique in order for them to work correctly.
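If you want to avoid collisions while experimenting, one hypothetical convention (not part of the SDK) is to derive a unique bucket name at runtime:
// Suffix a base name with a unique ID so the bucket name is
// unlikely to collide in the global namespace
$bucket = 'mybucket-' . uniqid();
$client->createBucket(array('Bucket' => $bucket));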
The above example creates a bucket in the standard us-east-1 region. You can change the bucket location by passing a LocationConstraint value.
// Create a valid bucket and use a LocationConstraint
$result = $client->createBucket(array(
    'Bucket'             => $bucket,
    'LocationConstraint' => 'us-west-2',
));

// Get the Location header of the response
echo $result['Location'] . "\n";

// Get the request ID
echo $result['RequestId'] . "\n";
Now that we've created a bucket, let's force our application to wait until the bucket exists. This can be done easily using a waiter. The following snippet of code will poll the bucket until it exists or the maximum number of polling attempts is reached.
// Poll the bucket until it is accessible
$client->waitUntil('BucketExists', array('Bucket' => $bucket));
Now that you've created a bucket, let's put some data in it. The following example creates an object in your bucket called data.txt that contains 'Hello!'.
// Upload an object to Amazon S3
$result = $client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
    'Body'   => 'Hello!'
));
// Access parts of the result object
echo $result['Expiration'] . "\n";
echo $result['ServerSideEncryption'] . "\n";
echo $result['ETag'] . "\n";
echo $result['VersionId'] . "\n";
echo $result['RequestId'] . "\n";
// Get the URL the object can be downloaded from
echo $result['ObjectURL'] . "\n";
The AWS SDK for PHP will attempt to automatically determine the most appropriate Content-Type header used to store the object. If you are using a less common file extension and your Content-Type header is not added automatically, you can add a Content-Type header by passing a ContentType option to the operation.
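For example, here is a minimal sketch of setting the header explicitly (the key name and media type are illustrative):
// Explicitly set the Content-Type header via the ContentType option
$client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => 'data.custom',
    'Body'        => 'Hello!',
    'ContentType' => 'text/plain'
));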
The above example uploaded text data to your object. You can alternatively upload the contents of a file by passing the SourceFile option. Let's also put some metadata on the object.
// Upload an object by streaming the contents of a file
// $pathToFile should be an absolute path to a file on disk
$result = $client->putObject(array(
    'Bucket'     => $bucket,
    'Key'        => 'data_from_file.txt',
    'SourceFile' => $pathToFile,
    'Metadata'   => array(
        'Foo' => 'abc',
        'Baz' => '123'
    )
));
// We can poll the object until it is accessible
$client->waitUntil('ObjectExists', array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_file.txt'
));
Alternatively, you can pass a resource returned from an fopen call to the Body parameter.
// Upload an object by streaming the contents of a PHP stream.
// Note: You must supply a "ContentLength" parameter to an
// operation if the stream does not respond to fstat() or if the
// fstat() of the stream does not provide a valid 'size' attribute.
// For example, the "http" stream wrapper will require a ContentLength
// parameter because it does not respond to fstat().
$client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_stream.txt',
    'Body'   => fopen($pathToFile, 'r+')
));
Because the AWS SDK for PHP is built around Guzzle, you can also pass an EntityBody object.
// Be sure to add a use statement at the beginning of your script:
// use Guzzle\Http\EntityBody;

// Upload an object by streaming the contents of an EntityBody object
$client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_entity_body.txt',
    'Body'   => EntityBody::factory(fopen($pathToFile, 'r+'))
));
You can list all of the buckets owned by your account using the listBuckets method.
$result = $client->listBuckets();

foreach ($result['Buckets'] as $bucket) {
    // Each Bucket value will contain a Name and CreationDate
    echo "{$bucket['Name']} - {$bucket['CreationDate']}\n";
}
All service operation calls using the AWS SDK for PHP return a Guzzle\Service\Resource\Model object. This object contains all of the data returned from the service in a normalized, array-like object. The object also contains a get() method used to retrieve values from the model by name, and a getPath() method that can be used to retrieve nested values.
// Grab the nested Owner/ID value from the result model using getPath()
$result = $client->listBuckets();
echo $result->getPath('Owner/ID') . "\n";
Listing objects is a lot easier in the new SDK thanks to iterators. You can list all of the objects in a bucket using the ListObjectsIterator.
$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket
));

foreach ($iterator as $object) {
    echo $object['Key'] . "\n";
}
Iterators will handle sending any required subsequent requests when a response is truncated. The ListObjects iterator works with other parameters too.
$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => 'foo'
));

foreach ($iterator as $object) {
    echo $object['Key'] . "\n";
}
You can convert any iterator to an array using the toArray() method of the iterator.
Note
Converting an iterator to an array will load the entire contents of the iterator into memory.
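For instance, a small sketch with that caveat in mind:
// Load every object description from the iterator into an array.
// Only do this when the listing is known to be small.
$objects = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket
))->toArray();

echo count($objects) . " objects\n";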
You can use the GetObject operation to download an object.
// Get an object using the getObject operation
$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt'
));
// The 'Body' value of the result is an EntityBody object
echo get_class($result['Body']) . "\n";
// > Guzzle\Http\EntityBody
// The 'Body' value can be cast to a string
echo $result['Body'] . "\n";
// > Hello!
The contents of the object are stored in the Body parameter of the model object. Other parameters are stored in the model as well, including ContentType, ContentLength, VersionId, ETag, etc.
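For instance, a quick sketch reading two of those values back off the result:
// Access additional response values from the result model
echo $result['ContentType'] . "\n";
echo $result['ContentLength'] . "\n";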
The Body parameter stores a reference to a Guzzle\Http\EntityBody object. The SDK will store the data in a temporary PHP stream by default. This will work for most use cases and will automatically protect your application from attempting to download extremely large files into memory.
The EntityBody object has other nice features that allow you to read data using streams.
// Seek to the beginning of the stream
$result['Body']->rewind();

// Read the body off of the underlying stream in chunks
while ($data = $result['Body']->read(1024)) {
    echo $data;
}

// Cast the body to a primitive string
// Warning: This loads the entire contents into memory!
$bodyAsString = (string) $result['Body'];
You can save the contents of an object to a file by setting the SaveAs parameter.
$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
    'SaveAs' => '/tmp/data.txt'
));
// Contains an EntityBody that wraps a file resource of /tmp/data.txt
echo $result['Body']->getUri() . "\n";
// > /tmp/data.txt
Amazon S3 allows you to upload large files in pieces. The AWS SDK for PHP provides an abstraction layer that makes it easier to upload large files using multipart upload.
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.mov')
    ->setBucket('mybucket')
    ->setKey('my-object-key')
    ->setOption('Metadata', array('Foo' => 'Bar'))
    ->setOption('CacheControl', 'max-age=3600')
    ->build();

// Perform the upload. Abort the upload if something goes wrong
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
}
You can attempt to upload parts in parallel by specifying the concurrency option on the UploadBuilder object. The following example will create a transfer object that will attempt to upload three parts in parallel until the entire object has been uploaded.
$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.mov')
    ->setBucket('mybucket')
    ->setKey('my-object-key')
    ->setConcurrency(3)
    ->build();
You can use the Aws\S3\S3Client::upload() method if you just want to upload files without worrying about whether they are too large to send in a single PutObject operation or require a multipart upload.
$client->upload('bucket', 'key', 'object body', 'public-read');
You can specify a canned ACL on an object when uploading:
$client->putObject(array(
    'Bucket'     => 'mybucket',
    'Key'        => 'data.txt',
    'SourceFile' => '/path/to/data.txt',
    'ACL'        => 'public-read'
));
You can specify more complex ACLs using the ACP parameter when sending PutObject, CopyObject, CreateBucket, CreateMultipartUpload, PutBucketAcl, PutObjectAcl, and other operations that accept a canned ACL. Using the ACP parameter allows you to specify more granular access control policies using an Aws\S3\Model\Acp object. The easiest way to create an Acp object is through the Aws\S3\Model\AcpBuilder.
use Aws\S3\Enum\Group;
use Aws\S3\Model\AcpBuilder;

$acp = AcpBuilder::newInstance()
    ->setOwner($myOwnerId)
    ->addGrantForEmail('READ', 'test@example.com')
    ->addGrantForUser('FULL_CONTROL', 'user-id')
    ->addGrantForGroup('READ', Group::AUTHENTICATED_USERS)
    ->build();

$client->putObject(array(
    'Bucket'     => 'mybucket',
    'Key'        => 'data.txt',
    'SourceFile' => '/path/to/data.txt',
    'ACP'        => $acp
));
You can authenticate certain types of requests by passing the required information as query-string parameters instead of using the Authorization HTTP header. This is useful for enabling direct third-party browser access to your private Amazon S3 data, without proxying the request. The idea is to construct a "pre-signed" request and encode it as a URL that an end-user's browser can retrieve. Additionally, you can limit a pre-signed request by specifying an expiration time.
The most common scenario is creating a pre-signed URL to GET an object. The easiest way to do this is to use the getObjectUrl method of the Amazon S3 client. This same method can also be used to get an unsigned URL of a public S3 object.
// Get a plain URL for an Amazon S3 object
$plainUrl = $client->getObjectUrl($bucket, 'data.txt');
// > https://my-bucket.s3.amazonaws.com/data.txt
// Get a pre-signed URL for an Amazon S3 object
$signedUrl = $client->getObjectUrl($bucket, 'data.txt', '+10 minutes');
// > https://my-bucket.s3.amazonaws.com/data.txt?AWSAccessKeyId=[...]&Expires=[...]&Signature=[...]
// Create a vanilla Guzzle HTTP client for accessing the URLs
$http = new \Guzzle\Http\Client;

// Try to get the plain URL. This should result in a 403 since the object is private
try {
    $response = $http->get($plainUrl)->send();
} catch (\Guzzle\Http\Exception\ClientErrorResponseException $e) {
    $response = $e->getResponse();
}
echo $response->getStatusCode();
// > 403
// Get the contents of the object using the pre-signed URL
$response = $http->get($signedUrl)->send();
echo $response->getBody();
// > Hello!
You can also create pre-signed URLs for any Amazon S3 operation by using the getCommand method to create a Guzzle command object and then calling the createPresignedUrl() method on the command.
// Get a command object from the client and pass in any options
// available in the GetObject command (e.g. ResponseContentDisposition)
$command = $client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
    'ResponseContentDisposition' => 'attachment; filename="data.txt"'
));
// Create a signed URL from the command object that will last for
// 10 minutes from the current time
$signedUrl = $command->createPresignedUrl('+10 minutes');
echo file_get_contents($signedUrl);
// > Hello!
If you need more flexibility in creating your pre-signed URL, then you can create a pre-signed URL for a completely custom Guzzle\Http\Message\RequestInterface object. You can use the get(), post(), head(), put(), and delete() methods of a client object to easily create a Guzzle request object.
$key = 'data.txt';
$url = "{$bucket}/{$key}";
// get() returns a Guzzle\Http\Message\Request object
$request = $client->get($url);
// Create a signed URL from a completely custom HTTP request that
// will last for 10 minutes from the current time
$signedUrl = $client->createPresignedUrl($request, '+10 minutes');
echo file_get_contents($signedUrl);
// > Hello!
The Amazon S3 stream wrapper allows you to store and retrieve data from Amazon S3 using built-in PHP functions like file_get_contents, fopen, copy, rename, unlink, mkdir, rmdir, etc.
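As a minimal sketch (the bucket and key names are illustrative), the wrapper must first be registered on the client, after which objects can be addressed with s3:// paths:
// Register the 's3://' stream wrapper on this client
$client->registerStreamWrapper();

// Read an object using a built-in PHP function
echo file_get_contents('s3://my-bucket/data.txt');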
Uploading a local directory to an Amazon S3 bucket is rather simple:
$client->uploadDirectory('/local/directory', 'my-bucket');
The uploadDirectory() method of a client will compare the contents of the local directory to the contents in the Amazon S3 bucket and only transfer files that have changed. While iterating over the keys in the bucket and comparing against the names of local files using a customizable filename-to-key converter, the changed files are added to an in-memory queue and uploaded concurrently. When the size of a file exceeds a customizable multipart_upload_size parameter, the uploader will automatically upload the file using a multipart upload.
The method signature of the uploadDirectory() method allows for the following arguments:
public function uploadDirectory($directory, $bucket, $keyPrefix = null, array $options = array())
By specifying $keyPrefix, you can cause the uploaded objects to be placed under a virtual folder in the Amazon S3 bucket. For example, if the $bucket name is my-bucket and the $keyPrefix is 'testing/', then your files will be uploaded to my-bucket under the testing/ virtual folder:
https://my-bucket.s3.amazonaws.com/testing/filename.txt
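A call producing keys like the one above might look like this sketch:
// Upload the directory under the 'testing/' virtual folder
$client->uploadDirectory('/local/directory', 'my-bucket', 'testing/');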
The uploadDirectory() method also accepts an optional associative array of $options that can be used to further control the transfer.
params      | Array of parameters to use with each PutObject or CreateMultipartUpload operation performed during the transfer. For example, you can specify an ACL key to change the ACL of each uploaded object. See PutObject for a list of available options.
base_dir    | Base directory to remove from each object key. By default, the $directory passed into the uploadDirectory() method will be removed from each object key.
force       | Set to true to upload every file, even if the file is already in Amazon S3 and has not changed.
concurrency | Maximum number of parallel uploads (defaults to 5).
debug       | Set to true to enable debug mode to print information about each upload. Setting this value to an fopen resource will write the debug output to a stream rather than to STDOUT.
In the following example, a local directory is uploaded with each object stored in the bucket using a public-read
ACL, 20 requests are sent in parallel, and debug information is printed to standard output as each request is
transferred.
$dir = '/local/directory';
$bucket = 'my-bucket';
$keyPrefix = '';

$client->uploadDirectory($dir, $bucket, $keyPrefix, array(
    'params'      => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug'       => true
));
The uploadDirectory() method is an abstraction layer over the much more powerful Aws\S3\Sync\UploadSyncBuilder. You can use an UploadSyncBuilder object directly if you need more control over the transfer. Using an UploadSyncBuilder allows for the following advanced features:

- Specifying a custom \Iterator object used to yield files to an UploadSync object. This can be used, for example, to further filter which files are transferred using something like the Symfony 2 Finder component.
- Specifying the Aws\S3\Sync\FilenameConverterInterface objects used to convert Amazon S3 object names to local filenames and vice versa. This can be useful if you require files to be renamed in a specific way.

use Aws\S3\Sync\UploadSyncBuilder;

UploadSyncBuilder::getInstance()
    ->setClient($client)
    ->setBucket('my-bucket')
    ->setAcl('public-read')
    ->uploadFromGlob('/path/to/file/*.php')
    ->build()
    ->transfer();
You can download the objects stored in an Amazon S3 bucket using features similar to the uploadDirectory() method and the UploadSyncBuilder. You can download the entire contents of a bucket using the Aws\S3\S3Client::downloadBucket() method.
The following example will download all of the objects from my-bucket and store them in /local/directory. Object keys that are under virtual subfolders are converted into a nested directory structure when downloading the objects. Any directories missing on the local filesystem will be created automatically.
$client->downloadBucket('/local/directory', 'my-bucket');
The method signature of the downloadBucket() method allows for the following arguments:
public function downloadBucket($directory, $bucket, $keyPrefix = null, array $options = array())
By specifying $keyPrefix, you can limit the downloaded objects to only keys that begin with the specified $keyPrefix. This can be useful, for example, for downloading objects under a specific virtual directory.
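For example, a sketch that downloads only keys under the testing/ virtual directory used earlier:
// Download only objects whose keys begin with 'testing/'
$client->downloadBucket('/local/directory', 'my-bucket', 'testing/');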
The downloadBucket() method also accepts an optional associative array of $options that can be used to further control the transfer.
params          | Array of parameters to use with each GetObject operation performed during the transfer. See GetObject for a list of available options.
base_dir        | Base directory to remove from each object key when downloading. By default, the entire object key is used to determine the path to the file on the local filesystem.
force           | Set to true to download every file, even if the file is already on the local filesystem and has not changed.
concurrency     | Maximum number of parallel downloads (defaults to 10).
debug           | Set to true to enable debug mode to print information about each download. Setting this value to an fopen resource will write the debug output to a stream rather than to STDOUT.
allow_resumable | Set to true to allow previously interrupted downloads to be resumed using a Range GET.
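Mirroring the upload example above, a sketch combining a few of these options:
// Download with 20 parallel requests, allow interrupted transfers
// to resume, and print debug output for each request
$client->downloadBucket('/local/directory', 'my-bucket', null, array(
    'concurrency'     => 20,
    'allow_resumable' => true,
    'debug'           => true
));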
The downloadBucket() method is an abstraction layer over the much more powerful Aws\S3\Sync\DownloadSyncBuilder. You can use a DownloadSyncBuilder object directly if you need more control over the transfer. Using the DownloadSyncBuilder allows for the following advanced features:

- Like the UploadSyncBuilder, you can specify a custom \Iterator object used to yield files to a DownloadSync object.
- Specifying the Aws\S3\Sync\FilenameConverterInterface objects used to convert Amazon S3 object names to local filenames and vice versa.

use Aws\S3\Sync\DownloadSyncBuilder;

DownloadSyncBuilder::getInstance()
    ->setClient($client)
    ->setDirectory('/path/to/directory')
    ->setBucket('my-bucket')
    ->setKeyPrefix('/under-prefix')
    ->allowResumableDownloads()
    ->build()
    ->transfer();
Now that we've taken a tour of how you can use the Amazon S3 client, let's clean up any resources we may have created.
use Aws\S3\Model\ClearBucket;

// Delete the objects in the bucket before attempting to delete
// the bucket
$clear = new ClearBucket($client, $bucket);
$clear->clear();

// Delete the bucket
$client->deleteBucket(array('Bucket' => $bucket));

// Wait until the bucket is not accessible
$client->waitUntil('BucketNotExists', array('Bucket' => $bucket));
Please see the Amazon Simple Storage Service Client API reference for details about all of the available methods, including descriptions of the inputs and outputs.