This documentation is for Version 1 of the AWS CLI only. For documentation related to Version 2 of the AWS CLI, see the Version 2 User Guide.
Using high-level (s3) commands in the AWS CLI
This topic describes some of the commands you can use to manage Amazon S3 buckets and objects using the aws s3 commands in the AWS CLI. For commands not covered in this topic and additional command examples, see the aws s3 commands in the AWS CLI Reference.
The high-level aws s3 commands simplify managing Amazon S3 objects. These commands enable you to manage the contents of Amazon S3 within itself and with local directories.
Prerequisites
To run the s3 commands, you need to:
- Install and configure the AWS CLI. For more information, see Installing, updating, and uninstalling the AWS CLI and Authentication and access credentials for the AWS CLI.
- The profile that you use must have permissions that allow the AWS operations performed by the examples.
- Understand these Amazon S3 terms:
  - Bucket – A top-level Amazon S3 folder.
  - Prefix – An Amazon S3 folder in a bucket.
  - Object – Any item that's hosted in an Amazon S3 bucket.
Before you start
This section describes a few things to note before you use aws s3 commands.
Large object uploads
When you use aws s3 commands to upload large objects to an Amazon S3 bucket, the AWS CLI automatically performs a multipart upload. You can't resume a failed upload when using these aws s3 commands.
If the multipart upload fails due to a timeout, or if you manually cancel it in the AWS CLI, the AWS CLI stops the upload and cleans up any files that were created. This process can take several minutes.
If the multipart upload or cleanup process is canceled by a kill command or system failure, the created files remain in the Amazon S3 bucket. To clean up the multipart upload, use the s3api abort-multipart-upload command.
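A cleanup sketch, assuming your bucket is amzn-s3-demo-bucket (the key and upload ID shown are placeholders; copy the real Key and UploadId values from the list-multipart-uploads output):

```shell
# Find any incomplete multipart uploads left in the bucket.
$ aws s3api list-multipart-uploads --bucket amzn-s3-demo-bucket

# Abort one of them, freeing the storage used by its uploaded parts.
# large-file.bin and the upload ID are placeholders taken from the list output.
$ aws s3api abort-multipart-upload \
    --bucket amzn-s3-demo-bucket \
    --key large-file.bin \
    --upload-id EXAMPLEUPLOADID
```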
File properties and tags in multipart copies
When you use the AWS CLI version 1 commands in the aws s3 namespace to copy a file from one Amazon S3 bucket location to another Amazon S3 bucket location, and that operation uses multipart copy, no file properties from the source object are copied to the destination object.
Create a bucket
Use the s3 mb command to make a bucket. Bucket names must be globally unique (unique across all of Amazon S3) and should be DNS compliant.
Bucket names can contain lowercase letters, numbers, hyphens, and periods. Bucket names can start and end only with a letter or number, and cannot contain a period next to a hyphen or another period.
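As an illustration of these rules (both bucket names below are hypothetical), the first name is acceptable, while the second is rejected because it contains uppercase letters and underscores:

```shell
# Valid: lowercase letters, numbers, hyphens, and periods only
$ aws s3 mb s3://demo-bucket.example-01

# Invalid: uppercase letters and underscores violate the naming rules,
# so the request fails with an error
$ aws s3 mb s3://My_Demo_Bucket
```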
Syntax
$ aws s3 mb <target> [--options]
The following example creates the s3://amzn-s3-demo-bucket bucket.
$ aws s3 mb s3://amzn-s3-demo-bucket
List buckets and objects
To list your buckets, folders, or objects, use the s3 ls command. Using the command without a target or options lists all buckets.
Syntax
$ aws s3 ls <target> [--options]
For a few common options to use with this command, and examples, see Frequently used options for s3 commands. For a complete list of available options, see s3 ls in the AWS CLI Command Reference.
The following example lists all of your Amazon S3 buckets.
$ aws s3 ls
2018-12-11 17:08:50 amzn-s3-demo-bucket1
2018-12-14 14:55:44 amzn-s3-demo-bucket2
The following command lists all objects and prefixes in a bucket. In this example output, the prefix example/ has one file named MyFile1.txt.
$ aws s3 ls s3://amzn-s3-demo-bucket
                           PRE example/
2018-12-04 19:05:48          3 MyFile1.txt
You can filter the output to a specific prefix by including it in the command. The following command lists the objects in amzn-s3-demo-bucket/example/ (that is, objects in amzn-s3-demo-bucket filtered by the prefix example/).
$ aws s3 ls s3://amzn-s3-demo-bucket/example/
2018-12-06 18:59:32          3 MyFile1.txt
Delete buckets
To delete a bucket, use the s3 rb command.
Syntax
$ aws s3 rb <target> [--options]
The following example removes the s3://amzn-s3-demo-bucket bucket.
$ aws s3 rb s3://amzn-s3-demo-bucket
By default, the bucket must be empty for the operation to succeed. To remove a bucket that's not empty, you need to include the --force option. If you're using a versioned bucket that contains previously deleted (but retained) objects, this command does not allow you to remove the bucket. You must first remove all of the content.
The following example deletes all objects and prefixes in the bucket, and then deletes the bucket.
$ aws s3 rb s3://amzn-s3-demo-bucket --force
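For a versioned bucket, a sketch of one way to remove every retained object version and delete marker before removing the bucket. This assumes the jq tool is installed, and it ignores pagination of list-object-versions for brevity; the s3api operations used here are real AWS CLI commands.

```shell
# Collect every object version and delete marker, then delete each one.
BUCKET=amzn-s3-demo-bucket
aws s3api list-object-versions --bucket "$BUCKET" \
  --query '[Versions[].{Key:Key,VersionId:VersionId},DeleteMarkers[].{Key:Key,VersionId:VersionId}][]' \
  --output json |
jq -c '.[] | select(. != null)' |
while read -r entry; do
  key=$(echo "$entry" | jq -r .Key)
  vid=$(echo "$entry" | jq -r .VersionId)
  aws s3api delete-object --bucket "$BUCKET" --key "$key" --version-id "$vid"
done

# With all versions gone, the bucket can now be removed.
aws s3 rb "s3://$BUCKET"
```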
Delete objects
To delete objects in a bucket or your local directory, use the s3 rm command.
Syntax
$ aws s3 rm <target> [--options]
For a few common options to use with this command, and examples, see Frequently used options for s3 commands. For a complete list of options, see s3 rm in the AWS CLI Command Reference.
The following example deletes filename.txt from s3://amzn-s3-demo-bucket/example.
$ aws s3 rm s3://amzn-s3-demo-bucket/example/filename.txt
The following example deletes all objects from s3://amzn-s3-demo-bucket/example using the --recursive option.
$ aws s3 rm s3://amzn-s3-demo-bucket/example --recursive
Move objects
Use the s3 mv command to move objects from a bucket or a local directory. The s3 mv command copies the source object or file to the specified destination and then deletes the source object or file.
Syntax
$ aws s3 mv <source> <target> [--options]
For a few common options to use with this command, and examples, see Frequently used options for s3 commands. For a complete list of available options, see s3 mv in the AWS CLI Command Reference.
Warning
If you are using any type of access point ARNs or access point aliases in your Amazon S3 source or destination URIs, you must take extra care that your source and destination Amazon S3 URIs resolve to different underlying buckets. If the source and destination buckets are the same, the source file or object can be moved onto itself, which can result in accidental deletion of your source file or object. To verify that the source and destination buckets are not the same, use the --validate-same-s3-paths parameter, or set the environment variable AWS_CLI_S3_MV_VALIDATE_SAME_S3_PATHS to true.
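For example, with a hypothetical access point alias as the source (the alias name below is illustrative), adding the flag makes the AWS CLI resolve both URIs and fail the command instead of overwriting and deleting the object when they point to the same bucket:

```shell
# Refuse the move if the access point alias resolves to the
# destination bucket (the alias name is a placeholder).
$ aws s3 mv s3://my-alias-ex4mple-s3alias/filename.txt \
    s3://amzn-s3-demo-bucket/filename.txt \
    --validate-same-s3-paths

# Equivalent, using the environment variable instead of the flag.
$ export AWS_CLI_S3_MV_VALIDATE_SAME_S3_PATHS=true
$ aws s3 mv s3://my-alias-ex4mple-s3alias/filename.txt s3://amzn-s3-demo-bucket/filename.txt
```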
The following example moves all objects from s3://amzn-s3-demo-bucket/example to s3://amzn-s3-demo-bucket/.
$ aws s3 mv s3://amzn-s3-demo-bucket/example s3://amzn-s3-demo-bucket/
The following example moves a local file from your current working directory to the Amazon S3 bucket with the s3 mv command.
$ aws s3 mv filename.txt s3://amzn-s3-demo-bucket
The following example moves a file from your Amazon S3 bucket to your current working directory, where ./ specifies your current working directory.
$ aws s3 mv s3://amzn-s3-demo-bucket/filename.txt ./
Copy objects
Use the s3 cp command to copy objects from a bucket or a local directory.
Syntax
$ aws s3 cp <source> <target> [--options]
You can use the dash parameter for file streaming to standard input (stdin) or standard output (stdout).
Warning
If you're using PowerShell, the shell might alter the encoding of a CRLF or add a CRLF to piped input or output, or redirected output.
The s3 cp command uses the following syntax to upload a file stream from stdin to a specified bucket.
Syntax
$ aws s3 cp - <target> [--options]
The s3 cp command uses the following syntax to download an Amazon S3 file stream to stdout.
Syntax
$ aws s3 cp <target> [--options] -
For a few common options to use with this command, and examples, see Frequently used options for s3 commands. For the complete list of options, see s3 cp in the AWS CLI Command Reference.
The following example copies all objects from s3://amzn-s3-demo-bucket/example to s3://amzn-s3-demo-bucket/.
$ aws s3 cp s3://amzn-s3-demo-bucket/example s3://amzn-s3-demo-bucket/
The following example copies a local file from your current working directory to the Amazon S3 bucket with the s3 cp command.
$ aws s3 cp filename.txt s3://amzn-s3-demo-bucket
The following example copies a file from your Amazon S3 bucket to your current working directory, where ./ specifies your current working directory.
$ aws s3 cp s3://amzn-s3-demo-bucket/filename.txt ./
The following example uses echo to stream the text "hello world" to the s3://amzn-s3-demo-bucket/filename.txt file.
$ echo "hello world" | aws s3 cp - s3://amzn-s3-demo-bucket/filename.txt
The following example streams the s3://amzn-s3-demo-bucket/filename.txt file to stdout and prints the contents to the console.
$ aws s3 cp s3://amzn-s3-demo-bucket/filename.txt -
hello world
The following example streams the contents of s3://amzn-s3-demo-bucket/pre to stdout, uses the bzip2 command to compress the data, and uploads the new compressed file named key.bz2 to s3://amzn-s3-demo-bucket.
$ aws s3 cp s3://amzn-s3-demo-bucket/pre - | bzip2 --best | aws s3 cp - s3://amzn-s3-demo-bucket/key.bz2
Sync objects
The s3 sync command synchronizes the contents of a bucket and a directory, or the contents of two buckets. Typically, s3 sync copies missing or outdated files or objects between the source and target. However, you can also supply the --delete option to remove files or objects from the target that are not present in the source.
Syntax
$ aws s3 sync <source> <target> [--options]
For a few common options to use with this command, and examples, see Frequently used options for s3 commands. For a complete list of options, see s3 sync in the AWS CLI Command Reference.
The following example synchronizes the current working directory with the prefix named path in the bucket named amzn-s3-demo-bucket. s3 sync updates any files that have a size or modified time that are different from files with the same name at the destination. The output displays specific operations performed during the sync. Notice that the operation recursively synchronizes the subdirectory MySubdirectory and its contents with s3://amzn-s3-demo-bucket/path/MySubdirectory.
$ aws s3 sync . s3://amzn-s3-demo-bucket/path
upload: MySubdirectory\MyFile3.txt to s3://amzn-s3-demo-bucket/path/MySubdirectory/MyFile3.txt
upload: MyFile2.txt to s3://amzn-s3-demo-bucket/path/MyFile2.txt
upload: MyFile1.txt to s3://amzn-s3-demo-bucket/path/MyFile1.txt
The following example, which extends the previous one, shows how to use the --delete option.

// Delete local file
$ rm ./MyFile1.txt

// Attempt sync without --delete option - nothing happens
$ aws s3 sync . s3://amzn-s3-demo-bucket/path

// Sync with deletion - object is deleted from bucket
$ aws s3 sync . s3://amzn-s3-demo-bucket/path --delete
delete: s3://amzn-s3-demo-bucket/path/MyFile1.txt

// Delete object from bucket
$ aws s3 rm s3://amzn-s3-demo-bucket/path/MySubdirectory/MyFile3.txt
delete: s3://amzn-s3-demo-bucket/path/MySubdirectory/MyFile3.txt

// Sync with deletion - local file is deleted
$ aws s3 sync s3://amzn-s3-demo-bucket/path . --delete
delete: MySubdirectory\MyFile3.txt

// Sync with Infrequent Access storage class
$ aws s3 sync . s3://amzn-s3-demo-bucket/path --storage-class STANDARD_IA
When using the --delete option, the --exclude and --include options can filter files or objects to delete during an s3 sync operation. In this case, the parameter string must specify files to exclude from, or include for, deletion in the context of the target directory or bucket. The following shows an example.

Assume a local directory and s3://amzn-s3-demo-bucket/path are currently in sync and each contains 3 files:
MyFile1.txt
MyFile2.rtf
MyFile88.txt

// Sync with delete, excluding files that match a pattern. MyFile88.txt is deleted, while remote MyFile1.txt is not.
$ aws s3 sync . s3://amzn-s3-demo-bucket/path --delete --exclude "path/MyFile?.txt"
delete: s3://amzn-s3-demo-bucket/path/MyFile88.txt

// Sync with delete, excluding MyFile2.rtf - local file is NOT deleted
$ aws s3 sync s3://amzn-s3-demo-bucket/path . --delete --exclude "./MyFile2.rtf"
download: s3://amzn-s3-demo-bucket/path/MyFile1.txt to MyFile1.txt

// Sync with delete, local copy of MyFile2.rtf is deleted
$ aws s3 sync s3://amzn-s3-demo-bucket/path . --delete
delete: MyFile2.rtf
Frequently used options for s3 commands
The following options are frequently used for the commands described in this topic. For a complete list of options you can use on a command, see the specific command in the AWS CLI reference guide.
- acl

s3 sync and s3 cp can use the --acl option. This enables you to set the access permissions for files copied to Amazon S3. The --acl option accepts private, public-read, and public-read-write values. For more information, see Canned ACL in the Amazon S3 User Guide.

$ aws s3 sync . s3://amzn-s3-demo-bucket/path --acl public-read
- exclude

When you use the s3 cp, s3 mv, s3 sync, or s3 rm command, you can filter the results by using the --exclude or --include option. The --exclude option sets rules to only exclude objects from the command, and the options apply in the order specified. This is shown in the following example.

Local directory contains 3 files:
MyFile1.txt
MyFile2.rtf
MyFile88.txt

// Exclude all .txt files, resulting in only MyFile2.rtf being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --exclude "*.txt"

// Exclude all .txt files but include all files with the "MyFile*.txt" format, resulting in MyFile1.txt, MyFile2.rtf, and MyFile88.txt being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --exclude "*.txt" --include "MyFile*.txt"

// Exclude all .txt files, but include all files with the "MyFile*.txt" format, but exclude all files with the "MyFile?.txt" format, resulting in MyFile2.rtf and MyFile88.txt being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --exclude "*.txt" --include "MyFile*.txt" --exclude "MyFile?.txt"
- include

When you use the s3 cp, s3 mv, s3 sync, or s3 rm command, you can filter the results using the --exclude or --include option. The --include option sets rules to only include objects specified for the command, and the options apply in the order specified. This is shown in the following example.

Local directory contains 3 files:
MyFile1.txt
MyFile2.rtf
MyFile88.txt

// Include all .txt files, resulting in MyFile1.txt and MyFile88.txt being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --include "*.txt"

// Include all .txt files but exclude all files with the "MyFile*.txt" format, resulting in no files being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --include "*.txt" --exclude "MyFile*.txt"

// Include all .txt files, but exclude all files with the "MyFile*.txt" format, but include all files with the "MyFile?.txt" format, resulting in MyFile1.txt being copied
$ aws s3 cp . s3://amzn-s3-demo-bucket/path --include "*.txt" --exclude "MyFile*.txt" --include "MyFile?.txt"
- grant

The s3 cp, s3 mv, and s3 sync commands include a --grants option that you can use to grant permissions on the object to specified users or groups. Set the --grants option to a list of permissions using the following syntax. Replace Permission, Grantee_Type, and Grantee_ID with your own values.

Syntax
--grants Permission=Grantee_Type=Grantee_ID [Permission=Grantee_Type=Grantee_ID ...]

Each value contains the following elements:
- Permission – Specifies the granted permissions. Can be set to read, readacl, writeacl, or full.
- Grantee_Type – Specifies how to identify the grantee. Can be set to uri, emailaddress, or id.
- Grantee_ID – Specifies the grantee based on Grantee_Type.
  - uri – The group's URI. For more information, see Who is a grantee?
  - emailaddress – The account's email address.
  - id – The account's canonical ID.

For more information about Amazon S3 access control, see Access control.

The following example copies an object into a bucket. It grants read permissions on the object to everyone, and full permissions (read, readacl, and writeacl) to the account associated with user@example.com.

$ aws s3 cp file.txt s3://amzn-s3-demo-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com

You can also specify a nondefault storage class (REDUCED_REDUNDANCY or STANDARD_IA) for objects that you upload to Amazon S3. To do this, use the --storage-class option.

$ aws s3 cp file.txt s3://amzn-s3-demo-bucket/ --storage-class REDUCED_REDUNDANCY
- recursive

When you use this option, the command is performed on all files or objects under the specified directory or prefix. The following example deletes s3://amzn-s3-demo-bucket/path and all of its contents.

$ aws s3 rm s3://amzn-s3-demo-bucket/path --recursive
Resources
AWS CLI reference:
Service reference:
- Working with Amazon S3 buckets in the Amazon S3 User Guide
- Working with Amazon S3 objects in the Amazon S3 User Guide
- Listing keys hierarchically using a prefix and delimiter in the Amazon S3 User Guide
- Abort multipart uploads to an S3 bucket using the AWS SDK for .NET (low-level) in the Amazon S3 User Guide