Bucket policy examples
With Amazon S3 bucket policies, you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them. You can even prevent authenticated users without the appropriate permissions from accessing your Amazon S3 resources.
This section presents examples of typical use cases for bucket policies. These sample policies use `DOC-EXAMPLE-BUCKET` as the resource value. To test these policies, replace the user input placeholders with your own information (such as your bucket name).
To grant or deny permissions to a set of objects, you can use wildcard characters (`*`) in Amazon Resource Names (ARNs) and other values. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as `.html`.
For information about bucket policies, see Using bucket policies. For more information about AWS Identity and Access Management (IAM) policy language, see Policies and Permissions in Amazon S3.
Note
When testing permissions by using the Amazon S3 console, you must grant additional permissions that the console requires: `s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket`. For an example walkthrough that grants permissions to users and tests those permissions by using the console, see Controlling access to a bucket with user policies.
Topics
- Requiring encryption
- Managing buckets using canned ACLs
- Managing object access with object tagging
- Managing object access by using global condition keys
- Managing access based on specific IP addresses
- Managing access based on HTTP or HTTPS requests
- Managing user access to specific folders
- Managing access for access logs
- Managing access to an Amazon CloudFront OAI
- Managing access for Amazon S3 Storage Lens
- Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports
- Requiring MFA
Requiring encryption
Require SSE-KMS for all objects written to a bucket
The following example policy requires every object that is written to the bucket to be encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If the object isn't encrypted with SSE-KMS, the request will be denied.
{ "Version": "2012-10-17", "Id": "PutObjPolicy", "Statement": [{ "Sid": "
DenyObjectsThatAreNotSSEKMS
", "Principal": "*", "Effect": "Deny", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*", "Condition": { "Null": { "s3:x-amz-server-side-encryption-aws-kms-key-id": "true" } } }] }
Require SSE-KMS with a specific AWS KMS key for all objects written to a bucket
The following example policy denies any objects from being written to the bucket if they aren’t encrypted with SSE-KMS by using a specific KMS key ID. Even if the objects are encrypted with SSE-KMS by using a per-request header or bucket default encryption, the objects cannot be written to the bucket if they haven't been encrypted with the specified KMS key. Make sure to replace the KMS key ARN that's used in this example with your own KMS key ARN.
{ "Version": "2012-10-17", "Id": "PutObjPolicy", "Statement": [{ "Sid": "
DenyObjectsThatAreNotSSEKMSWithSpecificKey
", "Principal": "*", "Effect": "Deny", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*", "Condition": { "ArnNotEqualsIfExists": { "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-2
:111122223333
:key/01234567-89ab-cdef-0123-456789abcdef
" } } }] }
Managing buckets using canned ACLs
Granting permissions to multiple accounts to upload objects or set object ACLs for public access
The following example policy grants the `s3:PutObject` and `s3:PutObjectAcl` permissions to multiple AWS accounts and requires that any requests for these operations include the `public-read` canned access control list (ACL). For more information, see Amazon S3 actions and Amazon S3 condition key examples.
Warning
The `public-read` canned ACL allows anyone in the world to view the objects in your bucket. Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings. When you grant anonymous access, anyone in the world can access your bucket. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with static website hosting. If you want to enable block public access settings for static website hosting, see Tutorial: Configuring a static website on Amazon S3.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AddPublicReadCannedAcl", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::
111122223333
:root", "arn:aws:iam::444455556666
:root" ] }, "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*", "Condition": { "StringEquals": { "s3:x-amz-acl": [ "public-read" ] } } } ] }
Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control
The following example shows how to allow another AWS account to upload objects to your bucket while ensuring that you have full control of the uploaded objects. This policy grants a specific AWS account (`111122223333`) the ability to upload objects only if that account includes the `bucket-owner-full-control` canned ACL on upload. The `StringEquals` condition in the policy specifies the `s3:x-amz-acl` condition key to express the canned ACL requirement. For more information, see Amazon S3 condition key examples.
{ "Version":"2012-10-17", "Statement":[ { "Sid":"PolicyForAllowUploadWithACL", "Effect":"Allow", "Principal":{"AWS":"
111122223333
"}, "Action":"s3:PutObject", "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*", "Condition": { "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"} } } ] }
Managing object access with object tagging
Allow a user to read only objects that have a specific tag key and value
The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value.
{ "Version":"2012-10-17", "Statement":[ { "Principal":{ "AWS":"arn:aws:iam::111122223333:role/JohnDoe" }, "Effect":"Allow", "Action":[ "s3:GetObject", "s3:GetObjectVersion" ], "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Condition":{ "StringEquals":{ "s3:ExistingObjectTag/environment":"production" } } } ] }
Restrict which object tag keys users can add
The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`. For more information, see Creating a condition that tests multiple key values in the IAM User Guide.
The policy ensures that every tag key specified in the request is an authorized tag key. The `ForAnyValue` qualifier in the condition ensures that at least one of the specified keys must be present in the request.
{ "Version": "2012-10-17", "Statement": [ {"Principal":{"AWS":[ "arn:aws:iam::
111122223333
:role/JohnDoe
" ] }, "Effect": "Allow", "Action": [ "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": {"ForAnyValue:StringEquals": {"s3:RequestObjectTagKeys": [ "Owner", "CreationDate" ] } } } ] }
Require a specific tag key and value when allowing users to add object tags
The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition requires the user to include a specific tag key (such as `Project`) with the value set to `X`.
{ "Version": "2012-10-17", "Statement": [ {"Principal":{"AWS":[ "arn:aws:iam::
111122223333
:user/JohnDoe
" ] }, "Effect": "Allow", "Action": [ "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": {"StringEquals": {"s3:RequestObjectTag/Project
": "X
" } } } ] }
Allow a user to only add objects with a specific object tag key and value
The following example policy grants a user permission to perform the `s3:PutObject` action so that they can add objects to a bucket. However, the `Condition` statement restricts the tag keys and values that are allowed on the uploaded objects. In this example, the user can add objects to the bucket only if the objects have the specific tag key (`Department`) with the value set to `Finance`.
{ "Version": "2012-10-17", "Statement": [{ "Principal":{ "AWS":[ "arn:aws:iam::
111122223333
:user/JohnDoe
" ] }, "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": { "StringEquals": { "s3:RequestObjectTag/Department
": "Finance
" } } }] }
Managing object access by using global condition keys
Global condition keys are condition context keys with an `aws` prefix. AWS services can support global condition keys or service-specific keys that include the service prefix. You can use the `Condition` element of a JSON policy to compare the keys in a request with the key values that you specify in your policy.
Restrict access to only Amazon S3 server access log deliveries
In the following example bucket policy, the `aws:SourceArn` global condition key is used to compare the Amazon Resource Name (ARN) of the resource making a service-to-service request with the ARN that is specified in the policy. The `aws:SourceArn` global condition key is used to prevent the Amazon S3 service from being used as a confused deputy during transactions between services. Only the Amazon S3 service is allowed to add objects to the Amazon S3 bucket.
This example bucket policy grants `s3:PutObject` permissions to only the logging service principal (`logging.s3.amazonaws.com`).
{ "Version": "2012-10-17", "Statement": [ { "Sid": "
AllowPutObjectS3ServerAccessLogsPolicy
", "Principal": { "Service": "logging.s3.amazonaws.com" }, "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET-logs
/*", "Condition": { "StringEquals": { "aws:SourceAccount": "111111111111
" }, "ArnLike": { "aws:SourceArn": "arn:aws:s3:::EXAMPLE-SOURCE-BUCKET
" } } }, { "Sid": "RestrictToS3ServerAccessLogs
", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET-logs
/*", "Condition": { "ForAllValues:StringNotEquals": { "aws:PrincipalServiceNamesList": "logging.s3.amazonaws.com" } } } ] }
Allow access to only your organization
If you want to require all IAM principals accessing a resource to be from an AWS account in your organization (including the AWS Organizations management account), you can use the `aws:PrincipalOrgID` global condition key.
To grant or restrict this type of access, define the `aws:PrincipalOrgID` condition and set the value to your organization ID in the bucket policy. The organization ID is used to control access to the bucket. When you use the `aws:PrincipalOrgID` condition, the permissions from the bucket policy are also applied to all new accounts that are added to the organization.
Here’s an example of a resource-based bucket policy that you can use to grant specific IAM principals in your organization direct access to your bucket. By adding the `aws:PrincipalOrgID` global condition key to your bucket policy, the principal account is now required to be in your organization to obtain access to the resource. Even if you accidentally specify an incorrect account when granting access, the `aws:PrincipalOrgID` global condition key acts as an additional safeguard. When this global key is used in a policy, it prevents all principals from outside of the specified organization from accessing the S3 bucket. Only principals from accounts in the listed organization are able to obtain access to the resource.
{ "Version": "2012-10-17", "Statement": [{ "Sid": "AllowGetObject", "Principal": { "AWS": "*" }, "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/*", "Condition": { "StringEquals": { "aws:PrincipalOrgID": ["o-aa111bb222
"] } } }] }
Managing access based on specific IP addresses
Restrict access to specific IP addresses
The following example denies all users from performing any Amazon S3 operations on objects in the specified buckets unless the request originates from the specified range of IP addresses.
This policy's `Condition` statement identifies `192.0.2.0/24` as the range of allowed Internet Protocol version 4 (IPv4) IP addresses.
The `Condition` block uses the `NotIpAddress` condition and the `aws:SourceIp` condition key, which is an AWS-wide condition key. The `aws:SourceIp` condition key can only be used for public IP address ranges. For more information about these condition keys, see Amazon S3 condition key examples. The `aws:SourceIp` IPv4 values use standard CIDR notation. For more information, see IAM JSON Policy Elements Reference in the IAM User Guide.
Warning
Before using this policy, replace the `192.0.2.0/24` IP address range in this example with an appropriate value for your use case. Otherwise, you will lose the ability to access your bucket.
{ "Version": "2012-10-17", "Id": "S3PolicyId1", "Statement": [ { "Sid": "IPAllow", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
", "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": { "NotIpAddress": { "aws:SourceIp": "192.0.2.0/24
" } } } ] }
Allow both IPv4 and IPv6 addresses
When you start using IPv6 addresses, we recommend that you update all of your organization's policies with your IPv6 address ranges in addition to your existing IPv4 ranges. Doing this will help ensure that the policies continue to work as you make the transition to IPv6.
The following example bucket policy shows how to mix IPv4 and IPv6 address ranges to cover all of your organization's valid IP addresses. The example policy allows access to the example IP addresses `192.0.2.1` and `2001:DB8:1234:5678::1` and denies access to the addresses `203.0.113.1` and `2001:DB8:1234:5678:ABCD::1`.
The `aws:SourceIp` condition key can only be used for public IP address ranges. The IPv6 values for `aws:SourceIp` must be in standard CIDR format. For IPv6, we support using `::` to represent a range of 0s (for example, `2001:DB8:1234:5678::/64`). For more information, see IP Address Condition Operators in the IAM User Guide.
Warning
Replace the IP address ranges in this example with appropriate values for your use case before using this policy. Otherwise, you might lose the ability to access your bucket.
{ "Id": "PolicyId2", "Version": "2012-10-17", "Statement": [ { "Sid": "AllowIPmix", "Effect": "Allow", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
", "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": { "IpAddress": { "aws:SourceIp": [ "192.0.2.0/24
", "2001:DB8:1234:5678::/64
" ] }, "NotIpAddress": { "aws:SourceIp": [ "203.0.113.0/24
", "2001:DB8:1234:5678:ABCD::/80
" ] } } } ] }
Managing access based on HTTP or HTTPS requests
Restrict access to only HTTPS requests
If you want to prevent potential attackers from manipulating network traffic, you can use HTTPS (TLS) to allow only encrypted connections while restricting HTTP requests from accessing your bucket. To determine whether the request is HTTP or HTTPS, use the `aws:SecureTransport` global condition key in your S3 bucket policy. The `aws:SecureTransport` condition key checks whether a request was sent by using HTTPS. If the condition key evaluates to `true`, the request was sent through HTTPS. If it evaluates to `false`, the request was sent through HTTP. You can then allow or deny access to your bucket based on the desired request scheme.
In the following example, the bucket policy explicitly denies HTTP requests.
{ "Version": "2012-10-17", "Statement": [{ "Sid": "RestrictToTLSRequestsOnly", "Action": "s3:*", "Effect": "Deny", "Resource": [ "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
", "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" ], "Condition": { "Bool": { "aws:SecureTransport": "false" } }, "Principal": "*" }] }
Restrict access to a specific HTTP referer
Suppose that you have a website with the domain name `www.example.com` or `example.com` with links to photos and videos stored in your bucket named `DOC-EXAMPLE-BUCKET`. By default, all Amazon S3 resources are private, so only the AWS account that created the resources can access them.
To allow read access to these objects from your website, you can add a bucket policy that allows the `s3:GetObject` permission with a condition that the `GET` request must originate from specific webpages. The following policy restricts requests by using the `StringLike` condition with the `aws:Referer` condition key.
{ "Version":"2012-10-17", "Id":"HTTP referer policy example", "Statement":[ { "Sid":"Allow only GET requests originating from www.example.com and example.com.", "Effect":"Allow", "Principal":"*", "Action":["s3:GetObject","s3:GetObjectVersion"], "Resource":"arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/*", "Condition":{ "StringLike":{"aws:Referer":["http://www.example.com/*
","http://example.com/*
"]} } } ] }
Make sure that the browsers that you use include the HTTP `referer` header in the request.
Warning
We recommend that you use caution when using the `aws:Referer` condition key. It is dangerous to include a publicly known HTTP referer header value. Unauthorized parties can use modified or custom browsers to provide any `aws:Referer` value that they choose. Therefore, do not use `aws:Referer` to prevent unauthorized parties from making direct AWS requests.
The `aws:Referer` condition key is offered only to allow customers to protect their digital content, such as content stored in Amazon S3, from being referenced on unauthorized third-party sites. For more information, see aws:Referer in the IAM User Guide.
Managing user access to specific folders
Grant users access to specific folders
Suppose that you're trying to grant users access to a specific folder. If the IAM user and the S3 bucket belong to the same AWS account, then you can use an IAM policy to grant the user access to a specific bucket folder. With this approach, you don't need to update your bucket policy to grant access. You can add the IAM policy to an IAM role that multiple users can switch to.
If the IAM identity and the S3 bucket belong to different AWS accounts, then you must grant cross-account access in both the IAM policy and the bucket policy. For more information about granting cross-account access, see Bucket owner granting cross-account bucket permissions.
The following example bucket policy grants `JohnDoe` full console access to only his folder (`home/JohnDoe/`). By creating a `home` folder and granting the appropriate permissions to your users, you can have multiple users share a single bucket. This policy consists of three `Allow` statements:
- `AllowRootAndHomeListingOfCompanyBucket`: Allows the user (`JohnDoe`) to list objects at the root level of the `DOC-EXAMPLE-BUCKET` bucket and in the `home` folder. This statement also allows the user to search on the prefix `home/` by using the console.
- `AllowListingOfUserFolder`: Allows the user (`JohnDoe`) to list all objects in the `home/JohnDoe/` folder and any subfolders.
- `AllowAllS3ActionsInUserFolder`: Allows the user to perform all Amazon S3 actions by granting `Read`, `Write`, and `Delete` permissions. Permissions are limited to the bucket owner's home folder.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "
AllowRootAndHomeListingOfCompanyBucket
", "Principal": { "AWS": [ "arn:aws:iam::111122223333
:user/JohnDoe
" ] }, "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET
"], "Condition": { "StringEquals": { "s3:prefix": ["", "home/", "home/JohnDoe
"], "s3:delimiter": ["/"] } } }, { "Sid": "AllowListingOfUserFolder
", "Principal": { "AWS": [ "arn:aws:iam::111122223333
:user/JohnDoe
" ] }, "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET
"], "Condition": { "StringLike": { "s3:prefix": ["home/JohnDoe
/*"] } } }, { "Sid": "AllowAllS3ActionsInUserFolder
", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333
:user/JohnDoe
" ] }, "Action": ["s3:*"], "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET
/home/JohnDoe
/*"] } ] }
Managing access for access logs
Grant access to Application Load Balancer for enabling access logs
When you enable access logs for Application Load Balancer, you must specify the name of the S3 bucket where the load balancer will store the logs. The bucket must have an attached policy that grants Elastic Load Balancing permission to write to the bucket.
In the following example, the bucket policy grants Elastic Load Balancing (ELB) permission to write the access logs to the bucket:
{ "Version": "2012-10-17", "Statement": [ { "Principal": { "AWS": "arn:aws:iam::
elb-account-id
:root" }, "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/prefix
/AWSLogs/111122223333
/*" } ] }
Note
Make sure to replace `elb-account-id` with the AWS account ID for Elastic Load Balancing for your AWS Region. For the list of Elastic Load Balancing Regions, see Attach a policy to your Amazon S3 bucket in the Elastic Load Balancing User Guide.
If your AWS Region does not appear in the supported Elastic Load Balancing Regions list, use the following policy, which grants permissions to the specified log delivery service.
{ "Version": "2012-10-17", "Statement": [ { "Principal": { "Service": "logdelivery.elasticloadbalancing.amazonaws.com" }, "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/prefix
/AWSLogs/111122223333
/*" } ] }
Then, make sure to configure your Elastic Load Balancing access logs by enabling them. You can verify your bucket permissions by creating a test file.
Managing access to an Amazon CloudFront OAI
Grant permission to an Amazon CloudFront OAI
The following example bucket policy grants a CloudFront origin access identity (OAI) permission to get (read) all objects in your S3 bucket. You can use a CloudFront OAI to allow users to access objects in your bucket through CloudFront but not directly through Amazon S3. For more information, see Restricting access to Amazon S3 content by using an Origin Access Identity in the Amazon CloudFront Developer Guide.
The following policy uses the OAI's ID as the policy's Principal
. For more
information about using S3 bucket policies to grant access to a CloudFront OAI, see Migrating from origin access identity (OAI) to origin access control (OAC) in the
Amazon CloudFront Developer Guide.
To use this example:
- Replace `EH1HDMB1FH2TC` with the OAI's ID. To find the OAI's ID, see the Origin Access Identity page on the CloudFront console, or use ListCloudFrontOriginAccessIdentities in the CloudFront API.
- Replace `DOC-EXAMPLE-BUCKET` with the name of your bucket.
{ "Version": "2012-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity
EH1HDMB1FH2TC
" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" } ] }
Managing access for Amazon S3 Storage Lens
Grant permissions for Amazon S3 Storage Lens
S3 Storage Lens aggregates your metrics and displays the information in the Account snapshot section on the Amazon S3 console Buckets page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data-protection best practices. Your dashboard has drill-down options to generate insights at the organization, account, bucket, object, or prefix level. You can also send a once-daily metrics export in CSV or Parquet format to an S3 bucket.
S3 Storage Lens can export your aggregated storage usage metrics to an Amazon S3 bucket for further analysis. The bucket where S3 Storage Lens places its metrics exports is known as the destination bucket. When setting up your S3 Storage Lens metrics export, you must have a bucket policy for the destination bucket. For more information, see Assessing your storage activity and usage with Amazon S3 Storage Lens.
The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) to a destination bucket. You use a bucket policy like this on the destination bucket when setting up an S3 Storage Lens metrics export.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "S3StorageLensExamplePolicy", "Effect": "Allow", "Principal": { "Service": "storage-lens.s3.amazonaws.com" }, "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::
destination-bucket
/destination-prefix
/StorageLens/111122223333
/*" ], "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control", "aws:SourceAccount": "111122223333
", "aws:SourceArn": "arn:aws:s3:region-code
:111122223333
:storage-lens/storage-lens-dashboard-configuration-id
" } } } ] }
When you're setting up an S3 Storage Lens organization-level metrics export, use the following modification to the previous bucket policy's `Resource` statement.
```json
"Resource": "arn:aws:s3:::destination-bucket/destination-prefix/StorageLens/your-organization-id/*",
```
Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports
Grant permissions for S3 Inventory and S3 analytics
S3 Inventory creates lists of the objects in a bucket, and S3 analytics Storage Class Analysis export creates output files of the data used in the analysis. The bucket that the inventory lists the objects for is called the source bucket. The bucket where the inventory file or the analytics export file is written to is called a destination bucket. When setting up an inventory or an analytics export, you must create a bucket policy for the destination bucket. For more information, see Amazon S3 Inventory and Amazon S3 analytics – Storage Class Analysis.
The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) from the account for the source bucket to the destination bucket. You use a bucket policy like this on the destination bucket when setting up S3 Inventory and S3 analytics export.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "InventoryAndAnalyticsExamplePolicy", "Effect": "Allow", "Principal": { "Service": "s3.amazonaws.com" }, "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::
DOC-EXAMPLE-DESTINATION-BUCKET
/*" ], "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:s3:::DOC-EXAMPLE-SOURCE-BUCKET
" }, "StringEquals": { "aws:SourceAccount": "111122223333
", "s3:x-amz-acl": "bucket-owner-full-control" } } } ] }
Restrict access to an S3 Inventory report
Amazon S3 Inventory creates lists of the objects in an S3 bucket and the metadata for each object. The `s3:PutInventoryConfiguration` permission allows a user to create an inventory report that includes all object metadata fields that are available and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory report. For more information about the metadata fields that are available in S3 Inventory, see Amazon S3 Inventory list.
To restrict a user from configuring an S3 Inventory report of all object metadata available, remove the `s3:PutInventoryConfiguration` permission from the user.
To restrict a user from accessing your S3 Inventory report in a destination bucket, add a bucket policy like the following example to the destination bucket. This example bucket policy denies all the principals except the user `Ana` from accessing the inventory report `DOC-EXAMPLE-DESTINATION-BUCKET-INVENTORY` in the destination bucket `DOC-EXAMPLE-DESTINATION-BUCKET`.
{ "Id": "GetObjectPolicy", "Version": "2012-10-17", "Statement": [{ "Sid": "AllowListBucket", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::
DOC-EXAMPLE-DESTINATION-BUCKET
", "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET
/*" ], "Principal": { "AWS": [ "arn:aws:iam::111122223333
:user/Ana
" ] } }, { "Sid": "AllowACertainUserToReadObject", "Action": [ "s3:GetObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/DOC-EXAMPLE-DESTINATION-BUCKET-INVENTORY
/*", "Principal": { "AWS": [ "arn:aws:iam::111122223333
:user/Ana
" ] } }, { "Sid": "DenyAllTheOtherUsersToReadObject", "Action": [ "s3:GetObject" ], "Effect": "Deny", "Resource": "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/DOC-EXAMPLE-DESTINATION-BUCKET-INVENTORY
/*", "Principal": { "AWS": "*" }, "Condition": { "ArnNotEquals": { "aws:PrincipalArn": "arn:aws:iam::111122223333
:user/Ana
" } } } ] }
Requiring MFA
Amazon S3 supports MFA-protected API access, a feature that can enforce multi-factor authentication (MFA) for access to your Amazon S3 resources. Multi-factor authentication provides an extra level of security that you can apply to your AWS environment. MFA is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code. For more information, see AWS Multi-Factor Authentication.
To enforce the MFA requirement, use the `aws:MultiFactorAuthAge` condition key in a bucket policy. IAM users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service (AWS STS). You provide the MFA code at the time of the AWS STS request.
When Amazon S3 receives a request with multi-factor authentication, the `aws:MultiFactorAuthAge` condition key provides a numeric value that indicates how long ago (in seconds) the temporary credential was created. If the temporary credential provided in the request was not created by using an MFA device, this key value is null (absent). In a bucket policy, you can add a condition to check this value, as shown in the following example.
This example policy denies any Amazon S3 operation on the `/taxdocuments` folder in the `DOC-EXAMPLE-BUCKET` bucket if the request is not authenticated by using MFA. To learn more about MFA, see Using Multi-Factor Authentication (MFA) in AWS in the IAM User Guide.
{ "Version": "2012-10-17", "Id": "123", "Statement": [ { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/taxdocuments
/*", "Condition": { "Null": { "aws:MultiFactorAuthAge": true }} } ] }
The `Null` condition in the `Condition` block evaluates to `true` if the `aws:MultiFactorAuthAge` condition key value is null, indicating that the temporary security credentials in the request were created without an MFA device.
The following bucket policy is an extension of the preceding bucket policy. It includes two policy statements. One statement allows the `s3:GetObject` permission on a bucket (`DOC-EXAMPLE-BUCKET`) to everyone. Another statement further restricts access to the `DOC-EXAMPLE-BUCKET/taxdocuments` folder in the bucket by requiring MFA.
{ "Version": "2012-10-17", "Id": "123", "Statement": [ { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/taxdocuments
/*", "Condition": { "Null": { "aws:MultiFactorAuthAge": true } } }, { "Sid": "", "Effect": "Allow", "Principal": "*", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" } ] }
You can optionally use a numeric condition to limit the duration for which the `aws:MultiFactorAuthAge` key is valid. The duration that you specify with the `aws:MultiFactorAuthAge` key is independent of the lifetime of the temporary security credential that's used in authenticating the request.
For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created. The policy denies any operation if the `aws:MultiFactorAuthAge` key value indicates that the temporary session was created more than an hour ago (3,600 seconds).
{ "Version": "2012-10-17", "Id": "123", "Statement": [ { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::
DOC-EXAMPLE-BUCKET
/taxdocuments
/*", "Condition": {"Null": {"aws:MultiFactorAuthAge": true }} }, { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/taxdocuments
/*", "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }} }, { "Sid": "", "Effect": "Allow", "Principal": "*", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET
/*" } ] }