Enabling Logging from Certain AWS Services
While many services publish logs only to CloudWatch Logs, some AWS services can publish logs directly to Amazon Simple Storage Service (Amazon S3) or Amazon Kinesis Data Firehose. If your main requirement for logs is storage or processing in one of these services, you can have the service that produces the logs send them directly to Amazon S3 or Kinesis Data Firehose without additional setup.
Even when logs are published directly to Amazon S3 or Kinesis Data Firehose, charges apply. For more information, see Vended Logs on the Logs tab at Amazon CloudWatch Pricing.
Permissions
Some of these AWS services use a common infrastructure to send their logs to CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. To enable the AWS services listed in the following table to send their logs to these destinations, you must be logged in as a user that has certain permissions.
Additionally, permissions must be granted to AWS to enable the logs to be sent. AWS can automatically create those permissions when the logs are set up, or you can create them yourself first before you set up the logging.
If you choose to have AWS automatically set up the necessary permissions and resource policies when you or someone in your organization first sets up the sending of logs, then the user who is setting up the sending of logs must have certain permissions, as explained later in this section. Alternatively, you can create the resource policies yourself, and then the users who set up the sending of logs do not need as many permissions.
The following table summarizes the types of logs and the log destinations that the information in this section applies to.
| Log type | CloudWatch Logs | Amazon S3 | Kinesis Data Firehose |
| --- | --- | --- | --- |
| Amazon API Gateway access logs | ✓ | | |
| Amazon Chime media quality metric logs and SIP message logs | ✓ | | |
| CloudFront access logs and streaming access logs | | ✓ | |
| Amazon EC2 Spot Instance data feed | | ✓ | |
| AWS Global Accelerator flow logs | | ✓ | |
| Amazon Managed Streaming for Apache Kafka broker logs | ✓ | ✓ | ✓ |
| Network Load Balancer access logs | | ✓ | |
| AWS Network Firewall logs | ✓ | ✓ | ✓ |
| Amazon Route 53 resolver query logs | ✓ | | ✓ |
| Amazon SageMaker worker events | ✓ | | |
| AWS Step Functions express workflow history and standard workflow history | ✓ | | |
| AWS Storage Gateway audit logs and health logs | ✓ | | |
| Amazon Virtual Private Cloud flow logs | | ✓ | |
The following sections provide more details for each of these destinations.
Logs sent to CloudWatch Logs
When you set up the log types in the following list to be sent to CloudWatch Logs, AWS creates or changes the resource policies associated with the log group receiving the logs, if needed. Continue reading this section to see the details.
This section applies when the following types of logs are sent to CloudWatch Logs:
- Amazon API Gateway access logs
- AWS Storage Gateway audit logs and health logs
- Amazon Chime media quality metric logs and SIP message logs
- Amazon Managed Streaming for Apache Kafka broker logs
- AWS Network Firewall logs
- Amazon Route 53 resolver query logs
- Amazon SageMaker worker events
- AWS Step Functions express workflow history and standard workflow history
User permissions
To set up sending any of these types of logs to CloudWatch Logs for the first time, you must be logged into an account with the following permissions. An example IAM policy granting these permissions is sketched after the list.
- logs:CreateLogDelivery
- logs:PutResourcePolicy
- logs:DescribeResourcePolicies
- logs:DescribeLogGroups
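For illustration, the following AWS CLI sketch creates a customer managed IAM policy that grants these four permissions. The policy name VendedLogDeliverySetup and the broad "Resource": "*" scope are assumptions for this example; you might scope the resource more narrowly in a real policy.

```sh
# Hypothetical policy name; grants the four permissions listed above.
aws iam create-policy \
    --policy-name VendedLogDeliverySetup \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "logs:PutResourcePolicy",
                "logs:DescribeResourcePolicies",
                "logs:DescribeLogGroups"
            ],
            "Resource": "*"
        }]
    }'
```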
If any of these types of logs is already being sent to a log group in CloudWatch Logs, then to set up the sending of another one of these types of logs to that same log group, you need only the logs:CreateLogDelivery permission.
Log group resource policy
The log group where the logs are being sent must have a resource policy that includes certain permissions. If the log group currently does not have a resource policy, and the user setting up the logging has the logs:PutResourcePolicy, logs:DescribeResourcePolicies, and logs:DescribeLogGroups permissions for the log group, then AWS automatically creates the following policy for it when you begin sending the logs to CloudWatch Logs.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSLogDeliveryWrite20150319", "Effect": "Allow", "Principal": { "Service": [ "delivery.logs.amazonaws.com" ] }, "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-east-1:0123456789:log-group:
my-log-group
:log-stream:*" ] } ] }
If the log group does have a resource policy but that policy doesn't contain the statement shown in the previous policy, and the user setting up the logging has the logs:PutResourcePolicy, logs:DescribeResourcePolicies, and logs:DescribeLogGroups permissions for the log group, that statement is appended to the log group's resource policy.
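Before deciding whether to create or update a policy yourself, you could list the resource policies that already exist in the current Region with the following command.

```sh
# List the CloudWatch Logs resource policies in the current Region.
aws logs describe-resource-policies
```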
Log group resource policy size limit considerations
These services must list each log group that they're sending logs to in the resource policy, and CloudWatch Logs resource policies are limited to 5120 characters. A service that sends logs to a large number of log groups may run into this limit.
To mitigate this, CloudWatch Logs monitors the size of resource policies used by the service that is sending logs. When it detects that a policy approaches the size limit of 5120 characters, CloudWatch Logs automatically enables /aws/vendedlogs/* in the resource policy for that service. You can then start using log groups with names that start with /aws/vendedlogs/ as the destinations for logs from these services.
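For example, a destination log group under that prefix could be created as follows; the name after the prefix is only illustrative.

```sh
# Hypothetical log group name under the /aws/vendedlogs/ prefix.
aws logs create-log-group \
    --log-group-name /aws/vendedlogs/example-service/my-logs
```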
Logs sent to Amazon S3
When you set up the log types in the following list to be sent to Amazon S3, AWS creates or changes the resource policies associated with the S3 bucket that is receiving the logs, if needed. Continue reading this section to see the details.
This section applies when the following types of logs are sent to Amazon S3:
- CloudFront access logs and streaming access logs. CloudFront uses a different permissions model than the other services in this list. For more information, see Permissions required to configure standard logging and to access your log files.
- Amazon EC2 Spot Instance data feed
- AWS Global Accelerator flow logs
- Amazon Managed Streaming for Apache Kafka broker logs
- Network Load Balancer access logs
- AWS Network Firewall logs
- Amazon Virtual Private Cloud flow logs
Logs published directly to Amazon S3 are published to an existing bucket that you specify. One or more log files are created every five minutes in the specified bucket.
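The bucket policy later in this section shows that delivered objects are written under an AWSLogs/account-ID/ prefix in the bucket, so a listing such as the following (with my-bucket as a placeholder name) would show recent deliveries.

```sh
# List delivered log objects; my-bucket is a placeholder bucket name.
aws s3 ls s3://my-bucket/AWSLogs/ --recursive
```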
User permissions
To set up sending any of these types of logs to Amazon S3 for the first time, you must be logged into an account with the following permissions. An example IAM policy granting these permissions is sketched after the list.
- logs:CreateLogDelivery
- S3:GetBucketPolicy
- S3:PutBucketPolicy
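As with the CloudWatch Logs case, a customer managed policy granting these permissions could be created as sketched below. The policy name and the "Resource": "*" scope are assumptions for the example; the S3 actions could instead be scoped to the destination bucket's ARN.

```sh
# Hypothetical policy name; grants the permissions listed above.
aws iam create-policy \
    --policy-name VendedLogDeliveryToS3Setup \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "s3:GetBucketPolicy",
                "s3:PutBucketPolicy"
            ],
            "Resource": "*"
        }]
    }'
```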
If any of these types of logs is already being sent to an Amazon S3 bucket, then to set up the sending of another one of these types of logs to the same bucket, you need only the logs:CreateLogDelivery permission.
S3 bucket resource policy
The S3 bucket where the logs are being sent must have a resource policy that includes certain permissions. If the bucket currently does not have a resource policy, and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, then AWS automatically creates the following policy for it when you begin sending the logs to Amazon S3.
{ "Version": "2012-10-17", "Id": "AWSLogDeliveryWrite20150319", "Statement": [ { "Sid": "AWSLogDeliveryAclCheck", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::my-bucket" }, { "Sid": "AWSLogDeliveryWrite", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::
my-bucket
/AWSLogs/account-ID
/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ] }
If the bucket does have a resource policy but that policy doesn't contain the statement shown in the previous policy, and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, that statement is appended to the bucket's resource policy.
Logs sent to Kinesis Data Firehose
This section applies when the following types of logs are sent to Kinesis Data Firehose:
- Amazon Managed Streaming for Apache Kafka broker logs
- AWS Network Firewall logs
- Amazon Route 53 resolver query logs
User permissions
To set up sending any of these types of logs to Kinesis Data Firehose for the first time, you must be logged into an account with the following permissions. An example IAM policy granting these permissions is sketched after the list.
- logs:CreateLogDelivery
- firehose:TagDeliveryStream
- iam:CreateServiceLinkedRole
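As before, a customer managed policy granting these permissions could be created as sketched below; the policy name and the "Resource": "*" scope are illustrative assumptions.

```sh
# Hypothetical policy name; grants the permissions listed above.
aws iam create-policy \
    --policy-name VendedLogDeliveryToFirehoseSetup \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "firehose:TagDeliveryStream",
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "*"
        }]
    }'
```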
If any of these types of logs is already being sent to Kinesis Data Firehose, then to set up the sending of another one of these types of logs to Kinesis Data Firehose, you need only the logs:CreateLogDelivery and firehose:TagDeliveryStream permissions.
IAM roles used for permissions
Because Kinesis Data Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Kinesis Data Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery. This service-linked role includes the following permissions.
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "firehose:PutRecord", "firehose:PutRecordBatch", "firehose:ListTagsForDeliveryStream" ], "Resource": "*", "Condition": { "StringEquals": { "firehose:ResourceTag/LogDeliveryEnabled": "true" } }, "Effect": "Allow" } ] }
This service-linked role grants permission for all Kinesis Data Firehose delivery streams that have the LogDeliveryEnabled tag set to true. AWS gives this tag to the destination delivery stream when you set up the logging.
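AWS applies this tag for you when you set up the logging, but for reference, the tag could also be applied manually as sketched below; my-delivery-stream is a placeholder name.

```sh
# Tag a delivery stream so the service-linked role's condition matches it.
aws firehose tag-delivery-stream \
    --delivery-stream-name my-delivery-stream \
    --tags Key=LogDeliveryEnabled,Value=true
```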
This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }