Enable logging from AWS services - Amazon CloudWatch Logs


While many services publish logs only to CloudWatch Logs, some AWS services can publish logs directly to Amazon Simple Storage Service (Amazon S3) or Amazon Data Firehose. If your main requirement for logs is storage or processing in one of those services, you can have the service that produces the logs send them directly to Amazon S3 or Firehose, without first sending them to CloudWatch Logs.

Even when logs are published directly to Amazon S3 or Firehose, charges apply. For more information, see Vended Logs on the Logs tab at Amazon CloudWatch Pricing.

Some AWS services use a common infrastructure to send their logs. To enable logging from these services, you must be logged in as a user that has certain permissions. Additionally, you must grant permissions to AWS to enable the logs to be sent.

For services that require these permissions, there are two versions of the permissions needed. The services that require these extra permissions are noted as Supported [V1 Permissions] and Supported [V2 Permissions] in the table. For information about these required permissions, see the sections after the table.

In the following list, each log type is followed by the destinations that can receive it (CloudWatch Logs, Amazon S3, or Firehose) and the support level for each destination:

  • Amazon API Gateway access logs: CloudWatch Logs (Supported [V1 Permissions])
  • AWS AppSync logs: CloudWatch Logs (Supported)
  • Amazon Aurora MySQL logs: CloudWatch Logs (Supported)
  • Amazon Bedrock Knowledge bases logging: CloudWatch Logs, Amazon S3, and Firehose (Supported [V2 Permissions])
  • Amazon Chime media quality metric logs and SIP message logs: CloudWatch Logs (Supported [V1 Permissions])
  • CloudFront access logs: Amazon S3 (Supported [V1 Permissions])
  • AWS CloudHSM audit logs: CloudWatch Logs (Supported)
  • CloudWatch Evidently evaluation event logs: CloudWatch Logs and Amazon S3 (Supported [V1 Permissions])
  • CloudWatch Internet Monitor logs: CloudWatch Logs (Supported [V1 Permissions])
  • CloudTrail logs: CloudWatch Logs (Supported)
  • AWS CodeBuild logs: CloudWatch Logs (Supported)
  • Amazon CodeWhisperer event logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V2 Permissions])
  • Amazon Cognito logs: CloudWatch Logs (Supported [V1 Permissions])
  • Amazon Connect logs: CloudWatch Logs (Supported)
  • AWS DataSync logs: CloudWatch Logs (Supported)
  • Amazon ElastiCache for Redis logs: CloudWatch Logs and Firehose (Supported [V1 Permissions])
  • AWS Elastic Beanstalk logs: CloudWatch Logs (Supported)
  • Amazon Elastic Container Service logs: CloudWatch Logs (Supported)
  • Amazon Elastic Kubernetes Service control plane logs: CloudWatch Logs (Supported)
  • Amazon EventBridge Pipes logging: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS Fargate logs: CloudWatch Logs (Supported)
  • AWS Fault Injection Service experiment logs: CloudWatch Logs (Supported [V1 Permissions])
  • Amazon FinSpace: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS Global Accelerator flow logs: Amazon S3 (Supported [V1 Permissions])
  • AWS Glue job logs: CloudWatch Logs (Supported)
  • IAM Identity Center error logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V2 Permissions])
  • Amazon Interactive Video Service chat logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS IoT logs: CloudWatch Logs (Supported)
  • AWS IoT FleetWise logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS Lambda logs: CloudWatch Logs (Supported)
  • Amazon Macie logs: CloudWatch Logs (Supported)
  • AWS Mainframe Modernization: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Amazon Managed Service for Prometheus logs: CloudWatch Logs (Supported [V1 Permissions])
  • Amazon MSK broker logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Amazon MSK Connect logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Amazon MQ general and audit logs: CloudWatch Logs (Supported)
  • AWS Network Firewall logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Network Load Balancer access logs: Amazon S3 (Supported [V1 Permissions])
  • OpenSearch logs: CloudWatch Logs (Supported)
  • Amazon OpenSearch Service ingestion logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS OpsWorks logs: CloudWatch Logs (Supported)
  • Amazon Relational Database Service PostgreSQL logs: CloudWatch Logs (Supported)
  • AWS RoboMaker logs: CloudWatch Logs (Supported)
  • Amazon Route 53 public DNS query logs: CloudWatch Logs (Supported)
  • Amazon Route 53 resolver query logs: CloudWatch Logs and Amazon S3 (Supported [V1 Permissions])
  • Amazon SageMaker events: CloudWatch Logs (Supported [V1 Permissions])
  • Amazon SageMaker worker events: CloudWatch Logs (Supported [V1 Permissions])
  • AWS Site-to-Site VPN logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Amazon Simple Notification Service logs: CloudWatch Logs (Supported)
  • Amazon Simple Notification Service data protection policy logs: CloudWatch Logs (Supported)
  • EC2 Spot Instance data feed files: Amazon S3 (Supported [V1 Permissions])
  • AWS Step Functions Express Workflow and Standard Workflow logs: CloudWatch Logs (Supported [V1 Permissions])
  • Storage Gateway audit logs and health logs: CloudWatch Logs (Supported [V1 Permissions])
  • AWS Transfer Family logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS Verified Access logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • Amazon Virtual Private Cloud flow logs: CloudWatch Logs (Supported); Amazon S3 and Firehose (Supported [V1 Permissions])
  • Amazon VPC Lattice access logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V1 Permissions])
  • AWS WAF logs: CloudWatch Logs and Amazon S3 (Supported [V1 Permissions]); Firehose (Supported)
  • Amazon WorkMail logs: CloudWatch Logs, Amazon S3, and Firehose (Supported [V2 Permissions])

Logging that requires additional permissions [V1]

Some AWS services use a common infrastructure to send their logs to CloudWatch Logs, Amazon S3, or Firehose. To enable the AWS services listed earlier in this topic to send their logs to these destinations, you must be logged in as a user that has certain permissions.

Additionally, permissions must be granted to AWS to enable the logs to be sent. AWS can automatically create those permissions when the logs are set up, or you can create them yourself first before you set up the logging. For cross-account delivery, you must manually create the permission policies yourself.

If you choose to have AWS automatically set up the necessary permissions and resource policies when you or someone in your organization first sets up the sending of logs, then the user who is setting up the sending of logs must have certain permissions, as explained later in this section. Alternatively, you can create the resource policies yourself, and then the users who set up the sending of logs do not need as many permissions.

The table earlier in this topic summarizes which types of logs and which log destinations the information in this section applies to.

The following sections provide more details for each of these destinations.

Logs sent to CloudWatch Logs

Important

When you set up the log types in the following list to be sent to CloudWatch Logs, AWS creates or changes the resource policies associated with the log group receiving the logs, if needed. Continue reading this section to see the details.

This section applies when the types of logs listed earlier in this topic are sent to CloudWatch Logs.

User permissions

To be able to set up sending any of these types of logs to CloudWatch Logs for the first time, you must be logged into an account with the following permissions.

  • logs:CreateLogDelivery

  • logs:PutResourcePolicy

  • logs:DescribeResourcePolicies

  • logs:DescribeLogGroups

    Note

    When you specify the logs:DescribeLogGroups, logs:DescribeResourcePolicies, or logs:PutResourcePolicy permission, be sure to set the ARN of its Resource line to use a * wildcard, instead of specifying only a single log group name. For example, "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:*"

If any of these types of logs is already being sent to a log group in CloudWatch Logs, then to set up the sending of another one of these types of logs to that same log group, you only need the logs:CreateLogDelivery permission.
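As a sketch, these four permissions could be granted with an identity-based policy like the following. The Sids, Region, and account ID are placeholder values, and the statement split (scoping logs:CreateLogDelivery to * because it does not act on a log group) is illustrative rather than prescriptive. Note the wildcard log-group ARN, as required by the preceding note:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetUpVendedLogDelivery",
      "Effect": "Allow",
      "Action": "logs:CreateLogDelivery",
      "Resource": "*"
    },
    {
      "Sid": "ManageLogGroupResourcePolicies",
      "Effect": "Allow",
      "Action": [
        "logs:PutResourcePolicy",
        "logs:DescribeResourcePolicies",
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:*"
    }
  ]
}
```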

Log group resource policy

The log group where the logs are being sent must have a resource policy that includes certain permissions. If the log group currently does not have a resource policy, and the user setting up the logging has the logs:PutResourcePolicy, logs:DescribeResourcePolicies, and logs:DescribeLogGroups permissions for the log group, then AWS automatically creates the following policy for it when you begin sending the logs to CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite20150319",
      "Effect": "Allow",
      "Principal": { "Service": ["delivery.logs.amazonaws.com"] },
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": ["arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"],
      "Condition": {
        "StringEquals": { "aws:SourceAccount": ["0123456789"] },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
      }
    }
  ]
}

If the log group does have a resource policy but that policy doesn't contain the statement shown in the previous policy, and the user setting up the logging has the logs:PutResourcePolicy, logs:DescribeResourcePolicies, and logs:DescribeLogGroups permissions for the log group, that statement is appended to the log group's resource policy.

Log group resource policy size limit considerations

These services must list each log group that they're sending logs to in the resource policy, and CloudWatch Logs resource policies are limited to 5120 characters. A service that sends logs to a large number of log groups might run into this limit.

To mitigate this, CloudWatch Logs monitors the size of resource policies used by the service that is sending logs, and when it detects that a policy approaches the size limit of 5120 characters, CloudWatch Logs automatically enables /aws/vendedlogs/* in the resource policy for that service. You can then start using log groups with names that start with /aws/vendedlogs/ as the destinations for logs from these services.

Logs sent to Amazon S3

When you set logs to be sent to Amazon S3, AWS creates or changes the resource policies associated with the S3 bucket that is receiving the logs, if needed.

Logs published directly to Amazon S3 are published to an existing bucket that you specify. One or more log files are created every five minutes in the specified bucket.

When you deliver logs for the first time to an Amazon S3 bucket, the service that delivers logs records the owner of the bucket to ensure that the logs are delivered only to a bucket belonging to that account. As a result, to change the Amazon S3 bucket owner, you must re-create or update the log subscription in the originating service.

Note

CloudFront uses a different permissions model than the other services that send vended logs to S3. For more information, see Permissions required to configure standard logging and to access your log files.

Additionally, if you use the same S3 bucket for CloudFront access logs and another log source, enabling ACLs on the bucket for CloudFront also grants permission to all other log sources that use this bucket.

User permissions

To be able to set up sending any of these types of logs to Amazon S3 for the first time, you must be logged into an account with the following permissions.

  • logs:CreateLogDelivery

  • S3:GetBucketPolicy

  • S3:PutBucketPolicy

If any of these types of logs is already being sent to an Amazon S3 bucket, then to set up the sending of another one of these types of logs to the same bucket you only need to have the logs:CreateLogDelivery permission.
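Taken together, these could be granted with a policy like the following sketch. The Sids and bucket name are placeholders, and logs:CreateLogDelivery is scoped to * here only because it does not act on the bucket itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetUpVendedLogDelivery",
      "Effect": "Allow",
      "Action": "logs:CreateLogDelivery",
      "Resource": "*"
    },
    {
      "Sid": "ManageBucketPolicyForLogDelivery",
      "Effect": "Allow",
      "Action": ["s3:GetBucketPolicy", "s3:PutBucketPolicy"],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```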

S3 bucket resource policy

The S3 bucket where the logs are being sent must have a resource policy that includes certain permissions. If the bucket currently does not have a resource policy and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, then AWS automatically creates the following policy for it when you begin sending the logs to Amazon S3.

{
  "Version": "2012-10-17",
  "Id": "AWSLogDeliveryWrite20150319",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": ["0123456789"] },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
      }
    },
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
      }
    }
  ]
}

In the previous policy, for aws:SourceAccount, specify the list of account IDs for which logs are being delivered to this bucket. For aws:SourceArn, specify the list of ARNs of the resources that generate the logs, in the form arn:aws:logs:source-region:source-account-id:*.

If the bucket has a resource policy but that policy doesn't contain the statement shown in the previous policy, and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, that statement is appended to the bucket's resource policy.

Note

In some cases, you might see AccessDenied errors in AWS CloudTrail if the s3:ListBucket permission has not been granted to delivery.logs.amazonaws.com. To avoid these errors in your CloudTrail logs, grant the s3:ListBucket permission to delivery.logs.amazonaws.com, and include the Condition parameters shown with the s3:GetBucketAcl permission in the preceding bucket policy. To make this simpler, instead of creating a new Statement, you can update the AWSLogDeliveryAclCheck statement to use "Action": ["s3:GetBucketAcl", "s3:ListBucket"].
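With that change applied, the AWSLogDeliveryAclCheck statement from the bucket policy shown earlier would look like this sketch (bucket name, account ID, and Region are the same placeholders as above; s3:ListBucket acts on the bucket ARN, so no new Resource entry is needed):

```json
{
  "Sid": "AWSLogDeliveryAclCheck",
  "Effect": "Allow",
  "Principal": { "Service": "delivery.logs.amazonaws.com" },
  "Action": ["s3:GetBucketAcl", "s3:ListBucket"],
  "Resource": "arn:aws:s3:::my-bucket",
  "Condition": {
    "StringEquals": { "aws:SourceAccount": ["0123456789"] },
    "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
  }
}
```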

Amazon S3 bucket server-side encryption

You can protect the data in your Amazon S3 bucket by enabling either server-side encryption with Amazon S3-managed keys (SSE-S3) or server-side encryption with an AWS KMS key stored in AWS Key Management Service (SSE-KMS). For more information, see Protecting data using server-side encryption.

If you choose SSE-S3, no additional configuration is required. Amazon S3 handles the encryption key.

Warning

If you choose SSE-KMS, you must use a customer managed key, because using an AWS managed key is not supported for this scenario. If you set up encryption using an AWS managed key, the logs will be delivered in an unreadable format.

When you use a customer managed AWS KMS key, you can specify the Amazon Resource Name (ARN) of the customer managed key when you enable bucket encryption. You must add the following to the key policy for your customer managed key (not to the bucket policy for your S3 bucket), so that the log delivery account can write to your S3 bucket.


{
  "Sid": "Allow Logs Delivery to use the key",
  "Effect": "Allow",
  "Principal": { "Service": ["delivery.logs.amazonaws.com"] },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": { "aws:SourceAccount": ["0123456789"] },
    "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
  }
}

For aws:SourceAccount, specify the list of account IDs for which logs are being delivered to this bucket. For aws:SourceArn, specify the list of ARNs of the resources that generate the logs, in the form arn:aws:logs:source-region:source-account-id:*.

Logs sent to Firehose

This section applies when the types of logs listed earlier in this topic are sent to Firehose.

User permissions

To be able to set up sending any of these types of logs to Firehose for the first time, you must be logged into an account with the following permissions.

  • logs:CreateLogDelivery

  • firehose:TagDeliveryStream

  • iam:CreateServiceLinkedRole

If any of these types of logs is already being sent to Firehose, then to set up the sending of another one of these types of logs to Firehose you need to have only the logs:CreateLogDelivery and firehose:TagDeliveryStream permissions.
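A sketch of an identity-based policy granting these permissions (the Sid is a placeholder; the resources are left broad for illustration and can be narrowed to specific delivery streams and to the AWSServiceRoleForLogDelivery role ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetUpVendedLogDeliveryToFirehose",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogDelivery",
        "firehose:TagDeliveryStream",
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": "*"
    }
  ]
}
```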

IAM roles used for permissions

Because Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery. This service-linked role includes the following permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch",
        "firehose:ListTagsForDeliveryStream"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/LogDeliveryEnabled": "true" }
      },
      "Effect": "Allow"
    }
  ]
}

This service-linked role grants permission for all Firehose delivery streams that have the LogDeliveryEnabled tag set to true. AWS gives this tag to the destination delivery stream when you set up the logging.

This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Logging that requires additional permissions [V2]

Some AWS services use a new method to send their logs. This is a flexible method that enables you to set up log delivery from these services to one or more of the following destinations: CloudWatch Logs, Amazon S3, or Firehose.

A working log delivery consists of three elements:

  • A DeliverySource, which is a logical object that represents the resource(s) that actually send the logs.

  • A DeliveryDestination, which is a logical object that represents the actual delivery destination.

  • A Delivery, which connects a delivery source to a delivery destination.

To configure logs delivery between a supported AWS service and a destination, you must do the following:

  • Create a delivery source with PutDeliverySource.

  • Create a delivery destination with PutDeliveryDestination.

  • If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy authorizes creating a delivery from the delivery source in account A to the delivery destination in account B. For cross-account delivery, you must manually create the permission policies yourself.

  • Create a delivery by pairing exactly one delivery source and one delivery destination, by using CreateDelivery.
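For the cross-account step above, the policy you attach with PutDeliveryDestinationPolicy is an IAM-style resource policy. The following is a sketch only; the Sid, Region, and account IDs (111122223333 as source account A, 222233334444 as destination account B) are placeholders, and you should confirm the exact resource list against the documentation for your log-generating service:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreateDeliveryFromAccountA",
      "Effect": "Allow",
      "Principal": { "AWS": "111122223333" },
      "Action": "logs:CreateDelivery",
      "Resource": [
        "arn:aws:logs:us-east-1:222233334444:delivery:*",
        "arn:aws:logs:us-east-1:222233334444:delivery-source:*",
        "arn:aws:logs:us-east-1:222233334444:delivery-destination:*"
      ]
    }
  ]
}
```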

The following sections provide the details of the permissions you need to have when you are signed in to set up log delivery to each type of destination, using the V2 process. These permissions can be granted to an IAM role that you are signed in with.

Important

It is your responsibility to remove log delivery resources after deleting the log-generating resource. To do so, follow these steps.

  1. Delete the Delivery by using the DeleteDelivery operation.

  2. Delete the DeliverySource by using the DeleteDeliverySource operation.

  3. If the DeliveryDestination associated with the DeliverySource that you just deleted is used only for this specific DeliverySource, then you can remove it by using the DeleteDeliveryDestination operation.

Logs sent to CloudWatch Logs

User permissions

To enable sending logs to CloudWatch Logs, you must be signed in with the following permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:GetDelivery",
        "logs:GetDeliverySource",
        "logs:PutDeliveryDestination",
        "logs:GetDeliveryDestinationPolicy",
        "logs:DeleteDeliverySource",
        "logs:PutDeliveryDestinationPolicy",
        "logs:CreateDelivery",
        "logs:GetDeliveryDestination",
        "logs:PutDeliverySource",
        "logs:DeleteDeliveryDestination",
        "logs:DeleteDeliveryDestinationPolicy",
        "logs:DeleteDelivery"
      ],
      "Resource": [
        "arn:aws:logs:region:account-id:delivery:*",
        "arn:aws:logs:region:account-id:delivery-source:*",
        "arn:aws:logs:region:account-id:delivery-destination:*"
      ]
    },
    {
      "Sid": "ListAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeDeliveryDestinations",
        "logs:DescribeDeliverySources",
        "logs:DescribeDeliveries"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowUpdatesToResourcePolicyCWL",
      "Effect": "Allow",
      "Action": [
        "logs:PutResourcePolicy",
        "logs:DescribeResourcePolicies",
        "logs:DescribeLogGroups"
      ],
      "Resource": ["arn:aws:logs:region:account-id:*"]
    }
  ]
}

Log group resource policy

The log group where the logs are being sent must have a resource policy that includes certain permissions. If the log group currently does not have a resource policy, and the user setting up the logging has the logs:PutResourcePolicy, logs:DescribeResourcePolicies, and logs:DescribeLogGroups permissions for the log group, then AWS automatically creates the following policy for it when you begin sending the logs to CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite20150319",
      "Effect": "Allow",
      "Principal": { "Service": ["delivery.logs.amazonaws.com"] },
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": ["arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"],
      "Condition": {
        "StringEquals": { "aws:SourceAccount": ["0123456789"] },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] }
      }
    }
  ]
}

Log group resource policy size limit considerations

These services must list each log group that they're sending logs to in the resource policy, and CloudWatch Logs resource policies are limited to 5120 characters. A service that sends logs to a large number of log groups may run into this limit.

To mitigate this, CloudWatch Logs monitors the size of resource policies used by the service that is sending logs, and when it detects that a policy approaches the size limit of 5120 characters, CloudWatch Logs automatically enables /aws/vendedlogs/* in the resource policy for that service. You can then start using log groups with names that start with /aws/vendedlogs/ as the destinations for logs from these services.

Logs sent to Amazon S3

User permissions

To enable sending logs to Amazon S3, you must be signed in with the following permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:GetDelivery",
        "logs:GetDeliverySource",
        "logs:PutDeliveryDestination",
        "logs:GetDeliveryDestinationPolicy",
        "logs:DeleteDeliverySource",
        "logs:PutDeliveryDestinationPolicy",
        "logs:CreateDelivery",
        "logs:GetDeliveryDestination",
        "logs:PutDeliverySource",
        "logs:DeleteDeliveryDestination",
        "logs:DeleteDeliveryDestinationPolicy",
        "logs:DeleteDelivery"
      ],
      "Resource": [
        "arn:aws:logs:region:account-id:delivery:*",
        "arn:aws:logs:region:account-id:delivery-source:*",
        "arn:aws:logs:region:account-id:delivery-destination:*"
      ]
    },
    {
      "Sid": "ListAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeDeliveryDestinations",
        "logs:DescribeDeliverySources",
        "logs:DescribeDeliveries"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowUpdatesToResourcePolicyS3",
      "Effect": "Allow",
      "Action": ["s3:PutBucketPolicy", "s3:GetBucketPolicy"],
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}

The S3 bucket where the logs are being sent must have a resource policy that includes certain permissions. If the bucket currently does not have a resource policy and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, then AWS automatically creates the following policy for it when you begin sending the logs to Amazon S3.

{
  "Version": "2012-10-17",
  "Id": "AWSLogDeliveryWrite20150319",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": ["0123456789"] },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:delivery-source:*"] }
      }
    },
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:delivery-source:*"] }
      }
    }
  ]
}

In the previous policy, for aws:SourceAccount, specify the list of account IDs for which logs are being delivered to this bucket. For aws:SourceArn, specify the list of ARNs of the delivery sources, in the form arn:aws:logs:source-region:source-account-id:delivery-source:*.

If the bucket has a resource policy but that policy doesn't contain the statement shown in the previous policy, and the user setting up the logging has the S3:GetBucketPolicy and S3:PutBucketPolicy permissions for the bucket, that statement is appended to the bucket's resource policy.

Note

In some cases, you might see AccessDenied errors in AWS CloudTrail if the s3:ListBucket permission has not been granted to delivery.logs.amazonaws.com. To avoid these errors in your CloudTrail logs, grant the s3:ListBucket permission to delivery.logs.amazonaws.com, and include the Condition parameters shown with the s3:GetBucketAcl permission in the preceding bucket policy. To make this simpler, instead of creating a new Statement, you can update the AWSLogDeliveryAclCheck statement to use "Action": ["s3:GetBucketAcl", "s3:ListBucket"].

Amazon S3 bucket server-side encryption

You can protect the data in your Amazon S3 bucket by enabling either server-side encryption with Amazon S3-managed keys (SSE-S3) or server-side encryption with an AWS KMS key stored in AWS Key Management Service (SSE-KMS). For more information, see Protecting data using server-side encryption.

If you choose SSE-S3, no additional configuration is required. Amazon S3 handles the encryption key.

Warning

If you choose SSE-KMS, you must use a customer managed key, because using an AWS managed key is not supported for this scenario. If you set up encryption using an AWS managed key, the logs will be delivered in an unreadable format.

When you use a customer managed AWS KMS key, you can specify the Amazon Resource Name (ARN) of the customer managed key when you enable bucket encryption. You must add the following to the key policy for your customer managed key (not to the bucket policy for your S3 bucket), so that the log delivery account can write to your S3 bucket.


{
  "Sid": "Allow Logs Delivery to use the key",
  "Effect": "Allow",
  "Principal": { "Service": ["delivery.logs.amazonaws.com"] },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": { "aws:SourceAccount": ["0123456789"] },
    "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:delivery-source:*"] }
  }
}

For aws:SourceAccount, specify the list of account IDs for which logs are being delivered to this bucket. For aws:SourceArn, specify the list of ARNs of the delivery sources, in the form arn:aws:logs:source-region:source-account-id:delivery-source:*.

Logs sent to Firehose

User permissions

To enable sending logs to Firehose, you must be signed in with the following permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:GetDelivery",
        "logs:GetDeliverySource",
        "logs:PutDeliveryDestination",
        "logs:GetDeliveryDestinationPolicy",
        "logs:DeleteDeliverySource",
        "logs:PutDeliveryDestinationPolicy",
        "logs:CreateDelivery",
        "logs:GetDeliveryDestination",
        "logs:PutDeliverySource",
        "logs:DeleteDeliveryDestination",
        "logs:DeleteDeliveryDestinationPolicy",
        "logs:DeleteDelivery"
      ],
      "Resource": [
        "arn:aws:logs:region:account-id:delivery:*",
        "arn:aws:logs:region:account-id:delivery-source:*",
        "arn:aws:logs:region:account-id:delivery-destination:*"
      ]
    },
    {
      "Sid": "ListAccessForLogDeliveryActions",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeDeliveryDestinations",
        "logs:DescribeDeliverySources",
        "logs:DescribeDeliveries"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowUpdatesToResourcePolicyFH",
      "Effect": "Allow",
      "Action": ["firehose:TagDeliveryStream"],
      "Resource": ["arn:aws:firehose:region:account-id:deliverystream/*"]
    },
    {
      "Sid": "CreateServiceLinkedRole",
      "Effect": "Allow",
      "Action": ["iam:CreateServiceLinkedRole"],
      "Resource": "arn:aws:iam::account-id:role/aws-service-role/delivery.logs.amazonaws.com/AWSServiceRoleForLogDelivery"
    }
  ]
}

IAM roles used for resource permissions

Because Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery. This service-linked role includes the following permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch",
        "firehose:ListTagsForDeliveryStream"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/LogDeliveryEnabled": "true" }
      },
      "Effect": "Allow"
    }
  ]
}

This service-linked role grants permission for all Firehose delivery streams that have the LogDeliveryEnabled tag set to true. AWS gives this tag to the destination delivery stream when you set up the logging.

This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Service-specific permissions

In addition to the destination-specific permissions listed in the previous sections, some services require, as an additional layer of security, an explicit authorization that customers are allowed to send logs from their resources. This authorization grants the AllowVendedLogDeliveryForResource action for resources that vend logs within that service. For these services, use the following policy, and replace service and resource-type with the appropriate values. For the service-specific values for these fields, see that service's documentation page for vended logs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ServiceLevelAccessForLogDelivery",
      "Effect": "Allow",
      "Action": ["service:AllowVendedLogDeliveryForResource"],
      "Resource": "arn:aws:service:region:account-id:resource-type/*"
    }
  ]
}

Console-specific permissions

In addition to the permissions listed in the previous sections, if you are setting up log delivery using the console instead of the APIs, you also need the following additional permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogDeliveryActionsConsoleCWL",
      "Effect": "Allow",
      "Action": ["logs:DescribeLogGroups"],
      "Resource": ["arn:aws:logs:us-east-1:111122223333:log-group:*"]
    },
    {
      "Sid": "AllowLogDeliveryActionsConsoleS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowLogDeliveryActionsConsoleFH",
      "Effect": "Allow",
      "Action": [
        "firehose:ListDeliveryStreams",
        "firehose:DescribeDeliveryStream"
      ],
      "Resource": ["*"]
    }
  ]
}

Cross-service confused deputy prevention

The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the calling service) calls another service (the called service). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, AWS provides tools that help you protect your data for all services with service principals that have been given access to resources in your account.

We recommend using the aws:SourceArn, aws:SourceAccount, aws:SourceOrgID, and aws:SourceOrgPaths global condition context keys in resource policies to limit the permissions that CloudWatch Logs gives another service to the resource. Use aws:SourceArn to associate only one resource with cross-service access. Use aws:SourceAccount to let any resource in that account be associated with the cross-service use. Use aws:SourceOrgID to allow any resource from any account within an organization to be associated with the cross-service use. Use aws:SourceOrgPaths to associate any resource from accounts within an AWS Organizations path with the cross-service use. For more information about using and understanding paths, see Understand the AWS Organizations entity path.

The most effective way to protect against the confused deputy problem is to use the aws:SourceArn global condition context key with the full ARN of the resource. If you don't know the full ARN of the resource, or if you are specifying multiple resources, use the aws:SourceArn global condition context key with wildcard characters (*) for the unknown portions of the ARN. For example, arn:aws:servicename:*:123456789012:*.
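As an illustration, a resource policy statement might scope delivery to sources matching a partial ARN with a condition like the following. This is a sketch only: the service name, account ID, log group name, and action are placeholders, not values from a specific service's vended logs documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "logs:CreateLogStream",
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:servicename:*:123456789012:*"
        }
      }
    }
  ]
}
```

Because the ArnLike condition pins the source account ID inside the ARN pattern, a resource in another customer's account cannot use the delivery service principal to write to this log group.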

If the aws:SourceArn value does not contain the account ID, such as an Amazon S3 bucket ARN, you must use both aws:SourceAccount and aws:SourceArn to limit permissions.
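For that case, the two keys can be combined in one condition block, as in the following sketch. The bucket names and account ID are placeholders chosen for illustration; an S3 bucket ARN carries no account ID, so aws:SourceAccount supplies it.

```json
{
  "Effect": "Allow",
  "Principal": {
    "Service": "delivery.logs.amazonaws.com"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
  "Condition": {
    "StringEquals": {
      "aws:SourceAccount": "123456789012"
    },
    "ArnLike": {
      "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
    }
  }
}
```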

To protect against the confused deputy problem at scale, use the aws:SourceOrgID or aws:SourceOrgPaths global condition context key with the organization ID or organization path of the resource in your resource-based policies. Policies that include the aws:SourceOrgID or aws:SourceOrgPaths key automatically apply to the correct accounts, so you don't have to manually update the policies when you add, remove, or move accounts in your organization.
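An organization-scoped condition might look like the following sketch, where the organization ID is a placeholder:

```json
{
  "Condition": {
    "StringEquals": {
      "aws:SourceOrgID": "o-exampleorgid"
    }
  }
}
```

With this condition, any account that joins the organization is covered automatically, and any account that leaves it loses access, without a policy update.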

The policies in the previous sections of this page show how you can use the aws:SourceArn and aws:SourceAccount global condition context keys to prevent the confused deputy problem.

CloudWatch Logs updates to AWS managed policies

View details about updates to AWS managed policies for CloudWatch Logs since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the CloudWatch Logs Document history page.

Change: AWSServiceRoleForLogDelivery service-linked role policy – Update to an existing policy

Description: CloudWatch Logs changed the permissions in the IAM policy associated with the AWSServiceRoleForLogDelivery service-linked role. The following change was made:

  • The "firehose:ResourceTag/LogDeliveryEnabled": "true" condition key was changed to "aws:ResourceTag/LogDeliveryEnabled": "true".

Date: July 15, 2021

Change: CloudWatch Logs started tracking changes

Description: CloudWatch Logs started tracking changes for its AWS managed policies.

Date: June 10, 2021