Export log data to Amazon S3 using the AWS CLI
In the following example, you use an export task to export all data from a CloudWatch Logs log group named my-log-group to an Amazon S3 bucket named my-exported-logs. This example assumes that you have already created a log group called my-log-group.
Exporting log data to S3 buckets that are encrypted by AWS KMS is supported.
Step 1: Create an S3 bucket
We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip to step 2.
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.
To create an S3 bucket using the AWS CLI
At a command prompt, run the following create-bucket command, where LocationConstraint is the Region where you are exporting log data.

aws s3api create-bucket --bucket my-exported-logs --create-bucket-configuration LocationConstraint=us-east-2

The following is example output.

{
    "Location": "/my-exported-logs"
}
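Because the bucket must be in the same Region as the log data, it can help to confirm an existing bucket's Region before setting permissions. One wrinkle: get-bucket-location reports a null LocationConstraint (rendered as None in text output) for buckets in us-east-1. The helper below is a sketch of our own, not part of the AWS CLI; it normalizes that quirk.

```shell
# normalize_region is a hypothetical helper, not an AWS CLI command.
# It maps the null/None LocationConstraint that get-bucket-location
# returns for us-east-1 buckets onto the actual Region name.
normalize_region() {
    case "$1" in
        null|None|"") echo "us-east-1" ;;
        *)            echo "$1" ;;
    esac
}

# Typical use (requires credentials, so shown only as a comment):
#   loc=$(aws s3api get-bucket-location --bucket my-exported-logs \
#         --query LocationConstraint --output text)
#   normalize_region "$loc"
normalize_region None       # us-east-1
normalize_region us-east-2  # us-east-2
```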
Step 2: Set up access permissions
To create the export task in step 5, you'll need to be signed in as a user or role that has the AmazonS3ReadOnlyAccess managed policy and the following permissions:
logs:CreateExportTask
logs:CancelExportTask
logs:DescribeExportTasks
logs:DescribeLogStreams
logs:DescribeLogGroups
To provide access, add permissions to your users, groups, or roles:

- Users and groups in AWS IAM Identity Center (successor to AWS Single Sign-On): Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center (successor to AWS Single Sign-On) User Guide.
- Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Creating a role for a third-party identity provider (federation) in the IAM User Guide.
- IAM users:
  - Create a role that your user can assume. Follow the instructions in Creating a role for an IAM user in the IAM User Guide.
  - (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
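Taken together, the logs permissions listed above could be granted with an identity-based policy along these lines. This is a sketch only; in practice you may want to scope Resource down to specific log group ARNs rather than using a wildcard.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateExportTask",
                "logs:CancelExportTask",
                "logs:DescribeExportTasks",
                "logs:DescribeLogStreams",
                "logs:DescribeLogGroups"
            ],
            "Resource": "*"
        }
    ]
}
```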
Step 3: Set permissions on an S3 bucket
By default, all S3 buckets and objects are private. Only the resource owner, the account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.

In the following example, the list of account IDs in the aws:SourceAccount key would be the accounts from which a user can export log data to your S3 bucket. The aws:SourceArn key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.

We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.
To set permissions on an S3 bucket
- Create a file named policy.json and add the following access policy, changing my-exported-logs to the name of your S3 bucket and Principal to the endpoint of the Region where you are exporting log data, such as us-west-1. Use a text editor to create this policy file. Don't use the IAM console.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Action": "s3:GetBucketAcl",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          },
          {
              "Action": "s3:PutObject",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs/*",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "s3:x-amz-acl": "bucket-owner-full-control",
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          }
      ]
  }
Set the policy that you just added as the access policy on your bucket by using the put-bucket-policy command. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner will have full permissions on all of the exported objects.
aws s3api put-bucket-policy --bucket my-exported-logs --policy file://policy.json
Warning: If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.
(Optional) Step 4: Exporting to a bucket encrypted with SSE-KMS
This step is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS.
To export to a bucket encrypted with SSE-KMS
- Use a text editor to create a file named key_policy.json and add the following access policy. When you add the policy, make the following changes:
  - Replace Region with the Region of your logs.
  - Replace account-ARN with the ARN of the account that owns the KMS key.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Allow CWL Service Principal usage",
              "Effect": "Allow",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
              "Resource": "*"
          },
          {
              "Sid": "Enable IAM User Permissions",
              "Effect": "Allow",
              "Principal": { "AWS": "account-ARN" },
              "Action": [
                  "kms:GetKeyPolicy*",
                  "kms:PutKeyPolicy*",
                  "kms:DescribeKey*",
                  "kms:CreateAlias*",
                  "kms:ScheduleKeyDeletion*",
                  "kms:Decrypt"
              ],
              "Resource": "*"
          }
      ]
  }
- Enter the following command:

  aws kms create-key --policy file://key_policy.json

  The following is example output from this command:

  {
      "KeyMetadata": {
          "AWSAccountId": "account_id",
          "KeyId": "key_id",
          "Arn": "arn:aws:kms:us-east-2:account_id:key/key_id",
          "CreationDate": "time",
          "Enabled": true,
          "Description": "",
          "KeyUsage": "ENCRYPT_DECRYPT",
          "KeyState": "Enabled",
          "Origin": "AWS_KMS",
          "KeyManager": "CUSTOMER",
          "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
          "KeySpec": "SYMMETRIC_DEFAULT",
          "EncryptionAlgorithms": [ "SYMMETRIC_DEFAULT" ],
          "MultiRegion": false
      }
  }
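If you save the create-key output to a file, the key ID can be pulled out with ordinary text tools. The sketch below inlines a stand-in for the real output; the IDs and file path are made up for illustration.

```shell
# Stand-in for real create-key output; these IDs are placeholders.
cat > /tmp/key_output.json <<'EOF'
{
    "KeyMetadata": {
        "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
        "Arn": "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
}
EOF

# The KeyId line has the form:  "KeyId": "value",  so the value is
# the fourth double-quote-delimited field.
key_id=$(grep '"KeyId"' /tmp/key_output.json | cut -d'"' -f4)
echo "$key_id"   # 1234abcd-12ab-34cd-56ef-1234567890ab
```

In practice, the CLI can do this in one step with its built-in query option: aws kms create-key --policy file://key_policy.json --query KeyMetadata.KeyId --output text.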
- Use a text editor to create a file called bucketencryption.json with the following contents.

  {
      "Rules": [
          {
              "ApplyServerSideEncryptionByDefault": {
                  "SSEAlgorithm": "aws:kms",
                  "KMSMasterKeyID": "{KMS Key ARN}"
              },
              "BucketKeyEnabled": true
          }
      ]
  }
- Enter the following command, replacing bucket-name with the name of the bucket that you are exporting logs to.

  aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration file://bucketencryption.json

  If the command doesn't return an error, the process is successful.
Step 5: Create an export task
Use the following command to create the export task. After you create it, the export task might take anywhere from a few seconds to a few hours, depending on the size of the data to export.
To export data to Amazon S3 using the AWS CLI
- Sign in with sufficient permissions as documented in Step 2: Set up access permissions.
- At a command prompt, use the following create-export-task command to create the export task.

  aws logs create-export-task --profile CWLExportUser --task-name "my-log-group-09-10-2015" --log-group-name "my-log-group" --from 1441490400000 --to 1441494000000 --destination "my-exported-logs" --destination-prefix "export-task-output"

  The following is example output.

  { "taskId": "cda45419-90ea-4db5-9833-aade86253e66" }
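The --from and --to values are timestamps expressed as milliseconds since the Unix epoch (the values above cover 2015-09-05 22:00 to 23:00 UTC). Rather than computing them by hand, they can be derived in the shell; this sketch assumes GNU date is available.

```shell
# Convert human-readable UTC times to the millisecond epoch values
# that --from and --to expect (GNU date syntax assumed).
from_ms=$(( $(date -u -d "2015-09-05 22:00:00" +%s) * 1000 ))
to_ms=$(( $(date -u -d "2015-09-05 23:00:00" +%s) * 1000 ))
echo "$from_ms $to_ms"   # 1441490400000 1441494000000
```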
Step 6: Describe export tasks
After you create an export task, you can get the current status of the task.
To describe export tasks using the AWS CLI
At a command prompt, use the following describe-export-tasks command.

aws logs --profile CWLExportUser describe-export-tasks --task-id "cda45419-90ea-4db5-9833-aade86253e66"
The following is example output.

{
    "exportTasks": [
        {
            "destination": "my-exported-logs",
            "destinationPrefix": "export-task-output",
            "executionInfo": {
                "creationTime": 1441495400000
            },
            "from": 1441490400000,
            "logGroupName": "my-log-group",
            "status": {
                "code": "RUNNING",
                "message": "Started Successfully"
            },
            "taskId": "cda45419-90ea-4db5-9833-aade86253e66",
            "taskName": "my-log-group-09-10-2015",
            "to": 1441494000000
        }
    ]
}
You can use the describe-export-tasks command in three different ways:

- Without any filters – Lists all of your export tasks, in reverse order of creation.
- Filter on task ID – Lists the export task, if one exists, with the specified ID.
- Filter on task status – Lists the export tasks with the specified status.
For example, use the following command to filter on the FAILED status.
aws logs --profile CWLExportUser describe-export-tasks --status-code "FAILED"
The following is example output.

{
    "exportTasks": [
        {
            "destination": "my-exported-logs",
            "destinationPrefix": "export-task-output",
            "executionInfo": {
                "completionTime": 1441498600000,
                "creationTime": 1441495400000
            },
            "from": 1441490400000,
            "logGroupName": "my-log-group",
            "status": {
                "code": "FAILED",
                "message": "FAILED"
            },
            "taskId": "cda45419-90ea-4db5-9833-aade86253e66",
            "taskName": "my-log-group-09-10-2015",
            "to": 1441494000000
        }
    ]
}
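When scripting around describe-export-tasks, it's useful to distinguish terminal status codes from in-flight ones. The helper below is a sketch of our own; the status codes themselves (PENDING, PENDING_CANCEL, RUNNING, COMPLETED, CANCELLED, FAILED) come from the CloudWatch Logs API.

```shell
# is_terminal is a hypothetical helper: returns success (0) if an
# export task status code means the task has finished.
is_terminal() {
    case "$1" in
        COMPLETED|CANCELLED|FAILED)     return 0 ;;
        PENDING|PENDING_CANCEL|RUNNING) return 1 ;;
        *)                              return 1 ;;
    esac
}

# Example polling loop (needs credentials, so left as a comment):
#   until is_terminal "$(aws logs describe-export-tasks \
#         --task-id "$task_id" \
#         --query 'exportTasks[0].status.code' --output text)"; do
#     sleep 10
#   done
```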
Step 7: Cancel an export task
You can cancel an export task if it's in a PENDING or RUNNING state.
To cancel an export task using the AWS CLI
At a command prompt, use the following cancel-export-task command:

aws logs --profile CWLExportUser cancel-export-task --task-id "cda45419-90ea-4db5-9833-aade86253e66"
You can use the describe-export-tasks command to verify that the task was canceled successfully.