Logging
You can deliver Apache Kafka broker logs to one or more of the following destination types: Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. You can also log Amazon MSK API calls with AWS CloudTrail.
Broker logs
Broker logs enable you to troubleshoot your Apache Kafka applications and to analyze their communications with your MSK cluster. You can configure your new or existing MSK cluster to deliver INFO-level broker logs to one or more of the following types of destination resources: a CloudWatch log group, an S3 bucket, or a Kinesis Data Firehose delivery stream. Through Kinesis Data Firehose you can then deliver the log data from your delivery stream to Amazon ES. You must create a destination resource before you configure your cluster to deliver broker logs to it. Amazon MSK doesn't create these destination resources for you if they don't already exist. For information about how to create these destination resources, see the CloudWatch Logs, Amazon S3, and Kinesis Data Firehose documentation.
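For example, if you have the AWS CLI configured, you could create a CloudWatch log group and an S3 bucket to use as destinations with commands like the following sketch. The names msk-broker-logs and amzn-s3-demo-logs-bucket are placeholders, not values that Amazon MSK requires.

aws logs create-log-group --log-group-name msk-broker-logs
aws s3 mb s3://amzn-s3-demo-logs-bucket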
Required permissions
For Amazon MSK to deliver broker logs to the destinations that you configure, ensure that the AmazonMSKFullAccess policy is attached to your IAM role. To stream broker logs to an S3 bucket, you also need the s3:PutBucketPolicy permission attached to your IAM role. For information about S3 bucket policies, see How Do I Add an S3 Bucket Policy? in the Amazon S3 Console User Guide. For information about IAM policies in general, see Access Management in the IAM User Guide.
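As a minimal sketch, a policy statement that grants the additional s3:PutBucketPolicy permission might look like the following. The bucket name amzn-s3-demo-logs-bucket is an example placeholder; scope the resource to your own log bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutBucketPolicy",
      "Resource": "arn:aws:s3:::amzn-s3-demo-logs-bucket"
    }
  ]
}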
Required CMK key policy for use with SSE-KMS buckets
If you enabled server-side encryption for your S3 bucket using AWS KMS-managed keys (SSE-KMS) with a customer managed Customer Master Key (CMK), add the following to the key policy for your CMK so that Amazon MSK can write broker log files to the bucket.
{ "Sid": "Allow Amazon MSK to use the key.", "Effect": "Allow", "Principal": { "Service": [ "delivery.logs.amazonaws.com" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }
Configuring broker logs using the AWS Management Console
If you are creating a new cluster, look for the Broker log delivery heading in the Monitoring section. You can specify the destinations to which you want Amazon MSK to deliver your broker logs.
For an existing cluster, choose the cluster from your list of clusters, then choose the Details tab. Scroll down to the Monitoring section and then choose its Edit button. You can specify the destinations to which you want Amazon MSK to deliver your broker logs.
Configuring broker logs using the AWS CLI
When you use the create-cluster or the update-monitoring commands, you can optionally specify the logging-info parameter and pass it a JSON structure like the following example. In this JSON, all three destination types are optional.
{ "BrokerLogs": { "S3": { "Bucket": "ExampleBucketName", "Prefix": "ExamplePrefix", "Enabled": true }, "Firehose": { "DeliveryStream": "ExampleDeliveryStreamName", "Enabled": true }, "CloudWatchLogs": { "Enabled": true, "LogGroup": "ExampleLogGroupName" } } }
Configuring broker logs using the API
You can specify the optional loggingInfo structure in the JSON that you pass to the CreateCluster or UpdateMonitoring operations.
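As a sketch, and assuming the camelCase field names used by the Amazon MSK REST API, the loggingInfo portion of an UpdateMonitoring request body might look like the following. The log group name is an example placeholder.

{
  "loggingInfo": {
    "brokerLogs": {
      "cloudWatchLogs": {
        "enabled": true,
        "logGroup": "ExampleLogGroupName"
      }
    }
  }
}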
By default, when broker logging is enabled, Amazon MSK logs INFO level logs to the specified destinations. However, users of Apache Kafka 2.4.X and later can dynamically set the broker log level to any of the log4j log levels. If you dynamically set the broker log level to DEBUG or TRACE, we recommend using Amazon S3 or Kinesis Data Firehose as the log destination. If you use CloudWatch Logs as a log destination and you dynamically enable DEBUG or TRACE level logging, Amazon MSK may continuously deliver a sample of logs. This can significantly impact broker performance and should only be used when the INFO log level is not verbose enough to determine the root cause of an issue.
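For example, assuming you can reach a bootstrap broker from a client machine with Apache Kafka 2.4.X or later installed, you could raise a single logger on broker 1 to DEBUG with the kafka-configs.sh tool. The bootstrap broker address and the logger name are placeholders for your own values.

bin/kafka-configs.sh --bootstrap-server b-1.examplecluster.abcd12.c2.kafka.us-east-1.amazonaws.com:9092 \
    --entity-type broker-loggers --entity-name 1 \
    --alter --add-config kafka.server.ReplicaManager=DEBUG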
Logging Amazon MSK API calls with AWS CloudTrail
Amazon MSK is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon MSK. CloudTrail captures all API calls for Amazon MSK as events. The calls captured include calls from the Amazon MSK console and code calls to the Amazon MSK API operations.
If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon MSK. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to Amazon MSK, the IP address from which the request was made, who made the request, when it was made, and additional details.
To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User Guide.
Amazon MSK information in CloudTrail
CloudTrail is enabled on your AWS account when you create the account. When supported event activity occurs in Amazon MSK, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing Events with CloudTrail Event History.
For an ongoing record of events in your AWS account, including events for Amazon MSK, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the AWS CloudTrail User Guide.
Amazon MSK logs all operations as events in CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
- Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials.
- Whether the request was made with temporary security credentials for a role or federated user.
- Whether the request was made by another AWS service.
For more information, see the CloudTrail userIdentity Element.
Example: Amazon MSK log file entries
A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order.
The following example shows CloudTrail log entries that demonstrate the DescribeCluster and DeleteCluster actions.
{ "Records": [ { "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "ABCDEF0123456789ABCDE", "arn": "arn:aws:iam::012345678901:user/Joe", "accountId": "012345678901", "accessKeyId": "AIDACKCEVSQ6C2EXAMPLE", "userName": "Joe" }, "eventTime": "2018-12-12T02:29:24Z", "eventSource": "kafka.amazonaws.com", "eventName": "DescribeCluster", "awsRegion": "us-east-1", "sourceIPAddress": "192.0.2.0", "userAgent": "aws-cli/1.14.67 Python/3.6.0 Windows/10 botocore/1.9.20", "requestParameters": { "clusterArn": "arn%3Aaws%3Akafka%3Aus-east-1%3A012345678901%3Acluster%2Fexamplecluster%2F01234567-abcd-0123-abcd-abcd0123efa-2" }, "responseElements": null, "requestID": "bd83f636-fdb5-abcd-0123-157e2fbf2bde", "eventID": "60052aba-0123-4511-bcde-3e18dbd42aa4", "readOnly": true, "eventType": "AwsApiCall", "recipientAccountId": "012345678901" }, { "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "ABCDEF0123456789ABCDE", "arn": "arn:aws:iam::012345678901:user/Joe", "accountId": "012345678901", "accessKeyId": "AIDACKCEVSQ6C2EXAMPLE", "userName": "Joe" }, "eventTime": "2018-12-12T02:29:40Z", "eventSource": "kafka.amazonaws.com", "eventName": "DeleteCluster", "awsRegion": "us-east-1", "sourceIPAddress": "192.0.2.0", "userAgent": "aws-cli/1.14.67 Python/3.6.0 Windows/10 botocore/1.9.20", "requestParameters": { "clusterArn": "arn%3Aaws%3Akafka%3Aus-east-1%3A012345678901%3Acluster%2Fexamplecluster%2F01234567-abcd-0123-abcd-abcd0123efa-2" }, "responseElements": { "clusterArn": "arn:aws:kafka:us-east-1:012345678901:cluster/examplecluster/01234567-abcd-0123-abcd-abcd0123efa-2", "state": "DELETING" }, "requestID": "c6bfb3f7-abcd-0123-afa5-293519897703", "eventID": "8a7f1fcf-0123-abcd-9bdb-1ebf0663a75c", "readOnly": false, "eventType": "AwsApiCall", "recipientAccountId": "012345678901" } ] }