Package software.amazon.awscdk.services.logs

Amazon CloudWatch Logs Construct Library

This library supplies constructs for working with CloudWatch Logs.

Log Groups/Streams

The basic unit of CloudWatch is a Log Group. Every log group typically has the same kind of data logged to it, in the same format. If there are multiple applications or services logging into the Log Group, each of them creates a new Log Stream.

Every log operation creates a "log event", which can consist of a simple string or a single-line JSON object. JSON objects have the advantage that they afford more filtering abilities (see below).

The only configurable attribute for log streams is the retention period, which determines how long events in the log stream are kept before they expire and are deleted.

If not supplied, the retention period defaults to 2 years, but it can be set to one of the values in the RetentionDays enum to configure a different retention period (including infinite retention).

 // Configure log group for short retention
 LogGroup logGroup = LogGroup.Builder.create(stack, "LogGroup")
         .retention(RetentionDays.ONE_WEEK)
         .build();

 // Configure log group for infinite retention
 LogGroup logGroupInfinite = LogGroup.Builder.create(stack, "InfiniteLogGroup")
         .retention(RetentionDays.INFINITE)
         .build();
 

LogRetention

The LogRetention construct is a way to control the retention period of log groups that are created outside of the CDK. It is usually used on log groups that are auto-created by AWS services, such as AWS Lambda.

This is implemented using a CloudFormation custom resource which pre-creates the log group if it doesn't exist, and sets the specified log retention period (never expire, by default).

By default, the log group will be created in the same region as the stack. The logGroupRegion property can be used to configure log groups in other regions. This is typically useful when controlling retention for log groups auto-created by global services that publish their log group to a specific region, such as AWS Chatbot creating a log group in us-east-1.

By default, the log group created by LogRetention will be retained after the stack is deleted. If the RemovalPolicy is set to DESTROY, then the log group will be deleted when the stack is deleted.
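As a minimal sketch (the Lambda function name below is hypothetical), retention for such an auto-created log group can be controlled like this:

 import software.amazon.awscdk.RemovalPolicy;

 // Pre-create (if needed) the log group of a Lambda function and set a
 // one-week retention; "my-function" is a hypothetical function name.
 LogRetention.Builder.create(this, "LogRetention")
         .logGroupName("/aws/lambda/my-function")
         .retention(RetentionDays.ONE_WEEK)
         .removalPolicy(RemovalPolicy.DESTROY) // delete the log group with the stack
         .build();
 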

Log Group Class

CloudWatch Logs offers two classes of log groups:

  1. The CloudWatch Logs Standard log class is a full-featured option for logs that require real-time monitoring or logs that you access frequently.
  2. The CloudWatch Logs Infrequent Access log class is a new log class that you can use to cost-effectively consolidate your logs. This log class offers a subset of CloudWatch Logs capabilities including managed ingestion, storage, cross-account log analytics, and encryption with a lower ingestion price per GB. The Infrequent Access log class is ideal for ad-hoc querying and after-the-fact forensic analysis on infrequently accessed logs.

For more details, see the log group class documentation.
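As a sketch, assuming a CDK version that includes the LogGroupClass enum, the class is selected with the logGroupClass property when creating the log group:

 // Create a log group in the Infrequent Access log class
 LogGroup.Builder.create(this, "InfrequentAccessLogGroup")
         .logGroupClass(LogGroupClass.INFREQUENT_ACCESS)
         .build();
 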

Resource Policy

CloudWatch Resource Policies allow other AWS services or IAM Principals to put log events into the log groups. A resource policy is automatically created when addToResourcePolicy is called on the LogGroup for the first time:

 LogGroup logGroup = new LogGroup(this, "LogGroup");
 logGroup.addToResourcePolicy(PolicyStatement.Builder.create()
         .actions(List.of("logs:CreateLogStream", "logs:PutLogEvents"))
         .principals(List.of(new ServicePrincipal("es.amazonaws.com")))
         .resources(List.of(logGroup.getLogGroupArn()))
         .build());
 

Or, more conveniently, write permissions can be granted to the log group as follows, which gives the same result as the example above:

 LogGroup logGroup = new LogGroup(this, "LogGroup");
 logGroup.grantWrite(new ServicePrincipal("es.amazonaws.com"));
 

Similarly, read permissions can be granted to the log group as follows:

 LogGroup logGroup = new LogGroup(this, "LogGroup");
 logGroup.grantRead(new ServicePrincipal("es.amazonaws.com"));
 

Be aware that any ARNs or tokenized values passed to the resource policy will be converted into AWS Account IDs. This is because CloudWatch Logs Resource Policies do not accept ARNs as principals, but they do accept Account ID strings. Non-ARN principals, like Service principals or Any principals, are accepted by CloudWatch.
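For illustration (the role ARN and account ID below are hypothetical), a statement granted to an ArnPrincipal is rendered in the resource policy with the account ID alone:

 import software.amazon.awscdk.services.iam.*;

 LogGroup logGroup = new LogGroup(this, "LogGroup");
 // CloudWatch Logs renders this principal as the account ID "123456789012",
 // since resource policies do not accept ARNs as principals.
 logGroup.addToResourcePolicy(PolicyStatement.Builder.create()
         .actions(List.of("logs:PutLogEvents"))
         .principals(List.of(new ArnPrincipal("arn:aws:iam::123456789012:role/LogWriter")))
         .resources(List.of(logGroup.getLogGroupArn()))
         .build());
 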

Encrypting Log Groups

By default, log group data is always encrypted in CloudWatch Logs. You have the option to encrypt log group data using an AWS KMS customer master key (CMK) should you not wish to use the default AWS encryption. Keep in mind that if you decide to encrypt a log group, any service or IAM identity that needs to read the encrypted log streams in the future will require the same CMK to decrypt the data.

Here's a simple example of creating an encrypted Log Group using a KMS CMK.

 import software.amazon.awscdk.services.kms.*;
 
 
 LogGroup.Builder.create(this, "LogGroup")
         .encryptionKey(new Key(this, "Key"))
         .build();
 

See the AWS documentation for more detailed information about encrypting CloudWatch Logs.

Subscriptions and Destinations

Log events matching a particular filter can be sent to either a Lambda function or a Kinesis stream.

If the Kinesis stream lives in a different account, a CrossAccountDestination object needs to be added in the destination account, which will act as a proxy for the remote Kinesis stream. This object is automatically created for you if you use the CDK Kinesis library.
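If you need to set up the proxy yourself, a minimal sketch looks like this (the stream ARN and account ID are hypothetical):

 import software.amazon.awscdk.services.iam.*;

 // Role that CloudWatch Logs assumes to write into the remote stream
 Role role = Role.Builder.create(this, "CWLtoKinesisRole")
         .assumedBy(new ServicePrincipal("logs.amazonaws.com"))
         .build();

 // Proxy in the destination account for a Kinesis stream in another
 // account (the target ARN below is hypothetical)
 CrossAccountDestination destination = CrossAccountDestination.Builder.create(this, "Destination")
         .role(role)
         .targetArn("arn:aws:kinesis:us-east-1:123456789012:stream/RecipientStream")
         .build();
 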

Create a SubscriptionFilter, initialize it with an appropriate Pattern (see below) and supply the intended destination:

 import software.amazon.awscdk.services.logs.destinations.*;
 
 Function fn;
 LogGroup logGroup;
 
 
 SubscriptionFilter.Builder.create(this, "Subscription")
         .logGroup(logGroup)
         .destination(new LambdaDestination(fn))
         .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
         .filterName("ErrorInMainThread")
         .build();
 

When you use KinesisDestination, you can choose the method used to distribute log data to the destination by setting the distribution property.

 import software.amazon.awscdk.services.logs.destinations.*;
 import software.amazon.awscdk.services.kinesis.*;
 
 Stream stream;
 LogGroup logGroup;
 
 
 SubscriptionFilter.Builder.create(this, "Subscription")
         .logGroup(logGroup)
         .destination(new KinesisDestination(stream))
         .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
         .filterName("ErrorInMainThread")
         .distribution(Distribution.RANDOM)
         .build();
 

Metric Filters

CloudWatch Logs can extract and emit metrics based on a textual log stream. Depending on your needs, this may be a more convenient way of generating metrics for your application than making calls to CloudWatch Metrics yourself.

A MetricFilter either emits a fixed number every time it sees a log event matching a particular pattern (see below), or extracts a number from the log event and uses that as the metric value.

Example:

 MetricFilter.Builder.create(this, "MetricFilter")
         .logGroup(logGroup)
         .metricNamespace("MyApp")
         .metricName("Latency")
         .filterPattern(FilterPattern.exists("$.latency"))
         .metricValue("$.latency")
         .build();
 

Remember that if you want to use a value from the log event as the metric value, you must mention it in your pattern somewhere.

A very simple MetricFilter can be created by using the logGroup.extractMetric() helper function:

 LogGroup logGroup;
 
 logGroup.extractMetric("$.jsonField", "Namespace", "MetricName");
 

This will extract the value of jsonField wherever it occurs in JSON-structured log records in the LogGroup, and emit the values to CloudWatch Metrics under the name Namespace/MetricName.

Exposing Metric on a Metric Filter

You can expose a metric on a metric filter by calling the MetricFilter.metric() API. This has a default of statistic = 'avg' if the statistic is not set in the props.

 LogGroup logGroup;
 
 MetricFilter mf = MetricFilter.Builder.create(this, "MetricFilter")
         .logGroup(logGroup)
         .metricNamespace("MyApp")
         .metricName("Latency")
         .filterPattern(FilterPattern.exists("$.latency"))
         .metricValue("$.latency")
         .dimensions(Map.of(
                 "ErrorCode", "$.errorCode"))
         .unit(Unit.MILLISECONDS)
         .build();
 
 //expose a metric from the metric filter
 Metric metric = mf.metric();
 
 //you can use the metric to create a new alarm
 Alarm.Builder.create(this, "alarm from metric filter")
         .metric(metric)
         .threshold(100)
         .evaluationPeriods(2)
         .build();
 

Metrics for IncomingLogs and IncomingBytes

Metric methods have been defined for IncomingLogs and IncomingBytes within LogGroups. These metrics allow for the creation of alarms on log ingestion, ensuring that the log ingestion process is functioning correctly.

To define an alarm based on these metrics, you can use the following template:

 LogGroup logGroup = new LogGroup(this, "MyLogGroup");
 Metric incomingEventsMetric = logGroup.metricIncomingLogEvents();
 Alarm.Builder.create(this, "HighLogVolumeAlarm")
         .metric(incomingEventsMetric)
         .threshold(1000)
         .evaluationPeriods(1)
         .build();
 

 LogGroup logGroup = new LogGroup(this, "MyLogGroup");
 Metric incomingBytesMetric = logGroup.metricIncomingBytes();
 Alarm.Builder.create(this, "HighDataVolumeAlarm")
         .metric(incomingBytesMetric)
         .threshold(5000000) // 5 MB
         .evaluationPeriods(1)
         .build();
 

Patterns

Patterns describe which log events match a subscription or metric filter. There are three types of patterns:

  • Text patterns
  • JSON patterns
  • Space-delimited table patterns

All patterns are constructed by using static functions on the FilterPattern class.

In addition to the patterns above, the following special patterns exist:

  • FilterPattern.allEvents(): matches all log events.
  • FilterPattern.literal(string): if you already know what pattern expression to use, this function takes a string and will use that as the log pattern. For more information, see the Filter and Pattern Syntax.
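For example, a filter pattern expression written in the native syntax can be passed through verbatim:

 // Match events that contain "ERROR" but not "ignore",
 // using the raw filter pattern syntax
 IFilterPattern pattern = FilterPattern.literal("\"ERROR\" - \"ignore\"");
 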

Text Patterns

Text patterns match if the literal strings appear in the text form of the log line.

  • FilterPattern.allTerms(term, term, ...): matches if all of the given terms (substrings) appear in the log event.
  • FilterPattern.anyTerm(term, term, ...): matches if any of the given terms (substrings) appears in the log event.
  • FilterPattern.anyTermGroup([term, term, ...], [term, term, ...], ...): matches if all of the terms in any of the groups (specified as arrays) appear. This is an OR match.

Examples:

 // Search for lines that contain both "ERROR" and "MainThread"
 IFilterPattern pattern1 = FilterPattern.allTerms("ERROR", "MainThread");
 
 // Search for lines that either contain both "ERROR" and "MainThread", or
 // both "WARN" and "Deadlock".
 IFilterPattern pattern2 = FilterPattern.anyTermGroup(
         List.of("ERROR", "MainThread"),
         List.of("WARN", "Deadlock"));
 

JSON Patterns

JSON patterns apply if the log event is the JSON representation of an object (without any other characters, so it cannot include a prefix such as timestamp or log level). JSON patterns can make comparisons on the values inside the fields.

  • Strings: the comparison operators allowed for strings are = and !=. String values can start or end with a * wildcard.
  • Numbers: the comparison operators allowed for numbers are =, !=, <, <=, >, >=.

Fields in the JSON structure are identified by identifying the complete object as $ and then descending into it, such as $.field or $.list[0].field.

  • FilterPattern.stringValue(field, comparison, string): matches if the given field compares as indicated with the given string value.
  • FilterPattern.numberValue(field, comparison, number): matches if the given field compares as indicated with the given numerical value.
  • FilterPattern.isNull(field): matches if the given field exists and has the value null.
  • FilterPattern.notExists(field): matches if the given field is not in the JSON structure.
  • FilterPattern.exists(field): matches if the given field is in the JSON structure.
  • FilterPattern.booleanValue(field, boolean): matches if the given field is exactly the given boolean value.
  • FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the given JSON patterns match. This makes an AND combination of the given patterns.
  • FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the given JSON patterns match. This makes an OR combination of the given patterns.

Example:

 // Search for all events where the component field is equal to
 // "HttpServer" and either error is true or the latency is higher
 // than 1000.
 JsonPattern pattern = FilterPattern.all(
         FilterPattern.stringValue("$.component", "=", "HttpServer"),
         FilterPattern.any(
                 FilterPattern.booleanValue("$.error", true),
                 FilterPattern.numberValue("$.latency", ">", 1000)));
 

Space-delimited table patterns

If the log events are rows of a space-delimited table, this pattern can be used to identify the columns in that structure and add conditions on any of them. The canonical example where you would apply this type of pattern is Apache server logs.

Text that is surrounded by "..." quotes or [...] square brackets will be treated as one column.

  • FilterPattern.spaceDelimited(column, column, ...): construct a SpaceDelimitedTextPattern object with the indicated columns. The columns map one-to-one to the columns found in the log event. The string "..." may be used to specify an arbitrary number of unnamed columns anywhere in the name list (but may only be specified once).

After constructing a SpaceDelimitedTextPattern, you can use the following two members to add restrictions:

  • pattern.whereString(field, comparison, string): add a string condition. The rules are the same as for JSON patterns.
  • pattern.whereNumber(field, comparison, number): add a numerical condition. The rules are the same as for JSON patterns.

Multiple restrictions can be added on the same column; they must all apply.

Example:

 // Search for all events where the component is "HttpServer" and the
 // result code is not equal to 200.
 SpaceDelimitedTextPattern pattern = FilterPattern
         .spaceDelimited("time", "component", "...", "result_code", "latency")
         .whereString("component", "=", "HttpServer")
         .whereNumber("result_code", "!=", 200);
 

Logs Insights Query Definition

Creates a query definition for CloudWatch Logs Insights.

Example:

 QueryDefinition.Builder.create(this, "QueryDefinition")
         .queryDefinitionName("MyQuery")
         .queryString(QueryString.Builder.create()
                 .fields(List.of("@timestamp", "@message"))
                 .parseStatements(List.of("@message \"[*] *\" as loggingType, loggingMessage", "@message \"<*>: *\" as differentLoggingType, differentLoggingMessage"))
                 .filterStatements(List.of("loggingType = \"ERROR\"", "loggingMessage = \"A very strange error occurred!\""))
                 .sort("@timestamp desc")
                 .limit(20)
                 .build())
         .build();
 

Data Protection Policy

Creates a data protection policy and assigns it to the log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data. When a user who does not have permission to view masked data views a log event that includes masked data, the sensitive data is replaced by asterisks.

For more information, see Protect sensitive log data with masking.

For a list of types of managed identifiers that can be audited and masked, see Types of data that you can protect.

If a new identifier is supported but not yet in the DataIdentifiers enum, the name of the identifier can be supplied as name in the constructor instead.

To add a custom data identifier, supply a custom name and regex to the CustomDataIdentifiers constructor. For more information on custom data identifiers, see Custom data identifiers.

Each policy may consist of a log group, S3 bucket, and/or Firehose delivery stream audit destination.

Example:

 import software.amazon.awscdk.services.kinesisfirehose.alpha.*;
 import software.amazon.awscdk.services.kinesisfirehose.destinations.alpha.*;
 
 
 LogGroup logGroupDestination = LogGroup.Builder.create(this, "LogGroupLambdaAudit")
         .logGroupName("auditDestinationForCDK")
         .build();
 
 Bucket bucket = new Bucket(this, "audit-bucket");
 S3Bucket s3Destination = new S3Bucket(bucket);
 
 DeliveryStream deliveryStream = DeliveryStream.Builder.create(this, "Delivery Stream")
         .destinations(List.of(s3Destination))
         .build();
 
 DataProtectionPolicy dataProtectionPolicy = DataProtectionPolicy.Builder.create()
         .name("data protection policy")
         .description("policy description")
         .identifiers(List.of(DataIdentifier.DRIVERSLICENSE_US,  // managed data identifier
             new DataIdentifier("EmailAddress"),  // forward compatibility for new managed data identifiers
             new CustomDataIdentifier("EmployeeId", "EmployeeId-\\d{9}"))) // custom data identifier
         .logGroupAuditDestination(logGroupDestination)
         .s3BucketAuditDestination(bucket)
         .deliveryStreamNameAuditDestination(deliveryStream.getDeliveryStreamName())
         .build();
 
 LogGroup.Builder.create(this, "LogGroupLambda")
         .logGroupName("cdkIntegLogGroup")
         .dataProtectionPolicy(dataProtectionPolicy)
         .build();
 

Notes

Be aware that Log Group ARNs will always have the string :* appended to them, to match the behavior of the CloudFormation AWS::Logs::LogGroup resource.