Package software.amazon.awscdk.services.logs

Amazon CloudWatch Logs Construct Library


---

cfn-resources: Stable

cdk-constructs: Stable


This library supplies constructs for working with CloudWatch Logs.

Log Groups/Streams

The basic unit of CloudWatch Logs is a Log Group. Every log group typically has the same kind of data logged to it, in the same format. If there are multiple applications or services logging to the Log Group, each of them creates a new Log Stream.

Every log operation creates a "log event", which can consist of a simple string or a single-line JSON object. JSON objects have the advantage that they afford more filtering abilities (see below).
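As a minimal sketch of these two concepts (the construct IDs here are illustrative), a Log Group and an explicit Log Stream can be defined together; in practice the producing service usually creates its own streams:

```java
import software.amazon.awscdk.services.logs.LogGroup;
import software.amazon.awscdk.services.logs.LogStream;

// A log group that one or more services will write to
LogGroup logGroup = LogGroup.Builder.create(this, "AppLogGroup").build();

// An explicit log stream inside that group; normally the writing
// service creates streams itself, so defining one is optional
LogStream stream = LogStream.Builder.create(this, "AppLogStream")
        .logGroup(logGroup)
        .build();
```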

The only configurable attribute for log groups is the retention period, which configures after how much time the events in the log group expire and are deleted.

The default retention period if not supplied is 2 years, but it can be set to one of the values in the RetentionDays enum to configure a different retention period (including infinite retention).

 // Example automatically generated. See https://github.com/aws/jsii/issues/826
 // Configure log group for short retention
 LogGroup logGroup = new LogGroup(stack, "LogGroup", new LogGroupProps()
         .retention(RetentionDays.ONE_WEEK));
 
 // Configure log group for infinite retention
 LogGroup logGroupInfinite = new LogGroup(stack, "LogGroupInfinite", new LogGroupProps()
         .retention(RetentionDays.INFINITE));
 

LogRetention

The LogRetention construct is a way to control the retention period of log groups that are created outside of the CDK. It is usually used on log groups that are auto-created by AWS services, such as AWS Lambda.

This is implemented using a CloudFormation custom resource which pre-creates the log group if it doesn't exist, and sets the specified log retention period (never expire, by default).

By default, the log group will be created in the same region as the stack. The logGroupRegion property can be used to configure log groups in other regions. This is typically useful when controlling retention for log groups auto-created by global services that publish their log group to a specific region, such as AWS Chatbot creating a log group in us-east-1.
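As a sketch (the log group name below is a hypothetical example of a Lambda-created group), controlling retention for such a log group, optionally in another region, could look like this:

```java
// Sketch only: "/aws/lambda/my-function" is a placeholder name for a
// log group auto-created by AWS Lambda
LogRetention.Builder.create(this, "LogRetention")
        .logGroupName("/aws/lambda/my-function")
        .retention(RetentionDays.ONE_WEEK)
        // Optional: manage a log group that lives in another region
        .logGroupRegion("us-east-1")
        .build();
```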

Encrypting Log Groups

By default, log group data is always encrypted in CloudWatch Logs. You have the option to encrypt log group data using an AWS KMS customer master key (CMK) should you not wish to use the default AWS encryption. Keep in mind that if you decide to encrypt a log group, any service or IAM identity that needs to read the encrypted log streams in the future will require the same CMK to decrypt the data.

Here's a simple example of creating an encrypted Log Group using a KMS CMK.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 import software.amazon.awscdk.services.kms.*;
 
 LogGroup.Builder.create(this, "LogGroup")
         .encryptionKey(new Key(this, "Key"))
         .build();
 

See the AWS documentation for more detailed information about encrypting CloudWatch Logs.

Subscriptions and Destinations

Log events matching a particular filter can be sent to either a Lambda function or a Kinesis stream.

If the Kinesis stream lives in a different account, a CrossAccountDestination object needs to be added in the destination account which will act as a proxy for the remote Kinesis stream. This object is automatically created for you if you use the CDK Kinesis library.

Create a SubscriptionFilter, initialize it with an appropriate Pattern (see below) and supply the intended destination:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Function fn = Function.Builder.create(this, "Lambda")
         // ... function properties elided ...
         .build();
 LogGroup logGroup = LogGroup.Builder.create(this, "LogGroup")
         // ...
         .build();
 
 SubscriptionFilter.Builder.create(this, "Subscription")
         .logGroup(logGroup)
         .destination(new LambdaDestination(fn))
         .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
         .build();
 
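A Kinesis stream destination follows the same shape. As a sketch, assuming the software.amazon.awscdk.services.kinesis and software.amazon.awscdk.services.logs.destinations modules:

```java
import software.amazon.awscdk.services.kinesis.Stream;
import software.amazon.awscdk.services.logs.destinations.KinesisDestination;

// Send every log event in the group to a Kinesis stream
Stream stream = new Stream(this, "Stream");

SubscriptionFilter.Builder.create(this, "KinesisSubscription")
        .logGroup(logGroup)
        .destination(new KinesisDestination(stream))
        .filterPattern(FilterPattern.allEvents())
        .build();
```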

Metric Filters

CloudWatch Logs can extract and emit metrics based on a textual log stream. Depending on your needs, this may be a more convenient way of generating metrics for your application than making calls to CloudWatch Metrics yourself.

A MetricFilter either emits a fixed number every time it sees a log event matching a particular pattern (see below), or extracts a number from the log event and uses that as the metric value.

Example:

 // Example automatically generated. See https://github.com/aws/jsii/issues/826
 new MetricFilter(this, "MetricFilter", new MetricFilterProps()
         .logGroup(logGroup)
         .metricNamespace("MyApp")
         .metricName("Latency")
         .filterPattern(FilterPattern.exists("$.latency"))
         .metricValue("$.latency"));
 

Remember that if you want to use a value from the log event as the metric value, you must mention it in your pattern somewhere.

A very simple MetricFilter can be created by using the logGroup.extractMetric() helper function:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 logGroup.extractMetric("$.jsonField", "Namespace", "MetricName");
 

This will extract the value of jsonField wherever it occurs in JSON-structured log records in the LogGroup, and emit it to CloudWatch Metrics under the name Namespace/MetricName.

Exposing Metric on a Metric Filter

You can expose a metric on a metric filter by calling the MetricFilter.metric() API. If no statistic is set in the props, the returned metric defaults to the avg statistic.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 MetricFilter mf = MetricFilter.Builder.create(this, "MetricFilter")
         .logGroup(logGroup)
         .metricNamespace("MyApp")
         .metricName("Latency")
         .filterPattern(FilterPattern.exists("$.latency"))
         .metricValue("$.latency")
         .build();
 
 // Expose a metric from the metric filter
 Metric metric = mf.metric();
 
 // You can use the metric to create a new alarm
 Alarm.Builder.create(this, "alarm from metric filter")
         .metric(metric)
         .threshold(100)
         .evaluationPeriods(2)
         .build();
 

Patterns

Patterns describe which log events match a subscription or metric filter. There are three types of patterns:

- Text patterns
- JSON patterns
- Space-delimited table patterns

All patterns are constructed by using static functions on the FilterPattern class.

In addition to the patterns above, the following special patterns exist:

- FilterPattern.allEvents(): matches all log events.
- FilterPattern.literal(string): if you already know what pattern expression to use, this is a shortcut for passing it.

Text Patterns

Text patterns match if the literal strings appear in the text form of the log line.

Examples:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // Search for lines that contain both "ERROR" and "MainThread"
 Object pattern1 = FilterPattern.allTerms("ERROR", "MainThread");
 
 // Search for lines that either contain both "ERROR" and "MainThread", or
 // both "WARN" and "Deadlock".
 Object pattern2 = FilterPattern.anyGroup(asList("ERROR", "MainThread"), asList("WARN", "Deadlock"));
 

JSON Patterns

JSON patterns apply if the log event is the JSON representation of an object (without any other characters, so it cannot include a prefix such as timestamp or log level). JSON patterns can make comparisons on the values inside the fields.

Fields in the JSON structure are identified by naming the complete object as $ and then descending into it, such as $.field or $.list[0].field.

Example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // Search for all events where the component field is equal to
 // "HttpServer" and either error is true or the latency is higher
 // than 1000.
 Object pattern = FilterPattern.all(FilterPattern.stringValue("$.component", "=", "HttpServer"), FilterPattern.any(FilterPattern.booleanValue("$.error", true), FilterPattern.numberValue("$.latency", ">", 1000)));
 

Space-delimited table patterns

If the log events are rows of a space-delimited table, this pattern can be used to identify the columns in that structure and add conditions on any of them. The canonical example where you would apply this type of pattern is Apache server logs.

Text that is surrounded by "..." quotes or [...] square brackets will be treated as one column.

After constructing a SpaceDelimitedTextPattern, you can use the following two members to add restrictions:

- pattern.whereString(field, comparison, string): add a string condition.
- pattern.whereNumber(field, comparison, number): add a number condition.

Multiple restrictions can be added on the same column; they must all apply.

Example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // Search for all events where the component is "HttpServer" and the
 // result code is not equal to 200.
 SpaceDelimitedTextPattern pattern = FilterPattern.spaceDelimited("time", "component", "...", "result_code", "latency")
         .whereString("component", "=", "HttpServer")
         .whereNumber("result_code", "!=", 200);
 

Notes

Be aware that Log Group ARNs will always have the string :* appended to them, to match the behavior of the CloudFormation AWS::Logs::LogGroup resource.
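As a hedged illustration of why that matters, an IAM statement built from the log group ARN therefore also covers every log stream in the group (the action chosen here is just an example):

```java
import software.amazon.awscdk.services.iam.PolicyStatement;
import java.util.Arrays;

// Because logGroupArn ends in ":*", this statement matches the log
// group and all of its log streams
PolicyStatement statement = PolicyStatement.Builder.create()
        .actions(Arrays.asList("logs:PutLogEvents"))
        .resources(Arrays.asList(logGroup.getLogGroupArn()))
        .build();
```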
