Amazon CloudWatch Logs Construct Library
AWS CDK v1 has reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.
For more information on how to migrate, see the Migrating to AWS CDK v2 guide.
This library supplies constructs for working with CloudWatch Logs.
The basic unit of CloudWatch is a Log Group. Every log group typically has the same kind of data logged to it, in the same format. If there are multiple applications or services logging into the Log Group, each of them creates a new Log Stream.
Every log operation creates a "log event", which can consist of a simple string or a single-line JSON object. JSON objects have the advantage that they afford more filtering abilities (see below).
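For illustration, a log stream can also be created explicitly with the LogStream construct; this is a minimal sketch, and the stream name used here is an assumption:

```java
// Explicitly create a named stream in a log group (services that log
// to the group usually create their own streams automatically).
LogGroup logGroup = new LogGroup(this, "LogGroup");

LogStream stream = LogStream.Builder.create(this, "LogStream")
        .logGroup(logGroup)
        .logStreamName("app-instance-1") // assumed name, for illustration
        .build();
```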
The only configurable attribute for log groups is the retention period, which configures after how much time the events in the log group expire and are deleted.
The default retention period if not supplied is 2 years, but it can be set to one of the values in the RetentionDays enum to configure a different retention period (including infinite retention).
```java
// Configure log group for short retention
LogGroup logGroup = LogGroup.Builder.create(stack, "LogGroup")
        .retention(RetentionDays.ONE_WEEK)
        .build();
```

```java
// Configure log group for infinite retention
LogGroup logGroup = LogGroup.Builder.create(stack, "LogGroup")
        .retention(RetentionDays.INFINITE)
        .build();
```
The LogRetention construct is a way to control the retention period of log groups that are created outside of the CDK. The construct is usually used on log groups that are auto created by AWS services, such as AWS Lambda.
This is implemented using a CloudFormation custom resource which pre-creates the log group if it doesn't exist, and sets the specified log retention period (never expire, by default).
By default, the log group will be created in the same region as the stack. The logGroupRegion property can be used to configure log groups in other regions. This is typically useful when controlling retention for log groups auto-created by global services that publish their log group to a specific region, such as AWS Chatbot creating a log group in us-east-1.
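As a minimal sketch, LogRetention can be applied to a service-created log group like this; the log group name is an assumption for illustration:

```java
// Control retention of a log group created outside the CDK,
// e.g. the one AWS Lambda auto-creates for a function.
LogRetention.Builder.create(this, "LogRetention")
        .logGroupName("/aws/lambda/my-function") // assumed name
        .retention(RetentionDays.ONE_MONTH)
        .build();
```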
CloudWatch Resource Policies allow other AWS services or IAM Principals to put log events into the log groups.
A resource policy is automatically created when
addToResourcePolicy is called on the LogGroup for the first time:
```java
LogGroup logGroup = new LogGroup(this, "LogGroup");
logGroup.addToResourcePolicy(PolicyStatement.Builder.create()
        .actions(List.of("logs:CreateLogStream", "logs:PutLogEvents"))
        .principals(List.of(new ServicePrincipal("es.amazonaws.com")))
        .resources(List.of(logGroup.getLogGroupArn()))
        .build());
```
Or, more conveniently, write permissions to the log group can be granted as follows, which gives the same result as the example above:
```java
LogGroup logGroup = new LogGroup(this, "LogGroup");
logGroup.grantWrite(new ServicePrincipal("es.amazonaws.com"));
```
Be aware that any ARNs or tokenized values passed to the resource policy will be converted into AWS Account IDs. This is because CloudWatch Logs Resource Policies do not accept ARNs as principals, but they do accept Account ID strings. Non-ARN principals, like Service principals or Any principals, are accepted by CloudWatch.
Encrypting Log Groups
By default, log group data is always encrypted in CloudWatch Logs. You have the option to encrypt log group data using an AWS KMS customer master key (CMK) should you not wish to use the default AWS encryption. Keep in mind that if you decide to encrypt a log group, any service or IAM identity that needs to read the encrypted log streams in the future will require the same CMK to decrypt the data.
Here's a simple example of creating an encrypted Log Group using a KMS CMK.
```java
import software.amazon.awscdk.services.kms.*;

LogGroup.Builder.create(this, "LogGroup")
        .encryptionKey(new Key(this, "Key"))
        .build();
```
See the AWS documentation for more detailed information about encrypting CloudWatch Logs.
Subscriptions and Destinations
Log events matching a particular filter can be sent to either a Lambda function or a Kinesis stream.
If the Kinesis stream lives in a different account, a CrossAccountDestination object needs to be added in the destination account which will act as a proxy for the remote Kinesis stream. This object is automatically created for you if you use the CDK Kinesis library.
To add a subscription filter, create a SubscriptionFilter, initialize it with an appropriate Pattern (see below) and supply the intended destination:
```java
import software.amazon.awscdk.services.logs.destinations.*;

Function fn;
LogGroup logGroup;

SubscriptionFilter.Builder.create(this, "Subscription")
        .logGroup(logGroup)
        .destination(new LambdaDestination(fn))
        .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
        .build();
```
Metric Filters
CloudWatch Logs can extract and emit metrics based on a textual log stream. Depending on your needs, this may be a more convenient way of generating metrics for your application than making calls to CloudWatch Metrics yourself.
A MetricFilter either emits a fixed number every time it sees a log event
matching a particular pattern (see below), or extracts a number from the log
event and uses that as the metric value.
```java
LogGroup logGroup;

MetricFilter.Builder.create(this, "MetricFilter")
        .logGroup(logGroup)
        .metricNamespace("MyApp")
        .metricName("Latency")
        .filterPattern(FilterPattern.exists("$.latency"))
        .metricValue("$.latency")
        .build();
```
Remember that if you want to use a value from the log event as the metric value, you must mention it in your pattern somewhere.
A very simple MetricFilter can be created by using the logGroup.extractMetric() helper method:
```java
LogGroup logGroup;
logGroup.extractMetric("$.jsonField", "Namespace", "MetricName");
```
This will extract the value of jsonField wherever it occurs in JSON-structured log records in the LogGroup, and emit them to CloudWatch Metrics under the name Namespace/MetricName.
Exposing a Metric on a Metric Filter
You can expose a metric on a metric filter by calling the MetricFilter.metric() API. This has a default of statistic = 'avg' if the statistic is not set in the props.
```java
LogGroup logGroup;

MetricFilter mf = MetricFilter.Builder.create(this, "MetricFilter")
        .logGroup(logGroup)
        .metricNamespace("MyApp")
        .metricName("Latency")
        .filterPattern(FilterPattern.exists("$.latency"))
        .metricValue("$.latency")
        .build();

// Expose a metric from the metric filter
Metric metric = mf.metric();

// You can use the metric to create a new alarm
Alarm.Builder.create(this, "alarm from metric filter")
        .metric(metric)
        .threshold(100)
        .evaluationPeriods(2)
        .build();
```
Patterns
Patterns describe which log events match a subscription or metric filter. There are three types of patterns:
- Text patterns
- JSON patterns
- Space-delimited table patterns
All patterns are constructed by using static functions on the FilterPattern class.
In addition to the patterns above, the following special patterns exist:
FilterPattern.allEvents(): matches all log events.
FilterPattern.literal(string): if you already know what pattern expression to use, this function takes a string and will use that as the log pattern. For more information, see the Filter and Pattern Syntax.
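For instance, both special patterns can be assigned directly wherever a filter pattern is expected; a minimal sketch:

```java
// Match every log event in the group
IFilterPattern everything = FilterPattern.allEvents();

// Use a known CloudWatch Logs pattern expression verbatim
IFilterPattern literal = FilterPattern.literal("\"ERROR\"");
```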
Text patterns
Text patterns match if the literal strings appear in the text form of the log line.
FilterPattern.allTerms(term, term, ...): matches if all of the given terms (substrings) appear in the log event.
FilterPattern.anyTerm(term, term, ...): matches if any of the given terms (substrings) appear in the log event.
FilterPattern.anyTermGroup([term, term, ...], [term, term, ...], ...): matches if all of the terms in any of the groups (specified as arrays) match. This is an OR match.
```java
// Search for lines that contain both "ERROR" and "MainThread"
IFilterPattern pattern1 = FilterPattern.allTerms("ERROR", "MainThread");

// Search for lines that either contain both "ERROR" and "MainThread", or
// both "WARN" and "Deadlock".
IFilterPattern pattern2 = FilterPattern.anyTermGroup(
        List.of("ERROR", "MainThread"),
        List.of("WARN", "Deadlock"));
```
JSON patterns
JSON patterns apply if the log event is the JSON representation of an object (without any other characters, so it cannot include a prefix such as timestamp or log level). JSON patterns can make comparisons on the values inside the fields.
- Strings: the comparison operators allowed for strings are = and !=. String values can start or end with a * wildcard.
- Numbers: the comparison operators allowed for numbers are =, !=, <, <=, > and >=.
Fields in the JSON structure are identified by identifying the complete object as $ and then descending into it, such as $.field or $.list[0].field.
FilterPattern.stringValue(field, comparison, string): matches if the given field compares as indicated with the given string value.
FilterPattern.numberValue(field, comparison, number): matches if the given field compares as indicated with the given numerical value.
FilterPattern.isNull(field): matches if the given field exists and has the value null.
FilterPattern.notExists(field): matches if the given field is not in the JSON structure.
FilterPattern.exists(field): matches if the given field is in the JSON structure.
FilterPattern.booleanValue(field, boolean): matches if the given field is exactly the given boolean value.
FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the given JSON patterns match. This makes an AND combination of the given patterns.
FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the given JSON patterns match. This makes an OR combination of the given patterns.
```java
// Search for all events where the component field is equal to
// "HttpServer" and either error is true or the latency is higher
// than 1000.
JsonPattern pattern = FilterPattern.all(
        FilterPattern.stringValue("$.component", "=", "HttpServer"),
        FilterPattern.any(
                FilterPattern.booleanValue("$.error", true),
                FilterPattern.numberValue("$.latency", ">", 1000)));
```
Space-delimited table patterns
If the log events are rows of a space-delimited table, this pattern can be used to identify the columns in that structure and add conditions on any of them. The canonical example where you would apply this type of pattern is Apache server logs.
Text that is surrounded by "..." quotes or [...] square brackets will be treated as one column.
FilterPattern.spaceDelimited(column, column, ...): construct a SpaceDelimitedTextPattern object with the indicated columns. The columns map one-by-one to the columns found in the log event. The string "..." may be used to specify an arbitrary number of unnamed columns anywhere in the name list (but may only be specified once).
After constructing a SpaceDelimitedTextPattern, you can use the following two members to add restrictions:
pattern.whereString(field, comparison, string): add a string condition. The rules are the same as for JSON patterns.
pattern.whereNumber(field, comparison, number): add a numerical condition. The rules are the same as for JSON patterns.
Multiple restrictions can be added on the same column; they must all apply.
```java
// Search for all events where the component is "HttpServer" and the
// result code is not equal to 200.
SpaceDelimitedTextPattern pattern = FilterPattern
        .spaceDelimited("time", "component", "...", "result_code", "latency")
        .whereString("component", "=", "HttpServer")
        .whereNumber("result_code", "!=", 200);
```
Logs Insights Query Definition
Creates a query definition for CloudWatch Logs Insights.
```java
QueryDefinition.Builder.create(this, "QueryDefinition")
        .queryDefinitionName("MyQuery")
        .queryString(QueryString.Builder.create()
                .fields(List.of("@timestamp", "@message"))
                .sort("@timestamp desc")
                .limit(20)
                .build())
        .build();
```
Be aware that Log Group ARNs will always have the string :* appended to them, to match the behavior of the CloudFormation AWS::Logs::LogGroup resource.