Amazon EC2 Auto Scaling Construct Library


cfn-resources: Stable

cdk-constructs: Stable

This module is part of the AWS Cloud Development Kit project.

Auto Scaling Group

An AutoScalingGroup represents a number of instances on which you run your code. You pick the size of the fleet, the instance type and the OS image:

 // A fleet of burstable instances running the latest Amazon Linux image.
 // `vpc` is assumed to be an existing ec2.Vpc in scope.
 AutoScalingGroup.Builder.create(this, "ASG")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.BURSTABLE2, InstanceSize.MICRO))
         .machineImage(new AmazonLinuxImage())
         .build();

NOTE: AutoScalingGroup has a property called allowAllOutbound (allowing the instances to contact the internet), which is set to true by default. Be sure to set this to false if you don't want your instances to be able to initiate arbitrary connections. Alternatively, you can specify an existing security group to attach to the instances that are launched, rather than have the group create a new one.

 // Attach an existing security group instead of creating a new one.
 SecurityGroup mySecurityGroup = SecurityGroup.Builder.create(this, "SecurityGroup")
         .vpc(vpc)
         .build();
 AutoScalingGroup.Builder.create(this, "ASG")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.BURSTABLE2, InstanceSize.MICRO))
         .machineImage(new AmazonLinuxImage())
         .allowAllOutbound(false)
         .securityGroup(mySecurityGroup)
         .build();

Machine Images (AMIs)

AMIs control the OS that gets launched when you start your EC2 instance. The EC2 library contains constructs to select the AMI you want to use.

Depending on the type of AMI, you select it in a different way.

The latest versions of the Amazon Linux and Microsoft Windows images are selectable by instantiating one of these classes:

 // Pick a Windows edition to use
 WindowsImage windows = new WindowsImage(WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE);

 // Pick the right Amazon Linux edition. All arguments shown are optional
 // and will default to these values when omitted.
 AmazonLinuxImage amznLinux = AmazonLinuxImage.Builder.create()
         .generation(AmazonLinuxGeneration.AMAZON_LINUX)
         .edition(AmazonLinuxEdition.STANDARD)
         .virtualization(AmazonLinuxVirt.HVM)
         .storage(AmazonLinuxStorage.GENERAL_PURPOSE)
         .build();

 // For other custom (Linux) images, instantiate a `GenericLinuxImage` with
 // a map giving the AMI to use for each region:
 GenericLinuxImage linux = new GenericLinuxImage(Map.of(
         "us-east-1", "ami-97785bed",
         "eu-west-1", "ami-12345678"));

NOTE: The Amazon Linux images selected will be cached in your cdk.json, so that your AutoScalingGroups don't automatically change out from under you when you're making unrelated changes. To update to the latest version of Amazon Linux, remove the cache entry from the context section of your cdk.json.

We will add command-line options to make this step easier in the future.

AutoScaling Instance Counts

AutoScalingGroups make it possible to raise and lower the number of instances in the group, in response to (or in advance of) changes in workload.

When you create your AutoScalingGroup, you specify a minCapacity and a maxCapacity. AutoScaling policies that respond to metrics will never go higher or lower than the indicated capacity (but scheduled scaling actions might, see below).
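For example, a minimal sketch of pinning the capacity range at construction time (the vpc variable is assumed to be an existing ec2.Vpc in scope):

```java
// Scaling policies will keep the group between 1 and 10 instances.
// If desiredCapacity is omitted, it defaults to minCapacity.
AutoScalingGroup.Builder.create(this, "ASG")
        .vpc(vpc)
        .instanceType(InstanceType.of(InstanceClass.BURSTABLE2, InstanceSize.MICRO))
        .machineImage(new AmazonLinuxImage())
        .minCapacity(1)
        .maxCapacity(10)
        .build();
```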

There are three ways to scale your capacity:

- In response to a metric, in deterministic steps (step scaling)
- By trying to keep a certain metric around a given value (target tracking scaling)
- On a schedule (scheduled scaling)

The general pattern of autoscaling will look like this (each method is shown in detail below):

 AutoScalingGroup autoScalingGroup = AutoScalingGroup.Builder.create(this, "ASG")
         // ...
         .build();

 // Step scaling
 autoScalingGroup.scaleOnMetric(...);

 // Target tracking scaling
 autoScalingGroup.scaleOnCpuUtilization(...);

 // Scheduled scaling
 autoScalingGroup.scaleOnSchedule(...);

Step Scaling

This type of scaling scales in and out in deterministic steps that you configure, in response to metric values. For example, your scaling strategy to scale in response to a metric that represents your average worker pool usage might look like this:

  Scaling        -1          (no change)          +1       +3
             │        │                       │        │        │
             │        │                       │        │        │
 Worker use  0%      10%                     50%       70%     100%

(Note that this is not necessarily a recommended scaling strategy, but it's a possible one. You will have to determine what thresholds are right for you).

Note that in order to set up this scaling strategy, you will have to emit a metric representing your worker utilization from your instances. After that, you would configure the scaling something like this:

 // Emit a metric representing worker utilization from your instances
 // (the namespace and metric name here are hypothetical; use whatever
 // your instances actually publish).
 Metric workerUtilizationMetric = Metric.Builder.create()
         .namespace("MyService")
         .metricName("WorkerUtilization")
         .build();

 autoScalingGroup.scaleOnMetric("ScaleToCPU", BasicStepScalingPolicyProps.builder()
         .metric(workerUtilizationMetric)
         .scalingSteps(asList(
                 ScalingInterval.builder().upper(10).change(-1).build(),
                 ScalingInterval.builder().lower(50).change(+1).build(),
                 ScalingInterval.builder().lower(70).change(+3).build()))
         // Change this to AdjustmentType.PERCENT_CHANGE_IN_CAPACITY to interpret the
         // 'change' numbers above as percentages instead of capacity counts.
         .adjustmentType(AdjustmentType.CHANGE_IN_CAPACITY)
         .build());

The AutoScaling construct library will create the required CloudWatch alarms and AutoScaling policies for you.

Target Tracking Scaling

This type of scaling scales in and out in order to keep a metric around a value you prefer. There are four types of predefined metrics you can track, or you can choose to track a custom metric. If you do choose to track a custom metric, be aware that the metric has to represent instance utilization in some way (AutoScaling will scale out if the metric is higher than the target, and scale in if the metric is lower than the target).

If you configure multiple target tracking policies, AutoScaling will use the one that yields the highest capacity.

The following example scales to keep the CPU usage of your instances around 50% utilization:

 autoScalingGroup.scaleOnCpuUtilization("KeepSpareCPU", CpuUtilizationScalingProps.builder()
         .targetUtilizationPercent(50)
         .build());

To scale on average network traffic in and out of your instances:

 autoScalingGroup.scaleOnIncomingBytes("LimitIngressPerInstance", NetworkUtilizationScalingProps.builder()
         .targetBytesPerSecond(10 * 1024 * 1024) // 10 MiB/s
         .build());
 autoScalingGroup.scaleOnOutgoingBytes("LimitEgressPerInstance", NetworkUtilizationScalingProps.builder()
         .targetBytesPerSecond(10 * 1024 * 1024) // 10 MiB/s
         .build());

To scale on the average request count per instance (only works for AutoScalingGroups that have been attached to Application Load Balancers):

 autoScalingGroup.scaleOnRequestCount("LimitRPS", RequestCountScalingProps.builder()
         .targetRequestsPerSecond(1000)
         .build());

Scheduled Scaling

This type of scaling is used to change capacities based on time. It works by changing the minCapacity, maxCapacity and desiredCapacity of the AutoScalingGroup, and so can be used for two purposes:

- Scaling in and out on a schedule, by setting the minCapacity high or the maxCapacity low.
- Creating a capacity floor or ceiling for certain time windows, while still letting the regular scaling policies adjust the capacity within those bounds.

A schedule is expressed as a cron expression. The Schedule class has a cron method to help build cron expressions.

The following example scales the fleet out in the morning, going back to natural scaling (all the way down to 1 instance if necessary) at night:

 autoScalingGroup.scaleOnSchedule("PrescaleInTheMorning", BasicScheduledActionProps.builder()
         .schedule(Schedule.cron(CronOptions.builder().hour("8").minute("0").build()))
         .minCapacity(20)
         .build());
 autoScalingGroup.scaleOnSchedule("AllowDownscalingAtNight", BasicScheduledActionProps.builder()
         .schedule(Schedule.cron(CronOptions.builder().hour("20").minute("0").build()))
         .minCapacity(1)
         .build());

Configuring Instances using CloudFormation Init

It is possible to use the CloudFormation Init mechanism to configure the instances in the AutoScalingGroup. You can write files to it, run commands, start services, etc. See the documentation of AWS::CloudFormation::Init and the documentation of CDK's aws-ec2 library for more information.

When you specify a CloudFormation Init configuration for an AutoScalingGroup:

- you must also specify signals to configure how long CloudFormation should wait for the instances to successfully configure themselves, and
- you should also specify an updatePolicy to configure how instances should be replaced when a new version of the configuration is deployed.

Here's an example of using CloudFormation Init to write a file to the instance hosts on startup:

 AutoScalingGroup.Builder.create(this, "ASG")
         // ...
         .init(CloudFormationInit.fromElements(
                 InitFile.fromString("/etc/my_instance", "This got written during instance startup")))
         // Wait up to ten minutes for the instances to signal success
         .signals(Signals.waitForAll(SignalsOptions.builder()
                 .timeout(Duration.minutes(10))
                 .build()))
         .build();


Signals

In normal operation, CloudFormation will send a Create or Update command to an AutoScalingGroup and proceed with the rest of the deployment without waiting for the instances in the AutoScalingGroup.

Configure signals to tell CloudFormation to wait for a specific number of instances in the AutoScalingGroup to have been started (or failed to start) before moving on. An instance is supposed to execute the cfn-signal program as part of its startup to indicate whether it was started successfully or not.

If you use the CloudFormation Init support described in the previous section, the appropriate call to cfn-signal is automatically added to the AutoScalingGroup's UserData. If you use signals without CloudFormation Init, you are responsible for adding such a call to the UserData yourself.

The following types of Signals are available:

- Signals.waitForAll(): wait for all of the desiredCapacity amount of instances to have started.
- Signals.waitForMinCapacity(): wait for the minCapacity amount of instances to have started.
- Signals.waitForCount(count): wait for a specific number of instances to have started.

There are two options you can configure:

- timeout: the maximum time to wait for the signals to arrive; the deployment fails if they don't arrive in time.
- minSuccessPercentage: the percentage of instances that must signal success for the deployment to succeed (default: 100).
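As a sketch, signals and their options are configured on the group at construction time (the option values here are illustrative, not recommendations):

```java
// Wait for at least minCapacity instances to report success before
// CloudFormation continues; tolerate up to 20% failed signals.
AutoScalingGroup.Builder.create(this, "ASG")
        // ...
        .signals(Signals.waitForMinCapacity(SignalsOptions.builder()
                .timeout(Duration.minutes(10))
                .minSuccessPercentage(80)
                .build()))
        .build();
```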

Update Policy

The update policy describes what should happen to running instances when the definition of the AutoScalingGroup is changed. For example, if you add a command to the UserData of an AutoScalingGroup, do the existing instances get replaced with new instances that have executed the new UserData? Or do the "old" instances just keep on running?

It is recommended to always use an update policy, otherwise the current state of your instances also depends on the previous state of your instances, rather than just on your source code. This degrades the reproducibility of your deployments.

The following update policies are available:

- UpdatePolicy.replacingUpdate(): create a fresh AutoScalingGroup with the new configuration, then delete the old one.
- UpdatePolicy.rollingUpdate(): replace the instances in the existing AutoScalingGroup in batches.
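For example, a sketch of opting in to a rolling update (batch sizes and pauses can be tuned by passing RollingUpdateOptions instead of using the defaults):

```java
// Replace instances in small batches, keeping the rest in service.
AutoScalingGroup.Builder.create(this, "ASG")
        // ...
        .updatePolicy(UpdatePolicy.rollingUpdate())
        .build();
```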

Allowing Connections

See the documentation of the @aws-cdk/aws-ec2 package for more information about allowing connections between resources backed by instances.
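As a brief sketch, the group exposes an ec2 Connections object that can be used to open ports on the instances' security groups:

```java
// Allow anyone on the internet to reach the instances on port 80.
autoScalingGroup.getConnections().allowFromAnyIpv4(Port.tcp(80), "Allow inbound HTTP");
```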

Max Instance Lifetime

To enable max instance lifetime support, specify the maxInstanceLifetime property for the AutoScalingGroup resource. The value must be between 7 and 365 days (inclusive). To clear a previously set value, leave this property undefined.
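For example (assuming the property accepts a Duration, in line with the rest of the library's API):

```java
// Rotate instances out of service after at most seven days.
AutoScalingGroup.Builder.create(this, "ASG")
        // ...
        .maxInstanceLifetime(Duration.days(7))
        .build();
```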

Instance Monitoring

To disable detailed instance monitoring, specify the instanceMonitoring property for the AutoScalingGroup resource as Monitoring.BASIC. Otherwise, detailed monitoring will be enabled.
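A minimal sketch:

```java
// Basic monitoring reports metrics at 5-minute granularity (free);
// omitting this property leaves detailed (1-minute) monitoring enabled.
AutoScalingGroup.Builder.create(this, "ASG")
        // ...
        .instanceMonitoring(Monitoring.BASIC)
        .build();
```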

Monitoring Group Metrics

Group metrics are used to monitor group-level properties; they describe the group rather than any of its instances (e.g. GroupMaxSize, the group maximum size). To enable group metrics monitoring, use the groupMetrics property. All group metrics are reported at a granularity of 1 minute at no additional charge.

See EC2 docs for a list of all available group metrics.

To enable group metrics monitoring using the groupMetrics property:

 // Enable monitoring of all group metrics
 AutoScalingGroup.Builder.create(this, "ASG")
         // ...
         .groupMetrics(asList(GroupMetrics.all()))
         .build();

 // Enable monitoring for a subset of group metrics
 AutoScalingGroup.Builder.create(this, "ASG")
         // ...
         .groupMetrics(asList(new GroupMetrics(GroupMetric.MIN_SIZE, GroupMetric.MAX_SIZE)))
         .build();

Protecting new instances from being terminated on scale-in

By default, Auto Scaling can terminate an instance at any time after launch when scaling in an Auto Scaling Group, subject to the group's termination policy.

However, you may wish to protect newly-launched instances from being scaled in if they are going to run critical applications that should not be prematurely terminated. EC2 Capacity Providers for Amazon ECS require this attribute to be set to true.

 AutoScalingGroup.Builder.create(this, "ASG")
         // ...
         .newInstancesProtectedFromScaleIn(true)
         .build();

