Package software.amazon.awscdk.services.ec2



Amazon EC2 Construct Library

The aws-cdk-lib/aws-ec2 package contains primitives for setting up networking and instances.

 import software.amazon.awscdk.services.ec2.*;
 

VPC

Most projects need a Virtual Private Cloud to provide security by means of network partitioning. This is achieved by creating an instance of Vpc:

 Vpc vpc = new Vpc(this, "VPC");
 

All default constructs require EC2 instances to be launched inside a VPC, so you should generally start by defining a VPC whenever you need to launch instances for your project.

Subnet Types

A VPC consists of one or more subnets that instances can be placed into. CDK distinguishes three different subnet types:

  • Public (SubnetType.PUBLIC) - public subnets connect directly to the Internet using an Internet Gateway. If you want your instances to have a public IP address and be directly reachable from the Internet, you must place them in a public subnet.
  • Private with Internet Access (SubnetType.PRIVATE_WITH_EGRESS) - instances in private subnets are not directly routable from the Internet, and you must provide a way to connect out to the Internet. By default, a NAT gateway is created in every public subnet for maximum availability. Be aware that you will be charged for NAT gateways. Alternatively, you can set natGateways:0 and provide your own egress configuration (e.g. through a Transit Gateway).
  • Isolated (SubnetType.PRIVATE_ISOLATED) - isolated subnets do not route from or to the Internet, and as such do not require NAT gateways. They can only connect to or be connected to from other instances in the same VPC. A default VPC configuration will not include isolated subnets.

A default VPC configuration will create public and private subnets. However, if natGateways:0 and subnetConfiguration is undefined, the default VPC configuration will create public and isolated subnets instead. See Advanced Subnet Configuration below for information on how to change the default subnet configuration.

Constructs using the VPC will "launch instances" (or more accurately, create Elastic Network Interfaces) into one or more of the subnets. They all accept a property called subnetSelection (sometimes called vpcSubnets) to allow you to select in what subnet to place the ENIs, usually defaulting to private subnets if the property is omitted.

If you would like to save on the cost of NAT gateways, you can use isolated subnets instead of private subnets (as described in Advanced Subnet Configuration). If you need private instances to have internet connectivity, another option is to reduce the number of NAT gateways created by setting the natGateways property to a lower value (the default is one NAT gateway per availability zone). Be aware that this may have availability implications for your application.
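
For example, a minimal sketch that caps the VPC at a single NAT gateway (the construct ID is illustrative):

 Vpc.Builder.create(this, "TheVPC")
         .natGateways(1)
         .build();
 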

Read more about subnets.

Control over availability zones

By default, a VPC will spread over at most 3 Availability Zones available to it. To change the number of Availability Zones that the VPC will spread over, specify the maxAzs property when defining it.
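
For example, a minimal sketch spreading the VPC over two Availability Zones (the construct ID is illustrative):

 Vpc.Builder.create(this, "TheVPC")
         .maxAzs(2)
         .build();
 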

The number of Availability Zones that are available depends on the region and account of the Stack containing the VPC. If the region and account are specified on the Stack, the CLI will look up the existing Availability Zones and get an accurate count. The result of this operation will be written to a file called cdk.context.json. You must commit this file to source control so that the lookup values are available in non-privileged environments such as CI build steps, and to ensure your template builds are repeatable.

If region and account are not specified, the stack could be deployed anywhere, so it has to make a safe choice, limiting itself to 2 Availability Zones.

Therefore, to get the VPC to spread over 3 or more availability zones, you must specify the environment where the stack will be deployed.
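
For example, a minimal sketch of pinning the stack to a concrete environment (the account and region values are placeholders):

 import software.amazon.awscdk.App;
 import software.amazon.awscdk.Environment;
 import software.amazon.awscdk.Stack;
 import software.amazon.awscdk.StackProps;
 
 
 App app = new App();
 Stack stack = new Stack(app, "MyStack", StackProps.builder()
         .env(Environment.builder()
                 .account("123456789012")
                 .region("us-east-1")
                 .build())
         .build());
 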

You can gain full control over the availability zones selection strategy by overriding the Stack's get availabilityZones() method:

 // This example is only available in TypeScript
 
 class MyStack extends Stack {
 
   constructor(scope: Construct, id: string, props?: StackProps) {
     super(scope, id, props);
 
     // ...
   }
 
   get availabilityZones(): string[] {
     return ['us-west-2a', 'us-west-2b'];
   }
 
 }
 

Note that overriding the get availabilityZones() method will override the default behavior for all constructs defined within the Stack.

Choosing subnets for resources

When creating resources that create Elastic Network Interfaces (such as databases or instances), there is an option to choose which subnets to place them in. For example, a VPC endpoint by default is placed into a subnet in every availability zone, but you can override which subnets to use. The property is typically called one of subnets, vpcSubnets or subnetSelection.

The example below will place the endpoint into two AZs (us-east-1a and us-east-1c), in Isolated subnets:

 Vpc vpc;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("com.amazonaws.vpce.us-east-1.vpce-svc-uuddlrlrbastrtsvc", 443))
         .subnets(SubnetSelection.builder()
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
                 .availabilityZones(List.of("us-east-1a", "us-east-1c"))
                 .build())
         .build();
 

You can also specify specific subnet objects for granular control:

 Vpc vpc;
 Subnet subnet1;
 Subnet subnet2;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("com.amazonaws.vpce.us-east-1.vpce-svc-uuddlrlrbastrtsvc", 443))
         .subnets(SubnetSelection.builder()
                 .subnets(List.of(subnet1, subnet2))
                 .build())
         .build();
 

Which subnets are selected is evaluated as follows:

  • subnets: if specific subnet objects are supplied, these are selected, and no other logic is used.
  • subnetType/subnetGroupName: otherwise, a set of subnets is selected by supplying either type or name:

    • subnetType will select all subnets of the given type.
    • subnetGroupName should be used to distinguish between multiple groups of subnets of the same type (for example, you may want to separate your application instances and your RDS instances into two distinct groups of Isolated subnets).
    • If neither are given, the first available subnet group of a given type that exists in the VPC will be used, in this order: Private, then Isolated, then Public. In short: by default ENIs will preferentially be placed in subnets not connected to the Internet.
  • availabilityZones/onePerAz: finally, some availability-zone based filtering may be done. This filtering by availability zones will only be possible if the VPC has been created or looked up in a non-environment agnostic stack (so account and region have been set and availability zones have been looked up).

    • availabilityZones: only the specific subnets from the selected subnet groups that are in the given availability zones will be returned.
    • onePerAz: per availability zone, a maximum of one subnet will be returned (Useful for resource types that do not allow creating two ENIs in the same availability zone).
  • subnetFilters: additional filtering on subnets using any number of user-provided filters which extend SubnetFilter (see the sketch after this list). The following methods on the SubnetFilter class can be used to create a filter:

    • byIds: chooses subnets from a list of ids
    • availabilityZones: chooses subnets in the provided list of availability zones
    • onePerAz: chooses at most one subnet per availability zone
    • containsIpAddresses: chooses a subnet which contains any of the listed ip addresses
    • byCidrMask: chooses subnets that have the provided CIDR netmask
    • byCidrRanges: chooses subnets which are inside any of the specified CIDR ranges
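
As a minimal sketch of subnetFilters (the endpoint construct ID and the filter values are illustrative):

 Vpc vpc;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "FilteredEndpoint")
         .vpc(vpc)
         .service(InterfaceVpcEndpointAwsService.ECR_DOCKER)
         .subnets(SubnetSelection.builder()
                 .subnetFilters(List.of(SubnetFilter.byCidrMask(24), SubnetFilter.onePerAz()))
                 .build())
         .build();
 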

Using NAT instances

By default, the Vpc construct will create NAT gateways for you, which are managed by AWS. If you would prefer to use your own managed NAT instances instead, specify a different value for the natGatewayProvider property, as shown in the examples below.

The construct will automatically select the latest version of Amazon Linux 2023. If you prefer to use a custom AMI, use machineImage: MachineImage.genericLinux({ ... }) and configure the right AMI ID for the regions you want to deploy to.

Warning The NAT instances created using this method will be unmonitored. They are not part of an Auto Scaling Group, and if they become unavailable or are terminated for any reason, will not be restarted or replaced.

By default, the NAT instances will route all traffic. To control what traffic gets routed, pass a custom value for defaultAllowedTraffic and access the NatInstanceProviderV2.connections member after having passed the NAT provider to the VPC:

 InstanceType instanceType;
 
 
 NatInstanceProviderV2 provider = NatProvider.instanceV2(NatInstanceProps.builder()
         .instanceType(instanceType)
         .defaultAllowedTraffic(NatTrafficDirection.OUTBOUND_ONLY)
         .build());
 Vpc.Builder.create(this, "TheVPC")
         .natGatewayProvider(provider)
         .build();
 provider.getConnections().allowFrom(Peer.ipv4("1.2.3.4/8"), Port.HTTP);
 

You can also customize the characteristics of your NAT instances, including their security group, as well as their initialization scripts:

 Bucket bucket;
 
 
 UserData userData = UserData.forLinux();
 userData.addCommands(NatInstanceProviderV2.DEFAULT_USER_DATA_COMMANDS.toArray(new String[0]));
 userData.addCommands(
         "echo \"hello world!\" > hello.txt",
         String.format("aws s3 cp hello.txt s3://%s", bucket.getBucketName()));
 
 NatInstanceProviderV2 provider = NatProvider.instanceV2(NatInstanceProps.builder()
         .instanceType(new InstanceType("t3.small"))
         .creditSpecification(CpuCredits.UNLIMITED)
         .defaultAllowedTraffic(NatTrafficDirection.NONE)
         .build());
 
 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         .natGatewayProvider(provider)
         .natGateways(2)
         .build();
 
 SecurityGroup securityGroup = SecurityGroup.Builder.create(this, "SecurityGroup").vpc(vpc).build();
 securityGroup.addEgressRule(Peer.anyIpv4(), Port.tcp(443));
 for (Instance gateway : provider.getGatewayInstances()) {
     bucket.grantWrite(gateway);
     gateway.addSecurityGroup(securityGroup);
 }
 

When using a NAT provider, the natGateways property controls the number of NAT instances:

 // Configure the `natGatewayProvider` when defining a Vpc
 NatInstanceProvider natGatewayProvider = NatProvider.instance(NatInstanceProps.builder()
         .instanceType(new InstanceType("t3.small"))
         .build());
 
 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .natGatewayProvider(natGatewayProvider)
 
         // The 'natGateways' parameter now controls the number of NAT instances
         .natGateways(2)
         .build();
 

The V1 NatProvider.instance construct will use the AWS official NAT instance AMI, which has already reached EOL on Dec 31, 2023. For more information, see the following blog post: Amazon Linux AMI end of life.

 InstanceType instanceType;
 
 
 NatInstanceProvider provider = NatProvider.instance(NatInstanceProps.builder()
         .instanceType(instanceType)
         .defaultAllowedTraffic(NatTrafficDirection.OUTBOUND_ONLY)
         .build());
 Vpc.Builder.create(this, "TheVPC")
         .natGatewayProvider(provider)
         .build();
 provider.getConnections().allowFrom(Peer.ipv4("1.2.3.4/8"), Port.HTTP);
 

Associate Public IP Address to NAT Instance

You can choose to associate a public IP address with a NAT instance V2 by specifying associatePublicIpAddress as follows:

 NatInstanceProviderV2 natGatewayProvider = NatProvider.instanceV2(NatInstanceProps.builder()
         .instanceType(new InstanceType("t3.small"))
         .associatePublicIpAddress(true)
         .build());
 

In certain scenarios where the public subnet has mapPublicIpOnLaunch set to false, NAT instances do not get public IP addresses assigned, which results in non-working NAT instances, as a NAT instance requires a public IP address to enable outbound internet connectivity. Set associatePublicIpAddress to true to solve this problem.

IP Address Management

The VPC spans a supernet IP range, which contains the non-overlapping IPs of its contained subnets. Possible sources for this IP range are:

  • You specify an IP range directly by specifying a CIDR
  • You allocate an IP range of a given size automatically from AWS IPAM

By default the Vpc will allocate the 10.0.0.0/16 address range, which will be exhaustively spread across all subnets in the subnet configuration. This behavior can be changed by passing an object that implements IIpAddresses to the ipAddresses property of a Vpc. See the subsequent sections for the options.

Be aware that if you don't explicitly reserve subnet groups in subnetConfiguration, the address space will be fully allocated! If you predict you may need to add more subnet groups later, add them early on and set reserved: true (see the "Advanced Subnet Configuration" section for more information).

Specifying a CIDR directly

Use IpAddresses.cidr to define a Cidr range for your Vpc directly in code:

 import software.amazon.awscdk.services.ec2.IpAddresses;
 
 
 Vpc.Builder.create(this, "TheVPC")
         .ipAddresses(IpAddresses.cidr("10.0.1.0/20"))
         .build();
 

Space will be allocated to subnets in the following order:

  • First, space is allocated for all subnet groups that explicitly have a cidrMask set as part of their configuration (including reserved subnets).
  • Afterwards, any remaining space is divided evenly between the rest of the subnets (if any).

The argument to IpAddresses.cidr may not be a token, and concrete Cidr values are generated in the synthesized CloudFormation template.

Allocating an IP range from AWS IPAM

Amazon VPC IP Address Manager (IPAM) manages a large IP space, from which chunks can be allocated for use in the Vpc. For information on Amazon VPC IP Address Manager please see the official documentation. An example of allocating from AWS IPAM looks like this:

 import software.amazon.awscdk.services.ec2.IpAddresses;
 
 CfnIPAMPool pool;
 
 
 Vpc.Builder.create(this, "TheVPC")
         .ipAddresses(IpAddresses.awsIpamAllocation(AwsIpamProps.builder()
                 .ipv4IpamPoolId(pool.getRef())
                 .ipv4NetmaskLength(18)
                 .defaultSubnetIpv4NetmaskLength(24)
                 .build()))
         .build();
 

IpAddresses.awsIpamAllocation requires the following:

  • ipv4IpamPoolId, the id of an IPAM Pool from which the VPC range should be allocated.
  • ipv4NetmaskLength, the size of the IP range that will be requested from the Pool at deploy time.
  • defaultSubnetIpv4NetmaskLength, the size of subnets in groups that don't have cidrMask set.

With this method of IP address management, no attempt is made to guess at subnet group sizes or to exhaustively allocate the IP range. All subnet groups must have an explicit cidrMask set as part of their subnet configuration, or defaultSubnetIpv4NetmaskLength must be set for a default size. If not, synthesis will fail and you must provide one or the other.

Dual Stack configuration

To allocate both IPv4 and IPv6 addresses in your VPC, you can configure your VPC to have a dual stack protocol.

 Vpc.Builder.create(this, "DualStackVpc")
         .ipProtocol(IpProtocol.DUAL_STACK)
         .build();
 

By default, a dual stack VPC will create an Amazon provided IPv6 /56 CIDR block associated to the VPC. It will then assign /64 portions of the block to each subnet. For each subnet, auto-assigning an IPv6 address will be enabled, and auto-assigning a public IPv4 address will be disabled. An egress only internet gateway will be created for PRIVATE_WITH_EGRESS subnets, and IPv6 routes will be added for IGWs and EIGWs.

Disabling the auto-assigning of a public IPv4 address by default can avoid the cost of public IPv4 addresses, which AWS charges for starting February 1, 2024. For use cases that need an IPv4 address, the mapPublicIpOnLaunch property in subnetConfiguration can be set to auto-assign the IPv4 address. Note that private IPv4 address allocation will not be changed.

See Advanced Subnet Configuration for all IPv6 specific properties.

Reserving availability zones

There are situations where the IP space for availability zones will need to be reserved. This is useful when availability zones need to be added after the VPC is originally deployed, without causing IP renumbering for the existing availability zones' subnets. The IP space for n availability zones can be reserved by setting reservedAzs to n in the VPC props, as shown below:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         .cidr("10.0.0.0/21")
         .maxAzs(3)
         .reservedAzs(1)
         .build();
 

In the example above, the subnets for the reserved availability zone are not actually provisioned, but their IP space is still reserved. If, in the future, new availability zones need to be provisioned, then we would decrement the value of reservedAzs and increment maxAzs or availabilityZones accordingly. This action would not cause the IP addresses of existing subnets to get renumbered; rather, the IP space that was previously reserved will be used for the new availability zones' subnets.

Advanced Subnet Configuration

If the default VPC configuration (public and private subnets spanning the size of the VPC) doesn't suffice for you, you can configure what subnets to create by specifying the subnetConfiguration property. It allows you to configure the number and size of all subnets. Specifying an advanced subnet configuration could look like this:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         // 'IpAddresses' configures the IP range and size of the entire VPC.
         // The IP space will be divided based on configuration for the subnets.
         .ipAddresses(IpAddresses.cidr("10.0.0.0/21"))
 
         // 'maxAzs' configures the maximum number of availability zones to use.
         // If you want to specify the exact availability zones you want the VPC
         // to use, use `availabilityZones` instead.
         .maxAzs(3)
 
         // 'subnetConfiguration' specifies the "subnet groups" to create.
         // Every subnet group will have a subnet for each AZ, so this
         // configuration will create `3 groups × 3 AZs = 9` subnets.
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 // 'subnetType' controls Internet access, as described above.
                 .subnetType(SubnetType.PUBLIC)
 
                 // 'name' is used to name this particular subnet group. You will have to
                 // use the name for subnet selection if you have more than one subnet
                 // group of the same type.
                 .name("Ingress")
 
                 // 'cidrMask' specifies the IP addresses in the range of individual
                 // subnets in the group. Each of the subnets in this group will contain
                 // `2^(32 address bits - 24 subnet bits) - 2 reserved addresses = 254`
                 // usable IP addresses.
                 //
                 // If 'cidrMask' is left out the available address space is evenly
                 // divided across the remaining subnet groups.
                 .cidrMask(24)
                 .build(), SubnetConfiguration.builder()
                 .cidrMask(24)
                 .name("Application")
                 .subnetType(SubnetType.PRIVATE_WITH_EGRESS)
                 .build(), SubnetConfiguration.builder()
                 .cidrMask(28)
                 .name("Database")
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
 
                 // 'reserved' can be used to reserve IP address space. No resources will
                 // be created for this subnet, but the IP range will be kept available for
                 // future creation of this subnet, or even for future subdivision.
                 .reserved(true)
                 .build()))
         .build();
 

The example above is one possible configuration, but the user can use the constructs above to implement many other network configurations.

The Vpc from the above configuration in a Region with three availability zones will be the following:

Subnet Name        | Type     | IP Block     | AZ | Features
-------------------|----------|--------------|----|--------------------------------
IngressSubnet1     | PUBLIC   | 10.0.0.0/24  | #1 | NAT Gateway
IngressSubnet2     | PUBLIC   | 10.0.1.0/24  | #2 | NAT Gateway
IngressSubnet3     | PUBLIC   | 10.0.2.0/24  | #3 | NAT Gateway
ApplicationSubnet1 | PRIVATE  | 10.0.3.0/24  | #1 | Route to NAT in IngressSubnet1
ApplicationSubnet2 | PRIVATE  | 10.0.4.0/24  | #2 | Route to NAT in IngressSubnet2
ApplicationSubnet3 | PRIVATE  | 10.0.5.0/24  | #3 | Route to NAT in IngressSubnet3
DatabaseSubnet1    | ISOLATED | 10.0.6.0/28  | #1 | Only routes within the VPC
DatabaseSubnet2    | ISOLATED | 10.0.6.16/28 | #2 | Only routes within the VPC
DatabaseSubnet3    | ISOLATED | 10.0.6.32/28 | #3 | Only routes within the VPC

Dual Stack Configurations

Here is a breakdown of IPv4 and IPv6 specific subnetConfiguration properties in a dual stack VPC:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         .ipProtocol(IpProtocol.DUAL_STACK)
 
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 // general properties
                 .name("Public")
                 .subnetType(SubnetType.PUBLIC)
                 .reserved(false)
 
                 // IPv4 specific properties
                 .mapPublicIpOnLaunch(true)
                 .cidrMask(24)
 
                 // new IPv6 specific property
                 .ipv6AssignAddressOnCreation(true)
                 .build()))
         .build();
 

The property mapPublicIpOnLaunch controls if a public IPv4 address will be assigned. This defaults to false for dual stack VPCs to avoid inadvertent costs of having the public address. However, a public IP must be enabled (or otherwise configured with BYOIP or IPAM) in order for services that rely on the address to function.

The ipv6AssignAddressOnCreation property controls the same behavior for the IPv6 address. It defaults to true.

Using IPv6 specific properties in an IPv4 only VPC will result in errors.

Accessing the Internet Gateway

If you need access to the internet gateway, you can get its ID like so:

 Vpc vpc;
 
 
 String igwId = vpc.getInternetGatewayId();
 

For a VPC with only ISOLATED subnets, this value will be undefined.

This is only supported for VPCs created in the stack - currently you're unable to get the ID for imported VPCs. To do that you'd have to specifically look up the Internet Gateway by name, which would require knowing the name beforehand.

This can be useful for configuring routing using a combination of gateways: for more information see Routing below.

Disabling the creation of the default internet gateway

If you need to control the creation of the internet gateway explicitly, you can disable the creation of the default one using the createInternetGateway property:

 Vpc vpc = Vpc.Builder.create(this, "VPC")
         .createInternetGateway(false)
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .subnetType(SubnetType.PUBLIC)
                 .name("Public")
                 .build()))
         .build();
 

Routing

It's possible to add routes to any subnets using the addRoute() method. If for example you want an isolated subnet to have a static route via the default Internet Gateway created for the public subnet - perhaps for routing a VPN connection - you can do so like this:

 Vpc vpc = Vpc.Builder.create(this, "VPC")
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .subnetType(SubnetType.PUBLIC)
                 .name("Public")
                 .build(), SubnetConfiguration.builder()
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
                 .name("Isolated")
                 .build()))
         .build();
 
 ((Subnet)vpc.getIsolatedSubnets().get(0)).addRoute("StaticRoute", AddRouteOptions.builder()
         .routerId(vpc.getInternetGatewayId())
         .routerType(RouterType.GATEWAY)
         .destinationCidrBlock("8.8.8.8/32")
         .build());
 

Note that we cast to Subnet here because the list of subnets only returns an ISubnet.

Reserving subnet IP space

There are situations where the IP space for a subnet or number of subnets will need to be reserved. This is useful in situations where subnets need to be added after the VPC is originally deployed, without causing IP renumbering for existing subnets. The IP space for a subnet may be reserved by setting the reserved subnetConfiguration property to true, as shown below:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         .natGateways(1)
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .cidrMask(26)
                 .name("Public")
                 .subnetType(SubnetType.PUBLIC)
                 .build(), SubnetConfiguration.builder()
                 .cidrMask(26)
                 .name("Application1")
                 .subnetType(SubnetType.PRIVATE_WITH_EGRESS)
                 .build(), SubnetConfiguration.builder()
                 .cidrMask(26)
                 .name("Application2")
                 .subnetType(SubnetType.PRIVATE_WITH_EGRESS)
                 .reserved(true)
                 .build(), SubnetConfiguration.builder()
                 .cidrMask(27)
                 .name("Database")
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
                 .build()))
         .build();
 

In the example above, the subnet for Application2 is not actually provisioned but its IP space is still reserved. If in the future this subnet needs to be provisioned, then the reserved: true property should be removed. Reserving parts of the IP space prevents the other subnets from getting renumbered.

Sharing VPCs between stacks

If you are creating multiple Stacks inside the same CDK application, you can reuse a VPC defined in one Stack in another by simply passing the VPC instance around:

 /**
  * Stack1 creates the VPC
  */
 public class Stack1 extends Stack {
     public final Vpc vpc;
 
     public Stack1(App scope, String id) {
         this(scope, id, null);
     }
 
     public Stack1(App scope, String id, StackProps props) {
         super(scope, id, props);
 
         this.vpc = new Vpc(this, "VPC");
     }
 }
 
 public class Stack2Props implements StackProps {
     private IVpc vpc;
     public IVpc getVpc() {
         return this.vpc;
     }
     public Stack2Props vpc(IVpc vpc) {
         this.vpc = vpc;
         return this;
     }
 }
 
 /**
  * Stack2 consumes the VPC
  */
 public class Stack2 extends Stack {
     public Stack2(App scope, String id, Stack2Props props) {
         super(scope, id, props);
 
         // Pass the VPC to a construct that needs it
         new ConstructThatTakesAVpc(this, "Construct", new ConstructThatTakesAVpcProps()
                 .vpc(props.getVpc())
                 );
     }
 }
 
 Stack1 stack1 = new Stack1(app, "Stack1");
 Stack2 stack2 = new Stack2(app, "Stack2", new Stack2Props()
         .vpc(stack1.vpc)
         );
 

Importing an existing VPC

If your VPC is created outside your CDK app, you can use Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the stack's region and account, and import the subnet configuration. Looking up can be done by VPC ID, but more flexibly by searching for a specific tag on the VPC.

Subnet types will be determined from the aws-cdk:subnet-type tag on the subnet if it exists, or the presence of a route to an Internet Gateway otherwise. Subnet names will be determined from the aws-cdk:subnet-name tag on the subnet if it exists, or will mirror the subnet type otherwise (i.e. a public subnet will have the name "Public").

The result of the Vpc.fromLookup() operation will be written to a file called cdk.context.json. You must commit this file to source control so that the lookup values are available in non-privileged environments such as CI build steps, and to ensure your template builds are repeatable.

Here's how Vpc.fromLookup() can be used:

 IVpc vpc = Vpc.fromLookup(stack, "VPC", VpcLookupOptions.builder()
         // This imports the default VPC but you can also
         // specify a 'vpcName' or 'tags'.
         .isDefault(true)
         .build());
 

Vpc.fromLookup is the recommended way to import VPCs. If for whatever reason you do not want to use the context mechanism to look up a VPC at synthesis time, you can also use Vpc.fromVpcAttributes. This has the following limitations:

  • Every subnet group in the VPC must have a subnet in each availability zone (for example, each AZ must have both a public and private subnet). Asymmetric VPCs are not supported.
  • All VpcId, SubnetId, RouteTableId, ... parameters must either be known at synthesis time, or they must come from deploy-time list parameters whose deploy-time lengths are known at synthesis time.

Using Vpc.fromVpcAttributes() looks like this:

 IVpc vpc = Vpc.fromVpcAttributes(this, "VPC", VpcAttributes.builder()
         .vpcId("vpc-1234")
         .availabilityZones(List.of("us-east-1a", "us-east-1b"))
 
         // Either pass literals for all IDs
         .publicSubnetIds(List.of("s-12345", "s-67890"))
 
         // OR: import a list of known length
         .privateSubnetIds(Fn.importListValue("PrivateSubnetIds", 2))
 
         // OR: split an imported string to a list of known length
         .isolatedSubnetIds(Fn.split(",", StringParameter.valueForStringParameter(this, "MyParameter"), 2))
         .build());
 

For each subnet group the import function accepts optional parameters for subnet names, route table ids and IPv4 CIDR blocks. When supplied, the length of these lists is required to match the length of the list of subnet ids, allowing the lists to be zipped together to form ISubnet instances.

Public subnet group example (for private or isolated subnet groups, use the properties with the respective prefix):

 IVpc vpc = Vpc.fromVpcAttributes(this, "VPC", VpcAttributes.builder()
         .vpcId("vpc-1234")
         .availabilityZones(List.of("us-east-1a", "us-east-1b", "us-east-1c"))
         .publicSubnetIds(List.of("s-12345", "s-34567", "s-56789"))
         .publicSubnetNames(List.of("Subnet A", "Subnet B", "Subnet C"))
         .publicSubnetRouteTableIds(List.of("rt-12345", "rt-34567", "rt-56789"))
         .publicSubnetIpv4CidrBlocks(List.of("10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"))
         .build());
 

The above example will create an IVpc instance with three public subnets:

| Subnet id | Availability zone | Subnet name | Route table id | IPv4 CIDR   |
| --------- | ----------------- | ----------- | -------------- | ----------- |
| s-12345   | us-east-1a        | Subnet A    | rt-12345       | 10.0.0.0/24 |
| s-34567   | us-east-1b        | Subnet B    | rt-34567       | 10.0.1.0/24 |
| s-56789   | us-east-1c        | Subnet C    | rt-56789       | 10.0.2.0/24 |

Restricting access to the VPC default security group

AWS Security best practices recommend that the VPC default security group should not allow inbound and outbound traffic. When the @aws-cdk/aws-ec2:restrictDefaultSecurityGroup feature flag is set to true (default for new projects) this will be enabled by default. If you do not have this feature flag set you can either set the feature flag or you can set the restrictDefaultSecurityGroup property to true.

 Vpc.Builder.create(this, "VPC")
         .restrictDefaultSecurityGroup(true)
         .build();
 

If you set this property to true and then later remove it or set it to false, the default ingress/egress rules will be restored on the default security group.

Allowing Connections

In AWS, all network traffic in and out of Elastic Network Interfaces (ENIs) is controlled by Security Groups. You can think of Security Groups as a firewall with a set of rules. By default, Security Groups allow no incoming (ingress) traffic and all outgoing (egress) traffic. You can add ingress rules to them to allow incoming traffic streams. To exert fine-grained control over egress traffic, set allowAllOutbound: false on the SecurityGroup, after which you can add egress traffic rules.

You can manipulate Security Groups directly:

 SecurityGroup mySecurityGroup = SecurityGroup.Builder.create(this, "SecurityGroup")
         .vpc(vpc)
         .description("Allow ssh access to ec2 instances")
         .allowAllOutbound(true)
         .build();
 mySecurityGroup.addIngressRule(Peer.anyIpv4(), Port.tcp(22), "allow ssh access from the world");
 

All constructs that create ENIs on your behalf (typically constructs that create EC2 instances or other VPC-connected resources) will have security groups automatically assigned. Those constructs have an attribute called connections, which is an object that makes it convenient to update the security groups. If you want to allow connections between two constructs that have security groups, you have to add an Egress rule to one Security Group, and an Ingress rule to the other. The connections object will automatically take care of this for you:

 ApplicationLoadBalancer loadBalancer;
 AutoScalingGroup appFleet;
 AutoScalingGroup dbFleet;
 
 
 // Allow connections from anywhere
 loadBalancer.getConnections().allowFromAnyIpv4(Port.HTTPS, "Allow inbound HTTPS");
 
 // The same, but an explicit IP address
 loadBalancer.getConnections().allowFrom(Peer.ipv4("1.2.3.4/32"), Port.HTTPS, "Allow inbound HTTPS");
 
 // Allow connection between AutoScalingGroups
 appFleet.getConnections().allowTo(dbFleet, Port.HTTPS, "App can call database");
 

Connection Peers

There are various classes that implement the connection peer part:

 AutoScalingGroup appFleet;
 AutoScalingGroup dbFleet;
 
 
 // Simple connection peers
 IPeer peer = Peer.ipv4("10.0.0.0/16");
 peer = Peer.anyIpv4();
 peer = Peer.ipv6("::0/0");
 peer = Peer.anyIpv6();
 peer = Peer.prefixList("pl-12345");
 appFleet.getConnections().allowTo(peer, Port.HTTPS, "Allow outbound HTTPS");
 

Any object that has a security group can itself be used as a connection peer:

 AutoScalingGroup fleet1;
 AutoScalingGroup fleet2;
 AutoScalingGroup appFleet;
 
 
 // These automatically create appropriate ingress and egress rules in both security groups
 fleet1.getConnections().allowTo(fleet2, Port.HTTP, "Allow between fleets");
 
 appFleet.getConnections().allowFromAnyIpv4(Port.HTTP, "Allow from load balancer");
 

Port Ranges

The connections that are allowed are specified by port ranges. A number of classes provide the connection specifier:

 Port.tcp(80);
 Port.HTTPS;
 Port.tcpRange(60000, 65535);
 Port.allTcp();
 Port.allIcmp();
 Port.allIcmpV6();
 Port.allTraffic();
 

NOTE: Not all protocols have corresponding helper methods. In the absence of a helper method, you can instantiate Port yourself with your own settings. You are also welcome to contribute new helper methods.
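
For example, a minimal sketch of a custom port built directly from its properties (the protocol, range, and label are illustrative):

 Port customUdpRange = Port.Builder.create()
         .protocol(Protocol.UDP)
         .fromPort(60000)
         .toPort(65535)
         .stringRepresentation("UDP 60000-65535")
         .build();
 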

Default Ports

Some Constructs have default ports associated with them. For example, the listener of a load balancer does (it's the public port), or instances of an RDS database (it's the port the database is accepting connections on).

If the object you're calling the peering method on has a default port associated with it, you can call allowDefaultPortFrom() and omit the port specifier. If the argument has an associated default port, call allowDefaultPortTo().

For example:

 ApplicationListener listener;
 AutoScalingGroup appFleet;
 DatabaseCluster rdsDatabase;
 
 
 // Port implicit in listener
 listener.getConnections().allowDefaultPortFromAnyIpv4("Allow public");
 
 // Port implicit in peer
 appFleet.getConnections().allowDefaultPortTo(rdsDatabase, "Fleet can access database");
 

Security group rules

By default, security group rules will be added inline to the security group in the output CloudFormation template, if applicable. This includes any static rules by IP address and port range. This optimization helps to minimize the size of the template.

In some environments this is not desirable, for example if your security group access is controlled via tags. You can disable inline rules per security group or globally via the context key @aws-cdk/aws-ec2.securityGroupDisableInlineRules.

 SecurityGroup mySecurityGroupWithoutInlineRules = SecurityGroup.Builder.create(this, "SecurityGroup")
         .vpc(vpc)
         .description("Allow ssh access to ec2 instances")
         .allowAllOutbound(true)
         .disableInlineRules(true)
         .build();
 // This will add the rule as an external CloudFormation construct
 mySecurityGroupWithoutInlineRules.addIngressRule(Peer.anyIpv4(), Port.SSH, "allow ssh access from the world");
 

Importing an existing security group

If you know the ID and the configuration of the security group to import, you can use SecurityGroup.fromSecurityGroupId:

 ISecurityGroup sg = SecurityGroup.fromSecurityGroupId(this, "SecurityGroupImport", "sg-1234", SecurityGroupImportOptions.builder()
         .allowAllOutbound(true)
         .build());
 

Alternatively, use lookup methods to import security groups if you do not know the ID or the configuration details. Method SecurityGroup.fromLookupByName looks up a security group if the security group ID is unknown.

 ISecurityGroup sg = SecurityGroup.fromLookupByName(this, "SecurityGroupLookup", "security-group-name", vpc);
 

If the security group ID is known and configuration details are unknown, use method SecurityGroup.fromLookupById instead. This method will look up the allowAllOutbound property from the current configuration of the security group.

 ISecurityGroup sg = SecurityGroup.fromLookupById(this, "SecurityGroupLookup", "sg-1234");
 

The result of SecurityGroup.fromLookupByName and SecurityGroup.fromLookupById operations will be written to a file called cdk.context.json. You must commit this file to source control so that the lookup values are available in non-privileged environments such as CI build steps, and to ensure your template builds are repeatable.

Cross Stack Connections

If you are attempting to add a connection from a peer in one stack to a peer in a different stack, sometimes it is necessary to ensure that you are making the connection in a specific stack in order to avoid a cyclic reference. If there are no other dependencies between stacks then it will not matter in which stack you make the connection, but if there are existing dependencies (i.e. stack1 already depends on stack2), then it is important to make the connection in the dependent stack (i.e. stack1).

Whenever you make a connections function call, the ingress and egress security group rules will be added to the stack that the calling object exists in. So if you are doing something like peer1.connections.allowFrom(peer2), then the security group rules (both ingress and egress) will be created in peer1's Stack.

As an example, if we wanted to allow a connection from a security group in one stack (egress) to a security group in a different stack (ingress), we would make the connection like:

If Stack1 depends on Stack2

 // Stack 1
 Stack stack1;
 Stack stack2;
 
 
 SecurityGroup sg1 = SecurityGroup.Builder.create(stack1, "SG1")
         .allowAllOutbound(false) // if this is `true` then no egress rule will be created
         .vpc(vpc)
         .build();
 
 // Stack 2
 SecurityGroup sg2 = SecurityGroup.Builder.create(stack2, "SG2")
         .allowAllOutbound(false) // if this is `true` then no egress rule will be created
         .vpc(vpc)
         .build();
 
 // `connections.allowTo` on `sg1` since we want the
 // rules to be created in Stack1
 sg1.getConnections().allowTo(sg2, Port.tcp(3333));
 

In this case both the Ingress Rule for sg2 and the Egress Rule for sg1 will be created in Stack1, which avoids the cyclic reference.

If Stack2 depends on Stack1

 // Stack 1
 Stack stack1;
 Stack stack2;
 
 
 SecurityGroup sg1 = SecurityGroup.Builder.create(stack1, "SG1")
         .allowAllOutbound(false) // if this is `true` then no egress rule will be created
         .vpc(vpc)
         .build();
 
 // Stack 2
 SecurityGroup sg2 = SecurityGroup.Builder.create(stack2, "SG2")
         .allowAllOutbound(false) // if this is `true` then no egress rule will be created
         .vpc(vpc)
         .build();
 
 // `connections.allowFrom` on `sg2` since we want the
 // rules to be created in Stack2
 sg2.getConnections().allowFrom(sg1, Port.tcp(3333));
 

In this case both the Ingress Rule for sg2 and the Egress Rule for sg1 will be created in Stack2, which avoids the cyclic reference.

Machine Images (AMIs)

AMIs control the OS that gets launched when you start your EC2 instance. The EC2 library contains constructs to select the AMI you want to use.

Depending on the type of AMI, you select it in a different way. Here are some examples of images you might want to use:

 // Pick the right Amazon Linux edition. All arguments shown are optional
 // and will default to these values when omitted.
 IMachineImage amznLinux = MachineImage.latestAmazonLinux(AmazonLinuxImageProps.builder()
         .generation(AmazonLinuxGeneration.AMAZON_LINUX)
         .edition(AmazonLinuxEdition.STANDARD)
         .virtualization(AmazonLinuxVirt.HVM)
         .storage(AmazonLinuxStorage.GENERAL_PURPOSE)
         .cpuType(AmazonLinuxCpuType.X86_64)
         .build());
 
 // Pick a Windows edition to use
 IMachineImage windows = MachineImage.latestWindows(WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE);
 
 // Read AMI id from SSM parameter store
 IMachineImage ssm = MachineImage.fromSsmParameter("/my/ami", SsmParameterImageOptions.builder().os(OperatingSystemType.LINUX).build());
 
 // Look up the most recent image matching a set of AMI filters.
 // In this case, look up the NAT instance AMI, by using a wildcard
 // in the 'name' field:
 IMachineImage natAmi = MachineImage.lookup(LookupMachineImageProps.builder()
         .name("amzn-ami-vpc-nat-*")
         .owners(List.of("amazon"))
         .build());
 
 // For other custom (Linux) images, instantiate a `GenericLinuxImage` with
 // a map giving the AMI ID for each region:
 IMachineImage linux = MachineImage.genericLinux(Map.of(
         "us-east-1", "ami-97785bed",
         "eu-west-1", "ami-12345678"));
 
 // For other custom (Windows) images, instantiate a `GenericWindowsImage` with
 // a map giving the AMI ID for each region:
 IMachineImage genericWindows = MachineImage.genericWindows(Map.of(
         "us-east-1", "ami-97785bed",
         "eu-west-1", "ami-12345678"));
 

NOTE: The AMIs selected by MachineImage.lookup() will be cached in cdk.context.json, so that your AutoScalingGroup instances aren't replaced while you are making unrelated changes to your CDK app.

To query for the latest AMI again, remove the relevant cache entry from cdk.context.json, or use the cdk context command. For more information, see Runtime Context in the CDK developer guide.

MachineImage.genericLinux() and MachineImage.genericWindows() will use CfnMapping in an environment-agnostic stack.

Special VPC configurations

VPN connections to a VPC

Create your VPC with VPN connections by specifying the vpnConnections props (keys are construct ids):

 import software.amazon.awscdk.SecretValue;
 
 
 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .vpnConnections(Map.of(
                 "dynamic", VpnConnectionOptions.builder() // Dynamic routing (BGP)
                         .ip("1.2.3.4")
                         .tunnelOptions(List.of(VpnTunnelOption.builder()
                                 .preSharedKeySecret(SecretValue.unsafePlainText("secretkey1234"))
                                 .build(), VpnTunnelOption.builder()
                                 .preSharedKeySecret(SecretValue.unsafePlainText("secretkey5678"))
                                 .build())).build(),
                 "static", VpnConnectionOptions.builder() // Static routing
                         .ip("4.5.6.7")
                         .staticRoutes(List.of("192.168.10.0/24", "192.168.20.0/24")).build()))
         .build();
 

To create a VPC that can accept VPN connections, set vpnGateway to true:

 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .vpnGateway(true)
         .build();
 

VPN connections can then be added:

 vpc.addVpnConnection("Dynamic", VpnConnectionOptions.builder()
         .ip("1.2.3.4")
         .build());
 

By default, routes will be propagated on the route tables associated with the private subnets. If no private subnets exist, isolated subnets are used. If no isolated subnets exist, public subnets are used. Use the Vpc property vpnRoutePropagation to customize this behavior.
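
For example, a minimal sketch that propagates VPN routes only to the route tables of isolated subnets:

 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .vpnGateway(true)
         .vpnRoutePropagation(List.of(SubnetSelection.builder()
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
                 .build()))
         .build();
 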

VPN connections expose metrics (cloudwatch.Metric) across all tunnels in the account/region and per connection:

 // Across all tunnels in the account/region
 Metric allDataOut = VpnConnection.metricAllTunnelDataOut();
 
 // For a specific vpn connection
 VpnConnection vpnConnection = vpc.addVpnConnection("Dynamic", VpnConnectionOptions.builder()
         .ip("1.2.3.4")
         .build());
 Metric state = vpnConnection.metricTunnelState();
 

VPC endpoints

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

 // Add gateway endpoints when creating the VPC
 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .gatewayEndpoints(Map.of(
                 "S3", GatewayVpcEndpointOptions.builder()
                         .service(GatewayVpcEndpointAwsService.S3)
                         .build()))
         .build();
 
 // Alternatively gateway endpoints can be added on the VPC
 GatewayVpcEndpoint dynamoDbEndpoint = vpc.addGatewayEndpoint("DynamoDbEndpoint", GatewayVpcEndpointOptions.builder()
         .service(GatewayVpcEndpointAwsService.DYNAMODB)
         .build());
 
 // This allows to customize the endpoint policy
 dynamoDbEndpoint.addToPolicy(
 PolicyStatement.Builder.create() // Restrict to listing and describing tables
         .principals(List.of(new AnyPrincipal()))
         .actions(List.of("dynamodb:DescribeTable", "dynamodb:ListTables"))
         .resources(List.of("*")).build());
 
 // Add an interface endpoint
 vpc.addInterfaceEndpoint("EcrDockerEndpoint", InterfaceVpcEndpointOptions.builder()
         .service(InterfaceVpcEndpointAwsService.ECR_DOCKER)
         .build());
 

By default, CDK will place a VPC endpoint in one subnet per AZ. If you wish to override the AZs CDK places the VPC endpoint in, use the subnets parameter as follows:

 Vpc vpc;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("com.amazonaws.vpce.us-east-1.vpce-svc-uuddlrlrbastrtsvc", 443))
         // Choose which availability zones to place the VPC endpoint in, based on
         // available AZs
         .subnets(SubnetSelection.builder()
                 .availabilityZones(List.of("us-east-1a", "us-east-1c"))
                 .build())
         .build();
 

Per the AWS documentation, not all VPC endpoint services are available in all AZs. If you specify the parameter lookupSupportedAzs, CDK attempts to discover which AZs an endpoint service is available in, and will ensure the VPC endpoint is not placed in a subnet that doesn't match those AZs. These AZs will be stored in cdk.context.json.

 Vpc vpc;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("com.amazonaws.vpce.us-east-1.vpce-svc-uuddlrlrbastrtsvc", 443))
         // Choose which availability zones to place the VPC endpoint in, based on
         // available AZs
         .lookupSupportedAzs(true)
         .build();
 

Pre-defined AWS services are defined in the InterfaceVpcEndpointAwsService class, and can be used to create VPC endpoints without having to configure name, ports, etc. For example, a Keyspaces endpoint can be created for use in your VPC:

 Vpc vpc;
 
 
 InterfaceVpcEndpoint.Builder.create(this, "VPC Endpoint")
         .vpc(vpc)
         .service(InterfaceVpcEndpointAwsService.KEYSPACES)
         .build();
 

Security groups for interface VPC endpoints

By default, interface VPC endpoints create a new security group and all traffic to the endpoint from within the VPC will be automatically allowed.

Use the connections object to allow other traffic to flow to the endpoint:

 InterfaceVpcEndpoint myEndpoint;
 
 
 myEndpoint.getConnections().allowDefaultPortFromAnyIpv4();
 

Alternatively, existing security groups can be used by specifying the securityGroups prop.
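
A minimal sketch, assuming an existing security group:

 Vpc vpc;
 SecurityGroup mySecurityGroup;
 
 
 vpc.addInterfaceEndpoint("EcrDockerEndpoint", InterfaceVpcEndpointOptions.builder()
         .service(InterfaceVpcEndpointAwsService.ECR_DOCKER)
         .securityGroups(List.of(mySecurityGroup))
         .build());
 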

VPC endpoint services

A VPC endpoint service enables you to expose one or more Network Load Balancers as a provider service to consumers, who connect to your service over a VPC endpoint. You can restrict access to your service via allowed principals (anything that extends ArnPrincipal), and require that new connections be manually accepted. You can also enable Contributor Insights rules.

 NetworkLoadBalancer networkLoadBalancer1;
 NetworkLoadBalancer networkLoadBalancer2;
 
 
 VpcEndpointService.Builder.create(this, "EndpointService")
         .vpcEndpointServiceLoadBalancers(List.of(networkLoadBalancer1, networkLoadBalancer2))
         .acceptanceRequired(true)
         .allowedPrincipals(List.of(new ArnPrincipal("arn:aws:iam::123456789012:root")))
         .contributorInsights(true)
         .build();
 

You can also include a service principal in the allowedPrincipals property by specifying it as a parameter to the ArnPrincipal constructor. The resulting VPC endpoint will have an allowlisted principal of type Service, instead of Arn for that item in the list.

 NetworkLoadBalancer networkLoadBalancer;
 
 
 VpcEndpointService.Builder.create(this, "EndpointService")
         .vpcEndpointServiceLoadBalancers(List.of(networkLoadBalancer))
         .allowedPrincipals(List.of(new ArnPrincipal("ec2.amazonaws.com")))
         .build();
 

Endpoint services support private DNS, which makes it easier for clients to connect to your service by automatically setting up DNS in their VPC. You can enable private DNS on an endpoint service like so:

 import software.amazon.awscdk.services.route53.PublicHostedZone;
 import software.amazon.awscdk.services.route53.VpcEndpointServiceDomainName;
 PublicHostedZone zone;
 VpcEndpointService vpces;
 
 
 VpcEndpointServiceDomainName.Builder.create(this, "EndpointDomain")
         .endpointService(vpces)
         .domainName("my-stuff.aws-cdk.dev")
         .publicHostedZone(zone)
         .build();
 

Note: The domain name must be owned (registered through Route53) by the account the endpoint service is in, or delegated to the account. The VpcEndpointServiceDomainName will handle the AWS side of domain verification, the process for which can be found here.

Client VPN endpoint

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.

Use the addClientVpnEndpoint() method to add a client VPN endpoint to a VPC:

 Vpc vpc;
 ISamlProvider samlProvider;
 
 
 vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("10.100.0.0/16")
         .serverCertificateArn("arn:aws:acm:us-east-1:123456789012:certificate/server-certificate-id")
         // Mutual authentication
         .clientCertificateArn("arn:aws:acm:us-east-1:123456789012:certificate/client-certificate-id")
         // User-based authentication
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .build());
 

The endpoint must use at least one authentication method:

  • Mutual authentication with a client certificate
  • User-based authentication (directory or federated)

If user-based authentication is used, the self-service portal URL is made available via a CloudFormation output.

By default, a new security group is created, and logging is enabled. Moreover, a rule to authorize all users to the VPC CIDR is created.

To customize authorization rules, set the authorizeAllUsersToVpcCidr prop to false and use addAuthorizationRule():

 Vpc vpc;
 ISamlProvider samlProvider;
 
 
 ClientVpnEndpoint endpoint = vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("10.100.0.0/16")
         .serverCertificateArn("arn:aws:acm:us-east-1:123456789012:certificate/server-certificate-id")
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .authorizeAllUsersToVpcCidr(false)
         .build());
 
 endpoint.addAuthorizationRule("Rule", ClientVpnAuthorizationRuleOptions.builder()
         .cidr("10.0.10.0/32")
         .groupId("group-id")
         .build());
 

Use addRoute() to configure network routes:

 Vpc vpc;
 ISamlProvider samlProvider;
 
 
 ClientVpnEndpoint endpoint = vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("10.100.0.0/16")
         .serverCertificateArn("arn:aws:acm:us-east-1:123456789012:certificate/server-certificate-id")
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .build());
 
 // Client-to-client access
 endpoint.addRoute("Route", ClientVpnRouteOptions.builder()
         .cidr("10.100.0.0/16")
         .target(ClientVpnRouteTarget.local())
         .build());
 

Use the connections object of the endpoint to allow traffic to other security groups.
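
For example, a minimal sketch (the target security group and port are illustrative):

 ClientVpnEndpoint endpoint;
 SecurityGroup databaseSecurityGroup;
 
 
 endpoint.getConnections().allowTo(databaseSecurityGroup, Port.tcp(5432), "VPN clients can reach the database");
 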

Instances

You can use the Instance class to start up a single EC2 instance. For production setups, we recommend you use an AutoScalingGroup from the aws-autoscaling module instead, as AutoScalingGroups will take care of restarting your instance if it ever fails.

 Vpc vpc;
 InstanceType instanceType;
 
 
 // Amazon Linux 2
 Instance.Builder.create(this, "Instance2")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2())
         .build();
 
 // Amazon Linux 2 with kernel 5.x
 Instance.Builder.create(this, "Instance3")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2(AmazonLinux2ImageSsmParameterProps.builder()
                 .kernel(AmazonLinux2Kernel.KERNEL_5_10)
                 .build()))
         .build();
 
 // Amazon Linux 2023
 Instance.Builder.create(this, "Instance4")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2023())
         .build();
 
 // Graviton 3 Processor
 Instance.Builder.create(this, "Instance5")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         .machineImage(MachineImage.latestAmazonLinux2023(AmazonLinux2023ImageSsmParameterProps.builder()
                 .cpuType(AmazonLinuxCpuType.ARM_64)
                 .build()))
         .build();
 

Latest Amazon Linux Images

Rather than specifying a specific AMI ID to use, it is possible to specify an SSM parameter that contains the AMI ID. AWS publishes a set of public parameters that contain the latest Amazon Linux AMIs. To make it easier to query a particular image parameter, the CDK provides a couple of constructs: AmazonLinux2ImageSsmParameter, AmazonLinux2022ImageSsmParameter, and AmazonLinux2023ImageSsmParameter. For example, to use the latest AL2023 image:

 Vpc vpc;
 
 
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         .machineImage(MachineImage.latestAmazonLinux2023())
         .build();
 

Warning Since this retrieves the value from an SSM parameter at deployment time, the value will be resolved each time the stack is deployed. This means that if the parameter contains a different value on your next deployment, the instance will be replaced.

It is also possible to perform the lookup once at synthesis time and then cache the value in CDK context. This way the value will not change on future deployments unless you manually refresh the context.

 Vpc vpc;
 
 
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         .machineImage(MachineImage.latestAmazonLinux2023(AmazonLinux2023ImageSsmParameterProps.builder()
                 .cachedInContext(true)
                 .build()))
         .build();
 
 // or
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         // context cache is turned on by default
         .machineImage(new AmazonLinux2023ImageSsmParameter())
         .build();
 

Kernel Versions

Each Amazon Linux AMI uses a specific kernel version. Most Amazon Linux generations come with an AMI using the "default" kernel and one or more AMIs using a specific kernel version, which may or may not be different from the default kernel version.

For example, Amazon Linux 2 has two different AMIs available from the SSM parameters.

  • /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs

    • This is the "default" kernel which uses kernel-4.14
  • /aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.10-hvm-x86_64-ebs

    • This AMI uses kernel-5.10

If a new Amazon Linux generation AMI is published with a new kernel version, then a new SSM parameter will be created with the new version (e.g. /aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.15-hvm-x86_64-ebs), but the "default" AMI may or may not be updated.

If you would like to make sure you always have the latest kernel version, then either pin the latest kernel version explicitly or opt in to using the CDK latest kernel version.

 Vpc vpc;
 
 
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         // context cache is turned on by default
         .machineImage(AmazonLinux2023ImageSsmParameter.Builder.create()
                 .kernel(AmazonLinux2023Kernel.KERNEL_6_1)
                 .build())
         .build();
 

CDK managed latest

 Vpc vpc;
 
 
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         // context cache is turned on by default
         .machineImage(AmazonLinux2023ImageSsmParameter.Builder.create()
                 .kernel(AmazonLinux2023Kernel.CDK_LATEST)
                 .build())
         .build();
 
 // or
 Instance.Builder.create(this, "LatestAl2023")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.C7G, InstanceSize.LARGE))
         .machineImage(MachineImage.latestAmazonLinux2023())
         .build();
 

When using the CDK managed latest version, CDK_LATEST will be updated to point to the new kernel version whenever one is made available. You would then need to update to the newest CDK version for the change to take effect.

Configuring Instances using CloudFormation Init (cfn-init)

CloudFormation Init allows you to configure your instances by writing files to them, installing software packages, starting services and running arbitrary commands. By default, if any of the instance setup commands throws an error, the deployment will fail and roll back to the previously known good state. The following documentation also applies to AutoScalingGroups.

For the full set of capabilities of this system, see the documentation for AWS::CloudFormation::Init. Here is an example of applying some configuration to an instance:

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
 
         // Showing the most complex setup, if you have simpler requirements
         // you can use `CloudFormationInit.fromElements()`.
         .init(CloudFormationInit.fromConfigSets(ConfigSetProps.builder()
                 .configSets(Map.of(
                         // Applies the configs below in this order
                         "default", List.of("yumPreinstall", "config")))
                 .configs(Map.of(
                         "yumPreinstall", new InitConfig(List.of(InitPackage.yum("git"))),
                         "config", new InitConfig(List.of(InitFile.fromObject("/etc/stack.json", Map.of(
                                 "stackId", Stack.of(this).getStackId(),
                                 "stackName", Stack.of(this).getStackName(),
                                 "region", Stack.of(this).getRegion())), InitGroup.fromName("my-group"), InitUser.fromName("my-user"), InitPackage.rpm("http://mirrors.ukfast.co.uk/sites/dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/r/rubygem-git-1.5.0-2.el8.noarch.rpm")))))
                 .build()))
         .initOptions(ApplyCloudFormationInitOptions.builder()
                 // Optional, which configsets to activate (['default'] by default)
                 .configSets(List.of("default"))
 
                 // Optional, how long the installation is expected to take (5 minutes by default)
                 .timeout(Duration.minutes(30))
 
                 // Optional, whether to include the --url argument when running cfn-init and cfn-signal commands (false by default)
                 .includeUrl(true)
 
                 // Optional, whether to include the --role argument when running cfn-init and cfn-signal commands (false by default)
                 .includeRole(true)
                 .build())
         .build();
 

InitCommand cannot be used to start long-running processes. At deploy time, cfn-init will always wait for the process to exit before continuing, causing the CloudFormation deployment to fail because the signal hasn't been received within the expected timeout.

Instead, you should install a service configuration file onto your machine using InitFile, and then use InitService to start it.

If your Linux OS is using SystemD (like Amazon Linux 2 or higher), the CDK has helpers to create a long-running service using CFN Init. You can create a SystemD-compatible config file using InitService.systemdConfigFile(), and start it immediately. The following example shows how to start a trivial Python 3 web server:

 Vpc vpc;
 InstanceType instanceType;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2023())
 
         .init(CloudFormationInit.fromElements(InitService.systemdConfigFile("simpleserver", SystemdConfigFileOptions.builder()
                 .command("/usr/bin/python3 -m http.server 8080")
                 .cwd("/var/www/html")
                 .build()), InitService.enable("simpleserver", InitServiceOptions.builder()
                 .serviceManager(ServiceManager.SYSTEMD)
                 .build()), InitFile.fromString("/var/www/html/index.html", "Hello! It's working!")))
         .build();
 

You can have services restarted after the init process has made changes to the system. To do that, instantiate an InitServiceRestartHandle and pass it to the config elements that need to trigger the restart and the service itself. For example, the following config writes a config file for nginx, extracts an archive to the root directory, and then restarts nginx so that it picks up the new config and files:

 Bucket myBucket;
 
 
 InitServiceRestartHandle handle = new InitServiceRestartHandle();
 
 CloudFormationInit.fromElements(
         InitFile.fromString("/etc/nginx/nginx.conf", "...",
                 InitFileOptions.builder().serviceRestartHandles(List.of(handle)).build()),
         InitSource.fromS3Object("/var/www/html", myBucket, "html.zip",
                 InitSourceOptions.builder().serviceRestartHandles(List.of(handle)).build()),
         InitService.enable("nginx", InitServiceOptions.builder()
                 .serviceRestartHandle(handle)
                 .build()));
 

You can use the environmentVariables or environmentFiles parameters to specify environment variables for your services:

 new InitConfig(List.of(InitFile.fromString("/myvars.env", "VAR_FROM_FILE=\"VAR_FROM_FILE\""), InitService.systemdConfigFile("myapp", SystemdConfigFileOptions.builder()
         .command("/usr/bin/python3 -m http.server 8080")
         .cwd("/var/www/html")
         .environmentVariables(Map.of(
                 "MY_VAR", "MY_VAR"))
         .environmentFiles(List.of("/myvars.env"))
         .build())));
 

Bastion Hosts

A bastion host functions as an instance used to access servers and resources in a VPC without opening up the complete VPC on a network level. You can connect to bastion hosts using a standard SSH connection targeting port 22 on the host. As an alternative, you can use the SSH connection feature of AWS Systems Manager Session Manager, which does not need an open security group. (https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-tunneling-support-for-ssh-and-scp/)

A default bastion host for use via SSM can be configured like this:

 Vpc vpc;
 
 
 BastionHostLinux host = BastionHostLinux.Builder.create(this, "BastionHost").vpc(vpc).build();
 

If you want to connect from the internet using SSH, you need to place the host into a public subnet. You can then configure allowed source hosts.

 Vpc vpc;
 
 
 BastionHostLinux host = BastionHostLinux.Builder.create(this, "BastionHost")
         .vpc(vpc)
         .subnetSelection(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
         .build();
 host.allowSshAccessFrom(Peer.ipv4("1.2.3.4/32"));
 

As there are no SSH public keys deployed on this machine, you need to use EC2 Instance Connect with the command aws ec2-instance-connect send-ssh-public-key to provide your SSH public key.

The EBS volume for the bastion host can be encrypted like this:

 Vpc vpc;
 
 
 BastionHostLinux host = BastionHostLinux.Builder.create(this, "BastionHost")
         .vpc(vpc)
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/sdh")
                 .volume(BlockDeviceVolume.ebs(10, EbsDeviceOptions.builder()
                         .encrypted(true)
                         .build()))
                 .build()))
         .build();
 

Placement Group

Specify placementGroup to enable placement group support:

 Vpc vpc;
 InstanceType instanceType;
 
 
 PlacementGroup pg = PlacementGroup.Builder.create(this, "test-pg")
         .strategy(PlacementGroupStrategy.SPREAD)
         .build();
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2023())
         .placementGroup(pg)
         .build();
 

Block Devices

To add EBS block device mappings, specify the blockDevices property. The following example sets the EBS-backed root device (/dev/sda1) size to 50 GiB, and adds another EBS-backed device mapped to /dev/sdm that is 100 GiB in size:

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
 
         // ...
 
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/sda1")
                 .volume(BlockDeviceVolume.ebs(50))
                 .build(), BlockDevice.builder()
                 .deviceName("/dev/sdm")
                 .volume(BlockDeviceVolume.ebs(100))
                 .build()))
         .build();
 

It is also possible to encrypt the block devices. In this example we will create an EBS-backed root device encrypted with a customer managed key:

 import software.amazon.awscdk.services.kms.Key;
 
 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Key kmsKey = new Key(this, "KmsKey");
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
 
         // ...
 
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/sda1")
                 .volume(BlockDeviceVolume.ebs(50, EbsDeviceOptions.builder()
                         .encrypted(true)
                         .kmsKey(kmsKey)
                         .build()))
                 .build()))
         .build();
 

To specify the throughput value for gp3 volumes, use the throughput property:

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
 
         // ...
 
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/sda1")
                 .volume(BlockDeviceVolume.ebs(100, EbsDeviceOptions.builder()
                         .volumeType(EbsDeviceVolumeType.GP3)
                         .throughput(250)
                         .build()))
                 .build()))
         .build();
 

EBS Optimized Instances

An Amazon EBS–optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.

Depending on the instance type, this feature is enabled by default, while other instance types require explicit activation. Please refer to the documentation for details.

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
         .ebsOptimized(true)
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/xvda")
                 .volume(BlockDeviceVolume.ebs(8))
                 .build()))
         .build();
 

Volumes

Whereas a BlockDeviceVolume is an EBS volume that is created and destroyed as part of the creation and destruction of a specific instance, a Volume is for when you want an EBS volume separate from any particular instance. A Volume is an EBS block device that can be attached to, or detached from, any instance at any time. Some types of Volumes can also be attached to multiple instances at the same time to allow you to have shared storage between those instances.

A notable restriction is that a Volume can only be attached to instances in the same availability zone as the Volume itself.

The following demonstrates how to create a 500 GiB encrypted Volume in the us-west-2a availability zone, and give a role the ability to attach that Volume to a specific instance:

 Instance instance;
 Role role;
 
 
 Volume volume = Volume.Builder.create(this, "Volume")
         .availabilityZone("us-west-2a")
         .size(Size.gibibytes(500))
         .encrypted(true)
         .build();
 
 volume.grantAttachVolume(role, List.of(instance));
 

Instances Attaching Volumes to Themselves

If you need to grant an instance the ability to attach/detach an EBS volume to/from itself, then using grantAttachVolume and grantDetachVolume as outlined above will lead to an unresolvable circular reference between the instance role and the instance. In this case, use grantAttachVolumeByResourceTag and grantDetachVolumeByResourceTag as follows:

 Instance instance;
 Volume volume;
 
 
 Grant attachGrant = volume.grantAttachVolumeByResourceTag(instance.getGrantPrincipal(), List.of(instance));
 Grant detachGrant = volume.grantDetachVolumeByResourceTag(instance.getGrantPrincipal(), List.of(instance));
 

Attaching Volumes

The Amazon EC2 documentation for Linux Instances and Windows Instances contains information on how to attach and detach your Volumes to/from instances, and how to format them for use.

The following is a sample skeleton of EC2 UserData that can be used to attach a Volume to the Linux instance that it is running on:

 Instance instance;
 Volume volume;
 
 
 volume.grantAttachVolumeByResourceTag(instance.getGrantPrincipal(), List.of(instance));
 String targetDevice = "/dev/xvdz";
 instance.userData.addCommands(
         "TOKEN=$(curl -SsfX PUT \"http://169.254.169.254/latest/api/token\" -H \"X-aws-ec2-metadata-token-ttl-seconds: 21600\")",
         "INSTANCE_ID=$(curl -SsfH \"X-aws-ec2-metadata-token: $TOKEN\" http://169.254.169.254/latest/meta-data/instance-id)",
         String.format("aws --region %s ec2 attach-volume --volume-id %s --instance-id $INSTANCE_ID --device %s", Stack.of(this).getRegion(), volume.getVolumeId(), targetDevice),
         String.format("while ! test -e %s; do sleep 1; done", targetDevice));
 

Tagging Volumes

You can configure tag propagation on volume creation.

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .machineImage(machineImage)
         .instanceType(instanceType)
         .propagateTagsToVolumeOnCreation(true)
         .build();
 

Throughput on GP3 Volumes

You can specify the throughput of a GP3 volume from 125 MiB/s (the default) to 1000 MiB/s.

 Volume.Builder.create(this, "Volume")
         .availabilityZone("us-east-1a")
         .size(Size.gibibytes(125))
         .volumeType(EbsDeviceVolumeType.GP3)
         .throughput(125)
         .build();
 

Configuring Instance Metadata Service (IMDS)

Toggling IMDSv1

You can configure EC2 Instance Metadata Service options to either allow both IMDSv1 and IMDSv2 or enforce IMDSv2 when interacting with the IMDS.

To do this for a single Instance, you can use the requireImdsv2 property. The example below demonstrates IMDSv2 being required on a single Instance:

 Vpc vpc;
 InstanceType instanceType;
 IMachineImage machineImage;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
 
         // ...
 
         .requireImdsv2(true)
         .build();
 

You can also use either the InstanceRequireImdsv2Aspect for EC2 instances or the LaunchTemplateRequireImdsv2Aspect for EC2 launch templates to apply the operation to multiple instances or launch templates, respectively.

The following example demonstrates how to use the InstanceRequireImdsv2Aspect to require IMDSv2 for all EC2 instances in a stack:

 InstanceRequireImdsv2Aspect aspect = new InstanceRequireImdsv2Aspect();
 Aspects.of(this).add(aspect);
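 
 
The LaunchTemplateRequireImdsv2Aspect can be applied in the same way. A minimal sketch for launch templates:

 LaunchTemplateRequireImdsv2Aspect ltAspect = new LaunchTemplateRequireImdsv2Aspect();
 Aspects.of(this).add(ltAspect);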
 

Associating a Public IP Address with an Instance

All subnets have an attribute that determines whether instances launched into that subnet are assigned a public IPv4 address. This attribute is set to true by default for default public subnets, so an EC2 instance launched into a default public subnet will be assigned a public IPv4 address. Nondefault public subnets have this attribute set to false by default, and any EC2 instance launched into a nondefault public subnet will not be assigned a public IPv4 address automatically.

To automatically assign a public IPv4 address to an instance launched into a nondefault public subnet, set the associatePublicIpAddress property on the Instance construct to true. Alternatively, to not automatically assign a public IPv4 address to an instance launched into a default public subnet, set associatePublicIpAddress to false. Including this property, removing this property, or updating the value of this property on an existing instance will result in replacement of the instance.

 Vpc vpc = Vpc.Builder.create(this, "VPC")
         .cidr("10.0.0.0/16")
         .natGateways(0)
         .maxAzs(3)
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .name("public-subnet-1")
                 .subnetType(SubnetType.PUBLIC)
                 .cidrMask(24)
                 .build()))
         .build();
 
 Instance instance = Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .vpcSubnets(SubnetSelection.builder().subnetGroupName("public-subnet-1").build())
         .instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.NANO))
         .machineImage(AmazonLinuxImage.Builder.create().generation(AmazonLinuxGeneration.AMAZON_LINUX_2).build())
         .detailedMonitoring(true)
         .associatePublicIpAddress(true)
         .build();
 

Specifying a key pair

To allow SSH access to an EC2 instance, a key pair must be specified. Key pairs can be provided with the keyPair property on instances and launch templates. You can create a key pair for an instance like this:

 Vpc vpc;
 InstanceType instanceType;
 
 
 KeyPair keyPair = KeyPair.Builder.create(this, "KeyPair")
         .type(KeyPairType.ED25519)
         .format(KeyPairFormat.PEM)
         .build();
 Instance instance = Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2023())
         // Use the custom key pair
         .keyPair(keyPair)
         .build();
 

When a new EC2 Key Pair is created (without imported material), the private key material is automatically stored in Systems Manager Parameter Store. This can be retrieved from the key pair construct:

 KeyPair keyPair = new KeyPair(this, "KeyPair");
 IStringParameter privateKey = keyPair.getPrivateKey();
 

If you already have an SSH key that you wish to use in EC2, it can be provided when constructing the KeyPair. If public key material is provided, the key pair is considered "imported": no data will be automatically stored in Systems Manager Parameter Store, and the type property cannot be specified for the key pair.

 KeyPair keyPair = KeyPair.Builder.create(this, "KeyPair")
         .publicKeyMaterial("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7jpNzG+YG0s+xIGWbxrxIZiiozHOEuzIJacvASP0mq")
         .build();
 

Using an existing EC2 Key Pair

If you already have an EC2 Key Pair created outside of the CDK, you can import that key to your CDK stack.

You can import it purely by name:

 IKeyPair keyPair = KeyPair.fromKeyPairName(this, "KeyPair", "the-keypair-name");
 

Or by specifying additional attributes:

 IKeyPair keyPair = KeyPair.fromKeyPairAttributes(this, "KeyPair", KeyPairAttributes.builder()
         .keyPairName("the-keypair-name")
         .type(KeyPairType.RSA)
         .build());
 

Using IPv6 IPs

Instances can be given IPv6 addresses by launching them into a subnet of a dual-stack VPC.

 Vpc vpc = Vpc.Builder.create(this, "Ip6VpcDualStack")
         .ipProtocol(IpProtocol.DUAL_STACK)
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .name("Public")
                 .subnetType(SubnetType.PUBLIC)
                 .mapPublicIpOnLaunch(true)
                 .build(), SubnetConfiguration.builder()
                 .name("Private")
                 .subnetType(SubnetType.PRIVATE_ISOLATED)
                 .build()))
         .build();
 
 Instance instance = Instance.Builder.create(this, "MyInstance")
         .instanceType(InstanceType.of(InstanceClass.T2, InstanceSize.MICRO))
         .machineImage(MachineImage.latestAmazonLinux2())
         .vpc(vpc)
         .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
         .allowAllIpv6Outbound(true)
         .build();
 
 instance.connections.allowFrom(Peer.anyIpv6(), Port.allIcmpV6(), "allow ICMPv6");
 

Note that mapPublicIpOnLaunch must be set to true in the subnetConfiguration.

Additionally, IPv6 support varies by instance type. Most instance types support IPv6, with the exception of m1-m3, c1, g2, and t1.micro. A full list can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI.

Specifying the IPv6 Address

If you want to specify the number of IPv6 addresses to assign to the instance, you can use the ipv6AddressCount property:

 // dual stack VPC
 Vpc vpc;
 
 
 Instance instance = Instance.Builder.create(this, "MyInstance")
         .instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.LARGE))
         .machineImage(MachineImage.latestAmazonLinux2())
         .vpc(vpc)
         .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
         // Assign 2 IPv6 addresses to the instance
         .ipv6AddressCount(2)
         .build();
 

Credit configuration modes for burstable instances

You can set the credit configuration mode for burstable instances (T2, T3, T3a and T4g instance types):

 Vpc vpc;
 
 
 Instance instance = Instance.Builder.create(this, "Instance")
         .instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.MICRO))
         .machineImage(MachineImage.latestAmazonLinux2())
         .vpc(vpc)
         .creditSpecification(CpuCredits.STANDARD)
         .build();
 

It is also possible to set the credit configuration mode for NAT instances.

 NatInstanceProvider natInstanceProvider = NatProvider.instance(NatInstanceProps.builder()
         .instanceType(InstanceType.of(InstanceClass.T4G, InstanceSize.LARGE))
         .machineImage(new AmazonLinuxImage())
         .creditSpecification(CpuCredits.UNLIMITED)
         .build());
 Vpc.Builder.create(this, "VPC")
         .natGatewayProvider(natInstanceProvider)
         .build();
 

Note: CpuCredits.UNLIMITED mode is not supported for T3 instances that are launched on a Dedicated Host.

Shutdown behavior

You can specify the behavior of the instance when you initiate shutdown from the instance (using the operating system command for system shutdown).

 Vpc vpc;
 
 
 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.NANO))
         .machineImage(AmazonLinuxImage.Builder.create().generation(AmazonLinuxGeneration.AMAZON_LINUX_2).build())
         .instanceInitiatedShutdownBehavior(InstanceInitiatedShutdownBehavior.TERMINATE)
         .build();
 

Enabling Nitro Enclaves

You can enable AWS Nitro Enclaves for your EC2 instances by setting the enclaveEnabled property to true. Nitro Enclaves is a feature of the AWS Nitro System that enables creating isolated and highly constrained CPU environments known as enclaves.

 Vpc vpc;
 
 
 Instance instance = Instance.Builder.create(this, "Instance")
         .instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.XLARGE))
         .machineImage(new AmazonLinuxImage())
         .vpc(vpc)
         .enclaveEnabled(true)
         .build();
 

NOTE: You must use an instance type and operating system that support Nitro Enclaves. For more information, see Requirements.

Enabling Termination Protection

You can enable Termination Protection for your EC2 instances by setting the disableApiTermination property to true. Termination Protection controls whether the instance can be terminated using the AWS Management Console, AWS Command Line Interface (AWS CLI), or API.

 Vpc vpc;
 
 
 Instance instance = Instance.Builder.create(this, "Instance")
         .instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.XLARGE))
         .machineImage(new AmazonLinuxImage())
         .vpc(vpc)
         .disableApiTermination(true)
         .build();
 

Enabling Instance Hibernation

You can enable Instance Hibernation for your EC2 instances by setting the hibernationEnabled property to true. Instance Hibernation saves the instance's in-memory (RAM) state when an instance is stopped, and restores that state when the instance is started.

 Vpc vpc;
 
 
 Instance instance = Instance.Builder.create(this, "Instance")
         .instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.XLARGE))
         .machineImage(new AmazonLinuxImage())
         .vpc(vpc)
         .hibernationEnabled(true)
         .blockDevices(List.of(BlockDevice.builder()
                 .deviceName("/dev/xvda")
                 .volume(BlockDeviceVolume.ebs(30, EbsDeviceOptions.builder()
                         .volumeType(EbsDeviceVolumeType.GP3)
                         .encrypted(true)
                         .deleteOnTermination(true)
                         .build()))
                 .build()))
         .build();
 

NOTE: You must use an instance and a volume that meet the requirements for hibernation. For more information, see Prerequisites for Amazon EC2 instance hibernation.

VPC Flow Logs

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. (https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html).

By default, a flow log will be created with CloudWatch Logs as the destination.

You can create a flow log like this:

 Vpc vpc;
 
 
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .build();
 

Or you can add a Flow Log to a VPC by using the addFlowLog method like this:

 Vpc vpc = new Vpc(this, "Vpc");
 
 vpc.addFlowLog("FlowLog");
 

You can also add multiple flow logs with different destinations.

 Vpc vpc = new Vpc(this, "Vpc");
 
 vpc.addFlowLog("FlowLogS3", FlowLogOptions.builder()
         .destination(FlowLogDestination.toS3())
         .build());
 
 // Capture only rejected traffic, aggregated at one-minute intervals.
 vpc.addFlowLog("FlowLogCloudWatch", FlowLogOptions.builder()
         .trafficType(FlowLogTrafficType.REJECT)
         .maxAggregationInterval(FlowLogMaxAggregationInterval.ONE_MINUTE)
         .build());
 

To create a Transit Gateway flow log, you can use the fromTransitGatewayId method:

 CfnTransitGateway tgw;
 
 
 FlowLog.Builder.create(this, "TransitGatewayFlowLog")
         .resourceType(FlowLogResourceType.fromTransitGatewayId(tgw.getRef()))
         .build();
 

To create a Transit Gateway Attachment flow log, you can use the fromTransitGatewayAttachmentId method:

 CfnTransitGatewayAttachment tgwAttachment;
 
 
 FlowLog.Builder.create(this, "TransitGatewayAttachmentFlowLog")
         .resourceType(FlowLogResourceType.fromTransitGatewayAttachmentId(tgwAttachment.getRef()))
         .build();
 

For flow logs targeting TransitGateway and TransitGatewayAttachment, specifying the trafficType is not possible.

Custom Formatting

You can also apply a custom format to flow logs.

 Vpc vpc = new Vpc(this, "Vpc");
 
 vpc.addFlowLog("FlowLog", FlowLogOptions.builder()
         .logFormat(List.of(LogFormat.DST_PORT, LogFormat.SRC_PORT))
         .build());
 
 // If you just want to add fields to the default fields
 vpc.addFlowLog("FlowLog", FlowLogOptions.builder()
         .logFormat(List.of(LogFormat.VERSION, LogFormat.ALL_DEFAULT_FIELDS))
         .build());
 
 // If AWS CDK does not support the new fields
 vpc.addFlowLog("FlowLog", FlowLogOptions.builder()
         .logFormat(List.of(LogFormat.SRC_PORT, LogFormat.custom("${new-field}")))
         .build());
 

By default, the CDK will create the necessary resources for the destination. For the CloudWatch Logs destination it will create a CloudWatch Logs Log Group as well as the IAM role with the necessary permissions to publish to the log group. In the case of an S3 destination, it will create the S3 bucket.

If you want to customize any of the destination resources you can provide your own as part of the destination.

CloudWatch Logs

 Vpc vpc;
 
 
 LogGroup logGroup = new LogGroup(this, "MyCustomLogGroup");
 
 Role role = Role.Builder.create(this, "MyCustomRole")
         .assumedBy(new ServicePrincipal("vpc-flow-logs.amazonaws.com"))
         .build();
 
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(FlowLogDestination.toCloudWatchLogs(logGroup, role))
         .build();
 

S3

 Vpc vpc;
 
 
 Bucket bucket = new Bucket(this, "MyCustomBucket");
 
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(FlowLogDestination.toS3(bucket))
         .build();
 
 FlowLog.Builder.create(this, "FlowLogWithKeyPrefix")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(FlowLogDestination.toS3(bucket, "prefix/"))
         .build();
 

Kinesis Data Firehose

 import software.amazon.awscdk.services.kinesisfirehose.*;
 
 Vpc vpc;
 CfnDeliveryStream deliveryStream;
 
 
 vpc.addFlowLog("FlowLogsKinesisDataFirehose", FlowLogOptions.builder()
         .destination(FlowLogDestination.toKinesisDataFirehoseDestination(deliveryStream.getAttrArn()))
         .build());
 

When the S3 destination is configured, AWS will automatically create an S3 bucket policy that allows the service to write logs to the bucket. This makes it impossible to later update that bucket policy. To have CDK create the bucket policy so that future updates can be made, the @aws-cdk/aws-s3:createDefaultLoggingPolicy feature flag can be used. This can be set in the cdk.json file.

 {
   "context": {
     "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true
   }
 }
 

User Data

User data enables you to run a script when your instances start up. To configure these scripts, you can add commands directly to the script, or you can use the UserData convenience functions to aid in the creation of your script.
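
A minimal sketch of adding commands directly (the specific commands are illustrative):

 UserData userData = UserData.forLinux();
 userData.addCommands("yum install -y nginx", "systemctl start nginx");
 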

User data can be configured to run a script found in an asset, as follows:

 import software.amazon.awscdk.services.s3.assets.Asset;
 
 Instance instance;
 
 
 Asset asset = Asset.Builder.create(this, "Asset")
         .path("./configure.sh")
         .build();
 
 String localPath = instance.userData.addS3DownloadCommand(S3DownloadOptions.builder()
         .bucket(asset.getBucket())
         .bucketKey(asset.getS3ObjectKey())
         .region("us-east-1")
         .build());
 instance.userData.addExecuteFileCommand(ExecuteFileOptions.builder()
         .filePath(localPath)
         .arguments("--verbose -y")
         .build());
 asset.grantRead(instance.getRole());
 

Persisting user data

By default, EC2 UserData runs only once, on the first start of an instance. It is possible to make the user data script run on every start of the instance.

When creating Windows UserData, you can use the persist option to set whether or not to add <persist>true</persist> to the user data script. It can be used as follows:

 UserData windowsUserData = UserData.forWindows(WindowsUserDataOptions.builder().persist(true).build());
 

For a Linux instance, this can be accomplished by using multipart user data to configure cloud-config, as detailed in: https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
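
A minimal sketch of that approach, assuming the cloud-config module settings described in the linked article (the echoed message is illustrative):

 MultipartUserData multipartUserData = new MultipartUserData();
 
 // Tell cloud-init to re-run user scripts on every boot
 multipartUserData.addPart(MultipartBody.fromRawBody(MultipartBodyOptions.builder()
         .contentType("text/cloud-config; charset=\"utf-8\"")
         .body("#cloud-config\ncloud_final_modules:\n- [scripts-user, always]")
         .build()));
 
 UserData everyBootCommands = UserData.forLinux();
 everyBootCommands.addCommands("echo 'This runs on every boot'");
 multipartUserData.addPart(MultipartBody.fromUserData(everyBootCommands));
 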

Multipart user data

In addition to the above, MultipartUserData can be used to change instance startup behavior. Multipart user data is composed of separate parts forming an archive. The most common parts are scripts executed during instance set-up. However, there are other kinds, too.

The advantage of a multipart archive is its flexibility when additional parts need to be added, or when specialized parts are used to fine-tune instance startup. Some services (like AWS Batch) support only MultipartUserData.

The parts can be executed at different moments of instance start-up and can serve different purposes. This is controlled by the contentType property. For common scripts, text/x-shellscript; charset="utf-8" can be used as the content type.

In order to create the archive, MultipartUserData has to be instantiated. Then, parts can be added to the multipart archive using addPart. The MultipartBody class contains methods supporting the creation of body parts.

If a fully custom part is required, it can be created using MultipartBody.fromRawBody; in this case, full control over the content type, transfer encoding, and body properties is given to the user.

Below is an example of creating multipart user data with a single body part responsible for installing awscli and configuring the maximum size of storage used by Docker containers:

 UserData bootHookConf = UserData.forLinux();
 bootHookConf.addCommands("cloud-init-per once docker_options echo 'OPTIONS=\"${OPTIONS} --storage-opt dm.basesize=40G\"' >> /etc/sysconfig/docker");
 
 UserData setupCommands = UserData.forLinux();
 setupCommands.addCommands("sudo yum install awscli && echo Packages installed > /var/tmp/setup");
 
 MultipartUserData multipartUserData = new MultipartUserData();
 // Docker has to be configured at an early stage, so the content type is overridden to boothook
 multipartUserData.addPart(MultipartBody.fromUserData(bootHookConf, "text/cloud-boothook; charset=\"us-ascii\""));
 // Execute the rest of setup
 multipartUserData.addPart(MultipartBody.fromUserData(setupCommands));
 
 LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .userData(multipartUserData)
         .blockDevices(List.of())
         .build();
 

For more information see Specifying Multiple User Data Blocks Using a MIME Multi Part Archive

Using add*Command on MultipartUserData

To use the add*Command methods, which are inherited from the UserData interface, on MultipartUserData, you must add a part to the MultipartUserData and designate it as the receiver for these methods. This is accomplished by using the addUserDataPart() method on MultipartUserData with the makeDefault argument set to true:

 MultipartUserData multipartUserData = new MultipartUserData();
 UserData commandsUserData = UserData.forLinux();
 multipartUserData.addUserDataPart(commandsUserData, MultipartBody.SHELL_SCRIPT, true);
 
 // Adding commands to the multipartUserData adds them to commandsUserData, and vice-versa.
 multipartUserData.addCommands("touch /root/multi.txt");
 commandsUserData.addCommands("touch /root/userdata.txt");
 

When used on an EC2 instance, the above multipartUserData will create both multi.txt and userdata.txt in /root.

Importing existing subnet

To import an existing Subnet, call Subnet.fromSubnetAttributes() or Subnet.fromSubnetId(). Only if you supply the subnet's Availability Zone and Route Table ID when calling Subnet.fromSubnetAttributes() will you be able to use the CDK features that rely on these values (such as selecting one subnet per AZ).

Importing an existing subnet looks like this:

 // Supply all properties
 ISubnet subnet1 = Subnet.fromSubnetAttributes(this, "SubnetFromAttributes", SubnetAttributes.builder()
         .subnetId("s-1234")
         .availabilityZone("pub-az-4465")
         .routeTableId("rt-145")
         .build());
 
 // Supply only subnet id
 ISubnet subnet2 = Subnet.fromSubnetId(this, "SubnetFromId", "s-1234");
 

Launch Templates

A Launch Template is a standardized template that contains the configuration information to launch an instance. They can be used when launching instances on their own, through Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet. Launch templates enable you to store launch parameters so that you do not have to specify them every time you launch an instance. For information on Launch Templates please see the official documentation.

The following demonstrates how to create a launch template with an Amazon Machine Image, security group, and an instance profile.

 Vpc vpc;
 
 
 Role role = Role.Builder.create(this, "Role")
         .assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
         .build();
 InstanceProfile instanceProfile = InstanceProfile.Builder.create(this, "InstanceProfile")
         .role(role)
         .build();
 
 LaunchTemplate template = LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .launchTemplateName("MyTemplateV1")
         .versionDescription("This is my v1 template")
         .machineImage(MachineImage.latestAmazonLinux2023())
         .securityGroup(SecurityGroup.Builder.create(this, "LaunchTemplateSG")
                 .vpc(vpc)
                 .build())
         .instanceProfile(instanceProfile)
         .build();
 

And the following demonstrates how to enable metadata options support.

 LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .httpEndpoint(true)
         .httpProtocolIpv6(true)
         .httpPutResponseHopLimit(1)
         .httpTokens(LaunchTemplateHttpTokens.REQUIRED)
         .instanceMetadataTags(true)
         .build();
 

And the following demonstrates how to add one or more security groups to a launch template.

 Vpc vpc;
 
 
 SecurityGroup sg1 = SecurityGroup.Builder.create(this, "sg1")
         .vpc(vpc)
         .build();
 SecurityGroup sg2 = SecurityGroup.Builder.create(this, "sg2")
         .vpc(vpc)
         .build();
 
 LaunchTemplate launchTemplate = LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .machineImage(MachineImage.latestAmazonLinux2023())
         .securityGroup(sg1)
         .build();
 
 launchTemplate.addSecurityGroup(sg2);
 

To use AWS Systems Manager parameters instead of AMI IDs in launch templates and resolve the AMI IDs at instance launch time:

 LaunchTemplate launchTemplate = LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .machineImage(MachineImage.resolveSsmParameterAtLaunch("parameterName"))
         .build();
 

Please note this feature does not support Launch Configurations.

Detailed Monitoring

The following demonstrates how to enable Detailed Monitoring for an EC2 instance. Keep in mind that Detailed Monitoring results in additional charges.

 Vpc vpc;
 InstanceType instanceType;
 
 
 Instance.Builder.create(this, "Instance1")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(MachineImage.latestAmazonLinux2023())
         .detailedMonitoring(true)
         .build();
 

Connecting to your instances using SSM Session Manager

SSM Session Manager makes it possible to connect to your instances from the AWS Console, without preparing SSH keys.

To do so, you need to:

  • Use a machine image with SSM Agent installed and configured (many images, including Amazon Linux 2023, come with SSM Agent preinstalled; otherwise you may need to install it yourself, for example via user data).
  • Create the instance with ssmSessionPermissions set to true.

If these conditions are met, you can connect to the instance from the EC2 Console. Example:

 Vpc vpc;
 InstanceType instanceType;
 
 
 Instance.Builder.create(this, "Instance1")
         .vpc(vpc)
         .instanceType(instanceType)
 
         // Amazon Linux 2023 comes with SSM Agent by default
         .machineImage(MachineImage.latestAmazonLinux2023())
 
         // Turn on SSM
         .ssmSessionPermissions(true)
         .build();
 

Managed Prefix Lists

Create and manage customer-managed prefix lists. If you don't specify anything in this construct, it will manage IPv4 addresses.
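
For example, a minimal sketch of an IPv6 prefix list, assuming the AddressFamily enum is used to switch address families:

 PrefixList.Builder.create(this, "Ipv6PrefixList")
         .addressFamily(AddressFamily.IP_V6)
         .maxEntries(10)
         .build();
 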

You can also create an empty Prefix List with only the maximum number of entries specified, as shown in the following code. If nothing is specified, maxEntries=1.

 PrefixList.Builder.create(this, "EmptyPrefixList")
         .maxEntries(100)
         .build();
 

maxEntries can also be omitted, as follows. In this case, maxEntries will be set to 2, the number of entries:

 PrefixList.Builder.create(this, "PrefixList")
         .entries(List.of(EntryProperty.builder().cidr("10.0.0.1/32").build(), EntryProperty.builder().cidr("10.0.0.2/32").description("sample1").build()))
         .build();
 

For more information see Work with customer-managed prefix lists