Amazon EC2 Construct Library


cfn-resources: Stable

cdk-constructs: Stable

The @aws-cdk/aws-ec2 package contains primitives for setting up networking and instances.

 import software.amazon.awscdk.services.ec2.*;

Most projects need a Virtual Private Cloud to provide security by means of network partitioning. This is achieved by creating an instance of Vpc:

 Vpc vpc = new Vpc(this, "VPC");

All default constructs require EC2 instances to be launched inside a VPC, so you should generally start by defining a VPC whenever you need to launch instances for your project.

Subnet Types

A VPC consists of one or more subnets that instances can be placed into. CDK distinguishes three different subnet types:

- Public: instances are directly reachable from the internet through an Internet Gateway.
- Private: instances can reach the internet through a NAT gateway, but are not reachable from the internet.
- Isolated: instances have no route to or from the internet at all; they can only communicate with other resources inside the VPC (or through VPC endpoints).

A default VPC configuration will create public and private subnets. However, if natGateways: 0 is set and subnetConfiguration is undefined, the default VPC configuration will create public and isolated subnets instead. See Advanced Subnet Configuration below for information on how to change the default subnet configuration.

Constructs using the VPC will "launch instances" (or more accurately, create Elastic Network Interfaces) into one or more of the subnets. They all accept a property called subnetSelection (sometimes called vpcSubnets) to allow you to select in what subnet to place the ENIs, usually defaulting to private subnets if the property is omitted.

If you would like to save on the cost of NAT gateways, you can use isolated subnets instead of private subnets (as described in Advanced Subnet Configuration). If you need private instances to have internet connectivity, another option is to reduce the number of NAT gateways created by setting the natGateways property to a lower value (the default is one NAT gateway per availability zone). Be aware that this may have availability implications for your application.

Read more about subnets.

Control over availability zones

By default, a VPC will spread over at most 3 Availability Zones available to it. To change the number of Availability Zones that the VPC will spread over, specify the maxAzs property when defining it.
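For example, a VPC limited to two Availability Zones might look like this (a sketch following the construct's documented props):

```java
// Limit the VPC to at most 2 Availability Zones
Vpc vpc = Vpc.Builder.create(this, "TheVPC")
        .maxAzs(2)
        .build();
```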

The number of Availability Zones that are available depends on the region and account of the Stack containing the VPC. If the region and account are specified on the Stack, the CLI will look up the existing Availability Zones and get an accurate count. If region and account are not specified, the stack could be deployed anywhere and it will have to make a safe choice, limiting itself to 2 Availability Zones.

Therefore, to get the VPC to spread over 3 or more availability zones, you must specify the environment where the stack will be deployed.
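Concretely, you might pin the stack's environment like this (the account ID and region below are placeholders):

```java
App app = new App();
// With an explicit account and region, the CLI can look up the real
// list of Availability Zones instead of falling back to two.
Stack stack = new Stack(app, "MyStack", StackProps.builder()
        .env(Environment.builder()
                .account("123456789012") // placeholder account ID
                .region("us-west-2")
                .build())
        .build());
```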

You can gain full control over the availability zone selection strategy by overriding the stack's getAvailabilityZones() method:

 public class MyStack extends Stack {
     public MyStack(Construct scope, String id) {
         this(scope, id, null);
     }

     public MyStack(Construct scope, String id, StackProps props) {
         super(scope, id, props);
     }

     @Override
     public List<String> getAvailabilityZones() {
         return asList("us-west-2a", "us-west-2b");
     }
 }

Note that overriding the getAvailabilityZones() method will override the default behavior for all constructs defined within the Stack.

Choosing subnets for resources

When creating resources that create Elastic Network Interfaces (such as databases or instances), there is an option to choose which subnets to place them in. For example, a VPC endpoint by default is placed into a subnet in every availability zone, but you can override which subnets to use. The property is typically called one of subnets, vpcSubnets or subnetSelection.

The example below will place the endpoint into two AZs (us-east-1a and us-east-1c), in Isolated subnets:

 InterfaceVpcEndpoint.Builder.create(stack, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("", 443))
         .subnets(SubnetSelection.builder()
                 .subnetType(SubnetType.ISOLATED)
                 .availabilityZones(asList("us-east-1a", "us-east-1c"))
                 .build())
         .build();

You can also specify specific subnet objects for granular control:

 InterfaceVpcEndpoint.Builder.create(stack, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("", 443))
         .subnets(SubnetSelection.builder()
                 .subnets(asList(subnet1, subnet2))
                 .build())
         .build();

Which subnets are selected is evaluated as follows:

- If specific subnet objects are passed via subnets, exactly those subnets are used.
- Otherwise, subnets are selected by subnetGroupName (matching a name given in subnetConfiguration) or by subnetType.
- The selection can be narrowed further by availabilityZones, and reduced to at most one subnet per AZ with onePerAz.

Using NAT instances

By default, the Vpc construct will create NAT gateways for you, which are managed by AWS. If you would prefer to use your own managed NAT instances instead, specify a different value for the natGatewayProvider property, as follows:

 // Configure the `natGatewayProvider` when defining a Vpc
 NatInstanceProvider natGatewayProvider = NatProvider.instance(NatInstanceProps.builder()
         .instanceType(new InstanceType("t3.small"))
         .build());

 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .natGatewayProvider(natGatewayProvider)
         // The 'natGateways' parameter now controls the number of NAT instances
         .natGateways(2)
         .build();

The construct will automatically search for the most recent NAT gateway AMI. If you prefer to use a custom AMI, use machineImage: MachineImage.genericLinux({ ... }) and configure the right AMI ID for the regions you want to deploy to.

By default, the NAT instances will route all traffic. To control what traffic gets routed, pass allowAllTraffic: false and access the NatInstanceProvider.connections member after having passed it to the VPC:

 NatInstanceProvider provider = NatProvider.instance(NatInstanceProps.builder()
         .instanceType(new InstanceType("t3.small"))
         .allowAllTraffic(false)
         .build());

 Vpc vpc = Vpc.Builder.create(stack, "TheVPC")
         .natGatewayProvider(provider)
         .build();

 // Pick which traffic gets routed; e.g. allow HTTP from a given range
 provider.getConnections().allowFrom(Peer.ipv4(""), Port.tcp(80));

Advanced Subnet Configuration

If the default VPC configuration (public and private subnets spanning the size of the VPC) doesn't suffice for you, you can configure what subnets to create by specifying the subnetConfiguration property. It allows you to configure the number and size of all subnets. Specifying an advanced subnet configuration could look like this:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         // 'cidr' configures the IP range and size of the entire VPC.
         // The IP space will be divided over the configured subnets.
         .cidr("")
         // 'maxAzs' configures the maximum number of availability zones to use
         .maxAzs(3)
         // 'subnetConfiguration' specifies the "subnet groups" to create.
         // Every subnet group will have a subnet for each AZ, so this
         // configuration will create `3 groups × 3 AZs = 9` subnets.
         .subnetConfiguration(asList(
                 SubnetConfiguration.builder()
                         // 'subnetType' controls Internet access, as described above.
                         .subnetType(SubnetType.PUBLIC)
                         // 'name' is used to name this particular subnet group. You will have to
                         // use the name for subnet selection if you have more than one subnet
                         // group of the same type.
                         .name("Ingress")
                         // 'cidrMask' specifies the IP addresses in the range of individual
                         // subnets in the group. Each of the subnets in this group will contain
                         // `2^(32 address bits - 24 subnet bits) - 2 reserved addresses = 254`
                         // usable IP addresses.
                         // If 'cidrMask' is left out the available address space is evenly
                         // divided across the remaining subnet groups.
                         .cidrMask(24)
                         .build(),
                 SubnetConfiguration.builder()
                         .cidrMask(24)
                         .name("Application")
                         .subnetType(SubnetType.PRIVATE_WITH_NAT)
                         .build(),
                 SubnetConfiguration.builder()
                         .cidrMask(28)
                         .name("Database")
                         .subnetType(SubnetType.PRIVATE_ISOLATED)
                         // 'reserved' can be used to reserve IP address space. No resources will
                         // be created for this subnet, but the IP range will be kept available for
                         // future creation of this subnet, or even for future subdivision.
                         .reserved(true)
                         .build()))
         .build();

The example above is one possible configuration, but you can use the constructs above to implement many other network configurations.

In a Region with three availability zones, the configuration above will produce the following VPC:

Subnet Name        | Type     | IP Block | AZ | Features
-------------------|----------|----------|----|-------------------------------
IngressSubnet1     | PUBLIC   |          | #1 | NAT Gateway
IngressSubnet2     | PUBLIC   |          | #2 | NAT Gateway
IngressSubnet3     | PUBLIC   |          | #3 | NAT Gateway
ApplicationSubnet1 | PRIVATE  |          | #1 | Route to NAT in IngressSubnet1
ApplicationSubnet2 | PRIVATE  |          | #2 | Route to NAT in IngressSubnet2
ApplicationSubnet3 | PRIVATE  |          | #3 | Route to NAT in IngressSubnet3
DatabaseSubnet1    | ISOLATED |          | #1 | Only routes within the VPC
DatabaseSubnet2    | ISOLATED |          | #2 | Only routes within the VPC
DatabaseSubnet3    | ISOLATED |          | #3 | Only routes within the VPC

Accessing the Internet Gateway

If you need access to the internet gateway, you can get its ID like so:

 String igwId = vpc.getInternetGatewayId();

For a VPC with only ISOLATED subnets, this value will be undefined.

This is only supported for VPCs created in the stack - currently you're unable to get the ID for imported VPCs. To do that you'd have to specifically look up the Internet Gateway by name, which would require knowing the name beforehand.

This can be useful for configuring routing using a combination of gateways: for more information see Routing below.


Routing

It's possible to add routes to any subnets using the addRoute() method. If, for example, you want an isolated subnet to have a static route via the default Internet Gateway created for the public subnet - perhaps for routing a VPN connection - you can do so like this:

 Vpc vpc = Vpc.Builder.create(this, "VPC")
         .subnetConfiguration(asList(
                 SubnetConfiguration.builder()
                         .subnetType(SubnetType.PUBLIC)
                         .name("Public")
                         .build(),
                 SubnetConfiguration.builder()
                         .subnetType(SubnetType.ISOLATED)
                         .name("Isolated")
                         .build()))
         .build();

 ((Subnet) vpc.getIsolatedSubnets().get(0)).addRoute("StaticRoute", AddRouteOptions.builder()
         .routerId(vpc.getInternetGatewayId())
         .routerType(RouterType.GATEWAY)
         .destinationCidrBlock("")
         .build());

Note that we cast to Subnet here because the list of subnets only returns an ISubnet.

Reserving subnet IP space

There are situations where the IP space for a subnet or number of subnets will need to be reserved. This is useful in situations where subnets would need to be added after the vpc is originally deployed, without causing IP renumbering for existing subnets. The IP space for a subnet may be reserved by setting the reserved subnetConfiguration property to true, as shown below:

 Vpc vpc = Vpc.Builder.create(this, "TheVPC")
         .subnetConfiguration(asList(
                 SubnetConfiguration.builder()
                         .cidrMask(26)
                         .name("Public")
                         .subnetType(SubnetType.PUBLIC)
                         .build(),
                 SubnetConfiguration.builder()
                         .cidrMask(26)
                         .name("Application1")
                         .subnetType(SubnetType.PRIVATE_WITH_NAT)
                         .build(),
                 SubnetConfiguration.builder()
                         .cidrMask(26)
                         .name("Application2")
                         .subnetType(SubnetType.PRIVATE_WITH_NAT)
                         // Not provisioned, but its IP space is kept available
                         .reserved(true)
                         .build(),
                 SubnetConfiguration.builder()
                         .cidrMask(27)
                         .name("Database")
                         .subnetType(SubnetType.ISOLATED)
                         .build()))
         .build();

In the example above, the subnet for Application2 is not actually provisioned but its IP space is still reserved. If in the future this subnet needs to be provisioned, then the reserved: true property should be removed. Reserving parts of the IP space prevents the other subnets from getting renumbered.

Sharing VPCs between stacks

If you are creating multiple Stacks inside the same CDK application, you can reuse a VPC defined in one Stack in another by simply passing the VPC instance around:

 /**
  * Stack1 creates the VPC
  */
 public class Stack1 extends Stack {
     public final Vpc vpc;

     public Stack1(App scope, String id) {
         this(scope, id, null);
     }

     public Stack1(App scope, String id, StackProps props) {
         super(scope, id, props);
         this.vpc = new Vpc(this, "VPC");
     }
 }

 public class Stack2Props extends StackProps {
     private IVpc vpc;

     public IVpc getVpc() {
         return this.vpc;
     }

     public Stack2Props vpc(IVpc vpc) {
         this.vpc = vpc;
         return this;
     }
 }

 /**
  * Stack2 consumes the VPC
  */
 public class Stack2 extends Stack {
     public Stack2(App scope, String id, Stack2Props props) {
         super(scope, id, props);

         // Pass the VPC to a construct that needs it
         new ConstructThatTakesAVpc(this, "Construct", new ConstructThatTakesAVpcProps()
                 .vpc(props.getVpc()));
     }
 }

 Stack1 stack1 = new Stack1(app, "Stack1");
 Stack2 stack2 = new Stack2(app, "Stack2", new Stack2Props()
         .vpc(stack1.vpc));

Importing an existing VPC

If your VPC is created outside your CDK app, you can use Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the stack's region and account, and import the subnet configuration. Looking up can be done by VPC ID, but more flexibly by searching for a specific tag on the VPC.

Subnet types will be determined from the aws-cdk:subnet-type tag on the subnet if it exists, or the presence of a route to an Internet Gateway otherwise. Subnet names will be determined from the aws-cdk:subnet-name tag on the subnet if it exists, or will mirror the subnet type otherwise (i.e. a public subnet will have the name "Public").

The result of the Vpc.fromLookup() operation will be written to a file called cdk.context.json. You must commit this file to source control so that the lookup values are available in non-privileged environments such as CI build steps, and to ensure your template builds are repeatable.

Here's how Vpc.fromLookup() can be used:

 IVpc vpc = Vpc.fromLookup(stack, "VPC", VpcLookupOptions.builder()
         // This imports the default VPC but you can also
         // specify a 'vpcName' or 'tags'.
         .isDefault(true)
         .build());

Vpc.fromLookup is the recommended way to import VPCs. If for whatever reason you do not want to use the context mechanism to look up a VPC at synthesis time, you can also use Vpc.fromVpcAttributes. This method performs no lookup at all: the attributes you supply are taken at face value and never validated against your account, so any mistakes will only surface at deployment time.

Using Vpc.fromVpcAttributes() looks like this:

 Object vpc = ec2.Vpc.fromVpcAttributes(stack, "VPC", Map.of(
         "vpcId", "vpc-1234",
         "availabilityZones", asList("us-east-1a", "us-east-1b"),
         // Either pass literals for all IDs
         "publicSubnetIds", asList("s-12345", "s-67890"),
         // OR: import a list of known length
         "privateSubnetIds", Fn.importListValue("PrivateSubnetIds", 2),
         // OR: split an imported string to a list of known length
         "isolatedSubnetIds", Fn.split(",", ssm.StringParameter.valueForStringParameter(stack, "MyParameter"), 2)));

Allowing Connections

In AWS, all network traffic in and out of Elastic Network Interfaces (ENIs) is controlled by Security Groups. You can think of Security Groups as a firewall with a set of rules. By default, Security Groups allow no incoming (ingress) traffic and all outgoing (egress) traffic. You can add ingress rules to them to allow incoming traffic streams. To exert fine-grained control over egress traffic, set allowAllOutbound: false on the SecurityGroup, after which you can add egress traffic rules.

You can manipulate Security Groups directly:

 SecurityGroup mySecurityGroup = SecurityGroup.Builder.create(this, "SecurityGroup")
         .vpc(vpc)
         .description("Allow ssh access to ec2 instances")
         .allowAllOutbound(true)
         .build();
 mySecurityGroup.addIngressRule(Peer.anyIpv4(), Port.tcp(22), "allow ssh access from the world");
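If you need fine-grained egress control as described above, a sketch of an outbound-restricted group could look like this:

```java
// With allowAllOutbound disabled, egress rules must be added explicitly
SecurityGroup restrictedSg = SecurityGroup.Builder.create(this, "RestrictedSG")
        .vpc(vpc)
        .allowAllOutbound(false)
        .build();
restrictedSg.addEgressRule(Peer.anyIpv4(), Port.tcp(443), "allow outbound HTTPS only");
```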

All constructs that create ENIs on your behalf (typically constructs that create EC2 instances or other VPC-connected resources) have security groups automatically assigned. Those constructs have an attribute called connections, which is an object that makes it convenient to update the security groups. If you want to allow connections between two constructs that have security groups, you have to add an egress rule to one security group and an ingress rule to the other. The connections object will automatically take care of this for you:

 // Allow connections from anywhere
 loadBalancer.connections.allowFromAnyIpv4(ec2.Port.tcp(443), "Allow inbound HTTPS");
 // The same, but an explicit IP address
 loadBalancer.connections.allowFrom(ec2.Peer.ipv4(""), ec2.Port.tcp(443), "Allow inbound HTTPS");
 // Allow connection between AutoScalingGroups
 appFleet.connections.allowTo(dbFleet, ec2.Port.tcp(443), "App can call database");

Connection Peers

There are various classes that implement the connection peer part:

 // Simple connection peers
 IPeer peer = ec2.Peer.ipv4("");
 peer = ec2.Peer.anyIpv4();
 peer = ec2.Peer.ipv6("::0/0");
 peer = ec2.Peer.anyIpv6();
 peer = ec2.Peer.prefixList("pl-12345");
 appFleet.connections.allowTo(peer, ec2.Port.tcp(443), "Allow outbound HTTPS");

Any object that has a security group can itself be used as a connection peer:

 // These automatically create appropriate ingress and egress rules in both security groups
 fleet1.connections.allowTo(fleet2, ec2.Port.tcp(80), "Allow between fleets");
 appFleet.connections.allowFromAnyIpv4(ec2.Port.tcp(80), "Allow from load balancer");

Port Ranges

The connections that are allowed are specified by port ranges. A number of classes provide the connection specifier:

 // E.g. a single TCP port:
 Port.tcp(80);
 // ... or a TCP port range:
 Port.tcpRange(60000, 65535);

NOTE: This set is not complete yet; for example, there is no library support for ICMP at the moment. However, you can write your own classes to implement those.

Default Ports

Some Constructs have default ports associated with them. For example, the listener of a load balancer does (it's the public port), or instances of an RDS database (it's the port the database is accepting connections on).

If the object you're calling the peering method on has a default port associated with it, you can call allowDefaultPortFrom() and omit the port specifier. If the argument has an associated default port, call allowDefaultPortTo().

For example:

 // Port implicit in listener
 listener.connections.allowDefaultPortFromAnyIpv4("Allow public");
 // Port implicit in peer
 appFleet.connections.allowDefaultPortTo(rdsDatabase, "Fleet can access database");

Security group rules

By default, security group rules will be added inline to the security group in the output CloudFormation template, if applicable. This includes any static rules by IP address and port range. This optimization helps to minimize the size of the template.

In some environments this is not desirable, for example if your security group access is controlled via tags. You can disable inline rules per security group or globally via the context key @aws-cdk/aws-ec2.securityGroupDisableInlineRules.

 SecurityGroup mySecurityGroupWithoutInlineRules = SecurityGroup.Builder.create(this, "SecurityGroup")
         .vpc(vpc)
         .description("Allow ssh access to ec2 instances")
         .allowAllOutbound(true)
         .disableInlineRules(true)
         .build();
 // This will add the rule as an external CloudFormation construct
 mySecurityGroupWithoutInlineRules.addIngressRule(Peer.anyIpv4(), Port.tcp(22), "allow ssh access from the world");

Machine Images (AMIs)

AMIs control the OS that gets launched when you start your EC2 instance. The EC2 library contains constructs to select the AMI you want to use.

Depending on the type of AMI, you select it a different way. Here are some examples of things you might want to use:

 // Pick the right Amazon Linux edition. All arguments shown are optional
 // and will default to these values when omitted.
 IMachineImage amznLinux = MachineImage.latestAmazonLinux(AmazonLinuxImageProps.builder()
         .generation(AmazonLinuxGeneration.AMAZON_LINUX)
         .edition(AmazonLinuxEdition.STANDARD)
         .virtualization(AmazonLinuxVirt.HVM)
         .storage(AmazonLinuxStorage.GENERAL_PURPOSE)
         .build());

 // Pick a Windows edition to use
 IMachineImage windows = MachineImage.latestWindows(WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE);

 // Read AMI id from SSM parameter store
 IMachineImage ssm = MachineImage.fromSSMParameter("/my/ami", OperatingSystemType.LINUX);

 // Look up the most recent image matching a set of AMI filters.
 // In this case, look up the NAT instance AMI, by using a wildcard
 // in the 'name' field:
 IMachineImage natAmi = MachineImage.lookup(LookupMachineImageProps.builder()
         .name("amzn-ami-vpc-nat-*")
         .owners(asList("amazon"))
         .build());

 // For other custom (Linux) images, instantiate a `GenericLinuxImage` with
 // a map giving the AMI to use for each region:
 IMachineImage linux = MachineImage.genericLinux(Map.of(
         "us-east-1", "ami-97785bed",
         "eu-west-1", "ami-12345678"));

 // For other custom (Windows) images, instantiate a `GenericWindowsImage` with
 // a map giving the AMI to use for each region:
 IMachineImage genericWindows = MachineImage.genericWindows(Map.of(
         "us-east-1", "ami-97785bed",
         "eu-west-1", "ami-12345678"));

NOTE: The AMIs selected by MachineImage.lookup() will be cached in cdk.context.json, so that your AutoScalingGroup instances aren't replaced while you are making unrelated changes to your CDK app.

To query for the latest AMI again, remove the relevant cache entry from cdk.context.json, or use the cdk context command. For more information, see Runtime Context in the CDK developer guide.
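The CLI commands involved might look like the following (the cache key shown is illustrative; real keys are longer and encode the full lookup filter):

```shell
# List all cached context values and their keys
cdk context
# Remove one cached AMI lookup by its key so it is re-queried on the next synth
cdk context --reset ami:region=us-east-1:owners=amazon
# Or clear the entire context cache
cdk context --clear
```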

MachineImage.genericLinux() and MachineImage.genericWindows() will use a CfnMapping in a region-agnostic stack.

Special VPC configurations

VPN connections to a VPC

Create your VPC with VPN connections by specifying the vpnConnections props (keys are construct ids):

 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .vpnConnections(Map.of(
                 // Dynamic routing (BGP)
                 "dynamic", VpnConnectionOptions.builder()
                         .ip("")
                         .build(),
                 // Static routing
                 "static", VpnConnectionOptions.builder()
                         .ip("")
                         .staticRoutes(asList("", ""))
                         .build()))
         .build();

To create a VPC that can accept VPN connections, set vpnGateway to true:

 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .vpnGateway(true)
         .build();

VPN connections can then be added:

 vpc.addVpnConnection("Dynamic", VpnConnectionOptions.builder()
         .ip("")
         .build());

By default, routes will be propagated on the route tables associated with the private subnets. If no private subnets exist, isolated subnets are used. If no isolated subnets exist, public subnets are used. Use the Vpc property vpnRoutePropagation to customize this behavior.

VPN connections expose metrics (cloudwatch.Metric) across all tunnels in the account/region and per connection:

 // Across all tunnels in the account/region
 Metric allDataOut = VpnConnection.metricAllTunnelDataOut();

 // For a specific vpn connection
 VpnConnection vpnConnection = vpc.addVpnConnection("Dynamic", VpnConnectionOptions.builder()
         .ip("")
         .build());
 Metric state = vpnConnection.metricTunnelState();

VPC endpoints

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

 // Add gateway endpoints when creating the VPC
 Vpc vpc = Vpc.Builder.create(this, "MyVpc")
         .gatewayEndpoints(Map.of(
                 "S3", GatewayVpcEndpointOptions.builder()
                         .service(GatewayVpcEndpointAwsService.S3)
                         .build()))
         .build();

 // Alternatively gateway endpoints can be added on the VPC
 GatewayVpcEndpoint dynamoDbEndpoint = vpc.addGatewayEndpoint("DynamoDbEndpoint", GatewayVpcEndpointOptions.builder()
         .service(GatewayVpcEndpointAwsService.DYNAMODB)
         .build());

 // This allows to customize the endpoint policy
 dynamoDbEndpoint.addToPolicy(PolicyStatement.Builder.create()
         // Restrict to listing and describing tables
         .principals(asList(new AnyPrincipal()))
         .actions(asList("dynamodb:DescribeTable", "dynamodb:ListTables"))
         .resources(asList("*"))
         .build());

 // Add an interface endpoint
 vpc.addInterfaceEndpoint("EcrDockerEndpoint", InterfaceVpcEndpointOptions.builder()
         .service(InterfaceVpcEndpointAwsService.ECR_DOCKER)
         .build());

By default, CDK will place a VPC endpoint in one subnet per AZ. If you wish to override the AZs CDK places the VPC endpoint in, use the subnets parameter as follows:

 InterfaceVpcEndpoint.Builder.create(stack, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("", 443))
         // Choose which availability zones to place the VPC endpoint in, based on
         // available AZs
         .subnets(SubnetSelection.builder()
                 .availabilityZones(asList("us-east-1a", "us-east-1c"))
                 .build())
         .build();

Per the AWS documentation, not all VPC endpoint services are available in all AZs. If you specify the parameter lookupSupportedAzs, CDK attempts to discover which AZs an endpoint service is available in, and will ensure the VPC endpoint is not placed in a subnet that doesn't match those AZs. These AZs will be stored in cdk.context.json.

 InterfaceVpcEndpoint.Builder.create(stack, "VPC Endpoint")
         .vpc(vpc)
         .service(new InterfaceVpcEndpointService("", 443))
         // Choose which availability zones to place the VPC endpoint in, based on
         // available AZs
         .lookupSupportedAzs(true)
         .build();

Pre-defined AWS services are defined in the InterfaceVpcEndpointAwsService class, and can be used to create VPC endpoints without having to configure name, ports, etc. For example, a Keyspaces endpoint can be created for use in your VPC:

 InterfaceVpcEndpoint.Builder.create(stack, "VPC Endpoint")
         .vpc(vpc)
         .service(InterfaceVpcEndpointAwsService.KEYSPACES)
         .build();

Security groups for interface VPC endpoints

By default, interface VPC endpoints create a new security group and traffic is not automatically allowed from the VPC CIDR.

Use the connections object to allow traffic to flow to the endpoint:

 // 'myEndpoint' is an InterfaceVpcEndpoint created earlier
 myEndpoint.getConnections().allowDefaultPortFromAnyIpv4();

Alternatively, existing security groups can be used by specifying the securityGroups prop.
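For example, reusing a pre-existing security group for an interface endpoint might look like this (the security group variable is assumed to exist):

```java
// 'existingSg' is a SecurityGroup created or imported elsewhere
vpc.addInterfaceEndpoint("EcrDockerEndpoint", InterfaceVpcEndpointOptions.builder()
        .service(InterfaceVpcEndpointAwsService.ECR_DOCKER)
        .securityGroups(asList(existingSg))
        .build());
```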

VPC endpoint services

A VPC endpoint service enables you to expose one or more Network Load Balancers as a provider service to consumers, who connect to your service over a VPC endpoint. You can restrict access to your service via allowed principals (anything that extends ArnPrincipal), and require that new connections be manually accepted.

 VpcEndpointService.Builder.create(this, "EndpointService")
         .vpcEndpointServiceLoadBalancers(asList(networkLoadBalancer1, networkLoadBalancer2))
         .acceptanceRequired(true)
         .allowedPrincipals(asList(new ArnPrincipal("arn:aws:iam::123456789012:root")))
         .build();

Endpoint services support private DNS, which makes it easier for clients to connect to your service by automatically setting up DNS in their VPC. You can enable private DNS on an endpoint service like so:

 // 'exampleService' is a VpcEndpointService and 'zone' a public hosted zone defined elsewhere
 new VpcEndpointServiceDomainName(stack, "EndpointDomain", VpcEndpointServiceDomainNameProps.builder()
         .endpointService(exampleService)
         .domainName("my-stuff.aws-cdk.dev")
         .publicHostedZone(zone)
         .build());

Note: The domain name must be owned (registered through Route53) by the account the endpoint service is in, or delegated to the account. The VpcEndpointServiceDomainName will handle the AWS side of domain verification; the process is described in the AWS documentation on endpoint service private DNS.

Client VPN endpoint

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.

Use the addClientVpnEndpoint() method to add a client VPN endpoint to a VPC:

 // 'samlProvider' is assumed to be a SAML identity provider defined elsewhere
 vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("")
         .serverCertificateArn("")
         // Mutual authentication
         .clientCertificateArn("")
         // User-based authentication
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .build());

The endpoint must use at least one authentication method:

- Mutual authentication with a client certificate
- User-based authentication (Active Directory or federated/SAML)

If user-based authentication is used, the self-service portal URL is made available via a CloudFormation output.

By default, a new security group is created and logging is enabled. Moreover, a rule to authorize all users to the VPC CIDR is created.

To customize authorization rules, set the authorizeAllUsersToVpcCidr prop to false and use addAuthorizationRule():

 ClientVpnEndpoint endpoint = vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("")
         .serverCertificateArn("")
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .authorizeAllUsersToVpcCidr(false)
         .build());

 endpoint.addAuthorizationRule("Rule", ClientVpnAuthorizationRuleOptions.builder()
         .cidr("")
         .groupId("group-id")
         .build());

Use addRoute() to configure network routes:

 ClientVpnEndpoint endpoint = vpc.addClientVpnEndpoint("Endpoint", ClientVpnEndpointOptions.builder()
         .cidr("")
         .serverCertificateArn("")
         .userBasedAuthentication(ClientVpnUserBasedAuthentication.federated(samlProvider))
         .build());

 // Client-to-client access
 endpoint.addRoute("Route", ClientVpnRouteOptions.builder()
         .cidr("")
         .target(ClientVpnRouteTarget.local())
         .build());

Use the connections object of the endpoint to allow traffic to other security groups.
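For example, to let VPN clients reach another construct's security group (the endpoint and the target are assumed to have been created earlier):

```java
// 'endpoint' is the ClientVpnEndpoint added above; 'appInstance' is some
// VPC-connected construct with a security group
endpoint.getConnections().allowTo(appInstance, Port.tcp(443), "VPN clients to app");
```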


Instances

You can use the Instance class to start up a single EC2 instance. For production setups, we recommend you use an AutoScalingGroup from the aws-autoscaling module instead, as AutoScalingGroups will take care of restarting your instance if it ever fails.
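A minimal single-instance sketch (the instance type and image are example choices):

```java
Instance instance = Instance.Builder.create(this, "Instance")
        .vpc(vpc)
        .instanceType(InstanceType.of(InstanceClass.BURSTABLE3, InstanceSize.MICRO))
        .machineImage(MachineImage.latestAmazonLinux())
        .build();
```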

Configuring Instances using CloudFormation Init (cfn-init)

CloudFormation Init allows you to configure your instances by writing files to them, installing software packages, starting services and running arbitrary commands. By default, if any of the instance setup commands throw an error, the deployment will fail and roll back to the previously known good state. The following documentation also applies to AutoScalingGroups.

For the full set of capabilities of this system, see the documentation for AWS::CloudFormation::Init. Here is an example of applying some configuration to an instance:

 Instance.Builder.create(this, "Instance")
         .vpc(vpc)
         .instanceType(instanceType)
         .machineImage(machineImage)
         // Showing the most complex setup, if you have simpler requirements
         // you can use `CloudFormationInit.fromElements()`.
         .init(CloudFormationInit.fromConfigSets(ConfigSetProps.builder()
                 .configSets(Map.of(
                         // Applies the configs below in this order
                         "default", asList("yumPreinstall", "config")))
                 .configs(Map.of(
                         "yumPreinstall", new InitConfig(asList(InitPackage.yum("git"))),
                         "config", new InitConfig(asList(
                                 InitFile.fromObject("/etc/stack.json", Map.of(
                                         "stackId", stack.getStackId(),
                                         "stackName", stack.getStackName(),
                                         "region", stack.getRegion())),
                                 InitGroup.fromName("my-group"),
                                 InitUser.fromName("my-user"),
                                 InitPackage.rpm("")))))
                 .build()))
         .initOptions(ApplyCloudFormationInitOptions.builder()
                 // Optional, which configsets to activate (['default'] by default)
                 .configSets(asList("default"))
                 // Optional, how long the installation is expected to take (5 minutes by default)
                 .timeout(Duration.minutes(30))
                 // Optional, whether to include the --url argument when running cfn-init and cfn-signal commands (false by default)
                 .includeUrl(true)
                 // Optional, whether to include the --role argument when running cfn-init and cfn-signal commands (false by default)
                 .includeRole(true)
                 .build())
         .build();

You can have services restarted after the init process has made changes to the system. To do that, instantiate an InitServiceRestartHandle and pass it to the config elements that need to trigger the restart and the service itself. For example, the following config writes a config file for nginx, extracts an archive to the root directory, and then restarts nginx so that it picks up the new config and files:

 // Example automatically generated without compilation. See
 InitServiceRestartHandle handle = new InitServiceRestartHandle();
 ec2.CloudFormationInit.fromElements(
         ec2.InitFile.fromString("/etc/nginx/nginx.conf", "...", Map.of(
                 "serviceRestartHandles", asList(handle))),
         // "archive.zip" is a placeholder for the S3 object key of the archive
         ec2.InitSource.fromS3Object("/var/www/html", myBucket, "archive.zip", Map.of(
                 "serviceRestartHandles", asList(handle))),
         ec2.InitService.enable("nginx", Map.of(
                 "serviceRestartHandle", handle)));

Bastion Hosts

A bastion host functions as an instance used to access servers and resources in a VPC without opening up the complete VPC on a network level. You can connect to a bastion host using a standard SSH connection targeting port 22 on the host. As an alternative, you can use the SSH connection feature of AWS Systems Manager Session Manager, which does not need an open security group.

A default bastion host for use via SSM can be configured like:

 // Example automatically generated. See
 BastionHostLinux host = BastionHostLinux.Builder.create(this, "BastionHost")
         .vpc(vpc)
         .build();

If you want to connect from the internet using SSH, you need to place the host into a public subnet. You can then configure allowed source hosts.

 // Example automatically generated. See
 BastionHostLinux host = BastionHostLinux.Builder.create(this, "BastionHost")
         .vpc(vpc)
         .subnetSelection(SubnetSelection.builder()
                 .subnetType(SubnetType.PUBLIC)
                 .build())
         .build();
 host.allowSshAccessFrom(Peer.ipv4("1.2.3.4/32"));

As there are no SSH public keys deployed on this machine, you need to use EC2 Instance Connect with the command aws ec2-instance-connect send-ssh-public-key to provide your SSH public key.
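For example (the instance ID, availability zone, OS user, and key path below are placeholders):

```shell
# Push a temporary SSH public key to the instance; the key is only valid
# for 60 seconds, after which a fresh send-ssh-public-key call is needed
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-0123456789abcdef0 \
    --availability-zone us-east-1a \
    --instance-os-user ec2-user \
    --ssh-public-key file://~/.ssh/id_rsa.pub
```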

EBS volume for the bastion host can be encrypted like:

 // Example automatically generated without compilation. See
 BastionHostLinux host = BastionHostLinux.Builder.create(stack, "BastionHost")
         .vpc(vpc)
         .blockDevices(asList(Map.of(
                 "deviceName", "EBSBastionHost",
                 "volume", BlockDeviceVolume.ebs(10, Map.of(
                         "encrypted", true)))))
         .build();

Block Devices

To add EBS block device mappings, specify the blockDevices property. The following example sets the EBS-backed root device (/dev/sda1) size to 50 GiB, and adds another EBS-backed device mapped to /dev/sdm that is 100 GiB in size:

 // Example automatically generated without compilation. See
 Instance.Builder.create(this, "Instance")
         // ...
         .blockDevices(asList(
                 Map.of(
                         "deviceName", "/dev/sda1",
                         "volume", ec2.BlockDeviceVolume.ebs(50)),
                 Map.of(
                         "deviceName", "/dev/sdm",
                         "volume", ec2.BlockDeviceVolume.ebs(100))))
         .build();


Volumes

Whereas a BlockDeviceVolume is an EBS volume that is created and destroyed as part of the creation and destruction of a specific instance, a Volume is for when you want an EBS volume separate from any particular instance. A Volume is an EBS block device that can be attached to, or detached from, any instance at any time. Some types of Volumes can also be attached to multiple instances at the same time to allow you to have shared storage between those instances.

A notable restriction is that a Volume can only be attached to instances in the same availability zone as the Volume itself.

The following demonstrates how to create a 500 GiB encrypted Volume in the us-west-2a availability zone, and give a role the ability to attach that Volume to a specific instance:

 // Example automatically generated without compilation. See
 Instance instance = Instance.Builder.create(this, "Instance")
         // ...
         .build();
 Role role = Role.Builder.create(this, "SomeRole")
         .assumedBy(new AccountRootPrincipal())
         .build();
 Volume volume = Volume.Builder.create(this, "Volume")
         .availabilityZone("us-west-2a")
         .size(Size.gibibytes(500))
         .encrypted(true)
         .build();
 volume.grantAttachVolume(role, asList(instance));

Instances Attaching Volumes to Themselves

If you need to grant an instance the ability to attach/detach an EBS volume to/from itself, then using grantAttachVolume and grantDetachVolume as outlined above will lead to an unresolvable circular reference between the instance role and the instance. In this case, use grantAttachVolumeByResourceTag and grantDetachVolumeByResourceTag as follows:

 // Example automatically generated without compilation. See
 Instance instance = Instance.Builder.create(this, "Instance")
         // ...
         .build();
 Volume volume = Volume.Builder.create(this, "Volume")
         // ...
         .build();
 Grant attachGrant = volume.grantAttachVolumeByResourceTag(instance.getGrantPrincipal(), asList(instance));
 Grant detachGrant = volume.grantDetachVolumeByResourceTag(instance.getGrantPrincipal(), asList(instance));

Attaching Volumes

The Amazon EC2 documentation for Linux Instances and Windows Instances contains information on how to attach and detach your Volumes to/from instances, and how to format them for use.

The following is a sample skeleton of EC2 UserData that can be used to attach a Volume to the Linux instance that it is running on:

 // Example automatically generated without compilation. See
 Volume volume = Volume.Builder.create(this, "Volume")
         // ...
         .build();
 Instance instance = Instance.Builder.create(this, "Instance")
         // ...
         .build();
 volume.grantAttachVolumeByResourceTag(instance.getGrantPrincipal(), asList(instance));
 String targetDevice = "/dev/xvdz";
 instance.getUserData().addCommands(
         // Retrieve a token for the instance metadata service (IMDSv2)
         "TOKEN=$(curl -SsfX PUT \"http://169.254.169.254/latest/api/token\" -H \"X-aws-ec2-metadata-token-ttl-seconds: 21600\")",
         // Retrieve the Id of the current instance from the instance metadata service
         "INSTANCE_ID=$(curl -SsfH \"X-aws-ec2-metadata-token: $TOKEN\" http://169.254.169.254/latest/meta-data/instance-id)",
         // Attach the volume to the target device
         String.format("aws --region %s ec2 attach-volume --volume-id %s --instance-id $INSTANCE_ID --device %s", Stack.of(this).getRegion(), volume.getVolumeId(), targetDevice),
         // Wait until the volume has attached
         String.format("while ! test -e %s; do sleep 1; done", targetDevice));

VPC Flow Logs

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.

By default a flow log will be created with CloudWatch Logs as the destination.

You can create a flow log like this:

 // Example automatically generated without compilation. See
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .build();

Or you can add a Flow Log to a VPC by using the addFlowLog method like this:

 // Example automatically generated. See
 Vpc vpc = new Vpc(this, "Vpc");
 vpc.addFlowLog("FlowLog");

You can also add multiple flow logs with different destinations.

 // Example automatically generated. See
 Vpc vpc = new Vpc(this, "Vpc");
 vpc.addFlowLog("FlowLogS3", Map.of(
         "destination", ec2.FlowLogDestination.toS3()));
 vpc.addFlowLog("FlowLogCloudWatch", Map.of(
         "trafficType", ec2.FlowLogTrafficType.getREJECT()));

By default the CDK will create the necessary resources for the destination. For the CloudWatch Logs destination it will create a CloudWatch Logs Log Group as well as the IAM role with the necessary permissions to publish to the log group. In the case of an S3 destination, it will create the S3 bucket.

If you want to customize any of the destination resources you can provide your own as part of the destination.

CloudWatch Logs

 // Example automatically generated without compilation. See
 LogGroup logGroup = new LogGroup(this, "MyCustomLogGroup");
 Role role = Role.Builder.create(this, "MyCustomRole")
         .assumedBy(new ServicePrincipal("vpc-flow-logs.amazonaws.com"))
         .build();
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(ec2.FlowLogDestination.toCloudWatchLogs(logGroup, role))
         .build();


S3

 // Example automatically generated without compilation. See
 Bucket bucket = new Bucket(this, "MyCustomBucket");
 FlowLog.Builder.create(this, "FlowLog")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(ec2.FlowLogDestination.toS3(bucket))
         .build();
 FlowLog.Builder.create(this, "FlowLogWithKeyPrefix")
         .resourceType(FlowLogResourceType.fromVpc(vpc))
         .destination(ec2.FlowLogDestination.toS3(bucket, "prefix/"))
         .build();

User Data

User data enables you to run a script when your instances start up. In order to configure these scripts you can add commands directly to the script or you can use the UserData's convenience functions to aid in the creation of your script.

User data can be configured to run a script found in an asset like this:

 // Example automatically generated without compilation. See
 Asset asset = Asset.Builder.create(this, "Asset").path(path.join(__dirname, "")).build();
 Instance instance = Instance.Builder.create(this, "Instance")
         // ...
         .build();
 String localPath = instance.getUserData().addS3DownloadCommand(Map.of(
         "bucket", asset.getBucket(),
         "bucketKey", asset.getS3ObjectKey()));
 instance.getUserData().addExecuteFileCommand(Map.of(
         "filePath", localPath,
         "arguments", "--verbose -y"));
 asset.grantRead(instance.getRole());

Multipart user data

In addition to the above, MultipartUserData can be used to change instance startup behavior. Multipart user data is composed of separate parts that together form an archive. The most common parts are scripts executed during instance set-up, but there are other kinds, too.

The advantage of a multipart archive is flexibility: it lets you add additional parts, or use specialized parts to fine-tune instance startup. Some services (like AWS Batch) support only MultipartUserData.

The parts can be executed at different moments of instance start-up and can serve different purposes. This is controlled by the contentType property. For common scripts, text/x-shellscript; charset="utf-8" can be used as the content type.

To create an archive, instantiate MultipartUserData. Then, add parts to the archive using addPart. The MultipartBody class contains methods supporting the creation of body parts.

If a fully custom part is required, it can be created using MultipartBody.fromRawBody; in that case, full control over the content type, transfer encoding, and body properties is given to the user.

Below is an example of creating multipart user data with a single body part responsible for installing awscli and configuring the maximum size of storage used by Docker containers:

 // Example automatically generated without compilation. See
 UserData bootHookConf = ec2.UserData.forLinux();
 bootHookConf.addCommands("cloud-init-per once docker_options echo 'OPTIONS=\"${OPTIONS} --storage-opt dm.basesize=40G\"' >> /etc/sysconfig/docker");
 UserData setupCommands = ec2.UserData.forLinux();
 setupCommands.addCommands("sudo yum install awscli && echo Packages installed > /var/tmp/setup");
 MultipartUserData multipartUserData = new MultipartUserData();
 // Docker has to be configured at an early stage, so the content type is overridden to boothook
 multipartUserData.addPart(ec2.MultipartBody.fromUserData(bootHookConf, "text/cloud-boothook; charset=\"us-ascii\""));
 // Execute the rest of setup
 multipartUserData.addPart(ec2.MultipartBody.fromUserData(setupCommands));
 LaunchTemplate.Builder.create(stack, "")
         .userData(multipartUserData)
         .build();

For more information see Specifying Multiple User Data Blocks Using a MIME Multi Part Archive
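The rendered archive is a plain MIME multi-part document. To make the structure concrete, here is a rough sketch of such an archive built with plain Java string handling (the boundary value and part contents are illustrative only, not what the CDK actually emits):

```java
public class MultipartSketch {
    public static void main(String[] args) {
        String boundary = "==BOUNDARY==";
        StringBuilder archive = new StringBuilder();

        // Archive header: declares the document as multipart/mixed with a boundary marker
        archive.append("Content-Type: multipart/mixed; boundary=\"").append(boundary).append("\"\n");
        archive.append("MIME-Version: 1.0\n\n");

        // Part 1: a cloud-boothook, executed early in the boot process
        archive.append("--").append(boundary).append("\n");
        archive.append("Content-Type: text/cloud-boothook; charset=\"us-ascii\"\n\n");
        archive.append("cloud-init-per once docker_options echo 'OPTIONS=\"${OPTIONS} --storage-opt dm.basesize=40G\"' >> /etc/sysconfig/docker\n");

        // Part 2: an ordinary shell script, executed during instance set-up
        archive.append("--").append(boundary).append("\n");
        archive.append("Content-Type: text/x-shellscript; charset=\"utf-8\"\n\n");
        archive.append("#!/bin/bash\nyum install -y awscli\n");

        // Closing boundary marks the end of the archive
        archive.append("--").append(boundary).append("--\n");

        System.out.print(archive);
    }
}
```

Each part carries its own Content-Type header, which is what the contentType argument of addPart controls.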

Using add*Command on MultipartUserData

To use the add*Command methods that are inherited from the UserData interface on MultipartUserData, you must add a part to the MultipartUserData and designate it as the receiver for these methods. This is accomplished by using the addUserDataPart() method on MultipartUserData with the makeDefault argument set to true:

 // Example automatically generated without compilation. See
 MultipartUserData multipartUserData = new MultipartUserData();
 UserData commandsUserData = ec2.UserData.forLinux();
 multipartUserData.addUserDataPart(commandsUserData, MultipartBody.SHELL_SCRIPT, true);
 // Adding commands to the multipartUserData adds them to commandsUserData, and vice-versa.
 multipartUserData.addCommands("touch /root/multi.txt");
 commandsUserData.addCommands("touch /root/userdata.txt");

When used on an EC2 instance, the above multipartUserData will create both multi.txt and userdata.txt in /root.

Importing existing subnet

To import an existing Subnet, call Subnet.fromSubnetAttributes() or Subnet.fromSubnetId(). Only if you supply the subnet's Availability Zone and Route Table Ids when calling Subnet.fromSubnetAttributes() will you be able to use the CDK features that use these values (such as selecting one subnet per AZ).

Importing an existing subnet looks like this:

 // Example automatically generated without compilation. See
 // Supply all properties
 // Supply all properties
 ISubnet subnet1 = Subnet.fromSubnetAttributes(this, "SubnetFromAttributes", Map.of(
         "subnetId", "s-1234",
         "availabilityZone", "pub-az-4465",
         "routeTableId", "rt-145"));
 // Supply only subnet id
 ISubnet subnet2 = Subnet.fromSubnetId(this, "SubnetFromId", "s-1234");

Launch Templates

A Launch Template is a standardized template that contains the configuration information to launch an instance. They can be used when launching instances on their own, through Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet. Launch templates enable you to store launch parameters so that you do not have to specify them every time you launch an instance. For information on Launch Templates please see the official documentation.

The following demonstrates how to create a launch template with an Amazon Machine Image and a security group:

 // Example automatically generated without compilation. See
 Vpc vpc = new Vpc(this, "Vpc");
 // ...
 LaunchTemplate template = LaunchTemplate.Builder.create(this, "LaunchTemplate")
         .machineImage(new AmazonLinuxImage())
         .securityGroup(SecurityGroup.Builder.create(this, "LaunchTemplateSG")
                 .vpc(vpc)
                 .build())
         .build();