Amazon EKS Construct Library

---

cfn-resources: Stable

cdk-constructs: Stable


This construct library allows you to define Amazon Elastic Kubernetes Service (EKS) clusters. In addition, the library also supports defining Kubernetes resource manifests within EKS clusters.

Quick Start

This example defines an Amazon EKS cluster and applies a Kubernetes pod manifest to it:

# Example automatically generated. See https://github.com/aws/jsii/issues/826
# provisioning a cluster
cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_19
)

# apply a kubernetes manifest to the cluster
cluster.add_manifest("mypod",
    api_version="v1",
    kind="Pod",
    metadata={"name": "mypod"},
    spec={
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"container_port": 8080}]
        }
        ]
    }
)

In order to interact with your cluster through kubectl, you can use the aws eks update-kubeconfig AWS CLI command to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains the command to run. For example:

Outputs:
ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy

Execute the aws eks update-kubeconfig ... command in your terminal to create or update a local kubeconfig context:

$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config

And now you can simply use kubectl:

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
...

Architectural Overview

The following is a qualitative diagram of the various possible components involved in the cluster deployment.

 +-----------------------------------------------+               +-----------------+
 |                 EKS Cluster                   |    kubectl    |                 |
 |-----------------------------------------------|<-------------+| Kubectl Handler |
 |                                               |               |                 |
 |                                               |               +-----------------+
 | +--------------------+    +-----------------+ |
 | |                    |    |                 | |
 | | Managed Node Group |    | Fargate Profile | |               +-----------------+
 | |                    |    |                 | |               |                 |
 | +--------------------+    +-----------------+ |               | Cluster Handler |
 |                                               |               |                 |
 +-----------------------------------------------+               +-----------------+
    ^                                   ^                          +
    |                                   |                          |
    | connect self managed capacity     |                          | aws-sdk
    |                                   | create/update/delete     |
    +                                   |                          v
 +--------------------+                 +              +-------------------+
 |                    |                 --------------+| eks.amazonaws.com |
 | Auto Scaling Group |                                +-------------------+
 |                    |
 +--------------------+

In a nutshell:

  • EKS Cluster - The cluster endpoint created by EKS.

  • Managed Node Group - EC2 worker nodes managed by EKS.

  • Fargate Profile - Fargate worker nodes managed by EKS.

  • Auto Scaling Group - EC2 worker nodes managed by the user.

  • KubectlHandler - Lambda function for invoking kubectl commands on the cluster - created by CDK.

  • ClusterHandler - Lambda function for interacting with EKS API to manage the cluster lifecycle - created by CDK.

A more detailed breakdown of each is provided further down this README.

Provisioning clusters

Creating a new cluster is done using the Cluster or FargateCluster constructs. The only required property is the Kubernetes version.

# Example automatically generated. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19
)

You can also use FargateCluster to provision a cluster that uses only Fargate workers.

# Example automatically generated. See https://github.com/aws/jsii/issues/826
eks.FargateCluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19
)

NOTE: Only 1 cluster per stack is supported. If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see https://github.com/aws/aws-cdk/issues/10073.

Below you’ll find a few important cluster configuration options. The first is capacity: the amount and type of worker nodes that are available to the cluster for deploying resources. Amazon EKS offers 3 ways of configuring capacity, which you can combine as you like:

Managed node groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.

For more details visit Amazon EKS Managed Node Groups.

Managed Node Groups are the recommended way to allocate cluster capacity.

By default, this library will allocate a managed node group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money).

At cluster instantiation time, you can customize the number of instances and their type:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19,
    default_capacity=5,
    default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL)
)

To access the node group that was created on your behalf, you can use cluster.defaultNodegroup.
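
For example, a minimal sketch (assuming aws_cdk.aws_iam is imported as iam; the managed policy shown is illustrative) that grants the default node group's instance role additional permissions:

# access the default node group created on your behalf and extend its role
cluster.default_nodegroup.role.add_managed_policy(
    iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSSMManagedInstanceCore"))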

Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the cluster.addNodegroupCapacity method:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19,
    default_capacity=0
)

cluster.add_nodegroup_capacity("custom-node-group",
    instance_types=[ec2.InstanceType("m5.large")],
    min_size=4,
    disk_size=100,
    ami_type=eks.NodegroupAmiType.AL2_X86_64_GPU  # ...plus any other options
)

Spot Instances Support

Use capacityType to create managed node groups comprised of spot instances. To maximize the availability of your applications while using Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types with the instanceTypes property.

For more details visit Managed node group capacity types.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_nodegroup_capacity("extra-ng-spot",
    instance_types=[
        ec2.InstanceType("c5.large"),
        ec2.InstanceType("c5a.large"),
        ec2.InstanceType("c5d.large")
    ],
    min_size=3,
    capacity_type=eks.CapacityType.SPOT
)

Launch Template Support

You can specify a launch template that the node group will use. For example, this can be useful if you want to use a custom AMI or add custom user data.

When supplying a custom user data script, it must be encoded in the MIME multi-part archive format, since Amazon EKS merges it with its own user data. Visit the Launch Template Docs for more details.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
user_data = """MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "Running custom user data script"

--==MYBOUNDARY==--\
"""
lt = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
    launch_template_data={
        "instance_type": "t3.small",
        "user_data": Fn.base64(user_data)
    }
)
cluster.add_nodegroup_capacity("extra-ng",
    launch_template_spec={
        "id": lt.ref,
        "version": lt.attr_latest_version_number
    }
)

Note that when using a custom AMI, Amazon EKS doesn’t merge any user data, which means you do not need the multi-part encoding and are responsible for supplying the required bootstrap commands for nodes to join the cluster. In the following example, /etc/eks/bootstrap.sh from the AMI will be used to bootstrap the node.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
user_data = ec2.UserData.for_linux()
user_data.add_commands("set -o xtrace", f"/etc/eks/bootstrap.sh {cluster.cluster_name}")
lt = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
    launch_template_data={
        "image_id": "some-ami-id", # custom AMI
        "instance_type": "t3.small",
        "user_data": Fn.base64(user_data.render())
    }
)
cluster.add_nodegroup_capacity("extra-ng",
    launch_template_spec={
        "id": lt.ref,
        "version": lt.attr_latest_version_number
    }
)

You may specify one instanceType in the launch template or multiple instanceTypes in the node group, but not both.

For more details visit Launch Template Support.

Graviton 2 instance types are supported including c6g, m6g, r6g and t4g.

Fargate profiles

AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.

You can control which pods start on Fargate and how they run with Fargate Profiles, which are defined as part of your Amazon EKS cluster.

See Fargate Considerations in the AWS EKS User Guide.

You can add Fargate Profiles to any EKS cluster defined in your CDK app through the addFargateProfile() method. The following example adds a profile that will match all pods from the “default” namespace:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_fargate_profile("MyProfile",
    selectors=[{"namespace": "default"}]
)

You can also directly use the FargateProfile construct to create profiles under different scopes:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
eks.FargateProfile(scope, "MyProfile",
    cluster=cluster,
    selectors=[{"namespace": "default"}]
)

To create an EKS cluster that only uses Fargate capacity, you can use FargateCluster. The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the “kube-system” and “default” namespaces. It is also configured to run CoreDNS on Fargate.

# Example automatically generated. See https://github.com/aws/jsii/issues/826
cluster = eks.FargateCluster(self, "MyCluster",
    version=eks.KubernetesVersion.V1_19
)

NOTE: Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).

Self-managed nodes

Another way of allocating capacity to an EKS cluster is by using self-managed nodes. EC2 instances that are part of the auto-scaling group will serve as worker nodes for the cluster. This type of capacity is also commonly referred to as EC2 Capacity or EC2 Nodes.

For a detailed overview please visit Self Managed Nodes.

Creating an auto-scaling group and connecting it to the cluster is done using the cluster.addAutoScalingGroupCapacity method:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_auto_scaling_group_capacity("frontend-nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    min_capacity=3,
    vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC}
)

To connect an already initialized auto-scaling group, use the cluster.connectAutoScalingGroupCapacity() method:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
asg = autoscaling.AutoScalingGroup(...)  # from the aws_cdk.aws_autoscaling module
cluster.connect_auto_scaling_group_capacity(asg)

In both cases, the cluster security group will be automatically attached to the auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.

Note: The default updateType for auto-scaling groups does not replace existing nodes. Since security groups are determined at launch time, self-managed nodes that were provisioned with version 1.78.0 or lower will not be updated. To apply the new configuration to all of your self-managed nodes, you’ll need to replace the nodes using the UpdateType.REPLACING_UPDATE policy for the updateType property.
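
For example, a minimal sketch (assuming aws_cdk.aws_autoscaling is imported as autoscaling; the group name and sizing are illustrative):

cluster.add_auto_scaling_group_capacity("replacing-nodes",
    instance_type=ec2.InstanceType("t3.medium"),
    min_capacity=3,
    # replace existing nodes on configuration changes so the new security groups apply
    update_type=autoscaling.UpdateType.REPLACING_UPDATE
)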

You can customize the /etc/eks/bootstrap.sh script, which is responsible for bootstrapping the node to the EKS cluster. For example, you can use kubeletExtraArgs to add custom node labels or taints.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_auto_scaling_group_capacity("spot",
    instance_type=ec2.InstanceType("t3.large"),
    min_capacity=2,
    bootstrap_options={
        "kubelet_extra_args": "--node-labels foo=bar,goo=far",
        "aws_api_retry_attempts": 5
    }
)

To disable bootstrapping altogether (i.e. to fully customize user-data), set bootstrapEnabled to false when you add the capacity.
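
A minimal sketch (instance type and naming are illustrative):

cluster.add_auto_scaling_group_capacity("custom-bootstrap-nodes",
    instance_type=ec2.InstanceType("t3.medium"),
    # skip the EKS bootstrap script; you are responsible for supplying user data
    bootstrap_enabled=False
)

You can also configure the cluster to use an auto-scaling group as the default capacity: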

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19,
    default_capacity_type=eks.DefaultCapacityType.EC2
)

This will allocate an auto-scaling group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money). To access the AutoScalingGroup that was created on your behalf, you can use cluster.defaultCapacity. You can also independently create an AutoScalingGroup and connect it to the cluster using the cluster.connectAutoScalingGroupCapacity method:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
asg = autoscaling.AutoScalingGroup(...)  # from the aws_cdk.aws_autoscaling module
cluster.connect_auto_scaling_group_capacity(asg)

This will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.

Spot Instances

When using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost. To enable spot capacity, use the spotPrice property:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_auto_scaling_group_capacity("spot",
    spot_price="0.1094",
    instance_type=ec2.InstanceType("t3.large"),
    max_capacity=10
)

Spot instance nodes will be labeled with lifecycle=Ec2Spot and tainted with PreferNoSchedule.

The AWS Node Termination Handler DaemonSet will be installed from the Amazon EKS Helm chart repository on these nodes. The termination handler ensures that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable, such as EC2 maintenance events and EC2 Spot interruptions, and helps gracefully stop all pods running on spot nodes that are about to be terminated.

Handler Version: 1.7.0

Chart Version: 0.9.5

To disable the installation of the termination handler, set the spotInterruptHandler property to false. This applies both to addAutoScalingGroupCapacity and connectAutoScalingGroupCapacity.
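
A minimal sketch:

cluster.add_auto_scaling_group_capacity("spot-without-handler",
    instance_type=ec2.InstanceType("t3.large"),
    spot_price="0.1094",
    # do not install the AWS Node Termination Handler on these nodes
    spot_interrupt_handler=False
)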

Bottlerocket

Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts. At this moment, Bottlerocket is only supported when using self-managed auto-scaling groups.

NOTICE: Bottlerocket is only available in some supported AWS regions.

The following example will create an auto-scaling group of 2 t3.small Linux instances running with the Bottlerocket AMI.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_auto_scaling_group_capacity("BottlerocketNodes",
    instance_type=ec2.InstanceType("t3.small"),
    min_capacity=2,
    machine_image_type=eks.MachineImageType.BOTTLEROCKET
)

The specific Bottlerocket AMI variant will be auto-selected according to the k8s version for the x86_64 architecture. For example, if the Amazon EKS cluster version is 1.17, the Bottlerocket AMI variant will be auto-selected as aws-k8s-1.17 behind the scenes.

See Variants for more details.

Please note that Bottlerocket does not allow customizing bootstrap options, and the bootstrapOptions property is not supported when you create Bottlerocket capacity.

Endpoint Access

When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).

By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

You can configure the cluster endpoint access by using the endpointAccess property:

# Example automatically generated. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_19,
    endpoint_access=eks.EndpointAccess.PRIVATE
)

The default value is eks.EndpointAccess.PUBLIC_AND_PRIVATE, which means the cluster endpoint is accessible from outside of your VPC, while worker node traffic and kubectl commands issued by this library stay within your VPC.
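
You can also restrict public access to specific CIDR blocks using onlyFrom. A minimal sketch (the CIDR block is illustrative):

cluster = eks.Cluster(self, "restricted-eks",
    version=eks.KubernetesVersion.V1_19,
    # the public endpoint is only reachable from this CIDR block;
    # private access from within the VPC remains enabled
    endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE.only_from("203.0.113.0/24")
)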

VPC Support

You can specify the VPC of the cluster using the vpc and vpcSubnets properties:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
vpc = ec2.Vpc(self, "Vpc")

eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19,
    vpc=vpc,
    vpc_subnets=[{"subnet_type": ec2.SubnetType.PRIVATE}]
)

Note: Isolated VPCs (i.e. with no internet access) are not currently supported. See https://github.com/aws/aws-cdk/issues/12171

If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated with any EKS-managed capacity (i.e. Managed Node Groups and Fargate Profiles).

If you allocate self-managed capacity, you can specify which subnets the auto-scaling group should use:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
vpc = ec2.Vpc(self, "Vpc")
cluster.add_auto_scaling_group_capacity("nodes",
    vpc_subnets={"subnets": vpc.private_subnets}
)

There are two additional components you might want to provision within the VPC.

Kubectl Handler

The KubectlHandler is a Lambda function responsible for issuing kubectl and helm commands against the cluster when you add resource manifests to the cluster.

The handler association to the VPC is derived from the endpointAccess configuration. The rule of thumb is: If the cluster VPC can be associated, it will be.

Breaking this down, it means that if the endpoint exposes private access (via EndpointAccess.PRIVATE or EndpointAccess.PUBLIC_AND_PRIVATE), and the VPC contains private subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.

If the endpoint does not expose private access (via EndpointAccess.PUBLIC) or the VPC does not contain private subnets, the function will not be provisioned within the VPC.

Cluster Handler

The ClusterHandler is a Lambda function responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision this function inside the VPC, set the placeClusterHandlerInVpc property to true. This will place the function inside the private subnets of the VPC based on the selection strategy specified in the vpcSubnets property.
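
For example, a minimal sketch:

cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_19,
    # run the cluster handler Lambda function inside the VPC's private subnets
    place_cluster_handler_in_vpc=True
)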

You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

# Example automatically generated. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_19,
    cluster_handler_environment={
        "http_proxy": "http://proxy.myproxy.com"
    }
)

Kubectl Support

The resources are created in the cluster by running kubectl apply from a Python Lambda function.

Environment

You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

# Example automatically generated. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_19,
    kubectl_environment={
        "http_proxy": "http://proxy.myproxy.com"
    }
)

Runtime

The kubectl handler uses kubectl, helm and the aws CLI in order to interact with the cluster. These are bundled into AWS Lambda layers included in the @aws-cdk/lambda-layer-awscli and @aws-cdk/lambda-layer-kubectl modules.

You can specify a custom lambda.LayerVersion if you wish to use a different version of these tools. The handler expects the layer to include the following three executables:

helm/helm
kubectl/kubectl
awscli/aws

See more information in the Dockerfile for @aws-cdk/lambda-layer-awscli and the Dockerfile for @aws-cdk/lambda-layer-kubectl.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
layer = lambda_.LayerVersion(self, "KubectlLayer",
    code=lambda_.Code.from_asset("layer.zip")
)

Then, specify the layer when the cluster is defined:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "MyCluster",
    kubectl_layer=layer
)

# or
cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
    kubectl_layer=layer
)

Memory

By default, the kubectl provider is configured with 1024MiB of memory. You can use the kubectlMemory option to specify the memory size for the AWS Lambda function:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
from aws_cdk.core import Size


eks.Cluster(self, "MyCluster",
    kubectl_memory=Size.gibibytes(4)
)

# or
eks.Cluster.from_cluster_attributes(self, "MyCluster",
    kubectl_memory=Size.gibibytes(4)
)

ARM64 Support

Instance types with ARM64 architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 instanceType (such as m6g.medium), and the latest Amazon Linux 2 AMI for ARM64 will be automatically selected.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# add a managed ARM64 nodegroup
cluster.add_nodegroup_capacity("extra-ng-arm",
    instance_types=[ec2.InstanceType("m6g.medium")],
    min_size=2
)

# add a self-managed ARM64 nodegroup
cluster.add_auto_scaling_group_capacity("self-ng-arm",
    instance_type=ec2.InstanceType("m6g.medium"),
    min_capacity=2
)

Masters Role

When you create a cluster, you can specify a mastersRole. The Cluster construct will associate this role with the system:masters RBAC group, giving it super-user access to the cluster.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
role = iam.Role(...)
eks.Cluster(self, "HelloEKS",
    version=eks.KubernetesVersion.V1_19,
    masters_role=role
)

If you do not specify it, a default role will be created on your behalf, which can be assumed by anyone in the account with sts:AssumeRole permissions for this role.

This is the role you see as part of the stack outputs mentioned in the Quick Start.

$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config

Encryption

When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled. The documentation on creating a cluster can provide more details about the customer master key (CMK) that can be used for the encryption.

You can use the secretsEncryptionKey to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.

This setting can only be specified when the cluster is created and cannot be updated.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
secrets_key = kms.Key(self, "SecretsKey")
cluster = eks.Cluster(self, "MyCluster",
    secrets_encryption_key=secrets_key
)

You can also use a similar configuration for a cluster built using the FargateCluster construct.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
secrets_key = kms.Key(self, "SecretsKey")
cluster = eks.FargateCluster(self, "MyFargateCluster",
    secrets_encryption_key=secrets_key
)

The Amazon Resource Name (ARN) for that CMK can be retrieved.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster_encryption_config_key_arn = cluster.cluster_encryption_config_key_arn

Permissions and Security

Amazon EKS provides several mechanisms for securing the cluster and granting permissions to specific IAM users and roles.

AWS IAM Mapping

As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).

The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource on your behalf and exposes an API through cluster.awsAuth for mapping users, roles and accounts.

Furthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.

For example, let’s say you want to grant an IAM user administrative privileges on your cluster:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])

A convenience method for mapping a role to the system:masters group is also available:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.aws_auth.add_masters_role(role)

Cluster Security Group

When you create an Amazon EKS cluster, a cluster security group is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely between each other.

The ID for that security group can be retrieved after creating the cluster.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster_security_group_id = cluster.cluster_security_group_id

Node SSH Access

If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you’re connecting to and pass it when you add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):

See SSH into nodes for a code example.
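
As a minimal sketch (in addition to the linked example), assuming an existing EC2 key pair named my-key-pair in the target region:

cluster.add_auto_scaling_group_capacity("ssh-enabled-nodes",
    instance_type=ec2.InstanceType("t3.medium"),
    min_capacity=2,
    key_name="my-key-pair",  # placeholder for an existing key pair
    vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC}
)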

If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.

Service Accounts

With service accounts, you can provide Kubernetes Pods access to AWS resources.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# add service account
sa = cluster.add_service_account("MyServiceAccount")

bucket = Bucket(self, "Bucket")
bucket.grant_read_write(sa)

mypod = cluster.add_manifest("mypod", {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "mypod"},
    "spec": {
        "serviceAccountName": sa.service_account_name,
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"containerPort": 8080}]
        }]
    }
})

# create the resource after the service account.
mypod.node.add_dependency(sa)

# print the IAM role arn for this service account
cdk.CfnOutput(self, "ServiceAccountIamRole", value=sa.role.role_arn)

Note that using sa.serviceAccountName above does not translate into a resource dependency. This is why an explicit dependency is needed. See https://github.com/aws/aws-cdk/issues/9910 for more details.

You can also add service accounts to existing clusters. To do so, pass the openIdConnectProvider property when you import the cluster into the application.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# you can import an existing provider
provider = eks.OpenIdConnectProvider.from_open_id_connect_provider_arn(self, "Provider", "arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC")

# or create a new one using an existing issuer url
provider = eks.OpenIdConnectProvider(self, "Provider", url=issuer_url)

cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
    cluster_name="Cluster",
    open_id_connect_provider=provider,
    kubectl_role_arn="arn:aws:iam::123456:role/service-role/k8sservicerole"
)

sa = cluster.add_service_account("MyServiceAccount")

bucket = Bucket(self, "Bucket")
bucket.grant_read_write(sa)

Note that adding service accounts requires running kubectl commands against the cluster. This means you must also pass the kubectlRoleArn when importing the cluster. See Using existing Clusters.

Applying Kubernetes Resources

The library supports several popular resource deployment mechanisms, among which are:

Kubernetes Manifests

The KubernetesManifest construct or cluster.addManifest method can be used to apply Kubernetes resource manifests to this cluster.

When using cluster.addManifest, the manifest construct is defined within the cluster’s stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new KubernetesManifest to create the manifest in the scope of the other stack.

The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
app_label = {"app": "hello-kubernetes"}

deployment = {
    "api_version": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "replicas": 3,
        "selector": {"match_labels": app_label},
        "template": {
            "metadata": {"labels": app_label},
            "spec": {
                "containers": [{
                    "name": "hello-kubernetes",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"container_port": 8080}]
                }
                ]
            }
        }
    }
}

service = {
    "api_version": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 80, "target_port": 8080}],
        "selector": app_label
    }
}

# option 1: use a construct
KubernetesManifest(self, "hello-kub",
    cluster=cluster,
    manifest=[deployment, service]
)

# or, option2: use `addManifest`
cluster.add_manifest("hello-kub", service, deployment)

Adding resources from a URL

The following example will deploy a resource manifest hosted on a remote server:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import yaml  # PyYAML, assumed available in the synth environment
import requests  # assumed available in the synth environment


manifest_url = "https://url/of/manifest.yaml"
manifest = list(yaml.safe_load_all(requests.get(manifest_url).text))
cluster.add_manifest("my-resource", *manifest)

Dependencies

There are cases where Kubernetes resources must be deployed in a specific order. For example, you cannot define a resource in a Kubernetes namespace before the namespace was created.

You can represent dependencies between KubernetesManifests using resource.node.addDependency():

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
namespace = cluster.add_manifest("my-namespace", {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "my-app"}
})

service = cluster.add_manifest("my-service", {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "myservice",
        "namespace": "my-app"
    },
    "spec": {}  # service spec elided in this example
})

service.node.add_dependency(namespace)

NOTE: when a KubernetesManifest includes multiple resources (either directly or via cluster.addManifest('foo', r1, r2, r3, ...)), these resources will be applied as a single manifest via kubectl and will be applied sequentially (the standard behavior in kubectl).


Kubernetes manifests are implemented as CloudFormation resources in the CDK. This means that if the manifest is deleted from your code (or the stack is deleted), the next cdk deploy will issue a kubectl delete command and the Kubernetes resources in that manifest will be deleted.

Resource Pruning

When a resource is deleted from a Kubernetes manifest, the EKS module will automatically delete it by injecting a prune label into all manifest resources. This label is then passed to kubectl apply --prune.

Pruning is enabled by default but can be disabled through the prune option when a cluster is defined:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Cluster(self, "MyCluster",
    prune=False
)

Manifests Validation

The kubectl CLI supports applying a manifest by skipping the validation. This can be accomplished by setting the skipValidation flag to true in the KubernetesManifest props.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
eks.KubernetesManifest(self, "HelloAppWithoutValidation",
    cluster=self.cluster,
    manifest=[deployment, service],
    skip_validation=True
)

Helm Charts

The HelmChart construct or cluster.addHelmChart method can be used to add Kubernetes resources to this cluster using Helm.

When using cluster.addHelmChart, the manifest construct is defined within the cluster’s stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new HelmChart to create the chart in the scope of the other stack.

The following example will install the NGINX Ingress Controller to your cluster using Helm.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# option 1: use a construct
HelmChart(self, "NginxIngress",
    cluster=cluster,
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)

# or, option2: use `addHelmChart`
cluster.add_helm_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)

Helm charts will be installed and updated using helm upgrade --install, where a few parameters are being passed down (such as repo, values, version, namespace, wait, timeout, etc). This means that if the chart is added to CDK with the same release name, it will try to update the chart in the cluster.

Helm charts are implemented as CloudFormation resources in CDK. This means that if the chart is deleted from your code (or the stack is deleted), the next cdk deploy will issue a helm uninstall command and the Helm chart will be deleted.

When no release name is defined, a unique ID will be allocated for the release based on the construct path.
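
For example, a minimal sketch that sets an explicit release name and passes chart values (the values shown are illustrative for this chart):

cluster.add_helm_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system",
    release="my-nginx",  # explicit Helm release name
    values={
        "controller": {"replicaCount": 2}
    },
    wait=True  # wait for chart resources to become ready before completing
)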

By default, all Helm charts will be installed concurrently. In some cases, this could cause race conditions where two Helm charts attempt to deploy the same resource or if Helm charts depend on each other. You can use chart.node.addDependency() in order to declare a dependency order between charts:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
chart1 = cluster.add_helm_chart(...)
chart2 = cluster.add_helm_chart(...)

chart2.node.add_dependency(chart1)

CDK8s Charts

CDK8s is an open-source library that enables Kubernetes manifest authoring using familiar programming languages. It is founded on the same technologies as the AWS CDK, such as constructs and jsii.

To learn more about cdk8s, visit the Getting Started tutorials.

The EKS module natively integrates with cdk8s and allows you to apply cdk8s charts on AWS EKS clusters via the cluster.addCdk8sChart method.

In addition to cdk8s, you can also use cdk8s+, which provides higher-level abstractions for the core Kubernetes API objects. You can think of it as the L2 constructs for Kubernetes. Any other cdk8s-based libraries are also supported, for example cdk8s-debore.

To get started, add the following dependencies to your package.json file:

"dependencies": {
  "cdk8s": "0.30.0",
  "cdk8s-plus": "0.30.0",
  "constructs": "3.0.4"
}

Note that the version of cdk8s must be >=0.30.0.

Similar to how you would create a stack by extending @aws-cdk/core.Stack, we recommend you create a chart of your own that extends cdk8s.Chart, and add your Kubernetes resources to it. You can use aws-cdk construct attributes and properties inside your cdk8s construct freely.

In this example we create a chart that accepts an s3.Bucket and passes its name to a Kubernetes pod as an environment variable.

Notice that the chart must accept a constructs.Construct type as its scope, not an @aws-cdk/core.Construct as you would normally use. For this reason, to avoid possible confusion, we will create the chart in a separate file:

+ my-chart.ts

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_s3 as s3
import constructs as constructs
import cdk8s as cdk8s
import cdk8s_plus as kplus


class MyChart(cdk8s.Chart):
    def __init__(self, scope, id, *, bucket):
        super().__init__(scope, id)

        kplus.Pod(self, "Pod",
            spec={
                "containers": [
                    kplus.Container(
                        image="my-image",
                        env={
                            "BUCKET_NAME": kplus.EnvValue.from_value(bucket.bucket_name)
                        }
                    )
                ]
            }
        )

Then, in your AWS CDK app:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_s3 as s3
import cdk8s as cdk8s
from ..my_chart import MyChart


# some bucket..
bucket = s3.Bucket(self, "Bucket")

# create a cdk8s chart and use `cdk8s.App` as the scope.
my_chart = MyChart(cdk8s.App(), "MyChart", bucket=bucket)

# add the cdk8s chart to the cluster
cluster.add_cdk8s_chart("my-chart", my_chart)

Custom CDK8s Constructs

You can also compose a few stock cdk8s+ constructs into your own custom construct. However, since mixing scopes between aws-cdk and cdk8s is currently not supported, the Construct class you’ll need to use is the one from the constructs module, and not from @aws-cdk/core like you normally would. This is why we used new cdk8s.App() as the scope of the chart above.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import constructs as constructs
import cdk8s as cdk8s
import cdk8s_plus as kplus


class LoadBalancedWebService(constructs.Construct):
    def __init__(self, scope, id, props):
        super().__init__(scope, id)

        deployment = kplus.Deployment(self, "Deployment",
            spec={
                "replicas": props.replicas,
                "pod_spec_template": {
                    "containers": [kplus.Container(image=props.image)]
                }
            }
        )

        deployment.expose(port=props.port, service_type=kplus.ServiceType.LOAD_BALANCER)

Manually importing k8s specs and CRDs

If you find yourself unable to use cdk8s+, or would just like to use the native k8s objects or CRDs directly, you can do so by manually importing them using the cdk8s-cli.

See Importing kubernetes objects for detailed instructions.

Patching Kubernetes Resources

The KubernetesPatch construct can be used to update existing Kubernetes resources. The following example patches the hello-kubernetes deployment from the example above to 5 replicas.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
KubernetesPatch(self, "hello-kub-deployment-label",
    cluster=cluster,
    resource_name="deployment/hello-kubernetes",
    apply_patch={"spec": {"replicas": 5}},
    restore_patch={"spec": {"replicas": 3}}
)

Querying Kubernetes Resources

The KubernetesObjectValue construct can be used to query for information about Kubernetes objects, and use that as part of your CDK application.

For example, you can fetch the address of a LoadBalancer type service:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# query the load balancer address
my_service_address = KubernetesObjectValue(self, "LoadBalancerAttribute",
    cluster=cluster,
    object_type="service",
    object_name="my-service",
    json_path=".status.loadBalancer.ingress[0].hostname"
)

# pass the address to a lambda function
proxy_function = lambda_.Function(self, "ProxyFunction",
    runtime=lambda_.Runtime.PYTHON_3_8,
    handler="index.handler",
    code=lambda_.Code.from_asset("proxy"),  # handler code path is a placeholder
    environment={
        "my_service_address": my_service_address.value
    }
)

Specifically, since the above use-case is quite common, there is an easier way to access that information:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
load_balancer_address = cluster.get_service_load_balancer_address("my-service")

Using existing clusters

The Amazon EKS library allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app.

First, you’ll need to “import” a cluster to your CDK app. To do that, use the eks.Cluster.fromClusterAttributes() static method:

# Example automatically generated. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
    cluster_name="my-cluster-name",
    kubectl_role_arn="arn:aws:iam::1111111:role/iam-role-that-has-masters-access"
)

Then, you can use addManifest or addHelmChart to define resources inside your Kubernetes cluster. For example:

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
cluster.add_manifest("Test",
    api_version="v1",
    kind="ConfigMap",
    metadata={
        "name": "myconfigmap"
    },
    data={
        "Key": "value",
        "Another": "123454"
    }
)

At the minimum, when importing clusters for kubectl management, you will need to specify:

  • clusterName - the name of the cluster.

  • kubectlRoleArn - the ARN of an IAM role mapped to the system:masters RBAC role. If the cluster you are importing was created using the AWS CDK, the CloudFormation stack has an output that includes an IAM role that can be used. Otherwise, you can create an IAM role and map it to system:masters manually. The trust policy of this role should include the arn:aws:iam::${accountId}:root principal in order to allow the execution role of the kubectl resource to assume it.

If the cluster is configured with private-only or private and restricted public Kubernetes endpoint access, you must also specify:

  • kubectlSecurityGroupId - the ID of an EC2 security group that is allowed to connect to the cluster’s control plane security group. For example, the EKS managed cluster security group.

  • kubectlPrivateSubnetIds - a list of private VPC subnet IDs that will be used to access the Kubernetes endpoint.
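
For example, a minimal sketch (the security group and subnet IDs are placeholders):

cluster = eks.Cluster.from_cluster_attributes(self, "MyPrivateCluster",
    cluster_name="my-cluster-name",
    kubectl_role_arn="arn:aws:iam::1111111:role/iam-role-that-has-masters-access",
    kubectl_security_group_id="sg-0123456789abcdef0",
    kubectl_private_subnet_ids=["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"]
)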