Package software.amazon.awscdk.services.eks
Amazon EKS Construct Library
This construct library allows you to define Amazon Elastic Kubernetes Service (EKS) clusters. In addition, the library also supports defining Kubernetes resource manifests within EKS clusters.
Table Of Contents
- Amazon EKS Construct Library
- Table Of Contents
- Quick Start
- Architectural Overview
- Provisioning clusters
- Permissions and Security
- Applying Kubernetes Resources
- Patching Kubernetes Resources
- Querying Kubernetes Resources
- Add-ons
- Using existing clusters
- Logging
- Known Issues and Limitations
Quick Start
This example defines an Amazon EKS cluster with the following configuration:
- Dedicated VPC with default configuration (Implicitly created using ec2.Vpc)
- A Kubernetes pod with a container based on the paulbouwer/hello-kubernetes image.
```java
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer;

// provisioning a cluster
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .kubectlLayer(new KubectlV31Layer(this, "kubectl"))
        .build();

// apply a kubernetes manifest to the cluster
cluster.addManifest("mypod", Map.of(
        "apiVersion", "v1",
        "kind", "Pod",
        "metadata", Map.of("name", "mypod"),
        "spec", Map.of(
                "containers", List.of(Map.of(
                        "name", "hello",
                        "image", "paulbouwer/hello-kubernetes:1.5",
                        "ports", List.of(Map.of("containerPort", 8080)))))));
```
Architectural Overview
The following is a qualitative diagram of the various possible components involved in the cluster deployment.
```
 +-----------------------------------------------+               +-----------------+
 |                 EKS Cluster                   |    kubectl    |                 |
 |                 -----------                   |<-------------+| Kubectl Handler |
 |                                               |               |                 |
 |                                               |               +-----------------+
 | +--------------------+    +-----------------+ |
 | |                    |    |                 | |
 | | Managed Node Group |    | Fargate Profile | |               +-----------------+
 | |                    |    |                 | |               |                 |
 | +--------------------+    +-----------------+ |               | Cluster Handler |
 |                                               |               |                 |
 +-----------------------------------------------+               +-----------------+
    ^                                   ^                          +
    |                                   |                          |
    | connect self managed capacity     |                          | aws-sdk
    |                                   | create/update/delete     |
    +                                   |                          v
 +--------------------+                 +              +-------------------+
 |                    |                 +--------------+| eks.amazonaws.com |
 | Auto Scaling Group |                                +-------------------+
 |                    |
 +--------------------+
```
In a nutshell:
- EKS Cluster - The cluster endpoint created by EKS.
- Managed Node Group - EC2 worker nodes managed by EKS.
- Fargate Profile - Fargate worker nodes managed by EKS.
- Auto Scaling Group - EC2 worker nodes managed by the user.
- KubectlHandler - Lambda function for invoking `kubectl` commands on the cluster - created by CDK.
- ClusterHandler - Lambda function for interacting with the EKS API to manage the cluster lifecycle - created by CDK.
A more detailed breakdown of each is provided further down this README.
Provisioning clusters
Creating a new cluster is done using the `Cluster` or `FargateCluster` constructs. The only required property is the Kubernetes `version`.

```java
Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .build();
```

You can also use `FargateCluster` to provision a cluster that uses only Fargate workers.

```java
FargateCluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .build();
```
NOTE: Only 1 cluster per stack is supported. If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see https://github.com/aws/aws-cdk/issues/10073.
Below you'll find a few important cluster configuration options. The first is capacity: the amount and type of worker nodes that are available to the cluster for deploying resources. Amazon EKS offers three ways of configuring capacity, which you can combine as you like:
Managed node groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
For more details visit Amazon EKS Managed Node Groups.
Managed Node Groups are the recommended way to allocate cluster capacity.
By default, this library will allocate a managed node group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money).
At cluster instantiation time, you can customize the number of instances and their type:
```java
Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .defaultCapacity(5)
        .defaultCapacityInstance(InstanceType.of(InstanceClass.M5, InstanceSize.SMALL))
        .build();
```
To access the node group that was created on your behalf, you can use `cluster.defaultNodegroup`.
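For illustration, a minimal sketch of referencing the auto-created node group, assuming the jsii-generated Java getter `getDefaultNodegroup()`; note that it returns null when `defaultCapacity` is 0 or when the default capacity type is EC2:

```java
Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .build();

// the implicitly created managed node group; may be null depending on
// the defaultCapacity / defaultCapacityType configuration
Nodegroup defaultNodegroup = cluster.getDefaultNodegroup();
```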
Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the `cluster.addNodegroupCapacity` method:

```java
Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .defaultCapacity(0)
        .build();

cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        .minSize(4)
        .diskSize(100)
        .build());
```
To set node taints, use the `taints` option:

```java
Cluster cluster;

cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        .taints(List.of(TaintSpec.builder()
                .effect(TaintEffect.NO_SCHEDULE)
                .key("foo")
                .value("bar")
                .build()))
        .build());
```
To define the type of AMI for the node group, you may explicitly set `amiType` according to your requirements; the supported values can be found here.

```java
Cluster cluster;

// X86_64 based AMI managed node group
cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        // NOTE: if amiType is an x86_64-based image, the instance types here must be x86_64-based.
        .amiType(NodegroupAmiType.AL2023_X86_64_STANDARD)
        .build());

// ARM_64 based AMI managed node group
cluster.addNodegroupCapacity("custom-node-group-arm", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m6g.medium")))
        // NOTE: if amiType is an ARM-based image, the instance types here must be ARM-based.
        .amiType(NodegroupAmiType.AL2023_ARM_64_STANDARD)
        .build());
```
To define the maximum number of instances that can be simultaneously replaced in a node group during a version update, set the `maxUnavailable` or `maxUnavailablePercentage` option.

For more details visit Updating a managed node group.

```java
Cluster cluster;

cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        .maxSize(5)
        .maxUnavailable(2)
        .build());
```

```java
Cluster cluster;

cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        .maxUnavailablePercentage(33)
        .build());
```
NOTE: If you add instances with the inferentia class (`inf1` or `inf2`) or trainium class (`trn1` or `trn1n`), the Neuron plugin will be automatically installed in the Kubernetes cluster.
Node Groups with IPv6 Support
Node groups are available with IPv6 configured networks. Custom roles assigned to node groups need additional permissions for pods to obtain an IPv6 address; the default node role already includes these permissions.
For more details visit Configuring the Amazon VPC CNI plugin for Kubernetes to use IAM roles for service accounts
```java
PolicyDocument ipv6Management = PolicyDocument.Builder.create()
        .statements(List.of(PolicyStatement.Builder.create()
                .resources(List.of("arn:aws:ec2:*:*:network-interface/*"))
                .actions(List.of("ec2:AssignIpv6Addresses", "ec2:UnassignIpv6Addresses"))
                .build()))
        .build();

Role eksClusterNodeGroupRole = Role.Builder.create(this, "eksClusterNodeGroupRole")
        .roleName("eksClusterNodeGroupRole")
        .assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
        .managedPolicies(List.of(
                ManagedPolicy.fromAwsManagedPolicyName("AmazonEKSWorkerNodePolicy"),
                ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryReadOnly"),
                ManagedPolicy.fromAwsManagedPolicyName("AmazonEKS_CNI_Policy")))
        .inlinePolicies(Map.of("ipv6Management", ipv6Management))
        .build();

Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .defaultCapacity(0)
        .build();

cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m5.large")))
        .minSize(2)
        .diskSize(100)
        .nodeRole(eksClusterNodeGroupRole)
        .build());
```
Spot Instances Support
Use `capacityType` to create managed node groups comprised of Spot Instances. To maximize the availability of your applications while using Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types with the `instanceTypes` property.
For more details visit Managed node group capacity types.
```java
Cluster cluster;

cluster.addNodegroupCapacity("extra-ng-spot", NodegroupOptions.builder()
        .instanceTypes(List.of(
                new InstanceType("c5.large"),
                new InstanceType("c5a.large"),
                new InstanceType("c5d.large")))
        .minSize(3)
        .capacityType(CapacityType.SPOT)
        .build());
```
Launch Template Support
You can specify a launch template that the node group will use. For example, this can be useful if you want to use a custom AMI or add custom user data.
When supplying a custom user data script, it must be encoded in the MIME multi-part archive format, since Amazon EKS merges it with its own user data. Visit the Launch Template Docs for more details.
```java
Cluster cluster;

String userData = "MIME-Version: 1.0\nContent-Type: multipart/mixed; boundary=\"==MYBOUNDARY==\"\n\n--==MYBOUNDARY==\nContent-Type: text/x-shellscript; charset=\"us-ascii\"\n\n#!/bin/bash\necho \"Running custom user data script\"\n\n--==MYBOUNDARY==--\\\n";

CfnLaunchTemplate lt = CfnLaunchTemplate.Builder.create(this, "LaunchTemplate")
        .launchTemplateData(LaunchTemplateDataProperty.builder()
                .instanceType("t3.small")
                .userData(Fn.base64(userData))
                .build())
        .build();

cluster.addNodegroupCapacity("extra-ng", NodegroupOptions.builder()
        .launchTemplateSpec(LaunchTemplateSpec.builder()
                .id(lt.getRef())
                .version(lt.getAttrLatestVersionNumber())
                .build())
        .build());
```
Note that when using a custom AMI, Amazon EKS doesn't merge any user data. This means you do not need the multi-part encoding, and you are responsible for supplying the bootstrap commands required for nodes to join the cluster.

In the following example, `/etc/eks/bootstrap.sh` from the AMI will be used to bootstrap the node.

```java
Cluster cluster;

UserData userData = UserData.forLinux();
userData.addCommands(
        "set -o xtrace",
        String.format("/etc/eks/bootstrap.sh %s", cluster.getClusterName()));

CfnLaunchTemplate lt = CfnLaunchTemplate.Builder.create(this, "LaunchTemplate")
        .launchTemplateData(LaunchTemplateDataProperty.builder()
                .imageId("some-ami-id") // custom AMI
                .instanceType("t3.small")
                .userData(Fn.base64(userData.render()))
                .build())
        .build();

cluster.addNodegroupCapacity("extra-ng", NodegroupOptions.builder()
        .launchTemplateSpec(LaunchTemplateSpec.builder()
                .id(lt.getRef())
                .version(lt.getAttrLatestVersionNumber())
                .build())
        .build());
```
You may specify one `instanceType` in the launch template or multiple `instanceTypes` in the node group, but not both.

For more details visit Launch Template Support.

Graviton 2 instance types are supported, including `c6g`, `m6g`, `r6g` and `t4g`.

Graviton 3 instance types are supported, including `c7g`.
Update clusters
When you rename the cluster and redeploy the stack, a cluster replacement is triggered: the new cluster is provisioned first, and the existing one is deleted afterwards. Because the cluster resource ARN changes, the cluster resource handler cannot delete the old cluster, since the resource ARN in its IAM policy no longer matches. As a workaround, you need to add a temporary policy to the cluster admin role for the replacement to succeed. Consider this example if you are renaming the cluster from `foo` to `bar`:

```java
Cluster cluster = Cluster.Builder.create(this, "cluster-to-rename")
        .clusterName("foo") // rename this to 'bar'
        .version(KubernetesVersion.V1_31)
        .build();

// allow the cluster admin role to delete the cluster 'foo'
cluster.getAdminRole().addToPolicy(PolicyStatement.Builder.create()
        .actions(List.of("eks:DeleteCluster", "eks:DescribeCluster"))
        .resources(List.of(Stack.of(this).formatArn(ArnComponents.builder()
                .service("eks")
                .resource("cluster")
                .resourceName("foo")
                .build())))
        .build());
```
Fargate profiles
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate Profiles, which are defined as part of your Amazon EKS cluster.
See Fargate Considerations in the AWS EKS User Guide.
You can add Fargate Profiles to any EKS cluster defined in your CDK app through the `addFargateProfile()` method. The following example adds a profile that will match all pods from the "default" namespace:

```java
Cluster cluster;

cluster.addFargateProfile("MyProfile", FargateProfileOptions.builder()
        .selectors(List.of(Selector.builder().namespace("default").build()))
        .build());
```

You can also directly use the `FargateProfile` construct to create profiles under different scopes:

```java
Cluster cluster;

FargateProfile.Builder.create(this, "MyProfile")
        .cluster(cluster)
        .selectors(List.of(Selector.builder().namespace("default").build()))
        .build();
```
To create an EKS cluster that only uses Fargate capacity, you can use `FargateCluster`.

The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to run CoreDNS on Fargate.

```java
FargateCluster cluster = FargateCluster.Builder.create(this, "MyCluster")
        .version(KubernetesVersion.V1_31)
        .build();
```

`FargateCluster` will create a default `FargateProfile` which can be accessed via the cluster's `defaultProfile` property. The created profile can also be customized by passing options, as with `addFargateProfile`.
NOTE: Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).
Self-managed nodes
Another way of allocating capacity to an EKS cluster is by using self-managed nodes. EC2 instances that are part of an auto-scaling group serve as worker nodes for the cluster. This type of capacity is also commonly referred to as *EC2 Capacity* or *EC2 Nodes*.
For a detailed overview please visit Self Managed Nodes.
Creating an auto-scaling group and connecting it to the cluster is done using the `cluster.addAutoScalingGroupCapacity` method:

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("frontend-nodes", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("t2.medium"))
        .minCapacity(3)
        .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
        .build());
```

To connect an already initialized auto-scaling group, use the `cluster.connectAutoScalingGroupCapacity()` method:

```java
Cluster cluster;
AutoScalingGroup asg;

cluster.connectAutoScalingGroupCapacity(asg, AutoScalingGroupOptions.builder().build());
```

To connect a self-managed node group to an imported cluster, use the same `cluster.connectAutoScalingGroupCapacity()` method:

```java
Cluster cluster;
AutoScalingGroup asg;

ICluster importedCluster = Cluster.fromClusterAttributes(this, "ImportedCluster", ClusterAttributes.builder()
        .clusterName(cluster.getClusterName())
        .clusterSecurityGroupId(cluster.getClusterSecurityGroupId())
        .build());

importedCluster.connectAutoScalingGroupCapacity(asg, AutoScalingGroupOptions.builder().build());
```
In both cases, the cluster security group will be automatically attached to the auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.
Note: The default `updateType` for auto-scaling groups does not replace existing nodes. Since security groups are determined at launch time, self-managed nodes that were provisioned with version `1.78.0` or lower will not be updated. To apply the new configuration to all your self-managed nodes, you'll need to replace the nodes using the `UpdateType.REPLACING_UPDATE` policy for the `updateType` property.
You can customize the `/etc/eks/bootstrap.sh` script, which is responsible for bootstrapping the node to the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("spot", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("t3.large"))
        .minCapacity(2)
        .bootstrapOptions(BootstrapOptions.builder()
                .kubeletExtraArgs("--node-labels foo=bar,goo=far")
                .awsApiRetryAttempts(5)
                .build())
        .build());
```
To disable bootstrapping altogether (i.e. to fully customize user data), set `bootstrapEnabled` to `false`.
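For illustration, a minimal sketch of disabling bootstrapping on a self-managed group; the construct id and instance type here are arbitrary:

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("custom-bootstrap", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("t3.large"))
        .minCapacity(2)
        // skip the built-in /etc/eks/bootstrap.sh invocation; you are then
        // responsible for supplying user data that joins the nodes to the cluster
        .bootstrapEnabled(false)
        .build());
```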
You can also configure the cluster to use an auto-scaling group as the default capacity:
```java
Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .defaultCapacityType(DefaultCapacityType.EC2)
        .build();
```
This will allocate an auto-scaling group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money).
To access the `AutoScalingGroup` that was created on your behalf, you can use `cluster.defaultCapacity`.

You can also independently create an `AutoScalingGroup` and connect it to the cluster using the `cluster.connectAutoScalingGroupCapacity` method:

```java
Cluster cluster;
AutoScalingGroup asg;

cluster.connectAutoScalingGroupCapacity(asg, AutoScalingGroupOptions.builder().build());
```
This will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.
Spot Instances
When using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost.
To enable spot capacity, use the `spotPrice` property:

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("spot", AutoScalingGroupCapacityOptions.builder()
        .spotPrice("0.1094")
        .instanceType(new InstanceType("t3.large"))
        .maxCapacity(10)
        .build());
```

Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
The AWS Node Termination Handler DaemonSet will be installed from the Amazon EKS Helm chart repository on these nodes. The termination handler ensures that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable, such as EC2 maintenance events and EC2 Spot interruptions, and helps gracefully stop all pods running on spot nodes that are about to be terminated.

Handler Version: 1.7.0

Chart Version: 0.9.5
To disable the installation of the termination handler, set the `spotInterruptHandler` property to `false`. This applies both to `addAutoScalingGroupCapacity` and `connectAutoScalingGroupCapacity`.
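For illustration, a minimal sketch of opting out of the handler on a spot group; the construct id and prices here are arbitrary:

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("spot-no-handler", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("t3.large"))
        .spotPrice("0.1094")
        .maxCapacity(10)
        // do not install the AWS Node Termination Handler chart on these nodes
        .spotInterruptHandler(false)
        .build());
```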
Bottlerocket
Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.
`Bottlerocket` is supported when using managed nodegroups or self-managed auto-scaling groups.

To create a Bottlerocket managed nodegroup:

```java
Cluster cluster;

cluster.addNodegroupCapacity("BottlerocketNG", NodegroupOptions.builder()
        .amiType(NodegroupAmiType.BOTTLEROCKET_X86_64)
        .build());
```

The following example will create an auto-scaling group of 2 `t3.small` Linux instances running with the `Bottlerocket` AMI.

```java
Cluster cluster;

cluster.addAutoScalingGroupCapacity("BottlerocketNodes", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("t3.small"))
        .minCapacity(2)
        .machineImageType(MachineImageType.BOTTLEROCKET)
        .build());
```
The specific Bottlerocket AMI variant will be auto-selected according to the Kubernetes version for the `x86_64` architecture. For example, if the Amazon EKS cluster version is `1.17`, the Bottlerocket AMI variant will be auto-selected as `aws-k8s-1.17` behind the scenes.

See Variants for more details.

Please note that Bottlerocket does not allow customizing bootstrap options, and the `bootstrapOptions` property is not supported when you create `Bottlerocket` capacity.
To create a Bottlerocket managed nodegroup with Nvidia-based EC2 instance types, use the `BOTTLEROCKET_X86_64_NVIDIA` or `BOTTLEROCKET_ARM_64_NVIDIA` AMIs:

```java
Cluster cluster;

cluster.addNodegroupCapacity("BottlerocketNvidiaNG", NodegroupOptions.builder()
        .amiType(NodegroupAmiType.BOTTLEROCKET_X86_64_NVIDIA)
        .instanceTypes(List.of(new InstanceType("g4dn.xlarge")))
        .build());
```
For more details about Bottlerocket, see Bottlerocket FAQs and Bottlerocket Open Source Blog.
Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`).

By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

You can configure the cluster endpoint access by using the `endpointAccess` property:

```java
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .endpointAccess(EndpointAccess.PRIVATE)
        .build();
```

The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and `kubectl` commands issued by this library stay within your VPC.
Alb Controller
Some Kubernetes resources are commonly implemented on AWS with the help of the ALB Controller.
From the docs:
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
- It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
- It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
To deploy the controller on your EKS cluster, configure the `albController` property:

```java
Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .albController(AlbControllerOptions.builder()
                .version(AlbControllerVersion.V2_8_2)
                .build())
        .build();
```

The `albController` requires `defaultCapacity` or at least one nodegroup. If the cluster has no `defaultCapacity` and no available nodegroup, the `albController` deployment will fail.
Querying the controller pods should look something like this:
```
❯ kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-76bd6c7586-d929p   1/1     Running   0          109m
aws-load-balancer-controller-76bd6c7586-fqxph   1/1     Running   0          109m
...
```
Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller. If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources. Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.
For example:
```java
Cluster cluster;

KubernetesManifest manifest = cluster.addManifest("manifest", Map.of());
if (cluster.getAlbController() != null) {
    manifest.getNode().addDependency(cluster.getAlbController());
}
```
VPC Support
You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properties:

```java
Vpc vpc;

Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_31)
        .vpc(vpc)
        .vpcSubnets(List.of(SubnetSelection.builder().subnetType(SubnetType.PRIVATE_WITH_EGRESS).build()))
        .build();
```
Note: Isolated VPCs (i.e. with no internet access) are not fully supported. See https://github.com/aws/aws-cdk/issues/12171. Check out this aws-cdk-example for reference.
If you do not specify a VPC, one will be created on your behalf, which you can then access via `cluster.vpc`. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).

Please note that the `vpcSubnets` property defines the subnets where EKS will place the control plane ENIs. To choose the subnets where EKS will place the worker nodes, please refer to the Provisioning clusters section above.

If you allocate self-managed capacity, you can specify which subnets the auto-scaling group should use:

```java
Vpc vpc;
Cluster cluster;

cluster.addAutoScalingGroupCapacity("nodes", AutoScalingGroupCapacityOptions.builder()
        .vpcSubnets(SubnetSelection.builder().subnets(vpc.getPrivateSubnets()).build())
        .instanceType(new InstanceType("t2.medium"))
        .build());
```
There are two additional components you might want to provision within the VPC.
Kubectl Handler
The `KubectlHandler` is a Lambda function responsible for issuing `kubectl` and `helm` commands against the cluster when you add resource manifests to the cluster.
The handler's association with the VPC is derived from the `endpointAccess` configuration. The rule of thumb is: if the cluster VPC can be associated, it will be.

Breaking this down, it means that if the endpoint exposes private access (via `EndpointAccess.PRIVATE` or `EndpointAccess.PUBLIC_AND_PRIVATE`), and the VPC contains private subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.

If the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) or the VPC does not contain private subnets, the function will not be provisioned within the VPC.
If your use-case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the `ClusterProps` (as `kubectlLambdaRole`) of the EKS Cluster construct.
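For illustration, a minimal sketch of supplying such a role; the role name is arbitrary, and the role's trust policy must allow the Lambda service to assume it:

```java
// custom role for the kubectl provider function (hypothetical id "KubectlLambdaRole")
Role kubectlLambdaRole = Role.Builder.create(this, "KubectlLambdaRole")
        .assumedBy(new ServicePrincipal("lambda.amazonaws.com"))
        .build();

Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .kubectlLambdaRole(kubectlLambdaRole)
        .build();
```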
Cluster Handler
The `ClusterHandler` is a set of Lambda functions (`onEventHandler`, `isCompleteHandler`) responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision these functions inside the VPC, set the `placeClusterHandlerInVpc` property to `true`. This will place the functions inside the private subnets of the VPC based on the selection strategy specified in the `vpcSubnets` property.
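For illustration, a minimal sketch of placing the handlers in the VPC; the subnet selection shown is an assumption and should match your topology:

```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .vpc(vpc)
        // provision the onEvent/isComplete handlers inside the VPC's private subnets
        .placeClusterHandlerInVpc(true)
        .vpcSubnets(List.of(SubnetSelection.builder().subnetType(SubnetType.PRIVATE_WITH_EGRESS).build()))
        .build();
```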
You can configure the environment of the Cluster Handler functions by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```java
SecurityGroup proxyInstanceSecurityGroup;

Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .clusterHandlerEnvironment(Map.of(
                "https_proxy", "http://proxy.myproxy.com"))
        /**
         * If the proxy is not open publicly, you can pass a security group to the
         * Cluster Handler Lambdas so that it can reach the proxy.
         */
        .clusterHandlerSecurityGroup(proxyInstanceSecurityGroup)
        .build();
```
IPv6 Support
You can optionally choose to configure your cluster to use IPv6 using the `ipFamily` definition for your cluster. Note that this requires the underlying subnets to have an associated IPv6 CIDR.

```java
Vpc vpc;

public void associateSubnetWithV6Cidr(Vpc vpc, int count, ISubnet subnet) {
    CfnSubnet cfnSubnet = (CfnSubnet) subnet.getNode().getDefaultChild();
    cfnSubnet.setIpv6CidrBlock(Fn.select(count, Fn.cidr(
            Fn.select(0, vpc.getVpcIpv6CidrBlocks()), 256, String.valueOf(128 - 64))));
    cfnSubnet.setAssignIpv6AddressOnCreation(true);
}

// make an ipv6 cidr
CfnVPCCidrBlock ipv6cidr = CfnVPCCidrBlock.Builder.create(this, "CIDR6")
        .vpcId(vpc.getVpcId())
        .amazonProvidedIpv6CidrBlock(true)
        .build();

// connect the ipv6 cidr to all vpc subnets
int subnetcount = 0;
List<ISubnet> subnets = new ArrayList<>(vpc.getPublicSubnets());
subnets.addAll(vpc.getPrivateSubnets());
for (ISubnet subnet : subnets) {
    // Wait for the ipv6 cidr to complete
    subnet.getNode().addDependency(ipv6cidr);
    associateSubnetWithV6Cidr(vpc, subnetcount, subnet);
    subnetcount = subnetcount + 1;
}

Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .vpc(vpc)
        .ipFamily(IpFamily.IP_V6)
        .vpcSubnets(List.of(SubnetSelection.builder().subnets(vpc.getPublicSubnets()).build()))
        .build();
```
Kubectl Support
The resources are created in the cluster by running `kubectl apply` from a Python Lambda function.

By default, the CDK will create a new Python Lambda function to apply your k8s manifests. If you want to use an existing kubectl provider function, for example with tightly scoped trusted entities on your IAM roles, you can import the existing provider and then use the imported provider when importing the cluster:

```java
IRole handlerRole = Role.fromRoleArn(this, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role");

// get the serviceToken from the custom resource provider
String functionArn = Function.fromFunctionName(this, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").getFunctionArn();
IKubectlProvider kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(this, "KubectlProvider", KubectlProviderAttributes.builder()
        .functionArn(functionArn)
        .kubectlRoleArn("arn:aws:iam::123456789012:role/kubectl-role")
        .handlerRole(handlerRole)
        .build());

ICluster cluster = Cluster.fromClusterAttributes(this, "Cluster", ClusterAttributes.builder()
        .clusterName("cluster")
        .kubectlProvider(kubectlProvider)
        .build());
```
Environment
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```java
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .kubectlEnvironment(Map.of(
                "http_proxy", "http://proxy.myproxy.com"))
        .build();
```
Runtime
The kubectl handler uses `kubectl`, `helm` and the `aws` CLI in order to interact with the cluster. These are bundled into AWS Lambda layers included in the `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` modules.

The version of kubectl used must be compatible with the Kubernetes version of the cluster. kubectl is supported within one minor version (older or newer) of Kubernetes (see the Kubernetes version skew policy).

Depending on which version of Kubernetes you're targeting, you will need to use one of the `@aws-cdk/lambda-layer-kubectl-vXY` packages.
```java
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer;

Cluster cluster = Cluster.Builder.create(this, "hello-eks")
        .version(KubernetesVersion.V1_31)
        .kubectlLayer(new KubectlV31Layer(this, "kubectl"))
        .build();
```
You can also specify a custom `lambda.LayerVersion` if you wish to use a different version of these tools, or a version not available in any of the `@aws-cdk/lambda-layer-kubectl-vXY` packages. The handler expects the layer to include the following two executables:

```
helm/helm
kubectl/kubectl
```
See more information in the Dockerfile for @aws-cdk/lambda-layer-awscli and the Dockerfile for @aws-cdk/lambda-layer-kubectl.
```java
LayerVersion layer = LayerVersion.Builder.create(this, "KubectlLayer")
        .code(Code.fromAsset("layer.zip"))
        .build();
```
Now specify when the cluster is defined:
```java
LayerVersion layer;
Vpc vpc;

Cluster cluster1 = Cluster.Builder.create(this, "MyCluster")
        .kubectlLayer(layer)
        .vpc(vpc)
        .clusterName("cluster-name")
        .version(KubernetesVersion.V1_31)
        .build();

// or
ICluster cluster2 = Cluster.fromClusterAttributes(this, "MyCluster", ClusterAttributes.builder()
        .kubectlLayer(layer)
        .vpc(vpc)
        .clusterName("cluster-name")
        .build());
```
Memory
By default, the kubectl provider is configured with 1024MiB of memory. You can use the `kubectlMemory` option to specify the memory size for the AWS Lambda function:

```java
Vpc vpc;

Cluster.Builder.create(this, "MyCluster")
        .kubectlMemory(Size.gibibytes(4))
        .version(KubernetesVersion.V1_31)
        .build();

// or
Cluster.fromClusterAttributes(this, "MyCluster", ClusterAttributes.builder()
        .kubectlMemory(Size.gibibytes(4))
        .vpc(vpc)
        .clusterName("cluster-name")
        .build());
```
ARM64 Support
Instance types with ARM64
architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 instanceType
(such as m6g.medium
), and the latest
Amazon Linux 2 AMI for ARM64 will be automatically selected.
Cluster cluster; // add a managed ARM64 nodegroup cluster.addNodegroupCapacity("extra-ng-arm", NodegroupOptions.builder() .instanceTypes(List.of(new InstanceType("m6g.medium"))) .minSize(2) .build()); // add a self-managed ARM64 nodegroup cluster.addAutoScalingGroupCapacity("self-ng-arm", AutoScalingGroupCapacityOptions.builder() .instanceType(new InstanceType("m6g.medium")) .minCapacity(2) .build());
Masters Role
When you create a cluster, you can specify a mastersRole
. The Cluster
construct will associate this role with the system:masters
RBAC group, giving it super-user access to the cluster.
Role role; Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_31) .mastersRole(role) .build();
In order to interact with your cluster through kubectl
, you can use the aws eks update-kubeconfig
AWS CLI command
to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains the command to run. For example:
Outputs: ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Execute the aws eks update-kubeconfig ...
command in your terminal to create or update a local kubeconfig context:
$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
And now you can simply use kubectl
:
$ kubectl get all -n kube-system NAME READY STATUS RESTARTS AGE pod/aws-node-fpmwv 1/1 Running 0 21m pod/aws-node-m9htf 1/1 Running 0 21m pod/coredns-5cb4fb54c7-q222j 1/1 Running 0 23m pod/coredns-5cb4fb54c7-v9nxx 1/1 Running 0 23m ...
If you do not specify a mastersRole, you won't have access to the cluster from outside of the CDK application.
Note that cluster.addManifest and new KubernetesManifest will still work.
Encryption
When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled. The documentation on creating a cluster can provide more details about the customer master key (CMK) that can be used for the encryption.
You can use the secretsEncryptionKey
to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.
This setting can only be specified when the cluster is created and cannot be updated.
Key secretsKey = new Key(this, "SecretsKey"); Cluster cluster = Cluster.Builder.create(this, "MyCluster") .secretsEncryptionKey(secretsKey) .version(KubernetesVersion.V1_31) .build();
You can also use a similar configuration for running a cluster built using the FargateCluster construct.
Key secretsKey = new Key(this, "SecretsKey"); FargateCluster cluster = FargateCluster.Builder.create(this, "MyFargateCluster") .secretsEncryptionKey(secretsKey) .version(KubernetesVersion.V1_31) .build();
The Amazon Resource Name (ARN) for that CMK can be retrieved.
Cluster cluster; String clusterEncryptionConfigKeyArn = cluster.getClusterEncryptionConfigKeyArn();
Permissions and Security
Amazon EKS provides several mechanisms for securing the cluster and granting permissions to specific IAM users and roles.
AWS IAM Mapping
As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).
The Amazon EKS construct manages the aws-auth ConfigMap
Kubernetes resource on your behalf and exposes an API through the cluster.awsAuth
for mapping
users, roles and accounts.
Furthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.
For example, let's say you want to grant an IAM user administrative privileges on your cluster:
Cluster cluster; User adminUser = new User(this, "Admin"); cluster.awsAuth.addUserMapping(adminUser, AwsAuthMapping.builder().groups(List.of("system:masters")).build());
A convenience method for mapping a role to the system:masters
group is also available:
Cluster cluster; Role role; cluster.awsAuth.addMastersRole(role);
To access the Kubernetes resources from the console, make sure your viewing principal is defined
in the aws-auth
ConfigMap. Some options to consider:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer; Cluster cluster; Role your_current_role; Vpc vpc; // Option 1: Add your current assumed IAM role to system:masters. Make sure to add relevant policies. cluster.awsAuth.addMastersRole(your_current_role); your_current_role.addToPolicy(PolicyStatement.Builder.create() .actions(List.of("eks:AccessKubernetesApi", "eks:Describe*", "eks:List*")) .resources(List.of(cluster.getClusterArn())) .build());
// Option 2: create your custom mastersRole with scoped assumeBy arn as the Cluster prop. Switch to this role from the AWS console. import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer; Vpc vpc; Role mastersRole = Role.Builder.create(this, "MastersRole") .assumedBy(new ArnPrincipal("arn_for_trusted_principal")) .build(); Cluster cluster = Cluster.Builder.create(this, "EksCluster") .vpc(vpc) .version(KubernetesVersion.V1_31) .kubectlLayer(new KubectlV31Layer(this, "KubectlLayer")) .mastersRole(mastersRole) .build(); mastersRole.addToPolicy(PolicyStatement.Builder.create() .actions(List.of("eks:AccessKubernetesApi", "eks:Describe*", "eks:List*")) .resources(List.of(cluster.getClusterArn())) .build());
// Option 3: Create a new role that allows the account root principal to assume it. Add this role to the `system:masters` group and switch to this role from the AWS console. Cluster cluster; Role consoleReadOnlyRole = Role.Builder.create(this, "ConsoleReadOnlyRole") .assumedBy(new ArnPrincipal("arn_for_trusted_principal")) .build(); consoleReadOnlyRole.addToPolicy(PolicyStatement.Builder.create() .actions(List.of("eks:AccessKubernetesApi", "eks:Describe*", "eks:List*")) .resources(List.of(cluster.getClusterArn())) .build()); // Add this role to system:masters RBAC group cluster.awsAuth.addMastersRole(consoleReadOnlyRole);
Access Config
Amazon EKS supports three modes of authentication: CONFIG_MAP, API_AND_CONFIG_MAP, and API.
Set authenticationMode to API or API_AND_CONFIG_MAP to let the cluster use the access entry APIs, or to CONFIG_MAP to continue using the aws-auth ConfigMap exclusively. When API_AND_CONFIG_MAP is enabled, the cluster will source authenticated AWS IAM principals from both the Amazon EKS access entry APIs and the aws-auth ConfigMap, with priority given to the access entry APIs.
To specify the authenticationMode
:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer; Vpc vpc; Cluster.Builder.create(this, "Cluster") .vpc(vpc) .version(KubernetesVersion.V1_31) .kubectlLayer(new KubectlV31Layer(this, "KubectlLayer")) .authenticationMode(AuthenticationMode.API_AND_CONFIG_MAP) .build();
Note - Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP, and then from API_AND_CONFIG_MAP to API. You cannot revert these operations: you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API, and you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.
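The one-way switching rule can be modeled with a small sketch (plain Java, not a CDK API; the enum mirrors the mode names, and the only legal moves are single steps forward along the update path):

```java
// Plain-Java sketch (NOT a CDK API) of the one-way authentication mode
// transitions: CONFIG_MAP -> API_AND_CONFIG_MAP -> API, never backwards.
public class AuthModes {
    enum AuthenticationMode { CONFIG_MAP, API_AND_CONFIG_MAP, API }

    // Only a single step "forward" along the update path is allowed.
    static boolean canSwitch(AuthenticationMode from, AuthenticationMode to) {
        return to.ordinal() == from.ordinal() + 1;
    }
}
```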
Read A deep dive into simplified Amazon EKS access management controls for more details.
You can disable granting cluster admin permissions to the cluster creator role at bootstrap by setting
bootstrapClusterCreatorAdminPermissions
to false.
Note - Switching
bootstrapClusterCreatorAdminPermissions
on an existing cluster causes cluster replacement and should be avoided in production.
Access Entry
An access entry is a cluster identity that is directly linked to an AWS IAM principal (user or role) used to authenticate to an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.
Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. Amazon EKS supports only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS. Amazon EKS access policies include permission sets that support common use cases of administration, editing, or read-only access to Kubernetes resources. See Access Policy Permissions for more details.
Use AccessPolicy
to include predefined AWS managed policies:
// AmazonEKSClusterAdminPolicy with `cluster` scope AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder() .accessScopeType(AccessScopeType.CLUSTER) .build()); // AmazonEKSAdminPolicy with `namespace` scope AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder() .accessScopeType(AccessScopeType.NAMESPACE) .namespaces(List.of("foo", "bar")) .build());
Use grantAccess()
to grant the AccessPolicy to an IAM principal:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v31.KubectlV31Layer; Vpc vpc; Role clusterAdminRole = Role.Builder.create(this, "ClusterAdminRole") .assumedBy(new ArnPrincipal("arn_for_trusted_principal")) .build(); Role eksAdminRole = Role.Builder.create(this, "EKSAdminRole") .assumedBy(new ArnPrincipal("arn_for_trusted_principal")) .build(); Role eksAdminViewRole = Role.Builder.create(this, "EKSAdminViewRole") .assumedBy(new ArnPrincipal("arn_for_trusted_principal")) .build(); Cluster cluster = Cluster.Builder.create(this, "Cluster") .vpc(vpc) .mastersRole(clusterAdminRole) .version(KubernetesVersion.V1_31) .kubectlLayer(new KubectlV31Layer(this, "KubectlLayer")) .authenticationMode(AuthenticationMode.API_AND_CONFIG_MAP) .build(); // Cluster Admin role for this cluster cluster.grantAccess("clusterAdminAccess", clusterAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder() .accessScopeType(AccessScopeType.CLUSTER) .build()))); // EKS Admin role for specified namespaces of this cluster cluster.grantAccess("eksAdminRoleAccess", eksAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder() .accessScopeType(AccessScopeType.NAMESPACE) .namespaces(List.of("foo", "bar")) .build()))); // EKS Admin Viewer role for specified namespaces of this cluster cluster.grantAccess("eksAdminViewRoleAccess", eksAdminViewRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSAdminViewPolicy", AccessPolicyNameOptions.builder() .accessScopeType(AccessScopeType.NAMESPACE) .namespaces(List.of("foo", "bar")) .build())));
Migrating from ConfigMap to Access Entry
If the cluster is created with the authenticationMode property left undefined, it will default to CONFIG_MAP.
The update path is:
undefined (CONFIG_MAP) -> API_AND_CONFIG_MAP -> API
If you have explicitly declared AwsAuth resources and then try to switch to the API mode, which no longer supports the ConfigMap, the AWS CDK will throw an error as a protective measure to prevent you from losing all the access entries in the ConfigMap. In this case, you will need to explicitly remove all the declared AwsAuth resources and define the access entries before you are allowed to transition to the API mode.
Note - This is a one-way transition. Once you switch to the API mode, you will not be able to switch back. Therefore, it is crucial to ensure that you have defined all the necessary access entries before making the switch to the API mode.
Cluster Security Group
When you create an Amazon EKS cluster, a cluster security group is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely between each other.
The ID for that security group can be retrieved after creating the cluster.
Cluster cluster; String clusterSecurityGroupId = cluster.getClusterSecurityGroupId();
Node SSH Access
If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass it when you add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):
See SSH into nodes for a code example.
If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.
Service Accounts
With service accounts you can provide Kubernetes Pods with access to AWS resources.
Cluster cluster; // add service account ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount"); Bucket bucket = new Bucket(this, "Bucket"); bucket.grantReadWrite(serviceAccount); KubernetesManifest mypod = cluster.addManifest("mypod", Map.of( "apiVersion", "v1", "kind", "Pod", "metadata", Map.of("name", "mypod"), "spec", Map.of( "serviceAccountName", serviceAccount.getServiceAccountName(), "containers", List.of(Map.of( "name", "hello", "image", "paulbouwer/hello-kubernetes:1.5", "ports", List.of(Map.of("containerPort", 8080))))))); // create the resource after the service account. mypod.node.addDependency(serviceAccount); // print the IAM role arn for this service account CfnOutput.Builder.create(this, "ServiceAccountIamRole").value(serviceAccount.getRole().getRoleArn()).build();
Note that using serviceAccount.serviceAccountName
above does not translate into a resource dependency.
This is why an explicit dependency is needed. See https://github.com/aws/aws-cdk/issues/9910 for more details.
It is possible to pass annotations and labels to the service account.
Cluster cluster; // add service account with annotations and labels ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount", ServiceAccountOptions.builder() .annotations(Map.of( "eks.amazonaws.com/sts-regional-endpoints", "false")) .labels(Map.of( "some-label", "with-some-value")) .build());
You can also add service accounts to existing clusters.
To do so, pass the openIdConnectProvider
property when you import the cluster into the application.
String issuerUrl; // you can import an existing provider IOpenIdConnectProvider provider = OpenIdConnectProvider.fromOpenIdConnectProviderArn(this, "Provider", "arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC"); // or create a new one using an existing issuer url OpenIdConnectProvider provider2 = OpenIdConnectProvider.Builder.create(this, "Provider2") .url(issuerUrl) .build(); ICluster cluster = Cluster.fromClusterAttributes(this, "MyCluster", ClusterAttributes.builder() .clusterName("Cluster") .openIdConnectProvider(provider) .kubectlRoleArn("arn:aws:iam::123456:role/service-role/k8sservicerole") .build()); ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount"); Bucket bucket = new Bucket(this, "Bucket"); bucket.grantReadWrite(serviceAccount);
Note that adding service accounts requires running kubectl
commands against the cluster.
This means you must also pass the kubectlRoleArn
when importing the cluster.
See Using existing Clusters.
Pod Identities
Amazon EKS Pod Identities is a feature that simplifies how Kubernetes applications running on Amazon EKS can obtain AWS IAM credentials. It provides a way to associate an IAM role with a Kubernetes service account, allowing pods to retrieve temporary AWS credentials without the need to manage IAM roles and policies directly.
By default, ServiceAccount creates an OpenIdConnectProvider for IRSA (IAM Roles for Service Accounts) if identityType is undefined or IdentityType.IRSA.
You may opt in to Amazon EKS Pod Identities as below:
Cluster cluster; ServiceAccount.Builder.create(this, "ServiceAccount") .cluster(cluster) .name("test-sa") .namespace("default") .identityType(IdentityType.POD_IDENTITY) .build();
When you create the ServiceAccount with the identityType set to POD_IDENTITY, the ServiceAccount construct will perform the following actions behind the scenes:
- It will create an IAM role with the necessary trust policy to allow the "pods.eks.amazonaws.com" principal to assume the role. This trust policy grants the EKS service the permission to retrieve temporary AWS credentials on behalf of the pods using this service account.
- It will enable the "Amazon EKS Pod Identity Agent" add-on on the EKS cluster. This add-on is responsible for managing the temporary AWS credentials and making them available to the pods.
- It will create an association between the IAM role and the Kubernetes service account. This association allows the pods using this service account to obtain the temporary AWS credentials from the associated IAM role.
This simplifies the process of configuring IAM permissions for your Kubernetes applications running on Amazon EKS. It handles the creation of the IAM role, the installation of the Pod Identity Agent add-on, and the association between the role and the service account, making it easier to manage AWS credentials for your applications.
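For reference, the trust policy created in the first step looks roughly like the following. This is a sketch based on the Pod Identity model described above, not output generated by the construct:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```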
Applying Kubernetes Resources
The library supports several popular resource deployment mechanisms, among which are:
Kubernetes Manifests
The KubernetesManifest
construct or cluster.addManifest
method can be used
to apply Kubernetes resource manifests to this cluster.
When using
cluster.addManifest
, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly usenew KubernetesManifest
to create the manifest in the scope of the other stack.
The following example deploys the paulbouwer/hello-kubernetes service on the cluster:
Cluster cluster; Map<String, String> appLabel = Map.of("app", "hello-kubernetes"); Map<String, Object> deployment = Map.of( "apiVersion", "apps/v1", "kind", "Deployment", "metadata", Map.of("name", "hello-kubernetes"), "spec", Map.of( "replicas", 3, "selector", Map.of("matchLabels", appLabel), "template", Map.of( "metadata", Map.of("labels", appLabel), "spec", Map.of( "containers", List.of(Map.of( "name", "hello-kubernetes", "image", "paulbouwer/hello-kubernetes:1.5", "ports", List.of(Map.of("containerPort", 8080)))))))); Map<String, Object> service = Map.of( "apiVersion", "v1", "kind", "Service", "metadata", Map.of("name", "hello-kubernetes"), "spec", Map.of( "type", "LoadBalancer", "ports", List.of(Map.of("port", 80, "targetPort", 8080)), "selector", appLabel)); // option 1: use a construct KubernetesManifest.Builder.create(this, "hello-kub") .cluster(cluster) .manifest(List.of(deployment, service)) .build(); // or, option 2: use `addManifest` cluster.addManifest("hello-kub", service, deployment);
ALB Controller Integration
The KubernetesManifest
construct can detect ingress resources inside your manifest and automatically add the necessary annotations
so they are picked up by the ALB Controller.
See Alb Controller
To that end, it offers the following properties:
ingressAlb - Signal that the ingress detection should be done.
ingressAlbScheme - Which ALB scheme should be applied. Defaults to internal.
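For context, the AWS Load Balancer Controller discovers ingress resources through annotations along these lines. The fragment below is for illustration only; the exact set the construct injects is an implementation detail:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
```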
Adding resources from a URL
The following example deploys a resource manifest hosted on a remote server:
// This example is only available in TypeScript import * as yaml from 'js-yaml'; import * as request from 'sync-request'; declare const cluster: eks.Cluster; const manifestUrl = 'https://url/of/manifest.yaml'; const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody()); cluster.addManifest('my-resource', manifest);
Dependencies
There are cases where Kubernetes resources must be deployed in a specific order. For example, you cannot define a resource in a Kubernetes namespace before the namespace was created.
You can represent dependencies between KubernetesManifest
s using
resource.node.addDependency()
:
Cluster cluster; KubernetesManifest namespace = cluster.addManifest("my-namespace", Map.of( "apiVersion", "v1", "kind", "Namespace", "metadata", Map.of("name", "my-app"))); KubernetesManifest service = cluster.addManifest("my-service", Map.of( "metadata", Map.of( "name", "myservice", "namespace", "my-app"), "spec", Map.of())); service.node.addDependency(namespace);
NOTE: when a KubernetesManifest
includes multiple resources (either directly
or through cluster.addManifest()
) (e.g. cluster.addManifest('foo', r1, r2, r3,...)
), these resources will be applied as a single manifest via kubectl
and will be applied sequentially (the standard behavior in kubectl
).
Kubernetes manifests are implemented as CloudFormation resources in the
CDK. This means that if the manifest is deleted from your code (or the stack is
deleted), the next cdk deploy
will issue a kubectl delete
command and the
Kubernetes resources in that manifest will be deleted.
Resource Pruning
When a resource is deleted from a Kubernetes manifest, the EKS module will
automatically delete it by injecting a prune label into all
manifest resources. This label is then passed to kubectl apply --prune
.
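Conceptually, the label injection works like the sketch below (pure Java, illustrative only; the label key used here is made up, since the key and value the EKS module generates are internal details):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the pruning mechanism: a label is injected into every
// resource's metadata so that `kubectl apply --prune -l <label>` can later
// delete resources that disappear from the manifest. The label key below is
// hypothetical; the EKS module generates its own.
public class PruneLabel {
    @SuppressWarnings("unchecked")
    static Map<String, Object> withPruneLabel(Map<String, Object> resource, String pruneId) {
        Map<String, Object> copy = new HashMap<>(resource);
        Map<String, Object> metadata = new HashMap<>(
            (Map<String, Object>) copy.getOrDefault("metadata", new HashMap<String, Object>()));
        Map<String, Object> labels = new HashMap<>(
            (Map<String, Object>) metadata.getOrDefault("labels", new HashMap<String, Object>()));
        labels.put("example.com/prune-id", pruneId); // hypothetical label key
        metadata.put("labels", labels);
        copy.put("metadata", metadata);
        return copy;
    }
}
```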
Pruning is enabled by default but can be disabled through the prune
option
when a cluster is defined:
Cluster.Builder.create(this, "MyCluster") .version(KubernetesVersion.V1_31) .prune(false) .build();
Manifests Validation
The kubectl
CLI supports applying a manifest by skipping the validation.
This can be accomplished by setting the skipValidation
flag to true
in the KubernetesManifest
props.
Cluster cluster; KubernetesManifest.Builder.create(this, "HelloAppWithoutValidation") .cluster(cluster) .manifest(List.of(Map.of("foo", "bar"))) .skipValidation(true) .build();
Helm Charts
The HelmChart
construct or cluster.addHelmChart
method can be used
to add Kubernetes resources to this cluster using Helm.
When using
cluster.addHelmChart
, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly usenew HelmChart
to create the chart in the scope of the other stack.
The following example will install the NGINX Ingress Controller to your cluster using Helm.
Cluster cluster; // option 1: use a construct HelmChart.Builder.create(this, "NginxIngress") .cluster(cluster) .chart("nginx-ingress") .repository("https://helm.nginx.com/stable") .namespace("kube-system") .build(); // or, option 2: use `addHelmChart` cluster.addHelmChart("NginxIngress", HelmChartOptions.builder() .chart("nginx-ingress") .repository("https://helm.nginx.com/stable") .namespace("kube-system") .build());
Helm charts will be installed and updated using helm upgrade --install
, where a few parameters
are being passed down (such as repo
, values
, version
, namespace
, wait
, timeout
, etc).
This means that if the chart is added to CDK with the same release name, it will try to update
the chart in the cluster.
Additionally, the chartAsset
property can be an aws-s3-assets.Asset
. This allows the use of local, private helm charts.
import software.amazon.awscdk.services.s3.assets.*; Cluster cluster; Asset chartAsset = Asset.Builder.create(this, "ChartAsset") .path("/path/to/asset") .build(); cluster.addHelmChart("test-chart", HelmChartOptions.builder() .chartAsset(chartAsset) .build());
Nested values passed to the values
parameter should be provided as a nested dictionary:
Cluster cluster; cluster.addHelmChart("ExternalSecretsOperator", HelmChartOptions.builder() .chart("external-secrets") .release("external-secrets") .repository("https://charts.external-secrets.io") .namespace("external-secrets") .values(Map.of( "installCRDs", true, "webhook", Map.of( "port", 9443))) .build());
Helm charts can come with Custom Resource Definitions (CRDs) that by default will be installed by Helm as well. In special cases you might need to skip the installation of the CRDs; for that, the skipCrds property can be used.
Cluster cluster; // option 1: use a construct HelmChart.Builder.create(this, "NginxIngress") .cluster(cluster) .chart("nginx-ingress") .repository("https://helm.nginx.com/stable") .namespace("kube-system") .skipCrds(true) .build();
OCI Charts
OCI charts are also supported; replace the ${VARS} with appropriate values.
Cluster cluster; // option 1: use a construct HelmChart.Builder.create(this, "MyOCIChart") .cluster(cluster) .chart("some-chart") .repository("oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}") .namespace("oci") .version("0.0.1") .build();
Helm charts are implemented as CloudFormation resources in CDK.
This means that if the chart is deleted from your code (or the stack is
deleted), the next cdk deploy
will issue a helm uninstall
command and the
Helm chart will be deleted.
When there is no release
defined, a unique ID will be allocated for the release based
on the construct path.
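One way to picture deriving a stable release name from a construct path is the sketch below (pure Java, illustrative only; the module's actual naming algorithm is an internal detail and may differ):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: derive a deterministic, Helm-legal release name from a
// construct path by sanitizing the path and appending a short hash of it.
// NOT the algorithm used by the EKS module.
public class ReleaseName {
    static String releaseNameFor(String constructPath) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(constructPath.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 4; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            // Helm release names must be lowercase and at most 53 characters.
            String base = constructPath.toLowerCase().replaceAll("[^a-z0-9]+", "-");
            if (base.length() > 44) {
                base = base.substring(base.length() - 44);
            }
            return base + "-" + hex;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```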
By default, all Helm charts will be installed concurrently. In some cases, this
could cause race conditions where two Helm charts attempt to deploy the same
resource or if Helm charts depend on each other. You can use
chart.node.addDependency()
in order to declare a dependency order between
charts:
Cluster cluster; HelmChart chart1 = cluster.addHelmChart("MyChart", HelmChartOptions.builder() .chart("foo") .build()); HelmChart chart2 = cluster.addHelmChart("MyOtherChart", HelmChartOptions.builder() .chart("bar") .build()); chart2.node.addDependency(chart1);
CDK8s Charts
CDK8s is an open-source library that enables Kubernetes manifest authoring using familiar programming languages. It is founded on the same technologies as the AWS CDK, such as constructs
and jsii
.
To learn more about cdk8s, visit the Getting Started tutorials.
The EKS module natively integrates with cdk8s and allows you to apply cdk8s charts on AWS EKS clusters via the cluster.addCdk8sChart
method.
In addition to cdk8s, you can also use cdk8s+, which provides higher level abstractions for the core Kubernetes API objects. You can think of it as the L2 constructs for Kubernetes. Any other cdk8s-based libraries are also supported, for example cdk8s-debore.
To get started, add the following dependencies to your package.json
file:
"dependencies": { "cdk8s": "^2.0.0", "cdk8s-plus-25": "^2.0.0", "constructs": "^10.0.0" }
Note that here we are using cdk8s-plus-25 because we are targeting Kubernetes version 1.25.0. If you run a different Kubernetes version, you should use the corresponding cdk8s-plus-XX library.
See Select the appropriate cdk8s+ library
for more details.
Similarly to how you would create a stack by extending aws-cdk-lib.Stack
, we recommend you create a chart of your own that extends cdk8s.Chart
,
and add your kubernetes resources to it. You can use aws-cdk
construct attributes and properties inside your cdk8s
construct freely.
In this example we create a chart that accepts an s3.Bucket
and passes its name to a kubernetes pod as an environment variable.
+ my-chart.ts
import software.amazon.awscdk.services.s3.*; import software.constructs.*; import org.cdk8s.*; import org.cdk8s.plus25.*; public class MyChartProps { private Bucket bucket; public Bucket getBucket() { return this.bucket; } public MyChartProps bucket(Bucket bucket) { this.bucket = bucket; return this; } } public class MyChart extends Chart { public MyChart(Construct scope, String id, MyChartProps props) { super(scope, id); Pod.Builder.create(this, "Pod") .containers(List.of(ContainerProps.builder() .image("my-image") .envVariables(Map.of( "BUCKET_NAME", EnvValue.fromValue(props.getBucket().getBucketName()))) .build())) .build(); } }
Then, in your AWS CDK app:
Cluster cluster; // some bucket.. Bucket bucket = new Bucket(this, "Bucket"); // create a cdk8s chart and use `cdk8s.App` as the scope. MyChart myChart = new MyChart(new App(), "MyChart", new MyChartProps().bucket(bucket)); // add the cdk8s chart to the cluster cluster.addCdk8sChart("my-chart", myChart);
Custom CDK8s Constructs
You can also compose a few stock cdk8s+
constructs into your own custom construct. However, since mixing scopes between aws-cdk
and cdk8s
is currently not supported, the Construct
class
you'll need to use is the one from the constructs
module, and not from aws-cdk-lib
like you normally would.
This is why we used new cdk8s.App()
as the scope of the chart above.
import software.constructs.*; import org.cdk8s.*; import org.cdk8s.plus25.*; public class LoadBalancedWebServiceProps { private Number port; public Number getPort() { return this.port; } public LoadBalancedWebServiceProps port(Number port) { this.port = port; return this; } private String image; public String getImage() { return this.image; } public LoadBalancedWebServiceProps image(String image) { this.image = image; return this; } private Number replicas; public Number getReplicas() { return this.replicas; } public LoadBalancedWebServiceProps replicas(Number replicas) { this.replicas = replicas; return this; } } public class LoadBalancedWebService extends Construct { public LoadBalancedWebService(Construct scope, String id, LoadBalancedWebServiceProps props) { super(scope, id); Deployment deployment = Deployment.Builder.create(this, "Deployment") .replicas(props.getReplicas()) .containers(List.of(Container.Builder.create().image(props.getImage()).build())) .build(); deployment.exposeViaService(DeploymentExposeViaServiceOptions.builder() .ports(List.of(ServicePort.builder().port(props.getPort()).build())) .serviceType(ServiceType.LOAD_BALANCER) .build()); } } App app = new App(); Chart chart = new Chart(app, "my-chart"); new LoadBalancedWebService(chart, "WebService", new LoadBalancedWebServiceProps().port(8080).image("nginx").replicas(2));
Manually importing k8s specs and CRDs
If you find yourself unable to use cdk8s+, or would like to use the k8s native objects or CRDs directly, you can do so by manually importing them using the cdk8s-cli.
See Importing kubernetes objects for detailed instructions.
Patching Kubernetes Resources
The KubernetesPatch construct can be used to update existing Kubernetes resources. The following example can be used to patch the hello-kubernetes deployment from the example above with 5 replicas.
Cluster cluster; KubernetesPatch.Builder.create(this, "hello-kub-deployment-label") .cluster(cluster) .resourceName("deployment/hello-kubernetes") .applyPatch(Map.of("spec", Map.of("replicas", 5))) .restorePatch(Map.of("spec", Map.of("replicas", 3))) .build();
Querying Kubernetes Resources
The KubernetesObjectValue construct can be used to query for information about Kubernetes objects, and use that as part of your CDK application.
For example, you can fetch the address of a LoadBalancer
type service:
Cluster cluster; // query the load balancer address KubernetesObjectValue myServiceAddress = KubernetesObjectValue.Builder.create(this, "LoadBalancerAttribute") .cluster(cluster) .objectType("service") .objectName("my-service") .jsonPath(".status.loadBalancer.ingress[0].hostname") .build(); // pass the address to a lambda function Function proxyFunction = Function.Builder.create(this, "ProxyFunction") .handler("index.handler") .code(Code.fromInline("my-code")) .runtime(Runtime.NODEJS_LATEST) .environment(Map.of( "myServiceAddress", myServiceAddress.getValue())) .build();
Specifically, since the above use-case is quite common, there is an easier way to access that information:
Cluster cluster; String loadBalancerAddress = cluster.getServiceLoadBalancerAddress("my-service");
Add-ons
An add-on is software that provides supporting operational capabilities to Kubernetes applications. The EKS module supports adding add-ons to your cluster using the eks.Addon
class.
Cluster cluster;

Addon.Builder.create(this, "Addon")
        .cluster(cluster)
        .addonName("aws-guardduty-agent")
        .addonVersion("v1.6.1")
        // when false, the add-on software is removed from the cluster when the Addon resource is deleted;
        // when true, the software is preserved but Amazon EKS stops managing its settings
        .preserveOnDelete(false)
        .build();
Using existing clusters
The Amazon EKS library allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app.
First, you'll need to "import" a cluster to your CDK app. To do that, use the eks.Cluster.fromClusterAttributes() static method:
ICluster cluster = Cluster.fromClusterAttributes(this, "MyCluster", ClusterAttributes.builder()
        .clusterName("my-cluster-name")
        .kubectlRoleArn("arn:aws:iam::1111111:role/iam-role-that-has-masters-access")
        .build());
Then, you can use addManifest or addHelmChart to define resources inside your Kubernetes cluster. For example:
Cluster cluster;

cluster.addManifest("Test", Map.of(
        "apiVersion", "v1",
        "kind", "ConfigMap",
        "metadata", Map.of(
                "name", "myconfigmap"),
        "data", Map.of(
                "Key", "value",
                "Another", "123454")));
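Helm charts work the same way on an imported cluster. A sketch using addHelmChart, with hypothetical chart coordinates:

```java
Cluster cluster;

// install a Helm chart into the imported cluster
// (chart name, repository, and namespace below are placeholders)
cluster.addHelmChart("NginxIngress", HelmChartOptions.builder()
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .build());
```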
At the minimum, when importing clusters for kubectl management, you will need to specify:

- clusterName - the name of the cluster.
- kubectlRoleArn - the ARN of an IAM role mapped to the system:masters RBAC role. If the cluster you are importing was created using the AWS CDK, the CloudFormation stack has an output that includes an IAM role that can be used. Otherwise, you can create an IAM role and map it to system:masters manually. The trust policy of this role should include the arn:aws:iam::${accountId}:root principal in order to allow the execution role of the kubectl resource to assume it.
If the cluster is configured with private-only or private and restricted public Kubernetes endpoint access, you must also specify:

- kubectlSecurityGroupId - the ID of an EC2 security group that is allowed to connect to the cluster's control plane security group. For example, the EKS managed cluster security group.
- kubectlPrivateSubnetIds - a list of private VPC subnet IDs that will be used to access the Kubernetes endpoint.
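Putting these attributes together, an import for a cluster with a private endpoint might look like the following sketch (all names, IDs, and ARNs are placeholders):

```java
ICluster cluster = Cluster.fromClusterAttributes(this, "PrivateCluster", ClusterAttributes.builder()
        .clusterName("my-cluster-name")
        .kubectlRoleArn("arn:aws:iam::1111111:role/iam-role-that-has-masters-access")
        // required because the endpoint is not publicly reachable
        .kubectlSecurityGroupId("sg-1234567890abcdef0")
        .kubectlPrivateSubnetIds(List.of("subnet-aaaaaaaa", "subnet-bbbbbbbb"))
        .build());
```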
Logging
EKS supports cluster logging for 5 different types of events:
- API requests to the cluster.
- Cluster access via the Kubernetes API.
- Authentication requests into the cluster.
- State of cluster controllers.
- Scheduling decisions.
You can enable logging for each one separately using the clusterLogging property. For example:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
        // ...
        .version(KubernetesVersion.V1_31)
        .clusterLogging(List.of(
                ClusterLoggingTypes.API,
                ClusterLoggingTypes.AUTHENTICATOR,
                ClusterLoggingTypes.SCHEDULER))
        .build();
Known Issues and Limitations
-