Package software.amazon.awscdk.services.eks.v2.alpha
Amazon EKS V2 Construct Library
---
The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward compatible changes or removal in any future version. These are not subject to the Semantic Versioning model and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.
The eks-v2-alpha module is a rewrite of the existing aws-eks module (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html). This new iteration leverages native L1 CFN resources, replacing the previous custom resource approach for creating EKS clusters and Fargate Profiles.
Compared to the original EKS module, it has the following major changes:
- Use native L1 AWS::EKS::Cluster resource to replace custom resource Custom::AWSCDK-EKS-Cluster
- Use native L1 AWS::EKS::FargateProfile resource to replace custom resource Custom::AWSCDK-EKS-FargateProfile
- Kubectl Handler will not be created by default. It will only be created if users specify it.
- Remove AwsAuth construct. Permissions to the cluster will be managed by Access Entry.
- Remove the limit of 1 cluster per stack
- Remove nested stacks
- API changes to make the module more ergonomic.
Quick start
Here is a minimal example of defining an AWS EKS cluster:
Cluster cluster = Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .build();
Architecture
+-----------------------------------------------+               +-------------------+
|                  EKS Cluster                  |    kubectl    |                   |
|                -----------------              |<-------------+|  Kubectl Handler  |
|               AWS::EKS::Cluster               |               |    (Optional)     |
|                                               |               +-------------------+
|  +--------------------+   +-----------------+ |
|  |                    |   |                 | |
|  | Managed Node Group |   | Fargate Profile | |
|  |                    |   |                 | |
|  +--------------------+   +-----------------+ |
+-----------------------------------------------+
   ^
   | connect self managed capacity
   +
+--------------------+
| Auto Scaling Group |
+--------------------+
In a nutshell:
- EKS Cluster - The cluster endpoint created by EKS.
- Managed Node Group - EC2 worker nodes managed by EKS.
- Fargate Profile - Fargate worker nodes managed by EKS.
- Auto Scaling Group - EC2 worker nodes managed by the user.
- Kubectl Handler (Optional) - Custom resource (i.e. a Lambda function) for invoking kubectl commands on the cluster - created by CDK
Provisioning cluster
Creating a new cluster is done using the Cluster construct. The only required property is the Kubernetes version.
Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .build();
You can also use FargateCluster to provision a cluster that uses only Fargate workers.
FargateCluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .build();
Note: Unlike the previous EKS module, the Kubectl Handler will not be created by default. It will only be deployed when the kubectlProviderOptions property is used.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer; Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .kubectlProviderOptions(KubectlProviderOptions.builder() .kubectlLayer(new KubectlV32Layer(this, "kubectl")) .build()) .build();
EKS Auto Mode
Amazon EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, allowing AWS to set up and manage the infrastructure that enables the smooth operation of your workloads.
Using Auto Mode
While aws-eks uses DefaultCapacityType.NODEGROUP by default, aws-eks-v2 uses DefaultCapacityType.AUTOMODE as the default capacity type.
Auto Mode is enabled by default when creating a new cluster without specifying any capacity-related properties:
// Create EKS cluster with Auto Mode implicitly enabled
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
        .version(KubernetesVersion.V1_32)
        .build();
You can also explicitly enable Auto Mode using defaultCapacityType:
// Create EKS cluster with Auto Mode explicitly enabled
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
        .version(KubernetesVersion.V1_32)
        .defaultCapacityType(DefaultCapacityType.AUTOMODE)
        .build();
Node Pools
When Auto Mode is enabled, the cluster comes with two default node pools:
- system: for running system components and add-ons
- general-purpose: for running your application workloads
These node pools are managed automatically by EKS. You can configure which node pools to enable through the compute property:
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster") .version(KubernetesVersion.V1_32) .defaultCapacityType(DefaultCapacityType.AUTOMODE) .compute(ComputeConfig.builder() .nodePools(List.of("system", "general-purpose")) .build()) .build();
For more information, see Create a Node Pool for EKS Auto Mode.
Disabling Default Node Pools
You can disable the default node pools entirely by setting an empty array for nodePools. This is useful when you want to use Auto Mode features but manage your compute resources separately:
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster") .version(KubernetesVersion.V1_32) .defaultCapacityType(DefaultCapacityType.AUTOMODE) .compute(ComputeConfig.builder() .nodePools(List.of()) .build()) .build();
When node pools are disabled this way, no IAM role will be created for the node pools, preventing deployment failures that would otherwise occur when a role is created without any node pools.
Node Groups as the default capacity type
If you prefer to manage your own node groups instead of using Auto Mode, you can use the traditional node group approach by specifying defaultCapacityType as NODEGROUP:
// Create EKS cluster with traditional managed node group
Cluster cluster = Cluster.Builder.create(this, "EksCluster")
        .version(KubernetesVersion.V1_32)
        .defaultCapacityType(DefaultCapacityType.NODEGROUP)
        .defaultCapacity(3) // Number of instances
        .defaultCapacityInstance(InstanceType.of(InstanceClass.T3, InstanceSize.LARGE))
        .build();
You can also create a cluster with no initial capacity and add node groups later:
Cluster cluster = Cluster.Builder.create(this, "EksCluster")
        .version(KubernetesVersion.V1_32)
        .defaultCapacityType(DefaultCapacityType.NODEGROUP)
        .defaultCapacity(0)
        .build();

// Add node groups as needed
cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
        .minSize(1)
        .maxSize(3)
        .instanceTypes(List.of(InstanceType.of(InstanceClass.T3, InstanceSize.LARGE)))
        .build());
Read Managed node groups for more information on how to add node groups to the cluster.
Mixing Auto Mode and Node Groups
You can combine Auto Mode with traditional node groups for specific workload requirements:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .version(KubernetesVersion.V1_32)
        .defaultCapacityType(DefaultCapacityType.AUTOMODE)
        .compute(ComputeConfig.builder()
                .nodePools(List.of("system", "general-purpose"))
                .build())
        .build();

// Add specialized node group for specific workloads
cluster.addNodegroupCapacity("specialized-workload", NodegroupOptions.builder()
        .minSize(1)
        .maxSize(3)
        .instanceTypes(List.of(InstanceType.of(InstanceClass.C5, InstanceSize.XLARGE)))
        .labels(Map.of(
                "workload", "specialized"))
        .build());
Important Notes
- Auto Mode and traditional capacity management are mutually exclusive at the default capacity level. You cannot opt in to Auto Mode and also specify defaultCapacity or defaultCapacityInstance.
- When Auto Mode is enabled:
  - The cluster will automatically manage compute resources
  - Node pools cannot be modified, only enabled or disabled
  - EKS will handle scaling and management of the node pools
- Auto Mode requires specific IAM permissions. The construct will automatically attach the required managed policies.
Managed node groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
For more details visit Amazon EKS Managed Node Groups.
By default, when using DefaultCapacityType.NODEGROUP, this library will allocate a managed node group with 2 m5.large instances (this instance type suits most common use cases and is a good value for money).
Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .defaultCapacityType(DefaultCapacityType.NODEGROUP) .build();
At cluster instantiation time, you can customize the number of instances and their type:
Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .defaultCapacityType(DefaultCapacityType.NODEGROUP) .defaultCapacity(5) .defaultCapacityInstance(InstanceType.of(InstanceClass.M5, InstanceSize.SMALL)) .build();
To access the node group that was created on your behalf, you can use cluster.defaultNodegroup.
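As a minimal sketch (assuming the default NODEGROUP capacity shown above), the generated node group can be retrieved and reused elsewhere in your app:

Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_32)
        .defaultCapacityType(DefaultCapacityType.NODEGROUP)
        .build();

// the managed node group allocated on your behalf (null when defaultCapacity is 0)
Nodegroup defaultNodegroup = cluster.getDefaultNodegroup();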
Additional customizations are available post instantiation. To apply them, set the default capacity to 0 and use the cluster.addNodegroupCapacity method:
Cluster cluster = Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .defaultCapacityType(DefaultCapacityType.NODEGROUP) .defaultCapacity(0) .build(); cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder() .instanceTypes(List.of(new InstanceType("m5.large"))) .minSize(4) .diskSize(100) .build());
Fargate profiles
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate Profiles, which are defined as part of your Amazon EKS cluster.
See Fargate Considerations in the AWS EKS User Guide.
You can add Fargate Profiles to any EKS cluster defined in your CDK app through the addFargateProfile() method. The following example adds a profile that will match all pods from the "default" namespace:
Cluster cluster; cluster.addFargateProfile("MyProfile", FargateProfileOptions.builder() .selectors(List.of(Selector.builder().namespace("default").build())) .build());
You can also directly use the FargateProfile construct to create profiles under different scopes:
Cluster cluster; FargateProfile.Builder.create(this, "MyProfile") .cluster(cluster) .selectors(List.of(Selector.builder().namespace("default").build())) .build();
To create an EKS cluster that only uses Fargate capacity, you can use FargateCluster.
The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to run CoreDNS on Fargate.
FargateCluster cluster = FargateCluster.Builder.create(this, "MyCluster") .version(KubernetesVersion.V1_32) .build();
FargateCluster will create a default FargateProfile which can be accessed via the cluster's defaultProfile property. The created profile can also be customized by passing options as with addFargateProfile.
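For example (a minimal sketch), the default profile can be retrieved from the cluster:

FargateCluster cluster = FargateCluster.Builder.create(this, "MyCluster")
        .version(KubernetesVersion.V1_32)
        .build();

// the profile created automatically by FargateCluster
FargateProfile defaultProfile = cluster.getDefaultProfile();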
NOTE: Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).
Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). You can configure the cluster endpoint access by using the endpointAccess property:
Cluster cluster = Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .endpointAccess(EndpointAccess.PRIVATE) .build();
The default value is EndpointAccess.PUBLIC_AND_PRIVATE, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and kubectl commands issued by this library stay within your VPC.
Alb Controller
Some Kubernetes resources are commonly implemented on AWS with the help of the ALB Controller.
From the docs:
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
- It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
- It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
To deploy the controller on your EKS cluster, configure the albController property:
Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .albController(AlbControllerOptions.builder() .version(AlbControllerVersion.V2_8_2) .build()) .build();
The albController requires defaultCapacity or at least one nodegroup. If there is no defaultCapacity or available nodegroup for the cluster, the albController deployment will fail.
Querying the controller pods should look something like this:
❯ kubectl get pods -n kube-system
NAME                                             READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-76bd6c7586-d929p    1/1     Running   0          109m
aws-load-balancer-controller-76bd6c7586-fqxph    1/1     Running   0          109m
...
...
Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller. If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources. Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.
For example:
Cluster cluster;

KubernetesManifest manifest = cluster.addManifest("manifest", Map.of());
if (cluster.getAlbController() != null) {
    manifest.node.addDependency(cluster.getAlbController());
}
You can specify the VPC of the cluster using the vpc and vpcSubnets properties:
Vpc vpc; Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .vpc(vpc) .vpcSubnets(List.of(SubnetSelection.builder().subnetType(SubnetType.PRIVATE_WITH_EGRESS).build())) .build();
If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated with any EKS managed capacity (i.e. managed node groups and Fargate Profiles).
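For instance (a minimal sketch), the generated VPC can be referenced by other resources in your app:

Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
        .version(KubernetesVersion.V1_32)
        .build();

// the VPC that was created on your behalf
IVpc clusterVpc = cluster.getVpc();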
Please note that the vpcSubnets property defines the subnets where EKS will place the control plane ENIs. To choose the subnets where EKS will place the worker nodes, please refer to the Provisioning cluster section above.
If you allocate self-managed capacity, you can specify which subnets the Auto Scaling group should use:
Vpc vpc; Cluster cluster; cluster.addAutoScalingGroupCapacity("nodes", AutoScalingGroupCapacityOptions.builder() .vpcSubnets(SubnetSelection.builder().subnets(vpc.getPrivateSubnets()).build()) .instanceType(new InstanceType("t2.medium")) .build());
There is an additional component you might want to provision within the VPC.
The KubectlHandler is a Lambda function responsible for issuing kubectl and helm commands against the cluster when you add resource manifests to the cluster.
The handler association to the VPC is derived from the endpointAccess configuration. The rule of thumb is: if the cluster VPC can be associated, it will be.
Breaking this down, it means that if the endpoint exposes private access (via EndpointAccess.PRIVATE or EndpointAccess.PUBLIC_AND_PRIVATE), and the VPC contains private subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use case.
If the endpoint does not expose private access (via EndpointAccess.PUBLIC) or the VPC does not contain private subnets, the function will not be provisioned within the VPC.
If your use case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the ClusterProps (as kubectlLambdaRole) of the EKS Cluster construct.
Kubectl Support
You can choose to have CDK create a Kubectl Handler - a Python Lambda function to apply k8s manifests using kubectl apply. This handler will not be created by default.
To create a Kubectl Handler, use kubectlProviderOptions when creating the cluster. kubectlLayer is the only required property in kubectlProviderOptions.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer; Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .kubectlProviderOptions(KubectlProviderOptions.builder() .kubectlLayer(new KubectlV32Layer(this, "kubectl")) .build()) .build();
The Kubectl Handler created along with the cluster will be granted admin permissions to the cluster.
If you want to use an existing kubectl provider function (for example, to keep tight control over the trusted entities on your IAM roles), you can import the existing provider and then use the imported provider when importing the cluster:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer;

IRole handlerRole = Role.fromRoleArn(this, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role");

// get the serviceToken from the custom resource provider
String functionArn = Function.fromFunctionName(this, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").getFunctionArn();
IKubectlProvider kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(this, "KubectlProvider", KubectlProviderAttributes.builder()
        .serviceToken(functionArn)
        .role(handlerRole)
        .build());

ICluster cluster = Cluster.fromClusterAttributes(this, "Cluster", ClusterAttributes.builder()
        .clusterName("cluster")
        .kubectlProvider(kubectlProvider)
        .build());
Environment
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an HTTP proxy:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer; Cluster cluster = Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .kubectlProviderOptions(KubectlProviderOptions.builder() .kubectlLayer(new KubectlV32Layer(this, "kubectl")) .environment(Map.of( "http_proxy", "http://proxy.myproxy.com")) .build()) .build();
Runtime
The kubectl handler uses kubectl, helm and the aws CLI in order to interact with the cluster. These are bundled into AWS Lambda layers included in the @aws-cdk/lambda-layer-awscli and @aws-cdk/lambda-layer-kubectl modules.
The version of kubectl used must be compatible with the Kubernetes version of the cluster. kubectl is supported within one minor version (older or newer) of Kubernetes (see the Kubernetes version skew policy). Depending on which version of Kubernetes you're targeting, you will need to use one of the @aws-cdk/lambda-layer-kubectl-vXY packages.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer; Cluster cluster = Cluster.Builder.create(this, "hello-eks") .version(KubernetesVersion.V1_32) .kubectlProviderOptions(KubectlProviderOptions.builder() .kubectlLayer(new KubectlV32Layer(this, "kubectl")) .build()) .build();
Memory
By default, the kubectl provider is configured with 1024 MiB of memory. You can use the memory option to specify the memory size for the AWS Lambda function:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer; Cluster.Builder.create(this, "MyCluster") .kubectlProviderOptions(KubectlProviderOptions.builder() .kubectlLayer(new KubectlV32Layer(this, "kubectl")) .memory(Size.gibibytes(4)) .build()) .version(KubernetesVersion.V1_32) .build();
ARM64 Support
Instance types with ARM64 architecture are supported in both managed node groups and self-managed capacity. Simply specify an ARM64 instanceType (such as m6g.medium), and the latest Amazon Linux 2 AMI for ARM64 will be automatically selected.
Cluster cluster;

// add a managed ARM64 nodegroup
cluster.addNodegroupCapacity("extra-ng-arm", NodegroupOptions.builder()
        .instanceTypes(List.of(new InstanceType("m6g.medium")))
        .minSize(2)
        .build());

// add a self-managed ARM64 nodegroup
cluster.addAutoScalingGroupCapacity("self-ng-arm", AutoScalingGroupCapacityOptions.builder()
        .instanceType(new InstanceType("m6g.medium"))
        .minCapacity(2)
        .build());
Masters Role
When you create a cluster, you can specify a mastersRole. The Cluster construct will associate this role with AmazonEKSClusterAdminPolicy through Access Entry.
Role role; Cluster.Builder.create(this, "HelloEKS") .version(KubernetesVersion.V1_32) .mastersRole(role) .build();
If you do not specify it, you won't have access to the cluster from outside of the CDK application.
Encryption
When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled. The documentation on creating a cluster can provide more details about the customer master key (CMK) that can be used for the encryption.
You can use the secretsEncryptionKey property to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS managed key will be used.
This setting can only be specified when the cluster is created and cannot be updated.
Key secretsKey = new Key(this, "SecretsKey"); Cluster cluster = Cluster.Builder.create(this, "MyCluster") .secretsEncryptionKey(secretsKey) .version(KubernetesVersion.V1_32) .build();
You can also use a similar configuration for running a cluster built using the FargateCluster construct.
Key secretsKey = new Key(this, "SecretsKey"); FargateCluster cluster = FargateCluster.Builder.create(this, "MyFargateCluster") .secretsEncryptionKey(secretsKey) .version(KubernetesVersion.V1_32) .build();
The Amazon Resource Name (ARN) for that CMK can be retrieved.
Cluster cluster; String clusterEncryptionConfigKeyArn = cluster.getClusterEncryptionConfigKeyArn();
Permissions and Security
In the new EKS module, ConfigMap is deprecated. Clusters created by the new module will use API as the authentication mode, and Access Entry will be the only way to grant permissions to specific IAM users and roles.
Access Entry
An access entry is a cluster identity, directly linked to an AWS IAM principal (user or role), that is used to authenticate to an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.
Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. Amazon EKS supports only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS. Amazon EKS access policies include permission sets that support common use cases of administration, editing, or read-only access to Kubernetes resources. See Access Policy Permissions for more details.
Use AccessPolicy to include predefined AWS managed policies:
// AmazonEKSClusterAdminPolicy with `cluster` scope
AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder()
        .accessScopeType(AccessScopeType.CLUSTER)
        .build());

// AmazonEKSAdminPolicy with `namespace` scope
AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder()
        .accessScopeType(AccessScopeType.NAMESPACE)
        .namespaces(List.of("foo", "bar"))
        .build());
Use grantAccess() to grant the AccessPolicy to an IAM principal:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v32.KubectlV32Layer;

Vpc vpc;

Role clusterAdminRole = Role.Builder.create(this, "ClusterAdminRole")
        .assumedBy(new ArnPrincipal("arn_for_trusted_principal"))
        .build();
Role eksAdminRole = Role.Builder.create(this, "EKSAdminRole")
        .assumedBy(new ArnPrincipal("arn_for_trusted_principal"))
        .build();

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .vpc(vpc)
        .mastersRole(clusterAdminRole)
        .version(KubernetesVersion.V1_32)
        .kubectlProviderOptions(KubectlProviderOptions.builder()
                .kubectlLayer(new KubectlV32Layer(this, "kubectl"))
                .memory(Size.gibibytes(4))
                .build())
        .build();

// Cluster Admin role for this cluster
cluster.grantAccess("clusterAdminAccess", clusterAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder()
        .accessScopeType(AccessScopeType.CLUSTER)
        .build())));

// EKS Admin role for specified namespaces of this cluster
cluster.grantAccess("eksAdminRoleAccess", eksAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder()
        .accessScopeType(AccessScopeType.NAMESPACE)
        .namespaces(List.of("foo", "bar"))
        .build())));
By default, the cluster creator role will be granted the cluster admin permissions. You can disable it by setting bootstrapClusterCreatorAdminPermissions to false.
Note - Switching bootstrapClusterCreatorAdminPermissions on an existing cluster would cause cluster replacement and should be avoided in production.
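A minimal sketch of opting out (assuming access is granted by other means, such as grantAccess or mastersRole):

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .version(KubernetesVersion.V1_32)
        // do not grant the cluster creator role admin access to the cluster
        .bootstrapClusterCreatorAdminPermissions(false)
        .build();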
Cluster Security Group
When you create an Amazon EKS cluster, a cluster security group is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely between each other.
The ID for that security group can be retrieved after creating the cluster.
Cluster cluster; String clusterSecurityGroupId = cluster.getClusterSecurityGroupId();
Applying Kubernetes Resources
To apply Kubernetes resources, a kubectl provider needs to be created for the cluster. You can use kubectlProviderOptions to create the kubectl provider.
The library supports several popular resource deployment mechanisms, among which are:
Kubernetes Manifests
The KubernetesManifest construct or cluster.addManifest method can be used to apply Kubernetes resource manifests to this cluster.
When using cluster.addManifest, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new KubernetesManifest to create the manifest in the scope of the other stack.
The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:
Cluster cluster;

Map<String, String> appLabel = Map.of("app", "hello-kubernetes");

Map<String, Object> deployment = Map.of(
        "apiVersion", "apps/v1",
        "kind", "Deployment",
        "metadata", Map.of("name", "hello-kubernetes"),
        "spec", Map.of(
                "replicas", 3,
                "selector", Map.of("matchLabels", appLabel),
                "template", Map.of(
                        "metadata", Map.of("labels", appLabel),
                        "spec", Map.of(
                                "containers", List.of(Map.of(
                                        "name", "hello-kubernetes",
                                        "image", "paulbouwer/hello-kubernetes:1.5",
                                        "ports", List.of(Map.of("containerPort", 8080))))))));

Map<String, Object> service = Map.of(
        "apiVersion", "v1",
        "kind", "Service",
        "metadata", Map.of("name", "hello-kubernetes"),
        "spec", Map.of(
                "type", "LoadBalancer",
                "ports", List.of(Map.of("port", 80, "targetPort", 8080)),
                "selector", appLabel));

// option 1: use a construct
KubernetesManifest.Builder.create(this, "hello-kub")
        .cluster(cluster)
        .manifest(List.of(deployment, service))
        .build();

// or, option 2: use `addManifest`
cluster.addManifest("hello-kub", service, deployment);
ALB Controller Integration
The KubernetesManifest construct can detect ingress resources inside your manifest and automatically add the necessary annotations so they are picked up by the ALB Controller. See Alb Controller.
To that end, it offers the following properties (a short sketch follows the list):
- ingressAlb - Signal that the ingress detection should be done.
- ingressAlbScheme - Which ALB scheme should be applied. Defaults to internal.
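A brief sketch of how these options might be set; the ingressManifest variable and the internet-facing scheme are illustrative assumptions:

Cluster cluster;
Map<String, Object> ingressManifest;

KubernetesManifest.Builder.create(this, "IngressWithAlbAnnotations")
        .cluster(cluster)
        .manifest(List.of(ingressManifest))
        // detect ingress resources and annotate them for the ALB Controller
        .ingressAlb(true)
        .ingressAlbScheme(AlbScheme.INTERNET_FACING)
        .build();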
Adding resources from a URL
The following example will deploy a resource manifest hosted on a remote server:
// This example is only available in TypeScript
import * as yaml from 'js-yaml';
import * as request from 'sync-request';

declare const cluster: eks.Cluster;
const manifestUrl = 'https://url/of/manifest.yaml';
const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());
cluster.addManifest('my-resource', manifest);
Dependencies
There are cases where Kubernetes resources must be deployed in a specific order. For example, you cannot define a resource in a Kubernetes namespace before the namespace was created.
You can represent dependencies between KubernetesManifests using resource.node.addDependency():
Cluster cluster; KubernetesManifest namespace = cluster.addManifest("my-namespace", Map.of( "apiVersion", "v1", "kind", "Namespace", "metadata", Map.of("name", "my-app"))); KubernetesManifest service = cluster.addManifest("my-service", Map.of( "metadata", Map.of( "name", "myservice", "namespace", "my-app"), "spec", Map.of())); service.node.addDependency(namespace);
NOTE: when a KubernetesManifest includes multiple resources (either directly or through cluster.addManifest(), e.g. cluster.addManifest('foo', r1, r2, r3, ...)), these resources will be applied as a single manifest via kubectl and will be applied sequentially (the standard behavior in kubectl).
Kubernetes manifests are implemented as CloudFormation resources in the CDK. This means that if the manifest is deleted from your code (or the stack is deleted), the next cdk deploy will issue a kubectl delete command and the Kubernetes resources in that manifest will be deleted.
Resource Pruning
When a resource is deleted from a Kubernetes manifest, the EKS module will automatically delete these resources by injecting a prune label to all manifest resources. This label is then passed to kubectl apply --prune.
Pruning is enabled by default but can be disabled through the prune option when a cluster is defined:
Cluster.Builder.create(this, "MyCluster") .version(KubernetesVersion.V1_32) .prune(false) .build();
Manifests Validation
The kubectl CLI supports applying a manifest by skipping the validation. This can be accomplished by setting the skipValidation flag to true in the KubernetesManifest props.
Cluster cluster; KubernetesManifest.Builder.create(this, "HelloAppWithoutValidation") .cluster(cluster) .manifest(List.of(Map.of("foo", "bar"))) .skipValidation(true) .build();
Helm Charts
The HelmChart construct or cluster.addHelmChart method can be used to add Kubernetes resources to this cluster using Helm.
When using cluster.addHelmChart, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new HelmChart to create the chart in the scope of the other stack.
The following example will install the NGINX Ingress Controller to your cluster using Helm.
Cluster cluster;

// option 1: use a construct
HelmChart.Builder.create(this, "NginxIngress")
        .cluster(cluster)
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .build();

// or, option 2: use `addHelmChart`
cluster.addHelmChart("NginxIngress", HelmChartOptions.builder()
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .build());
Helm charts will be installed and updated using helm upgrade --install, where a few parameters are being passed down (such as repo, values, version, namespace, wait, timeout, etc.). This means that if the chart is added to CDK with the same release name, it will try to update the chart in the cluster.
Additionally, the chartAsset property can be an aws-s3-assets.Asset. This allows the use of local, private Helm charts.
import software.amazon.awscdk.services.s3.assets.*; Cluster cluster; Asset chartAsset = Asset.Builder.create(this, "ChartAsset") .path("/path/to/asset") .build(); cluster.addHelmChart("test-chart", HelmChartOptions.builder() .chartAsset(chartAsset) .build());
Nested values passed to the values parameter should be provided as a nested dictionary:
Cluster cluster; cluster.addHelmChart("ExternalSecretsOperator", HelmChartOptions.builder() .chart("external-secrets") .release("external-secrets") .repository("https://charts.external-secrets.io") .namespace("external-secrets") .values(Map.of( "installCRDs", true, "webhook", Map.of( "port", 9443))) .build());
Helm charts can come with Custom Resource Definitions (CRDs) that by default will be installed by Helm as well. However, in special cases it might be necessary to skip the installation of CRDs; for that, the skipCrds property can be used.
Cluster cluster;

// option 1: use a construct
HelmChart.Builder.create(this, "NginxIngress")
        .cluster(cluster)
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .skipCrds(true)
        .build();
OCI Charts
OCI charts are also supported. Replace the ${VARS} in the example below with appropriate values.
Cluster cluster;

// option 1: use a construct
HelmChart.Builder.create(this, "MyOCIChart")
        .cluster(cluster)
        .chart("some-chart")
        .repository("oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}")
        .namespace("oci")
        .version("0.0.1")
        .build();
Helm charts are implemented as CloudFormation resources in CDK. This means that if the chart is deleted from your code (or the stack is deleted), the next cdk deploy will issue a helm uninstall command and the Helm chart will be deleted.
When there is no release defined, a unique ID will be allocated for the release based on the construct path.
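A short sketch of pinning the release name explicitly; the chart and release names are illustrative:

Cluster cluster;

cluster.addHelmChart("NginxIngress", HelmChartOptions.builder()
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        // explicit release name; otherwise a unique ID is derived from the construct path
        .release("nginx-ingress")
        .build());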
By default, all Helm charts will be installed concurrently. In some cases, this could cause race conditions where two Helm charts attempt to deploy the same resource or if Helm charts depend on each other. You can use chart.node.addDependency() in order to declare a dependency order between charts:
Cluster cluster;

HelmChart chart1 = cluster.addHelmChart("MyChart", HelmChartOptions.builder()
        .chart("foo")
        .build());
HelmChart chart2 = cluster.addHelmChart("MyOtherChart", HelmChartOptions.builder()
        .chart("bar")
        .build());

chart2.node.addDependency(chart1);
Custom CDK8s Constructs
You can also compose a few stock cdk8s+ constructs into your own custom construct. However, since mixing scopes between aws-cdk and cdk8s is currently not supported, the Construct class you'll need to use is the one from the constructs module, and not from aws-cdk-lib like you normally would. This is why we use new cdk8s.App() as the scope of the chart in the example below.
import software.constructs.*;
import org.cdk8s.*;
import org.cdk8s.plus25.*;

public class LoadBalancedWebServiceProps {
    private Number port;
    public Number getPort() {
        return this.port;
    }
    public LoadBalancedWebServiceProps port(Number port) {
        this.port = port;
        return this;
    }

    private String image;
    public String getImage() {
        return this.image;
    }
    public LoadBalancedWebServiceProps image(String image) {
        this.image = image;
        return this;
    }

    private Number replicas;
    public Number getReplicas() {
        return this.replicas;
    }
    public LoadBalancedWebServiceProps replicas(Number replicas) {
        this.replicas = replicas;
        return this;
    }
}

App app = new App();
Chart chart = new Chart(app, "my-chart");

public class LoadBalancedWebService extends Construct {
    public LoadBalancedWebService(Construct scope, String id, LoadBalancedWebServiceProps props) {
        super(scope, id);

        Deployment deployment = Deployment.Builder.create(chart, "Deployment")
                .replicas(props.getReplicas())
                .containers(List.of(Container.Builder.create().image(props.getImage()).build()))
                .build();

        deployment.exposeViaService(DeploymentExposeViaServiceOptions.builder()
                .ports(List.of(ServicePort.builder().port(props.getPort()).build()))
                .serviceType(ServiceType.LOAD_BALANCER)
                .build());
    }
}
Manually importing k8s specs and CRD's
If you find yourself unable to use cdk8s+, or just prefer to directly use the k8s native objects or CRDs, you can do so by manually importing them using the cdk8s-cli.
See Importing kubernetes objects for detailed instructions.
Patching Kubernetes Resources
The KubernetesPatch construct can be used to update existing Kubernetes resources. The following example can be used to patch the hello-kubernetes deployment from the example above with 5 replicas.
Cluster cluster; KubernetesPatch.Builder.create(this, "hello-kub-deployment-label") .cluster(cluster) .resourceName("deployment/hello-kubernetes") .applyPatch(Map.of("spec", Map.of("replicas", 5))) .restorePatch(Map.of("spec", Map.of("replicas", 3))) .build();
Querying Kubernetes Resources
The KubernetesObjectValue construct can be used to query for information about Kubernetes objects, and use that as part of your CDK application. For example, you can fetch the address of a LoadBalancer type service:
Cluster cluster;

// query the load balancer address
KubernetesObjectValue myServiceAddress = KubernetesObjectValue.Builder.create(this, "LoadBalancerAttribute")
        .cluster(cluster)
        .objectType("service")
        .objectName("my-service")
        .jsonPath(".status.loadBalancer.ingress[0].hostname")
        .build();

// pass the address to a lambda function
Function proxyFunction = Function.Builder.create(this, "ProxyFunction")
        .handler("index.handler")
        .code(Code.fromInline("my-code"))
        .runtime(Runtime.NODEJS_LATEST)
        .environment(Map.of(
                "myServiceAddress", myServiceAddress.getValue()))
        .build();
Specifically, since the above use-case is quite common, there is an easier way to access that information:
Cluster cluster; String loadBalancerAddress = cluster.getServiceLoadBalancerAddress("my-service");
Add-ons
An add-on is software that provides supporting operational capabilities to Kubernetes applications. The EKS module supports adding add-ons to your cluster using the eks.Addon class.
Cluster cluster;

Addon.Builder.create(this, "Addon")
        .cluster(cluster)
        .addonName("aws-guardduty-agent")
        .addonVersion("v1.6.1")
        // when false, the add-on software is removed from the cluster on delete;
        // when true, it is preserved on the cluster but Amazon EKS stops managing its settings
        .preserveOnDelete(false)
        .build();
Using existing clusters
The EKS library allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app.
First, you will need to import the kubectl provider and cluster created in another stack:
IRole handlerRole = Role.fromRoleArn(this, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role"); IKubectlProvider kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(this, "KubectlProvider", KubectlProviderAttributes.builder() .serviceToken("arn:aws:lambda:us-east-2:123456789012:function:my-function:1") .role(handlerRole) .build()); ICluster cluster = Cluster.fromClusterAttributes(this, "Cluster", ClusterAttributes.builder() .clusterName("cluster") .kubectlProvider(kubectlProvider) .build());
Then, you can use addManifest or addHelmChart to define resources inside your Kubernetes cluster.
Cluster cluster; cluster.addManifest("Test", Map.of( "apiVersion", "v1", "kind", "ConfigMap", "metadata", Map.of( "name", "myconfigmap"), "data", Map.of( "Key", "value", "Another", "123454")));
Logging
EKS supports cluster logging for 5 different types of events:
- API requests to the cluster.
- Cluster access via the Kubernetes API.
- Authentication requests into the cluster.
- State of cluster controllers.
- Scheduling decisions.
You can enable logging for each one separately using the clusterLogging property. For example:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
        // ...
        .version(KubernetesVersion.V1_32)
        .clusterLogging(List.of(ClusterLoggingTypes.API, ClusterLoggingTypes.AUTHENTICATOR, ClusterLoggingTypes.SCHEDULER))
        .build();
NodeGroup Repair Config
You can enable managed node group auto-repair config using the enableNodeAutoRepair property. For example:
Cluster cluster; cluster.addNodegroupCapacity("NodeGroup", NodegroupOptions.builder() .enableNodeAutoRepair(true) .build());