@Deprecated
Interface | Description |
---|---|
AutoScalingGroupOptions | Deprecated |
AwsAuthProps | Deprecated |
BootstrapOptions | Deprecated |
CapacityOptions | Deprecated |
CfnAddonProps | Properties for defining a `CfnAddon`. |
CfnCluster.ClusterLoggingProperty | The cluster control plane logging configuration for your cluster. |
CfnCluster.ControlPlanePlacementProperty | Example: |
CfnCluster.EncryptionConfigProperty | The encryption configuration for the cluster. |
CfnCluster.KubernetesNetworkConfigProperty | The Kubernetes network configuration for the cluster. |
CfnCluster.LoggingProperty | Enable or disable exporting the Kubernetes control plane logs for your cluster to CloudWatch Logs. |
CfnCluster.LoggingTypeConfigProperty | The enabled logging type. |
CfnCluster.OutpostConfigProperty | The configuration of your local Amazon EKS cluster on an AWS Outpost. |
CfnCluster.ProviderProperty | Identifies the AWS Key Management Service (AWS KMS) key used to encrypt the secrets. |
CfnCluster.ResourcesVpcConfigProperty | An object representing the VPC configuration to use for an Amazon EKS cluster. |
CfnClusterProps | Properties for defining a `CfnCluster`. |
CfnFargateProfile.LabelProperty | A key-value pair. |
CfnFargateProfile.SelectorProperty | An object representing an AWS Fargate profile selector. |
CfnFargateProfileProps | Properties for defining a `CfnFargateProfile`. |
CfnIdentityProviderConfig.OidcIdentityProviderConfigProperty | An object representing the configuration for an OpenID Connect (OIDC) identity provider. |
CfnIdentityProviderConfig.RequiredClaimProperty | A key-value pair that describes a required claim in the identity token. |
CfnIdentityProviderConfigProps | Properties for defining a `CfnIdentityProviderConfig`. |
CfnNodegroup.LaunchTemplateSpecificationProperty | An object representing a node group launch template specification. |
CfnNodegroup.RemoteAccessProperty | An object representing the remote access configuration for the managed node group. |
CfnNodegroup.ScalingConfigProperty | An object representing the scaling configuration details for the Auto Scaling group that is associated with your node group. |
CfnNodegroup.TaintProperty | A property that allows a node to repel a set of pods. |
CfnNodegroup.UpdateConfigProperty | The update configuration for the node group. |
CfnNodegroupProps | Properties for defining a `CfnNodegroup`. |
ClusterAttributes | Deprecated |
ClusterProps | Deprecated |
EksOptimizedImageProps | Deprecated |
HelmChartOptions | Deprecated |
HelmChartProps | Deprecated |
ICluster | Deprecated |
ICluster.Jsii$Default | Internal default implementation for ICluster. |
KubernetesResourceProps | Deprecated |
Mapping | Deprecated |
Enum | Description |
---|---|
NodeType | Deprecated |
---
This API may emit warnings. Backward compatibility is not guaranteed.
**This module is available for backwards compatibility purposes only (details). It will no longer be released with the CDK starting March 1st, 2020. See the linked GitHub issue for more information.**
This construct library allows you to define Amazon Elastic Container Service for Kubernetes (EKS) clusters programmatically. This library also supports programmatically defining Kubernetes resource manifests within EKS clusters.
The following example defines an Amazon EKS cluster with default configuration and applies a simple Kubernetes pod manifest to it:
```java
Cluster cluster = new Cluster(this, "hello-eks");

cluster.addResource("mypod", Map.of(
        "apiVersion", "v1",
        "kind", "Pod",
        "metadata", Map.of("name", "mypod"),
        "spec", Map.of(
                "containers", List.of(Map.of(
                        "name", "hello",
                        "image", "paulbouwer/hello-kubernetes:1.5",
                        "ports", List.of(Map.of("containerPort", 8080)))))));
```
Here is a complete sample.
By default, `eks.Cluster` is created with two `m5.large` instances:
```java
new Cluster(this, "cluster-two-m5-large");
```
The quantity and instance type for the default capacity can be specified through the `defaultCapacity` and `defaultCapacityInstance` props:
```java
Cluster.Builder.create(this, "cluster")
        .defaultCapacity(10)
        .defaultCapacityInstance(new InstanceType("m2.xlarge"))
        .build();
```
To disable the default capacity, simply set `defaultCapacity` to `0`:
```java
Cluster.Builder.create(this, "cluster-with-no-capacity")
        .defaultCapacity(0)
        .build();
```
The `cluster.defaultCapacity` property will reference the `AutoScalingGroup` resource for the default capacity. It will be `undefined` if `defaultCapacity` is set to `0`:
```java
Cluster cluster = new Cluster(this, "my-cluster");

cluster.getDefaultCapacity().scaleOnCpuUtilization("up", CpuUtilizationScalingProps.builder()
        .targetUtilizationPercent(80)
        .build());
```
You can add customized capacity through `cluster.addCapacity()` or `cluster.addAutoScalingGroup()`:
```java
Cluster cluster;

cluster.addCapacity("frontend-nodes", CapacityOptions.builder()
        .instanceType(new InstanceType("t2.medium"))
        .desiredCapacity(3)
        .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
        .build());
```
If `spotPrice` is specified, the capacity will be purchased from spot instances:
```java
Cluster cluster;

cluster.addCapacity("spot", CapacityOptions.builder()
        .spotPrice("0.1094")
        .instanceType(new InstanceType("t3.large"))
        .maxCapacity(10)
        .build());
```
Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.

The Spot Termination Handler DaemonSet will be installed on these nodes. The termination handler leverages EC2 Spot Instance Termination Notices to gracefully stop all pods running on spot nodes that are about to be terminated.
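As a rough sketch, a workload can be steered onto these spot nodes with a node selector on the `lifecycle=Ec2Spot` label, using the `cluster.addResource` API described later in this document (the deployment name and image below are illustrative):

```java
Cluster cluster;

// Illustrative deployment scheduled onto spot nodes by selecting the
// lifecycle=Ec2Spot label applied by this library. The PreferNoSchedule taint
// is a soft preference, so no toleration is strictly required.
cluster.addResource("spot-only-app", Map.of(
        "apiVersion", "apps/v1",
        "kind", "Deployment",
        "metadata", Map.of("name", "spot-only-app"),
        "spec", Map.of(
                "replicas", 2,
                "selector", Map.of("matchLabels", Map.of("app", "spot-only-app")),
                "template", Map.of(
                        "metadata", Map.of("labels", Map.of("app", "spot-only-app")),
                        "spec", Map.of(
                                "nodeSelector", Map.of("lifecycle", "Ec2Spot"),
                                "containers", List.of(Map.of(
                                        "name", "app",
                                        "image", "paulbouwer/hello-kubernetes:1.5")))))));
```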
When adding capacity, you can specify options for `/etc/eks/bootstrap.sh`, which is responsible for associating the node to the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.
```java
Cluster cluster;

// add spot capacity with custom bootstrap options
cluster.addCapacity("spot", CapacityOptions.builder()
        .instanceType(new InstanceType("t3.large"))
        .desiredCapacity(2)
        .bootstrapOptions(BootstrapOptions.builder()
                .kubeletExtraArgs("--node-labels foo=bar,goo=far")
                .awsApiRetryAttempts(5)
                .build())
        .build());
```
To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false` when you add the capacity.
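A minimal sketch, assuming the `bootstrapEnabled` option on `CapacityOptions` (the capacity id and instance type below are illustrative):

```java
Cluster cluster;

// Nodes are launched without running /etc/eks/bootstrap.sh;
// you are then responsible for supplying your own user-data.
cluster.addCapacity("custom-bootstrap", CapacityOptions.builder()
        .instanceType(new InstanceType("t3.large"))
        .bootstrapEnabled(false)
        .build());
```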
The Amazon EKS construct library allows you to specify an IAM role that will be granted `system:masters` privileges on your cluster. Without specifying a `mastersRole`, you will not be able to interact manually with the cluster.

The following example defines an IAM role that can be assumed by all users in the account and shows how to use the `mastersRole` property to map this role to the Kubernetes `system:masters` group:
```java
// first define the role
Role clusterAdmin = Role.Builder.create(this, "AdminRole")
        .assumedBy(new AccountRootPrincipal())
        .build();

// now define the cluster and map the role to the "masters" RBAC group
Cluster.Builder.create(this, "Cluster")
        .mastersRole(clusterAdmin)
        .build();
```
When you `cdk deploy` this CDK app, you will notice that an output will be printed with the `update-kubeconfig` command, something like this:
```
Outputs:
eks-integ-defaults.ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
```
Copy & paste the `aws eks update-kubeconfig ...` command to your shell in order to connect to your EKS cluster with the "masters" role.

Now, given the AWS CLI is configured to use AWS credentials for a user that is trusted by the masters role, you should be able to interact with your cluster through `kubectl` (the above example will trust all users in the account). For example:
```
$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m
```
For your convenience, an AWS CloudFormation output will automatically be included in your template and will be printed when running `cdk deploy`.

NOTE: if the cluster is configured with `kubectlEnabled: false`, it will be created with the role/user that created the AWS CloudFormation stack. See Kubectl Support for details.
The `KubernetesResource` construct or the `cluster.addResource` method can be used to apply Kubernetes resource manifests to this cluster.

The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:
```java
Cluster cluster;

Map<String, String> appLabel = Map.of("app", "hello-kubernetes");

Map<String, Object> deployment = Map.of(
        "apiVersion", "apps/v1",
        "kind", "Deployment",
        "metadata", Map.of("name", "hello-kubernetes"),
        "spec", Map.of(
                "replicas", 3,
                "selector", Map.of("matchLabels", appLabel),
                "template", Map.of(
                        "metadata", Map.of("labels", appLabel),
                        "spec", Map.of(
                                "containers", List.of(Map.of(
                                        "name", "hello-kubernetes",
                                        "image", "paulbouwer/hello-kubernetes:1.5",
                                        "ports", List.of(Map.of("containerPort", 8080))))))));

Map<String, Object> service = Map.of(
        "apiVersion", "v1",
        "kind", "Service",
        "metadata", Map.of("name", "hello-kubernetes"),
        "spec", Map.of(
                "type", "LoadBalancer",
                "ports", List.of(Map.of("port", 80, "targetPort", 8080)),
                "selector", appLabel));

// option 1: use a construct
KubernetesResource.Builder.create(this, "hello-kub")
        .cluster(cluster)
        .manifest(List.of(deployment, service))
        .build();

// or, option 2: use `addResource`
cluster.addResource("hello-kub", service, deployment);
```
Kubernetes resources are implemented as CloudFormation resources in the CDK. This means that if the resource is deleted from your code (or the stack is deleted), the next `cdk deploy` will issue a `kubectl delete` command and the Kubernetes resources will be deleted.
As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).
The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource on your behalf and exposes an API through `cluster.awsAuth` for mapping users, roles and accounts.

Furthermore, when auto-scaling capacity is added to the cluster (through `cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required any longer.
NOTE: `cluster.awsAuth` will throw an error if your cluster is created with `kubectlEnabled: false`.
For example, let's say you want to grant an IAM user administrative privileges on your cluster:
```java
Cluster cluster;

User adminUser = new User(this, "Admin");
cluster.getAwsAuth().addUserMapping(adminUser, Mapping.builder()
        .groups(List.of("system:masters"))
        .build());
```
A convenience method for mapping a role to the `system:masters` group is also available:
```java
Cluster cluster;
Role role;

cluster.getAwsAuth().addMastersRole(role);
```
If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass its name when adding capacity, and you must be able to reach the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):
```java
AutoScalingGroup asg = cluster.addCapacity("Nodes", CapacityOptions.builder()
        .instanceType(new InstanceType("t2.medium"))
        .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
        .keyName("my-key-name")
        .build());

// Replace with desired IP
asg.getConnections().allowFrom(Peer.ipv4("1.2.3.4/32"), Port.tcp(22));
```
If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.
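As a very rough sketch of that approach (construct names and wiring below are illustrative, assuming the `BastionHostLinux` construct from the EC2 module):

```java
Cluster cluster;

AutoScalingGroup privateNodes = cluster.addCapacity("PrivateNodes", CapacityOptions.builder()
        .instanceType(new InstanceType("t3.medium"))
        .keyName("my-key-name")
        .build());

// Bastion host in a public subnet of the cluster VPC (names are illustrative).
BastionHostLinux bastion = BastionHostLinux.Builder.create(this, "Bastion")
        .vpc(cluster.getVpc())
        .subnetSelection(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
        .build();

// Allow SSH from the bastion host to the worker nodes.
privateNodes.getConnections().allowFrom(bastion, Port.tcp(22));
```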
When you create an Amazon EKS cluster, the IAM entity (user or role, such as a federated user) that creates the cluster is automatically granted `system:masters` permissions in the cluster's RBAC configuration.

In order to allow programmatically defining Kubernetes resources in your AWS CDK app and provisioning them through AWS CloudFormation, we will need to assume this "masters" role every time we want to issue `kubectl` operations against your cluster.
At the moment, the `AWS::EKS::Cluster` AWS CloudFormation resource does not support this behavior, so in order to support "programmatic kubectl", such as applying manifests and mapping IAM roles from within your CDK application, the Amazon EKS construct library uses a custom resource for provisioning the cluster. This custom resource is executed with an IAM role that we can then use to issue `kubectl` commands.

The default behavior of this library is to use this custom resource in order to retain programmatic control over the cluster. In other words: to allow you to define Kubernetes resources in your CDK code instead of having to manage your Kubernetes applications through a separate system.
One of the implications of this design is that, by default, the user who provisioned the AWS CloudFormation stack (executed `cdk deploy`) will not have administrative privileges on the EKS cluster. Additional IAM users or roles must be mapped to the `system:masters` group to gain those privileges. This can be done either by specifying a `mastersRole` when the cluster is defined, by calling `cluster.awsAuth.addMastersRole`, or by explicitly mapping an IAM role or IAM user to the relevant Kubernetes RBAC groups using `cluster.awsAuth.addRoleMapping` and/or `cluster.awsAuth.addUserMapping`.
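For example, a minimal sketch of explicitly mapping an existing IAM role to an RBAC group via `cluster.awsAuth.addRoleMapping` (the role and group name below are illustrative):

```java
Cluster cluster;
Role deployRole;

// Map the role to a Kubernetes RBAC group (group name is illustrative).
cluster.getAwsAuth().addRoleMapping(deployRole, Mapping.builder()
        .groups(List.of("system:basic-user"))
        .build());
```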
If you wish to disable the programmatic kubectl behavior and use the standard `AWS::EKS::Cluster` resource, you can specify `kubectlEnabled: false` when you define the cluster:
```java
Cluster.Builder.create(this, "cluster")
        .kubectlEnabled(false)
        .build();
```
Take care: a change in this property will cause the cluster to be destroyed and a new cluster to be created.
When kubectl is disabled, you should be aware of the following:

1. You don't need to specify `--role-arn` when connecting to the cluster as long as you are using the same user that created the cluster.
2. Any `eks.Cluster` APIs that depend on programmatic kubectl support will fail with an error: `cluster.addResource`, `cluster.addChart`, `cluster.awsAuth`, `props.mastersRole`.
The `HelmChart` construct or the `cluster.addChart` method can be used to add Kubernetes resources to this cluster using Helm.

The following example will install the NGINX Ingress Controller to your cluster using Helm:
```java
Cluster cluster;

// option 1: use a construct
HelmChart.Builder.create(this, "NginxIngress")
        .cluster(cluster)
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .build();

// or, option 2: use `addChart`
cluster.addChart("NginxIngress", HelmChartOptions.builder()
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .build());
```
Helm charts will be installed and updated using `helm upgrade --install`. This means that if the chart is added to CDK with the same release name, it will try to update the chart in the cluster. The chart will exist as a CloudFormation resource.
Helm charts are implemented as CloudFormation resources in the CDK. This means that if the chart is deleted from your code (or the stack is deleted), the next `cdk deploy` will issue a `helm uninstall` command and the Helm chart will be deleted.
When there is no `release` defined, the chart will be installed with a unique name allocated based on the construct path.
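For instance, a minimal sketch of pinning the release name through the `release` option so that repeated deployments upgrade the same release (the release name below is illustrative):

```java
Cluster cluster;

// Pin the Helm release name so subsequent deployments upgrade this release
// instead of relying on a name derived from the construct path.
cluster.addChart("NginxIngressPinned", HelmChartOptions.builder()
        .chart("nginx-ingress")
        .repository("https://helm.nginx.com/stable")
        .namespace("kube-system")
        .release("nginx-ingress")
        .build());
```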