Deploy Windows nodes on EKS clusters

Before deploying Windows nodes, be aware of the following considerations.

Considerations
  • You can use host networking on Windows nodes by using HostProcess Pods. For more information, see Create a Windows HostProcess Pod in the Kubernetes documentation. A minimal manifest sketch appears after this list.

  • Amazon EKS clusters must contain one or more Linux or Fargate nodes to run core system Pods that only run on Linux, such as CoreDNS.

  • The kubelet and kube-proxy event logs are redirected to the EKS Windows Event Log and are set to a 200 MB limit.

  • You can't use Security groups for Pods with Pods running on Windows nodes.

  • You can't use custom networking with Windows nodes.

  • You can't use IPv6 with Windows nodes.

  • Windows nodes support one elastic network interface per node. By default, the number of Pods that you can run per Windows node is equal to the number of IP addresses available per elastic network interface for the node's instance type, minus one. For more information, see IP addresses per network interface per instance type in the Amazon EC2 User Guide.

  • In an Amazon EKS cluster, a single service with a load balancer can support up to 1024 back-end Pods, each with its own unique IP address. The previous limit of 64 Pods was removed by a Windows Server update, starting with OS Build 17763.2746.

  • Windows containers aren't supported for Amazon EKS Pods on Fargate.

  • You can't retrieve logs from the vpc-resource-controller Pod. You previously could when you deployed the controller to the data plane.

  • There is a cooldown period before an IPv4 address is assigned to a new Pod. This prevents traffic from flowing to an older Pod with the same IPv4 address due to stale kube-proxy rules.

  • The source for the vpc-resource-controller is managed on GitHub. To contribute to, or file issues against, the controller, visit the project on GitHub.

  • When specifying a custom AMI ID for Windows managed node groups, add eks:kube-proxy-windows to your AWS IAM Authenticator configuration map. For more information, see Limits and conditions when specifying an AMI ID.

  • If preserving your available IPv4 addresses is crucial for your subnet, refer to EKS Best Practices Guide - Windows Networking IP Address Management for guidance.
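The following is a minimal sketch of a HostProcess Pod manifest, referenced from the first consideration in the preceding list. The Pod name, image, and command are illustrative placeholders, and the image tag must match the Windows Server version of your nodes; see the Kubernetes documentation for the full set of supported fields.

apiVersion: v1
kind: Pod
metadata:
  name: host-process-example # hypothetical name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true # HostProcess Pods must use the host's network namespace
  containers:
  - name: example
    image: mcr.microsoft.com/windows/servercore:ltsc2022 # replace to match your node OS version
    command:
    - powershell.exe
    - -Command
    - "Start-Sleep -Seconds 3600"
  nodeSelector:
    kubernetes.io/os: windows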

Prerequisites
  • An existing cluster. The cluster must be running one of the Kubernetes versions and platform versions listed in the following table, or a later version. If your cluster or platform version is earlier than one of the listed versions, you need to enable legacy Windows support on your cluster's data plane. Once your cluster is at one of the listed Kubernetes and platform versions, or later, you can remove legacy Windows support and enable Windows support on your control plane. You can check your cluster's versions with the command shown after this list.

    Kubernetes version    Platform version
    1.30                  eks.2
    1.29                  eks.1
    1.28                  eks.1
    1.27                  eks.1
    1.26                  eks.1
    1.25                  eks.1
    1.24                  eks.2
  • Your cluster must have at least one (we recommend at least two) Linux node or Fargate Pod to run CoreDNS. If you enable legacy Windows support, you must use a Linux node (you can't use a Fargate Pod) to run CoreDNS.

  • An existing Amazon EKS cluster IAM role.
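
To confirm the Kubernetes version and platform version that your cluster is running, you can use a command such as the following. Replace my-cluster with the name of your cluster.

aws eks describe-cluster --name my-cluster --query "cluster.{Kubernetes: version, Platform: platformVersion}"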

Enable Windows support

If your cluster isn't at or later than one of the Kubernetes and platform versions listed in the Prerequisites, you must enable legacy Windows support instead. For more information, see Legacy cluster support for Windows nodes.

If you've never enabled Windows support on your cluster, skip to the next step.

If you enabled Windows support on a cluster that is earlier than a Kubernetes or platform version listed in the Prerequisites, then you must first remove the vpc-resource-controller and vpc-admission-webhook from your data plane. They're deprecated and no longer needed.
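
As a sketch only, the following commands show one way to remove those resources, assuming they were deployed with their default names in the kube-system namespace. The resource names are assumptions based on the default legacy manifests; verify the actual names in your cluster before deleting anything.

# Confirm the actual resource names in your cluster first.
kubectl get deployment -n kube-system
# Assumed default names from the legacy manifests:
kubectl delete deployment -n kube-system vpc-resource-controller
kubectl delete deployment -n kube-system vpc-admission-webhook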

To enable Windows support for your cluster
  1. If you don't have Amazon Linux nodes in your cluster and use security groups for Pods, skip to the next step. Otherwise, confirm that the AmazonEKSVPCResourceController managed policy is attached to your cluster role. Replace eksClusterRole with your cluster role name.

    aws iam list-attached-role-policies --role-name eksClusterRole

    An example output is as follows.

    {
        "AttachedPolicies": [
            {
                "PolicyName": "AmazonEKSClusterPolicy",
                "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
            },
            {
                "PolicyName": "AmazonEKSVPCResourceController",
                "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
            }
        ]
    }

    If the policy is attached, as it is in the previous output, skip the next step.

  2. Attach the AmazonEKSVPCResourceController managed policy to your Amazon EKS cluster IAM role. Replace eksClusterRole with your cluster role name.

    aws iam attach-role-policy \
      --role-name eksClusterRole \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController
  3. Create a file named vpc-resource-controller-configmap.yaml with the following contents.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: amazon-vpc-cni
      namespace: kube-system
    data:
      enable-windows-ipam: "true"
  4. Apply the ConfigMap to your cluster.

    kubectl apply -f vpc-resource-controller-configmap.yaml
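
    Optionally, you can confirm that the setting is in place by reading the ConfigMap back.

    kubectl get configmap amazon-vpc-cni -n kube-system -o yaml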
  5. Verify that your aws-auth ConfigMap contains a mapping for the instance role of the Windows node to include the eks:kube-proxy-windows RBAC permission group. You can verify by running the following command.

    kubectl get configmap aws-auth -n kube-system -o yaml

    An example output is as follows.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - groups:
          - system:bootstrappers
          - system:nodes
          - eks:kube-proxy-windows # This group is required for Windows DNS resolution to work
          rolearn: arn:aws:iam::111122223333:role/eksNodeRole
          username: system:node:{{EC2PrivateDNSName}}
    [...]
    

    You should see eks:kube-proxy-windows listed under groups. If the group isn't specified, you need to update your ConfigMap, or create the ConfigMap, to include the required group. For more information about the aws-auth ConfigMap, see Apply the aws-auth ConfigMap to your cluster.
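
    As a sketch of one way to add the missing group, you can open the ConfigMap in an editor and append eks:kube-proxy-windows to the groups list for your Windows node instance role, matching the previous example output.

    kubectl edit configmap aws-auth -n kube-system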

Deploy Windows Pods

When you deploy Pods to your cluster, you need to specify the operating system that they use if you're running a mixture of node types.

For Linux Pods, use the following node selector text in your manifests.

nodeSelector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64

For Windows Pods, use the following node selector text in your manifests.

nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64

You can deploy a sample application to see the node selectors in use; a minimal sketch follows.
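
The manifest below is illustrative rather than an official sample: the Deployment name and labels are placeholders, and the Windows Server image tag must match the OS version of your Windows nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-sample # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: windows-sample
  template:
    metadata:
      labels:
        app: windows-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows
        kubernetes.io/arch: amd64
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022 # tag must match node OS version
        ports:
        - containerPort: 80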

Support higher Pod density on Windows nodes

In Amazon EKS, each Pod is allocated an IPv4 address from your VPC. As a result, the number of Pods that you can deploy to a node is constrained by the available IP addresses, even if there are sufficient resources to run more Pods on the node. Because a Windows node supports only one elastic network interface, by default the maximum number of available IP addresses on a Windows node is equal to:

Number of private IPv4 addresses for each interface on the node - 1

One IP address is used as the primary IP address of the network interface, so it can't be allocated to Pods.

You can enable higher Pod density on Windows nodes by enabling IP prefix delegation. This feature enables you to assign a /28 IPv4 prefix to the primary network interface, instead of assigning secondary IPv4 addresses. Assigning an IP prefix increases the maximum available IPv4 addresses on the node to:

(Number of private IPv4 addresses assigned to the interface attached to the node - 1) * 16

With this significantly larger number of available IP addresses, available IP addresses shouldn't limit your ability to scale the number of Pods on your nodes. For more information, see Increase the amount of available IP addresses for your Amazon EC2 nodes.
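
As a sketch of how this is enabled, and assuming the amazon-vpc-cni ConfigMap created earlier in this topic, prefix delegation for Windows nodes is controlled by the enable-windows-prefix-delegation key. The instance numbers in the comments are an assumption for illustration; check the per-interface IP address count for your instance type.

# Enable /28 prefix delegation for Windows nodes (requires the amazon-vpc-cni
# ConfigMap in kube-system and, generally, a Nitro-based instance type).
kubectl patch configmap/amazon-vpc-cni -n kube-system \
  --patch '{"data": {"enable-windows-prefix-delegation": "true"}}'

# Worked example for an instance type with 10 private IPv4 addresses per
# interface (for example, m5.large):
#   Without prefix delegation: 10 - 1 = 9 addresses available for Pods
#   With prefix delegation:    (10 - 1) * 16 = 144 addresses available for Pods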