
Installing Calico on Amazon EKS

Project Calico is a network policy engine for Kubernetes. With Calico network policy enforcement, you can implement network segmentation and tenant isolation. This is useful in multi-tenant environments where you must isolate tenants from each other or when you want to create separate environments for development, staging, and production. Network policies are similar to AWS security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels. The following procedure shows you how to install Calico on Linux nodes in your Amazon EKS cluster. To install Calico on Windows nodes, see Using Calico on Amazon EKS Windows Containers.
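To illustrate how network policies use pod selectors and labels (rather than security-group membership), here is a minimal sketch of a Kubernetes NetworkPolicy. The names, labels, and port are hypothetical and only demonstrate the shape of a policy:

```yaml
# Hypothetical policy: allow ingress to pods labeled app=db
# only from pods labeled app=frontend, on TCP port 5432.
# All names and labels here are illustrative, not from this procedure.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: db-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 5432
```

Like a security group rule, this describes allowed ingress; unlike a security group, membership is determined dynamically by the labels on pods.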

  • Calico is not supported when using Fargate with Amazon EKS.

  • Calico adds rules to iptables on the node that may take priority over rules that you've implemented outside of Calico. Consider adding those existing rules to your Calico policies so that they aren't overridden by Calico.

To install Calico on your Amazon EKS Linux nodes

  1. Apply the Calico manifest from the aws/amazon-vpc-cni-k8s GitHub project. This manifest creates DaemonSets in the kube-system namespace.

    kubectl apply -f
  2. Watch the kube-system DaemonSets and wait for the calico-node DaemonSet to have the DESIRED number of pods in the READY state. When this happens, Calico is working.

    kubectl get daemonset calico-node --namespace kube-system



To delete Calico from your Amazon EKS cluster

  • If you are done using Calico in your Amazon EKS cluster, you can remove the resources that the manifest created with the following command:

    kubectl delete -f

Stars policy demo

This section walks through the Stars policy demo provided by the Project Calico documentation. The demo creates a frontend, backend, and client service on your Amazon EKS cluster. The demo also creates a management GUI that shows the available ingress and egress paths between each service.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the frontend service, and the backend only accepts traffic from the frontend.

To run the Stars policy demo

  1. Apply the frontend, backend, client, and management UI services:

    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
    kubectl apply -f
  2. Wait for all of the pods to reach the Running status:

    kubectl get pods --all-namespaces --watch
  3. To connect to the management UI, forward your local port 9001 to the management-ui service running on your cluster:

    kubectl port-forward service/management-ui -n management-ui 9001
  4. Open a browser on your local system and point it to http://localhost:9001/. You should see the management UI. The C node is the client service, the F node is the frontend service, and the B node is the backend service. Each node has full communication access to all other nodes (as indicated by the bold, colored lines).

    (Figure: Open network policy)
  5. Apply the following network policies to isolate the services from each other:

    kubectl apply -n stars -f
    kubectl apply -n client -f
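These are deny-all default policies, applied to both the stars and client namespaces. As a sketch of what such a policy looks like (resembling the demo's default-deny manifest, though the actual file may differ):

```yaml
# Default-deny sketch: the empty podSelector matches every pod in the
# namespace the policy is applied to, and the absence of ingress rules
# means no inbound traffic is allowed to any of them.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
```

Because the manifest has no namespace in its metadata, the `-n stars` and `-n client` flags determine which namespace each copy of the policy lands in.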
  6. Refresh your browser. You see that the management UI can no longer reach any of the nodes, so they don't show up in the UI.

  7. Apply the following network policies to allow the management UI to access the services:

    kubectl apply -f
    kubectl apply -f
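These policies open ingress from the management UI's namespace to the demo pods. A hedged sketch of the shape of one such policy (the `role: management-ui` namespace label is illustrative and may differ from the demo's actual manifests):

```yaml
# Sketch: allow ingress to all pods in the stars namespace from any pod
# running in a namespace labeled role=management-ui.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui
```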
  8. Refresh your browser. You see that the management UI can reach the nodes again, but the nodes cannot communicate with each other.

    (Figure: UI access network policy)
  9. Apply the following network policy to allow traffic from the frontend service to the backend service:

    kubectl apply -f
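A sketch of what this policy looks like: it selects the backend pods and allows ingress only from the frontend pods. The `role` labels and the port are illustrative of the demo's conventions and may not match the actual manifest exactly:

```yaml
# Sketch: allow TCP ingress to pods labeled role=backend only from
# pods labeled role=frontend in the same namespace.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
```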
  10. Apply the following network policy to allow traffic from the client namespace to the frontend service:

    kubectl apply -f
    (Figure: Final network policy)
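Because the client runs in its own namespace, this policy uses a namespaceSelector rather than a podSelector in its ingress rule. A hedged sketch of its shape (the `role` labels are illustrative and may differ from the demo's actual manifest):

```yaml
# Sketch: allow ingress to pods labeled role=frontend from any pod in a
# namespace labeled role=client (i.e., the client namespace).
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: client
```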
  11. (Optional) When you are done with the demo, you can delete its resources with the following commands:

    kubectl delete -f
    kubectl delete -f
    kubectl delete -f
    kubectl delete -f
    kubectl delete -f

    Even after deleting the resources, there can still be iptables rules on the nodes that might interfere in unexpected ways with networking in your cluster. The only sure way to remove Calico is to terminate all of the nodes and recycle them. To terminate all nodes, either set your Auto Scaling group's desired count to 0 and then scale it back up to the desired number, or terminate the nodes directly. If you are unable to recycle the nodes, then see Disabling and removing Calico Policy in the Calico GitHub repository for a last resort procedure.