Amazon EFS CSI driver

The Amazon EFS Container Storage Interface (CSI) driver provides a CSI interface that allows Kubernetes clusters running on AWS to manage the lifecycle of Amazon EFS file systems.

This topic shows you how to deploy the Amazon EFS CSI driver to your Amazon EKS cluster and verify that it works.

Note

Alpha features of the Amazon EFS CSI driver are not supported on Amazon EKS clusters.

For detailed descriptions of the available parameters and complete examples that demonstrate the driver's features, see the Amazon EFS Container Storage Interface (CSI) driver project on GitHub.

Considerations

  • You can't use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning.

  • Dynamic provisioning requires version 1.2 or later of the driver, which requires a cluster that is version 1.17 or later. You can statically provision persistent volumes using version 1.1 of the driver on any supported Amazon EKS cluster version.

Prerequisites

  • Existing cluster with an OIDC provider – If you don't have a cluster, you can create one using one of the Getting started with Amazon EKS guides. To determine whether you have an OIDC provider for an existing cluster, or to create one, see Create an IAM OIDC provider for your cluster.

  • AWS CLI – A command line tool for working with AWS services, including Amazon EKS. This guide requires version 2.1.26 or later of the AWS CLI version 2, or version 1.19.7 or later of the AWS CLI version 1. For more information, see Installing, updating, and uninstalling the AWS CLI in the AWS Command Line Interface User Guide. After installing the AWS CLI, we recommend that you also configure it. For more information, see Quick configuration with aws configure in the AWS Command Line Interface User Guide.

  • kubectl – A command line tool for working with Kubernetes clusters. This guide requires that you use version 1.19 or later. For more information, see Installing kubectl.
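
You can check the versions that you have installed with the following commands:

    aws --version
    kubectl version --client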

Create an IAM policy and role

Create an IAM policy and assign it to an IAM role. The policy allows the Amazon EFS CSI driver to interact with your file system.

To deploy the Amazon EFS CSI driver to an Amazon EKS cluster

  1. Create an IAM policy that allows the CSI driver's service account to make calls to AWS APIs on your behalf.

    1. Download the IAM policy document from GitHub.

      curl -o iam-policy-example.json https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/v1.2.0/docs/iam-policy-example.json
    2. Create the policy. You can change AmazonEKS_EFS_CSI_Driver_Policy to a different name, but if you do, make sure to change it in later steps too.

      aws iam create-policy \
          --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
          --policy-document file://iam-policy-example.json
  2. Create an IAM role and attach the IAM policy to it. Annotate the Kubernetes service account with the IAM role ARN and the IAM role with the Kubernetes service account name. You can create the role using eksctl or the AWS CLI.

    eksctl

    The following command creates the IAM role and Kubernetes service account. It also attaches the policy to the role, annotates the Kubernetes service account with the IAM role ARN, and adds the Kubernetes service account name to the trust policy for the IAM role. If you don't have an IAM OIDC provider for your cluster, the command also creates one.

    eksctl create iamserviceaccount \
        --name efs-csi-controller-sa \
        --namespace kube-system \
        --cluster <cluster-name> \
        --attach-policy-arn arn:aws:iam::<AWS account ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy \
        --approve \
        --override-existing-serviceaccounts \
        --region us-west-2
    AWS CLI
    1. Determine your cluster's OIDC provider URL. Replace <cluster-name> (including <>) with your cluster name. If the output from the command is None, review the Prerequisites.

      aws eks describe-cluster --name <cluster-name> --query "cluster.identity.oidc.issuer" --output text

      Output

      https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLEXXX45D83924220DC4815XXXXX
    2. Create the IAM role, granting the Kubernetes service account the AssumeRoleWithWebIdentity action.

      1. Copy the following contents to a file named trust-policy.json. Replace <AWS_ACCOUNT_ID> (including <>) with your account ID, and replace <EXAMPLEXXX45D83924220DC4815XXXXX> and us-west-2 with the values returned in the previous step.

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/<EXAMPLEXXX45D83924220DC4815XXXXX>"
              },
              "Action": "sts:AssumeRoleWithWebIdentity",
              "Condition": {
                "StringEquals": {
                  "oidc.eks.us-west-2.amazonaws.com/id/<EXAMPLEXXX45D83924220DC4815XXXXX>:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
                }
              }
            }
          ]
        }
      2. Create the role. You can change AmazonEKS_EFS_CSI_DriverRole to a different name, but if you do, make sure to change it in later steps too.

        aws iam create-role \
            --role-name AmazonEKS_EFS_CSI_DriverRole \
            --assume-role-policy-document file://"trust-policy.json"
    3. Attach the IAM policy to the role. Replace <AWS_ACCOUNT_ID> (including <>) with your account ID.

      aws iam attach-role-policy \
          --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy \
          --role-name AmazonEKS_EFS_CSI_DriverRole
    4. Create a Kubernetes service account that is annotated with the ARN of the IAM role that you created.

      1. Save the following contents to a file named efs-service-account.yaml.

        ---
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: efs-csi-controller-sa
          namespace: kube-system
          labels:
            app.kubernetes.io/name: aws-efs-csi-driver
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKS_EFS_CSI_DriverRole
      2. Apply the manifest.

        kubectl apply -f efs-service-account.yaml
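
        To confirm that the service account carries the role annotation, you can describe it (an optional check, not part of the original procedure):

        kubectl describe serviceaccount efs-csi-controller-sa -n kube-system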

Install the Amazon EFS driver

Install the Amazon EFS CSI driver using Helm or a manifest.

Important
  • The following steps install the 1.2.0 version of the driver, which requires a 1.17 or later cluster. If you're installing the driver on a cluster that is earlier than version 1.17, you need to install version 1.1 of the driver. For more information, see Amazon EFS CSI driver on GitHub.

  • Encryption of data in transit using TLS is enabled by default. With encryption in transit, data is encrypted as it travels over the network to the Amazon EFS service. To disable it and mount volumes using NFSv4 instead, set the volumeAttributes field encryptInTransit to "false" in your persistent volume manifest. For an example manifest, see Encryption in Transit example on GitHub. A minimal sketch follows this list.
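
    The following sketch shows a persistent volume spec with encryption in transit disabled. It is for illustration only; <file-system-id> is a placeholder for your file system ID.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: <file-system-id>
        volumeAttributes:
          # Sketch: disables TLS so the volume mounts over plain NFSv4.
          encryptInTransit: "false"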

Helm

This procedure requires Helm V3 or later. To install or upgrade Helm, see Using Helm with Amazon EKS.

  1. Add the Helm repo.

    helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
  2. Update the repo.

    helm repo update
  3. Install the chart. If your cluster isn't in the us-west-2 Region, then change 602401143452.dkr.ecr.us-west-2.amazonaws.com to the address for your Region.

    helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
        --namespace kube-system \
        --set image.repository=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver \
        --set serviceAccount.controller.create=false \
        --set serviceAccount.controller.name=efs-csi-controller-sa
Manifest
  1. Download the manifest.

    kubectl kustomize \
        "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr?ref=release-1.2" > driver.yaml
  2. Edit the file and remove the following lines that create a Kubernetes service account. This isn't necessary since the service account was created in a previous step.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/name: aws-efs-csi-driver
      name: efs-csi-controller-sa
      namespace: kube-system
    ---
  3. Find the following line. If your cluster isn't in the us-west-2 Region, replace the address with the address for your Region. After you've made the change, save the modified manifest.

    image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver:v1.2.0
  4. Apply the manifest.

    kubectl apply -f driver.yaml
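
    Whichever installation method you used, you can verify that the driver's controller and node pods are running. The efs-csi name prefix matches both:

    kubectl get pods -n kube-system | grep efs-csi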

Create an Amazon EFS file system

The Amazon EFS CSI driver supports Amazon EFS access points, which are application-specific entry points into an Amazon EFS file system that make it easier to share a file system between multiple pods. Access points can enforce a user identity for all file system requests that are made through the access point, and enforce a root directory for each pod. For more information, see Amazon EFS access points on GitHub.
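
With dynamic provisioning, the driver creates access points for you. If you want to experiment with a manually created access point, the following AWS CLI sketch shows the general shape; the POSIX user, directory path, and <file-system-id> placeholder are example values, not part of this procedure:

    aws efs create-access-point \
        --file-system-id <file-system-id> \
        --posix-user Uid=1000,Gid=1000 \
        --root-directory "Path=/app1,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}"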

Important

You must complete the following steps in the same terminal because variables are set and used across the steps.

To create an Amazon EFS file system for your Amazon EKS cluster

  1. Retrieve the VPC ID that your cluster is in and store it in a variable for use in a later step. Replace <cluster-name> (including <>) with your cluster name.

    vpc_id=$(aws eks describe-cluster \
        --name <cluster-name> \
        --query "cluster.resourcesVpcConfig.vpcId" \
        --output text)
  2. Retrieve the CIDR range for your cluster's VPC and store it in a variable for use in a later step.

    cidr_range=$(aws ec2 describe-vpcs \
        --vpc-ids $vpc_id \
        --query "Vpcs[].CidrBlock" \
        --output text)
  3. Create a security group with an inbound rule that allows inbound NFS traffic for your Amazon EFS mount points.

    1. Create a security group. Replace the example values with your own.

      security_group_id=$(aws ec2 create-security-group \
          --group-name MyEfsSecurityGroup \
          --description "My EFS security group" \
          --vpc-id $vpc_id \
          --output text)
    2. Create an inbound rule that allows inbound NFS traffic from the CIDR for your cluster's VPC.

      aws ec2 authorize-security-group-ingress \
          --group-id $security_group_id \
          --protocol tcp \
          --port 2049 \
          --cidr $cidr_range
      Important

      To further restrict access to your file system, you can use the CIDR for your subnet instead of the VPC.
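
      For example, using one of the example subnet CIDR blocks shown later in this topic (a sketch; substitute your own subnet's CIDR):

      aws ec2 authorize-security-group-ingress \
          --group-id $security_group_id \
          --protocol tcp \
          --port 2049 \
          --cidr 192.168.32.0/19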

  4. Create an Amazon EFS file system for your Amazon EKS cluster.

    1. Create a file system.

      file_system_id=$(aws efs create-file-system \
          --region us-west-2 \
          --performance-mode generalPurpose \
          --query 'FileSystemId' \
          --output text)
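
      Mount targets can only be added after the file system's life cycle state becomes available. You can check the state with the following command (an optional check, not part of the original procedure):

      aws efs describe-file-systems \
          --file-system-id $file_system_id \
          --query 'FileSystems[0].LifeCycleState' \
          --output text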
    2. Create mount targets.

      1. Determine the IP addresses of your cluster nodes.

        kubectl get nodes

        Output

        NAME                                         STATUS   ROLES    AGE   VERSION
        ip-192-168-56-0.us-west-2.compute.internal   Ready    <none>   19m   v1.19.6-eks-49a6c0
      2. Determine the IDs of the subnets in your VPC and which Availability Zone the subnet is in.

        aws ec2 describe-subnets \
            --filters "Name=vpc-id,Values=$vpc_id" \
            --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
            --output table

        Output

        ----------------------------------------------------------------------
        |                          DescribeSubnets                           |
        +------------------+--------------------+----------------------------+
        | AvailabilityZone |     CidrBlock      |          SubnetId          |
        +------------------+--------------------+----------------------------+
        |  us-west-2c      |  192.168.128.0/19  |  subnet-EXAMPLE6e421a0e97  |
        |  us-west-2b      |  192.168.96.0/19   |  subnet-EXAMPLEd0503db0ec  |
        |  us-west-2c      |  192.168.32.0/19   |  subnet-EXAMPLEe2ba886490  |
        |  us-west-2b      |  192.168.0.0/19    |  subnet-EXAMPLE123c7c5182  |
        |  us-west-2a      |  192.168.160.0/19  |  subnet-EXAMPLE0416ce588p  |
        |  us-west-2a      |  192.168.64.0/19   |  subnet-EXAMPLE12c68ea7fb  |
        +------------------+--------------------+----------------------------+
      3. Add mount targets for the subnets that your nodes are in. From the output in the previous two steps, the cluster has one node with an IP address of 192.168.56.0. That IP address is within the CidrBlock of the subnet with the ID subnet-EXAMPLEe2ba886490. As a result, the following command creates a mount target for the subnet the node is in. If there were more nodes in the cluster, you'd run the command once for a subnet in each Availability Zone that you had a node in, replacing subnet-EXAMPLEe2ba886490 with the appropriate subnet ID (a loop sketch follows the command).

        aws efs create-mount-target \
            --file-system-id $file_system_id \
            --subnet-id subnet-EXAMPLEe2ba886490 \
            --security-groups $security_group_id
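
        If you have nodes in several subnets, a short shell loop avoids repeating the command. This is a sketch; Amazon EFS allows only one mount target per Availability Zone, so list only one subnet for each Availability Zone that contains nodes:

        # Hypothetical subnet IDs, one per Availability Zone with nodes.
        for subnet in subnet-EXAMPLEe2ba886490 subnet-EXAMPLE6e421a0e97; do
            aws efs create-mount-target \
                --file-system-id $file_system_id \
                --subnet-id $subnet \
                --security-groups $security_group_id
        done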

(Optional) Deploy a sample application

You can deploy a sample app that dynamically creates a persistent volume, or you can manually create a persistent volume.

Dynamic
Important

You can't use dynamic provisioning with Fargate nodes.

Prerequisite

You must use version 1.2 or later of the Amazon EFS CSI driver, which requires a cluster that is version 1.17 or later. To update your cluster, see Updating a cluster.

To deploy a sample application that uses a persistent volume that the controller creates

This procedure uses the Dynamic Provisioning example from the Amazon EFS Container Storage Interface (CSI) driver GitHub repository. It dynamically creates a persistent volume through EFS access points and a Persistent Volume Claim (PVC) that is consumed by a pod.

  1. Create a storage class for EFS. For all parameters and configuration options, see Amazon EFS CSI Driver on GitHub.

    1. Download a StorageClass manifest for Amazon EFS.

      curl -o storageclass.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
    2. Edit the file, replacing the value for fileSystemId with your file system ID.
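
      After editing, the storage class might look similar to the following. This is a sketch modeled on the dynamic provisioning example in the driver repository; your downloaded file may include additional optional parameters, and <file-system-id> is a placeholder for your file system ID.

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: efs-sc
      provisioner: efs.csi.aws.com
      parameters:
        provisioningMode: efs-ap
        fileSystemId: <file-system-id>
        directoryPerms: "700"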

    3. Deploy the storage class.

      kubectl apply -f storageclass.yaml
  2. Test automatic provisioning by deploying a pod that makes use of the PersistentVolumeClaim:

    1. Download a manifest that deploys a pod and a PersistentVolumeClaim.

      curl -o pod.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
    2. Deploy the pod with a sample app and the PersistentVolumeClaim used by the pod.

      kubectl apply -f pod.yaml
  3. Determine the names of the pods running the controller.

    kubectl get pods -n kube-system | grep efs-csi-controller

    Output

    efs-csi-controller-74ccf9f566-q5989   3/3     Running   0          40m
    efs-csi-controller-74ccf9f566-wswg9   3/3     Running   0          40m
  4. After a few seconds, you can observe the controller picking up the change (the output that follows is edited for readability). Replace 74ccf9f566-q5989 with a value from one of the pods in your output from the previous command.

    kubectl logs efs-csi-controller-74ccf9f566-q5989 \
        -n kube-system \
        -c csi-provisioner \
        --tail 10

    Output

    ... 1 controller.go:737] successfully created PV pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca for PVC efs-claim and csi volume name fs-95bcec92::fsap-02a88145b865d3a87

    If you don't see the previous output, run the previous command using one of the other controller pods.

  5. Confirm that a persistent volume was created with a status of Bound to a PersistentVolumeClaim:

    kubectl get pv

    Output

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
    pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca   20Gi       RWX            Delete           Bound    default/efs-claim   efs-sc                  7m57s
  6. View details about the PersistentVolumeClaim that was created.

    kubectl get pvc

    Output

    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    efs-claim   Bound    pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca   20Gi       RWX            efs-sc         9m7s
  7. View the sample app pod's status.

    kubectl get pods -o wide

    Output

    NAME      READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
    efs-app   1/1     Running   0          10m   192.168.78.156   ip-192-168-73-191.us-west-2.compute.internal   <none>           <none>

    Confirm that the data is written to the volume.

    kubectl exec efs-app -- bash -c "cat data/out"

    Output

    ...
    Tue Mar 23 14:29:16 UTC 2021
    Tue Mar 23 14:29:21 UTC 2021
    Tue Mar 23 14:29:26 UTC 2021
    Tue Mar 23 14:29:31 UTC 2021
    ...
  8. (Optional) Terminate the Amazon EKS node that your pod is running on and wait for the pod to be rescheduled. Alternately, you can delete the pod and redeploy it, as in the sketch that follows. Complete step 7 again, confirming that the earlier output still appears.
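
    For example, to delete and redeploy the pod without touching the persistent volume claim (a sketch; deleting the claim itself would also delete the dynamically provisioned volume, because the reclaim policy is Delete):

    kubectl delete pod efs-app
    kubectl apply -f pod.yaml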

Static

To deploy a sample application that uses a persistent volume that you create

This procedure uses the Multiple Pods Read Write Many example from the Amazon EFS Container Storage Interface (CSI) driver GitHub repository to consume a statically provisioned Amazon EFS persistent volume and access it from multiple pods with the ReadWriteMany access mode.

  1. Clone the Amazon EFS Container Storage Interface (CSI) driver GitHub repository to your local system.

    git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git
  2. Navigate to the multiple_pods example directory.

    cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
  3. Retrieve your Amazon EFS file system ID. You can find this in the Amazon EFS console, or use the following AWS CLI command.

    aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text

    Output:

    fs-<582a03f3>
  4. Edit the specs/pv.yaml file and replace the volumeHandle value with your Amazon EFS file system ID.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-<582a03f3>
    Note

    Because Amazon EFS is an elastic file system, it does not enforce any file system capacity limits. The actual storage capacity value in persistent volumes and persistent volume claims is not used when creating the file system. However, because storage capacity is a required field in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does not limit the size of your Amazon EFS file system.

  5. Deploy the efs-sc storage class, efs-claim persistent volume claim, and efs-pv persistent volume from the specs directory.

    kubectl apply -f specs/pv.yaml
    kubectl apply -f specs/claim.yaml
    kubectl apply -f specs/storageclass.yaml
  6. List the persistent volumes and watch for one that is bound to the default/efs-claim claim. Persistent volumes are cluster-scoped, not namespaced.

    kubectl get pv -w

    Output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
    efs-pv   5Gi        RWX            Retain           Bound    default/efs-claim   efs-sc                  2m50s

    Don't proceed to the next step until the STATUS is Bound.

  7. Deploy the app1 and app2 sample applications from the specs directory.

    kubectl apply -f specs/pod1.yaml
    kubectl apply -f specs/pod2.yaml
  8. Watch the pods in the default namespace and wait for the app1 and app2 pods' STATUS to become Running.

    kubectl get pods --watch
    Note

    It may take a few minutes for the pods to reach the Running status.

  9. Describe the persistent volume.

    kubectl describe pv efs-pv

    Output:

    Name:            efs-pv
    Labels:          none
    Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capaci...
                     pv.kubernetes.io/bound-by-controller: yes
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    efs-sc
    Status:          Bound
    Claim:           default/efs-claim
    Reclaim Policy:  Retain
    Access Modes:    RWX
    VolumeMode:      Filesystem
    Capacity:        5Gi
    Node Affinity:   none
    Message:
    Source:
        Type:              CSI (a Container Storage Interface (CSI) volume source)
        Driver:            efs.csi.aws.com
        VolumeHandle:      fs-582a03f3
        ReadOnly:          false
        VolumeAttributes:  none
    Events:                none

    The Amazon EFS file system ID is listed as the VolumeHandle.

  10. Verify that the app1 pod is successfully writing data to the volume.

    kubectl exec -ti app1 -- tail /data/out1.txt

    Output:

    ...
    Mon Mar 22 18:18:22 UTC 2021
    Mon Mar 22 18:18:27 UTC 2021
    Mon Mar 22 18:18:32 UTC 2021
    Mon Mar 22 18:18:37 UTC 2021
    ...
  11. Verify that the app2 pod shows the same data in the volume that app1 wrote to the volume.

    kubectl exec -ti app2 -- tail /data/out1.txt

    Output:

    ...
    Mon Mar 22 18:18:22 UTC 2021
    Mon Mar 22 18:18:27 UTC 2021
    Mon Mar 22 18:18:32 UTC 2021
    Mon Mar 22 18:18:37 UTC 2021
    ...
  12. When you finish experimenting, delete the resources for this sample application to clean up.

    kubectl delete -f specs/

    You can also manually delete the file system and security group that you created.
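
    For example (a sketch; it assumes the shell variables set earlier in this topic are still available, and all mount targets must finish deleting before the file system can be deleted):

    # Delete each mount target that was created for the file system.
    aws efs describe-mount-targets \
        --file-system-id $file_system_id \
        --query 'MountTargets[*].MountTargetId' \
        --output text | xargs -n 1 aws efs delete-mount-target --mount-target-id

    # After the mount targets finish deleting, delete the file system.
    aws efs delete-file-system --file-system-id $file_system_id

    # Delete the security group once nothing references it.
    aws ec2 delete-security-group --group-id $security_group_id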