Store high-performance apps with FSx for Lustre
The FSx for Lustre Container Storage Interface (CSI) driver provides a CSI interface that allows Amazon EKS clusters to manage the lifecycle of FSx for Lustre file systems.
This topic shows you how to deploy the FSx for Lustre CSI driver to your Amazon EKS cluster and verify that it works. We recommend using the latest version of the driver. For available versions, see the CSI Specification Compatibility Matrix on GitHub.
Note
The driver isn't supported on Fargate.
For detailed descriptions of the available parameters and complete examples that demonstrate the driver's features, see the FSx for Lustre Container Storage Interface (CSI) driver project on GitHub.
Prerequisites
You must have:

- Version 2.12.3 or later or version 1.27.160 or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use aws --version | cut -d / -f2 | cut -d ' ' -f1. Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see Installing, updating, and uninstalling the AWS CLI and Quick configuration with aws configure in the AWS Command Line Interface User Guide. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see Installing AWS CLI to your home directory in the AWS CloudShell User Guide.
- Version 0.191.0 or later of the eksctl command line tool installed on your device or AWS CloudShell. To install or update eksctl, see Installation in the eksctl documentation.
- The kubectl command line tool installed on your device or AWS CloudShell. The version can be the same as, or up to one minor version earlier or later than, the Kubernetes version of your cluster. For example, if your cluster version is 1.30, you can use kubectl version 1.29, 1.30, or 1.31 with it. To install or upgrade kubectl, see Set up kubectl and eksctl.
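The kubectl version-skew rule above can be sketched as a small shell check. This is a hypothetical helper, not an official tool; it assumes plain MAJOR.MINOR version strings and only compares the minor component:

```shell
# Hypothetical helper: succeed if the kubectl client's minor version is
# within one of the cluster's minor version (input like "1.30").
minor() { echo "$1" | cut -d. -f2; }

skew_ok() {
  client_minor=$(minor "$1")
  cluster_minor=$(minor "$2")
  diff=$((client_minor - cluster_minor))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

skew_ok 1.29 1.30 && echo "kubectl 1.29 works with a 1.30 cluster"
```

For example, skew_ok 1.31 1.30 succeeds, while skew_ok 1.28 1.30 fails because the client is two minor versions behind the cluster.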
The following procedures help you create a simple test cluster with the FSx for Lustre CSI driver so that you can see how it works. We don't recommend using the testing cluster for production workloads. For this tutorial, we recommend using the example values, except where it's noted to replace them. You can replace any example value when completing the steps for your production cluster. We recommend completing all steps in the same terminal because variables are set and used throughout the steps and won't exist in different terminals.
To deploy the FSx for Lustre CSI driver
-
Set a few variables to use in the remaining steps. Replace my-csi-fsx-cluster with the name of the test cluster you want to create and region-code with the AWS Region that you want to create your test cluster in.

export cluster_name=my-csi-fsx-cluster
export region_code=region-code
-
Create a test cluster.

eksctl create cluster \
  --name $cluster_name \
  --region $region_code \
  --with-oidc \
  --ssh-access \
  --ssh-public-key my-key

Cluster provisioning takes several minutes. During cluster creation, you'll see several lines of output. The last line of output is similar to the following example line.

[✓] EKS cluster "my-csi-fsx-cluster" in "region-code" region is ready
-
Create a Kubernetes service account for the driver and attach the AmazonFSxFullAccess AWS managed policy to the service account with the following command. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace arn:aws: with arn:aws-us-gov:.

eksctl create iamserviceaccount \
  --name fsx-csi-controller-sa \
  --namespace kube-system \
  --cluster $cluster_name \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonFSxFullAccess \
  --approve \
  --role-name AmazonEKSFSxLustreCSIDriverFullAccess \
  --region $region_code

You'll see several lines of output as the service account is created. The last lines of output are similar to the following.
[ℹ]  1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "kube-system/fsx-csi-controller-sa", create serviceaccount "kube-system/fsx-csi-controller-sa" } }
[ℹ]  building iamserviceaccount stack "eksctl-my-csi-fsx-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
[ℹ]  deploying stack "eksctl-my-csi-fsx-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
[ℹ]  waiting for CloudFormation stack "eksctl-my-csi-fsx-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
[ℹ]  created serviceaccount "kube-system/fsx-csi-controller-sa"

Note the name of the AWS CloudFormation stack that was deployed. In the previous example output, the stack is named eksctl-my-csi-fsx-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa.
-
Deploy the driver with the following command. Replace release-X.XX with your desired branch. The master branch isn't supported because it may contain upcoming features incompatible with the currently released stable version of the driver. We recommend using the latest released version. For a list of branches, see aws-fsx-csi-driver Branches on GitHub.

Note
You can view the content being applied in aws-fsx-csi-driver/deploy/kubernetes/overlays/stable on GitHub.

kubectl apply -k "github.com/kubernetes-sigs/aws-fsx-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-X.XX"

An example output is as follows.

serviceaccount/fsx-csi-controller-sa created
serviceaccount/fsx-csi-node-sa created
clusterrole.rbac.authorization.k8s.io/fsx-csi-external-provisioner-role created
clusterrole.rbac.authorization.k8s.io/fsx-external-resizer-role created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-external-provisioner-binding created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-resizer-binding created
deployment.apps/fsx-csi-controller created
daemonset.apps/fsx-csi-node created
csidriver.storage.k8s.io/fsx.csi.aws.com created
-
Note the ARN for the role that was created. If you didn't note it earlier and don't have it available anymore in the AWS CLI output, you can do the following to see it in the AWS Management Console.

- Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.
- Ensure that the console is set to the AWS Region that you created your IAM role in, and then select Stacks.
- Select the stack named eksctl-my-csi-fsx-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa.
- Select the Outputs tab. The Role1 ARN is listed on the Outputs tab.
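If you prefer the AWS CLI to the console, the same Role1 output can be read from the CloudFormation stack. This is a sketch; the stack name below assumes the example cluster name used in this tutorial:

```shell
# Build the stack name that eksctl used for the service account's IAM role.
cluster_name=${cluster_name:-my-csi-fsx-cluster}
stack_name="eksctl-${cluster_name}-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"

# Query the Role1 output from CloudFormation (run against your account):
# aws cloudformation describe-stacks --stack-name "$stack_name" \
#   --query "Stacks[0].Outputs[?OutputKey=='Role1'].OutputValue" --output text
echo "$stack_name"
```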
-
Patch the driver deployment to add the service account that you created earlier with the following command. Replace the ARN with the ARN that you noted and 111122223333 with your account ID. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace arn:aws: with arn:aws-us-gov:.

kubectl annotate serviceaccount -n kube-system fsx-csi-controller-sa \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/AmazonEKSFSxLustreCSIDriverFullAccess \
  --overwrite=true

An example output is as follows.

serviceaccount/fsx-csi-controller-sa annotated
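To double-check the annotation, you can rebuild the expected role ARN and compare it with what's on the service account, then restart the controller so it picks up the new credentials. A sketch using the example account ID from this tutorial (the deployment name comes from the apply output in the earlier step):

```shell
# Example account ID from this tutorial; replace with your own.
account_id=${account_id:-111122223333}
role_arn="arn:aws:iam::${account_id}:role/AmazonEKSFSxLustreCSIDriverFullAccess"
echo "$role_arn"

# Read back the live annotation and restart the controller
# (run against your cluster):
# kubectl get serviceaccount fsx-csi-controller-sa -n kube-system \
#   -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
# kubectl rollout restart deployment fsx-csi-controller -n kube-system
```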
To deploy a storage class, persistent volume claim, and sample app
This procedure uses the FSx for Lustre Container Storage Interface (CSI) driver GitHub repository to consume a dynamically provisioned FSx for Lustre volume.
-
Note the security group for your cluster. You can see it in the AWS Management Console under the Networking section or by using the following AWS CLI command.
aws eks describe-cluster --name $cluster_name --query cluster.resourcesVpcConfig.clusterSecurityGroupId
-
Create a security group for your Amazon FSx file system according to the criteria shown in Amazon VPC Security Groups in the Amazon FSx for Lustre User Guide. For the VPC, select the VPC of your cluster as shown under the Networking section. For "the security groups associated with your Lustre clients", use your cluster security group. You can leave the outbound rules alone to allow All traffic.
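The console steps above can also be done with the AWS CLI. The following is a sketch under stated assumptions: the VPC and security group IDs are placeholders, and the ingress rules follow the Amazon VPC Security Groups criteria for FSx for Lustre (TCP port 988, and TCP ports 1018-1023 for some deployment types; confirm the exact requirements against that page):

```shell
# Placeholder IDs; substitute your cluster's VPC ID and the cluster
# security group you noted in the previous step.
vpc_id=${vpc_id:-vpc-0123456789abcdef0}
cluster_sg=${cluster_sg:-sg-068000ccf82dfba88}

# Create the file system security group and allow Lustre traffic from the
# cluster security group (run against your account):
# fsx_sg=$(aws ec2 create-security-group --group-name fsx-lustre-sg \
#   --description "FSx for Lustre file system" --vpc-id "$vpc_id" \
#   --query GroupId --output text)
# aws ec2 authorize-security-group-ingress --group-id "$fsx_sg" \
#   --protocol tcp --port 988 --source-group "$cluster_sg"
# aws ec2 authorize-security-group-ingress --group-id "$fsx_sg" \
#   --protocol tcp --port 1018-1023 --source-group "$cluster_sg"
echo "client security group: $cluster_sg"
```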
-
Download the storage class manifest with the following command.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
-
Edit the parameters section of the storageclass.yaml file. Replace every example value with your own values.

parameters:
  subnetId: subnet-0eabfaa81fb22bcaf
  securityGroupIds: sg-068000ccf82dfba88
  deploymentType: PERSISTENT_1
  automaticBackupRetentionDays: "1"
  dailyAutomaticBackupStartTime: "00:00"
  copyTagsToBackups: "true"
  perUnitStorageThroughput: "200"
  dataCompressionType: "NONE"
  weeklyMaintenanceStartTime: "7:09:00"
  fileSystemTypeVersion: "2.12"
- subnetId – The subnet ID that the Amazon FSx for Lustre file system should be created in. Amazon FSx for Lustre isn't supported in all Availability Zones. Open the Amazon FSx for Lustre console at https://console.aws.amazon.com/fsx/ to confirm that the subnet that you want to use is in a supported Availability Zone. The subnet can include your nodes, or can be a different subnet or VPC:
  - You can check for the node subnets in the AWS Management Console by selecting the node group under the Compute section.
  - If the subnet that you specify isn't the same subnet that you have nodes in, then your VPCs must be connected, and you must ensure that you have the necessary ports open in your security groups.
- securityGroupIds – The ID of the security group you created for the file system.
- deploymentType (optional) – The file system deployment type. Valid values are SCRATCH_1, SCRATCH_2, PERSISTENT_1, and PERSISTENT_2. For more information about deployment types, see Create your Amazon FSx for Lustre file system.
- Other parameters (optional) – For information about the other parameters, see Edit StorageClass on GitHub.
-
Create the storage class manifest.
kubectl apply -f storageclass.yaml
An example output is as follows.
storageclass.storage.k8s.io/fsx-sc created
-
Download the persistent volume claim manifest.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/claim.yaml
-
(Optional) Edit the claim.yaml file. Change 1200Gi to one of the following increment values, based on your storage requirements and the deploymentType that you selected in a previous step.

storage: 1200Gi

- SCRATCH_2 and PERSISTENT – 1.2 TiB, 2.4 TiB, or increments of 2.4 TiB over 2.4 TiB.
- SCRATCH_1 – 1.2 TiB, 2.4 TiB, 3.6 TiB, or increments of 3.6 TiB over 3.6 TiB.
-
Create the persistent volume claim.
kubectl apply -f claim.yaml
An example output is as follows.
persistentvolumeclaim/fsx-claim created
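Provisioning the file system takes several minutes, so before checking the claim you may want a small wait loop. This is a sketch (the claim name fsx-claim comes from the example manifest; the retry count and sleep interval are arbitrary choices):

```shell
# Poll a PersistentVolumeClaim until its phase is Bound, for up to
# ~15 minutes (60 tries x 15 seconds).
wait_for_pvc() {
  for _ in $(seq 1 60); do
    phase=$(kubectl get pvc "$1" -o jsonpath='{.status.phase}' 2>/dev/null)
    [ "$phase" = "Bound" ] && return 0
    sleep 15
  done
  return 1
}

# Run against your cluster:
# wait_for_pvc fsx-claim && echo "file system is ready"
```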
-
Confirm that the file system is provisioned.
kubectl describe pvc
An example output is as follows.
Name:          fsx-claim
Namespace:     default
StorageClass:  fsx-sc
Status:        Bound
[...]

Note
The Status may show as Pending for 5-10 minutes before changing to Bound. Don't continue with the next step until the Status is Bound. If the Status shows Pending for more than 10 minutes, use the warning messages in the Events as reference for addressing any problems.
-
Deploy the sample application.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
-
Verify that the sample application is running.
kubectl get pods
An example output is as follows.
NAME      READY   STATUS    RESTARTS   AGE
fsx-app   1/1     Running   0          8s
-
Verify that the file system is mounted correctly by the application.
kubectl exec -ti fsx-app -- df -h
An example output is as follows.
Filesystem                Size  Used  Avail Use% Mounted on
overlay                    80G  4.0G    77G   5% /
tmpfs                      64M     0    64M   0% /dev
tmpfs                     3.8G     0   3.8G   0% /sys/fs/cgroup
192.0.2.0@tcp:/abcdef01   1.1T  7.8M   1.1T   1% /data
/dev/nvme0n1p1             80G  4.0G    77G   5% /etc/hosts
shm                        64M     0    64M   0% /dev/shm
tmpfs                     6.9G   12K   6.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                     3.8G     0   3.8G   0% /proc/acpi
tmpfs                     3.8G     0   3.8G   0% /sys/firmware
-
Verify that data was written to the FSx for Lustre file system by the sample app.
kubectl exec -it fsx-app -- ls /data
An example output is as follows.
out.txt
This example output shows that the sample app successfully wrote the out.txt file to the file system.
Note
Before deleting the cluster, make sure to delete the FSx for Lustre file system. For more information, see Clean up resources in the FSx for Lustre User Guide.
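When you're done with the test cluster, the note above implies a cleanup order: delete the workload and the claim first (with the example storage class's default Delete reclaim policy, removing the claim deletes the FSx for Lustre file system), then delete the cluster. A sketch, assuming the example names from this tutorial:

```shell
# Names below come from the example manifests used in this tutorial.
cluster_name=${cluster_name:-my-csi-fsx-cluster}

# Run against your cluster, in this order:
# kubectl delete pod fsx-app
# kubectl delete pvc fsx-claim        # triggers file system deletion
# kubectl delete storageclass fsx-sc
# eksctl delete cluster --name "$cluster_name" --region "$region_code"
echo "cluster to delete: $cluster_name"
```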