Persistent storage options
What is an in-tree vs. out-of-tree volume plugin?
Before the introduction of the Container Storage Interface (CSI), all volume plugins were in-tree, meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extended the core Kubernetes API. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.
Out-of-tree volume plugins are developed independently of the Kubernetes code base and are deployed (installed) on Kubernetes clusters as extensions. This gives vendors the ability to update drivers out-of-band, i.e. separately from the Kubernetes release cycle. This is largely possible because the CSI gives vendors a standard way of interfacing with Kubernetes.
You can learn more about Amazon Elastic Kubernetes Service (EKS) storage classes and CSI drivers at https://docs.aws.amazon.com/eks/latest/userguide/storage.html
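As a concrete illustration of the out-of-tree model, the snippet below installs the Amazon EBS CSI driver as an EKS managed add-on and then lists the CSIDriver objects registered with the cluster. This is a minimal sketch: the cluster name, account ID, and IAM role ARN are placeholders, and your cluster may use a different installation method (Helm or raw manifests).

aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole

# Drivers installed out-of-tree register themselves as CSIDriver API objects
kubectl get csidrivers

Because the driver ships as an add-on rather than as part of the kubelet, it can be upgraded independently of the cluster's Kubernetes version.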
In-tree Volume Plugin for Windows
Kubernetes volumes enable applications with data persistence requirements to be deployed on Kubernetes. The management of persistent volumes consists of provisioning/de-provisioning/resizing of volumes, attaching/detaching a volume to/from a Kubernetes node, and mounting/unmounting a volume to/from individual containers in a pod. The code for implementing these volume management actions for a specific storage back-end or protocol is shipped in the form of a Kubernetes volume plugin (in-tree volume plugins). On Amazon Elastic Kubernetes Service (EKS), the following class of Kubernetes volume plugin is supported on Windows:
In-tree Volume Plugin:
awsElasticBlockStore
In order to use the in-tree volume plugin on Windows nodes, it is necessary to create an additional StorageClass that uses NTFS as the fsType. On EKS, the default StorageClass uses ext4 as the default fsType.
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.
You can inspect the default StorageClass by running the following command:
kubectl describe storageclass gp2
Output:
Name:            gp2
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"},storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
To create the new StorageClass to support NTFS, use the following manifest:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-windows
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ntfs
volumeBindingMode: WaitForFirstConsumer
Create the StorageClass by running the following command:
kubectl apply -f NTFSStorageClass.yaml
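Since gp2-windows is not annotated as the default StorageClass, workloads have to reference it explicitly through storageClassName, as the PVC in the next step does. You can confirm that both classes now exist by listing them:

kubectl get storageclass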
The next step is to create a Persistent Volume Claim (PVC).
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using a StorageClass. It is a resource in the cluster just like a node is a cluster resource. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany).
Users need PersistentVolumes with different attributes, such as performance, for different use cases. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
In the example below, the PVC has been created within the namespace windows.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-windows-pv-claim
  namespace: windows
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-windows
  resources:
    requests:
      storage: 1Gi
Create the PVC by running the following command:
kubectl apply -f persistent-volume-claim.yaml
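The manifest assumes the windows namespace already exists; if it does not, create it first with kubectl create namespace windows. Because gp2-windows uses volumeBindingMode: WaitForFirstConsumer, the claim stays Pending until a Pod that consumes it is scheduled, which you can watch with:

kubectl get pvc ebs-windows-pv-claim -n windows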
The following manifest creates a Windows Pod, sets up the volumeMount as C:\Data, and uses the PVC as the attached storage on C:\Data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-ltsc2019
  namespace: windows
spec:
  selector:
    matchLabels:
      app: windows-server-ltsc2019
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-ltsc2019
        tier: backend
        track: stable
    spec:
      containers:
        - name: windows-server-ltsc2019
          image: mcr.microsoft.com/windows/servercore:ltsc2019
          ports:
            - name: http
              containerPort: 80
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "C:\\data"
              name: test-volume
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: ebs-windows-pv-claim
      nodeSelector:
        kubernetes.io/os: windows
        node.kubernetes.io/windows-build: '10.0.17763'
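Save the manifest to a file and deploy it; the file name windows-server-ltsc2019.yaml below is only an example:

kubectl apply -f windows-server-ltsc2019.yaml
kubectl get pods -n windows -w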
Test the results by accessing the Windows pod via PowerShell:
kubectl exec -it podname -n windows -- powershell
Inside the Windows Pod, run: ls
Output:
PS C:\> ls

    Directory: C:\

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----         3/8/2021   1:54 PM                data
d-----         3/8/2021   3:37 PM                inetpub
d-r---         1/9/2021   7:26 AM                Program Files
d-----         1/9/2021   7:18 AM                Program Files (x86)
d-r---         1/9/2021   7:28 AM                Users
d-----         3/8/2021   3:36 PM                var
d-----         3/8/2021   3:36 PM                Windows
-a----        12/7/2019   4:20 AM           5510 License.txt
The data directory is provided by the EBS volume.
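You can also confirm on the Kubernetes side that a PersistentVolume was dynamically provisioned and bound to the claim:

kubectl get pv
kubectl describe pvc ebs-windows-pv-claim -n windows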
Out-of-tree for Windows
Code associated with CSI plugins ships as out-of-tree scripts and binaries that are typically distributed as container images and deployed using standard Kubernetes constructs, such as DaemonSets and StatefulSets. CSI plugins handle a wide range of volume management actions in Kubernetes. CSI plugins typically consist of node plugins (which run on each node as a DaemonSet) and controller plugins.
CSI node plugins (especially those associated with persistent volumes exposed as either block devices or over a shared file system) need to perform various privileged operations, such as scanning disk devices and mounting file systems. These operations differ for each host operating system. For Linux worker nodes, containerized CSI node plugins are typically deployed as privileged containers. For Windows worker nodes, privileged operations for containerized CSI node plugins are supported using csi-proxy, a stand-alone binary that runs on each Windows node and performs privileged storage operations on behalf of the containerized plugins.
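On a cluster that already has a CSI driver installed, you can see this node/controller split by listing the registered drivers and their pods. The pod names and namespaces vary by driver, so the grep pattern below is only illustrative:

# CSIDriver objects registered with the API server
kubectl get csidrivers

# Node (DaemonSet) and controller pods; names depend on the driver installed
kubectl get pods -n kube-system -o wide | grep -i csi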
The Amazon EKS Optimized Windows AMI includes csi-proxy starting from April 2022. Customers can therefore use the SMB CSI Driver to mount SMB file shares, such as Amazon FSx for Windows File Server, on Windows nodes.
The following blog provides implementation details on how to use the SMB CSI Driver to provide persistent storage for Windows Pods.
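As a reference point, the SMB CSI Driver can be installed with its upstream Helm chart. The commands below are a minimal sketch based on the csi-driver-smb project's documented Helm install; verify the chart repository, version, and Windows-related values against the upstream documentation before using them:

helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm repo update
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system

# The driver should now appear in the list of registered CSI drivers
kubectl get csidrivers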
Amazon FSx for Windows File Server
An option is to use Amazon FSx for Windows File Server through an SMB feature called SMB Global Mapping, which makes it possible to mount an SMB share on the Windows host and then pass directories on that share into containers as hostPath volumes. The mapping is handled on the host, so the container does not need to be configured with the server, share, or credentials.
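The global mapping itself is created on the Windows node (for example from the node's bootstrap script), not in the Pod spec. Below is a minimal PowerShell sketch using the built-in New-SmbGlobalMapping cmdlet; the FSx DNS name, share name, drive letter, directory, and credentials are placeholders and must be replaced with your own values:

# Run on the Windows worker node, e.g. as part of node bootstrap
$user  = "DOMAIN\fsx-user"                                   # placeholder account with access to the share
$pass  = ConvertTo-SecureString "placeholder-password" -AsPlainText -Force
$creds = New-Object System.Management.Automation.PSCredential($user, $pass)

# Map the FSx for Windows File Server SMB share to G: for all users on the host
New-SmbGlobalMapping -RemotePath "\\fs-0123456789abcdef0.example.com\share" -Credential $creds -LocalPath "G:" -Persistent $true

# Create the directory referenced by the Pod's hostPath volume
New-Item -ItemType Directory -Path "G:\Directory\app-state" -Force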
In the example below, the path G:\Directory\app-state is an SMB share on the Windows Node.
apiVersion: v1
kind: Pod
metadata:
  name: test-fsx
spec:
  containers:
    - name: test-fsx
      image: mcr.microsoft.com/windows/servercore:ltsc2019
      command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
      volumeMounts:
        - mountPath: C:\dotnetapp\app-state
          name: test-mount
  volumes:
    - name: test-mount
      hostPath:
        path: G:\Directory\app-state
        type: Directory
  nodeSelector:
    kubernetes.io/os: windows
    kubernetes.io/arch: amd64
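Create the Pod and check that the SMB-backed directory is visible inside it; the file name fsx-pod.yaml is only an example:

kubectl apply -f fsx-pod.yaml
kubectl exec -it test-fsx -- powershell -command "ls C:\dotnetapp\app-state"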
The following blog provides implementation details on how to set up Amazon FSx for Windows File Server as persistent storage for Windows Pods.