Managing FSx for OpenZFS volumes

An FSx for OpenZFS file system can contain one or more volumes, which are isolated data containers for files and directories. Every FSx for OpenZFS file system has one (and only one) root volume, which is created at file system creation time. All other volumes created on a file system are children of the root volume.

After the file system is created, you can create volumes as needed. Each customer-created volume is a child of another volume. This means that all volumes are children or descendants of the root volume. For example, you could create 10 volumes with each one being a child of the root volume (that is, all 10 volumes are siblings) or you could create a hierarchy of 10 volumes in which each volume is a child of the previous volume.
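
As an illustrative sketch (not the only way to do this), the following uses the AWS SDK for Python (Boto3) to create a child volume under a file system's root volume. The Region, volume name, and parent volume ID are placeholders; you can find the root volume ID in the console or with the DescribeVolumes API.

import boto3

# Placeholder Region and IDs; substitute your own values.
fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_volume(
    VolumeType="OPENZFS",
    Name="projects",
    OpenZFSConfiguration={
        # The new volume becomes a child of this volume (here, the root volume).
        "ParentVolumeId": "fsvol-0123456789abcdef0",
    },
)
print(response["Volume"]["VolumeId"])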

You use volumes to logically separate an individual file system into multiple namespaces. This allows you to independently manage the storage capacity, compression, NFS exports, record size, and user and group quotas at the volume level.

You access volumes from Linux, Windows, or macOS clients over the Network File System (NFS) protocol (v3, v4.0, v4.1, or v4.2). FSx for OpenZFS presents data to your users and applications as a local directory or drive.
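
For example, a Linux client could mount a volume with a standard NFS mount command. The following minimal sketch wraps that command in Python for consistency with the other examples; the file system DNS name, volume path, and mount point are placeholders.

import subprocess

# Placeholder file system DNS name and volume path; copy the actual values
# from the FSx console or the DescribeFileSystems/DescribeVolumes APIs.
remote = "fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx/projects"
mount_point = "/mnt/projects"

# Mount over NFS v4.1; nfsvers=3, 4.0, or 4.2 would also work.
subprocess.run(
    ["sudo", "mount", "-t", "nfs", "-o", "nfsvers=4.1", remote, mount_point],
    check=True,
)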

Volume properties

When you create a volume, you can set the following configuration properties to customize and control the storage aspects of the volume. The sketches after this list show how several of these properties might be set programmatically.

  • Volume name provides a name for the volume. Make sure that the specified name does not conflict with an existing file or directory on the parent volume.

  • Data compression type reduces the storage capacity that your data consumes and can also help increase your effective throughput. You can choose Zstandard, LZ4, or No compression. Zstandard compression provides a higher level of data compression and higher read throughput performance than LZ4 compression. LZ4 compression provides a lower level of compression and higher write throughput performance than Zstandard compression. For more information about the storage and performance benefits of the volume data compression options, see Data compression.

  • Storage capacity quota sets a volume quota, which is the maximum storage size for the volume. A volume quota limits the amount of storage space that the volume can consume to the configured amount, but does not guarantee the space will be available. To guarantee quota space, you must also set Storage capacity reservation.

    By setting quotas without setting a Storage capacity reservation, you can create space-efficient thin-provisioned volumes, where capacity is allocated only as storage is consumed. With thin-provisioned volumes, you can assign quotas that are collectively larger than the existing capacity of the file system or the quota of a parent volume. Note that if your file system is nearing capacity or a parent volume is nearing its quota, your users or applications may not be able to write to a child volume even though the volume has not reached its quota limit.

  • Storage capacity reservation guarantees that a specified amount of storage space is always available for the volume. The reservation reserves a configured amount of storage space from the parent volume. Only the volume with the reservation can use that storage space, regardless of the volume quotas that other volumes may have. Note that unlike a volume quota, a reservation can't exceed the amount of storage space that exists in the volume's immediate parent. Set a reservation if an application requires a guaranteed amount of storage space and will fail without it.

  • Record size sets the suggested block size for a volume in a ZFS dataset. Choose whether to use the default record size of 128 KiB, or to set a custom record size for the volume. We recommend using the default setting for the majority of use cases. For more information about the record size setting, see ZFS record size. Generally, workloads that write in fixed small or large record sizes may benefit from setting a custom record size, like database workloads (small record size) or media streaming workloads (large record size). See the OpenZFS documentation for more information about Dataset record size and ZFS datasets.

  • NFS exports use NFS-level export policies to define how the file system should be exported over the NFS protocol. The NFS exports setting defines which clients can access the volume and what permissions they have. For more information, see NFS exports.

  • User and group quotas configure individual user or group quotas for volumes, which limit the amount of storage space that a specific user or group can consume on the volume. To determine quota usage, add up the total size of files owned by the user or group specified in the quota. Only data in the volume on which the quota is applied counts toward the user or group's volume quota utilization. Files and directories that exist only in snapshots or child volumes do not count toward quota usage.

  • Source snapshot ID specifies a snapshot from which to create a volume. Then use Source snapshot copy strategy to specify the type of volume to create:

    • Clone creates a clone volume. A clone volume is a writable copy that is initialized with the same data as the snapshot from which it was created. Because clone volumes reference the data from the snapshot, clone volumes are created almost instantly, and initially consume no additional capacity. They only consume the capacity required for the incremental changes made to the source snapshot, providing an easy way to support multiple users or applications in parallel from a shared dataset. However, a clone volume maintains a dependency on its source snapshot, so you cannot delete this source snapshot while the clone volume is in use.

    • Full copy creates a full-copy volume. A full-copy volume is a writable copy that is initialized with the same data as the snapshot from which it was created. Unlike a clone volume, it does not maintain any dependency on its source snapshot. Because a full-copy volume requires copying all of the source snapshot data to a new volume, creation time will depend on the size of the source snapshot. While this data is being copied, your full-copy volume will be read only. Once a full-copy volume is created, it is identical to a standard FSx for OpenZFS volume. Files in the source snapshot will maintain their original record size regardless of the record size of the destination volume. Files will be compressed according to the compression property on the destination volume.

    For more information on snapshots, see Working with FSx for OpenZFS snapshots.
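
The following Boto3 sketch shows how several of the properties above might be set in a single volume-creation call. All IDs, sizes, and quota values are illustrative placeholders; omitting StorageCapacityReservationGiB would instead create a thin-provisioned volume.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_volume(
    VolumeType="OPENZFS",
    Name="analytics",
    OpenZFSConfiguration={
        "ParentVolumeId": "fsvol-0123456789abcdef0",  # placeholder parent volume ID
        "DataCompressionType": "ZSTD",                # Zstandard compression
        "RecordSizeKiB": 128,                         # the default record size
        "StorageCapacityQuotaGiB": 500,               # maximum size of the volume
        "StorageCapacityReservationGiB": 100,         # space guaranteed to the volume
        "NfsExports": [
            {
                "ClientConfigurations": [
                    {"Clients": "*", "Options": ["rw", "crossmnt"]},
                ]
            }
        ],
        "UserAndGroupQuotas": [
            {"Type": "USER", "Id": 1001, "StorageCapacityQuotaGiB": 50},
            {"Type": "GROUP", "Id": 2001, "StorageCapacityQuotaGiB": 200},
        ],
    },
)
print(response["Volume"]["VolumeId"])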
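
Similarly, a volume might be created from a snapshot by adding an origin snapshot to the configuration, as in this sketch. The parent volume ID and snapshot ARN are placeholders; changing the copy strategy to FULL_COPY would create a full-copy volume instead of a clone.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Create a writable clone that initially shares its data with the source snapshot.
response = fsx.create_volume(
    VolumeType="OPENZFS",
    Name="analytics-clone",
    OpenZFSConfiguration={
        "ParentVolumeId": "fsvol-0123456789abcdef0",  # placeholder parent volume ID
        "OriginSnapshot": {
            # Placeholder; use the source snapshot's ResourceARN from DescribeSnapshots.
            "SnapshotARN": "arn:aws:fsx:us-east-1:111122223333:snapshot/fsvolsnap-0123456789abcdef0",
            "CopyStrategy": "CLONE",  # or "FULL_COPY" for an independent copy
        },
    },
)
print(response["Volume"]["VolumeId"])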

NFS exports

NFS exports are NFS-level export policies that configure which clients can access the volume and the options that are available. Each volume has its own NFS exports setting, so a client may be able to mount one volume on the file system but not a different volume.

When creating a volume from the console, you provide the NFS exports information in an array of client configurations, each of which has Client addresses and NFS options fields.

The Client addresses field specifies which hosts can mount over the NFS protocol and contains one of these settings:

  • * is a wildcard that means any client that can route to the file server can mount it.

  • The IP address of a client's computer (such as 10.0.0.1), which means that a client from that specific IP address can mount the file system.

  • A CIDR block range (such as 192.0.2.0/24), which means that any client from that address range can mount the file system.

Note

If an IP address is permitted to mount a parent volume, it is also automatically permitted to mount any of the child volumes.
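
As a small sketch, here is how an array of client configurations combining these address forms might look in an API request (the NfsExports portion of an OpenZFS volume configuration in a CreateVolume or UpdateVolume call); the addresses are examples only.

# One NFS export with three client configurations: a wildcard, a single
# client IP address, and a CIDR block.
nfs_exports = [
    {
        "ClientConfigurations": [
            {"Clients": "*", "Options": ["ro"]},             # any reachable client, read-only
            {"Clients": "10.0.0.1", "Options": ["rw"]},      # one specific client, read-write
            {"Clients": "192.0.2.0/24", "Options": ["rw"]},  # an address range, read-write
        ]
    }
]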

The NFS options field lists a set of exports options available on the volume. Following are descriptions of the most common NFS options. For a more comprehensive list of exports options, see the exports(5) - Linux man page on the die.net web site.

  • rw allows both read and write requests on this NFS volume from the specified Client addresses.

  • ro allows only read requests on this NFS volume. The specified Client addresses can't write to the volume.

  • crossmnt allows clients to inherit access to any child volumes within this volume (if configured along with the no_sub_tree_check option, which is included by default). This option is required to provide file-level access to your snapshots in the .zfs/snapshot directory of each volume.

  • all_squash maps all User IDs (UIDs) and group IDs (GIDs) to the anonymous user.

  • root_squash maps requests from UID/GID 0 to the anonymous UID/GID. It prevents the remote root user from having superuser (root) privileges on remote NFS-mounted volumes. root_squash is the default unless overridden by all_squash or no_root_squash.

  • no_root_squash turns off root squashing.

  • anonuid and anongid explicitly set the UID and GID of the anonymous account. Valid values are 0 - 2147483647, inclusive.

  • sync replies to client requests only after the changes have been committed to stable storage (that is, disk drives). sync is the default unless overridden by async.

  • async replies to client requests (such as write requests) after the changes have been committed to memory, but before any changes made by that request have been committed to stable storage (that is, disk drives). This setting can improve performance for latency-sensitive or IOPS-intensive workloads. For more information, see Amazon FSx for OpenZFS performance.

    Warning

    Use of the async option can cause data to be lost or corrupted if a write request is acknowledged but the server crashes before the write request is fully written to disk.

When you create a volume, the default for Client addresses is an asterisk (*) and the default for NFS options is rw,crossmnt.
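
Putting these options together, the export policy of an existing volume might be replaced with a Boto3 UpdateVolume call like the following sketch; the volume ID and client addresses are placeholders, and the option lists mirror the options described above.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Replace the export policy on an existing volume (placeholder volume ID).
fsx.update_volume(
    VolumeId="fsvol-0123456789abcdef0",
    OpenZFSConfiguration={
        "NfsExports": [
            {
                "ClientConfigurations": [
                    # Read-only access for a subnet; root_squash applies by default.
                    {"Clients": "192.0.2.0/24", "Options": ["ro", "crossmnt"]},
                    # Read-write, asynchronous access for one trusted host.
                    {
                        "Clients": "10.0.0.1",
                        "Options": ["rw", "crossmnt", "async", "no_root_squash"],
                    },
                ]
            }
        ]
    },
)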