
Class: Aws::FSx::Types::CreateFileSystemLustreConfiguration

Inherits: Struct

Overview

Note:

When passing CreateFileSystemLustreConfiguration as input to an Aws::Client method, you can use a vanilla Hash:

{
  weekly_maintenance_start_time: "WeeklyTime",
  import_path: "ArchivePath",
  export_path: "ArchivePath",
  imported_file_chunk_size: 1,
  deployment_type: "SCRATCH_1", # accepts SCRATCH_1, SCRATCH_2, PERSISTENT_1
  per_unit_storage_throughput: 1,
}

The Lustre configuration for the file system being created.
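For example, a minimal sketch of passing such a hash to Aws::FSx::Client#create_file_system. The region, subnet ID, bucket name, and storage capacity below are placeholder assumptions, not values from this page:

require 'aws-sdk' # version 2 of the AWS SDK for Ruby

fsx = Aws::FSx::Client.new(region: 'us-east-1')

resp = fsx.create_file_system(
  file_system_type: 'LUSTRE',
  storage_capacity: 2400, # GiB; assumed example value
  subnet_ids: ['subnet-0123456789abcdef0'], # placeholder
  lustre_configuration: {
    weekly_maintenance_start_time: '1:05:00',
    import_path: 's3://example-import-bucket',
    export_path: 's3://example-import-bucket/fsx-export',
    imported_file_chunk_size: 1024,
    deployment_type: 'SCRATCH_2',
  }
)
resp.file_system.file_system_id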


Instance Attribute Details

#deployment_type ⇒ String

(Optional) Choose the SCRATCH_1 or SCRATCH_2 deployment type when you need temporary storage and shorter-term processing of data. The SCRATCH_2 deployment type provides in-transit encryption of data and higher burst throughput capacity than SCRATCH_1.

Choose the PERSISTENT_1 deployment type for longer-term storage and workloads, and for encryption of data in transit. To learn more about deployment types, see FSx for Lustre Deployment Options.

Encryption of data in transit is automatically enabled when you access a SCRATCH_2 or PERSISTENT_1 file system from Amazon EC2 instances that support this feature. (Default = SCRATCH_1)

Encryption of data in transit for the SCRATCH_2 and PERSISTENT_1 deployment types is supported when accessed from supported instance types in supported AWS Regions. To learn more, see Encrypting Data in Transit.

Returns:

  • (String)

    (Optional) Choose the SCRATCH_1 or SCRATCH_2 deployment type when you need temporary storage and shorter-term processing of data.
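Because PERSISTENT_1 also requires per_unit_storage_throughput (described below), a persistent configuration might look like this sketch:

lustre_configuration = {
  deployment_type: 'PERSISTENT_1',
  per_unit_storage_throughput: 50, # MB/s/TiB; required for PERSISTENT_1
}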

#export_path ⇒ String

(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z.

The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.

Returns:

  • (String)

    (Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported.
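As an illustration of the two behaviors described above (example-import-bucket and fsx-export are placeholders):

# Bucket-only export path: file system objects map 1:1 onto bucket
# objects, so input data in S3 is overwritten on export.
lustre_configuration = {
  import_path: 's3://example-import-bucket',
  export_path: 's3://example-import-bucket',
}

# Custom prefix: exported contents land under the prefix instead.
lustre_configuration = {
  import_path: 's3://example-import-bucket',
  export_path: 's3://example-import-bucket/fsx-export',
}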

#import_path ⇒ String

(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.

Returns:

  • (String)

    (Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system.
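For example, a sketch that loads only keys under an assumed input-data/ prefix:

# Only object keys beginning with input-data/ are loaded into the
# file system; the bucket name and prefix are placeholders.
lustre_configuration = {
  import_path: 's3://example-import-bucket/input-data',
}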

#imported_file_chunk_size ⇒ Integer

(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.

The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.

Returns:

  • (Integer)

    (Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk.
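As an illustrative sketch of the striping arithmetic (the 5,000 MiB file size is an assumed example):

# With the default 1,024 MiB chunk size, a 5,000 MiB imported file
# occupies ceil(5000 / 1024) = 5 chunks, so it is striped across up
# to 5 disks, capped by the number of disks in the file system.
lustre_configuration = {
  imported_file_chunk_size: 1024,
}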

#per_unit_storage_throughput ⇒ Integer

Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 120 MB/s of file system throughput. You pay for the amount of throughput that you provision.

Valid values are 50, 100, 200.

Returns:

  • (Integer)

    Required for the PERSISTENT_1 deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB.
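The arithmetic from the example above, as a sketch:

# File system throughput = storage capacity (TiB) * PerUnitStorageThroughput.
per_unit_storage_throughput = 50 # MB/s/TiB; valid values: 50, 100, 200
storage_capacity_tib = 2.4
storage_capacity_tib * per_unit_storage_throughput # => 120.0 MB/s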

#weekly_maintenance_start_time ⇒ String

The preferred time to perform weekly maintenance, in the UTC time zone.

Returns:

  • (String)

    The preferred time to perform weekly maintenance, in the UTC time zone.
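A sketch, assuming the d:HH:MM format that the FSx API uses for this field, where d is the day of the week (1 through 7):

# 05:00 UTC on day 1 of the week (format assumed to be d:HH:MM).
lustre_configuration = {
  weekly_maintenance_start_time: '1:05:00',
}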