LustreConfiguration
- class aws_cdk.aws_fsx.LustreConfiguration(*, deployment_type, auto_import_policy=None, automatic_backup_retention=None, copy_tags_to_backups=None, daily_automatic_backup_start_time=None, data_compression_type=None, export_path=None, imported_file_chunk_size_mib=None, import_path=None, per_unit_storage_throughput=None, weekly_maintenance_start_time=None)
Bases: object
The configuration for the Amazon FSx for Lustre file system.
- Parameters:
  - deployment_type (LustreDeploymentType) – The type of backing file system deployment used by FSx.
  - auto_import_policy (Optional[LustreAutoImportPolicy]) – Available with Scratch and Persistent_1 deployment types. When you create your file system, your existing S3 objects appear as file and directory listings. Use this property to choose how Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket. For the possible AutoImportPolicy values, see Automatically import updates from your S3 bucket. Note: this parameter is not supported for Lustre file systems using the Persistent_2 deployment type. Default: no import policy
  - automatic_backup_retention (Optional[Duration]) – The number of days to retain automatic backups. Setting this property to 0 disables automatic backups. You can retain automatic backups for a maximum of 90 days. Automatic backups are not supported on scratch file systems. Default: Duration.days(0)
  - copy_tags_to_backups (Optional[bool]) – Whether tags for the file system should be copied to backups. Default: false
  - daily_automatic_backup_start_time (Optional[DailyAutomaticBackupStartTime]) – Start time for the 30-minute daily automatic backup window, in Coordinated Universal Time (UTC). Default: no backup window
  - data_compression_type (Optional[LustreDataCompressionType]) – Sets the data compression configuration for the file system. For more information, see Lustre data compression in the Amazon FSx for Lustre User Guide. Default: no compression
  - export_path (Optional[str]) – The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in import_path. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects; this mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket. Default: s3://import-bucket/FSxLustre[creation-timestamp]
  - imported_file_chunk_size_mib (Union[int, float, None]) – For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. Allowed values are between 1 and 512,000. Default: 1024
  - import_path (Optional[str]) – The path to the Amazon S3 bucket (including the optional prefix) that you’re using as the data repository for your Amazon FSx for Lustre file system. Must be of the format “s3://{bucketName}/optional-prefix” and cannot exceed 900 characters. Default: no bucket is imported
  - per_unit_storage_throughput (Union[int, float, None]) – Required for the PERSISTENT_1 deployment type; describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. Valid values are 50, 100, and 200. Default: no default; conditionally required for the PERSISTENT_1 deployment type
  - weekly_maintenance_start_time (Optional[LustreMaintenanceTime]) – The preferred day and time to perform weekly maintenance. The first digit is the day of the week, starting at 1 for Monday; the following digits are the hours and minutes in the UTC time zone, on a 24-hour clock. For example, ‘2:20:30’ is Tuesdays at 20:30. Default: no preference
Example:
from aws_cdk import aws_s3 as s3

# vpc: ec2.Vpc
# bucket: s3.Bucket

lustre_configuration = {
    "deployment_type": fsx.LustreDeploymentType.SCRATCH_2,
    "export_path": bucket.s3_url_for_object(),
    "import_path": bucket.s3_url_for_object(),
    "auto_import_policy": fsx.LustreAutoImportPolicy.NEW_CHANGED_DELETED
}

fs = fsx.LustreFileSystem(self, "FsxLustreFileSystem",
    vpc=vpc,
    vpc_subnet=vpc.private_subnets[0],
    storage_capacity_gi_b=1200,
    lustre_configuration=lustre_configuration
)
Attributes
- auto_import_policy
Available with Scratch and Persistent_1 deployment types.
When you create your file system, your existing S3 objects appear as file and directory listings. Use this property to choose how Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket.
For the possible AutoImportPolicy values, see Automatically import updates from your S3 bucket. .. epigraph:
This parameter is not supported for Lustre file systems using the ``Persistent_2`` deployment type.
- Default:
no import policy
- automatic_backup_retention
The number of days to retain automatic backups.
Setting this property to 0 disables automatic backups. You can retain automatic backups for a maximum of 90 days.
Automatic backups are not supported on scratch file systems.
- Default:
Duration.days(0)
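The documented constraint (0 disables automatic backups; retention tops out at 90 days) can be sketched as a small, hypothetical validator; the function name and return convention are illustrative, not part of the CDK API:

```python
def validate_backup_retention(days: int) -> bool:
    """Hypothetical helper mirroring the documented constraint:
    0 disables automatic backups; the maximum retention is 90 days.
    Returns True when automatic backups would be enabled."""
    if not 0 <= days <= 90:
        raise ValueError("automatic_backup_retention must be between 0 and 90 days")
    return days > 0

print(validate_backup_retention(0))   # False: backups disabled
print(validate_backup_retention(30))  # True: backups enabled, retained 30 days
```

In the CDK itself, pass a Duration (e.g. Duration.days(30)) rather than a bare integer; the sketch only encodes the allowed range.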
- copy_tags_to_backups
A boolean flag indicating whether tags for the file system should be copied to backups.
- Default:
false
- daily_automatic_backup_start_time
Start time for 30-minute daily automatic backup window in Coordinated Universal Time (UTC).
- Default:
no backup window
- data_compression_type
Sets the data compression configuration for the file system.
For more information, see Lustre data compression in the Amazon FSx for Lustre User Guide.
- Default:
no compression
- deployment_type
The type of backing file system deployment used by FSx.
- export_path
The path in Amazon S3 where the root of your Amazon FSx file system is exported.
The path must use the same Amazon S3 bucket as specified in ImportPath. If you only specify a bucket name, such as s3://import-bucket, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix], Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.
- Default:
s3://import-bucket/FSxLustre[creation-timestamp]
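The default export path follows the pattern s3://import-bucket/FSxLustre[creation-timestamp]. As a rough illustration of how such a prefix is composed (the exact timestamp format FSx uses is an assumption here, and `default_export_path` is a hypothetical helper, not a CDK function):

```python
from datetime import datetime, timezone

def default_export_path(import_bucket_url: str, created_at: datetime) -> str:
    # Illustrative only: mimics the documented default pattern
    # s3://import-bucket/FSxLustre[creation-timestamp].
    # The timestamp format below is an assumption for the sketch.
    timestamp = created_at.strftime("%Y%m%dT%H%M%SZ")
    return f"{import_bucket_url}/FSxLustre{timestamp}"

print(default_export_path("s3://import-bucket",
                          datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)))
# s3://import-bucket/FSxLustre20240102T030405Z
```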
- import_path
The path to the Amazon S3 bucket (including the optional prefix) that you’re using as the data repository for your Amazon FSx for Lustre file system.
Must be of the format “s3://{bucketName}/optional-prefix” and cannot exceed 900 characters.
- Default:
no bucket is imported
- imported_file_chunk_size_mib
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk.
Allowed values are between 1 and 512,000.
- Default:
1024
- per_unit_storage_throughput
Required for the PERSISTENT_1 deployment type; describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB.
Valid values are 50, 100, and 200.
- Default:
no default, conditionally required for PERSISTENT_1 deployment type
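Because this value is expressed per tebibyte, the file system's total throughput scales with its storage capacity. A minimal sketch of that relationship, enforcing the documented valid values (the helper itself is hypothetical, not part of the CDK):

```python
def total_throughput_mbps(storage_capacity_gib: int, per_unit_mb_s_tib: int) -> float:
    # Per-unit throughput is given per 1 TiB (1024 GiB) of storage,
    # so total throughput = capacity in TiB * per-unit value.
    if per_unit_mb_s_tib not in (50, 100, 200):
        raise ValueError("Valid PERSISTENT_1 values are 50, 100, and 200 MB/s/TiB")
    return storage_capacity_gib / 1024 * per_unit_mb_s_tib

print(total_throughput_mbps(4800, 100))  # 468.75
```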
- weekly_maintenance_start_time
The preferred day and time to perform weekly maintenance.
The first digit is the day of the week, starting at 1 for Monday; the following digits are the hours and minutes in the UTC time zone, on a 24-hour clock. For example, ‘2:20:30’ is Tuesdays at 20:30.
- Default:
no preference
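The day/hour/minute encoding described above can be sketched as a small formatter; the function is hypothetical (in the CDK you would construct a LustreMaintenanceTime instead), but the resulting string matches the documented example:

```python
def weekly_maintenance_string(day: int, hour: int, minute: int) -> str:
    # Hypothetical helper building the documented "d:HH:MM" string:
    # day runs 1 (Monday) through 7 (Sunday); hour/minute are UTC, 24-hour clock.
    if not 1 <= day <= 7:
        raise ValueError("day must be 1 (Monday) through 7 (Sunday)")
    if not (0 <= hour <= 23 and 0 <= minute <= 59):
        raise ValueError("hour must be 0-23 and minute 0-59")
    return f"{day}:{hour:02d}:{minute:02d}"

print(weekly_maintenance_string(2, 20, 30))  # 2:20:30 -> Tuesdays at 20:30 UTC
```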