Class: Aws::DatabaseMigrationService::Types::S3Settings

Inherits:
Struct
  • Object

Overview

Note:

When passing S3Settings as input to an Aws::Client method, you can use a vanilla Hash:

{
  service_access_role_arn: "String",
  external_table_definition: "String",
  csv_row_delimiter: "String",
  csv_delimiter: "String",
  bucket_folder: "String",
  bucket_name: "String",
  compression_type: "none", # accepts none, gzip
  encryption_mode: "sse-s3", # accepts sse-s3, sse-kms
  server_side_encryption_kms_key_id: "String",
  data_format: "csv", # accepts csv, parquet
  encoding_type: "plain", # accepts plain, plain-dictionary, rle-dictionary
  dict_page_size_limit: 1,
  row_group_length: 1,
  data_page_size: 1,
  parquet_version: "parquet-1-0", # accepts parquet-1-0, parquet-2-0
  enable_statistics: false,
  include_op_for_full_load: false,
  cdc_inserts_only: false,
  timestamp_column_name: "String",
  parquet_timestamp_in_millisecond: false,
}

Settings for exporting data to Amazon S3.
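
For example, here is a minimal sketch of passing these settings to Aws::DatabaseMigrationService::Client#create_endpoint. The region, endpoint identifier, role ARN, and bucket names below are placeholders for illustration only:

require 'aws-sdk'

dms = Aws::DatabaseMigrationService::Client.new(region: "us-east-1")

# Create an S3 target endpoint using a vanilla Hash for s3_settings.
resp = dms.create_endpoint(
  endpoint_identifier: "my-s3-target",  # placeholder identifier
  endpoint_type: "target",
  engine_name: "s3",
  s3_settings: {
    service_access_role_arn: "arn:aws:iam::123456789012:role/my-dms-s3-role", # placeholder
    bucket_name: "my-dms-bucket",       # placeholder
    bucket_folder: "dms-output",        # placeholder
    compression_type: "gzip",
  }
)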

Instance Attribute Details

#bucket_folder ⇒ String

An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter is not specified, then the path used is schema_name/table_name/.

Returns:

  • (String)

    An optional parameter to set a folder name in the S3 bucket.

#bucket_name ⇒ String

The name of the S3 bucket.

Returns:

  • (String)

    The name of the S3 bucket.

#cdc_inserts_only ⇒ Boolean

A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

AWS DMS supports this interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
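
As a sketch of this interaction, the following settings migrate only INSERTs while keeping the leading operation field in each .csv record. The sample records in the comment are illustrative:

# Write only INSERT operations, annotated with a leading "I" field.
s3_settings = {
  data_format: "csv",
  cdc_inserts_only: true,
  include_op_for_full_load: true,
}
# Resulting .csv records might look like:
#   I,101,"first inserted row"
#   I,102,"second inserted row"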

Returns:

  • (Boolean)

    A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files.

#compression_type ⇒ String

An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files; set to NONE (the default) or omit the parameter to leave the files uncompressed. Applies to both .csv and .parquet file formats.

Possible values:

  • none
  • gzip

Returns:

  • (String)

    An optional parameter to use GZIP to compress the target files.

#csv_delimiter ⇒ String

The delimiter used to separate columns in the source files. The default is a comma.

Returns:

  • (String)

    The delimiter used to separate columns in the source files.

#csv_row_delimiter ⇒ String

The delimiter used to separate rows in the source files. The default is a newline (\n).
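
For example, a sketch of settings that write tab-separated columns with CRLF row endings instead of the defaults:

s3_settings = {
  csv_delimiter: "\t",        # tab between columns instead of the default comma
  csv_row_delimiter: "\r\n",  # CRLF between rows instead of the default \n
}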

Returns:

  • (String)

    The delimiter used to separate rows in the source files.

#data_format ⇒ String

The format of the data that you want to use for output. You can choose one of the following:

  • csv: This is a row-based file format with comma-separated values (.csv).

  • parquet: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response. A configuration sketch follows this list.

    Possible values:

    • csv
    • parquet
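
A sketch of a Parquet output configuration that combines this setting with the related Parquet parameters described on this page. The values shown are the documented defaults:

# Parquet output with explicit (default) tuning values.
s3_settings = {
  data_format: "parquet",
  parquet_version: "parquet-1-0",
  encoding_type: "rle-dictionary",  # the default encoding
  row_group_length: 10_000,         # rows per row group
  data_page_size: 1_048_576,        # 1 MiB per data page
  dict_page_size_limit: 1_048_576,  # fall back to PLAIN above 1 MiB
  enable_statistics: true,
}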

Returns:

  • (String)

    The format of the data that you want to use for output.

#data_page_size ⇒ Integer

The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

Returns:

  • (Integer)

    The size of one data page in bytes.

#dict_page_size_limit ⇒ Integer

The maximum size of an encoded dictionary page of a column. If a dictionary page exceeds this size, the column is stored using the PLAIN encoding type instead. This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for the .parquet file format only.

Returns:

  • (Integer)

    The maximum size of an encoded dictionary page of a column.

#enable_statistics ⇒ Boolean

A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

Returns:

  • (Boolean)

    A value that enables statistics for Parquet pages and row groups.

#encoding_type ⇒ String

The type of encoding you are using:

  • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

  • PLAIN doesn't use encoding at all. Values are stored as they are.

  • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

    Possible values:

    • plain
    • plain-dictionary
    • rle-dictionary

Returns:

  • (String)

    The type of encoding you are using.

#encryption_mode ⇒ String

The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions (a policy sketch follows the list):

  • s3:CreateBucket

  • s3:ListBucket

  • s3:DeleteBucket

  • s3:GetBucketLocation

  • s3:GetObject

  • s3:PutObject

  • s3:DeleteObject

  • s3:GetObjectVersion

  • s3:GetBucketPolicy

  • s3:PutBucketPolicy

  • s3:DeleteBucketPolicy

    Possible values:

    • sse-s3
    • sse-kms
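
A minimal sketch of granting those actions with an inline IAM policy via the Ruby SDK. The role name, policy name, and the object-level "dms-*/*" resource ARN are assumptions for illustration, not prescribed by this API:

require 'aws-sdk'
require 'json'

iam = Aws::IAM::Client.new

# Inline policy allowing the S3 actions listed above on "dms-*" buckets.
# The object-level resource ARN ("dms-*/*") is an assumption.
policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Action" => %w[
      s3:CreateBucket s3:ListBucket s3:DeleteBucket s3:GetBucketLocation
      s3:GetObject s3:PutObject s3:DeleteObject s3:GetObjectVersion
      s3:GetBucketPolicy s3:PutBucketPolicy s3:DeleteBucketPolicy
    ],
    "Resource" => ["arn:aws:s3:::dms-*", "arn:aws:s3:::dms-*/*"]
  }]
}

iam.put_role_policy(
  role_name: "my-dms-s3-role",   # placeholder role
  policy_name: "dms-s3-access",  # placeholder name
  policy_document: JSON.generate(policy)
)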

Returns:

  • (String)

    The type of server-side encryption that you want to use for your data.

#external_table_definition ⇒ String

The external table definition.

Returns:

  • (String)

    The external table definition.

#include_op_for_full_load ⇒ Boolean

A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.

For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

This setting works together with the CdcInsertsOnly parameter for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

Returns:

  • (Boolean)

    A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

#parquet_timestamp_in_millisecond ⇒ Boolean

A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.

When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

AWS DMS writes TIMESTAMP column values to S3 files in .csv format with microsecond precision.

Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
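
For example, a sketch of Parquet output intended for querying with Athena or AWS Glue:

# Write TIMESTAMP columns with millisecond precision for Athena/Glue.
s3_settings = {
  data_format: "parquet",
  parquet_timestamp_in_millisecond: true,
}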

Returns:

  • (Boolean)

    A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

#parquet_version ⇒ String

The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

Possible values:

  • parquet-1-0
  • parquet-2-0

Returns:

  • (String)

    The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

#row_group_length ⇒ Integer

The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.

If you choose a value larger than the maximum, RowGroupLength is set to the maximum row group length in bytes (64 * 1024 * 1024).

Returns:

  • (Integer)

    The number of rows in a row group.

#server_side_encryption_kms_key_id ⇒ String

If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

Here is a CLI example:

aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
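
An equivalent sketch using the Ruby SDK; every identifier, ARN, and key ID below is a placeholder:

dms = Aws::DatabaseMigrationService::Client.new

dms.create_endpoint(
  endpoint_identifier: "my-s3-target",  # placeholder
  endpoint_type: "target",
  engine_name: "s3",
  s3_settings: {
    service_access_role_arn: "arn:aws:iam::123456789012:role/my-dms-role", # placeholder
    bucket_folder: "dms-output",        # placeholder
    bucket_name: "my-dms-bucket",       # placeholder
    encryption_mode: "sse-kms",
    server_side_encryption_kms_key_id: "1234abcd-12ab-34cd-56ef-1234567890ab", # placeholder key ID
  }
)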

Returns:

  • (String)

    If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID.

#service_access_role_arn ⇒ String

The Amazon Resource Name (ARN) used by the service access IAM role.

Returns:

  • (String)

    The Amazon Resource Name (ARN) used by the service access IAM role.

#timestamp_column_name ⇒ String

A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.

DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
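
A sketch of enabling the column; the column name and the sample record in the comment are illustrative:

s3_settings = {
  data_format: "csv",
  timestamp_column_name: "dms_ts",  # illustrative column name
}
# A migrated .csv record might then carry an added timestamp column:
#   I,101,"some row data","2016-01-15 10:30:00.123456"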

Returns:

  • (String)

    A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.