@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class S3Settings extends Object implements Serializable, Cloneable, StructuredPojo
Settings for exporting data to Amazon S3.
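Before the member listing, a short usage sketch may help orient readers: S3Settings follows the SDK's fluent convention, where each with* method returns the same instance so calls can be chained, and the finished object is attached to an endpoint request. The role ARN, bucket, and folder below are placeholder values, not defaults.

```java
import com.amazonaws.services.databasemigrationservice.model.CreateEndpointRequest;
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

public class S3SettingsUsage {
    public static void main(String[] args) {
        // Chain the fluent with* methods; each returns this same S3Settings instance.
        S3Settings settings = new S3Settings()
                .withServiceAccessRoleArn("arn:aws:iam::123456789012:role/dms-s3-role") // placeholder
                .withBucketName("my-dms-bucket")                                        // placeholder
                .withBucketFolder("migration-output")                                   // optional
                .withCompressionType("GZIP");

        // Attach the settings to an S3 target endpoint definition.
        CreateEndpointRequest request = new CreateEndpointRequest()
                .withEndpointIdentifier("s3-target")
                .withEndpointType("target")
                .withEngineName("s3")
                .withS3Settings(settings);
        System.out.println(request);
    }
}
```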
| Constructor and Description |
|---|
| S3Settings() |
| Modifier and Type | Method and Description |
|---|---|
| S3Settings | clone() |
| boolean | equals(Object obj) |
| Boolean | getAddColumnName(): An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. |
| Boolean | getAddTrailingPaddingCharacter(): Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. |
| String | getBucketFolder(): An optional parameter to set a folder name in the S3 bucket. |
| String | getBucketName(): The name of the S3 bucket. |
| String | getCannedAclForObjects(): A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. |
| Boolean | getCdcInsertsAndUpdates(): A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. |
| Boolean | getCdcInsertsOnly(): A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. |
| Integer | getCdcMaxBatchInterval(): Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. |
| Integer | getCdcMinFileSize(): Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. |
| String | getCdcPath(): Specifies the folder path of CDC files. |
| String | getCompressionType(): An optional parameter to use GZIP to compress the target files. |
| String | getCsvDelimiter(): The delimiter used to separate columns in the .csv file for both source and target. |
| String | getCsvNoSupValue(): This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. |
| String | getCsvNullValue(): An optional parameter that specifies how DMS treats null values. |
| String | getCsvRowDelimiter(): The delimiter used to separate rows in the .csv file for both source and target. |
| String | getDataFormat(): The format of the data that you want to use for output. |
| Integer | getDataPageSize(): The size of one data page in bytes. |
| String | getDatePartitionDelimiter(): Specifies a date separating delimiter to use during folder partitioning. |
| Boolean | getDatePartitionEnabled(): When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. |
| String | getDatePartitionSequence(): Identifies the sequence of the date format to use during folder partitioning. |
| String | getDatePartitionTimezone(): When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. |
| Integer | getDictPageSizeLimit(): The maximum size of an encoded dictionary page of a column. |
| Boolean | getEnableStatistics(): A value that enables statistics for Parquet pages and row groups. |
| String | getEncodingType(): The type of encoding you are using. |
| String | getEncryptionMode(): The type of server-side encryption that you want to use for your data. |
| String | getExpectedBucketOwner(): To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. |
| String | getExternalTableDefinition(): Specifies how tables are defined in the S3 source files only. |
| Boolean | getGlueCatalogGeneration(): When true, allows Glue to catalog your S3 bucket. |
| Integer | getIgnoreHeaderRows(): When this value is set to 1, DMS ignores the first row header in a .csv file. |
| Boolean | getIncludeOpForFullLoad(): A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database. |
| Integer | getMaxFileSize(): A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. |
| Boolean | getParquetTimestampInMillisecond(): A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. |
| String | getParquetVersion(): The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. |
| Boolean | getPreserveTransactions(): If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. |
| Boolean | getRfc4180(): For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. |
| Integer | getRowGroupLength(): The number of rows in a row group. |
| String | getServerSideEncryptionKmsKeyId(): If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. |
| String | getServiceAccessRoleArn(): The Amazon Resource Name (ARN) used by the service to access the IAM role. |
| String | getTimestampColumnName(): A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. |
| Boolean | getUseCsvNoSupValue(): This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. |
| Boolean | getUseTaskStartTimeForFullLoadTimestamp(): When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. |
| int | hashCode() |
| Boolean | isAddColumnName(): An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. |
| Boolean | isAddTrailingPaddingCharacter(): Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. |
| Boolean | isCdcInsertsAndUpdates(): A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. |
| Boolean | isCdcInsertsOnly(): A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. |
| Boolean | isDatePartitionEnabled(): When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. |
| Boolean | isEnableStatistics(): A value that enables statistics for Parquet pages and row groups. |
| Boolean | isGlueCatalogGeneration(): When true, allows Glue to catalog your S3 bucket. |
| Boolean | isIncludeOpForFullLoad(): A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database. |
| Boolean | isParquetTimestampInMillisecond(): A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. |
| Boolean | isPreserveTransactions(): If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. |
| Boolean | isRfc4180(): For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. |
| Boolean | isUseCsvNoSupValue(): This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. |
| Boolean | isUseTaskStartTimeForFullLoadTimestamp(): When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. |
| void | marshall(ProtocolMarshaller protocolMarshaller): Marshalls this structured data using the given ProtocolMarshaller. |
| void | setAddColumnName(Boolean addColumnName): An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. |
| void | setAddTrailingPaddingCharacter(Boolean addTrailingPaddingCharacter): Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. |
| void | setBucketFolder(String bucketFolder): An optional parameter to set a folder name in the S3 bucket. |
| void | setBucketName(String bucketName): The name of the S3 bucket. |
| void | setCannedAclForObjects(CannedAclForObjectsValue cannedAclForObjects): A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. |
| void | setCannedAclForObjects(String cannedAclForObjects): A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. |
| void | setCdcInsertsAndUpdates(Boolean cdcInsertsAndUpdates): A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. |
| void | setCdcInsertsOnly(Boolean cdcInsertsOnly): A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. |
| void | setCdcMaxBatchInterval(Integer cdcMaxBatchInterval): Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. |
| void | setCdcMinFileSize(Integer cdcMinFileSize): Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. |
| void | setCdcPath(String cdcPath): Specifies the folder path of CDC files. |
| void | setCompressionType(CompressionTypeValue compressionType): An optional parameter to use GZIP to compress the target files. |
| void | setCompressionType(String compressionType): An optional parameter to use GZIP to compress the target files. |
| void | setCsvDelimiter(String csvDelimiter): The delimiter used to separate columns in the .csv file for both source and target. |
| void | setCsvNoSupValue(String csvNoSupValue): This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. |
| void | setCsvNullValue(String csvNullValue): An optional parameter that specifies how DMS treats null values. |
| void | setCsvRowDelimiter(String csvRowDelimiter): The delimiter used to separate rows in the .csv file for both source and target. |
| void | setDataFormat(DataFormatValue dataFormat): The format of the data that you want to use for output. |
| void | setDataFormat(String dataFormat): The format of the data that you want to use for output. |
| void | setDataPageSize(Integer dataPageSize): The size of one data page in bytes. |
| void | setDatePartitionDelimiter(DatePartitionDelimiterValue datePartitionDelimiter): Specifies a date separating delimiter to use during folder partitioning. |
| void | setDatePartitionDelimiter(String datePartitionDelimiter): Specifies a date separating delimiter to use during folder partitioning. |
| void | setDatePartitionEnabled(Boolean datePartitionEnabled): When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. |
| void | setDatePartitionSequence(DatePartitionSequenceValue datePartitionSequence): Identifies the sequence of the date format to use during folder partitioning. |
| void | setDatePartitionSequence(String datePartitionSequence): Identifies the sequence of the date format to use during folder partitioning. |
| void | setDatePartitionTimezone(String datePartitionTimezone): When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. |
| void | setDictPageSizeLimit(Integer dictPageSizeLimit): The maximum size of an encoded dictionary page of a column. |
| void | setEnableStatistics(Boolean enableStatistics): A value that enables statistics for Parquet pages and row groups. |
| void | setEncodingType(EncodingTypeValue encodingType): The type of encoding you are using. |
| void | setEncodingType(String encodingType): The type of encoding you are using. |
| void | setEncryptionMode(EncryptionModeValue encryptionMode): The type of server-side encryption that you want to use for your data. |
| void | setEncryptionMode(String encryptionMode): The type of server-side encryption that you want to use for your data. |
| void | setExpectedBucketOwner(String expectedBucketOwner): To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. |
| void | setExternalTableDefinition(String externalTableDefinition): Specifies how tables are defined in the S3 source files only. |
| void | setGlueCatalogGeneration(Boolean glueCatalogGeneration): When true, allows Glue to catalog your S3 bucket. |
| void | setIgnoreHeaderRows(Integer ignoreHeaderRows): When this value is set to 1, DMS ignores the first row header in a .csv file. |
| void | setIncludeOpForFullLoad(Boolean includeOpForFullLoad): A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database. |
| void | setMaxFileSize(Integer maxFileSize): A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. |
| void | setParquetTimestampInMillisecond(Boolean parquetTimestampInMillisecond): A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. |
| void | setParquetVersion(ParquetVersionValue parquetVersion): The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. |
| void | setParquetVersion(String parquetVersion): The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. |
| void | setPreserveTransactions(Boolean preserveTransactions): If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. |
| void | setRfc4180(Boolean rfc4180): For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. |
| void | setRowGroupLength(Integer rowGroupLength): The number of rows in a row group. |
| void | setServerSideEncryptionKmsKeyId(String serverSideEncryptionKmsKeyId): If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. |
| void | setServiceAccessRoleArn(String serviceAccessRoleArn): The Amazon Resource Name (ARN) used by the service to access the IAM role. |
| void | setTimestampColumnName(String timestampColumnName): A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. |
| void | setUseCsvNoSupValue(Boolean useCsvNoSupValue): This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. |
| void | setUseTaskStartTimeForFullLoadTimestamp(Boolean useTaskStartTimeForFullLoadTimestamp): When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. |
| String | toString(): Returns a string representation of this object. |
| S3Settings | withAddColumnName(Boolean addColumnName): An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. |
| S3Settings | withAddTrailingPaddingCharacter(Boolean addTrailingPaddingCharacter): Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. |
| S3Settings | withBucketFolder(String bucketFolder): An optional parameter to set a folder name in the S3 bucket. |
| S3Settings | withBucketName(String bucketName): The name of the S3 bucket. |
| S3Settings | withCannedAclForObjects(CannedAclForObjectsValue cannedAclForObjects): A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. |
| S3Settings | withCannedAclForObjects(String cannedAclForObjects): A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. |
| S3Settings | withCdcInsertsAndUpdates(Boolean cdcInsertsAndUpdates): A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. |
| S3Settings | withCdcInsertsOnly(Boolean cdcInsertsOnly): A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. |
| S3Settings | withCdcMaxBatchInterval(Integer cdcMaxBatchInterval): Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. |
| S3Settings | withCdcMinFileSize(Integer cdcMinFileSize): Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. |
| S3Settings | withCdcPath(String cdcPath): Specifies the folder path of CDC files. |
| S3Settings | withCompressionType(CompressionTypeValue compressionType): An optional parameter to use GZIP to compress the target files. |
| S3Settings | withCompressionType(String compressionType): An optional parameter to use GZIP to compress the target files. |
| S3Settings | withCsvDelimiter(String csvDelimiter): The delimiter used to separate columns in the .csv file for both source and target. |
| S3Settings | withCsvNoSupValue(String csvNoSupValue): This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. |
| S3Settings | withCsvNullValue(String csvNullValue): An optional parameter that specifies how DMS treats null values. |
| S3Settings | withCsvRowDelimiter(String csvRowDelimiter): The delimiter used to separate rows in the .csv file for both source and target. |
| S3Settings | withDataFormat(DataFormatValue dataFormat): The format of the data that you want to use for output. |
| S3Settings | withDataFormat(String dataFormat): The format of the data that you want to use for output. |
| S3Settings | withDataPageSize(Integer dataPageSize): The size of one data page in bytes. |
| S3Settings | withDatePartitionDelimiter(DatePartitionDelimiterValue datePartitionDelimiter): Specifies a date separating delimiter to use during folder partitioning. |
| S3Settings | withDatePartitionDelimiter(String datePartitionDelimiter): Specifies a date separating delimiter to use during folder partitioning. |
| S3Settings | withDatePartitionEnabled(Boolean datePartitionEnabled): When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. |
| S3Settings | withDatePartitionSequence(DatePartitionSequenceValue datePartitionSequence): Identifies the sequence of the date format to use during folder partitioning. |
| S3Settings | withDatePartitionSequence(String datePartitionSequence): Identifies the sequence of the date format to use during folder partitioning. |
| S3Settings | withDatePartitionTimezone(String datePartitionTimezone): When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. |
| S3Settings | withDictPageSizeLimit(Integer dictPageSizeLimit): The maximum size of an encoded dictionary page of a column. |
| S3Settings | withEnableStatistics(Boolean enableStatistics): A value that enables statistics for Parquet pages and row groups. |
| S3Settings | withEncodingType(EncodingTypeValue encodingType): The type of encoding you are using. |
| S3Settings | withEncodingType(String encodingType): The type of encoding you are using. |
| S3Settings | withEncryptionMode(EncryptionModeValue encryptionMode): The type of server-side encryption that you want to use for your data. |
| S3Settings | withEncryptionMode(String encryptionMode): The type of server-side encryption that you want to use for your data. |
| S3Settings | withExpectedBucketOwner(String expectedBucketOwner): To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. |
| S3Settings | withExternalTableDefinition(String externalTableDefinition): Specifies how tables are defined in the S3 source files only. |
| S3Settings | withGlueCatalogGeneration(Boolean glueCatalogGeneration): When true, allows Glue to catalog your S3 bucket. |
| S3Settings | withIgnoreHeaderRows(Integer ignoreHeaderRows): When this value is set to 1, DMS ignores the first row header in a .csv file. |
| S3Settings | withIncludeOpForFullLoad(Boolean includeOpForFullLoad): A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database. |
| S3Settings | withMaxFileSize(Integer maxFileSize): A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. |
| S3Settings | withParquetTimestampInMillisecond(Boolean parquetTimestampInMillisecond): A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. |
| S3Settings | withParquetVersion(ParquetVersionValue parquetVersion): The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. |
| S3Settings | withParquetVersion(String parquetVersion): The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. |
| S3Settings | withPreserveTransactions(Boolean preserveTransactions): If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. |
| S3Settings | withRfc4180(Boolean rfc4180): For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. |
| S3Settings | withRowGroupLength(Integer rowGroupLength): The number of rows in a row group. |
| S3Settings | withServerSideEncryptionKmsKeyId(String serverSideEncryptionKmsKeyId): If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. |
| S3Settings | withServiceAccessRoleArn(String serviceAccessRoleArn): The Amazon Resource Name (ARN) used by the service to access the IAM role. |
| S3Settings | withTimestampColumnName(String timestampColumnName): A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. |
| S3Settings | withUseCsvNoSupValue(Boolean useCsvNoSupValue): This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. |
| S3Settings | withUseTaskStartTimeForFullLoadTimestamp(Boolean useTaskStartTimeForFullLoadTimestamp): When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. |
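The getters above are most often exercised when reading settings back from an existing endpoint. A minimal sketch, assuming the standard AWSDatabaseMigrationService client from this SDK, default credentials, and at least one S3 endpoint in the account:

```java
import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationService;
import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationServiceClientBuilder;
import com.amazonaws.services.databasemigrationservice.model.DescribeEndpointsRequest;
import com.amazonaws.services.databasemigrationservice.model.Endpoint;
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

public class ReadS3Settings {
    public static void main(String[] args) {
        AWSDatabaseMigrationService dms = AWSDatabaseMigrationServiceClientBuilder.defaultClient();
        for (Endpoint endpoint : dms.describeEndpoints(new DescribeEndpointsRequest()).getEndpoints()) {
            S3Settings s3 = endpoint.getS3Settings(); // null for non-S3 endpoints
            if (s3 != null) {
                System.out.println(endpoint.getEndpointIdentifier() + " -> bucket " + s3.getBucketName());
            }
        }
    }
}
```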
public void setServiceAccessRoleArn(String serviceAccessRoleArn)
public String getServiceAccessRoleArn()
public S3Settings withServiceAccessRoleArn(String serviceAccessRoleArn)

The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.

public void setExternalTableDefinition(String externalTableDefinition)
public String getExternalTableDefinition()
public S3Settings withExternalTableDefinition(String externalTableDefinition)

Specifies how tables are defined in the S3 source files only.

public void setCsvRowDelimiter(String csvRowDelimiter)
public String getCsvRowDelimiter()
public S3Settings withCsvRowDelimiter(String csvRowDelimiter)

The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n).

public void setCsvDelimiter(String csvDelimiter)
public String getCsvDelimiter()
public S3Settings withCsvDelimiter(String csvDelimiter)

The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.

public void setBucketFolder(String bucketFolder)
public String getBucketFolder()
public S3Settings withBucketFolder(String bucketFolder)

An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

public void setBucketName(String bucketName)
public String getBucketName()
public S3Settings withBucketName(String bucketName)

The name of the S3 bucket.

public void setCompressionType(String compressionType)
public void setCompressionType(CompressionTypeValue compressionType)
public String getCompressionType()
public S3Settings withCompressionType(String compressionType)
public S3Settings withCompressionType(CompressionTypeValue compressionType)

An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

See Also: CompressionTypeValue
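A minimal sketch of the two documented values. The String overload shown here takes the values exactly as named above; the CompressionTypeValue overload is the type-safe alternative.

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class CompressionTypeExample {
    // GZIP-compressed target files (applies to both .csv and .parquet output).
    static S3Settings gzip() {
        return new S3Settings().withCompressionType("GZIP");
    }

    // NONE is the default; equivalent to leaving the parameter unset.
    static S3Settings uncompressed() {
        return new S3Settings().withCompressionType("NONE");
    }
}
```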
public void setEncryptionMode(String encryptionMode)
public void setEncryptionMode(EncryptionModeValue encryptionMode)
public String getEncryptionMode()
public S3Settings withEncryptionMode(String encryptionMode)
public S3Settings withEncryptionMode(EncryptionModeValue encryptionMode)

The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.

For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.

To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

- s3:CreateBucket
- s3:ListBucket
- s3:DeleteBucket
- s3:GetBucketLocation
- s3:GetObject
- s3:PutObject
- s3:DeleteObject
- s3:GetObjectVersion
- s3:GetBucketPolicy
- s3:PutBucketPolicy
- s3:DeleteBucketPolicy

See Also: EncryptionModeValue
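A sketch of opting into SSE-KMS, using the String overload with the value as documented above. The key ID is a placeholder, and per the note above the setting can later be changed from SSE_KMS to SSE_S3 but not the other way around.

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class EncryptionModeExample {
    static S3Settings sseKms() {
        return new S3Settings()
                .withEncryptionMode("SSE_KMS")
                // Required alongside SSE_KMS; placeholder key ID (see the next setting).
                .withServerSideEncryptionKmsKeyId("arn:aws:kms:us-east-1:123456789012:key/EXAMPLE");
    }
}
```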
public void setServerSideEncryptionKmsKeyId(String serverSideEncryptionKmsKeyId)
public String getServerSideEncryptionKmsKeyId()
public S3Settings withServerSideEncryptionKmsKeyId(String serverSideEncryptionKmsKeyId)

If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.

Here is a CLI example:

aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
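For comparison, a sketch of the same endpoint expressed with this SDK rather than the CLI; every value mirrors the placeholders used in the CLI example above.

```java
import com.amazonaws.services.databasemigrationservice.model.CreateEndpointRequest;
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class KmsEndpointExample {
    static CreateEndpointRequest request() {
        return new CreateEndpointRequest()
                .withEndpointIdentifier("value")
                .withEndpointType("target")
                .withEngineName("s3")
                .withS3Settings(new S3Settings()
                        .withServiceAccessRoleArn("value")
                        .withBucketFolder("value")
                        .withBucketName("value")
                        .withEncryptionMode("SSE_KMS")
                        .withServerSideEncryptionKmsKeyId("value"));
    }
}
```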
public void setDataFormat(String dataFormat)
public void setDataFormat(DataFormatValue dataFormat)
public String getDataFormat()
public S3Settings withDataFormat(String dataFormat)
public S3Settings withDataFormat(DataFormatValue dataFormat)

The format of the data that you want to use for output. You can choose one of the following:

- csv: This is a row-based file format with comma-separated values (.csv).
- parquet: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

See Also: DataFormatValue
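A one-line sketch switching output from the row-based .csv default to columnar .parquet, using the String overload with the value as documented above:

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class DataFormatExample {
    static S3Settings parquetOutput() {
        // Columnar .parquet output instead of the row-based .csv default.
        return new S3Settings().withDataFormat("parquet");
    }
}
```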
public void setEncodingType(String encodingType)
public void setEncodingType(EncodingTypeValue encodingType)
public String getEncodingType()
public S3Settings withEncodingType(String encodingType)
public S3Settings withEncodingType(EncodingTypeValue encodingType)

The type of encoding you are using:

- RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
- PLAIN doesn't use encoding at all. Values are stored as they are.
- PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

See Also: EncodingTypeValue
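A sketch overriding the RLE_DICTIONARY default; the values are taken verbatim from the list above, and DataFormat is set to parquet because encoding applies to .parquet output.

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class EncodingTypeExample {
    static S3Settings plainEncoded() {
        return new S3Settings()
                .withDataFormat("parquet")  // encoding applies to .parquet output
                .withEncodingType("PLAIN"); // store values as they are
    }
}
```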
public void setDictPageSizeLimit(Integer dictPageSizeLimit)
public Integer getDictPageSizeLimit()
public S3Settings withDictPageSizeLimit(Integer dictPageSizeLimit)

The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for the .parquet file format only.

public void setRowGroupLength(Integer rowGroupLength)
public Integer getRowGroupLength()
public S3Settings withRowGroupLength(Integer rowGroupLength)

The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.

If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
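A sketch of tuning both .parquet parameters together; the specific numbers are illustrative, chosen only to contrast with the defaults described above.

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class ParquetTuningExample {
    static S3Settings tuned() {
        return new S3Settings()
                .withDataFormat("parquet")
                .withDictPageSizeLimit(2 * 1024 * 1024) // 2 MiB before a column reverts to PLAIN
                .withRowGroupLength(5_000);             // below the 10,000-row default: faster reads, slower writes
    }
}
```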
public void setDataPageSize(Integer dataPageSize)
public Integer getDataPageSize()
public S3Settings withDataPageSize(Integer dataPageSize)

The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for the .parquet file format only.

public void setParquetVersion(String parquetVersion)
public void setParquetVersion(ParquetVersionValue parquetVersion)
public String getParquetVersion()
public S3Settings withParquetVersion(String parquetVersion)
public S3Settings withParquetVersion(ParquetVersionValue parquetVersion)

The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

See Also: ParquetVersionValue
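A sketch combining the two settings above; the 2 MiB page size is an illustrative value, and parquet_2_0 is written exactly as documented.

```java
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

class ParquetVersionExample {
    static S3Settings parquet2() {
        return new S3Settings()
                .withDataFormat("parquet")
                .withDataPageSize(2 * 1024 * 1024)  // 2 MiB pages instead of the 1 MiB default
                .withParquetVersion("parquet_2_0"); // default is parquet_1_0
    }
}
```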
public void setEnableStatistics(Boolean enableStatistics)
A value that enables statistics for Parquet pages and row groups. Choose true
to enable statistics,
false
to disable. Statistics include NULL
, DISTINCT
, MAX
, and
MIN
values. This parameter defaults to true
. This value is used for .parquet file
format only.
enableStatistics
- A value that enables statistics for Parquet pages and row groups. Choose true
to enable
statistics, false
to disable. Statistics include NULL
, DISTINCT
,
MAX
, and MIN
values. This parameter defaults to true
. This value is
used for .parquet file format only.public Boolean getEnableStatistics()
A value that enables statistics for Parquet pages and row groups. Choose true
to enable statistics,
false
to disable. Statistics include NULL
, DISTINCT
, MAX
, and
MIN
values. This parameter defaults to true
. This value is used for .parquet file
format only.
true
to enable
statistics, false
to disable. Statistics include NULL
, DISTINCT
,
MAX
, and MIN
values. This parameter defaults to true
. This value
is used for .parquet file format only.public S3Settings withEnableStatistics(Boolean enableStatistics)
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
enableStatistics - A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
public Boolean isEnableStatistics()
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
Returns: true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
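Because statistics default to true, this setting usually only appears when disabling them, for example to trim .parquet metadata that no downstream engine will read. A minimal sketch, same assumptions as the earlier examples:

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class ParquetStatisticsSketch {
        public static void main(String[] args) {
            // Omit NULL/DISTINCT/MAX/MIN statistics from .parquet pages and row groups.
            S3Settings s3 = new S3Settings()
                    .withDataFormat("parquet")
                    .withEnableStatistics(false);
            System.out.println(s3);
        }
    }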
public void setIncludeOpForFullLoad(Boolean includeOpForFullLoad)
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
includeOpForFullLoad - A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
public Boolean getIncludeOpForFullLoad()
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
Returns: A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
public S3Settings withIncludeOpForFullLoad(Boolean includeOpForFullLoad)
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
includeOpForFullLoad - A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
public Boolean isIncludeOpForFullLoad()
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
Returns: A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.
DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
DMS supports the use of the .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
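A minimal sketch of a full load that stamps each .csv row with the I annotation so full-load and CDC output share one record layout (bucket name hypothetical; same package assumption as above):

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class FullLoadOpMarkerSketch {
        public static void main(String[] args) {
            // Record every full-load row with a leading "I" so it matches CDC records.
            S3Settings s3 = new S3Settings()
                    .withBucketName("my-target-bucket") // hypothetical bucket
                    .withIncludeOpForFullLoad(true);
            System.out.println(s3);
        }
    }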
public void setCdcInsertsOnly(Boolean cdcInsertsOnly)
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdcInsertsOnly - A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public Boolean getCdcInsertsOnly()
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Returns: A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public S3Settings withCdcInsertsOnly(Boolean cdcInsertsOnly)
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdcInsertsOnly - A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public Boolean isCdcInsertsOnly()
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Returns: A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
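A minimal sketch of an inserts-only CDC feed that keeps the leading I field, per the interaction described above (same assumptions as the earlier examples):

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class CdcInsertsOnlySketch {
        public static void main(String[] args) {
            // Migrate only INSERTs; IncludeOpForFullLoad=true keeps the leading "I" field.
            // CdcInsertsOnly and CdcInsertsAndUpdates must not both be true.
            S3Settings s3 = new S3Settings()
                    .withIncludeOpForFullLoad(true)
                    .withCdcInsertsOnly(true);
            System.out.println(s3);
        }
    }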
public void setTimestampColumnName(String timestampColumnName)
A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
timestampColumnName - A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
public String getTimestampColumnName()
A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
Returns: A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
public S3Settings withTimestampColumnName(String timestampColumnName)
A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
timestampColumnName - A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
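A minimal sketch that adds a named commit-timestamp column; the column name is hypothetical, and AddColumnName is enabled so the name appears in the output:

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class TimestampColumnSketch {
        public static void main(String[] args) {
            // Adds a STRING column (yyyy-MM-dd HH:mm:ss.SSSSSS) to every migrated row.
            S3Settings s3 = new S3Settings()
                    .withAddColumnName(true)                   // emit column names, including this one
                    .withTimestampColumnName("dms_commit_ts"); // hypothetical column name
            System.out.println(s3);
        }
    }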
public void setParquetTimestampInMillisecond(Boolean parquetTimestampInMillisecond)
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
parquetTimestampInMillisecond - A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
public Boolean getParquetTimestampInMillisecond()
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
Returns: A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
public S3Settings withParquetTimestampInMillisecond(Boolean parquetTimestampInMillisecond)
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
parquetTimestampInMillisecond - A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
public Boolean isParquetTimestampInMillisecond()
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
Returns: A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.
DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
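A minimal sketch of a .parquet target intended for Athena or Glue, which, as noted above, handle only millisecond TIMESTAMP precision (same assumptions as the earlier examples):

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class MillisecondTimestampSketch {
        public static void main(String[] args) {
            // Write .parquet TIMESTAMP columns with millisecond precision for Athena/Glue.
            S3Settings s3 = new S3Settings()
                    .withDataFormat("parquet")
                    .withParquetTimestampInMillisecond(true);
            System.out.println(s3);
        }
    }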
public void setCdcInsertsAndUpdates(Boolean cdcInsertsAndUpdates)
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdcInsertsAndUpdates - A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public Boolean getCdcInsertsAndUpdates()
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Returns: A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public S3Settings withCdcInsertsAndUpdates(Boolean cdcInsertsAndUpdates)
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdcInsertsAndUpdates - A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
public Boolean isCdcInsertsAndUpdates()
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Returns: A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
DMS supports the use of the .parquet files in versions 3.4.7 and later.
How these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
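A minimal sketch that captures INSERTs and UPDATEs, but not DELETEs, while keeping the I/U marker; as noted, CdcInsertsOnly must stay unset (same assumptions as above):

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class CdcInsertsAndUpdatesSketch {
        public static void main(String[] args) {
            // First field of each CDC record becomes "I" or "U"; DELETEs are skipped.
            S3Settings s3 = new S3Settings()
                    .withIncludeOpForFullLoad(true)
                    .withCdcInsertsAndUpdates(true); // leave CdcInsertsOnly unset
            System.out.println(s3);
        }
    }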
public void setDatePartitionEnabled(Boolean datePartitionEnabled)
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
datePartitionEnabled - When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
public Boolean getDatePartitionEnabled()
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
Returns: When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
public S3Settings withDatePartitionEnabled(Boolean datePartitionEnabled)
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
datePartitionEnabled - When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
public Boolean isDatePartitionEnabled()
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
Returns: When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
public void setDatePartitionSequence(String datePartitionSequence)
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
datePartitionSequence - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionSequenceValue
public String getDatePartitionSequence()
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Returns: The sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionSequenceValue
public S3Settings withDatePartitionSequence(String datePartitionSequence)
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
datePartitionSequence - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionSequenceValue
public void setDatePartitionSequence(DatePartitionSequenceValue datePartitionSequence)
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
datePartitionSequence - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionSequenceValue
public S3Settings withDatePartitionSequence(DatePartitionSequenceValue datePartitionSequence)
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
datePartitionSequence - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionSequenceValue
public void setDatePartitionDelimiter(String datePartitionDelimiter)
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
datePartitionDelimiter - Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionDelimiterValue
public String getDatePartitionDelimiter()
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Returns: The date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionDelimiterValue
public S3Settings withDatePartitionDelimiter(String datePartitionDelimiter)
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
datePartitionDelimiter - Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionDelimiterValue
public void setDatePartitionDelimiter(DatePartitionDelimiterValue datePartitionDelimiter)
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
datePartitionDelimiter - Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionDelimiterValue
public S3Settings withDatePartitionDelimiter(DatePartitionDelimiterValue datePartitionDelimiter)
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
datePartitionDelimiter - Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
See also: DatePartitionDelimiterValue
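The three date-partitioning settings are typically set together. A minimal sketch that yields folders such as .../2024/11/05/ (enum constant names follow the SDK's usual mapping and are assumptions):

    import com.amazonaws.services.databasemigrationservice.model.DatePartitionDelimiterValue;
    import com.amazonaws.services.databasemigrationservice.model.DatePartitionSequenceValue;
    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class DatePartitionSketch {
        public static void main(String[] args) {
            // Partition S3 folders by transaction commit date, e.g. .../2024/11/05/.
            S3Settings s3 = new S3Settings()
                    .withDatePartitionEnabled(true)
                    .withDatePartitionSequence(DatePartitionSequenceValue.YYYYMMDD)
                    .withDatePartitionDelimiter(DatePartitionDelimiterValue.SLASH);
            System.out.println(s3);
        }
    }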
public void setUseCsvNoSupValue(Boolean useCsvNoSupValue)
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.
This setting is supported in DMS versions 3.4.1 and later.
useCsvNoSupValue - This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.
public Boolean getUseCsvNoSupValue()
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.
This setting is supported in DMS versions 3.4.1 and later.
Returns: This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.
public S3Settings withUseCsvNoSupValue(Boolean useCsvNoSupValue)
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.
This setting is supported in DMS versions 3.4.1 and later.
useCsvNoSupValue - This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.
public Boolean isUseCsvNoSupValue()
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.
This setting is supported in DMS versions 3.4.1 and later.
Returns: This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.
public void setCsvNoSupValue(String csvNoSupValue)
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.
This setting is supported in DMS versions 3.4.1 and later.
csvNoSupValue - This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.
public String getCsvNoSupValue()
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.
This setting is supported in DMS versions 3.4.1 and later.
Returns: This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.
public S3Settings withCsvNoSupValue(String csvNoSupValue)
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.
This setting is supported in DMS versions 3.4.1 and later.
csvNoSupValue - This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.
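A minimal sketch pairing the two settings so columns missing from the supplemental log get an explicit placeholder instead of null; the placeholder string is hypothetical (DMS 3.4.1 and later):

    import com.amazonaws.services.databasemigrationservice.model.S3Settings;

    class NoSupValueSketch {
        public static void main(String[] args) {
            // Columns absent from the supplemental log are written as "<NO_SUP>" instead of null.
            S3Settings s3 = new S3Settings()
                    .withUseCsvNoSupValue(true)
                    .withCsvNoSupValue("<NO_SUP>"); // hypothetical placeholder string
            System.out.println(s3);
        }
    }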
public void setPreserveTransactions(Boolean preserveTransactions)
If set to true
, DMS saves the transaction order for a change data capture (CDC) load on the Amazon
S3 target specified by
CdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
preserveTransactions - If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later.
public Boolean getPreserveTransactions()
If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
public S3Settings withPreserveTransactions(Boolean preserveTransactions)
If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
preserveTransactions - If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later.
public Boolean isPreserveTransactions()
If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
public void setCdcPath(String cdcPath)
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
cdcPath - Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
public String getCdcPath()
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
public S3Settings withCdcPath(String cdcPath)
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
cdcPath - Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in DMS versions 3.4.2 and later.
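As a sketch of the second example above (the bucket and folder names are the sample values, not requirements), the following settings direct DMS to write transaction-ordered CDC files under MyTargetBucket/MyTargetData/MyChangedData:

S3Settings s3Settings = new S3Settings()
        .withBucketName("MyTargetBucket")
        .withBucketFolder("MyTargetData")
        .withCdcPath("MyChangedData")    // CDC files land under this folder
        .withPreserveTransactions(true); // save transaction order on the target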
public void setUseTaskStartTimeForFullLoadTimestamp(Boolean useTaskStartTimeForFullLoadTimestamp)
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.
When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
useTaskStartTimeForFullLoadTimestamp - When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
public Boolean getUseTaskStartTimeForFullLoadTimestamp()
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.
When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
public S3Settings withUseTaskStartTimeForFullLoadTimestamp(Boolean useTaskStartTimeForFullLoadTimestamp)
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.
When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
useTaskStartTimeForFullLoadTimestamp - When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
public Boolean isUseTaskStartTimeForFullLoadTimestamp()
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.
When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
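A minimal sketch of enabling this behavior, so that every full-load row carries the task start time in the timestamp column:

S3Settings s3Settings = new S3Settings()
        // full-load rows get the task start time; CDC rows still get the commit time
        .withUseTaskStartTimeForFullLoadTimestamp(true);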
public void setCannedAclForObjects(String cannedAclForObjects)
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
cannedAclForObjects - A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
See Also: CannedAclForObjectsValue
public String getCannedAclForObjects()
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
See Also: CannedAclForObjectsValue
public S3Settings withCannedAclForObjects(String cannedAclForObjects)
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
cannedAclForObjects - A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
See Also: CannedAclForObjectsValue
public void setCannedAclForObjects(CannedAclForObjectsValue cannedAclForObjects)
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
cannedAclForObjects - A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
See Also: CannedAclForObjectsValue
public S3Settings withCannedAclForObjects(CannedAclForObjectsValue cannedAclForObjects)
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
cannedAclForObjects - A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
See Also: CannedAclForObjectsValue
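The String and CannedAclForObjectsValue overloads set the same property; a sketch using the typed overload:

S3Settings s3Settings = new S3Settings()
        // equivalent to .withCannedAclForObjects("BUCKET_OWNER_FULL_CONTROL")
        .withCannedAclForObjects(CannedAclForObjectsValue.BUCKET_OWNER_FULL_CONTROL);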
public void setAddColumnName(Boolean addColumnName)
An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file.
The default value is false. Valid values are true, false, y, and n.
addColumnName - An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.
public Boolean getAddColumnName()
An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file.
The default value is false. Valid values are true, false, y, and n.
public S3Settings withAddColumnName(Boolean addColumnName)
An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file.
The default value is false. Valid values are true, false, y, and n.
addColumnName - An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.
public Boolean isAddColumnName()
An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file.
The default value is false. Valid values are true, false, y, and n.
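A minimal sketch; with this flag on, the first row of each .csv output file lists the column names:

S3Settings s3Settings = new S3Settings()
        .withAddColumnName(true); // write a header row of column names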
public void setCdcMaxBatchInterval(Integer cdcMaxBatchInterval)
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 60 seconds.
cdcMaxBatchInterval - Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
public Integer getCdcMaxBatchInterval()
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 60 seconds.
public S3Settings withCdcMaxBatchInterval(Integer cdcMaxBatchInterval)
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 60 seconds.
cdcMaxBatchInterval - Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
public void setCdcMinFileSize(Integer cdcMinFileSize)
Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.
When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 32 MB.
cdcMinFileSize - Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
public Integer getCdcMinFileSize()
Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.
When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 32 MB.
public S3Settings withCdcMinFileSize(Integer cdcMinFileSize)
Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.
When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template.
The default value is 32 MB.
cdcMinFileSize - Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
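Because a file is written as soon as either condition is met, the two thresholds are usually tuned together; a sketch with illustrative values:

S3Settings s3Settings = new S3Settings()
        .withCdcMaxBatchInterval(120)  // flush at least every 120 seconds...
        .withCdcMinFileSize(64000);    // ...or once about 64,000 KB accumulate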
public void setCsvNullValue(String csvNullValue)
An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.
The default value is NULL. Valid values include any valid string.
csvNullValue - An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.
public String getCsvNullValue()
An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.
The default value is NULL. Valid values include any valid string.
public S3Settings withCsvNullValue(String csvNullValue)
An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.
The default value is NULL. Valid values include any valid string.
csvNullValue - An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.
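For example, to make DMS write the empty string for null columns (so the empty string and NULL stay distinguishable), a sketch:

S3Settings s3Settings = new S3Settings()
        .withCsvNullValue(""); // nulls are written as the empty string, not NULL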
public void setIgnoreHeaderRows(Integer ignoreHeaderRows)
When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
ignoreHeaderRows - When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature. The default is 0.
public Integer getIgnoreHeaderRows()
When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
public S3Settings withIgnoreHeaderRows(Integer ignoreHeaderRows)
When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
ignoreHeaderRows - When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature. The default is 0.
public void setMaxFileSize(Integer maxFileSize)
A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
maxFileSize - A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
public Integer getMaxFileSize()
A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
public S3Settings withMaxFileSize(Integer maxFileSize)
A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
maxFileSize - A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
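A sketch combining the two .csv file controls above; the values are illustrative, not recommendations:

S3Settings s3Settings = new S3Settings()
        .withIgnoreHeaderRows(1)   // S3 source: skip the header row in .csv files
        .withMaxFileSize(512000);  // S3 target: cap full-load .csv files at 512,000 KB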
public void setRfc4180(Boolean rfc4180)
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
rfc4180 - For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
public Boolean getRfc4180()
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
public S3Settings withRfc4180(Boolean rfc4180)
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
rfc4180 - For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
public Boolean isRfc4180()
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
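For example, to copy string literals from an S3 source to the target as is (data that is not RFC 4180 quoted), a sketch:

S3Settings s3Settings = new S3Settings()
        .withRfc4180(false); // delimiters end each field; quotes are not interpreted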
public void setDatePartitionTimezone(String datePartitionTimezone)
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
datePartitionTimezone - When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
public String getDatePartitionTimezone()
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
public S3Settings withDatePartitionTimezone(String datePartitionTimezone)
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
datePartitionTimezone - When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example.
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
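The same configuration as the example above, expressed with the fluent Java API (the bucket name reuses the sample value from the example):

S3Settings s3Settings = new S3Settings()
        .withDatePartitionEnabled(true)
        .withDatePartitionSequence("YYYYMMDDHH")
        .withDatePartitionDelimiter("SLASH")
        .withDatePartitionTimezone("Asia/Seoul")
        .withBucketName("dms-nattarat-test");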
public void setAddTrailingPaddingCharacter(Boolean addTrailingPaddingCharacter)
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
addTrailingPaddingCharacter - Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
public Boolean getAddTrailingPaddingCharacter()
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
public S3Settings withAddTrailingPaddingCharacter(Boolean addTrailingPaddingCharacter)
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
addTrailingPaddingCharacter - Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
public Boolean isAddTrailingPaddingCharacter()
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
public void setExpectedBucketOwner(String expectedBucketOwner)
To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
expectedBucketOwner - To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
public String getExpectedBucketOwner()
To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
public S3Settings withExpectedBucketOwner(String expectedBucketOwner)
To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
expectedBucketOwner - To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
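The Java equivalent of the CLI example above; the account ID is a placeholder:

S3Settings s3Settings = new S3Settings()
        .withExpectedBucketOwner("123456789012"); // expected owner's account ID (placeholder)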
public void setGlueCatalogGeneration(Boolean glueCatalogGeneration)
When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
glueCatalogGeneration - When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
public Boolean getGlueCatalogGeneration()
When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
public S3Settings withGlueCatalogGeneration(Boolean glueCatalogGeneration)
When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
glueCatalogGeneration - When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
public Boolean isGlueCatalogGeneration()
When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.
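Putting it together: an S3Settings object is attached to a DMS endpoint through CreateEndpointRequest (or ModifyEndpointRequest). A sketch with placeholder identifiers, bucket name, and role ARN:

import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationService;
import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationServiceClientBuilder;
import com.amazonaws.services.databasemigrationservice.model.CreateEndpointRequest;
import com.amazonaws.services.databasemigrationservice.model.S3Settings;

AWSDatabaseMigrationService dms = AWSDatabaseMigrationServiceClientBuilder.defaultClient();
dms.createEndpoint(new CreateEndpointRequest()
        .withEndpointIdentifier("s3-target-example")  // placeholder
        .withEndpointType("target")
        .withEngineName("s3")
        .withS3Settings(new S3Settings()
                .withServiceAccessRoleArn("arn:aws:iam::123456789012:role/dms-s3-role") // placeholder
                .withBucketName("my-target-bucket")   // placeholder
                .withGlueCatalogGeneration(true)));   // catalog output so Athena can query it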
public String toString()
Overrides: toString in class Object
See Also: Object.toString()
public S3Settings clone()
public void marshall(ProtocolMarshaller protocolMarshaller)
Marshalls this structured data using the given ProtocolMarshaller.
Specified by: marshall in interface StructuredPojo
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.