Edit-DMSEndpoint
  -EndpointArn <String> -RedshiftSettings_AcceptAnyDate <Boolean>
  -OracleSettings_AccessAlternateDirectly <Boolean> -S3Settings_AddColumnName <Boolean>
  -OracleSettings_AdditionalArchivedLogDestId <Int32> -OracleSettings_AddSupplementalLogging <Boolean>
  -S3Settings_AddTrailingPaddingCharacter <Boolean> -GcpMySQLSettings_AfterConnectScript <String>
  -MySQLSettings_AfterConnectScript <String> -PostgreSQLSettings_AfterConnectScript <String>
  -RedshiftSettings_AfterConnectScript <String> -OracleSettings_AllowSelectNestedTable <Boolean>
  -OracleSettings_ArchivedLogDestId <Int32> -OracleSettings_ArchivedLogsOnly <Boolean>
  -OracleSettings_AsmPassword <String> -OracleSettings_AsmServer <String> -OracleSettings_AsmUser <String>
  -MongoDbSettings_AuthMechanism <AuthMechanismValue> -RedisSettings_AuthPassword <String>
  -MongoDbSettings_AuthSource <String> -MongoDbSettings_AuthType <AuthTypeValue>
  -RedisSettings_AuthType <RedisAuthTypeValue> -RedisSettings_AuthUserName <String>
  -PostgreSQLSettings_BabelfishDatabaseName <String> -MicrosoftSQLServerSettings_BcpPacketSize <Int32>
  -KafkaSettings_Broker <String> -RedshiftSettings_BucketFolder <String> -S3Settings_BucketFolder <String>
  -DmsTransferSettings_BucketName <String> -RedshiftSettings_BucketName <String> -S3Settings_BucketName <String>
  -S3Settings_CannedAclForObject <CannedAclForObjectsValue> -PostgreSQLSettings_CaptureDdl <Boolean>
  -RedshiftSettings_CaseSensitiveName <Boolean> -S3Settings_CdcInsertsAndUpdate <Boolean>
  -TimestreamSettings_CdcInsertsAndUpdate <Boolean> -S3Settings_CdcInsertsOnly <Boolean>
  -S3Settings_CdcMaxBatchInterval <Int32> -S3Settings_CdcMinFileSize <Int32> -S3Settings_CdcPath <String>
  -CertificateArn <String> -OracleSettings_CharLengthSemantic <CharLengthSemantics>
  -GcpMySQLSettings_CleanSourceMetadataOnMismatch <Boolean> -MySQLSettings_CleanSourceMetadataOnMismatch <Boolean>
  -S3Settings_CompressionType <CompressionTypeValue> -RedshiftSettings_CompUpdate <Boolean>
  -RedshiftSettings_ConnectionTimeout <Int32> -MicrosoftSQLServerSettings_ControlTablesFileGroup <String>
  -OracleSettings_ConvertTimestampWithZoneToUTC <Boolean> -S3Settings_CsvDelimiter <String>
  -S3Settings_CsvNoSupValue <String> -S3Settings_CsvNullValue <String> -S3Settings_CsvRowDelimiter <String>
  -IBMDb2Settings_CurrentLsn <String> -PostgreSQLSettings_DatabaseMode <DatabaseMode> -DatabaseName <String>
  -DocDbSettings_DatabaseName <String> -GcpMySQLSettings_DatabaseName <String> -IBMDb2Settings_DatabaseName <String>
  -MicrosoftSQLServerSettings_DatabaseName <String> -MongoDbSettings_DatabaseName <String>
  -MySQLSettings_DatabaseName <String> -OracleSettings_DatabaseName <String> -PostgreSQLSettings_DatabaseName <String>
  -RedshiftSettings_DatabaseName <String> -SybaseSettings_DatabaseName <String> -TimestreamSettings_DatabaseName <String>
  -S3Settings_DataFormat <DataFormatValue> -S3Settings_DataPageSize <Int32> -RedshiftSettings_DateFormat <String>
  -S3Settings_DatePartitionDelimiter <DatePartitionDelimiterValue> -S3Settings_DatePartitionEnabled <Boolean>
  -S3Settings_DatePartitionSequence <DatePartitionSequenceValue> -S3Settings_DatePartitionTimezone <String>
  -PostgreSQLSettings_DdlArtifactsSchema <String> -S3Settings_DictPageSizeLimit <Int32>
  -OracleSettings_DirectPathNoLog <Boolean> -OracleSettings_DirectPathParallelLoad <Boolean>
  -DocDbSettings_DocsToInvestigate <Int32> -MongoDbSettings_DocsToInvestigate <String>
  -RedshiftSettings_EmptyAsNull <Boolean> -OracleSettings_EnableHomogenousTablespace <Boolean>
  -TimestreamSettings_EnableMagneticStoreWrite <Boolean> -S3Settings_EnableStatistic <Boolean>
  -S3Settings_EncodingType <EncodingTypeValue> -RedshiftSettings_EncryptionMode <EncryptionModeValue>
  -S3Settings_EncryptionMode <EncryptionModeValue> -EndpointIdentifier <String>
  -EndpointType <ReplicationEndpointTypeValue> -ElasticsearchSettings_EndpointUri <String> -EngineName <String>
  -ElasticsearchSettings_ErrorRetryDuration <Int32> -NeptuneSettings_ErrorRetryDuration <Int32>
  -GcpMySQLSettings_EventsPollInterval <Int32> -MySQLSettings_EventsPollInterval <Int32> -ExactSetting <Boolean>
  -PostgreSQLSettings_ExecuteTimeout <Int32> -S3Settings_ExpectedBucketOwner <String> -RedshiftSettings_ExplicitId <Boolean>
  -ExternalTableDefinition <String> -S3Settings_ExternalTableDefinition <String>
  -OracleSettings_ExtraArchivedLogDestId <Int32[]> -ExtraConnectionAttribute <String>
  -DocDbSettings_ExtractDocId <Boolean> -MongoDbSettings_ExtractDocId <String>
  -OracleSettings_FailTasksOnLobTruncation <Boolean> -PostgreSQLSettings_FailTasksOnLobTruncation <Boolean>
  -RedshiftSettings_FileTransferUploadStream <Int32> -MicrosoftSQLServerSettings_ForceLobLookup <Boolean>
  -ElasticsearchSettings_FullLoadErrorPercentage <Int32> -S3Settings_GlueCatalogGeneration <Boolean>
  -PostgreSQLSettings_HeartbeatEnable <Boolean> -PostgreSQLSettings_HeartbeatFrequency <Int32>
  -PostgreSQLSettings_HeartbeatSchema <String> -NeptuneSettings_IamAuthEnabled <Boolean>
  -S3Settings_IgnoreHeaderRow <Int32> -KafkaSettings_IncludeControlDetail <Boolean>
  -KinesisSettings_IncludeControlDetail <Boolean> -KafkaSettings_IncludeNullAndEmpty <Boolean>
  -KinesisSettings_IncludeNullAndEmpty <Boolean> -S3Settings_IncludeOpForFullLoad <Boolean>
  -KafkaSettings_IncludePartitionValue <Boolean> -KinesisSettings_IncludePartitionValue <Boolean>
  -KafkaSettings_IncludeTableAlterOperation <Boolean> -KinesisSettings_IncludeTableAlterOperation <Boolean>
  -KafkaSettings_IncludeTransactionDetail <Boolean> -KinesisSettings_IncludeTransactionDetail <Boolean>
  -DocDbSettings_KmsKeyId <String> -MongoDbSettings_KmsKeyId <String> -RedshiftSettings_LoadTimeout <Int32>
  -TimestreamSettings_MagneticDuration <Int32> -PostgreSQLSettings_MapBooleanAsBoolean <Boolean>
  -RedshiftSettings_MapBooleanAsBoolean <Boolean> -PostgreSQLSettings_MapJsonbAsClob <Boolean>
  -PostgreSQLSettings_MapLongVarcharAs <LongVarcharMappingType> -GcpMySQLSettings_MaxFileSize <Int32>
  -MySQLSettings_MaxFileSize <Int32> -NeptuneSettings_MaxFileSize <Int32> -PostgreSQLSettings_MaxFileSize <Int32>
  -RedshiftSettings_MaxFileSize <Int32> -S3Settings_MaxFileSize <Int32> -IBMDb2Settings_MaxKBytesPerRead <Int32>
  -NeptuneSettings_MaxRetryCount <Int32> -TimestreamSettings_MemoryDuration <Int32>
  -KafkaSettings_MessageFormat <MessageFormatValue> -KinesisSettings_MessageFormat <MessageFormatValue>
  -KafkaSettings_MessageMaxByte <Int32> -DocDbSettings_NestingLevel <NestingLevelValue>
  -MongoDbSettings_NestingLevel <NestingLevelValue> -KafkaSettings_NoHexPrefix <Boolean>
  -KinesisSettings_NoHexPrefix <Boolean> -OracleSettings_NumberDatatypeScale <Int32>
  -OracleSettings_OpenTransactionWindow <Int32> -OracleSettings_OraclePathPrefix <String>
  -OracleSettings_ParallelAsmReadThread <Int32> -GcpMySQLSettings_ParallelLoadThread <Int32>
  -MySQLSettings_ParallelLoadThread <Int32> -S3Settings_ParquetTimestampInMillisecond <Boolean>
  -S3Settings_ParquetVersion <ParquetVersionValue> -KafkaSettings_PartitionIncludeSchemaTable <Boolean>
  -KinesisSettings_PartitionIncludeSchemaTable <Boolean> -DocDbSettings_Password <String>
  -GcpMySQLSettings_Password <String> -IBMDb2Settings_Password <String> -MicrosoftSQLServerSettings_Password <String>
  -MongoDbSettings_Password <String> -MySQLSettings_Password <String> -OracleSettings_Password <String>
  -Password <String> -PostgreSQLSettings_Password <String> -RedshiftSettings_Password <String>
  -SybaseSettings_Password <String> -PostgreSQLSettings_PluginName <PluginNameValue> -DocDbSettings_Port <Int32>
  -GcpMySQLSettings_Port <Int32> -IBMDb2Settings_Port <Int32> -MicrosoftSQLServerSettings_Port <Int32>
  -MongoDbSettings_Port <Int32> -MySQLSettings_Port <Int32> -OracleSettings_Port <Int32> -Port <Int32>
  -PostgreSQLSettings_Port <Int32> -RedisSettings_Port <Int32> -RedshiftSettings_Port <Int32> -SybaseSettings_Port <Int32>
  -S3Settings_PreserveTransaction <Boolean> -MicrosoftSQLServerSettings_QuerySingleAlwaysOnNode <Boolean>
  -OracleSettings_ReadAheadBlock <Int32> -MicrosoftSQLServerSettings_ReadBackupOnly <Boolean>
  -OracleSettings_ReadTableSpaceName <Boolean> -RedshiftSettings_RemoveQuote <Boolean>
  -RedshiftSettings_ReplaceChar <String> -RedshiftSettings_ReplaceInvalidChar <String>
  -OracleSettings_ReplacePathPrefix <Boolean> -DocDbSettings_ReplicateShardCollection <Boolean>
  -MongoDbSettings_ReplicateShardCollection <Boolean> -OracleSettings_RetryInterval <Int32>
  -S3Settings_Rfc4180 <Boolean> -S3Settings_RowGroupLength <Int32> -NeptuneSettings_S3BucketFolder <String>
  -NeptuneSettings_S3BucketName <String> -MicrosoftSQLServerSettings_SafeguardPolicy <SafeguardPolicy>
  -KafkaSettings_SaslMechanism <KafkaSaslMechanism> -KafkaSettings_SaslPassword <String> -KafkaSettings_SaslUsername <String>
  -DocDbSettings_SecretsManagerAccessRoleArn <String> -GcpMySQLSettings_SecretsManagerAccessRoleArn <String>
  -IBMDb2Settings_SecretsManagerAccessRoleArn <String> -MicrosoftSQLServerSettings_SecretsManagerAccessRoleArn <String>
  -MongoDbSettings_SecretsManagerAccessRoleArn <String> -MySQLSettings_SecretsManagerAccessRoleArn <String>
  -OracleSettings_SecretsManagerAccessRoleArn <String> -PostgreSQLSettings_SecretsManagerAccessRoleArn <String>
  -RedshiftSettings_SecretsManagerAccessRoleArn <String> -SybaseSettings_SecretsManagerAccessRoleArn <String>
  -OracleSettings_SecretsManagerOracleAsmAccessRoleArn <String> -OracleSettings_SecretsManagerOracleAsmSecretId <String>
  -DocDbSettings_SecretsManagerSecretId <String> -GcpMySQLSettings_SecretsManagerSecretId <String>
  -IBMDb2Settings_SecretsManagerSecretId <String> -MicrosoftSQLServerSettings_SecretsManagerSecretId <String>
  -MongoDbSettings_SecretsManagerSecretId <String> -MySQLSettings_SecretsManagerSecretId <String>
  -OracleSettings_SecretsManagerSecretId <String> -PostgreSQLSettings_SecretsManagerSecretId <String>
  -RedshiftSettings_SecretsManagerSecretId <String> -SybaseSettings_SecretsManagerSecretId <String>
  -OracleSettings_SecurityDbEncryption <String> -OracleSettings_SecurityDbEncryptionName <String>
  -KafkaSettings_SecurityProtocol <KafkaSecurityProtocol> -DocDbSettings_ServerName <String>
  -GcpMySQLSettings_ServerName <String> -IBMDb2Settings_ServerName <String>
  -MicrosoftSQLServerSettings_ServerName <String> -MongoDbSettings_ServerName <String> -MySQLSettings_ServerName <String>
  -OracleSettings_ServerName <String> -PostgreSQLSettings_ServerName <String> -RedisSettings_ServerName <String>
  -RedshiftSettings_ServerName <String> -ServerName <String> -SybaseSettings_ServerName <String>
  -RedshiftSettings_ServerSideEncryptionKmsKeyId <String> -S3Settings_ServerSideEncryptionKmsKeyId <String>
  -GcpMySQLSettings_ServerTimezone <String> -MySQLSettings_ServerTimezone <String>
  -DmsTransferSettings_ServiceAccessRoleArn <String> -DynamoDbSettings_ServiceAccessRoleArn <String>
  -ElasticsearchSettings_ServiceAccessRoleArn <String> -KinesisSettings_ServiceAccessRoleArn <String>
  -NeptuneSettings_ServiceAccessRoleArn <String> -RedshiftSettings_ServiceAccessRoleArn <String>
  -S3Settings_ServiceAccessRoleArn <String> -ServiceAccessRoleArn <String>
  -IBMDb2Settings_SetDataCaptureChange <Boolean> -PostgreSQLSettings_SlotName <String>
  -OracleSettings_SpatialDataOptionToGeoJsonFunctionName <String> -KafkaSettings_SslCaCertificateArn <String>
  -RedisSettings_SslCaCertificateArn <String> -KafkaSettings_SslClientCertificateArn <String>
  -KafkaSettings_SslClientKeyArn <String> -KafkaSettings_SslClientKeyPassword <String>
  -KafkaSettings_SslEndpointIdentificationAlgorithm <KafkaSslEndpointIdentificationAlgorithm> -SslMode <DmsSslModeValue>
  -RedisSettings_SslSecurityProtocol <SslSecurityProtocolValue> -OracleSettings_StandbyDelayTime <Int32>
  -KinesisSettings_StreamArn <String> -GcpMySQLSettings_TargetDbType <TargetDbType> -MySQLSettings_TargetDbType <TargetDbType>
  -RedshiftSettings_TimeFormat <String> -S3Settings_TimestampColumnName <String>
  -MicrosoftSQLServerSettings_TlogAccessMode <TlogAccessMode> -KafkaSettings_Topic <String>
  -RedshiftSettings_TrimBlank <Boolean> -MicrosoftSQLServerSettings_TrimSpaceInChar <Boolean>
  -OracleSettings_TrimSpaceInChar <Boolean> -PostgreSQLSettings_TrimSpaceInChar <Boolean>
  -RedshiftSettings_TruncateColumn <Boolean> -OracleSettings_UseAlternateFolderForOnline <Boolean>
  -MicrosoftSQLServerSettings_UseBcpFullLoad <Boolean> -OracleSettings_UseBFile <Boolean>
  -S3Settings_UseCsvNoSupValue <Boolean> -OracleSettings_UseDirectPathFullLoad <Boolean>
  -OracleSettings_UseLogminerReader <Boolean> -ElasticsearchSettings_UseNewMappingType <Boolean>
  -OracleSettings_UsePathPrefix <String> -DocDbSettings_Username <String> -GcpMySQLSettings_Username <String>
  -IBMDb2Settings_Username <String> -MicrosoftSQLServerSettings_Username <String> -MongoDbSettings_Username <String>
  -MySQLSettings_Username <String> -OracleSettings_Username <String> -PostgreSQLSettings_Username <String>
  -RedshiftSettings_Username <String> -SybaseSettings_Username <String> -Username <String>
  -S3Settings_UseTaskStartTimeForFullLoadTimestamp <Boolean> -MicrosoftSQLServerSettings_UseThirdPartyBackupDevice <Boolean>
  -DocDbSettings_UseUpdateLookUp <Boolean> -MongoDbSettings_UseUpdateLookUp <Boolean>
  -RedshiftSettings_WriteBufferSize <Int32> -Select <String> -PassThru <SwitchParameter> -Force <SwitchParameter>
  -ClientConfig <AmazonDatabaseMigrationServiceConfig>
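For example, a minimal call might look like the following sketch. The module name, endpoint ARN, and setting values are illustrative placeholders, not values taken from this reference.

# Load the DMS module (modular AWS.Tools distribution).
Import-Module AWS.Tools.DatabaseMigrationService

# Hypothetical endpoint ARN; substitute your own.
$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'

# Turn on a WAL heartbeat for a PostgreSQL source endpoint.
Edit-DMSEndpoint -EndpointArn $arn `
    -PostgreSQLSettings_HeartbeatEnable $true `
    -PostgreSQLSettings_HeartbeatFrequency 5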
DatabaseName
request parameter on the ModifyEndpoint
API call. Specifying DatabaseName
when you modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
iam:PassRole
action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
NestingLevel
is set to "one"
. Must be a positive value greater than 0
. Default value is 1000
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
NestingLevel
is set to "none"
. Default value is "false"
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"none"
. Specify "none"
to use document mode. Specify "one"
to use table mode. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
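As a sketch of how the DocumentDB nesting settings above combine (the ARN is hypothetical), switching a source endpoint to table mode might look like this:

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
# Table mode: scan 1000 documents to infer the table structure.
Edit-DMSEndpoint -EndpointArn $arn `
    -DocDbSettings_NestingLevel 'one' `
    -DocDbSettings_DocsToInvestigate 1000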
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, DMS replicates data to shard collections. DMS only uses this setting if the target endpoint is a DocumentDB elastic cluster. When this setting is true, note the following: you must set TargetTablePrepMode to nothing, and DMS automatically sets useUpdateLookup to false. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | DocDbSettings_ReplicateShardCollections |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the DocumentDB endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
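The either/or rule between Secrets Manager values and clear-text values can be sketched as follows for DocumentDB; every ARN, name, and password here is a hypothetical placeholder.

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical

# Option 1: reference a Secrets Manager secret through an access role.
Edit-DMSEndpoint -EndpointArn $arn `
    -DocDbSettings_SecretsManagerAccessRoleArn 'arn:aws:iam::123456789012:role/dms-secrets-access' `
    -DocDbSettings_SecretsManagerSecretId 'docdb-endpoint-credentials'

# Option 2: clear-text connection values. Don't combine with option 1.
Edit-DMSEndpoint -EndpointArn $arn `
    -DocDbSettings_Username 'dms_user' `
    -DocDbSettings_Password 'example-password' `
    -DocDbSettings_ServerName 'docdb.cluster-example.us-east-1.docdb.amazonaws.com' `
    -DocDbSettings_Port 27017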
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, DMS retrieves the entire document from the DocumentDB source during migration. This may cause a migration failure if the server response exceeds bandwidth limits. To fetch only updates and deletes during migration, set this parameter to false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
iam:PassRole
action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
iam:PassRole
action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
for DMS to migrate documents using the document type _doc
. OpenSearch and an Elasticsearch cluster only support the _doc document type in versions 7.x and later. The default value is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | True |
Position? | 1 |
Accept pipeline input? | True (ByValue, ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
source
and target
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"mysql"
, "oracle"
, "postgres"
, "mariadb"
, "aurora"
, "aurora-postgresql"
, "redshift"
, "s3"
, "db2"
, "db2-zos"
, "azuredb"
, "sybase"
, "dynamodb"
, "mongodb"
, "kinesis"
, "kafka"
, "elasticsearch"
, "documentdb"
, "sqlserver"
, "neptune"
, and "babelfish"
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
ModifyEndpoint
replaces all existing endpoint settings with the exact settings that you specify in this call. If this attribute is N, the current call to ModifyEndpoint
does two things: it replaces any endpoint settings that already exist with new values, for settings with the same names, and it creates new endpoint settings that you specify in the call, for settings with different names. For example, if you call create-endpoint ... --endpoint-settings '{"a":1}' ...
, the endpoint has the following endpoint settings: '{"a":1}'
. If you then call modify-endpoint ... --endpoint-settings '{"b":2}' ...
for the same endpoint, the endpoint has the following settings: '{"a":1,"b":2}'
. However, suppose that you follow this with a call to modify-endpoint ... --endpoint-settings '{"b":2}' --exact-settings ...
for that same endpoint again. Then the endpoint has the following settings: '{"b":2}'
. All existing settings are replaced with the exact settings that you specify. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | ExactSettings |
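In cmdlet form, the merge-versus-replace behavior reads roughly as follows (ARN hypothetical):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical

# Default (merge): settings with other names already on the endpoint are kept.
Edit-DMSEndpoint -EndpointArn $arn -MySQLSettings_MaxFileSize 512

# Exact: every existing endpoint setting is replaced with exactly what this call specifies.
Edit-DMSEndpoint -EndpointArn $arn -MySQLSettings_MaxFileSize 512 -ExactSetting $true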
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | ExtraConnectionAttributes |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
DatabaseName
request parameter on either the CreateEndpoint
or ModifyEndpoint
API call. Specifying DatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
maxFileSize=512
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
parallelLoadThreads=1
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | GcpMySQLSettings_ParallelLoadThreads |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret.
The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the MySQL endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
targetDbType=MULTIPLE_DATABASES
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
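A sketch combining the two GCP MySQL settings above; note that the cmdlet parameter takes the enum value rather than the extra-connection-attribute string, and the lowercase hyphenated casing shown is an assumption based on the API's TargetDbType enum:

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
Edit-DMSEndpoint -EndpointArn $arn `
    -GcpMySQLSettings_ServerTimezone 'US/Pacific' `
    -GcpMySQLSettings_TargetDbType 'multiple-databases'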
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the Db2 LUW endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | IBMDb2Settings_SetDataCaptureChanges |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
broker-hostname-or-ip:port
. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345"
. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
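For example, a hedged sketch of pointing a Kafka target endpoint at a broker (host, port, and topic are placeholders):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
Edit-DMSEndpoint -EndpointArn $arn `
    -KafkaSettings_Broker 'ec2-12-345-678-901.compute-1.amazonaws.com:2345' `
    -KafkaSettings_Topic 'dms-example-topic'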
false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KafkaSettings_IncludeControlDetails |
false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
schema-table-type
. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
rename-table
, drop-table
, add-column
, drop-column
, and rename-column
. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KafkaSettings_IncludeTableAlterOperations |
transaction_id
, previous transaction_id
, and transaction_record_id
(the record offset within a transaction). The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KafkaSettings_IncludeTransactionDetails |
JSON
(default) or JSON_UNFORMATTED
(a single line with no tab). Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KafkaSettings_MessageMaxBytes |
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
primary-key-type
. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SCRAM-SHA-512
mechanism by default. DMS versions 3.5.0 and later also support the PLAIN
mechanism. To use the PLAIN
mechanism, set this parameter to PLAIN.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
ssl-encryption
, ssl-authentication
, and sasl-ssl
. sasl-ssl
requires SaslUsername
and SaslPassword
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
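Combining the SASL settings above into one call, a sketch for a sasl-ssl Kafka target (credentials hypothetical; the lowercase enum values follow the DMS API):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
# sasl-ssl requires both SaslUsername and SaslPassword.
Edit-DMSEndpoint -EndpointArn $arn `
    -KafkaSettings_SecurityProtocol 'sasl-ssl' `
    -KafkaSettings_SaslMechanism 'plain' `
    -KafkaSettings_SaslUsername 'example-user' `
    -KafkaSettings_SaslPassword 'example-password'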
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"kafka-default-topic"
as the migration topic. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KinesisSettings_IncludeControlDetails |
false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
schema-table-type
. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
rename-table
, drop-table
, add-column
, drop-column
, and rename-column
. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KinesisSettings_IncludeTableAlterOperations |
transaction_id
, previous transaction_id
, and transaction_record_id
(the record offset within a transaction). The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | KinesisSettings_IncludeTransactionDetails |
JSON
(default) or JSON_UNFORMATTED
(a single line with no tab). Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use the NoHexPrefix
endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
primary-key-type
. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
iam:PassRole
action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Y
, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y
enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the SQL Server endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Endpoint.Address
field. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
TrimSpaceInChar
source endpoint setting to right-trim data on CHAR and NCHAR data types during migration. Setting TrimSpaceInChar
does not left-trim data. The default value is true
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Y
, DMS processes third-party transaction log backups if they are created in native format. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"default"
is "mongodb_cr"
. For MongoDB version 3.x or later, "default"
is "scram_sha_1"
. This setting isn't used when AuthType
is set to "no"
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
AuthType
is set to "no"
. The default is "admin"
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"no"
, user name and password parameters are not used and can be empty. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
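Putting the MongoDB authentication settings together, a sketch (all values hypothetical):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
# Password auth with SCRAM-SHA-1 against the admin database.
Edit-DMSEndpoint -EndpointArn $arn `
    -MongoDbSettings_AuthType 'password' `
    -MongoDbSettings_AuthMechanism 'scram_sha_1' `
    -MongoDbSettings_AuthSource 'admin'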
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
NestingLevel
is set to "one"
. Must be a positive value greater than 0
. Default value is 1000
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
NestingLevel
is set to "none"
. Default value is "false"
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
KmsKeyId
parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
"none"
. Specify "none"
to use document mode. Specify "one"
to use table mode. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, DMS replicates data to shard collections. DMS only uses this setting if the target endpoint is a DocumentDB elastic cluster. When this setting is true, note the following: you must set TargetTablePrepMode to nothing, and DMS automatically sets useUpdateLookup to false. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | MongoDbSettings_ReplicateShardCollections |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the MongoDB endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, DMS retrieves the entire document from the MongoDB source during migration. This may cause a migration failure if the server response exceeds bandwidth limits. To fetch only updates and deletes during migration, set this parameter to false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
DatabaseName
request parameter on either the CreateEndpoint
or ModifyEndpoint
API call. Specifying DatabaseName
when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
eventsPollInterval=5;
In the example, DMS checks for changes in the binary logs every five seconds. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
maxFileSize=512
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
parallelLoadThreads=1
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | MySQLSettings_ParallelLoadThreads |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the MySQL endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Endpoint.Address
field.For an Aurora MySQL instance, this is the output of DescribeDBClusters, in the Endpoint
field. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SPECIFIC_DATABASE
, specify the database name using the DatabaseName
parameter of the Endpoint
object.Example: targetDbType=MULTIPLE_DATABASES
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn
. The default is false
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
S3BucketName
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
iam:PassRole
action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
false
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
ArchivedLogDestId
in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover. Although DMS supports the use of the Oracle RESETLOGS
option to open the database, never use RESETLOGS
unless necessary. For additional information about RESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
to enable replication of Oracle tables containing columns that are nested tables or defined types. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | OracleSettings_AllowSelectNestedTables |
AdditionalArchivedLogDestId
option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Y
, DMS only accesses the archived redo logs. If the archived redo logs are stored on Automatic Storage Management (ASM) only, the DMS user account needs to be granted ASM privileges. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
asm_user_password
value. You set this value as part of the comma-separated value that you set to the Password
request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
asm_server
value. You set asm_server
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
asm_user
value. You set asm_user
as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
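A sketch of supplying the three ASM values above for Binary Reader CDC; the server string format and all credentials are assumptions for illustration:

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
Edit-DMSEndpoint -EndpointArn $arn `
    -OracleSettings_AsmServer 'asm-host.example.com:1521/+ASM' `
    -OracleSettings_AsmUser 'asm_user' `
    -OracleSettings_AsmPassword 'example-password'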
CHAR
. Otherwise, the character column length is in bytes.Example: charLengthSemantics=CHAR;
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | OracleSettings_CharLengthSemantics |
timezone
datatype to their UTC value. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, this attribute specifies a parallel load when useDirectPathFullLoad
is set to Y
. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
dest_id
column in the v$archived_log
view. Use this setting with the archivedLogDestId
extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup. This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings. archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings. archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although DMS supports the use of the Oracle RESETLOGS
option to open the database, never use RESETLOGS
unless it's necessary. For more information about RESETLOGS
, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | OracleSettings_ExtraArchivedLogDestIds |
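The extra-connection-attribute examples above map onto cmdlet parameters roughly as follows (destination IDs illustrative):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
# Primary-to-multiple-standby setup: primary destination 1, standbys 2-4.
Edit-DMSEndpoint -EndpointArn $arn `
    -OracleSettings_ArchivedLogDestId 1 `
    -OracleSettings_ExtraArchivedLogDestId 2,3,4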
true
, this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize
. If a task is set to limited LOB mode and this option is set to true
, the task fails instead of truncating the LOB data. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
numberDataTypeScale=12
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
OpenTransactionWindow
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
readAheadBlocks
attribute. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | OracleSettings_ParallelAsmReadThreads |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | OracleSettings_ReadAheadBlocks |
true
, this attribute supports tablespace replication. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
usePathPrefix
setting to access the redo logs. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
retryInterval=6;
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerOracleAsmSecret
. This SecretsManagerOracleAsmSecret
has the secret value that allows access to the Oracle ASM of the endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerOracleAsmSecretId
. Or you can specify clear-text values for AsmUser
, AsmPassword
, and AsmServerName
. You can't specify both. For more information on creating this SecretsManagerOracleAsmSecret
and the SecretsManagerOracleAsmAccessRoleArn
and SecretsManagerOracleAsmSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerOracleAsmSecret
that contains the Oracle ASM connection details for the Oracle endpoint. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the Oracle endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
TDE_Password
part of the comma-separated value you set to the Password
request parameter when you create the endpoint. The SecurityDbEncryption
setting is related to this SecurityDbEncryptionName
setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecurityDbEncryption
setting. For more information on setting the key name value of SecurityDbEncryptionName
, see the information and example for setting the securityDbEncryptionName
extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Endpoint.Address
field. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SDO_GEOMETRY
to GEOJSON
format. By default, DMS calls the SDO2GEOJSON
custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDO2GEOJSON
and set SpatialDataOptionToGeoJsonFunctionName
to call it instead. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
TrimSpaceInChar
source endpoint setting to trim data on CHAR and NCHAR data types during migration. The default value is true
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
UseLogminerReader
to N to set this attribute to Y. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
UseLogminerReader
to N, also set UseBfile
to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
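So switching an Oracle source from LogMiner to Binary Reader might be sketched as follows (ARN hypothetical):

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
# Turn LogMiner off and BFILE (Binary Reader) access on.
Edit-DMSEndpoint -EndpointArn $arn `
    -OracleSettings_UseLogminerReader $false `
    -OracleSettings_UseBFile $true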
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
afterConnectScript=SET session_replication_role='replica'
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
N
, you don't have to create tables or triggers on the source database. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | PostgreSQLSettings_CaptureDdls |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
ddlArtifactsSchema=xyzddlschema;
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
executeTimeout=100;
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
true
, this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize
. If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
restart_lsn
moving and prevents storage full scenarios. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
varchar(5)
. You must set this setting on both the source and target endpoints for it to take effect. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
maxFileSize=512
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
. The role must allow the iam:PassRole
action. SecretsManagerSecret
has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
SecretsManagerSecret
that contains the PostgreSQL endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Endpoint.Address
field.For an Aurora PostgreSQL instance, this is the output of DescribeDBClusters, in the Endpoint
field. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
CdcStartPosition
request parameter for the DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting of CdcStartPosition
. If the specified slot doesn't exist or the task doesn't have a valid CdcStartPosition
setting, DMS raises an error.For more information about setting the CdcStartPosition
request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about using CdcStartPosition
, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
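A sketch of pinning CDC to an existing logical replication slot (slot name hypothetical); the slot must already exist when the task uses a native CdcStartPosition:

$arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'   # hypothetical
Edit-DMSEndpoint -EndpointArn $arn `
    -PostgreSQLSettings_SlotName 'dms_example_slot'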
TrimSpaceInChar
source endpoint setting to trim data on CHAR and NCHAR data types during migration. The default value is true
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
auth-role
and auth-token
options of the AuthType
setting for a Redis target endpoint. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
none
, auth-token
, and auth-role
. The auth-token
option requires an AuthPassword
value to be provided. The auth-role
option requires AuthUserName
and AuthPassword
values to be provided. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
auth-role
option of the AuthType
setting for a Redis target endpoint. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include plaintext and ssl-encryption. The default is ssl-encryption. The ssl-encryption option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using the SslCaCertificateArn setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
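Because ACCEPTANYDATE should always be paired with DATEFORMAT, a minimal sketch (placeholder ARN) sets both together:
# Load unparseable dates as NULL rather than failing the task.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -RedshiftSettings_AcceptAnyDate $true `
  -RedshiftSettings_DateFormat 'auto'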
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_CaseSensitiveNames |
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket". Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_ExplicitIds |
The number of threads used to upload a single file. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_FileTransferUploadStreams |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The password for the user named in the username property. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_RemoveQuotes |
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?". Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_ReplaceChars |
A list of characters that you want to replace. Use with ReplaceChars. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_ReplaceInvalidChars |
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_TrimBlanks |
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RedshiftSettings_TruncateColumns |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | S3Settings_CannedAclForObjects |
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file. For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide. DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | S3Settings_CdcInsertsAndUpdates |
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target. If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide. DMS supports the preceding interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
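A minimal sketch of an S3 target that migrates only INSERTs and annotates them; the ARN is a placeholder, -S3Settings_CdcInsertsOnly appears in the cmdlet syntax, and -S3Settings_IncludeOpForFullLoad is assumed from the S3Settings_ prefix convention:
# With both set, each CDC record carries an I in its first field.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_CdcInsertsOnly $true `
  -S3Settings_IncludeOpForFullLoad $true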
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName. For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData. If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData. For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
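Following the second example above, a sketch with placeholder names; the bucket, folder, and CDC path parameters appear in the cmdlet syntax, while -S3Settings_PreserveTransaction is assumed from the alias (S3Settings_PreserveTransactions) listed for that setting below:
# Ordered CDC files land under MyTargetBucket/MyTargetData/MyChangedData.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_BucketName 'MyTargetBucket' `
  -S3Settings_BucketFolder 'MyTargetData' `
  -S3Settings_CdcPath 'MyChangedData' `
  -S3Settings_PreserveTransaction $true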
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
NULL. The default value is NULL. Valid values include any valid string. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n). Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The format of the data that you want to use for output. You can choose one of the following: csv is a row-based file format with comma-separated values (.csv); parquet is Apache Parquet (.parquet), a columnar storage file format that features efficient compression and provides faster query response. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example: s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}' Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
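The equivalent of that s3-settings example through this cmdlet's individual parameters (all of which appear in the cmdlet syntax; the ARN is a placeholder) might look like:
# Partition CDC files by commit date in the Asia/Seoul time zone.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_DatePartitionEnabled $true `
  -S3Settings_DatePartitionSequence YYYYMMDDHH `
  -S3Settings_DatePartitionDelimiter SLASH `
  -S3Settings_DatePartitionTimezone 'Asia/Seoul' `
  -S3Settings_BucketName 'dms-nattarat-test'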
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | S3Settings_EnableStatistics |
The type of encoding that you're using: RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently; this is the default. PLAIN doesn't use encoding at all; values are stored as they are. PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column; the dictionary is stored in a dictionary page for each column chunk. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}' When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
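A sketch of the same ownership check through this cmdlet; -S3Settings_ExpectedBucketOwner is assumed from the S3Settings_ prefix convention, and the ARN and account ID are placeholders:
# S3 rejects the request if the bucket is owned by a different account.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_ExpectedBucketOwner '123456789012'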
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | S3Settings_IgnoreHeaderRows |
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database. DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later. DMS supports the use of .parquet files with the IncludeOpForFullLoad parameter in versions 3.4.7 and later. For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load. This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later. When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision. Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue. DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision. Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0. Required? | False |
. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | S3Settings_PreserveTransactions |
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value. For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice. The default value is true. Valid values include true, false, y, and n. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only. If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024). Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key. Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
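A comparable sketch with this cmdlet; -S3Settings_EncryptionMode and -S3Settings_ServerSideEncryptionKmsKeyId are assumed from the S3Settings_ prefix convention, and all values are placeholders:
# Switch the target to KMS-managed server-side encryption.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_EncryptionMode SSE_KMS `
  -S3Settings_ServerSideEncryptionKmsKeyId 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID'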
The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
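As a sketch, adding a named timestamp column with column headers in the .csv output; the ARN is a placeholder, -S3Settings_AddColumnName appears in the cmdlet syntax, and -S3Settings_TimestampColumnName is assumed from the S3Settings_ prefix convention:
# Each record gains a dms_commit_ts column; headers include its name.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -S3Settings_TimestampColumnName 'dms_commit_ts' `
  -S3Settings_AddColumnName $true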
If this setting is set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The role must allow the iam:PassRole action. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
none. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the SAP ASE endpoint connection details. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Set this attribute to true to specify that DMS only applies inserts and updates, and not deletes. Amazon Timestream does not allow deleting records, so if this value is false, DMS nulls out the corresponding record in the Timestream database rather than deleting it. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | TimestreamSettings_CdcInsertsAndUpdates |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Set this attribute to true to enable memory store writes. When this value is false, DMS does not write records that are older in days than the value specified in MagneticDuration, because Amazon Timestream does not allow memory writes by default. For more information, see Storage in the Amazon Timestream Developer Guide. Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | TimestreamSettings_EnableMagneticStoreWrites |
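An illustrative sketch for a Timestream target using parameters from the cmdlet syntax (the ARN and database name are placeholders):
# Apply only inserts/updates and allow writes older than the memory-store window.
Edit-DMSEndpoint -EndpointArn 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE' `
  -TimestreamSettings_DatabaseName 'MyTimestreamDb' `
  -TimestreamSettings_CdcInsertsAndUpdate $true `
  -TimestreamSettings_EnableMagneticStoreWrite $true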
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | AK |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByValue, ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByValue, ByPropertyName) |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | AWSProfilesLocation, ProfilesLocation |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | StoredCredentials, AWSProfileName |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | RegionToCall |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | SK, SecretAccessKey |
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | ST |
AWS Tools for PowerShell: 2.x.y.z