AWS Tools for Windows PowerShell
Command Reference


Synopsis

Calls the AWS Database Migration Service CreateEndpoint API operation.

Syntax

New-DMSEndpoint
-EndpointIdentifier <String>
-RedshiftSettings_AcceptAnyDate <Boolean>
-OracleSettings_AccessAlternateDirectly <Boolean>
-S3Settings_AddColumnName <Boolean>
-OracleSettings_AdditionalArchivedLogDestId <Int32>
-OracleSettings_AddSupplementalLogging <Boolean>
-GcpMySQLSettings_AfterConnectScript <String>
-MySQLSettings_AfterConnectScript <String>
-PostgreSQLSettings_AfterConnectScript <String>
-RedshiftSettings_AfterConnectScript <String>
-OracleSettings_AllowSelectNestedTable <Boolean>
-OracleSettings_ArchivedLogDestId <Int32>
-OracleSettings_ArchivedLogsOnly <Boolean>
-OracleSettings_AsmPassword <String>
-OracleSettings_AsmServer <String>
-OracleSettings_AsmUser <String>
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
-RedisSettings_AuthPassword <String>
-MongoDbSettings_AuthSource <String>
-MongoDbSettings_AuthType <AuthTypeValue>
-RedisSettings_AuthType <RedisAuthTypeValue>
-RedisSettings_AuthUserName <String>
-MicrosoftSQLServerSettings_BcpPacketSize <Int32>
-KafkaSettings_Broker <String>
-RedshiftSettings_BucketFolder <String>
-S3Settings_BucketFolder <String>
-DmsTransferSettings_BucketName <String>
-RedshiftSettings_BucketName <String>
-S3Settings_BucketName <String>
-S3Settings_CannedAclForObject <CannedAclForObjectsValue>
-PostgreSQLSettings_CaptureDdl <Boolean>
-RedshiftSettings_CaseSensitiveName <Boolean>
-S3Settings_CdcInsertsAndUpdate <Boolean>
-S3Settings_CdcInsertsOnly <Boolean>
-S3Settings_CdcMaxBatchInterval <Int32>
-S3Settings_CdcMinFileSize <Int32>
-S3Settings_CdcPath <String>
-CertificateArn <String>
-OracleSettings_CharLengthSemantic <CharLengthSemantics>
-GcpMySQLSettings_CleanSourceMetadataOnMismatch <Boolean>
-MySQLSettings_CleanSourceMetadataOnMismatch <Boolean>
-S3Settings_CompressionType <CompressionTypeValue>
-RedshiftSettings_CompUpdate <Boolean>
-RedshiftSettings_ConnectionTimeout <Int32>
-MicrosoftSQLServerSettings_ControlTablesFileGroup <String>
-S3Settings_CsvDelimiter <String>
-S3Settings_CsvNoSupValue <String>
-S3Settings_CsvNullValue <String>
-S3Settings_CsvRowDelimiter <String>
-IBMDb2Settings_CurrentLsn <String>
-DatabaseName <String>
-DocDbSettings_DatabaseName <String>
-GcpMySQLSettings_DatabaseName <String>
-IBMDb2Settings_DatabaseName <String>
-MicrosoftSQLServerSettings_DatabaseName <String>
-MongoDbSettings_DatabaseName <String>
-MySQLSettings_DatabaseName <String>
-OracleSettings_DatabaseName <String>
-PostgreSQLSettings_DatabaseName <String>
-RedshiftSettings_DatabaseName <String>
-SybaseSettings_DatabaseName <String>
-S3Settings_DataFormat <DataFormatValue>
-S3Settings_DataPageSize <Int32>
-RedshiftSettings_DateFormat <String>
-S3Settings_DatePartitionDelimiter <DatePartitionDelimiterValue>
-S3Settings_DatePartitionEnabled <Boolean>
-S3Settings_DatePartitionSequence <DatePartitionSequenceValue>
-S3Settings_DatePartitionTimezone <String>
-PostgreSQLSettings_DdlArtifactsSchema <String>
-S3Settings_DictPageSizeLimit <Int32>
-OracleSettings_DirectPathNoLog <Boolean>
-OracleSettings_DirectPathParallelLoad <Boolean>
-DocDbSettings_DocsToInvestigate <Int32>
-MongoDbSettings_DocsToInvestigate <String>
-RedshiftSettings_EmptyAsNull <Boolean>
-OracleSettings_EnableHomogenousTablespace <Boolean>
-S3Settings_EnableStatistic <Boolean>
-S3Settings_EncodingType <EncodingTypeValue>
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
-S3Settings_EncryptionMode <EncryptionModeValue>
-EndpointType <ReplicationEndpointTypeValue>
-ElasticsearchSettings_EndpointUri <String>
-EngineName <String>
-ElasticsearchSettings_ErrorRetryDuration <Int32>
-NeptuneSettings_ErrorRetryDuration <Int32>
-GcpMySQLSettings_EventsPollInterval <Int32>
-MySQLSettings_EventsPollInterval <Int32>
-PostgreSQLSettings_ExecuteTimeout <Int32>
-RedshiftSettings_ExplicitId <Boolean>
-ExternalTableDefinition <String>
-S3Settings_ExternalTableDefinition <String>
-OracleSettings_ExtraArchivedLogDestId <Int32[]>
-ExtraConnectionAttribute <String>
-DocDbSettings_ExtractDocId <Boolean>
-MongoDbSettings_ExtractDocId <String>
-OracleSettings_FailTasksOnLobTruncation <Boolean>
-PostgreSQLSettings_FailTasksOnLobTruncation <Boolean>
-RedshiftSettings_FileTransferUploadStream <Int32>
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
-PostgreSQLSettings_HeartbeatEnable <Boolean>
-PostgreSQLSettings_HeartbeatFrequency <Int32>
-PostgreSQLSettings_HeartbeatSchema <String>
-NeptuneSettings_IamAuthEnabled <Boolean>
-S3Settings_IgnoreHeaderRow <Int32>
-KafkaSettings_IncludeControlDetail <Boolean>
-KinesisSettings_IncludeControlDetail <Boolean>
-KafkaSettings_IncludeNullAndEmpty <Boolean>
-KinesisSettings_IncludeNullAndEmpty <Boolean>
-S3Settings_IncludeOpForFullLoad <Boolean>
-KafkaSettings_IncludePartitionValue <Boolean>
-KinesisSettings_IncludePartitionValue <Boolean>
-KafkaSettings_IncludeTableAlterOperation <Boolean>
-KinesisSettings_IncludeTableAlterOperation <Boolean>
-KafkaSettings_IncludeTransactionDetail <Boolean>
-KinesisSettings_IncludeTransactionDetail <Boolean>
-DocDbSettings_KmsKeyId <String>
-KmsKeyId <String>
-MongoDbSettings_KmsKeyId <String>
-RedshiftSettings_LoadTimeout <Int32>
-GcpMySQLSettings_MaxFileSize <Int32>
-MySQLSettings_MaxFileSize <Int32>
-NeptuneSettings_MaxFileSize <Int32>
-PostgreSQLSettings_MaxFileSize <Int32>
-RedshiftSettings_MaxFileSize <Int32>
-S3Settings_MaxFileSize <Int32>
-IBMDb2Settings_MaxKBytesPerRead <Int32>
-NeptuneSettings_MaxRetryCount <Int32>
-KafkaSettings_MessageFormat <MessageFormatValue>
-KinesisSettings_MessageFormat <MessageFormatValue>
-KafkaSettings_MessageMaxByte <Int32>
-DocDbSettings_NestingLevel <NestingLevelValue>
-MongoDbSettings_NestingLevel <NestingLevelValue>
-KafkaSettings_NoHexPrefix <Boolean>
-KinesisSettings_NoHexPrefix <Boolean>
-OracleSettings_NumberDatatypeScale <Int32>
-OracleSettings_OraclePathPrefix <String>
-OracleSettings_ParallelAsmReadThread <Int32>
-GcpMySQLSettings_ParallelLoadThread <Int32>
-MySQLSettings_ParallelLoadThread <Int32>
-S3Settings_ParquetTimestampInMillisecond <Boolean>
-S3Settings_ParquetVersion <ParquetVersionValue>
-KafkaSettings_PartitionIncludeSchemaTable <Boolean>
-KinesisSettings_PartitionIncludeSchemaTable <Boolean>
-DocDbSettings_Password <String>
-GcpMySQLSettings_Password <String>
-IBMDb2Settings_Password <String>
-MicrosoftSQLServerSettings_Password <String>
-MongoDbSettings_Password <String>
-MySQLSettings_Password <String>
-OracleSettings_Password <String>
-Password <String>
-PostgreSQLSettings_Password <String>
-RedshiftSettings_Password <String>
-SybaseSettings_Password <String>
-PostgreSQLSettings_PluginName <PluginNameValue>
-DocDbSettings_Port <Int32>
-GcpMySQLSettings_Port <Int32>
-IBMDb2Settings_Port <Int32>
-MicrosoftSQLServerSettings_Port <Int32>
-MongoDbSettings_Port <Int32>
-MySQLSettings_Port <Int32>
-OracleSettings_Port <Int32>
-Port <Int32>
-PostgreSQLSettings_Port <Int32>
-RedisSettings_Port <Int32>
-RedshiftSettings_Port <Int32>
-SybaseSettings_Port <Int32>
-S3Settings_PreserveTransaction <Boolean>
-MicrosoftSQLServerSettings_QuerySingleAlwaysOnNode <Boolean>
-OracleSettings_ReadAheadBlock <Int32>
-MicrosoftSQLServerSettings_ReadBackupOnly <Boolean>
-OracleSettings_ReadTableSpaceName <Boolean>
-RedshiftSettings_RemoveQuote <Boolean>
-RedshiftSettings_ReplaceChar <String>
-RedshiftSettings_ReplaceInvalidChar <String>
-OracleSettings_ReplacePathPrefix <Boolean>
-ResourceIdentifier <String>
-OracleSettings_RetryInterval <Int32>
-S3Settings_Rfc4180 <Boolean>
-S3Settings_RowGroupLength <Int32>
-NeptuneSettings_S3BucketFolder <String>
-NeptuneSettings_S3BucketName <String>
-MicrosoftSQLServerSettings_SafeguardPolicy <SafeguardPolicy>
-KafkaSettings_SaslPassword <String>
-KafkaSettings_SaslUsername <String>
-DocDbSettings_SecretsManagerAccessRoleArn <String>
-GcpMySQLSettings_SecretsManagerAccessRoleArn <String>
-IBMDb2Settings_SecretsManagerAccessRoleArn <String>
-MicrosoftSQLServerSettings_SecretsManagerAccessRoleArn <String>
-MongoDbSettings_SecretsManagerAccessRoleArn <String>
-MySQLSettings_SecretsManagerAccessRoleArn <String>
-OracleSettings_SecretsManagerAccessRoleArn <String>
-PostgreSQLSettings_SecretsManagerAccessRoleArn <String>
-RedshiftSettings_SecretsManagerAccessRoleArn <String>
-SybaseSettings_SecretsManagerAccessRoleArn <String>
-OracleSettings_SecretsManagerOracleAsmAccessRoleArn <String>
-OracleSettings_SecretsManagerOracleAsmSecretId <String>
-DocDbSettings_SecretsManagerSecretId <String>
-GcpMySQLSettings_SecretsManagerSecretId <String>
-IBMDb2Settings_SecretsManagerSecretId <String>
-MicrosoftSQLServerSettings_SecretsManagerSecretId <String>
-MongoDbSettings_SecretsManagerSecretId <String>
-MySQLSettings_SecretsManagerSecretId <String>
-OracleSettings_SecretsManagerSecretId <String>
-PostgreSQLSettings_SecretsManagerSecretId <String>
-RedshiftSettings_SecretsManagerSecretId <String>
-SybaseSettings_SecretsManagerSecretId <String>
-OracleSettings_SecurityDbEncryption <String>
-OracleSettings_SecurityDbEncryptionName <String>
-KafkaSettings_SecurityProtocol <KafkaSecurityProtocol>
-DocDbSettings_ServerName <String>
-GcpMySQLSettings_ServerName <String>
-IBMDb2Settings_ServerName <String>
-MicrosoftSQLServerSettings_ServerName <String>
-MongoDbSettings_ServerName <String>
-MySQLSettings_ServerName <String>
-OracleSettings_ServerName <String>
-PostgreSQLSettings_ServerName <String>
-RedisSettings_ServerName <String>
-RedshiftSettings_ServerName <String>
-ServerName <String>
-SybaseSettings_ServerName <String>
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
-S3Settings_ServerSideEncryptionKmsKeyId <String>
-GcpMySQLSettings_ServerTimezone <String>
-MySQLSettings_ServerTimezone <String>
-DmsTransferSettings_ServiceAccessRoleArn <String>
-DynamoDbSettings_ServiceAccessRoleArn <String>
-ElasticsearchSettings_ServiceAccessRoleArn <String>
-KinesisSettings_ServiceAccessRoleArn <String>
-NeptuneSettings_ServiceAccessRoleArn <String>
-RedshiftSettings_ServiceAccessRoleArn <String>
-S3Settings_ServiceAccessRoleArn <String>
-ServiceAccessRoleArn <String>
-IBMDb2Settings_SetDataCaptureChange <Boolean>
-PostgreSQLSettings_SlotName <String>
-OracleSettings_SpatialDataOptionToGeoJsonFunctionName <String>
-KafkaSettings_SslCaCertificateArn <String>
-RedisSettings_SslCaCertificateArn <String>
-KafkaSettings_SslClientCertificateArn <String>
-KafkaSettings_SslClientKeyArn <String>
-KafkaSettings_SslClientKeyPassword <String>
-SslMode <DmsSslModeValue>
-RedisSettings_SslSecurityProtocol <SslSecurityProtocolValue>
-OracleSettings_StandbyDelayTime <Int32>
-KinesisSettings_StreamArn <String>
-Tag <Tag[]>
-GcpMySQLSettings_TargetDbType <TargetDbType>
-MySQLSettings_TargetDbType <TargetDbType>
-RedshiftSettings_TimeFormat <String>
-S3Settings_TimestampColumnName <String>
-KafkaSettings_Topic <String>
-RedshiftSettings_TrimBlank <Boolean>
-RedshiftSettings_TruncateColumn <Boolean>
-OracleSettings_UseAlternateFolderForOnline <Boolean>
-MicrosoftSQLServerSettings_UseBcpFullLoad <Boolean>
-OracleSettings_UseBFile <Boolean>
-S3Settings_UseCsvNoSupValue <Boolean>
-OracleSettings_UseDirectPathFullLoad <Boolean>
-OracleSettings_UseLogminerReader <Boolean>
-OracleSettings_UsePathPrefix <String>
-DocDbSettings_Username <String>
-GcpMySQLSettings_Username <String>
-IBMDb2Settings_Username <String>
-MicrosoftSQLServerSettings_Username <String>
-MongoDbSettings_Username <String>
-MySQLSettings_Username <String>
-OracleSettings_Username <String>
-PostgreSQLSettings_Username <String>
-RedshiftSettings_Username <String>
-SybaseSettings_Username <String>
-Username <String>
-S3Settings_UseTaskStartTimeForFullLoadTimestamp <Boolean>
-MicrosoftSQLServerSettings_UseThirdPartyBackupDevice <Boolean>
-RedshiftSettings_WriteBufferSize <Int32>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>

Description

Creates an endpoint using the provided settings. For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on the CreateEndpoint API call. Specifying DatabaseName when you create a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.
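As a minimal sketch, the following call registers a PostgreSQL source endpoint. The identifier, server address, credentials, and database name are placeholder values; substitute values for your own environment.

# Register a PostgreSQL source endpoint (placeholder connection values).
$params = @{
    EndpointIdentifier = 'postgres-source-1'
    EndpointType       = 'source'
    EngineName         = 'postgres'
    ServerName         = 'db.example.internal'
    Port               = 5432
    DatabaseName       = 'inventory'
    Username           = 'dms_user'
    Password           = 'example-password'
}
New-DMSEndpoint @params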

Parameters

-CertificateArn <String>
The Amazon Resource Name (ARN) for the certificate.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DatabaseName <String>
The name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DmsTransferSettings_BucketName <String>
The name of the S3 bucket to use.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DmsTransferSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_DatabaseName <String>
The database name on the DocumentDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_DocsToInvestigate <Int32>
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one". Must be a positive value greater than 0. Default value is 1000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_ExtractDocId <Boolean>
Specifies the document ID. Use this setting when NestingLevel is set to "none". Default value is "false".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_KmsKeyId <String>
The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_NestingLevel <NestingLevelValue>
Specifies either document or table mode. Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_Password <String>
The password for the user account you use to access the DocumentDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_Port <Int32>
The port value for the DocumentDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the DocumentDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the DocumentDB endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
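The Secrets Manager route looks like the following sketch. The role ARN, secret name, and database name are hypothetical; when you use this route, omit the clear-text DocDbSettings_Username, DocDbSettings_Password, DocDbSettings_ServerName, and DocDbSettings_Port values.

# DocumentDB source endpoint using a Secrets Manager secret (hypothetical names).
New-DMSEndpoint -EndpointIdentifier 'docdb-source-1' -EndpointType 'source' -EngineName 'docdb' `
    -DocDbSettings_SecretsManagerAccessRoleArn 'arn:aws:iam::123456789012:role/dms-secrets-access' `
    -DocDbSettings_SecretsManagerSecretId 'docdb-connection-secret' `
    -DocDbSettings_DatabaseName 'appdata'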
-DocDbSettings_ServerName <String>
The name of the server on the DocumentDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DocDbSettings_Username <String>
The user name you use to access the DocumentDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DynamoDbSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_EndpointUri <String>
The endpoint for the OpenSearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_ErrorRetryDuration <Int32>
The maximum number of seconds for which DMS retries failed API requests to the OpenSearch cluster.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
The maximum percentage of records that can fail to be written before a full load operation stops. To avoid early failure, this counter is only effective after 1000 records are transferred. OpenSearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
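A sketch of an OpenSearch target endpoint follows. The domain endpoint, role ARN, and thresholds are placeholder values.

# OpenSearch target endpoint (placeholder domain and role ARN).
New-DMSEndpoint -EndpointIdentifier 'opensearch-target-1' -EndpointType 'target' -EngineName 'opensearch' `
    -ElasticsearchSettings_EndpointUri 'search-mydomain.us-east-1.es.amazonaws.com' `
    -ElasticsearchSettings_ServiceAccessRoleArn 'arn:aws:iam::123456789012:role/dms-opensearch-access' `
    -ElasticsearchSettings_FullLoadErrorPercentage 10 `
    -ElasticsearchSettings_ErrorRetryDuration 300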
-EndpointIdentifier <String>
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen, or contain two consecutive hyphens.
Required?True
Position?1
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointType <ReplicationEndpointTypeValue>
The type of endpoint. Valid values are source and target.
Required?True
Position?Named
Accept pipeline input?True (ByPropertyName)
-EngineName <String>
The type of engine for the endpoint. Valid values, depending on the EndpointType value, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "opensearch", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "docdb", "sqlserver", and "neptune".
Required?True
Position?Named
Accept pipeline input?True (ByPropertyName)
-ExternalTableDefinition <String>
The external table definition.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ExtraConnectionAttribute <String>
Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with DMS Endpoints in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesExtraConnectionAttributes
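For example, a semicolon-separated attribute string for a MySQL-compatible target might look like the following sketch. The server, credentials, and attribute values are placeholders; the attribute names maxFileSize and parallelLoadThreads are the MySQL-compatible settings described elsewhere on this page.

# Pass extra connection attributes as a single name=value;name=value string.
New-DMSEndpoint -EndpointIdentifier 'mysql-target-eca' -EndpointType 'target' -EngineName 'mysql' `
    -ServerName 'mysql.example.internal' -Port 3306 -Username 'dms_user' -Password 'example-password' `
    -ExtraConnectionAttribute 'maxFileSize=512;parallelLoadThreads=2'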
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_AfterConnectScript <String>
Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails. For this parameter, provide the code of the script itself, not the name of a file containing the script.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_CleanSourceMetadataOnMismatch <Boolean>
Cleans and recreates table metadata information on the replication instance when a mismatch occurs. For example, running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_DatabaseName <String>
Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_EventsPollInterval <Int32>
Specifies how often to check the binary log for new changes/events when the database is idle. The default is five seconds. Example: eventsPollInterval=5; In the example, DMS checks for changes in the binary logs every five seconds.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_MaxFileSize <Int32>
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database. Example: maxFileSize=512
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_ParallelLoadThread <Int32>
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one. Example: parallelLoadThreads=1
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesGcpMySQLSettings_ParallelLoadThreads
-GcpMySQLSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_Port <Int32>
The service has not provided documentation for this parameter; please refer to the service's API reference documentation for the latest available information.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_ServerTimezone <String>
Specifies the time zone for the source MySQL database. Example: serverTimezone=US/Pacific; Note: Do not enclose time zones in single quotes.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_TargetDbType <TargetDbType>
Specifies where to migrate source tables on the target, either to a single database or multiple databases. Example: targetDbType=MULTIPLE_DATABASES
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-GcpMySQLSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
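A sketch of a Google Cloud MySQL source endpoint follows. The engine name gcp-mysql is assumed here (it is not in the EngineName list above, so verify it against the current service documentation), and all connection values are placeholders.

# Google Cloud MySQL source endpoint (assumed engine name, placeholder values).
New-DMSEndpoint -EndpointIdentifier 'gcp-mysql-source-1' -EndpointType 'source' -EngineName 'gcp-mysql' `
    -GcpMySQLSettings_ServerName 'mysql.example.internal' -GcpMySQLSettings_Port 3306 `
    -GcpMySQLSettings_Username 'dms_user' -GcpMySQLSettings_Password 'example-password' `
    -GcpMySQLSettings_EventsPollInterval 5 -GcpMySQLSettings_ServerTimezone 'US/Pacific'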
-IBMDb2Settings_CurrentLsn <String>
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_DatabaseName <String>
Database name for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_MaxKBytesPerRead <Int32>
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_Port <Int32>
Endpoint TCP port. The default value is 50000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Db2 LUW endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-IBMDb2Settings_SetDataCaptureChange <Boolean>
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesIBMDb2Settings_SetDataCaptureChanges
-IBMDb2Settings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
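A sketch of a Db2 LUW source endpoint with ongoing replication enabled follows; the server, database, and credentials are placeholders.

# Db2 LUW source endpoint with CDC enabled (placeholder connection values).
New-DMSEndpoint -EndpointIdentifier 'db2-source-1' -EndpointType 'source' -EngineName 'db2' `
    -IBMDb2Settings_ServerName 'db2.example.internal' -IBMDb2Settings_Port 50000 `
    -IBMDb2Settings_DatabaseName 'SAMPLE' -IBMDb2Settings_Username 'dms_user' `
    -IBMDb2Settings_Password 'example-password' -IBMDb2Settings_SetDataCaptureChange $true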
-KafkaSettings_Broker <String>
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_IncludeControlDetail <Boolean>
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKafkaSettings_IncludeControlDetails
-KafkaSettings_IncludeNullAndEmpty <Boolean>
Include NULL and empty columns for records migrated to the endpoint. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_IncludePartitionValue <Boolean>
Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_IncludeTableAlterOperation <Boolean>
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKafkaSettings_IncludeTableAlterOperations
-KafkaSettings_IncludeTransactionDetail <Boolean>
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKafkaSettings_IncludeTransactionDetails
-KafkaSettings_MessageFormat <MessageFormatValue>
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_MessageMaxByte <Int32>
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKafkaSettings_MessageMaxBytes
-KafkaSettings_NoHexPrefix <Boolean>
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_PartitionIncludeSchemaTable <Boolean>
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SaslPassword <String>
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SaslUsername <String>
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SecurityProtocol <KafkaSecurityProtocol>
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SslCaCertificateArn <String>
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SslClientCertificateArn <String>
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SslClientKeyArn <String>
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_SslClientKeyPassword <String>
The password for the client private key used to securely connect to a Kafka target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KafkaSettings_Topic <String>
The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
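A sketch of a Kafka target endpoint using SASL-SSL authentication follows. The broker address, topic, and credentials are placeholders; the SecurityProtocol and MessageFormat values follow the value names described above.

# Kafka (MSK) target endpoint over SASL-SSL (placeholder broker and credentials).
New-DMSEndpoint -EndpointIdentifier 'kafka-target-1' -EndpointType 'target' -EngineName 'kafka' `
    -KafkaSettings_Broker 'b-1.mycluster.kafka.us-east-1.amazonaws.com:9096' `
    -KafkaSettings_Topic 'dms-replication' `
    -KafkaSettings_SecurityProtocol 'sasl-ssl' `
    -KafkaSettings_SaslUsername 'msk-user' -KafkaSettings_SaslPassword 'example-password' `
    -KafkaSettings_MessageFormat 'json'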
-KinesisSettings_IncludeControlDetail <Boolean>
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKinesisSettings_IncludeControlDetails
-KinesisSettings_IncludeNullAndEmpty <Boolean>
Include NULL and empty columns for records migrated to the endpoint. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_IncludePartitionValue <Boolean>
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_IncludeTableAlterOperation <Boolean>
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKinesisSettings_IncludeTableAlterOperations
-KinesisSettings_IncludeTransactionDetail <Boolean>
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesKinesisSettings_IncludeTransactionDetails
-KinesisSettings_MessageFormat <MessageFormatValue>
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_NoHexPrefix <Boolean>
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_PartitionIncludeSchemaTable <Boolean>
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Kinesis data stream. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_StreamArn <String>
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
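A sketch of a Kinesis Data Streams target endpoint follows. The stream ARN and role ARN are placeholders.

# Kinesis target endpoint (placeholder stream and role ARNs).
New-DMSEndpoint -EndpointIdentifier 'kinesis-target-1' -EndpointType 'target' -EngineName 'kinesis' `
    -KinesisSettings_StreamArn 'arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream' `
    -KinesisSettings_ServiceAccessRoleArn 'arn:aws:iam::123456789012:role/dms-kinesis-access' `
    -KinesisSettings_MessageFormat 'json' `
    -KinesisSettings_IncludePartitionValue $true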
-KmsKeyId <String>
A KMS key identifier that is used to encrypt the connection parameters for the endpoint. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_BcpPacketSize <Int32>
The maximum size of the packets (in bytes) used to transfer data using BCP.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_ControlTablesFileGroup <String>
Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_DatabaseName <String>
Database name for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_Port <Int32>
Endpoint TCP port.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_QuerySingleAlwaysOnNode <Boolean>
Adjusts the behavior of DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_ReadBackupOnly <Boolean>
When this attribute is set to Y, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_SafeguardPolicy <SafeguardPolicy>
Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods. Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method. Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the SQL Server endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_UseBcpFullLoad <Boolean>
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MicrosoftSQLServerSettings_UseThirdPartyBackupDevice <Boolean>
When this attribute is set to Y, DMS processes third-party transaction log backups if they are created in native format.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
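A sketch of a SQL Server source endpoint that reads only from transaction log backups follows; the server, database, and credentials are placeholders.

# SQL Server source endpoint reading from log backups only (placeholder values).
New-DMSEndpoint -EndpointIdentifier 'sqlserver-source-1' -EndpointType 'source' -EngineName 'sqlserver' `
    -MicrosoftSQLServerSettings_ServerName 'mssql.example.internal' -MicrosoftSQLServerSettings_Port 1433 `
    -MicrosoftSQLServerSettings_DatabaseName 'Sales' -MicrosoftSQLServerSettings_Username 'dms_user' `
    -MicrosoftSQLServerSettings_Password 'example-password' `
    -MicrosoftSQLServerSettings_ReadBackupOnly $true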
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
The authentication mechanism you use to access the MongoDB source endpoint. For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthSource <String>
The MongoDB database name. This setting isn't used when AuthType is set to "no". The default is "admin".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthType <AuthTypeValue>
The authentication type you use to access the MongoDB source endpoint. When set to "no", user name and password parameters are not used and can be empty.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_DatabaseName <String>
The database name on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_DocsToInvestigate <String>
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one". Must be a positive value greater than 0. Default value is 1000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_ExtractDocId <String>
Specifies the document ID. Use this setting when NestingLevel is set to "none". Default value is "false".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_KmsKeyId <String>
The KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_NestingLevel <NestingLevelValue>
Specifies either document or table mode. Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Password <String>
The password for the user account you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Port <Int32>
The port value for the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the MongoDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the MongoDB endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_ServerName <String>
The name of the server on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Username <String>
The user name you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
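A sketch of a MongoDB source endpoint in document mode with password authentication follows. The connection values are placeholders, and the AuthType, AuthMechanism, and NestingLevel strings follow the value names described above.

# MongoDB source endpoint, document mode, SCRAM-SHA-1 auth (placeholder values).
New-DMSEndpoint -EndpointIdentifier 'mongodb-source-1' -EndpointType 'source' -EngineName 'mongodb' `
    -MongoDbSettings_ServerName 'mongo.example.internal' -MongoDbSettings_Port 27017 `
    -MongoDbSettings_DatabaseName 'appdata' -MongoDbSettings_AuthType 'password' `
    -MongoDbSettings_AuthMechanism 'scram_sha_1' -MongoDbSettings_AuthSource 'admin' `
    -MongoDbSettings_Username 'dms_user' -MongoDbSettings_Password 'example-password' `
    -MongoDbSettings_NestingLevel 'none'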
-MySQLSettings_AfterConnectScript <String>
Specifies a script to run immediately after DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails. For this parameter, provide the code of the script itself, not the name of a file containing the script.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_CleanSourceMetadataOnMismatch <Boolean>
Cleans and recreates table metadata information on the replication instance when a mismatch occurs. For example, running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_DatabaseName <String>
Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the DMS task.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_EventsPollInterval <Int32>
Specifies how often to check the binary log for new changes/events when the database is idle. The default is five seconds. Example: eventsPollInterval=5; In the example, DMS checks for changes in the binary logs every five seconds.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_MaxFileSize <Int32>
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database. Example: maxFileSize=512
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_ParallelLoadThread <Int32>
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one. Example: parallelLoadThreads=1
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesMySQLSettings_ParallelLoadThreads
-MySQLSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_Port <Int32>
Endpoint TCP port.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_ServerTimezone <String>
Specifies the time zone for the source MySQL database. Example: serverTimezone=US/Pacific; Note: Do not enclose time zones in single quotes.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_TargetDbType <TargetDbType>
Specifies where to migrate source tables on the target, either to a single database or multiple databases. Example: targetDbType=MULTIPLE_DATABASES
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MySQLSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
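A sketch of a MySQL target endpoint tuned for parallel load follows. The connection values are placeholders, and the TargetDbType string shown is an assumed value name; note that DatabaseName is intentionally omitted, as described for MySQL endpoints above.

# MySQL target endpoint with parallel load settings (placeholder values).
New-DMSEndpoint -EndpointIdentifier 'mysql-target-1' -EndpointType 'target' -EngineName 'mysql' `
    -MySQLSettings_ServerName 'mysql.example.internal' -MySQLSettings_Port 3306 `
    -MySQLSettings_Username 'dms_user' -MySQLSettings_Password 'example-password' `
    -MySQLSettings_TargetDbType 'multiple-databases' `
    -MySQLSettings_ParallelLoadThread 2 -MySQLSettings_MaxFileSize 512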
-NeptuneSettings_ErrorRetryDuration <Int32>
The number of milliseconds for DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_IamAuthEnabled <Boolean>
If you want Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_MaxFileSize <Int32>
The maximum size in kilobytes of migrated graph data stored in a .csv file before DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to store the next batch of migrated graph data.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_MaxRetryCount <Int32>
The number of times for DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_S3BucketFolder <String>
A folder path where you want DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_S3BucketName <String>
The name of the Amazon S3 bucket where DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. DMS maps the SQL source data to graph data before storing it in these .csv files.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NeptuneSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the iam:PassRole action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
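A sketch of a Neptune target endpoint follows. The cluster endpoint, bucket, and role ARN are placeholders, and passing the cluster address through the top-level ServerName and Port parameters is an assumption for this sketch.

# Neptune target endpoint staging bulk loads through S3 (placeholder values).
New-DMSEndpoint -EndpointIdentifier 'neptune-target-1' -EndpointType 'target' -EngineName 'neptune' `
    -ServerName 'my-neptune-cluster.cluster-abcdefghij.us-east-1.neptune.amazonaws.com' -Port 8182 `
    -NeptuneSettings_S3BucketName 'my-dms-staging-bucket' -NeptuneSettings_S3BucketFolder 'neptune-load' `
    -NeptuneSettings_ServiceAccessRoleArn 'arn:aws:iam::123456789012:role/dms-neptune-access' `
    -NeptuneSettings_IamAuthEnabled $true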
-OracleSettings_AccessAlternateDirectly <Boolean>
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AdditionalArchivedLogDestId <Int32>
Set this attribute with ArchivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover. Although DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless necessary. For additional information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AddSupplementalLogging <Boolean>
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task. If you use this option, you still need to enable database-level supplemental logging.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AllowSelectNestedTable <Boolean>
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesOracleSettings_AllowSelectNestedTables
-OracleSettings_ArchivedLogDestId <Int32>
Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the AdditionalArchivedLogDestId option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_ArchivedLogsOnly <Boolean>
When this field is set to Y, DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the DMS user account needs to be granted ASM privileges.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AsmPassword <String>
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AsmServer <String>
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_AsmUser <String>
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
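When Binary Reader reads logs managed by ASM, the three ASM values are typically supplied together. A hedged sketch with placeholder server, user, and password values:
# Hypothetical Oracle source endpoint whose redo logs live on ASM (Binary Reader CDC).
New-DMSEndpoint -EndpointIdentifier "oracle-asm-source" -EndpointType source -EngineName oracle `
    -OracleSettings_UseLogminerReader $false -OracleSettings_UseBFile $true `
    -OracleSettings_AsmServer "asm-host.example.com:1521/+ASM" `
    -OracleSettings_AsmUser "asm_user" -OracleSettings_AsmPassword "<asm-password>"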
-OracleSettings_CharLengthSemantic <CharLengthSemantics>
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR. Otherwise, the character column length is in bytes. Example: charLengthSemantics=CHAR;
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesOracleSettings_CharLengthSemantics
-OracleSettings_DatabaseName <String>
Database name for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_DirectPathNoLog <Boolean>
When set to true, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_DirectPathParallelLoad <Boolean>
When set to true, this attribute specifies a parallel load when useDirectPathFullLoad is set to Y. This attribute also only applies when you use the DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_EnableHomogenousTablespace <Boolean>
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_ExtraArchivedLogDestId <Int32[]>
Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the dest_id column in the v$archived_log view. Use this setting with the archivedLogDestId extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup. This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, DMS needs information about what destination to get archive redo logs from to read changes. DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings: archivedLogDestId=1; ExtraArchivedLogDestIds=[2]. In a primary-to-multiple-standby setup, you might apply the following settings: archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]. Although DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless it's necessary. For more information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesOracleSettings_ExtraArchivedLogDestIds
-OracleSettings_FailTasksOnLobTruncation <Boolean>
When set to true, this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize. If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_NumberDatatypeScale <Int32>
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10. Example: numberDataTypeScale=12
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_OraclePathPrefix <String>
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle source. This value specifies the default Oracle root used to access the redo logs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_ParallelAsmReadThread <Int32>
Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesOracleSettings_ParallelAsmReadThreads
-OracleSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_Port <Int32>
Endpoint TCP port.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_ReadAheadBlock <Int32>
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesOracleSettings_ReadAheadBlocks
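Because ParallelAsmReadThread and ReadAheadBlock are tuned together, a sketch that raises both from their defaults might look like the following; the values shown are illustrative, not recommendations, and the identifier is a placeholder.
# Hypothetical tuning of ASM-based CDC: 6 reader threads and a larger read-ahead window.
New-DMSEndpoint -EndpointIdentifier "oracle-asm-tuned" -EndpointType source -EngineName oracle `
    -OracleSettings_ParallelAsmReadThread 6 -OracleSettings_ReadAheadBlock 150000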
-OracleSettings_ReadTableSpaceName <Boolean>
When set to true, this attribute supports tablespace replication.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_ReplacePathPrefix <Boolean>
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_RetryInterval <Int32>
Specifies the number of seconds that the system waits before resending a query. Example: retryInterval=6;
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Oracle endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SecretsManagerOracleAsmAccessRoleArn <String>
Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the SecretsManagerOracleAsmSecret. This SecretsManagerOracleAsmSecret has the secret value that allows access to the Oracle ASM of the endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerOracleAsmSecretId. Or you can specify clear-text values for AsmUserName, AsmPassword, and AsmServerName. You can't specify both. For more information on creating this SecretsManagerOracleAsmSecret and the SecretsManagerOracleAsmAccessRoleArn and SecretsManagerOracleAsmSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SecretsManagerOracleAsmSecretId <String>
Required only if your Oracle endpoint uses Automatic Storage Management (ASM). The full ARN, partial ARN, or friendly name of the SecretsManagerOracleAsmSecret that contains the Oracle ASM connection details for the Oracle endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Oracle endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
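Instead of clear-text UserName, Password, ServerName, and Port values, the endpoint can reference a Secrets Manager secret. A minimal sketch with a hypothetical role ARN and secret name:
# Hypothetical Oracle source endpoint that resolves its connection details from Secrets Manager.
New-DMSEndpoint -EndpointIdentifier "oracle-source-secrets" -EndpointType source -EngineName oracle `
    -OracleSettings_SecretsManagerAccessRoleArn "arn:aws:iam::111122223333:role/dms-secrets-access" `
    -OracleSettings_SecretsManagerSecretId "prod/oracle/dms-endpoint" `
    -OracleSettings_DatabaseName "ORCL"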
-OracleSettings_SecurityDbEncryption <String>
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SecurityDbEncryptionName <String>
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for DMS in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
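When the source database uses TDE, the key name and key value settings are supplied as a pair. A hedged sketch; the key name and password are placeholders that you would take from your own TDE configuration.
# Hypothetical Oracle source endpoint for a TDE-encrypted database (Binary Reader CDC).
New-DMSEndpoint -EndpointIdentifier "oracle-tde-source" -EndpointType source -EngineName oracle `
    -OracleSettings_SecurityDbEncryptionName "<tde-key-name>" `
    -OracleSettings_SecurityDbEncryption "<tde-password>"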
-OracleSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_SpatialDataOptionToGeoJsonFunctionName <String>
Use this attribute to convert SDO_GEOMETRY to GEOJSON format. By default, DMS calls the SDO2GEOJSON custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDO2GEOJSON and set SpatialDataOptionToGeoJsonFunctionName to call it instead.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_StandbyDelayTime <Int32>
Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases. In DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_UseAlternateFolderForOnline <Boolean>
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_UseBFile <Boolean>
Set this attribute to Y to capture change data using the Binary Reader utility. To set this attribute to Y, you must also set UseLogminerReader to N. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for CDC.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_UseDirectPathFullLoad <Boolean>
Set this attribute to Y to have DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_UseLogminerReader <Boolean>
Set this attribute to Y to capture change data using the Oracle LogMiner utility (the default). Set this attribute to N if you want to access the redo logs as a binary file. When you set UseLogminerReader to N, also set UseBfile to Y. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in the DMS User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-OracleSettings_UsePathPrefix <String>
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
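Taken together, the Binary Reader attributes for an Amazon RDS for Oracle source are normally set as a group. The following is a sketch only; the path prefixes are placeholders that depend on your RDS instance layout.
# Hypothetical RDS for Oracle source using Binary Reader with path prefix replacement.
New-DMSEndpoint -EndpointIdentifier "rds-oracle-bfile" -EndpointType source -EngineName oracle `
    -OracleSettings_UseLogminerReader $false -OracleSettings_UseBFile $true `
    -OracleSettings_AccessAlternateDirectly $false -OracleSettings_UseAlternateFolderForOnline $true `
    -OracleSettings_OraclePathPrefix "/rdsdbdata/db/ORCL_A/" `
    -OracleSettings_UsePathPrefix "/rdsdbdata/log/" -OracleSettings_ReplacePathPrefix $true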
-OracleSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PassThru <SwitchParameter>
Changes the cmdlet behavior to return the value passed to the EndpointIdentifier parameter. The -PassThru parameter is deprecated; use -Select '^EndpointIdentifier' instead. This parameter will be removed in a future version.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Password <String>
The password to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Port <Int32>
The port used by the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_AfterConnectScript <String>
For use with change data capture (CDC) only, this attribute has DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data. Example: afterConnectScript=SET session_replication_role='replica'
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_CaptureDdl <Boolean>
To capture DDL events, DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts. If this value is set to N, you don't have to create tables or triggers on the source database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesPostgreSQLSettings_CaptureDdls
-PostgreSQLSettings_DatabaseName <String>
Database name for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_DdlArtifactsSchema <String>
The schema in which the operational DDL database artifacts are created. Example: ddlArtifactsSchema=xyzddlschema;
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_ExecuteTimeout <Int32>
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds. Example: executeTimeout=100;
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_FailTasksOnLobTruncation <Boolean>
When set to true, this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize. If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_HeartbeatEnable <Boolean>
The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps restart_lsn moving and prevents storage full scenarios.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_HeartbeatFrequency <Int32>
Sets the WAL heartbeat frequency (in minutes).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_HeartbeatSchema <String>
Sets the schema in which the heartbeat artifacts are created.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
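A sketch of a PostgreSQL source endpoint that turns on the WAL heartbeat; the frequency, schema, and connection values are illustrative placeholders.
# Hypothetical PostgreSQL source with a 5-minute WAL heartbeat written to a dedicated schema.
New-DMSEndpoint -EndpointIdentifier "pg-source-heartbeat" -EndpointType source -EngineName postgres `
    -PostgreSQLSettings_ServerName "pg.example.com" -PostgreSQLSettings_Port 5432 `
    -PostgreSQLSettings_DatabaseName "appdb" -PostgreSQLSettings_Username "dms_user" `
    -PostgreSQLSettings_Password "<password>" `
    -PostgreSQLSettings_HeartbeatEnable $true -PostgreSQLSettings_HeartbeatFrequency 5 `
    -PostgreSQLSettings_HeartbeatSchema "dms_heartbeat"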
-PostgreSQLSettings_MaxFileSize <Int32>
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL. Example: maxFileSize=512
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_PluginName <PluginNameValue>
Specifies the plugin to use to create a replication slot.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_Port <Int32>
Endpoint TCP port. The default is 5432.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the PostgreSQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the PostgreSQL endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-PostgreSQLSettings_SlotName <String>
Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance. When used with the CdcStartPosition request parameter for the DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting of CdcStartPosition. If the specified slot doesn't exist or the task doesn't have a valid CdcStartPosition setting, DMS raises an error. For more information about setting the CdcStartPosition request parameter, see Determining a CDC native start point in the Database Migration Service User Guide. For more information about using CdcStartPosition, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
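If a logical replication slot has already been created on the source, the endpoint can name it so that CDC tasks can use a native start point. A sketch with hypothetical secret and slot names:
# Hypothetical PostgreSQL source that reuses an existing logical replication slot.
New-DMSEndpoint -EndpointIdentifier "pg-source-slot" -EndpointType source -EngineName postgres `
    -PostgreSQLSettings_SecretsManagerAccessRoleArn "arn:aws:iam::111122223333:role/dms-pg-secret-role" `
    -PostgreSQLSettings_SecretsManagerSecretId "prod/pg/dms-endpoint" `
    -PostgreSQLSettings_DatabaseName "appdb" -PostgreSQLSettings_SlotName "dms_cdc_slot"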
-PostgreSQLSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_AuthPassword <String>
The password provided with the auth-role and auth-token options of the AuthType setting for a Redis target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_AuthType <RedisAuthTypeValue>
The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_AuthUserName <String>
The user name provided with the auth-role option of the AuthType setting for a Redis target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_Port <Int32>
Transmission Control Protocol (TCP) port for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_SslCaCertificateArn <String>
The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedisSettings_SslSecurityProtocol <SslSecurityProtocolValue>
The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include plaintext and ssl-encryption. The default is ssl-encryption. The ssl-encryption option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using the SslCaCertificateArn setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
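Pulling the Redis settings together, a sketch of a TLS-protected Redis target that uses auth-role authentication; the server, credentials, and CA certificate ARN are placeholders.
# Hypothetical Redis target endpoint with role-based authentication over TLS.
New-DMSEndpoint -EndpointIdentifier "redis-target-1" -EndpointType target -EngineName redis `
    -RedisSettings_ServerName "redis.example.com" -RedisSettings_Port 6379 `
    -RedisSettings_AuthType "auth-role" -RedisSettings_AuthUserName "dms-writer" `
    -RedisSettings_AuthPassword "<auth-password>" `
    -RedisSettings_SslSecurityProtocol "ssl-encryption" `
    -RedisSettings_SslCaCertificateArn "arn:aws:dms:us-east-1:111122223333:cert:EXAMPLE"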
-RedshiftSettings_AcceptAnyDate <Boolean>
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_AfterConnectScript <String>
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_BucketFolder <String>
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_BucketName <String>
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_CaseSensitiveName <Boolean>
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_CaseSensitiveNames
-RedshiftSettings_CompUpdate <Boolean>
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ConnectionTimeout <Int32>
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_DatabaseName <String>
The name of the Amazon Redshift data warehouse (service) that you are working with.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_DateFormat <String>
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_EmptyAsNull <Boolean>
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ExplicitId <Boolean>
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_ExplicitIds
-RedshiftSettings_FileTransferUploadStream <Int32>
The number of parallel streams (threads) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_FileTransferUploadStreams
-RedshiftSettings_LoadTimeout <Int32>
The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_MaxFileSize <Int32>
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_Password <String>
The password for the user named in the username property.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_Port <Int32>
The port number for Amazon Redshift. The default value is 5439.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_RemoveQuote <Boolean>
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_RemoveQuotes
-RedshiftSettings_ReplaceChar <String>
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_ReplaceChars
-RedshiftSettings_ReplaceInvalidChar <String>
A list of characters that you want to replace. Use with ReplaceChars.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_ReplaceInvalidChars
-RedshiftSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ServerName <String>
The name of the Amazon Redshift cluster you are using.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
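A Redshift target combines the cluster connection settings with the intermediate S3 staging location and a service role that can write to it. A minimal sketch with placeholder names and ARNs:
# Hypothetical Redshift target endpoint staging .csv files in an intermediate S3 bucket.
New-DMSEndpoint -EndpointIdentifier "redshift-target-1" -EndpointType target -EngineName redshift `
    -RedshiftSettings_ServerName "my-cluster.abc123.us-east-1.redshift.amazonaws.com" `
    -RedshiftSettings_Port 5439 -RedshiftSettings_DatabaseName "analytics" `
    -RedshiftSettings_Username "dms_loader" -RedshiftSettings_Password "<password>" `
    -RedshiftSettings_BucketName "dms-staging-bucket" -RedshiftSettings_BucketFolder "redshift-load" `
    -RedshiftSettings_ServiceAccessRoleArn "arn:aws:iam::111122223333:role/dms-redshift-s3-role"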
-RedshiftSettings_TimeFormat <String>
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_TrimBlank <Boolean>
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_TrimBlanks
-RedshiftSettings_TruncateColumn <Boolean>
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesRedshiftSettings_TruncateColumns
-RedshiftSettings_Username <String>
An Amazon Redshift user name for a registered user.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_WriteBufferSize <Int32>
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ResourceIdentifier <String>
A friendly name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1. For example, this value might result in the EndpointArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of EndpointArn.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_AddColumnName <Boolean>
An optional parameter that, when set to true or y, adds column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_BucketFolder <String>
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_BucketName <String>
The name of the S3 bucket.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CannedAclForObject <CannedAclForObjectsValue>
A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesS3Settings_CannedAclForObjects
-S3Settings_CdcInsertsAndUpdate <Boolean>
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file. For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide. DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesS3Settings_CdcInsertsAndUpdates
-S3Settings_CdcInsertsOnly <Boolean>
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target. If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide. DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CdcMaxBatchInterval <Int32>
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CdcMinFileSize <Int32>
Minimum file size, defined in megabytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CdcPath <String>
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName. For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData. If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData. For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
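Following the folder-path example above, an S3 target that preserves transaction order pairs CdcPath with the PreserveTransaction setting described later in this topic. The bucket, folder, and role ARN below are placeholders.
# Hypothetical S3 target that writes ordered CDC data to MyTargetBucket/MyTargetData/MyChangedData.
New-DMSEndpoint -EndpointIdentifier "s3-target-cdc" -EndpointType target -EngineName s3 `
    -S3Settings_ServiceAccessRoleArn "arn:aws:iam::111122223333:role/dms-s3-role" `
    -S3Settings_BucketName "MyTargetBucket" -S3Settings_BucketFolder "MyTargetData" `
    -S3Settings_CdcPath "MyChangedData" -S3Settings_PreserveTransaction $true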
-S3Settings_CompressionType <CompressionTypeValue>
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvDelimiter <String>
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvNoSupValue <String>
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvNullValue <String>
An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvRowDelimiter <String>
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DataFormat <DataFormatValue>
The format of the data that you want to use for output. You can choose one of the following:
  • csv : This is a row-based file format with comma-separated values (.csv).
  • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DataPageSize <Int32>
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DatePartitionDelimiter <DatePartitionDelimiterValue>
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DatePartitionEnabled <Boolean>
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DatePartitionSequence <DatePartitionSequenceValue>
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DatePartitionTimezone <String>
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionEnabled is set to true, as shown in the following example: s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}'
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
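The s3-settings JSON in the preceding example maps directly onto this cmdlet's S3 settings parameters. A hedged PowerShell equivalent, with a placeholder service role ARN added:
# Hypothetical S3 target with hourly, slash-delimited date partitioning in the Asia/Seoul time zone.
New-DMSEndpoint -EndpointIdentifier "s3-target-partitioned" -EndpointType target -EngineName s3 `
    -S3Settings_ServiceAccessRoleArn "arn:aws:iam::111122223333:role/dms-s3-role" `
    -S3Settings_BucketName "dms-nattarat-test" `
    -S3Settings_DatePartitionEnabled $true -S3Settings_DatePartitionSequence "YYYYMMDDHH" `
    -S3Settings_DatePartitionDelimiter "SLASH" -S3Settings_DatePartitionTimezone "Asia/Seoul"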
-S3Settings_DictPageSizeLimit <Int32>
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_EnableStatistic <Boolean>
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesS3Settings_EnableStatistics
-S3Settings_EncodingType <EncodingTypeValue>
The type of encoding you are using:
  • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
  • PLAIN doesn't use encoding at all. Values are stored as they are.
  • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_EncryptionMode <EncryptionModeValue>
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
  • s3:CreateBucket
  • s3:ListBucket
  • s3:DeleteBucket
  • s3:GetBucketLocation
  • s3:GetObject
  • s3:PutObject
  • s3:DeleteObject
  • s3:GetObjectVersion
  • s3:GetBucketPolicy
  • s3:PutBucketPolicy
  • s3:DeleteBucketPolicy
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ExternalTableDefinition <String>
Specifies how tables are defined in the S3 source files only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_IgnoreHeaderRow <Int32>
When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature. The default is 0.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesS3Settings_IgnoreHeaderRows
-S3Settings_IncludeOpForFullLoad <Boolean>
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database. DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later. For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load. This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_MaxFileSize <Int32>
A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ParquetTimestampInMillisecond <Boolean>
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later. When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision. Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue. DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision. Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ParquetVersion <ParquetVersionValue>
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
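Putting the Parquet-related settings together, the following sketch writes .parquet output with explicit page and row-group sizes; the sizes shown simply restate the documented defaults, and the bucket and role ARN are placeholders.
# Hypothetical S3 target producing Parquet files with statistics enabled.
New-DMSEndpoint -EndpointIdentifier "s3-target-parquet" -EndpointType target -EngineName s3 `
    -S3Settings_ServiceAccessRoleArn "arn:aws:iam::111122223333:role/dms-s3-role" `
    -S3Settings_BucketName "dms-parquet-bucket" `
    -S3Settings_DataFormat "parquet" -S3Settings_DataPageSize 1048576 `
    -S3Settings_RowGroupLength 10000 -S3Settings_EnableStatistic $true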
-S3Settings_PreserveTransaction <Boolean>
If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in DMS versions 3.4.2 and later.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
AliasesS3Settings_PreserveTransactions
-S3Settings_Rfc4180 <Boolean>
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value. For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice. The default value is true. Valid values include true, false, y, and n.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_RowGroupLength <Int32>
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only. If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ServerSideEncryptionKmsKeyId <String>
If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key. Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
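A hedged PowerShell equivalent of that CLI example might look like the following; every value shown is a placeholder.
# Hypothetical S3 target using SSE-KMS server-side encryption with a customer managed key.
New-DMSEndpoint -EndpointIdentifier "s3-target-ssekms" -EndpointType target -EngineName s3 `
    -S3Settings_ServiceAccessRoleArn "arn:aws:iam::111122223333:role/dms-s3-role" `
    -S3Settings_BucketName "dms-encrypted-bucket" -S3Settings_BucketFolder "cdc-output" `
    -S3Settings_EncryptionMode "SSE_KMS" `
    -S3Settings_ServerSideEncryptionKmsKeyId "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"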
-S3Settings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_TimestampColumnName <String>
A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_UseCsvNoSupValue <Boolean>
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_UseTaskStartTimeForFullLoadTimestamp <Boolean>
When set to true, this parameter uses the task start time as the timestamp column value instead of the time the data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Select <String>
Use the -Select parameter to control the cmdlet output. The default value is 'Endpoint'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.DatabaseMigrationService.Model.CreateEndpointResponse). Specifying the name of a property of type Amazon.DatabaseMigrationService.Model.CreateEndpointResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServerName <String>
The name of the server where the endpoint database resides.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint. The role must allow the iam:PassRole action.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SslMode <DmsSslModeValue>
The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_DatabaseName <String>
Database name for the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_Password <String>
Endpoint connection password.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_Port <Int32>
Endpoint TCP port. The default is 5000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_SecretsManagerAccessRoleArn <String>
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_SecretsManagerSecretId <String>
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the SAP ASE endpoint connection details.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_ServerName <String>
Fully qualified domain name of the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SybaseSettings_Username <String>
Endpoint connection user name.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
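A minimal sketch of both connection patterns for a SAP ASE (Sybase) source endpoint; the identifier, ARNs, secret name, and connection values are illustrative placeholders:

# Sketch: authenticate through Secrets Manager rather than clear-text credentials.
New-DMSEndpoint -EndpointIdentifier "sybase-source" `
    -EndpointType source `
    -EngineName sybase `
    -SybaseSettings_DatabaseName "sales" `
    -SybaseSettings_SecretsManagerAccessRoleArn "arn:aws:iam::111122223333:role/dms-secrets-access" `
    -SybaseSettings_SecretsManagerSecretId "dms-sybase-source-secret"

# Alternative: supply clear-text connection values instead of a secret.
# New-DMSEndpoint -EndpointIdentifier "sybase-source" -EndpointType source -EngineName sybase `
#     -SybaseSettings_DatabaseName "sales" -SybaseSettings_ServerName "ase.example.com" `
#     -SybaseSettings_Port 5000 -SybaseSettings_Username "dms_user" -SybaseSettings_Password "placeholder"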
-Tag <Tag[]>
One or more tags to be assigned to the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: Tags
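A short sketch of constructing Tag objects for this parameter; the keys, values, and connection arguments are illustrative:

# Sketch: attach two tags to the endpoint at creation time (values are placeholders).
$envTag = New-Object Amazon.DatabaseMigrationService.Model.Tag
$envTag.Key   = 'environment'
$envTag.Value = 'test'

$ownerTag = New-Object Amazon.DatabaseMigrationService.Model.Tag
$ownerTag.Key   = 'owner'
$ownerTag.Value = 'data-migration-team'

New-DMSEndpoint -EndpointIdentifier "tagged-mysql-source" -EndpointType source -EngineName mysql `
    -ServerName "mysql.example.com" -Port 3306 -Username "dms_user" -Password "placeholder-password" `
    -Tag $envTag, $ownerTag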
-Username <String>
The user name to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: AK
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointUrl <String>
The endpoint to make the call against. Note: This parameter is primarily for internal AWS use and is not required and should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-ProfileLocation <String>
Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs). If this optional parameter is omitted, this cmdlet will first search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio. If the profile is not found, the cmdlet will then search the ini-format credential file at the default location: (user's home directory)\.aws\credentials. If this parameter is specified, the cmdlet searches only the ini-format credential file at the given location. Because the current folder can vary in a shell or during script execution, it is advised that you specify a fully qualified path instead of a relative path.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: AWSProfilesLocation, ProfilesLocation
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: StoredCredentials, AWSProfileName
-Region <Object>
The system name of an AWS region or an AWSRegion instance. This governs the endpoint that will be used when calling service operations. Note that the AWS resources referenced in a call are usually region-specific.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: RegionToCall
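These credential and region parameters compose with the service parameters above. A minimal sketch, assuming a profile named migration-admin already exists in the shared credential file (all other values are placeholders):

# Sketch: run the call with a named credential profile and an explicit region
# instead of the shell defaults.
New-DMSEndpoint -EndpointIdentifier "demo-postgres-source" -EndpointType source -EngineName postgres `
    -ServerName "pg.example.com" -Port 5432 -Username "dms_user" -Password "placeholder-password" `
    -ProfileName "migration-admin" -Region us-west-2

# Or set shell-wide defaults once and omit the parameters on each call.
Set-AWSCredential -ProfileName "migration-admin"
Set-DefaultAWSRegion -Region us-west-2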
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: SK, SecretAccessKey
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases: ST

Outputs

This cmdlet returns an Amazon.DatabaseMigrationService.Model.Endpoint object. The service call response (type Amazon.DatabaseMigrationService.Model.CreateEndpointResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.
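For example, a minimal sketch of working with the returned object and the $AWSHistory stack (connection values are placeholders):

# The cmdlet returns the Endpoint object directly.
$endpoint = New-DMSEndpoint -EndpointIdentifier "demo-mysql-source" -EndpointType source -EngineName mysql `
    -ServerName "mysql.example.com" -Port 3306 -Username "dms_user" -Password "placeholder-password"
$endpoint.EndpointArn
$endpoint.Status

# The full CreateEndpointResponse is also available from the $AWSHistory stack.
$AWSHistory.LastServiceResponse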

Supported Version

AWS Tools for PowerShell: 2.x.y.z