AWS Tools for Windows PowerShell
Command Reference

Synopsis

Calls the AWS Database Migration Service CreateEndpoint API operation.

Syntax

New-DMSEndpoint
-EndpointIdentifier <String>
-RedshiftSettings_AcceptAnyDate <Boolean>
-RedshiftSettings_AfterConnectScript <String>
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
-MongoDbSettings_AuthSource <String>
-MongoDbSettings_AuthType <AuthTypeValue>
-RedshiftSettings_BucketFolder <String>
-S3Settings_BucketFolder <String>
-DmsTransferSettings_BucketName <String>
-RedshiftSettings_BucketName <String>
-S3Settings_BucketName <String>
-S3Settings_CdcInsertsOnly <Boolean>
-CertificateArn <String>
-S3Settings_CompressionType <CompressionTypeValue>
-RedshiftSettings_ConnectionTimeout <Int32>
-S3Settings_CsvDelimiter <String>
-S3Settings_CsvRowDelimiter <String>
-DatabaseName <String>
-MongoDbSettings_DatabaseName <String>
-RedshiftSettings_DatabaseName <String>
-S3Settings_DataFormat <DataFormatValue>
-S3Settings_DataPageSize <Int32>
-RedshiftSettings_DateFormat <String>
-S3Settings_DictPageSizeLimit <Int32>
-MongoDbSettings_DocsToInvestigate <String>
-RedshiftSettings_EmptyAsNull <Boolean>
-S3Settings_EnableStatistic <Boolean>
-S3Settings_EncodingType <EncodingTypeValue>
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
-S3Settings_EncryptionMode <EncryptionModeValue>
-EndpointType <ReplicationEndpointTypeValue>
-ElasticsearchSettings_EndpointUri <String>
-EngineName <String>
-ElasticsearchSettings_ErrorRetryDuration <Int32>
-ExternalTableDefinition <String>
-S3Settings_ExternalTableDefinition <String>
-ExtraConnectionAttribute <String>
-MongoDbSettings_ExtractDocId <String>
-RedshiftSettings_FileTransferUploadStream <Int32>
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
-S3Settings_IncludeOpForFullLoad <Boolean>
-KmsKeyId <String>
-MongoDbSettings_KmsKeyId <String>
-RedshiftSettings_LoadTimeout <Int32>
-RedshiftSettings_MaxFileSize <Int32>
-KinesisSettings_MessageFormat <MessageFormatValue>
-MongoDbSettings_NestingLevel <NestingLevelValue>
-S3Settings_ParquetTimestampInMillisecond <Boolean>
-S3Settings_ParquetVersion <ParquetVersionValue>
-MongoDbSettings_Password <String>
-Password <String>
-RedshiftSettings_Password <String>
-MongoDbSettings_Port <Int32>
-Port <Int32>
-RedshiftSettings_Port <Int32>
-RedshiftSettings_RemoveQuote <Boolean>
-RedshiftSettings_ReplaceChar <String>
-RedshiftSettings_ReplaceInvalidChar <String>
-S3Settings_RowGroupLength <Int32>
-MongoDbSettings_ServerName <String>
-RedshiftSettings_ServerName <String>
-ServerName <String>
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
-S3Settings_ServerSideEncryptionKmsKeyId <String>
-DmsTransferSettings_ServiceAccessRoleArn <String>
-DynamoDbSettings_ServiceAccessRoleArn <String>
-ElasticsearchSettings_ServiceAccessRoleArn <String>
-KinesisSettings_ServiceAccessRoleArn <String>
-RedshiftSettings_ServiceAccessRoleArn <String>
-S3Settings_ServiceAccessRoleArn <String>
-ServiceAccessRoleArn <String>
-SslMode <DmsSslModeValue>
-KinesisSettings_StreamArn <String>
-Tag <Tag[]>
-RedshiftSettings_TimeFormat <String>
-S3Settings_TimestampColumnName <String>
-RedshiftSettings_TrimBlank <Boolean>
-RedshiftSettings_TruncateColumn <Boolean>
-MongoDbSettings_Username <String>
-RedshiftSettings_Username <String>
-Username <String>
-RedshiftSettings_WriteBufferSize <Int32>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>

Description

Creates an endpoint using the provided settings.
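
For example, a minimal sketch that creates a MySQL source endpoint (all host names, identifiers, and credentials below are placeholders):

    New-DMSEndpoint -EndpointIdentifier "my-mysql-source" `
        -EndpointType source `
        -EngineName mysql `
        -ServerName "mysql.example.com" `
        -Port 3306 `
        -DatabaseName "sales" `
        -Username "admin" `
        -Password "****"

By default the cmdlet returns the new Amazon.DatabaseMigrationService.Model.Endpoint object; see the Outputs section.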

Parameters

-CertificateArn <String>
The Amazon Resource Name (ARN) for the certificate.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DatabaseName <String>
The name of the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DmsTransferSettings_BucketName <String>
The name of the S3 bucket to use.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DmsTransferSettings_ServiceAccessRoleArn <String>
The IAM role that has permission to access the Amazon S3 bucket.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DynamoDbSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service access IAM role.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
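For a DynamoDB target, this role is typically the only endpoint-specific setting needed. A minimal sketch with a placeholder role ARN:

    New-DMSEndpoint -EndpointIdentifier "my-dynamodb-target" `
        -EndpointType target `
        -EngineName dynamodb `
        -DynamoDbSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-dynamodb-role"
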
-ElasticsearchSettings_EndpointUri <String>
The endpoint for the Elasticsearch cluster.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_ErrorRetryDuration <Int32>
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
The maximum percentage of records that can fail to be written before a full load operation stops.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ElasticsearchSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service to access the IAM role.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
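A sketch combining the ElasticsearchSettings_* parameters for a target endpoint. The endpoint URI and role ARN are placeholders, and the elasticsearch engine name is an assumption (it is not listed in the EngineName description below):

    New-DMSEndpoint -EndpointIdentifier "my-es-target" `
        -EndpointType target `
        -EngineName elasticsearch `
        -ElasticsearchSettings_EndpointUri "https://search-mydomain.us-east-1.es.amazonaws.com" `
        -ElasticsearchSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-es-role" `
        -ElasticsearchSettings_FullLoadErrorPercentage 10 `
        -ElasticsearchSettings_ErrorRetryDuration 300
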
-EndpointIdentifier <String>
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
Required?True
Position?1
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointType <ReplicationEndpointTypeValue>
The type of endpoint. Valid values are source and target.
Required?True
Position?Named
Accept pipeline input?True (ByPropertyName)
-EngineName <String>
The type of engine for the endpoint. Valid values, depending on the EndpointType value, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
Required?True
Position?Named
Accept pipeline input?True (ByPropertyName)
-ExternalTableDefinition <String>
The external table definition.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ExtraConnectionAttribute <String>
Additional attributes associated with the connection. Each attribute is specified as a name-value pair joined by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with AWS DMS Endpoints in the AWS Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases ExtraConnectionAttributes
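For example, a PostgreSQL source endpoint might pass two attributes in the documented name=value;name=value form. The attribute names here are illustrative only; consult Working with AWS DMS Endpoints for the attributes your engine supports. All connection values are placeholders.

    New-DMSEndpoint -EndpointIdentifier "my-postgres-source" `
        -EndpointType source `
        -EngineName postgres `
        -ServerName "postgres.example.com" -Port 5432 -DatabaseName "sales" `
        -Username "admin" -Password "****" `
        -ExtraConnectionAttribute "captureDDLs=Y;ddlArtifactsSchema=public"
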
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_MessageFormat <MessageFormatValue>
The output format for the records created on the endpoint. The message format is JSON.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-KinesisSettings_StreamArn <String>
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
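The three KinesisSettings_* parameters are typically supplied together, as in this sketch. The stream and role ARNs are placeholders, the json format value follows the MessageFormat description above, and the kinesis engine name is an assumption (it is not listed in the EngineName description below):

    New-DMSEndpoint -EndpointIdentifier "my-kinesis-target" `
        -EndpointType target `
        -EngineName kinesis `
        -KinesisSettings_MessageFormat json `
        -KinesisSettings_StreamArn "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream" `
        -KinesisSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-kinesis-role"
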
-KmsKeyId <String>
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
The authentication mechanism you use to access the MongoDB source endpoint. Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1. DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting is not used when authType=NO.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthSource <String>
The MongoDB database name. This setting is not used when authType=NO. The default is admin.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthType <AuthTypeValue>
The authentication type you use to access the MongoDB source endpoint. Valid values: NO, PASSWORD. When NO is selected, the user name and password parameters are not used and can be empty.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_DatabaseName <String>
The database name on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_DocsToInvestigate <String>
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE. Must be a value greater than 0. The default is 1000.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_ExtractDocId <String>
Specifies whether to extract the document ID. Use this setting when NestingLevel is set to NONE. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_KmsKeyId <String>
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_NestingLevel <NestingLevelValue>
Specifies either document or table mode. Valid values: NONE, ONE. The default is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Password <String>
The password for the user account you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Port <Int32>
The port value for the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_ServerName <String>
The name of the server on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_Username <String>
The user name you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
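A sketch that combines the MongoDbSettings_* parameters into a password-authenticated source endpoint. Connection values are placeholders, and the enum value spellings follow the parameter descriptions above:

    New-DMSEndpoint -EndpointIdentifier "my-mongodb-source" `
        -EndpointType source `
        -EngineName mongodb `
        -MongoDbSettings_ServerName "mongodb.example.com" `
        -MongoDbSettings_Port 27017 `
        -MongoDbSettings_DatabaseName "sales" `
        -MongoDbSettings_AuthType PASSWORD `
        -MongoDbSettings_AuthMechanism SCRAM_SHA_1 `
        -MongoDbSettings_AuthSource "admin" `
        -MongoDbSettings_Username "dmsuser" `
        -MongoDbSettings_Password "****" `
        -MongoDbSettings_NestingLevel NONE
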
-PassThru <SwitchParameter>
Changes the cmdlet behavior to return the value passed to the EndpointIdentifier parameter. The -PassThru parameter is deprecated; use -Select '^EndpointIdentifier' instead. This parameter will be removed in a future version.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Password <String>
The password to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Port <Int32>
The port used by the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_AcceptAnyDate <Boolean>
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_AfterConnectScript <String>
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_BucketFolder <String>
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_BucketName <String>
The name of the S3 bucket you want to use.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ConnectionTimeout <Int32>
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_DatabaseName <String>
The name of the Amazon Redshift data warehouse (service) that you are working with.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_DateFormat <String>
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_EmptyAsNull <Boolean>
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_FileTransferUploadStream <Int32>
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_FileTransferUploadStreams
-RedshiftSettings_LoadTimeout <Int32>
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_MaxFileSize <Int32>
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_Password <String>
The password for the user named in the username property.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_Port <Int32>
The port number for Amazon Redshift. The default value is 5439.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_RemoveQuote <Boolean>
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_RemoveQuotes
-RedshiftSettings_ReplaceChar <String>
A value that specifies the character used to replace the invalid characters identified in ReplaceInvalidChars. The default is "?".
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_ReplaceChars
-RedshiftSettings_ReplaceInvalidChar <String>
A list of characters that you want to replace. Use with ReplaceChars.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_ReplaceInvalidChars
-RedshiftSettings_ServerName <String>
The name of the Amazon Redshift cluster you are using.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_TimeFormat <String>
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_TrimBlank <Boolean>
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_TrimBlanks
-RedshiftSettings_TruncateColumn <Boolean>
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RedshiftSettings_TruncateColumns
-RedshiftSettings_Username <String>
An Amazon Redshift user name for a registered user.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-RedshiftSettings_WriteBufferSize <Int32>
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
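A sketch that combines the RedshiftSettings_* parameters for an Amazon Redshift target that stages data through S3. All identifiers, the bucket, and the role ARN are placeholders:

    New-DMSEndpoint -EndpointIdentifier "my-redshift-target" `
        -EndpointType target `
        -EngineName redshift `
        -RedshiftSettings_ServerName "my-cluster.abc123.us-east-1.redshift.amazonaws.com" `
        -RedshiftSettings_Port 5439 `
        -RedshiftSettings_DatabaseName "dev" `
        -RedshiftSettings_Username "awsuser" `
        -RedshiftSettings_Password "****" `
        -RedshiftSettings_BucketName "my-staging-bucket" `
        -RedshiftSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-redshift-role"
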
-S3Settings_BucketFolder <String>
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter is not specified, then the path used is schema_name/table_name/.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_BucketName <String>
The name of the S3 bucket.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CdcInsertsOnly <Boolean>
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target. If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. AWS DMS supports this interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CompressionType <CompressionTypeValue>
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both .csv and .parquet file formats.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvDelimiter <String>
The delimiter used to separate columns in the source files. The default is a comma.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_CsvRowDelimiter <String>
The delimiter used to separate rows in the source files. The default is a newline (\n).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DataFormat <DataFormatValue>
The format of the data that you want to use for output. You can choose one of the following:
  • csv : This is a row-based file format with comma-separated values (.csv).
  • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DataPageSize <Int32>
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_DictPageSizeLimit <Int32>
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_EnableStatistic <Boolean>
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases S3Settings_EnableStatistics
-S3Settings_EncodingType <EncodingTypeValue>
The type of encoding you are using:
  • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
  • PLAIN doesn't use encoding at all. Values are stored as they are.
  • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_EncryptionMode <EncryptionModeValue>
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
  • s3:CreateBucket
  • s3:ListBucket
  • s3:DeleteBucket
  • s3:GetBucketLocation
  • s3:GetObject
  • s3:PutObject
  • s3:DeleteObject
  • s3:GetObjectVersion
  • s3:GetBucketPolicy
  • s3:PutBucketPolicy
  • s3:DeleteBucketPolicy
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ExternalTableDefinition <String>
The external table definition.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_IncludeOpForFullLoad <Boolean>
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database. AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later. For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load. This setting works together with the CdcInsertsOnly parameter for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
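To make the interaction with CdcInsertsOnly concrete, here is a sketch of an S3 target where only INSERTs are migrated and each record is annotated with I in the first .csv field. Bucket and role values are placeholders:

    New-DMSEndpoint -EndpointIdentifier "my-s3-cdc-target" `
        -EndpointType target `
        -EngineName s3 `
        -S3Settings_BucketName "my-target-bucket" `
        -S3Settings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-s3-role" `
        -S3Settings_CdcInsertsOnly $true `
        -S3Settings_IncludeOpForFullLoad $true
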
-S3Settings_ParquetTimestampInMillisecond <Boolean>
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later. When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision. Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue. AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision. Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ParquetVersion <ParquetVersionValue>
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_RowGroupLength <Int32>
The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. This parameter defaults to 10,000 rows. This number is used for .parquet file format only. If you choose a value larger than the maximum, RowGroupLength is set to the maximum row group length in bytes (64 * 1024 * 1024).
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_ServerSideEncryptionKmsKeyId <String>
If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key. Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
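A PowerShell sketch equivalent to that CLI example, with placeholder values standing in as in the original:

    New-DMSEndpoint -EndpointIdentifier "my-s3-target" `
        -EndpointType target `
        -EngineName s3 `
        -S3Settings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/dms-s3-role" `
        -S3Settings_BucketFolder "my-folder" `
        -S3Settings_BucketName "my-bucket" `
        -S3Settings_EncryptionMode SSE_KMS `
        -S3Settings_ServerSideEncryptionKmsKeyId "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
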
-S3Settings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service access IAM role.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-S3Settings_TimestampColumnName <String>
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Select <String>
Use the -Select parameter to control the cmdlet output. The default value is 'Endpoint'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.DatabaseMigrationService.Model.CreateEndpointResponse). Specifying the name of a property of type Amazon.DatabaseMigrationService.Model.CreateEndpointResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
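A sketch of how -Select changes the output (connection values are placeholders):

    # Default (-Select 'Endpoint'): returns the Amazon.DatabaseMigrationService.Model.Endpoint object.
    $endpoint = New-DMSEndpoint -EndpointIdentifier "my-src" -EndpointType source -EngineName mysql `
        -ServerName "mysql.example.com" -Port 3306 -Username "admin" -Password "****"

    # -Select '*': returns the whole CreateEndpointResponse instead.
    $response = New-DMSEndpoint -EndpointIdentifier "my-src-2" -EndpointType source -EngineName mysql `
        -ServerName "mysql.example.com" -Port 3306 -Username "admin" -Password "****" -Select '*'
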
-ServerName <String>
The name of the server where the endpoint database resides.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-SslMode <DmsSslModeValue>
The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-Tag <Tag[]>
One or more tags to be assigned to the endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases Tags
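Tags are passed as Amazon.DatabaseMigrationService.Model.Tag objects. A minimal sketch, assuming the usual Key/Value properties on the Tag type; connection values are placeholders:

    # Build a tag object, then attach it when creating the endpoint.
    $tag = New-Object Amazon.DatabaseMigrationService.Model.Tag
    $tag.Key = "Environment"
    $tag.Value = "Test"

    New-DMSEndpoint -EndpointIdentifier "my-tagged-source" -EndpointType source -EngineName mysql `
        -ServerName "mysql.example.com" -Port 3306 -Username "admin" -Password "****" `
        -Tag $tag
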
-Username <String>
The user name to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases AK
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointUrl <String>
The endpoint to make the call against. Note: This parameter is primarily for internal AWS use and is not required/should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required?False
Position?Named
Accept pipeline input?True (ByValue, ByPropertyName)
-ProfileLocation <String>
Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs). If this optional parameter is omitted, this cmdlet will search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio first. If the profile is not found, then the cmdlet will search in the ini-format credential file at the default location: (user's home directory)\.aws\credentials. If this parameter is specified, then this cmdlet will only search the ini-format credential file at the location given. As the current folder can vary in a shell or during script execution, it is advised that you use a fully qualified path instead of a relative path.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases AWSProfilesLocation, ProfilesLocation
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases StoredCredentials, AWSProfileName
-Region <Object>
The system name of an AWS region or an AWSRegion instance. This governs the endpoint that will be used when calling service operations. Note that the AWS resources referenced in a call are usually region-specific.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases RegionToCall
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases SK, SecretAccessKey
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
Aliases ST
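
These common parameters combine with the service parameters above in the usual way. For example, a sketch that runs the call under a named profile in an explicit region (all values are placeholders):

    New-DMSEndpoint -EndpointIdentifier "my-src" -EndpointType source -EngineName mysql `
        -ServerName "mysql.example.com" -Port 3306 -Username "admin" -Password "****" `
        -ProfileName "my-profile" -Region us-east-1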

Outputs

This cmdlet returns an Amazon.DatabaseMigrationService.Model.Endpoint object. The service call response (type Amazon.DatabaseMigrationService.Model.CreateEndpointResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.

Supported Version

AWS Tools for PowerShell: 2.x.y.z