AWS Tools for Windows PowerShell
Command Reference

AWS services or capabilities described in AWS documentation might vary by Region. See Getting Started with Amazon AWS for the differences applicable to the China (Beijing) Region.

Synopsis

Calls the AWS Database Migration Service CreateEndpoint API operation.

Syntax

New-DMSEndpoint
-EndpointIdentifier <String>
-RedshiftSettings_AcceptAnyDate <Boolean>
-RedshiftSettings_AfterConnectScript <String>
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
-MongoDbSettings_AuthSource <String>
-MongoDbSettings_AuthType <AuthTypeValue>
-RedshiftSettings_BucketFolder <String>
-S3Settings_BucketFolder <String>
-DmsTransferSettings_BucketName <String>
-RedshiftSettings_BucketName <String>
-S3Settings_BucketName <String>
-S3Settings_CdcInsertsOnly <Boolean>
-CertificateArn <String>
-S3Settings_CompressionType <CompressionTypeValue>
-RedshiftSettings_ConnectionTimeout <Int32>
-S3Settings_CsvDelimiter <String>
-S3Settings_CsvRowDelimiter <String>
-DatabaseName <String>
-MongoDbSettings_DatabaseName <String>
-RedshiftSettings_DatabaseName <String>
-S3Settings_DataFormat <DataFormatValue>
-S3Settings_DataPageSize <Int32>
-RedshiftSettings_DateFormat <String>
-S3Settings_DictPageSizeLimit <Int32>
-MongoDbSettings_DocsToInvestigate <String>
-RedshiftSettings_EmptyAsNull <Boolean>
-S3Settings_EnableStatistic <Boolean>
-S3Settings_EncodingType <EncodingTypeValue>
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
-S3Settings_EncryptionMode <EncryptionModeValue>
-EndpointType <ReplicationEndpointTypeValue>
-ElasticsearchSettings_EndpointUri <String>
-EngineName <String>
-ElasticsearchSettings_ErrorRetryDuration <Int32>
-ExternalTableDefinition <String>
-S3Settings_ExternalTableDefinition <String>
-ExtraConnectionAttribute <String>
-MongoDbSettings_ExtractDocId <String>
-RedshiftSettings_FileTransferUploadStream <Int32>
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
-KmsKeyId <String>
-MongoDbSettings_KmsKeyId <String>
-RedshiftSettings_LoadTimeout <Int32>
-RedshiftSettings_MaxFileSize <Int32>
-KinesisSettings_MessageFormat <MessageFormatValue>
-MongoDbSettings_NestingLevel <NestingLevelValue>
-S3Settings_ParquetVersion <ParquetVersionValue>
-MongoDbSettings_Password <String>
-Password <String>
-RedshiftSettings_Password <String>
-MongoDbSettings_Port <Int32>
-Port <Int32>
-RedshiftSettings_Port <Int32>
-RedshiftSettings_RemoveQuote <Boolean>
-RedshiftSettings_ReplaceChar <String>
-RedshiftSettings_ReplaceInvalidChar <String>
-S3Settings_RowGroupLength <Int32>
-MongoDbSettings_ServerName <String>
-RedshiftSettings_ServerName <String>
-ServerName <String>
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
-S3Settings_ServerSideEncryptionKmsKeyId <String>
-DmsTransferSettings_ServiceAccessRoleArn <String>
-DynamoDbSettings_ServiceAccessRoleArn <String>
-ElasticsearchSettings_ServiceAccessRoleArn <String>
-KinesisSettings_ServiceAccessRoleArn <String>
-RedshiftSettings_ServiceAccessRoleArn <String>
-S3Settings_ServiceAccessRoleArn <String>
-ServiceAccessRoleArn <String>
-SslMode <DmsSslModeValue>
-KinesisSettings_StreamArn <String>
-Tag <Tag[]>
-RedshiftSettings_TimeFormat <String>
-RedshiftSettings_TrimBlank <Boolean>
-RedshiftSettings_TruncateColumn <Boolean>
-MongoDbSettings_Username <String>
-RedshiftSettings_Username <String>
-Username <String>
-RedshiftSettings_WriteBufferSize <Int32>
-Force <SwitchParameter>

Description

Creates an endpoint using the provided settings.
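For example, the following sketch registers a source endpoint for a MySQL database. All values shown (identifier, server name, credentials) are placeholders, not working examples:

```powershell
# Placeholder values throughout -- substitute your own identifier, server, and credentials.
New-DMSEndpoint -EndpointIdentifier "example-mysql-source" `
    -EndpointType source `
    -EngineName mysql `
    -ServerName "mysql.example.com" `
    -Port 3306 `
    -Username "dms_user" `
    -Password "dms_password"
```

The cmdlet returns the created Endpoint object, so you can capture the result in a variable and read back properties such as the endpoint ARN.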

Parameters

-CertificateArn <String>
The Amazon Resource Name (ARN) for the certificate.
Required?False
Position?Named
Accept pipeline input?False
-DatabaseName <String>
The name of the endpoint database.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-DmsTransferSettings_BucketName <String>
The name of the S3 bucket to use.
Required?False
Position?Named
Accept pipeline input?False
-DmsTransferSettings_ServiceAccessRoleArn <String>
The IAM role that has permission to access the Amazon S3 bucket.
Required?False
Position?Named
Accept pipeline input?False
-DynamoDbSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service access IAM role.
Required?False
Position?Named
Accept pipeline input?False
-ElasticsearchSettings_EndpointUri <String>
The endpoint for the Elasticsearch cluster.
Required?False
Position?Named
Accept pipeline input?False
-ElasticsearchSettings_ErrorRetryDuration <Int32>
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
Required?False
Position?Named
Accept pipeline input?False
-ElasticsearchSettings_FullLoadErrorPercentage <Int32>
The maximum percentage of records that can fail to be written before a full load operation stops.
Required?False
Position?Named
Accept pipeline input?False
-ElasticsearchSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service to access the IAM role.
Required?False
Position?Named
Accept pipeline input?False
-EndpointIdentifier <String>
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
Required?False
Position?1
Accept pipeline input?True (ByValue, ByPropertyName)
-EndpointType <ReplicationEndpointTypeValue>
The type of endpoint. Valid values are source and target.
Required?False
Position?Named
Accept pipeline input?False
-EngineName <String>
The type of engine for the endpoint. Valid values, depending on the EndPointType value, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-ExternalTableDefinition <String>
The external table definition.
Required?False
Position?Named
Accept pipeline input?False
-ExtraConnectionAttribute <String>
Additional attributes associated with the connection.
Required?False
Position?Named
Accept pipeline input?False
Aliases ExtraConnectionAttributes
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required?False
Position?Named
Accept pipeline input?False
-KinesisSettings_MessageFormat <MessageFormatValue>
The output format for the records created on the endpoint. The message format is JSON.
Required?False
Position?Named
Accept pipeline input?False
-KinesisSettings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
Required?False
Position?Named
Accept pipeline input?False
-KinesisSettings_StreamArn <String>
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
Required?False
Position?Named
Accept pipeline input?False
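Taken together, the three KinesisSettings_* parameters describe a Kinesis target endpoint. A minimal sketch, assuming a pre-existing stream and an IAM role that DMS can use to write to it (both ARNs are placeholders):

```powershell
# Placeholder ARNs -- substitute your own stream and role.
New-DMSEndpoint -EndpointIdentifier "example-kinesis-target" `
    -EndpointType target `
    -EngineName kinesis `
    -KinesisSettings_MessageFormat json `
    -KinesisSettings_StreamArn "arn:aws:kinesis:us-east-1:123456789012:stream/example-stream" `
    -KinesisSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/example-dms-kinesis-role"
```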
-KmsKeyId <String>
The AWS KMS key identifier to use to encrypt the connection parameters. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Required?False
Position?Named
Accept pipeline input?True (ByPropertyName)
-MongoDbSettings_AuthMechanism <AuthMechanismValue>
The authentication mechanism you use to access the MongoDB source endpoint. Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1. With DEFAULT, DMS uses MONGODB_CR for MongoDB version 2.x and SCRAM_SHA_1 for MongoDB version 3.x. This attribute is not used when authType=NO.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_AuthSource <String>
The MongoDB database name. This attribute is not used when authType=NO. The default is admin.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_AuthType <AuthTypeValue>
The authentication type you use to access the MongoDB source endpoint. Valid values: NO, PASSWORD. When NO is selected, the user name and password parameters are not used and can be empty.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_DatabaseName <String>
The database name on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_DocsToInvestigate <String>
Indicates the number of documents to preview to determine the document organization. Use this attribute when NestingLevel is set to ONE. Must be a positive integer greater than 0. The default value is 1000.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_ExtractDocId <String>
Specifies the document ID. Use this attribute when NestingLevel is set to NONE. Default value is false.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_KmsKeyId <String>
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_NestingLevel <NestingLevelValue>
Specifies either document or table mode. Valid values: NONE, ONE. The default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_Password <String>
The password for the user account you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_Port <Int32>
The port value for the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_ServerName <String>
The name of the server on the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?False
-MongoDbSettings_Username <String>
The user name you use to access the MongoDB source endpoint.
Required?False
Position?Named
Accept pipeline input?False
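The MongoDbSettings_* parameters combine to define a MongoDB source endpoint. The following sketch uses password authentication and table (ONE) mode; every name, port, and credential shown is a placeholder:

```powershell
# Placeholder values throughout -- substitute your own server, database, and credentials.
New-DMSEndpoint -EndpointIdentifier "example-mongodb-source" `
    -EndpointType source `
    -EngineName mongodb `
    -MongoDbSettings_ServerName "mongodb.example.com" `
    -MongoDbSettings_Port 27017 `
    -MongoDbSettings_DatabaseName "example_db" `
    -MongoDbSettings_AuthType PASSWORD `
    -MongoDbSettings_AuthMechanism SCRAM_SHA_1 `
    -MongoDbSettings_AuthSource admin `
    -MongoDbSettings_Username "dms_user" `
    -MongoDbSettings_Password "dms_password" `
    -MongoDbSettings_NestingLevel ONE
```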
-Password <String>
The password to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?False
-Port <Int32>
The port used by the endpoint database.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_AcceptAnyDate <Boolean>
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_AfterConnectScript <String>
Code to run after connecting. This should be the code, not a filename.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_BucketFolder <String>
The location where the CSV files are stored before being uploaded to the S3 bucket.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_BucketName <String>
The name of the S3 bucket you want to use.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_ConnectionTimeout <Int32>
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_DatabaseName <String>
The name of the Amazon Redshift data warehouse (service) you are working with.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_DateFormat <String>
The date format you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that are not supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_EmptyAsNull <Boolean>
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_EncryptionMode <EncryptionModeValue>
The type of server-side encryption you want to use for your data. This is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_FileTransferUploadStream <Int32>
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_FileTransferUploadStreams
-RedshiftSettings_LoadTimeout <Int32>
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_MaxFileSize <Int32>
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_Password <String>
The password for the user named in the username property.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_Port <Int32>
The port number for Amazon Redshift. The default value is 5439.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_RemoveQuote <Boolean>
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_RemoveQuotes
-RedshiftSettings_ReplaceChar <String>
Replaces invalid characters specified in ReplaceInvalidChars, substituting the specified value instead. The default is "?".
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_ReplaceChars
-RedshiftSettings_ReplaceInvalidChar <String>
A list of characters you want to replace. Use with ReplaceChars.
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_ReplaceInvalidChars
-RedshiftSettings_ServerName <String>
The name of the Amazon Redshift cluster you are using.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_ServerSideEncryptionKmsKeyId <String>
If you are using SSE_KMS for the EncryptionMode, provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_ServiceAccessRoleArn <String>
The ARN of the role that has access to the Redshift service.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_TimeFormat <String>
The time format you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that are not supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_TrimBlank <Boolean>
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_TrimBlanks
-RedshiftSettings_TruncateColumn <Boolean>
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
Required?False
Position?Named
Accept pipeline input?False
Aliases RedshiftSettings_TruncateColumns
-RedshiftSettings_Username <String>
An Amazon Redshift user name for a registered user.
Required?False
Position?Named
Accept pipeline input?False
-RedshiftSettings_WriteBufferSize <Int32>
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
Required?False
Position?Named
Accept pipeline input?False
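As one illustration of how the RedshiftSettings_* parameters fit together, the following sketch defines a Redshift target that stages CSV files in an intermediate S3 bucket. All names, ARNs, and credentials are placeholders:

```powershell
# Placeholder values throughout -- substitute your own cluster, bucket, role, and credentials.
New-DMSEndpoint -EndpointIdentifier "example-redshift-target" `
    -EndpointType target `
    -EngineName redshift `
    -RedshiftSettings_ServerName "example-cluster.abc123.us-east-1.redshift.amazonaws.com" `
    -RedshiftSettings_Port 5439 `
    -RedshiftSettings_DatabaseName "analytics" `
    -RedshiftSettings_Username "dms_user" `
    -RedshiftSettings_Password "dms_password" `
    -RedshiftSettings_BucketName "example-staging-bucket" `
    -RedshiftSettings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/example-dms-redshift-role" `
    -RedshiftSettings_EmptyAsNull $true `
    -RedshiftSettings_TruncateColumn $true
```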
-S3Settings_BucketFolder <String>
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path <bucketFolder>/<schema_name>/<table_name>/. If this parameter is not specified, then the path used is <schema_name>/<table_name>/.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_BucketName <String>
The name of the S3 bucket.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_CdcInsertsOnly <Boolean>
Option to write only INSERT operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter I (insert), U (update) or D (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If cdcInsertsOnly is set to true, then only INSERTs are recorded in the CSV file, without the I annotation on each line. Valid values are TRUE and FALSE.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_CompressionType <CompressionTypeValue>
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_CsvDelimiter <String>
The delimiter used to separate columns in the source files. The default is a comma.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_CsvRowDelimiter <String>
The delimiter used to separate rows in the source files. The default is a newline (\n).
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_DataFormat <DataFormatValue>
The format of the data which you want to use for output. You can choose one of the following:
  • CSV : This is a row-based format with comma-separated values.
  • PARQUET : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_DataPageSize <Int32>
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For PARQUET format only.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_DictPageSizeLimit <Int32>
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this size, the column is stored using an encoding type of PLAIN. Defaults to 1024 * 1024 bytes (1 MiB). For PARQUET format only.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_EnableStatistic <Boolean>
Enables statistics for Parquet pages and row groups. Choose TRUE to enable statistics, or FALSE to disable them. Statistics include NULL, DISTINCT, MAX, and MIN values. Defaults to TRUE. For PARQUET format only.
Required?False
Position?Named
Accept pipeline input?False
Aliases S3Settings_EnableStatistics
-S3Settings_EncodingType <EncodingTypeValue>
The type of encoding you are using: RLE_DICTIONARY (default), PLAIN, or PLAIN_DICTIONARY.
  • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
  • PLAIN does not use encoding at all. Values are stored as they are.
  • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_EncryptionMode <EncryptionModeValue>
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, you need an IAM role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
  • s3:CreateBucket
  • s3:ListBucket
  • s3:DeleteBucket
  • s3:GetBucketLocation
  • s3:GetObject
  • s3:PutObject
  • s3:DeleteObject
  • s3:GetObjectVersion
  • s3:GetBucketPolicy
  • s3:PutBucketPolicy
  • s3:DeleteBucketPolicy
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_ExternalTableDefinition <String>
The external table definition.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_ParquetVersion <ParquetVersionValue>
The version of Apache Parquet format you want to use: PARQUET_1_0 (default) or PARQUET_2_0.
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_RowGroupLength <Int32>
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. Defaults to 10,000 rows. For PARQUET format only. If you choose a value larger than the maximum, RowGroupLength is set to the maximum row group length in bytes (64 * 1024 * 1024).
Required?False
Position?Named
Accept pipeline input?False
-S3Settings_ServerSideEncryptionKmsKeyId <String>
If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key. Here is a CLI example:
aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>
Required?False
Position?Named
Accept pipeline input?False
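The CLI example under -S3Settings_ServerSideEncryptionKmsKeyId has an approximate equivalent with this cmdlet. The following sketch uses placeholder values for the role, bucket, and key ARNs:

```powershell
# Placeholder ARNs and names -- substitute your own role, bucket, folder, and KMS key.
New-DMSEndpoint -EndpointIdentifier "example-s3-target" `
    -EndpointType target `
    -EngineName s3 `
    -S3Settings_ServiceAccessRoleArn "arn:aws:iam::123456789012:role/example-dms-s3-role" `
    -S3Settings_BucketName "example-target-bucket" `
    -S3Settings_BucketFolder "dms-output" `
    -S3Settings_EncryptionMode SSE_KMS `
    -S3Settings_ServerSideEncryptionKmsKeyId "arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000"
```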
-S3Settings_ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) used by the service access IAM role.
Required?False
Position?Named
Accept pipeline input?False
-ServerName <String>
The name of the server where the endpoint database resides.
Required?False
Position?Named
Accept pipeline input?False
-ServiceAccessRoleArn <String>
The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint.
Required?False
Position?Named
Accept pipeline input?False
-SslMode <DmsSslModeValue>
The Secure Sockets Layer (SSL) mode to use for the SSL connection. The SSL mode can be one of four values: none, require, verify-ca, verify-full. The default value is none.
Required?False
Position?Named
Accept pipeline input?False
-Tag <Tag[]>
Tags to be added to the endpoint.
Required?False
Position?Named
Accept pipeline input?False
AliasesTags
-Username <String>
The user name to be used to log in to the endpoint database.
Required?False
Position?Named
Accept pipeline input?False

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-ProfileLocation <String>

Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs).

If this optional parameter is omitted this cmdlet will search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio first. If the profile is not found then the cmdlet will search in the ini-format credential file at the default location: (user's home directory)\.aws\credentials. Note that the encrypted credential file is not supported on all platforms. It will be skipped when searching for profiles on Windows Nano Server, Mac, and Linux platforms.

If this parameter is specified then this cmdlet will only search the ini-format credential file at the location given.

As the current folder can vary in a shell or during script execution, it is advised that you use a fully qualified path instead of a relative path.

Required? False
Position? Named
Accept pipeline input? False
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required? False
Position? Named
Accept pipeline input? False
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required? False
Position? Named
Accept pipeline input? False
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-Region <String>
The system name of the AWS region in which the operation should be invoked. For example, us-east-1, eu-west-1 etc.
Required? False
Position? Named
Accept pipeline input? False
-EndpointUrl <String>

The endpoint to make the call against.

Note: This parameter is primarily for internal AWS use and is not required/should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.

Required? False
Position? Named
Accept pipeline input? False

Inputs

You can pipe a String object to this cmdlet for the EndpointIdentifier parameter.
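Because EndpointIdentifier accepts pipeline input by value, identifiers can be piped directly into the cmdlet. A sketch, with placeholder identifiers and connection details:

```powershell
# Placeholder values -- creates one endpoint per identifier piped in.
"example-endpoint-1", "example-endpoint-2" |
    New-DMSEndpoint -EndpointType source -EngineName mysql `
        -ServerName "mysql.example.com" -Port 3306 `
        -Username "dms_user" -Password "dms_password"
```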

Outputs

This cmdlet returns an Endpoint object. The service call response (type Amazon.DatabaseMigrationService.Model.CreateEndpointResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.

Supported Version

AWS Tools for PowerShell: 2.x.y.z