CfnEndpoint
- class aws_cdk.aws_dms.CfnEndpoint(scope, id, *, endpoint_type, engine_name, certificate_arn=None, database_name=None, doc_db_settings=None, dynamo_db_settings=None, elasticsearch_settings=None, endpoint_identifier=None, extra_connection_attributes=None, gcp_my_sql_settings=None, ibm_db2_settings=None, kafka_settings=None, kinesis_settings=None, kms_key_id=None, microsoft_sql_server_settings=None, mongo_db_settings=None, my_sql_settings=None, neptune_settings=None, oracle_settings=None, password=None, port=None, postgre_sql_settings=None, redis_settings=None, redshift_settings=None, resource_identifier=None, s3_settings=None, server_name=None, ssl_mode=None, sybase_settings=None, tags=None, username=None)
Bases:
CfnResource
The AWS::DMS::Endpoint resource specifies an AWS DMS endpoint. Currently, AWS CloudFormation supports all AWS DMS endpoint types.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dms-endpoint.html
- CloudformationResource:
AWS::DMS::Endpoint
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms
from aws_cdk import CfnTag

cfn_endpoint = dms.CfnEndpoint(self, "MyCfnEndpoint",
    endpoint_type="endpointType",
    engine_name="engineName",

    # the properties below are optional
    certificate_arn="certificateArn",
    database_name="databaseName",
    doc_db_settings=dms.CfnEndpoint.DocDbSettingsProperty(
        docs_to_investigate=123,
        extract_doc_id=False,
        nesting_level="nestingLevel",
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId"
    ),
    dynamo_db_settings=dms.CfnEndpoint.DynamoDbSettingsProperty(
        service_access_role_arn="serviceAccessRoleArn"
    ),
    elasticsearch_settings=dms.CfnEndpoint.ElasticsearchSettingsProperty(
        endpoint_uri="endpointUri",
        error_retry_duration=123,
        full_load_error_percentage=123,
        service_access_role_arn="serviceAccessRoleArn"
    ),
    endpoint_identifier="endpointIdentifier",
    extra_connection_attributes="extraConnectionAttributes",
    gcp_my_sql_settings=dms.CfnEndpoint.GcpMySQLSettingsProperty(
        after_connect_script="afterConnectScript",
        clean_source_metadata_on_mismatch=False,
        database_name="databaseName",
        events_poll_interval=123,
        max_file_size=123,
        parallel_load_threads=123,
        password="password",
        port=123,
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        server_name="serverName",
        server_timezone="serverTimezone",
        username="username"
    ),
    ibm_db2_settings=dms.CfnEndpoint.IbmDb2SettingsProperty(
        current_lsn="currentLsn",
        keep_csv_files=False,
        load_timeout=123,
        max_file_size=123,
        max_k_bytes_per_read=123,
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        set_data_capture_changes=False,
        write_buffer_size=123
    ),
    kafka_settings=dms.CfnEndpoint.KafkaSettingsProperty(
        broker="broker",
        include_control_details=False,
        include_null_and_empty=False,
        include_partition_value=False,
        include_table_alter_operations=False,
        include_transaction_details=False,
        message_format="messageFormat",
        message_max_bytes=123,
        no_hex_prefix=False,
        partition_include_schema_table=False,
        sasl_password="saslPassword",
        sasl_user_name="saslUserName",
        security_protocol="securityProtocol",
        ssl_ca_certificate_arn="sslCaCertificateArn",
        ssl_client_certificate_arn="sslClientCertificateArn",
        ssl_client_key_arn="sslClientKeyArn",
        ssl_client_key_password="sslClientKeyPassword",
        topic="topic"
    ),
    kinesis_settings=dms.CfnEndpoint.KinesisSettingsProperty(
        include_control_details=False,
        include_null_and_empty=False,
        include_partition_value=False,
        include_table_alter_operations=False,
        include_transaction_details=False,
        message_format="messageFormat",
        no_hex_prefix=False,
        partition_include_schema_table=False,
        service_access_role_arn="serviceAccessRoleArn",
        stream_arn="streamArn"
    ),
    kms_key_id="kmsKeyId",
    microsoft_sql_server_settings=dms.CfnEndpoint.MicrosoftSqlServerSettingsProperty(
        bcp_packet_size=123,
        control_tables_file_group="controlTablesFileGroup",
        database_name="databaseName",
        force_lob_lookup=False,
        password="password",
        port=123,
        query_single_always_on_node=False,
        read_backup_only=False,
        safeguard_policy="safeguardPolicy",
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        server_name="serverName",
        tlog_access_mode="tlogAccessMode",
        trim_space_in_char=False,
        use_bcp_full_load=False,
        username="username",
        use_third_party_backup_device=False
    ),
    mongo_db_settings=dms.CfnEndpoint.MongoDbSettingsProperty(
        auth_mechanism="authMechanism",
        auth_source="authSource",
        auth_type="authType",
        database_name="databaseName",
        docs_to_investigate="docsToInvestigate",
        extract_doc_id="extractDocId",
        nesting_level="nestingLevel",
        password="password",
        port=123,
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        server_name="serverName",
        username="username"
    ),
    my_sql_settings=dms.CfnEndpoint.MySqlSettingsProperty(
        after_connect_script="afterConnectScript",
        clean_source_metadata_on_mismatch=False,
        events_poll_interval=123,
        max_file_size=123,
        parallel_load_threads=123,
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        server_timezone="serverTimezone",
        target_db_type="targetDbType"
    ),
    neptune_settings=dms.CfnEndpoint.NeptuneSettingsProperty(
        error_retry_duration=123,
        iam_auth_enabled=False,
        max_file_size=123,
        max_retry_count=123,
        s3_bucket_folder="s3BucketFolder",
        s3_bucket_name="s3BucketName",
        service_access_role_arn="serviceAccessRoleArn"
    ),
    oracle_settings=dms.CfnEndpoint.OracleSettingsProperty(
        access_alternate_directly=False,
        additional_archived_log_dest_id=123,
        add_supplemental_logging=False,
        allow_select_nested_tables=False,
        archived_log_dest_id=123,
        archived_logs_only=False,
        asm_password="asmPassword",
        asm_server="asmServer",
        asm_user="asmUser",
        char_length_semantics="charLengthSemantics",
        direct_path_no_log=False,
        direct_path_parallel_load=False,
        enable_homogenous_tablespace=False,
        extra_archived_log_dest_ids=[123],
        fail_tasks_on_lob_truncation=False,
        number_datatype_scale=123,
        oracle_path_prefix="oraclePathPrefix",
        parallel_asm_read_threads=123,
        read_ahead_blocks=123,
        read_table_space_name=False,
        replace_path_prefix=False,
        retry_interval=123,
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_oracle_asm_access_role_arn="secretsManagerOracleAsmAccessRoleArn",
        secrets_manager_oracle_asm_secret_id="secretsManagerOracleAsmSecretId",
        secrets_manager_secret_id="secretsManagerSecretId",
        security_db_encryption="securityDbEncryption",
        security_db_encryption_name="securityDbEncryptionName",
        spatial_data_option_to_geo_json_function_name="spatialDataOptionToGeoJsonFunctionName",
        standby_delay_time=123,
        use_alternate_folder_for_online=False,
        use_b_file=False,
        use_direct_path_full_load=False,
        use_logminer_reader=False,
        use_path_prefix="usePathPrefix"
    ),
    password="password",
    port=123,
    postgre_sql_settings=dms.CfnEndpoint.PostgreSqlSettingsProperty(
        after_connect_script="afterConnectScript",
        babelfish_database_name="babelfishDatabaseName",
        capture_ddls=False,
        database_mode="databaseMode",
        ddl_artifacts_schema="ddlArtifactsSchema",
        execute_timeout=123,
        fail_tasks_on_lob_truncation=False,
        heartbeat_enable=False,
        heartbeat_frequency=123,
        heartbeat_schema="heartbeatSchema",
        map_boolean_as_boolean=False,
        max_file_size=123,
        plugin_name="pluginName",
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        slot_name="slotName"
    ),
    redis_settings=dms.CfnEndpoint.RedisSettingsProperty(
        auth_password="authPassword",
        auth_type="authType",
        auth_user_name="authUserName",
        port=123,
        server_name="serverName",
        ssl_ca_certificate_arn="sslCaCertificateArn",
        ssl_security_protocol="sslSecurityProtocol"
    ),
    redshift_settings=dms.CfnEndpoint.RedshiftSettingsProperty(
        accept_any_date=False,
        after_connect_script="afterConnectScript",
        bucket_folder="bucketFolder",
        bucket_name="bucketName",
        case_sensitive_names=False,
        comp_update=False,
        connection_timeout=123,
        date_format="dateFormat",
        empty_as_null=False,
        encryption_mode="encryptionMode",
        explicit_ids=False,
        file_transfer_upload_streams=123,
        load_timeout=123,
        map_boolean_as_boolean=False,
        max_file_size=123,
        remove_quotes=False,
        replace_chars="replaceChars",
        replace_invalid_chars="replaceInvalidChars",
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId",
        server_side_encryption_kms_key_id="serverSideEncryptionKmsKeyId",
        service_access_role_arn="serviceAccessRoleArn",
        time_format="timeFormat",
        trim_blanks=False,
        truncate_columns=False,
        write_buffer_size=123
    ),
    resource_identifier="resourceIdentifier",
    s3_settings=dms.CfnEndpoint.S3SettingsProperty(
        add_column_name=False,
        add_trailing_padding_character=False,
        bucket_folder="bucketFolder",
        bucket_name="bucketName",
        canned_acl_for_objects="cannedAclForObjects",
        cdc_inserts_and_updates=False,
        cdc_inserts_only=False,
        cdc_max_batch_interval=123,
        cdc_min_file_size=123,
        cdc_path="cdcPath",
        compression_type="compressionType",
        csv_delimiter="csvDelimiter",
        csv_no_sup_value="csvNoSupValue",
        csv_null_value="csvNullValue",
        csv_row_delimiter="csvRowDelimiter",
        data_format="dataFormat",
        data_page_size=123,
        date_partition_delimiter="datePartitionDelimiter",
        date_partition_enabled=False,
        date_partition_sequence="datePartitionSequence",
        date_partition_timezone="datePartitionTimezone",
        dict_page_size_limit=123,
        enable_statistics=False,
        encoding_type="encodingType",
        encryption_mode="encryptionMode",
        expected_bucket_owner="expectedBucketOwner",
        external_table_definition="externalTableDefinition",
        glue_catalog_generation=False,
        ignore_header_rows=123,
        include_op_for_full_load=False,
        max_file_size=123,
        parquet_timestamp_in_millisecond=False,
        parquet_version="parquetVersion",
        preserve_transactions=False,
        rfc4180=False,
        row_group_length=123,
        server_side_encryption_kms_key_id="serverSideEncryptionKmsKeyId",
        service_access_role_arn="serviceAccessRoleArn",
        timestamp_column_name="timestampColumnName",
        use_csv_no_sup_value=False,
        use_task_start_time_for_full_load_timestamp=False
    ),
    server_name="serverName",
    ssl_mode="sslMode",
    sybase_settings=dms.CfnEndpoint.SybaseSettingsProperty(
        secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
        secrets_manager_secret_id="secretsManagerSecretId"
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    username="username"
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
endpoint_type (str) – The type of endpoint. Valid values are source and target.
engine_name (str) – The type of engine for the endpoint, depending on the EndpointType value. Valid values: mysql | oracle | postgres | mariadb | aurora | aurora-postgresql | opensearch | redshift | redshift-serverless | s3 | db2 | azuredb | sybase | dynamodb | mongodb | kinesis | kafka | elasticsearch | docdb | sqlserver | neptune
certificate_arn (Optional[str]) – The Amazon Resource Name (ARN) for the certificate.
database_name (Optional[str]) – The name of the endpoint database. For a MySQL source or target endpoint, don't specify DatabaseName. To migrate to a specific database, use this setting and targetDbType.
doc_db_settings (Union[IResolvable, DocDbSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target DocumentDB endpoint. For more information about other available settings, see Using extra connections attributes with Amazon DocumentDB as a source and Using Amazon DocumentDB as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
dynamo_db_settings (Union[IResolvable, DynamoDbSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using object mapping to migrate data to DynamoDB in the AWS Database Migration Service User Guide.
elasticsearch_settings (Union[IResolvable, ElasticsearchSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see Extra connection attributes when using OpenSearch as a target for AWS DMS in the AWS Database Migration Service User Guide.
endpoint_identifier (Optional[str]) – The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen, or contain two consecutive hyphens.
extra_connection_attributes (Optional[str]) – Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with AWS DMS Endpoints in the AWS Database Migration Service User Guide.
gcp_my_sql_settings (Union[IResolvable, GcpMySQLSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source GCP MySQL endpoint. These settings are much the same as the settings for any MySQL-compatible endpoint. For more information, see Extra connection attributes when using MySQL as a source for AWS DMS in the AWS Database Migration Service User Guide.
ibm_db2_settings (Union[IResolvable, IbmDb2SettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide.
kafka_settings (Union[IResolvable, KafkaSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target Apache Kafka endpoint. For more information about other available settings, see Using object mapping to migrate data to a Kafka topic in the AWS Database Migration Service User Guide.
kinesis_settings (Union[IResolvable, KinesisSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about other available settings, see Using object mapping to migrate data to a Kinesis data stream in the AWS Database Migration Service User Guide.
kms_key_id (Optional[str]) – An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint. If you don't specify a value for the KmsKeyId parameter, AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
microsoft_sql_server_settings (Union[IResolvable, MicrosoftSqlServerSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.
mongo_db_settings (Union[IResolvable, MongoDbSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Using MongoDB as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
my_sql_settings (Union[IResolvable, MySqlSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.
neptune_settings (Union[IResolvable, NeptuneSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying endpoint settings for Amazon Neptune as a target in the AWS Database Migration Service User Guide.
oracle_settings (Union[IResolvable, OracleSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.
password (Optional[str]) – The password to be used to log in to the endpoint database.
port (Union[int, float, None]) – The port used by the endpoint database.
postgre_sql_settings (Union[IResolvable, PostgreSqlSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide.
redis_settings (Union[IResolvable, RedisSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the target Redis endpoint. For information about other available settings, see Specifying endpoint settings for Redis as a target in the AWS Database Migration Service User Guide.
redshift_settings (Union[IResolvable, RedshiftSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the Amazon Redshift endpoint. For more information about other available settings, see Extra connection attributes when using Amazon Redshift as a target for AWS DMS in the AWS Database Migration Service User Guide.
resource_identifier (Optional[str]) – A display name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1. For example, this value might result in the EndpointArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, AWS DMS generates a default identifier value for the end of EndpointArn.
s3_settings (Union[IResolvable, S3SettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target Amazon S3 endpoint. For more information about other available settings, see Extra connection attributes when using Amazon S3 as a source for AWS DMS and Extra connection attributes when using Amazon S3 as a target for AWS DMS in the AWS Database Migration Service User Guide.
server_name (Optional[str]) – The name of the server where the endpoint database resides.
ssl_mode (Optional[str]) – The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none. Note that when engine_name is set to S3, the only allowed value is none.
sybase_settings (Union[IResolvable, SybaseSettingsProperty, Dict[str, Any], None]) – Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – One or more tags to be assigned to the endpoint.
username (Optional[str]) – The user name to be used to log in to the endpoint database.
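As a hedged illustration of a more realistic configuration than the generated placeholder example above, the sketch below defines a MySQL source endpoint whose credentials live in AWS Secrets Manager. The role ARN, secret ARN, and stack context (self) are assumptions, not values from this reference.

from aws_cdk import aws_dms as dms

# A minimal MySQL source endpoint. Instead of clear-text username/password,
# DMS resolves the connection details from a Secrets Manager secret through
# an IAM role that allows iam:PassRole (both ARNs below are placeholders).
source_endpoint = dms.CfnEndpoint(self, "SourceEndpoint",
    endpoint_type="source",
    engine_name="mysql",
    ssl_mode="require",
    my_sql_settings=dms.CfnEndpoint.MySqlSettingsProperty(
        secrets_manager_access_role_arn="arn:aws:iam::123456789012:role/dms-secrets-role",
        secrets_manager_secret_id="arn:aws:secretsmanager:us-east-1:123456789012:secret:mysql-creds"
    )
)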
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
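For example, assuming a dms.CfnReplicationInstance stored in a variable named replication_instance elsewhere in the same app (the variable name is illustrative):

# Provision the endpoint only after the replication instance exists.
cfn_endpoint.add_dependency(replication_instance)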
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    }
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
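A minimal sketch for this resource type: the path uses the CloudFormation property name (note the PascalCase), not the Python keyword argument.

# Equivalent to add_override("Properties.SslMode", "require").
cfn_endpoint.add_property_override("SslMode", "require")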
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- See:
- Return type:
None
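A minimal sketch, assuming the endpoint should survive stack deletion so the connection definition can be recovered later:

from aws_cdk import RemovalPolicy

# Keep the endpoint in the account even if it is removed from the stack.
cfn_endpoint.apply_removal_policy(RemovalPolicy.RETAIN)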
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
Reference
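For instance, the ExternalId attribute documented under attr_external_id below can also be fetched generically; this is a sketch, not the preferred accessor:

# Equivalent to reading cfn_endpoint.attr_external_id.
external_id = cfn_endpoint.get_att("ExternalId").to_string()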
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::DMS::Endpoint'
- attr_external_id
A value that can be used for cross-account validation.
- CloudformationAttribute:
ExternalId
- attr_id
- CloudformationAttribute:
Id
- certificate_arn
The Amazon Resource Name (ARN) for the certificate.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- database_name
The name of the endpoint database.
- doc_db_settings
Settings in JSON format for the source and target DocumentDB endpoint.
- dynamo_db_settings
Settings in JSON format for the target Amazon DynamoDB endpoint.
- elasticsearch_settings
Settings in JSON format for the target OpenSearch endpoint.
- endpoint_identifier
The database endpoint identifier.
- endpoint_type
The type of endpoint.
- engine_name
The type of engine for the endpoint, depending on the EndpointType value.
- extra_connection_attributes
Additional attributes associated with the connection.
- gcp_my_sql_settings
Settings in JSON format for the source GCP MySQL endpoint.
- ibm_db2_settings
Settings in JSON format for the source IBM Db2 LUW endpoint.
- kafka_settings
Settings in JSON format for the target Apache Kafka endpoint.
- kinesis_settings
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams.
- kms_key_id
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- microsoft_sql_server_settings
Settings in JSON format for the source and target Microsoft SQL Server endpoint.
- mongo_db_settings
Settings in JSON format for the source MongoDB endpoint.
- my_sql_settings
Settings in JSON format for the source and target MySQL endpoint.
- neptune_settings
Settings in JSON format for the target Amazon Neptune endpoint.
- node
The tree node.
- oracle_settings
Settings in JSON format for the source and target Oracle endpoint.
- password
The password to be used to log in to the endpoint database.
- port
The port used by the endpoint database.
- postgre_sql_settings
Settings in JSON format for the source and target PostgreSQL endpoint.
- redis_settings
Settings in JSON format for the target Redis endpoint.
- redshift_settings
Settings in JSON format for the Amazon Redshift endpoint.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- resource_identifier
A display name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object.
- s3_settings
Settings in JSON format for the source and target Amazon S3 endpoint.
- server_name
The name of the server where the endpoint database resides.
- ssl_mode
The Secure Sockets Layer (SSL) mode to use for the SSL connection.
The default is none.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- sybase_settings
Settings in JSON format for the source and target SAP ASE endpoint.
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
One or more tags to be assigned to the endpoint.
- username
The user name to be used to log in to the endpoint database.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof and to use this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
DocDbSettingsProperty
- class CfnEndpoint.DocDbSettingsProperty(*, docs_to_investigate=None, extract_doc_id=None, nesting_level=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None)
Bases:
object
Provides information that defines a DocumentDB endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Using extra connections attributes with Amazon DocumentDB as a source and Using Amazon DocumentDB as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide .
- Parameters:
docs_to_investigate (Union[int, float, None]) – Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one". Must be a positive value greater than 0. Default value is 1000.
extract_doc_id (Union[bool, IResolvable, None]) – Specifies the document ID. Use this setting when NestingLevel is set to "none". Default value is "false".
nesting_level (Optional[str]) – Specifies either document or table mode. Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the DocumentDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the DocumentDB endpoint connection details.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

doc_db_settings_property = dms.CfnEndpoint.DocDbSettingsProperty(
    docs_to_investigate=123,
    extract_doc_id=False,
    nesting_level="nestingLevel",
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId"
)
Attributes
- docs_to_investigate
Indicates the number of documents to preview to determine the document organization.
Use this setting when NestingLevel is set to "one".
Must be a positive value greater than 0. Default value is 1000.
- extract_doc_id
Specifies the document ID. Use this setting when NestingLevel is set to "none".
Default value is "false".
- nesting_level
Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the DocumentDB endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the DocumentDB endpoint connection details.
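Because the two credential styles above are mutually exclusive, the hedged sketch below shows both; the ARNs, host name, credentials, and stack context (self) are placeholder assumptions.

from aws_cdk import aws_dms as dms

# Option 1: resolve credentials through Secrets Manager (no clear-text values).
docdb_settings_with_secret = dms.CfnEndpoint.DocDbSettingsProperty(
    nesting_level="none",
    secrets_manager_access_role_arn="arn:aws:iam::123456789012:role/dms-docdb-role",
    secrets_manager_secret_id="arn:aws:secretsmanager:us-east-1:123456789012:secret:docdb-creds"
)

# Option 2: clear-text UserName, Password, ServerName, and Port on the endpoint itself.
docdb_endpoint = dms.CfnEndpoint(self, "DocDbSource",
    endpoint_type="source",
    engine_name="docdb",
    username="dmsuser",
    password="dmspassword",
    server_name="docdb.cluster.example.com",
    port=27017,
    doc_db_settings=dms.CfnEndpoint.DocDbSettingsProperty(nesting_level="none")
)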
DynamoDbSettingsProperty
- class CfnEndpoint.DynamoDbSettingsProperty(*, service_access_role_arn=None)
Bases:
object
Provides information, including the Amazon Resource Name (ARN) of the IAM role used to define an Amazon DynamoDB target endpoint.
This information also includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Using object mapping to migrate data to DynamoDB in the AWS Database Migration Service User Guide .
- Parameters:
service_access_role_arn (Optional[str]) – The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

dynamo_db_settings_property = dms.CfnEndpoint.DynamoDbSettingsProperty(
    service_access_role_arn="serviceAccessRoleArn"
)
Attributes
- service_access_role_arn
The Amazon Resource Name (ARN) used by the service to access the IAM role.
The role must allow the iam:PassRole action.
ElasticsearchSettingsProperty
- class CfnEndpoint.ElasticsearchSettingsProperty(*, endpoint_uri=None, error_retry_duration=None, full_load_error_percentage=None, service_access_role_arn=None)
Bases:
object
Provides information that defines an OpenSearch endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about the available settings, see Extra connection attributes when using OpenSearch as a target for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
endpoint_uri (Optional[str]) – The endpoint for the OpenSearch cluster. AWS DMS uses HTTPS if a transport protocol (either HTTP or HTTPS) isn't specified.
error_retry_duration (Union[int, float, None]) – The maximum number of seconds for which DMS retries failed API requests to the OpenSearch cluster.
full_load_error_percentage (Union[int, float, None]) – The maximum percentage of records that can fail to be written before a full load operation stops. To avoid early failure, this counter is only effective after 1,000 records are transferred. OpenSearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If transfer of all records fails in the last 10 minutes, the full load operation stops.
service_access_role_arn (Optional[str]) – The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

elasticsearch_settings_property = dms.CfnEndpoint.ElasticsearchSettingsProperty(
    endpoint_uri="endpointUri",
    error_retry_duration=123,
    full_load_error_percentage=123,
    service_access_role_arn="serviceAccessRoleArn"
)
Attributes
- endpoint_uri
The endpoint for the OpenSearch cluster.
AWS DMS uses HTTPS if a transport protocol (either HTTP or HTTPS) isn’t specified.
- error_retry_duration
The maximum number of seconds for which DMS retries failed API requests to the OpenSearch cluster.
- full_load_error_percentage
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1,000 records are transferred. OpenSearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If transfer of all records fails in the last 10 minutes, the full load operation stops.
- service_access_role_arn
The Amazon Resource Name (ARN) used by the service to access the IAM role.
The role must allow the iam:PassRole action.
GcpMySQLSettingsProperty
- class CfnEndpoint.GcpMySQLSettingsProperty(*, after_connect_script=None, clean_source_metadata_on_mismatch=None, database_name=None, events_poll_interval=None, max_file_size=None, parallel_load_threads=None, password=None, port=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, server_name=None, server_timezone=None, username=None)
Bases:
object
Provides information that defines a GCP MySQL endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. These settings are much the same as the settings for any MySQL-compatible endpoint. For more information, see Extra connection attributes when using MySQL as a source for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
after_connect_script (Optional[str]) – Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails. For this parameter, provide the code of the script itself, not the name of a file containing the script.
clean_source_metadata_on_mismatch (Union[bool, IResolvable, None]) – Adjusts the behavior of AWS DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster. If you need AWS DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to false.
database_name (Optional[str]) – Database name for the endpoint. For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the AWS DMS task.
events_poll_interval (Union[int, float, None]) – Specifies how often to check the binary log for new changes/events when the database is idle. The default is five seconds. Example: eventsPollInterval=5; In the example, AWS DMS checks for changes in the binary logs every five seconds.
max_file_size (Union[int, float, None]) – Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database. Example: maxFileSize=512
parallel_load_threads (Union[int, float, None]) – Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one. Example: parallelLoadThreads=1
password (Optional[str]) – Endpoint connection password.
port (Union[int, float, None]) – The port used by the endpoint database.
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
server_name (Optional[str]) – The MySQL host name.
server_timezone (Optional[str]) – Specifies the time zone for the source MySQL database. Don't enclose time zones in single quotation marks. Example: serverTimezone=US/Pacific;
username (Optional[str]) – Endpoint connection user name.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

gcp_my_sql_settings_property = dms.CfnEndpoint.GcpMySQLSettingsProperty(
    after_connect_script="afterConnectScript",
    clean_source_metadata_on_mismatch=False,
    database_name="databaseName",
    events_poll_interval=123,
    max_file_size=123,
    parallel_load_threads=123,
    password="password",
    port=123,
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    server_name="serverName",
    server_timezone="serverTimezone",
    username="username"
)
Attributes
- after_connect_script
Specifies a script to run immediately after AWS DMS connects to the endpoint.
The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
- clean_source_metadata_on_mismatch
Adjusts the behavior of AWS DMS when migrating from an SQL Server source database that is hosted as part of an Always On availability group cluster.
If you need AWS DMS to poll all the nodes in the Always On cluster for transaction backups, set this attribute to false.
- database_name
Database name for the endpoint.
For a MySQL source or target endpoint, don't explicitly specify the database using the DatabaseName request parameter on either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName when you create or modify a MySQL endpoint replicates all the task tables to this single database. For MySQL endpoints, you specify the database only when you specify the schema in the table-mapping rules of the AWS DMS task.
- events_poll_interval
Specifies how often to check the binary log for new changes/events when the database is idle.
The default is five seconds.
Example:
eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
- max_file_size
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example:
maxFileSize=512
- parallel_load_threads
Improves performance when loading data into the MySQL-compatible target database.
Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one.
Example:
parallelLoadThreads=1
- password
Endpoint connection password.
- port
The port used by the endpoint database.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MySQL endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both.
For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
- server_name
The MySQL host name.
- server_timezone
Specifies the time zone for the source MySQL database. Don’t enclose time zones in single quotation marks.
Example:
serverTimezone=US/Pacific;
- username
Endpoint connection user name.
IbmDb2SettingsProperty
- class CfnEndpoint.IbmDb2SettingsProperty(*, current_lsn=None, keep_csv_files=None, load_timeout=None, max_file_size=None, max_k_bytes_per_read=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, set_data_capture_changes=None, write_buffer_size=None)
Bases:
object
Provides information that defines an IBMDB2 endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
current_lsn (Optional[str]) – For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
keep_csv_files (Union[bool, IResolvable, None]) – If true, AWS DMS saves any .csv files to the Db2 LUW target that were used to replicate data. DMS uses these files for analysis and troubleshooting. The default value is false.
load_timeout (Union[int, float, None]) – The amount of time (in milliseconds) before AWS DMS times out operations performed by DMS on the Db2 target. The default value is 1200 (20 minutes).
max_file_size (Union[int, float, None]) – Specifies the maximum size (in KB) of .csv files used to transfer data to Db2 LUW.
max_k_bytes_per_read (Union[int, float, None]) – Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Db2 LUW endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the IBMDB2 endpoint connection details.
set_data_capture_changes (Union[bool, IResolvable, None]) – Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
write_buffer_size (Union[int, float, None]) – The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk on the DMS replication instance. The default value is 1024 (1 MB).
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

ibm_db2_settings_property = dms.CfnEndpoint.IbmDb2SettingsProperty(
    current_lsn="currentLsn",
    keep_csv_files=False,
    load_timeout=123,
    max_file_size=123,
    max_k_bytes_per_read=123,
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    set_data_capture_changes=False,
    write_buffer_size=123
)
Attributes
- current_lsn
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
- keep_csv_files
If true, AWS DMS saves any .csv files to the Db2 LUW target that were used to replicate data. DMS uses these files for analysis and troubleshooting.
The default value is false.
- load_timeout
The amount of time (in milliseconds) before AWS DMS times out operations performed by DMS on the Db2 target.
The default value is 1200 (20 minutes).
- max_file_size
Specifies the maximum size (in KB) of .csv files used to transfer data to Db2 LUW.
- max_k_bytes_per_read
Maximum number of bytes per read, as a NUMBER value.
The default is 64 KB.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Db2 LUW endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the IBMDB2 endpoint connection details.
- set_data_capture_changes
Enables ongoing replication (CDC) as a BOOLEAN value.
The default is true.
- write_buffer_size
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk on the DMS replication instance. The default value is 1024 (1 MB).
KafkaSettingsProperty
- class CfnEndpoint.KafkaSettingsProperty(*, broker=None, include_control_details=None, include_null_and_empty=None, include_partition_value=None, include_table_alter_operations=None, include_transaction_details=None, message_format=None, message_max_bytes=None, no_hex_prefix=None, partition_include_schema_table=None, sasl_password=None, sasl_user_name=None, security_protocol=None, ssl_ca_certificate_arn=None, ssl_client_certificate_arn=None, ssl_client_key_arn=None, ssl_client_key_password=None, topic=None)
Bases:
object
Provides information that describes an Apache Kafka endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Using object mapping to migrate data to a Kafka topic in the AWS Database Migration Service User Guide .
- Parameters:
broker (Optional[str]) – A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
include_control_details (Union[bool, IResolvable, None]) – Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
include_null_and_empty (Union[bool, IResolvable, None]) – Include NULL and empty columns for records migrated to the endpoint. The default is false.
include_partition_value (Union[bool, IResolvable, None]) – Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
include_table_alter_operations (Union[bool, IResolvable, None]) – Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
include_transaction_details (Union[bool, IResolvable, None]) – Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
message_format (Optional[str]) – The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
message_max_bytes (Union[int, float, None]) – The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
no_hex_prefix (Union[bool, IResolvable, None]) – Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
partition_include_schema_table (Union[bool, IResolvable, None]) – Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
sasl_password (Optional[str]) – The secure password that you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
sasl_user_name (Optional[str]) – The secure user name you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
security_protocol (Optional[str]) – Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
ssl_ca_certificate_arn (Optional[str]) – The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that AWS DMS uses to securely connect to your Kafka target endpoint.
ssl_client_certificate_arn (Optional[str]) – The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
ssl_client_key_arn (Optional[str]) – The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
ssl_client_key_password (Optional[str]) – The password for the client private key used to securely connect to a Kafka target endpoint.
topic (Optional[str]) – The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

kafka_settings_property = dms.CfnEndpoint.KafkaSettingsProperty(
    broker="broker",
    include_control_details=False,
    include_null_and_empty=False,
    include_partition_value=False,
    include_table_alter_operations=False,
    include_transaction_details=False,
    message_format="messageFormat",
    message_max_bytes=123,
    no_hex_prefix=False,
    partition_include_schema_table=False,
    sasl_password="saslPassword",
    sasl_user_name="saslUserName",
    security_protocol="securityProtocol",
    ssl_ca_certificate_arn="sslCaCertificateArn",
    ssl_client_certificate_arn="sslClientCertificateArn",
    ssl_client_key_arn="sslClientKeyArn",
    ssl_client_key_password="sslClientKeyPassword",
    topic="topic"
)
Attributes
- broker
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance.
Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
- include_control_details
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output.
The default is false.
- include_null_and_empty
Include NULL and empty columns for records migrated to the endpoint.
The default is false.
- include_partition_value
Shows the partition value within the Kafka message output unless the partition type is schema-table-type.
The default is false.
- include_table_alter_operations
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column.
The default is false.
- include_transaction_details
Provides detailed transaction information from the source database.
This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
- message_format
The output format for the records created on the endpoint.
The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
- message_max_bytes
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
- no_hex_prefix
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.
For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
- partition_include_schema_table
Prefixes schema and table names to partition values, when the partition type is primary-key-type.
Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
- sasl_password
The secure password that you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- sasl_user_name
The secure user name you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- security_protocol
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS).
Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
- ssl_ca_certificate_arn
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that AWS DMS uses to securely connect to your Kafka target endpoint.
- ssl_client_certificate_arn
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
- ssl_client_key_arn
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
- ssl_client_key_password
The password for the client private key used to securely connect to a Kafka target endpoint.
- topic
The topic to which you migrate the data.
If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
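To tie these settings together, here is a minimal, hypothetical sketch of a Kafka target endpoint that uses SASL-SSL authentication. The construct ID, broker address, topic, and credential values are placeholders, and self is assumed to be a Stack or Construct in scope; "kafka" is the AWS DMS engine name for Kafka targets.
from aws_cdk import aws_dms as dms

# A sketch under placeholder values, not a definitive configuration.
kafka_target = dms.CfnEndpoint(self, "KafkaTarget",
    endpoint_type="target",
    engine_name="kafka",
    kafka_settings=dms.CfnEndpoint.KafkaSettingsProperty(
        broker="b-1.example.kafka.us-east-1.amazonaws.com:9096",
        topic="dms-replication-topic",
        message_format="JSON",
        # sasl-ssl requires SaslUsername and SaslPassword
        security_protocol="sasl-ssl",
        sasl_user_name="dms-user",
        sasl_password="placeholder-password",
        include_transaction_details=True,
        partition_include_schema_table=True
    )
)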
KinesisSettingsProperty
- class CfnEndpoint.KinesisSettingsProperty(*, include_control_details=None, include_null_and_empty=None, include_partition_value=None, include_table_alter_operations=None, include_transaction_details=None, message_format=None, no_hex_prefix=None, partition_include_schema_table=None, service_access_role_arn=None, stream_arn=None)
Bases:
object
Provides information that describes an Amazon Kinesis Data Stream endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Using object mapping to migrate data to a Kinesis data stream in the AWS Database Migration Service User Guide.
- Parameters:
- include_control_details (Union[bool, IResolvable, None]) – Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false.
- include_null_and_empty (Union[bool, IResolvable, None]) – Include NULL and empty columns for records migrated to the endpoint. The default is false.
- include_partition_value (Union[bool, IResolvable, None]) – Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is false.
- include_table_alter_operations (Union[bool, IResolvable, None]) – Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
- include_transaction_details (Union[bool, IResolvable, None]) – Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
- message_format (Optional[str]) – The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
- no_hex_prefix (Union[bool, IResolvable, None]) – Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
- partition_include_schema_table (Union[bool, IResolvable, None]) – Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false.
- service_access_role_arn (Optional[str]) – The Amazon Resource Name (ARN) for the IAM role that AWS DMS uses to write to the Kinesis data stream. The role must allow the iam:PassRole action.
- stream_arn (Optional[str]) – The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

kinesis_settings_property = dms.CfnEndpoint.KinesisSettingsProperty(
    include_control_details=False,
    include_null_and_empty=False,
    include_partition_value=False,
    include_table_alter_operations=False,
    include_transaction_details=False,
    message_format="messageFormat",
    no_hex_prefix=False,
    partition_include_schema_table=False,
    service_access_role_arn="serviceAccessRoleArn",
    stream_arn="streamArn"
)
Attributes
- include_control_details
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output.
The default is false.
- include_null_and_empty
Include NULL and empty columns for records migrated to the endpoint.
The default is false.
- include_partition_value
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type.
The default is false.
- include_table_alter_operations
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column.
The default is false.
- include_transaction_details
Provides detailed transaction information from the source database.
This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
- message_format
The output format for the records created on the endpoint.
The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
- no_hex_prefix
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.
For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to an Amazon Kinesis target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
- partition_include_schema_table
Prefixes schema and table names to partition values, when the partition type is primary-key-type.
Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false.
- service_access_role_arn
The Amazon Resource Name (ARN) for the IAM role that AWS DMS uses to write to the Kinesis data stream.
The role must allow the iam:PassRole action.
- stream_arn
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
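As a usage illustration, the sketch below wires these settings into a Kinesis target endpoint. The stream and role ARNs are placeholders, self is assumed to be a Stack or Construct in scope, and "kinesis" is the AWS DMS engine name for Kinesis targets.
from aws_cdk import aws_dms as dms

# A minimal sketch, assuming the stream and an iam:PassRole-capable
# role already exist; both ARNs are placeholders.
kinesis_target = dms.CfnEndpoint(self, "KinesisTarget",
    endpoint_type="target",
    engine_name="kinesis",
    kinesis_settings=dms.CfnEndpoint.KinesisSettingsProperty(
        stream_arn="arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream",
        service_access_role_arn="arn:aws:iam::123456789012:role/dms-kinesis-role",
        message_format="JSON",
        include_partition_value=True,
        partition_include_schema_table=True
    )
)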
MicrosoftSqlServerSettingsProperty
- class CfnEndpoint.MicrosoftSqlServerSettingsProperty(*, bcp_packet_size=None, control_tables_file_group=None, database_name=None, force_lob_lookup=None, password=None, port=None, query_single_always_on_node=None, read_backup_only=None, safeguard_policy=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, server_name=None, tlog_access_mode=None, trim_space_in_char=None, use_bcp_full_load=None, username=None, use_third_party_backup_device=None)
Bases:
object
Provides information that defines a Microsoft SQL Server endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.
- Parameters:
- bcp_packet_size (Union[int, float, None]) – The maximum size of the packets (in bytes) used to transfer data using BCP.
- control_tables_file_group (Optional[str]) – Specifies a file group for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
- database_name (Optional[str]) – Database name for the endpoint.
- force_lob_lookup (Union[bool, IResolvable, None]) – Forces LOB lookup on inline LOB.
- password (Optional[str]) – Endpoint connection password.
- port (Union[int, float, None]) – Endpoint TCP port.
- query_single_always_on_node (Union[bool, IResolvable, None]) – Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
- read_backup_only (Union[bool, IResolvable, None]) – When this attribute is set to Y, AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
- safeguard_policy (Optional[str]) – Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods. Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method. Exclusively use sp_repldone within a single task: When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
- secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the SQL Server endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MicrosoftSQLServer endpoint connection details.
- server_name (Optional[str]) – Fully qualified domain name of the endpoint. For an Amazon RDS SQL Server instance, this is the output of DescribeDBInstances, in the Endpoint.Address field (https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html).
- tlog_access_mode (Optional[str]) – Indicates the mode used to fetch CDC data.
- trim_space_in_char (Union[bool, IResolvable, None]) – Use the TrimSpaceInChar source endpoint setting to right-trim data on CHAR and NCHAR data types during migration. Setting TrimSpaceInChar does not left-trim data. The default value is true.
- use_bcp_full_load (Union[bool, IResolvable, None]) – Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
- username (Optional[str]) – Endpoint connection user name.
- use_third_party_backup_device (Union[bool, IResolvable, None]) – When this attribute is set to Y, DMS processes third-party transaction log backups if they are created in native format.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

microsoft_sql_server_settings_property = dms.CfnEndpoint.MicrosoftSqlServerSettingsProperty(
    bcp_packet_size=123,
    control_tables_file_group="controlTablesFileGroup",
    database_name="databaseName",
    force_lob_lookup=False,
    password="password",
    port=123,
    query_single_always_on_node=False,
    read_backup_only=False,
    safeguard_policy="safeguardPolicy",
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    server_name="serverName",
    tlog_access_mode="tlogAccessMode",
    trim_space_in_char=False,
    use_bcp_full_load=False,
    username="username",
    use_third_party_backup_device=False
)
Attributes
- bcp_packet_size
The maximum size of the packets (in bytes) used to transfer data using BCP.
- control_tables_file_group
Specifies a file group for the AWS DMS internal tables.
When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
- database_name
Database name for the endpoint.
- force_lob_lookup
Forces LOB lookup on inline LOB.
- password
Endpoint connection password.
- port
Endpoint TCP port.
- query_single_always_on_node
Cleans and recreates table metadata information on the replication instance when a mismatch occurs.
An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
- read_backup_only
When this attribute is set to Y, AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication.
Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
- safeguard_policy
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task: When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the SQL Server endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MicrosoftSQLServer endpoint connection details.
- server_name
Fully qualified domain name of the endpoint.
For an Amazon RDS SQL Server instance, this is the output of DescribeDBInstances, in the Endpoint.Address field (https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html).
- tlog_access_mode
Indicates the mode used to fetch CDC data.
- trim_space_in_char
Use the TrimSpaceInChar source endpoint setting to right-trim data on CHAR and NCHAR data types during migration.
Setting TrimSpaceInChar does not left-trim data. The default value is true.
- use_bcp_full_load
Use this attribute to transfer data for full-load operations using BCP.
When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
- use_third_party_backup_device
When this attribute is set to Y, DMS processes third-party transaction log backups if they are created in native format.
- username
Endpoint connection user name.
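For context, here is a minimal, hypothetical sketch of a SQL Server source endpoint that uses Secrets Manager credentials and reads only from log backups. The construct ID, ARNs, and database name are placeholders, self is assumed to be in scope, and "sqlserver" is the AWS DMS engine name for SQL Server.
from aws_cdk import aws_dms as dms

# A sketch under placeholder values; the role and secret must exist.
sqlserver_source = dms.CfnEndpoint(self, "SqlServerSource",
    endpoint_type="source",
    engine_name="sqlserver",
    database_name="AdventureWorks",
    microsoft_sql_server_settings=dms.CfnEndpoint.MicrosoftSqlServerSettingsProperty(
        secrets_manager_access_role_arn="arn:aws:iam::123456789012:role/dms-secret-access",
        secrets_manager_secret_id="arn:aws:secretsmanager:us-east-1:123456789012:secret:mssql-creds",
        # Read only from log backups to limit active TLOG growth
        read_backup_only=True
    )
)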
MongoDbSettingsProperty
- class CfnEndpoint.MongoDbSettingsProperty(*, auth_mechanism=None, auth_source=None, auth_type=None, database_name=None, docs_to_investigate=None, extract_doc_id=None, nesting_level=None, password=None, port=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, server_name=None, username=None)
Bases:
object
Provides information that defines a MongoDB endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Endpoint configuration settings when using MongoDB as a source for AWS DMS in the AWS Database Migration Service User Guide.
- Parameters:
- auth_mechanism (Optional[str]) – The authentication mechanism you use to access the MongoDB source endpoint. For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
- auth_source (Optional[str]) – The MongoDB database name. This setting isn't used when AuthType is set to "no". The default is "admin".
- auth_type (Optional[str]) – The authentication type you use to access the MongoDB source endpoint. When set to "no", user name and password parameters are not used and can be empty.
- database_name (Optional[str]) – The database name on the MongoDB source endpoint.
- docs_to_investigate (Optional[str]) – Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one". Must be a positive value greater than 0. Default value is 1000.
- extract_doc_id (Optional[str]) – Specifies the document ID. Use this setting when NestingLevel is set to "none". Default value is "false".
- nesting_level (Optional[str]) – Specifies either document or table mode. Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
- password (Optional[str]) – The password for the user account you use to access the MongoDB source endpoint.
- port (Union[int, float, None]) – The port value for the MongoDB source endpoint.
- secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MongoDB endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MongoDB endpoint connection details.
- server_name (Optional[str]) – The name of the server on the MongoDB source endpoint.
- username (Optional[str]) – The user name you use to access the MongoDB source endpoint.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

mongo_db_settings_property = dms.CfnEndpoint.MongoDbSettingsProperty(
    auth_mechanism="authMechanism",
    auth_source="authSource",
    auth_type="authType",
    database_name="databaseName",
    docs_to_investigate="docsToInvestigate",
    extract_doc_id="extractDocId",
    nesting_level="nestingLevel",
    password="password",
    port=123,
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    server_name="serverName",
    username="username"
)
Attributes
- auth_mechanism
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
- auth_source
The MongoDB database name. This setting isn't used when AuthType is set to "no".
The default is "admin".
- auth_type
The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
- database_name
The database name on the MongoDB source endpoint.
- docs_to_investigate
Indicates the number of documents to preview to determine the document organization.
Use this setting when NestingLevel is set to "one".
Must be a positive value greater than 0. Default value is 1000.
- extract_doc_id
Specifies the document ID. Use this setting when NestingLevel is set to "none".
Default value is "false".
- nesting_level
Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
- password
The password for the user account you use to access the MongoDB source endpoint.
- port
The port value for the MongoDB source endpoint.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MongoDB endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MongoDB endpoint connection details.
- server_name
The name of the server on the MongoDB source endpoint.
- username
The user name you use to access the MongoDB source endpoint.
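The nesting-level and authentication settings above combine as in the following minimal, hypothetical sketch of a MongoDB source endpoint in table mode. Host, database, and credential values are placeholders, and self is assumed to be in scope; "mongodb" is the AWS DMS engine name for MongoDB, and "password"/"no" are the documented AuthType values.
from aws_cdk import aws_dms as dms

# A sketch under placeholder values, not a definitive configuration.
mongo_source = dms.CfnEndpoint(self, "MongoSource",
    endpoint_type="source",
    engine_name="mongodb",
    mongo_db_settings=dms.CfnEndpoint.MongoDbSettingsProperty(
        server_name="mongo.example.internal",
        port=27017,
        database_name="appdb",
        auth_type="password",
        auth_source="admin",
        username="dms_user",
        password="placeholder-password",
        nesting_level="one",          # table mode
        docs_to_investigate="1000"    # passed as a string, per the property type
    )
)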
MySqlSettingsProperty
- class CfnEndpoint.MySqlSettingsProperty(*, after_connect_script=None, clean_source_metadata_on_mismatch=None, events_poll_interval=None, max_file_size=None, parallel_load_threads=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, server_timezone=None, target_db_type=None)
Bases:
object
Provides information that defines a MySQL endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.
- Parameters:
- after_connect_script (Optional[str]) – Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails. For this parameter, provide the code of the script itself, not the name of a file containing the script.
- clean_source_metadata_on_mismatch (Union[bool, IResolvable, None]) – Cleans and recreates table metadata information on the replication instance when a mismatch occurs. For example, in a situation where running an alter DDL on the table could result in different information about the table cached in the replication instance.
- events_poll_interval (Union[int, float, None]) – Specifies how often to check the binary log for new changes/events when the database is idle. The default is five seconds. Example: eventsPollInterval=5; In the example, AWS DMS checks for changes in the binary logs every five seconds.
- max_file_size (Union[int, float, None]) – Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database. Example: maxFileSize=512
- parallel_load_threads (Union[int, float, None]) – Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one. Example: parallelLoadThreads=1
- secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MySQL endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
- server_timezone (Optional[str]) – Specifies the time zone for the source MySQL database. Example: serverTimezone=US/Pacific; Note: Do not enclose time zones in single quotes.
- target_db_type (Optional[str]) – Specifies where to migrate source tables on the target, either to a single database or multiple databases. If you specify SPECIFIC_DATABASE, specify the database name using the DatabaseName parameter of the Endpoint object. Example: targetDbType=MULTIPLE_DATABASES
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

my_sql_settings_property = dms.CfnEndpoint.MySqlSettingsProperty(
    after_connect_script="afterConnectScript",
    clean_source_metadata_on_mismatch=False,
    events_poll_interval=123,
    max_file_size=123,
    parallel_load_threads=123,
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    server_timezone="serverTimezone",
    target_db_type="targetDbType"
)
Attributes
- after_connect_script
Specifies a script to run immediately after AWS DMS connects to the endpoint.
The migration task continues running regardless of whether the SQL statement succeeds or fails.
For this parameter, provide the code of the script itself, not the name of a file containing the script.
- clean_source_metadata_on_mismatch
Cleans and recreates table metadata information on the replication instance when a mismatch occurs.
For example, in a situation where running an alter DDL on the table could result in different information about the table cached in the replication instance.
- events_poll_interval
Specifies how often to check the binary log for new changes/events when the database is idle.
The default is five seconds.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
- max_file_size
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
- parallel_load_threads
Improves performance when loading data into the MySQL-compatible target database.
Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread. The default is one.
Example: parallelLoadThreads=1
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the MySQL endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the MySQL endpoint connection details.
- server_timezone
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
- target_db_type
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
If you specify SPECIFIC_DATABASE, specify the database name using the DatabaseName parameter of the Endpoint object.
Example: targetDbType=MULTIPLE_DATABASES
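The target_db_type behavior above is clearest in a full endpoint definition. Below is a minimal, hypothetical sketch of a MySQL-compatible target that routes all source tables into the single database named by database_name; server, credentials, and construct ID are placeholders, and self is assumed to be in scope.
from aws_cdk import aws_dms as dms

# A sketch under placeholder values, not a definitive configuration.
mysql_target = dms.CfnEndpoint(self, "MySqlTarget",
    endpoint_type="target",
    engine_name="mysql",
    server_name="mysql.example.internal",
    port=3306,
    username="dms_user",
    password="placeholder-password",
    database_name="replica_db",
    my_sql_settings=dms.CfnEndpoint.MySqlSettingsProperty(
        # SPECIFIC_DATABASE pairs with the DatabaseName set above
        target_db_type="SPECIFIC_DATABASE",
        parallel_load_threads=2,
        max_file_size=512
    )
)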
NeptuneSettingsProperty
- class CfnEndpoint.NeptuneSettingsProperty(*, error_retry_duration=None, iam_auth_enabled=None, max_file_size=None, max_retry_count=None, s3_bucket_folder=None, s3_bucket_name=None, service_access_role_arn=None)
Bases:
object
Provides information that defines an Amazon Neptune endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about the available settings, see Specifying endpoint settings for Amazon Neptune as a target in the AWS Database Migration Service User Guide.
- Parameters:
- error_retry_duration (Union[int, float, None]) – The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
- iam_auth_enabled (Union[bool, IResolvable, None]) – If you want IAM authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
- max_file_size (Union[int, float, None]) – The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
- max_retry_count (Union[int, float, None]) – The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
- s3_bucket_folder (Optional[str]) – A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
- s3_bucket_name (Optional[str]) – The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
- service_access_role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. The role must allow the iam:PassRole action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

neptune_settings_property = dms.CfnEndpoint.NeptuneSettingsProperty(
    error_retry_duration=123,
    iam_auth_enabled=False,
    max_file_size=123,
    max_retry_count=123,
    s3_bucket_folder="s3BucketFolder",
    s3_bucket_name="s3BucketName",
    service_access_role_arn="serviceAccessRoleArn"
)
Attributes
- error_retry_duration
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error.
The default is 250.
- iam_auth_enabled
If you want IAM authorization enabled for this endpoint, set this parameter to true.
Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
- max_file_size
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
- max_retry_count
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error.
The default is 5.
- s3_bucket_folder
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
- s3_bucket_name
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
- service_access_role_arn
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint.
The role must allow the iam:PassRole action. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
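The S3 staging flow described above looks like the following minimal, hypothetical sketch of a Neptune target endpoint. The cluster host, bucket, folder, and role ARN are placeholders, self is assumed to be in scope, and "neptune" is the AWS DMS engine name for Neptune targets.
from aws_cdk import aws_dms as dms

# A sketch under placeholder values; the bucket and role must exist.
neptune_target = dms.CfnEndpoint(self, "NeptuneTarget",
    endpoint_type="target",
    engine_name="neptune",
    server_name="my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com",
    port=8182,
    neptune_settings=dms.CfnEndpoint.NeptuneSettingsProperty(
        # Graph data is staged as .csv in S3, then bulk-loaded
        s3_bucket_name="dms-neptune-staging",
        s3_bucket_folder="graph-data/",
        service_access_role_arn="arn:aws:iam::123456789012:role/dms-neptune-role",
        iam_auth_enabled=True,
        max_file_size=1048576,
        max_retry_count=5
    )
)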
OracleSettingsProperty
- class CfnEndpoint.OracleSettingsProperty(*, access_alternate_directly=None, additional_archived_log_dest_id=None, add_supplemental_logging=None, allow_select_nested_tables=None, archived_log_dest_id=None, archived_logs_only=None, asm_password=None, asm_server=None, asm_user=None, char_length_semantics=None, direct_path_no_log=None, direct_path_parallel_load=None, enable_homogenous_tablespace=None, extra_archived_log_dest_ids=None, fail_tasks_on_lob_truncation=None, number_datatype_scale=None, oracle_path_prefix=None, parallel_asm_read_threads=None, read_ahead_blocks=None, read_table_space_name=None, replace_path_prefix=None, retry_interval=None, secrets_manager_access_role_arn=None, secrets_manager_oracle_asm_access_role_arn=None, secrets_manager_oracle_asm_secret_id=None, secrets_manager_secret_id=None, security_db_encryption=None, security_db_encryption_name=None, spatial_data_option_to_geo_json_function_name=None, standby_delay_time=None, use_alternate_folder_for_online=None, use_b_file=None, use_direct_path_full_load=None, use_logminer_reader=None, use_path_prefix=None)
Bases:
object
Provides information that defines an Oracle endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.
- Parameters:
- access_alternate_directly (Union[bool, IResolvable, None]) – Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
- additional_archived_log_dest_id (Union[int, float, None]) – Set this attribute with ArchivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover. Although AWS DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless necessary. For additional information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
- add_supplemental_logging (Union[bool, IResolvable, None]) – Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task. If you use this option, you still need to enable database-level supplemental logging.
- allow_select_nested_tables (Union[bool, IResolvable, None]) – Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
- archived_log_dest_id (Union[int, float, None]) – Specifies the ID of the destination for the archived redo logs. This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the AdditionalArchivedLogDestId option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
- archived_logs_only (Union[bool, IResolvable, None]) – When this field is set to True, AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.
- asm_password (Optional[str]) – For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- asm_server (Optional[str]) – For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- asm_user (Optional[str]) – For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- char_length_semantics (Optional[str]) – Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR. Otherwise, the character column length is in bytes. Example: charLengthSemantics=CHAR;
- direct_path_no_log (Union[bool, IResolvable, None]) – When set to true, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
- direct_path_parallel_load (Union[bool, IResolvable, None]) – When set to true, this attribute specifies a parallel load when useDirectPathFullLoad is set to Y. This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
- enable_homogenous_tablespace (Union[bool, IResolvable, None]) – Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
- extra_archived_log_dest_ids (Union[IResolvable, Sequence[Union[int, float]], None]) – Specifies the IDs of one or more destinations for one or more archived redo logs. These IDs are the values of the dest_id column in the v$archived_log view. Use this setting with the archivedLogDestId extra connection attribute in a primary-to-single setup or a primary-to-multiple-standby setup. This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, AWS DMS needs information about what destination to get archive redo logs from to read changes. AWS DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single standby setup you might apply the following settings: archivedLogDestId=1; ExtraArchivedLogDestIds=[2] In a primary-to-multiple-standby setup, you might apply the following settings: archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4] Although AWS DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless it's necessary. For more information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
- fail_tasks_on_lob_truncation (Union[bool, IResolvable, None]) – When set to true, this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize. If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
- number_datatype_scale (Union[int, float, None]) – Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10. Example: numberDataTypeScale=12
- oracle_path_prefix (Optional[str]) – Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
- parallel_asm_read_threads (Union[int, float, None]) – Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
- read_ahead_blocks (Union[int, float, None]) – Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
- read_table_space_name (Union[bool, IResolvable, None]) – When set to true, this attribute supports tablespace replication.
- replace_path_prefix (Union[bool, IResolvable, None]) – Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
- retry_interval (Union[int, float, None]) – Specifies the number of seconds that the system waits before resending a query. Example: retryInterval=6;
- secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Oracle endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_oracle_asm_access_role_arn (Optional[str]) – Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the SecretsManagerOracleAsmSecret. This SecretsManagerOracleAsmSecret has the secret value that allows access to the Oracle ASM of the endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerOracleAsmSecretId. Or you can specify clear-text values for AsmUser, AsmPassword, and AsmServerName. You can't specify both. For more information on creating this SecretsManagerOracleAsmSecret, the corresponding SecretsManagerOracleAsmAccessRoleArn, and the SecretsManagerOracleAsmSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
- secrets_manager_oracle_asm_secret_id (Optional[str]) – Required only if your Oracle endpoint uses Advanced Storage Manager (ASM). The full ARN, partial ARN, or display name of the SecretsManagerOracleAsmSecret that contains the Oracle ASM connection details for the Oracle endpoint.
- secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the Oracle endpoint connection details.
- security_db_encryption (Optional[str]) – For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
- security_db_encryption_name (Optional[str]) – For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
- spatial_data_option_to_geo_json_function_name (Optional[str]) – Use this attribute to convert SDO_GEOMETRY to GEOJSON format. By default, DMS calls the SDO2GEOJSON custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDOGEOJSON and set SpatialDataOptionToGeoJsonFunctionName to call it instead.
- standby_delay_time (Union[int, float, None]) – Use this attribute to specify a time in minutes for the delay in standby sync. If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases. In AWS DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
- use_alternate_folder_for_online (Union[bool, IResolvable, None]) – Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
- use_b_file (Union[bool, IResolvable, None]) – Set this attribute to True to capture change data using the Binary Reader utility. Set UseLogminerReader to False to set this attribute to True. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or AWS DMS Binary Reader for CDC.
- use_direct_path_full_load (Union[bool, IResolvable, None]) – Set this attribute to True to have AWS DMS use a direct path full load. Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
- use_logminer_reader (Union[bool, IResolvable, None]) – Set this attribute to True to capture change data using the Oracle LogMiner utility (the default). Set this attribute to False if you want to access the redo logs as a binary file. When you set UseLogminerReader to False, also set UseBfile to True. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or AWS DMS Binary Reader for CDC in the AWS DMS User Guide.
- use_path_prefix (Optional[str]) – Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

oracle_settings_property = dms.CfnEndpoint.OracleSettingsProperty(
    access_alternate_directly=False,
    additional_archived_log_dest_id=123,
    add_supplemental_logging=False,
    allow_select_nested_tables=False,
    archived_log_dest_id=123,
    archived_logs_only=False,
    asm_password="asmPassword",
    asm_server="asmServer",
    asm_user="asmUser",
    char_length_semantics="charLengthSemantics",
    direct_path_no_log=False,
    direct_path_parallel_load=False,
    enable_homogenous_tablespace=False,
    extra_archived_log_dest_ids=[123],
    fail_tasks_on_lob_truncation=False,
    number_datatype_scale=123,
    oracle_path_prefix="oraclePathPrefix",
    parallel_asm_read_threads=123,
    read_ahead_blocks=123,
    read_table_space_name=False,
    replace_path_prefix=False,
    retry_interval=123,
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_oracle_asm_access_role_arn="secretsManagerOracleAsmAccessRoleArn",
    secrets_manager_oracle_asm_secret_id="secretsManagerOracleAsmSecretId",
    secrets_manager_secret_id="secretsManagerSecretId",
    security_db_encryption="securityDbEncryption",
    security_db_encryption_name="securityDbEncryptionName",
    spatial_data_option_to_geo_json_function_name="spatialDataOptionToGeoJsonFunctionName",
    standby_delay_time=123,
    use_alternate_folder_for_online=False,
    use_b_file=False,
    use_direct_path_full_load=False,
    use_logminer_reader=False,
    use_path_prefix="usePathPrefix"
)
Attributes
- access_alternate_directly
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
- add_supplemental_logging
Set this attribute to set up table-level supplemental logging for the Oracle database.
This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
- additional_archived_log_dest_id
Set this attribute with ArchivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
Although AWS DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless necessary. For additional information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
- allow_select_nested_tables
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
- archived_log_dest_id
Specifies the ID of the destination for the archived redo logs.
This value should be the same as a number in the dest_id column of the v$archived_log view. If you work with an additional redo log destination, use the AdditionalArchivedLogDestId option to specify the additional destination ID. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
- archived_logs_only
When this field is set to True, AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Automatic Storage Management (ASM) only, the AWS DMS user account needs to be granted ASM privileges.
- asm_password
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password.
You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- asm_server
For an Oracle source endpoint, your ASM server address.
You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- asm_user
For an Oracle source endpoint, your ASM user name.
You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
- char_length_semantics
Specifies whether the length of a character column is in bytes or in characters.
To indicate that the character column length is in characters, set this attribute to CHAR. Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
- direct_path_no_log
When set to true, this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
- direct_path_parallel_load
When set to true, this attribute specifies a parallel load when useDirectPathFullLoad is set to Y. This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
- enable_homogenous_tablespace
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
- extra_archived_log_dest_ids
Specifies the IDs of one or more destinations for archived redo logs.
These IDs are the values of the dest_id column in the v$archived_log view. Use this setting with the archivedLogDestId extra connection attribute in a primary-to-single-standby setup or a primary-to-multiple-standby setup. This setting is useful in a switchover when you use an Oracle Data Guard database as a source. In this case, AWS DMS needs information about what destination to get archive redo logs from to read changes. AWS DMS needs this because after the switchover the previous primary is a standby instance. For example, in a primary-to-single-standby setup you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2]
In a primary-to-multiple-standby setup, you might apply the following settings.
archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4]
Although AWS DMS supports the use of the Oracle RESETLOGS option to open the database, never use RESETLOGS unless it's necessary. For more information about RESETLOGS, see RMAN Data Repair Concepts in the Oracle Database Backup and Recovery User's Guide.
- fail_tasks_on_lob_truncation
When set to true, this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize.
If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
- number_datatype_scale
Specifies the number scale.
You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example:
numberDataTypeScale=12
- oracle_path_prefix
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source.
This value specifies the default Oracle root used to access the redo logs.
- parallel_asm_read_threads
Set this attribute to change the number of threads that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM).
You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
- read_ahead_blocks
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a change data capture (CDC) load using Oracle Automatic Storage Management (ASM).
You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
- read_table_space_name
When set to true, this attribute supports tablespace replication.
- replace_path_prefix
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source.
This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
- retry_interval
Specifies the number of seconds that the system waits before resending a query.
Example:
retryInterval=6;
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Oracle endpoint.
You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_oracle_asm_access_role_arn
Required only if your Oracle endpoint uses Advanced Storage Manager (ASM).
The full ARN of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the SecretsManagerOracleAsmSecret. This SecretsManagerOracleAsmSecret has the secret value that allows access to the Oracle ASM of the endpoint.
You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerOracleAsmSecretId, or clear-text values for AsmUser, AsmPassword, and AsmServerName. You can't specify both. For more information on creating this SecretsManagerOracleAsmSecret, the corresponding SecretsManagerOracleAsmAccessRoleArn, and the SecretsManagerOracleAsmSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_oracle_asm_secret_id
Required only if your Oracle endpoint uses Advanced Storage Manager (ASM).
The full ARN, partial ARN, or display name of the SecretsManagerOracleAsmSecret that contains the Oracle ASM connection details for the Oracle endpoint.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the Oracle endpoint connection details.
- security_db_encryption
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader.
It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. This SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
- security_db_encryption_name
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE.
The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
- spatial_data_option_to_geo_json_function_name
Use this attribute to convert SDO_GEOMETRY to GEOJSON format.
By default, DMS calls the SDO2GEOJSON custom function if present and accessible. Or you can create your own custom function that mimics the operation of SDO2GEOJSON and set SpatialDataOptionToGeoJsonFunctionName to call it instead.
- standby_delay_time
Use this attribute to specify a time in minutes for the delay in standby sync.
If the source is an Oracle Active Data Guard standby database, use this attribute to specify the time lag between primary and standby databases.
In AWS DMS, you can create an Oracle CDC task that uses an Active Data Guard standby instance as a source for replicating ongoing changes. Doing this eliminates the need to connect to an active database that might be in production.
- use_alternate_folder_for_online
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
- use_b_file
Set this attribute to True to capture change data using the Binary Reader utility.
Set UseLogminerReader to False to set this attribute to True. To use Binary Reader with Amazon RDS for Oracle as the source, you set additional attributes. For more information about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or AWS DMS Binary Reader for CDC.
- use_direct_path_full_load
Set this attribute to True to have AWS DMS use a direct path full load.
Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
- use_logminer_reader
Set this attribute to True to capture change data using the Oracle LogMiner utility (the default).
Set this attribute to False if you want to access the redo logs as a binary file. When you set UseLogminerReader to False, also set UseBfile to True. For more information on this setting and using Oracle ASM, see Using Oracle LogMiner or AWS DMS Binary Reader for CDC in the AWS DMS User Guide.
- use_path_prefix
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source.
This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
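Several of the Binary Reader attributes above only work in combination. The sketch below is not part of the generated reference example; it shows one plausible Binary Reader CDC configuration for an Amazon RDS for Oracle source. Every ARN, path, and secret name is a placeholder, and the actual prefix values depend on your RDS setup.
# A minimal sketch, assuming an RDS for Oracle source with Secrets Manager
# credentials. All ARNs, paths, and secret names below are placeholders.
from aws_cdk import aws_dms as dms

oracle_binary_reader = dms.CfnEndpoint.OracleSettingsProperty(
    use_logminer_reader=False,       # turn off LogMiner (the default reader) ...
    use_b_file=True,                 # ... so Binary Reader handles CDC
    use_alternate_folder_for_online=True,
    replace_path_prefix=True,
    oracle_path_prefix="/rdsdbdata/db/ORCL_A/",  # placeholder default Oracle root
    use_path_prefix="/rdsdbdata/log/",           # placeholder replacement prefix
    # Secrets Manager credentials instead of clear-text UserName/Password/ServerName/Port.
    secrets_manager_access_role_arn="arn:aws:iam::111122223333:role/dms-secrets-role",
    secrets_manager_secret_id="prod/dms/oracle-source"
)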
PostgreSqlSettingsProperty
- class CfnEndpoint.PostgreSqlSettingsProperty(*, after_connect_script=None, babelfish_database_name=None, capture_ddls=None, database_mode=None, ddl_artifacts_schema=None, execute_timeout=None, fail_tasks_on_lob_truncation=None, heartbeat_enable=None, heartbeat_frequency=None, heartbeat_schema=None, map_boolean_as_boolean=None, max_file_size=None, plugin_name=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, slot_name=None)
Bases:
object
Provides information that defines a PostgreSQL endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
after_connect_script (Optional[str]) – For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data. Example: afterConnectScript=SET session_replication_role='replica'
babelfish_database_name (Optional[str]) – The Babelfish for Aurora PostgreSQL database name for the endpoint.
capture_ddls (Union[bool, IResolvable, None]) – To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts. If this value is set to True, you don't have to create tables or triggers on the source database.
database_mode (Optional[str]) – Specifies the default behavior of the replication's handling of PostgreSQL-compatible endpoints that require some additional configuration, such as Babelfish endpoints.
ddl_artifacts_schema (Optional[str]) – The schema in which the operational DDL database artifacts are created. The default value is public. Example: ddlArtifactsSchema=xyzddlschema;
execute_timeout (Union[int, float, None]) – Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds. Example: executeTimeout=100;
fail_tasks_on_lob_truncation (Union[bool, IResolvable, None]) – When set to true, this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize. The default value is false. If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
heartbeat_enable (Union[bool, IResolvable, None]) – The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps restart_lsn moving and prevents storage full scenarios. The default value is false.
heartbeat_frequency (Union[int, float, None]) – Sets the WAL heartbeat frequency (in minutes). The default value is 5 minutes.
heartbeat_schema (Optional[str]) – Sets the schema in which the heartbeat artifacts are created. The default value is public.
map_boolean_as_boolean (Union[bool, IResolvable, None]) – When true, lets PostgreSQL migrate the boolean type as boolean. By default, PostgreSQL migrates booleans as varchar(5). You must set this setting on both the source and target endpoints for it to take effect. The default value is false.
max_file_size (Union[int, float, None]) – Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL. The default value is 32,768 KB (32 MB). Example: maxFileSize=512
plugin_name (Optional[str]) – Specifies the plugin to use to create a replication slot. The default value is pglogical.
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the PostgreSQL endpoint. You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the PostgreSQL endpoint connection details.
slot_name (Optional[str]) – Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance. When used with the CdcStartPosition request parameter for the AWS DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting of CdcStartPosition. If the specified slot doesn't exist or the task doesn't have a valid CdcStartPosition setting, DMS raises an error. For more information about setting the CdcStartPosition request parameter, see Determining a CDC native start point in the AWS Database Migration Service User Guide. For more information about using CdcStartPosition, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

postgre_sql_settings_property = dms.CfnEndpoint.PostgreSqlSettingsProperty(
    after_connect_script="afterConnectScript",
    babelfish_database_name="babelfishDatabaseName",
    capture_ddls=False,
    database_mode="databaseMode",
    ddl_artifacts_schema="ddlArtifactsSchema",
    execute_timeout=123,
    fail_tasks_on_lob_truncation=False,
    heartbeat_enable=False,
    heartbeat_frequency=123,
    heartbeat_schema="heartbeatSchema",
    map_boolean_as_boolean=False,
    max_file_size=123,
    plugin_name="pluginName",
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    slot_name="slotName"
)
Attributes
- after_connect_script
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example:
afterConnectScript=SET session_replication_role='replica'
- babelfish_database_name
The Babelfish for Aurora PostgreSQL database name for the endpoint.
- capture_ddls
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts.
You can later remove these artifacts.
If this value is set to True, you don't have to create tables or triggers on the source database.
- database_mode
Specifies the default behavior of the replication's handling of PostgreSQL-compatible endpoints that require some additional configuration, such as Babelfish endpoints.
- ddl_artifacts_schema
The schema in which the operational DDL database artifacts are created.
The default value is public.
Example: ddlArtifactsSchema=xyzddlschema;
- execute_timeout
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example:
executeTimeout=100;
- fail_tasks_on_lob_truncation
When set to true, this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize.
The default value is false.
If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
- heartbeat_enable
The write-ahead log (WAL) heartbeat feature mimics a dummy transaction.
By doing this, it prevents idle logical replication slots from holding onto old WAL logs, which can result in storage full situations on the source. This heartbeat keeps restart_lsn moving and prevents storage full scenarios.
The default value is false.
- heartbeat_frequency
Sets the WAL heartbeat frequency (in minutes).
The default value is 5 minutes.
- heartbeat_schema
Sets the schema in which the heartbeat artifacts are created.
The default value is public.
- map_boolean_as_boolean
When true, lets PostgreSQL migrate the boolean type as boolean.
By default, PostgreSQL migrates booleans as varchar(5). You must set this setting on both the source and target endpoints for it to take effect.
The default value is false.
- max_file_size
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
The default value is 32,768 KB (32 MB).
Example:
maxFileSize=512
- plugin_name
Specifies the plugin to use to create a replication slot.
The default value is pglogical.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the PostgreSQL endpoint.
You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the PostgreSQL endpoint connection details.
- slot_name
Sets the name of a previously created logical replication slot for a change data capture (CDC) load of the PostgreSQL source instance.
When used with the CdcStartPosition request parameter for the AWS DMS API, this attribute also makes it possible to use native CDC start points. DMS verifies that the specified logical replication slot exists before starting the CDC load task. It also verifies that the task was created with a valid setting of CdcStartPosition. If the specified slot doesn't exist or the task doesn't have a valid CdcStartPosition setting, DMS raises an error.
For more information about setting the CdcStartPosition request parameter, see Determining a CDC native start point in the AWS Database Migration Service User Guide. For more information about using CdcStartPosition, see CreateReplicationTask, StartReplicationTask, and ModifyReplicationTask.
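As a rough sketch of how the CDC-related settings above combine (the slot and schema names are hypothetical), the following enables the WAL heartbeat and reuses a pre-created logical replication slot:
# A minimal sketch, assuming a PostgreSQL source configured for CDC.
# The slot name "dms_slot" is a placeholder for a slot you created earlier.
from aws_cdk import aws_dms as dms

postgres_cdc = dms.CfnEndpoint.PostgreSqlSettingsProperty(
    heartbeat_enable=True,           # keep restart_lsn moving on idle slots
    heartbeat_frequency=5,           # minutes (the documented default)
    heartbeat_schema="public",
    slot_name="dms_slot",            # previously created logical replication slot
    plugin_name="pglogical",         # the documented default plugin
    fail_tasks_on_lob_truncation=True
)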
RedisSettingsProperty
- class CfnEndpoint.RedisSettingsProperty(*, auth_password=None, auth_type=None, auth_user_name=None, port=None, server_name=None, ssl_ca_certificate_arn=None, ssl_security_protocol=None)
Bases:
object
Provides information that defines a Redis target endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Specifying endpoint settings for Redis as a target in the AWS Database Migration Service User Guide .
- Parameters:
auth_password (Optional[str]) – The password provided with the auth-role and auth-token options of the AuthType setting for a Redis target endpoint.
auth_type (Optional[str]) – The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
auth_user_name (Optional[str]) – The user name provided with the auth-role option of the AuthType setting for a Redis target endpoint.
port (Union[int, float, None]) – Transmission Control Protocol (TCP) port for the endpoint.
server_name (Optional[str]) – Fully qualified domain name of the endpoint.
ssl_ca_certificate_arn (Optional[str]) – The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
ssl_security_protocol (Optional[str]) – The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include plaintext and ssl-encryption. The default is ssl-encryption. The ssl-encryption option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using the SslCaCertificateArn setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

redis_settings_property = dms.CfnEndpoint.RedisSettingsProperty(
    auth_password="authPassword",
    auth_type="authType",
    auth_user_name="authUserName",
    port=123,
    server_name="serverName",
    ssl_ca_certificate_arn="sslCaCertificateArn",
    ssl_security_protocol="sslSecurityProtocol"
)
Attributes
- auth_password
The password provided with the auth-role and auth-token options of the AuthType setting for a Redis target endpoint.
- auth_type
The type of authentication to perform when connecting to a Redis target.
Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.
- auth_user_name
The user name provided with the auth-role option of the AuthType setting for a Redis target endpoint.
- port
Transmission Control Protocol (TCP) port for the endpoint.
- server_name
Fully qualified domain name of the endpoint.
- ssl_ca_certificate_arn
The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.
- ssl_security_protocol
The connection to a Redis target endpoint using Transport Layer Security (TLS).
Valid values include plaintext and ssl-encryption. The default is ssl-encryption. The ssl-encryption option makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using the SslCaCertificateArn setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA.
The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.
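To illustrate the AuthType dependencies described above, here is a minimal sketch (the server name and credentials are placeholders): auth-role requires both auth_user_name and auth_password, while auth-token needs only auth_password.
# A minimal sketch, assuming a TLS-enabled Redis target with role-based auth.
from aws_cdk import aws_dms as dms

redis_target = dms.CfnEndpoint.RedisSettingsProperty(
    server_name="redis.example.internal",   # placeholder FQDN
    port=6379,
    auth_type="auth-role",                  # needs both AuthUserName and AuthPassword
    auth_user_name="dms-user",              # placeholder user
    auth_password="example-password",       # placeholder password
    ssl_security_protocol="ssl-encryption"  # the default; the Amazon root CA is used
                                            # when ssl_ca_certificate_arn is omitted
)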
RedshiftSettingsProperty
- class CfnEndpoint.RedshiftSettingsProperty(*, accept_any_date=None, after_connect_script=None, bucket_folder=None, bucket_name=None, case_sensitive_names=None, comp_update=None, connection_timeout=None, date_format=None, empty_as_null=None, encryption_mode=None, explicit_ids=None, file_transfer_upload_streams=None, load_timeout=None, map_boolean_as_boolean=None, max_file_size=None, remove_quotes=None, replace_chars=None, replace_invalid_chars=None, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None, server_side_encryption_kms_key_id=None, service_access_role_arn=None, time_format=None, trim_blanks=None, truncate_columns=None, write_buffer_size=None)
Bases:
object
Provides information that defines an Amazon Redshift endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Extra connection attributes when using Amazon Redshift as a target for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
accept_any_date (Union[bool, IResolvable, None]) – A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
after_connect_script (Optional[str]) – Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
bucket_folder (Optional[str]) – An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
bucket_name (Optional[str]) – The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
case_sensitive_names (Union[bool, IResolvable, None]) – If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
comp_update (Union[bool, IResolvable, None]) – If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
connection_timeout (Union[int, float, None]) – A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
date_format (Optional[str]) – The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
empty_as_null (Union[bool, IResolvable, None]) – A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
encryption_mode (Optional[str]) – The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3, but you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
explicit_ids (Union[bool, IResolvable, None]) – This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
file_transfer_upload_streams (Union[int, float, None]) – The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
load_timeout (Union[int, float, None]) – The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
map_boolean_as_boolean (Union[bool, IResolvable, None]) – When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
max_file_size (Union[int, float, None]) – The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576 KB (1 GB).
remove_quotes (Union[bool, IResolvable, None]) – A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
replace_chars (Optional[str]) – A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
replace_invalid_chars (Optional[str]) – A list of characters that you want to replace. Use with ReplaceChars.
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
server_side_encryption_kms_key_id (Optional[str]) – The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
service_access_role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
time_format (Optional[str]) – The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
trim_blanks (Union[bool, IResolvable, None]) – A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
truncate_columns (Union[bool, IResolvable, None]) – A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
write_buffer_size (Union[int, float, None]) – The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000 KB).
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

redshift_settings_property = dms.CfnEndpoint.RedshiftSettingsProperty(
    accept_any_date=False,
    after_connect_script="afterConnectScript",
    bucket_folder="bucketFolder",
    bucket_name="bucketName",
    case_sensitive_names=False,
    comp_update=False,
    connection_timeout=123,
    date_format="dateFormat",
    empty_as_null=False,
    encryption_mode="encryptionMode",
    explicit_ids=False,
    file_transfer_upload_streams=123,
    load_timeout=123,
    map_boolean_as_boolean=False,
    max_file_size=123,
    remove_quotes=False,
    replace_chars="replaceChars",
    replace_invalid_chars="replaceInvalidChars",
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId",
    server_side_encryption_kms_key_id="serverSideEncryptionKmsKeyId",
    service_access_role_arn="serviceAccessRoleArn",
    time_format="timeFormat",
    trim_blanks=False,
    truncate_columns=False,
    write_buffer_size=123
)
Attributes
- accept_any_date
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error.
You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- after_connect_script
Code to run after connecting.
This parameter should contain the code itself, not the name of a file containing the code.
- bucket_folder
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift
COPY
command to upload the .csv files to the target table. The files are deleted once theCOPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide .For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
- bucket_name
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- case_sensitive_names
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true.
The default is false.
- comp_update
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty.
This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
- connection_timeout
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- date_format
The date format that you are using.
Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto.
- empty_as_null
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL.
A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
- encryption_mode
The type of server-side encryption that you want to use for your data.
This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3, but you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
- explicit_ids
This setting is only valid for a full-load migration task.
Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
- file_transfer_upload_streams
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
- load_timeout
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- map_boolean_as_boolean
When true, lets Redshift migrate the boolean type as boolean.
By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
- max_file_size
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
- remove_quotes
A value that specifies to remove surrounding quotation marks from strings in the incoming data.
All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
- replace_chars
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead.
The default is "?".
- replace_invalid_chars
A list of characters that you want to replace.
Use with ReplaceChars.
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the Amazon Redshift endpoint.
You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- server_side_encryption_kms_key_id
The AWS KMS key ID.
If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
- service_access_role_arn
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
The role must allow the iam:PassRole action.
- time_format
The time format that you want to use.
Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto.
- trim_blanks
A value that specifies to remove the trailing white space characters from a VARCHAR string.
This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
- truncate_columns
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column.
This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
- write_buffer_size
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
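Tying the staging and encryption attributes above together, here is a sketch of a Redshift target that stages .csv files in an intermediate bucket and encrypts them with SSE_KMS; all ARNs, bucket names, and secret names are placeholders.
# A minimal sketch, assuming SSE_KMS encryption and Secrets Manager credentials.
from aws_cdk import aws_dms as dms

redshift_target = dms.CfnEndpoint.RedshiftSettingsProperty(
    bucket_name="example-dms-staging",       # placeholder intermediate S3 bucket
    service_access_role_arn="arn:aws:iam::111122223333:role/dms-redshift-role",
    encryption_mode="SSE_KMS",
    server_side_encryption_kms_key_id="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder
    secrets_manager_access_role_arn="arn:aws:iam::111122223333:role/dms-secrets-role",
    secrets_manager_secret_id="prod/dms/redshift-target",
    date_format="auto",                      # let Redshift recognize most date strings
    time_format="auto"
)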
S3SettingsProperty
- class CfnEndpoint.S3SettingsProperty(*, add_column_name=None, add_trailing_padding_character=None, bucket_folder=None, bucket_name=None, canned_acl_for_objects=None, cdc_inserts_and_updates=None, cdc_inserts_only=None, cdc_max_batch_interval=None, cdc_min_file_size=None, cdc_path=None, compression_type=None, csv_delimiter=None, csv_no_sup_value=None, csv_null_value=None, csv_row_delimiter=None, data_format=None, data_page_size=None, date_partition_delimiter=None, date_partition_enabled=None, date_partition_sequence=None, date_partition_timezone=None, dict_page_size_limit=None, enable_statistics=None, encoding_type=None, encryption_mode=None, expected_bucket_owner=None, external_table_definition=None, glue_catalog_generation=None, ignore_header_rows=None, include_op_for_full_load=None, max_file_size=None, parquet_timestamp_in_millisecond=None, parquet_version=None, preserve_transactions=None, rfc4180=None, row_group_length=None, server_side_encryption_kms_key_id=None, service_access_role_arn=None, timestamp_column_name=None, use_csv_no_sup_value=None, use_task_start_time_for_full_load_timestamp=None)
Bases:
object
Provides information that defines an Amazon S3 endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about the available settings, see Extra connection attributes when using Amazon S3 as a source for AWS DMS and Extra connection attributes when using Amazon S3 as a target for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
add_column_name (Union[bool, IResolvable, None]) – An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.
add_trailing_padding_character (Union[bool, IResolvable, None]) – Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
bucket_folder (Optional[str]) – An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, the path used is schema_name/table_name/.
bucket_name (Optional[str]) – The name of the S3 bucket.
canned_acl_for_objects (Optional[str]) – A value that enables AWS DMS to specify a predefined (canned) access control list (ACL) for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
cdc_inserts_and_updates (Union[bool, IResolvable, None]) – A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file. For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdc_inserts_only (Union[bool, IResolvable, None]) – A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target. If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
cdc_max_batch_interval (Union[int, float, None]) – Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within an AWS DMS CloudFormation template. The default value is 60 seconds.
cdc_min_file_size (Union[int, float, None]) – Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within an AWS DMS CloudFormation template. The default value is 32 MB.
cdc_path (Optional[str]) – Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, AWS DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactions) to true, AWS DMS verifies that you have set this parameter to a folder path on your S3 target where AWS DMS can save the transaction order for the CDC load. AWS DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder) and BucketName (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName). For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, AWS DMS creates the CDC folder path following: MyTargetBucket/MyChangedData. If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, AWS DMS creates the CDC folder path following: MyTargetBucket/MyTargetData/MyChangedData. For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target. This setting is supported in AWS DMS versions 3.4.2 and later.
compression_type (Optional[str]) – An optional parameter. When set to GZIP it enables the service to compress the target files. To allow the service to write the target files uncompressed, either set this parameter to NONE (the default) or don't specify the parameter at all. This parameter applies to both .csv and .parquet file formats.
csv_delimiter (Optional[str]) – The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
csv_no_sup_value (Optional[str]) – This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValue) is set to true, specify a string value that you want AWS DMS to use for all columns not included in the supplemental log. If you do not specify a string value, AWS DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in AWS DMS versions 3.4.1 and later.
csv_null_value (Optional[str]) – An optional parameter that specifies how AWS DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), AWS DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.
csv_row_delimiter (Optional[str]) – The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n).
data_format (Optional[str]) – The format of the data that you want to use for output. You can choose one of the following:
- csv
: This is a row-based file format with comma-separated values (.csv). -parquet
: Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.data_page_size (
Union
[int
,float
,None
]) – The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.date_partition_delimiter (
Optional
[str
]) – Specifies a date separating delimiter to use during folder partitioning. The default value isSLASH
. Use this parameter whenDatePartitionedEnabled
is set totrue
.date_partition_enabled (
Union
[bool
,IResolvable
,None
]) – When set totrue
, this parameter partitions S3 bucket folders based on transaction commit dates. The default value isfalse
. For more information about date-based folder partitioning, see Using date-based folder partitioning .date_partition_sequence (
Optional
[str
]) – Identifies the sequence of the date format to use during folder partitioning. The default value isYYYYMMDD
. Use this parameter whenDatePartitionedEnabled
is set totrue
.date_partition_timezone (
Optional
[str
]) – When creating an S3 target endpoint, setDatePartitionTimezone
to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a change data capture (CDC) file name is generated. The time zone format is Area/Location. Use this parameter whenDatePartitionedEnabled
is set totrue
, as shown in the following example.s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":" *Asia/Seoul* ", "BucketName": "dms-nattarat-test"}'
dict_page_size_limit (
Union
[int
,float
,None
]) – The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type ofPLAIN
. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts toPLAIN
encoding. This size is used for .parquet file format only.enable_statistics (
Union
[bool
,IResolvable
,None
]) – A value that enables statistics for Parquet pages and row groups. Choosetrue
to enable statistics,false
to disable. Statistics includeNULL
,DISTINCT
,MAX
, andMIN
values. This parameter defaults totrue
. This value is used for .parquet file format only.encoding_type (
Optional
[str
]) – The type of encoding that you’re using:. -RLE_DICTIONARY
uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default. -PLAIN
doesn’t use encoding at all. Values are stored as they are. -PLAIN_DICTIONARY
builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.encryption_mode (
Optional
[str
]) – The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose eitherSSE_S3
(the default) orSSE_KMS
. .. epigraph:: For theModifyEndpoint
operation, you can change the existing value of theEncryptionMode
parameter fromSSE_KMS
toSSE_S3
. But you can’t change the existing value fromSSE_S3
toSSE_KMS
. To useSSE_S3
, you need an IAM role with permission to allow"arn:aws:s3:::dms-*"
to use the following actions: -s3:CreateBucket
-s3:ListBucket
-s3:DeleteBucket
-s3:GetBucketLocation
-s3:GetObject
-s3:PutObject
-s3:DeleteObject
-s3:GetObjectVersion
-s3:GetBucketPolicy
-s3:PutBucketPolicy
-s3:DeleteBucketPolicy
expected_bucket_owner (
Optional
[str
]) – To specify a bucket owner and prevent sniping, you can use theExpectedBucketOwner
endpoint setting. Example:--s3-settings='{"ExpectedBucketOwner": " *AWS_Account_ID* "}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.external_table_definition (
Optional
[str
]) – The external table definition. Conditional: IfS3
is used as a source thenExternalTableDefinition
is required.glue_catalog_generation (
Union
[bool
,IResolvable
,None
]) – When true, allows AWS Glue to catalog your S3 bucket. Creating an AWS Glue catalog lets you use Athena to query your data.ignore_header_rows (
Union
[int
,float
,None
]) – When this value is set to 1, AWS DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature. The default is 0.include_op_for_full_load (
Union
[bool
,IResolvable
,None
]) –A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database. .. epigraph:: AWS DMS supports the
IncludeOpForFullLoad
parameter in versions 3.1.4 and later. For full load, records can only be inserted. By default (thefalse
setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. IfIncludeOpForFullLoad
is set totrue
ory
, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load. .. epigraph:: This setting works together with theCdcInsertsOnly
and theCdcInsertsAndUpdates
parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide .max_file_size (
Union
[int
,float
,None
]) – A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.parquet_timestamp_in_millisecond (
Union
[bool
,IResolvable
,None
]) – A value that specifies the precision of anyTIMESTAMP
column values that are written to an Amazon S3 object file in .parquet format. .. epigraph:: AWS DMS supports theParquetTimestampInMillisecond
parameter in versions 3.1.4 and later. WhenParquetTimestampInMillisecond
is set totrue
ory
, AWS DMS writes allTIMESTAMP
columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision. Currently, Amazon Athena and AWS Glue can handle only millisecond precision forTIMESTAMP
values. Set this parameter totrue
for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue . .. epigraph:: AWS DMS writes anyTIMESTAMP
column values written to an S3 file in .csv format with microsecond precision. SettingParquetTimestampInMillisecond
has no effect on the string format of the timestamp column value that is inserted by setting theTimestampColumnName
parameter.parquet_version (
Optional
[str
]) – The version of the Apache Parquet format that you want to use:parquet_1_0
(the default) orparquet_2_0
.preserve_transactions (
Union
[bool
,IResolvable
,None
]) –If this setting is set to
true
, AWS DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by`CdcPath
<https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CdcPath>`_ . For more information, see Capturing data changes (CDC) including transaction order on the S3 target . .. epigraph:: This setting is supported in AWS DMS versions 3.4.2 and later.rfc4180 (
Union
[bool
,IResolvable
,None
]) – For an S3 source, when this value is set totrue
ory
, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set tofalse
orn
, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can’t use a delimiter as part of the string, because it signals the end of the value. For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set totrue
ory
using Amazon S3 as a target, if the data has quotation marks or newline characters in it, AWS DMS encloses the entire column with an additional pair of double quotation marks (“). Every quotation mark within the data is repeated twice. The default value istrue
. Valid values includetrue
,false
,y
, andn
.row_group_length (
Union
[int
,float
,None
]) – The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only. If you choose a value larger than the maximum,RowGroupLength
is set to the max row group length in bytes (64 * 1024 * 1024).server_side_encryption_kms_key_id (
Optional
[str
]) – If you are usingSSE_KMS
for theEncryptionMode
, provide the AWS KMS key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key. Here is a CLI example:aws dms create-endpoint --endpoint-identifier *value* --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn= *value* ,BucketFolder= *value* ,BucketName= *value* ,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId= *value*
service_access_role_arn (
Optional
[str
]) – A required parameter that specifies the Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow theiam:PassRole
action. It enables AWS DMS to read and write objects from an S3 bucket.timestamp_column_name (
Optional
[str
]) – A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. .. epigraph:: AWS DMS supports theTimestampColumnName
parameter in versions 3.1.4 and later. AWS DMS includes an additionalSTRING
column in the .csv or .parquet object files of your migrated data when you setTimestampColumnName
to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value isyyyy-MM-dd HH:mm:ss.SSSSSS
. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When theAddColumnName
parameter is set totrue
, DMS also includes a name for the timestamp column that you set withTimestampColumnName
.use_csv_no_sup_value (
Union
[bool
,IResolvable
,None
]) – This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If this setting is set totrue
for columns not included in the supplemental log, AWS DMS uses the value specified by`CsvNoSupValue
<https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CsvNoSupValue>`_ . If this setting isn’t set or is set tofalse
, AWS DMS uses the null value for these columns. .. epigraph:: This setting is supported in AWS DMS versions 3.4.1 and later.use_task_start_time_for_full_load_timestamp (
Union
[bool
,IResolvable
,None
]) – When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. For full load, whenuseTaskStartTimeForFullLoadTimestamp
is set totrue
, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. WhenuseTaskStartTimeForFullLoadTimestamp
is set tofalse
, the full load timestamp in the timestamp column increments with the time data arrives at the target.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

s3_settings_property = dms.CfnEndpoint.S3SettingsProperty(
    add_column_name=False,
    add_trailing_padding_character=False,
    bucket_folder="bucketFolder",
    bucket_name="bucketName",
    canned_acl_for_objects="cannedAclForObjects",
    cdc_inserts_and_updates=False,
    cdc_inserts_only=False,
    cdc_max_batch_interval=123,
    cdc_min_file_size=123,
    cdc_path="cdcPath",
    compression_type="compressionType",
    csv_delimiter="csvDelimiter",
    csv_no_sup_value="csvNoSupValue",
    csv_null_value="csvNullValue",
    csv_row_delimiter="csvRowDelimiter",
    data_format="dataFormat",
    data_page_size=123,
    date_partition_delimiter="datePartitionDelimiter",
    date_partition_enabled=False,
    date_partition_sequence="datePartitionSequence",
    date_partition_timezone="datePartitionTimezone",
    dict_page_size_limit=123,
    enable_statistics=False,
    encoding_type="encodingType",
    encryption_mode="encryptionMode",
    expected_bucket_owner="expectedBucketOwner",
    external_table_definition="externalTableDefinition",
    glue_catalog_generation=False,
    ignore_header_rows=123,
    include_op_for_full_load=False,
    max_file_size=123,
    parquet_timestamp_in_millisecond=False,
    parquet_version="parquetVersion",
    preserve_transactions=False,
    rfc4180=False,
    row_group_length=123,
    server_side_encryption_kms_key_id="serverSideEncryptionKmsKeyId",
    service_access_role_arn="serviceAccessRoleArn",
    timestamp_column_name="timestampColumnName",
    use_csv_no_sup_value=False,
    use_task_start_time_for_full_load_timestamp=False
)
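Beyond the placeholder example, here is a minimal sketch (not part of the generated reference) that combines several of the CDC and file-format settings documented above for an S3 target. The bucket name, folder, role ARN, and column name are placeholders.

from aws_cdk import aws_dms as dms

# A minimal sketch, not a canonical configuration: .csv output annotated with
# source operations during CDC. All names and ARNs below are placeholders.
s3_cdc_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="my-dms-target-bucket",
    bucket_folder="migrated",  # tables land under migrated/<schema>/<table>/
    service_access_role_arn="arn:aws:iam::123456789012:role/dms-s3-role",
    data_format="csv",
    cdc_inserts_and_updates=True,    # migrate only INSERTs and UPDATEs during CDC
    include_op_for_full_load=True,   # first .csv field carries the I/U operation indicator
    timestamp_column_name="dms_commit_ts"  # hypothetical name for the added STRING column
)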
Attributes
- add_column_name
An optional parameter that, when set to true or y, adds column name information to the .csv output file.
The default value is false. Valid values are true, false, y, and n.
- add_trailing_padding_character
Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data.
The default value is false.
- bucket_folder
An optional parameter to set a folder name in the S3 bucket.
If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, the path used is schema_name/table_name/.
- bucket_name
The name of the S3 bucket.
- canned_acl_for_objects
A value that enables AWS DMS to specify a predefined (canned) access control list (ACL) for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see `Canned ACL <https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl>`_ in the Amazon S3 Developer Guide.
The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.
- cdc_inserts_and_updates
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For the .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
- cdc_inserts_only
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For the .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
AWS DMS supports the interaction described above between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
- cdc_max_batch_interval
Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.
When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within an AWS DMS CloudFormation template.
The default value is 60 seconds.
- cdc_min_file_size
Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.
When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within an AWS DMS CloudFormation template.
The default value is 32 MB.
- cdc_path
Specifies the folder path of CDC files.
For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, AWS DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set `PreserveTransactions <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactions>`_ to true, AWS DMS verifies that you have set this parameter to a folder path on your S3 target where AWS DMS can save the transaction order for the CDC load. AWS DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by `BucketFolder <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder>`_ and `BucketName <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName>`_ .
For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, AWS DMS creates the following CDC folder path: MyTargetBucket/MyChangedData.
If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, AWS DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target. A sketch combining these settings follows below.
This setting is supported in AWS DMS versions 3.4.2 and later.
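Reusing the bucket and folder names from the example above, a sketch of a transaction-order-preserving CDC target (DMS 3.4.2 and later), pairing CdcPath with PreserveTransactions as described:

from aws_cdk import aws_dms as dms

cdc_ordered_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="MyTargetBucket",
    bucket_folder="MyTargetData",
    cdc_path="MyChangedData",    # CDC files land under MyTargetBucket/MyTargetData/MyChangedData
    preserve_transactions=True   # save the transaction order for the CDC load
)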
- compression_type
An optional parameter.
When set to GZIP, it enables the service to compress the target files. To allow the service to write the target files uncompressed, either set this parameter to NONE (the default) or don't specify the parameter at all. This parameter applies to both .csv and .parquet file formats.
- csv_delimiter
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
- csv_no_sup_value
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If `UseCsvNoSupValue <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValue>`_ is set to true, specify a string value that you want AWS DMS to use for all columns not included in the supplemental log. If you do not specify a string value, AWS DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.
This setting is supported in AWS DMS versions 3.4.1 and later.
- csv_null_value
An optional parameter that specifies how AWS DMS treats null values.
While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), AWS DMS treats the empty string as the null value instead of NULL.
The default value is NULL. Valid values include any valid string.
- csv_row_delimiter
The delimiter used to separate rows in the .csv file for both source and target.
The default is a newline (\n).
- data_format
The format of the data that you want to use for output. You can choose one of the following:
- csv : This is a row-based file format with comma-separated values (.csv).
- parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
- data_page_size
The size of one data page in bytes.
This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
- date_partition_delimiter
Specifies a date separating delimiter to use during folder partitioning.
The default value is SLASH. Use this parameter when DatePartitionedEnabled is set to true.
- date_partition_enabled
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates.
The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
- date_partition_sequence
Identifies the sequence of the date format to use during folder partitioning.
The default value is YYYYMMDD. Use this parameter when DatePartitionedEnabled is set to true.
- date_partition_timezone
When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone.
The conversion occurs when a date partition folder is created and a change data capture (CDC) file name is generated. The time zone format is Area/Location. Use this parameter when DatePartitionedEnabled is set to true, as shown in the following example:
s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone": "Asia/Seoul", "BucketName": "dms-nattarat-test"}'
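The same date-partitioning example, expressed with this property type rather than raw s3-settings JSON (values copied from the example above):

from aws_cdk import aws_dms as dms

partitioned_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="dms-nattarat-test",
    date_partition_enabled=True,
    date_partition_sequence="YYYYMMDDHH",
    date_partition_delimiter="SLASH",
    date_partition_timezone="Asia/Seoul"  # Area/Location format
)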
- dict_page_size_limit
The maximum size of an encoded dictionary page of a column.
If the dictionary page exceeds this, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for the .parquet file format only.
- enable_statistics
A value that enables statistics for Parquet pages and row groups.
Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for the .parquet file format only.
- encoding_type
The type of encoding that you're using:
- RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
- PLAIN doesn't use encoding at all. Values are stored as they are.
- PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- encryption_mode
The type of server-side encryption that you want to use for your data.
This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3, but you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, you need an IAM role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
- s3:CreateBucket
- s3:ListBucket
- s3:DeleteBucket
- s3:GetBucketLocation
- s3:GetObject
- s3:PutObject
- s3:DeleteObject
- s3:GetObjectVersion
- s3:GetBucketPolicy
- s3:PutBucketPolicy
- s3:DeleteBucketPolicy
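A sketch of SSE-KMS encryption for an S3 target, combining EncryptionMode with the ServerSideEncryptionKmsKeyId setting documented further below; the bucket name, role ARN, and key ARN are placeholders.

from aws_cdk import aws_dms as dms

encrypted_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="my-encrypted-bucket",                                     # placeholder
    service_access_role_arn="arn:aws:iam::123456789012:role/dms-s3-role",  # placeholder
    encryption_mode="SSE_KMS",
    # Placeholder key ARN; the key needs an attached policy that enables
    # IAM user permissions and allows use of the key.
    server_side_encryption_kms_key_id="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
)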
- expected_bucket_owner
To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.
Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'
When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
- external_table_definition
The external table definition.
Conditional: If S3 is used as a source, then ExternalTableDefinition is required.
- glue_catalog_generation
When true, allows AWS Glue to catalog your S3 bucket.
Creating an AWS Glue catalog lets you use Athena to query your data.
- ignore_header_rows
When this value is set to 1, AWS DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.
The default is 0.
- include_op_for_full_load
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For a full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see `Indicating Source DB Operations in Migrated S3 Data <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps>`_ in the AWS Database Migration Service User Guide.
- max_file_size
A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.
The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
- parquet_timestamp_in_millisecond
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision. Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
- parquet_version
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
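A sketch of Parquet-specific tuning using the page, dictionary, statistics, and version settings above; apart from parquet_2_0, the values shown are simply the documented defaults, and the bucket name is a placeholder.

from aws_cdk import aws_dms as dms

parquet_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="my-parquet-bucket",        # placeholder
    data_format="parquet",
    parquet_version="parquet_2_0",
    parquet_timestamp_in_millisecond=True,  # millisecond precision for Athena / AWS Glue
    data_page_size=1024 * 1024,             # documented default: 1 MiB
    dict_page_size_limit=1024 * 1024,       # documented default: 1 MiB
    row_group_length=10000,                 # documented default: 10,000 rows
    enable_statistics=True                  # documented default
)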
- preserve_transactions
If this setting is set to true, AWS DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by `CdcPath <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CdcPath>`_ . For more information, see `Capturing data changes (CDC) including transaction order on the S3 target <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath>`_ .
This setting is supported in AWS DMS versions 3.4.2 and later.
- rfc4180
For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180.
When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.
For an S3 target, this is an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using the .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, AWS DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice.
The default value is true. Valid values include true, false, y, and n.
- row_group_length
The number of rows in a row group.
A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
- server_side_encryption_kms_key_id
If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID.
The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example:
aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
- service_access_role_arn
A required parameter that specifies the Amazon Resource Name (ARN) used by the service to access the IAM role.
The role must allow the iam:PassRole action. It enables AWS DMS to read and write objects from an S3 bucket.
- timestamp_column_name
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
AWS DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
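A sketch tying TimestampColumnName to AddColumnName as described above; the bucket name and column name are hypothetical.

from aws_cdk import aws_dms as dms

ts_settings = dms.CfnEndpoint.S3SettingsProperty(
    bucket_name="my-dms-target-bucket",  # placeholder
    add_column_name=True,                # also write column name headers
    timestamp_column_name="commit_ts"    # hypothetical name for the added STRING column
)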
- use_csv_no_sup_value
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If this setting is set to true, for columns not included in the supplemental log AWS DMS uses the value specified by `CsvNoSupValue <https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CsvNoSupValue>`_ . If this setting isn't set or is set to false, AWS DMS uses the null value for these columns.
This setting is supported in AWS DMS versions 3.4.1 and later.
- use_task_start_time_for_full_load_timestamp
When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target.
For a full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.
When useTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.
SybaseSettingsProperty
- class CfnEndpoint.SybaseSettingsProperty(*, secrets_manager_access_role_arn=None, secrets_manager_secret_id=None)
Bases:
object
Provides information that defines a SAP ASE endpoint.
This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide .
- Parameters:
secrets_manager_access_role_arn (Optional[str]) – The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the SAP ASE endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId, or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.

secrets_manager_secret_id (Optional[str]) – The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the SAP ASE endpoint connection details.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_dms as dms

sybase_settings_property = dms.CfnEndpoint.SybaseSettingsProperty(
    secrets_manager_access_role_arn="secretsManagerAccessRoleArn",
    secrets_manager_secret_id="secretsManagerSecretId"
)
Attributes
- secrets_manager_access_role_arn
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the AWS Secrets Manager secret that allows access to the SAP ASE endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId, or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret, the corresponding SecretsManagerAccessRoleArn, and the SecretsManagerSecretId that is required to access it, see `Using secrets to access AWS Database Migration Service resources <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager>`_ in the AWS Database Migration Service User Guide.
- secrets_manager_secret_id
The full ARN, partial ARN, or display name of the SecretsManagerSecret that contains the SAP ASE endpoint connection details.
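A sketch wiring these settings into a source endpoint that authenticates through Secrets Manager. The ARNs and secret name are placeholders, and "sybase" is assumed here as the DMS engine name for SAP ASE endpoints.

from aws_cdk import aws_dms as dms

sybase_source = dms.CfnEndpoint(self, "SybaseSource",
    endpoint_type="source",
    engine_name="sybase",  # assumed engine name for SAP ASE
    sybase_settings=dms.CfnEndpoint.SybaseSettingsProperty(
        secrets_manager_access_role_arn="arn:aws:iam::123456789012:role/dms-secrets-role",  # placeholder
        secrets_manager_secret_id="my-sybase-connection-secret"                             # placeholder
    )
)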