@Generated(value="com.amazonaws:aws-java-sdk-code-generator")
public class KafkaSettings extends Object implements Serializable, Cloneable, StructuredPojo
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
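Before the member-by-member reference below, a minimal sketch of typical usage: build the settings with the fluent `with*` methods and attach them to a target endpoint. This assumes the DMS client and `CreateEndpointRequest` classes from the same SDK package (`com.amazonaws.services.databasemigrationservice`); the broker hosts, topic, and endpoint identifier are hypothetical placeholders.

```java
import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationService;
import com.amazonaws.services.databasemigrationservice.AWSDatabaseMigrationServiceClientBuilder;
import com.amazonaws.services.databasemigrationservice.model.CreateEndpointRequest;
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

public class KafkaTargetEndpointExample {
    public static void main(String[] args) {
        // Each with* method returns this KafkaSettings, so calls chain fluently.
        KafkaSettings kafkaSettings = new KafkaSettings()
                .withBroker("broker1.example.com:9092,broker2.example.com:9092") // hypothetical brokers
                .withTopic("dms-cdc-topic")                                      // hypothetical topic
                .withIncludeTransactionDetails(true);

        AWSDatabaseMigrationService dms = AWSDatabaseMigrationServiceClientBuilder.defaultClient();
        dms.createEndpoint(new CreateEndpointRequest()
                .withEndpointIdentifier("kafka-target") // hypothetical identifier
                .withEndpointType("target")
                .withEngineName("kafka")
                .withKafkaSettings(kafkaSettings));
    }
}
```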
| Constructor and Description |
|---|
| `KafkaSettings()` |
| Modifier and Type | Method and Description |
|---|---|
| `KafkaSettings` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getBroker()`: A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. |
| `Boolean` | `getIncludeControlDetails()`: Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| `Boolean` | `getIncludeNullAndEmpty()`: Include NULL and empty columns for records migrated to the endpoint. |
| `Boolean` | `getIncludePartitionValue()`: Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. |
| `Boolean` | `getIncludeTableAlterOperations()`: Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. |
| `Boolean` | `getIncludeTransactionDetails()`: Provides detailed transaction information from the source database. |
| `String` | `getMessageFormat()`: The output format for the records created on the endpoint. |
| `Integer` | `getMessageMaxBytes()`: The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| `Boolean` | `getNoHexPrefix()`: Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. |
| `Boolean` | `getPartitionIncludeSchemaTable()`: Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. |
| `String` | `getSaslMechanism()`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `String` | `getSaslPassword()`: The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `String` | `getSaslUsername()`: The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `String` | `getSecurityProtocol()`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `String` | `getSslCaCertificateArn()`: The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint. |
| `String` | `getSslClientCertificateArn()`: The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint. |
| `String` | `getSslClientKeyArn()`: The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint. |
| `String` | `getSslClientKeyPassword()`: The password for the client private key used to securely connect to a Kafka target endpoint. |
| `String` | `getSslEndpointIdentificationAlgorithm()`: Sets hostname verification for the certificate. |
| `String` | `getTopic()`: The topic to which you migrate the data. |
| `int` | `hashCode()` |
| `Boolean` | `isIncludeControlDetails()`: Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| `Boolean` | `isIncludeNullAndEmpty()`: Include NULL and empty columns for records migrated to the endpoint. |
| `Boolean` | `isIncludePartitionValue()`: Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. |
| `Boolean` | `isIncludeTableAlterOperations()`: Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. |
| `Boolean` | `isIncludeTransactionDetails()`: Provides detailed transaction information from the source database. |
| `Boolean` | `isNoHexPrefix()`: Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. |
| `Boolean` | `isPartitionIncludeSchemaTable()`: Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)`: Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setBroker(String broker)`: A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. |
| `void` | `setIncludeControlDetails(Boolean includeControlDetails)`: Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| `void` | `setIncludeNullAndEmpty(Boolean includeNullAndEmpty)`: Include NULL and empty columns for records migrated to the endpoint. |
| `void` | `setIncludePartitionValue(Boolean includePartitionValue)`: Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. |
| `void` | `setIncludeTableAlterOperations(Boolean includeTableAlterOperations)`: Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. |
| `void` | `setIncludeTransactionDetails(Boolean includeTransactionDetails)`: Provides detailed transaction information from the source database. |
| `void` | `setMessageFormat(String messageFormat)`: The output format for the records created on the endpoint. |
| `void` | `setMessageMaxBytes(Integer messageMaxBytes)`: The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| `void` | `setNoHexPrefix(Boolean noHexPrefix)`: Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. |
| `void` | `setPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)`: Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. |
| `void` | `setSaslMechanism(String saslMechanism)`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `void` | `setSaslPassword(String saslPassword)`: The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `void` | `setSaslUsername(String saslUsername)`: The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `void` | `setSecurityProtocol(String securityProtocol)`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `void` | `setSslCaCertificateArn(String sslCaCertificateArn)`: The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint. |
| `void` | `setSslClientCertificateArn(String sslClientCertificateArn)`: The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint. |
| `void` | `setSslClientKeyArn(String sslClientKeyArn)`: The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint. |
| `void` | `setSslClientKeyPassword(String sslClientKeyPassword)`: The password for the client private key used to securely connect to a Kafka target endpoint. |
| `void` | `setSslEndpointIdentificationAlgorithm(String sslEndpointIdentificationAlgorithm)`: Sets hostname verification for the certificate. |
| `void` | `setTopic(String topic)`: The topic to which you migrate the data. |
| `String` | `toString()`: Returns a string representation of this object. |
| `KafkaSettings` | `withBroker(String broker)`: A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. |
| `KafkaSettings` | `withIncludeControlDetails(Boolean includeControlDetails)`: Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. |
| `KafkaSettings` | `withIncludeNullAndEmpty(Boolean includeNullAndEmpty)`: Include NULL and empty columns for records migrated to the endpoint. |
| `KafkaSettings` | `withIncludePartitionValue(Boolean includePartitionValue)`: Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. |
| `KafkaSettings` | `withIncludeTableAlterOperations(Boolean includeTableAlterOperations)`: Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. |
| `KafkaSettings` | `withIncludeTransactionDetails(Boolean includeTransactionDetails)`: Provides detailed transaction information from the source database. |
| `KafkaSettings` | `withMessageFormat(MessageFormatValue messageFormat)`: The output format for the records created on the endpoint. |
| `KafkaSettings` | `withMessageFormat(String messageFormat)`: The output format for the records created on the endpoint. |
| `KafkaSettings` | `withMessageMaxBytes(Integer messageMaxBytes)`: The maximum size in bytes for records created on the endpoint. The default is 1,000,000. |
| `KafkaSettings` | `withNoHexPrefix(Boolean noHexPrefix)`: Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. |
| `KafkaSettings` | `withPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)`: Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. |
| `KafkaSettings` | `withSaslMechanism(KafkaSaslMechanism saslMechanism)`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `KafkaSettings` | `withSaslMechanism(String saslMechanism)`: For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. |
| `KafkaSettings` | `withSaslPassword(String saslPassword)`: The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `KafkaSettings` | `withSaslUsername(String saslUsername)`: The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication. |
| `KafkaSettings` | `withSecurityProtocol(KafkaSecurityProtocol securityProtocol)`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `KafkaSettings` | `withSecurityProtocol(String securityProtocol)`: Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). |
| `KafkaSettings` | `withSslCaCertificateArn(String sslCaCertificateArn)`: The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint. |
| `KafkaSettings` | `withSslClientCertificateArn(String sslClientCertificateArn)`: The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint. |
| `KafkaSettings` | `withSslClientKeyArn(String sslClientKeyArn)`: The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint. |
| `KafkaSettings` | `withSslClientKeyPassword(String sslClientKeyPassword)`: The password for the client private key used to securely connect to a Kafka target endpoint. |
| `KafkaSettings` | `withSslEndpointIdentificationAlgorithm(KafkaSslEndpointIdentificationAlgorithm sslEndpointIdentificationAlgorithm)`: Sets hostname verification for the certificate. |
| `KafkaSettings` | `withSslEndpointIdentificationAlgorithm(String sslEndpointIdentificationAlgorithm)`: Sets hostname verification for the certificate. |
| `KafkaSettings` | `withTopic(String topic)`: The topic to which you migrate the data. |
public void setBroker(String broker)
public String getBroker()
public KafkaSettings withBroker(String broker)

A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form `broker-hostname-or-ip:port`. For example, `"ec2-12-345-678-901.compute-1.amazonaws.com:2345"`. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.

public void setTopic(String topic)
public String getTopic()
public KafkaSettings withTopic(String topic)

The topic to which you migrate the data. If you don't specify a topic, DMS specifies `"kafka-default-topic"` as the migration topic.
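Note that multiple brokers go into one comma-separated string rather than a collection. A small sketch with hypothetical hostnames and topic name:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class BrokerTopicConfig {
    static KafkaSettings brokerAndTopic() {
        KafkaSettings settings = new KafkaSettings();
        // One string; each entry has the form broker-hostname-or-ip:port.
        settings.setBroker("ec2-12-345-678-901.compute-1.amazonaws.com:2345,"
                + "ec2-98-765-432-109.compute-1.amazonaws.com:2345");
        // If no topic is set, DMS falls back to "kafka-default-topic".
        settings.setTopic("orders-cdc"); // hypothetical topic name
        return settings;
    }
}
```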
public void setMessageFormat(String messageFormat)
public String getMessageFormat()
public KafkaSettings withMessageFormat(String messageFormat)
public KafkaSettings withMessageFormat(MessageFormatValue messageFormat)

The output format for the records created on the endpoint. The message format is `JSON` (default) or `JSON_UNFORMATTED` (a single line with no tab).

See also: `MessageFormatValue`
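Both a `String` and a `MessageFormatValue` overload exist for the wither. A sketch of each; the `JsonUnformatted` constant name and the lowercase `"json-unformatted"` wire value follow the SDK's usual conventions and are assumptions to verify against your SDK version:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;
import com.amazonaws.services.databasemigrationservice.model.MessageFormatValue;

class MessageFormatConfig {
    static KafkaSettings viaEnum() {
        // Enum overload; JsonUnformatted is the assumed constant for JSON_UNFORMATTED.
        return new KafkaSettings().withMessageFormat(MessageFormatValue.JsonUnformatted);
    }

    static KafkaSettings viaString() {
        // String overload; getMessageFormat() returns the String form.
        return new KafkaSettings().withMessageFormat("json-unformatted");
    }
}
```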
public void setIncludeTransactionDetails(Boolean includeTransactionDetails)
public Boolean getIncludeTransactionDetails()
public KafkaSettings withIncludeTransactionDetails(Boolean includeTransactionDetails)
public Boolean isIncludeTransactionDetails()

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for `transaction_id`, previous `transaction_id`, and `transaction_record_id` (the record offset within a transaction). The default is `false`.
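A sketch that enables the flag; with it on, each record's metadata carries the commit timestamp, log position, and the transaction fields named above:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class TransactionDetailConfig {
    static KafkaSettings withTransactionDetail() {
        // Default is false; true adds transaction_id, previous transaction_id,
        // and transaction_record_id (offset within the transaction) per record.
        return new KafkaSettings().withIncludeTransactionDetails(true);
    }
}
```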
public void setIncludePartitionValue(Boolean includePartitionValue)
public Boolean getIncludePartitionValue()
public KafkaSettings withIncludePartitionValue(Boolean includePartitionValue)
public Boolean isIncludePartitionValue()

Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. The default is `false`.
public void setPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)
public Boolean getPartitionIncludeSchemaTable()
public KafkaSettings withPartitionIncludeSchemaTable(Boolean partitionIncludeSchemaTable)
public Boolean isPartitionIncludeSchemaTable()

Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is `false`.
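The two partition-related flags are often set together when a workload suffers from the hot-partition pattern described above; a minimal sketch:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class PartitionSpreadConfig {
    static KafkaSettings spreadPartitions() {
        return new KafkaSettings()
                // Emit the partition value in the Kafka message output.
                .withIncludePartitionValue(true)
                // With primary-key-type partitioning, prefix schema and table
                // names so similar key ranges from many tables do not all land
                // on the same partition and throttle.
                .withPartitionIncludeSchemaTable(true);
    }
}
```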
public void setIncludeTableAlterOperations(Boolean includeTableAlterOperations)
public Boolean getIncludeTableAlterOperations()
public KafkaSettings withIncludeTableAlterOperations(Boolean includeTableAlterOperations)
public Boolean isIncludeTableAlterOperations()

Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. The default is `false`.
public void setIncludeControlDetails(Boolean includeControlDetails)
public Boolean getIncludeControlDetails()
public KafkaSettings withIncludeControlDetails(Boolean includeControlDetails)
public Boolean isIncludeControlDetails()

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is `false`.
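A sketch that turns on both control-data settings covered above:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class ControlDataConfig {
    static KafkaSettings controlData() {
        return new KafkaSettings()
                // Forward DDL (rename-table, drop-table, add-column,
                // drop-column, rename-column) as control records.
                .withIncludeTableAlterOperations(true)
                // Add detailed table and column definition information
                // to the Kafka message output.
                .withIncludeControlDetails(true);
    }
}
```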
public void setMessageMaxBytes(Integer messageMaxBytes)
public Integer getMessageMaxBytes()
public KafkaSettings withMessageMaxBytes(Integer messageMaxBytes)

The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
public void setIncludeNullAndEmpty(Boolean includeNullAndEmpty)
public Boolean getIncludeNullAndEmpty()
public KafkaSettings withIncludeNullAndEmpty(Boolean includeNullAndEmpty)
public Boolean isIncludeNullAndEmpty()

Include NULL and empty columns for records migrated to the endpoint. The default is `false`.
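A sketch combining the two record-shape settings above. Note that `MessageMaxBytes` only governs what DMS produces; any size limits configured on the Kafka brokers or topic still apply independently:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class RecordShapeConfig {
    static KafkaSettings recordShape() {
        return new KafkaSettings()
                // Raise the per-record ceiling above the 1,000,000-byte default.
                .withMessageMaxBytes(4_000_000)
                // Keep NULL and empty columns in migrated records (default false).
                .withIncludeNullAndEmpty(true);
    }
}
```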
public void setSecurityProtocol(String securityProtocol)
public String getSecurityProtocol()
public KafkaSettings withSecurityProtocol(String securityProtocol)
public KafkaSettings withSecurityProtocol(KafkaSecurityProtocol securityProtocol)

Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include `ssl-encryption`, `ssl-authentication`, and `sasl-ssl`. `sasl-ssl` requires `SaslUsername` and `SaslPassword`.

See also: `KafkaSecurityProtocol`
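A sketch of the `sasl-ssl` option which, as noted, requires `SaslUsername` and `SaslPassword`. The String overload is used with the documented option value; in practice the credentials would come from a secret store rather than literals:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class SaslSslConfig {
    static KafkaSettings saslSsl(String saslUsername, String saslPassword) {
        return new KafkaSettings()
                .withSecurityProtocol("sasl-ssl") // enum overload also available
                .withSaslUsername(saslUsername)
                .withSaslPassword(saslPassword);
    }
}
```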
public void setSslClientCertificateArn(String sslClientCertificateArn)
public String getSslClientCertificateArn()
public KafkaSettings withSslClientCertificateArn(String sslClientCertificateArn)

The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.

public void setSslClientKeyArn(String sslClientKeyArn)
public String getSslClientKeyArn()
public KafkaSettings withSslClientKeyArn(String sslClientKeyArn)

The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.

public void setSslClientKeyPassword(String sslClientKeyPassword)
public String getSslClientKeyPassword()
public KafkaSettings withSslClientKeyPassword(String sslClientKeyPassword)

The password for the client private key used to securely connect to a Kafka target endpoint.
public void setSslCaCertificateArn(String sslCaCertificateArn)
public String getSslCaCertificateArn()
public KafkaSettings withSslCaCertificateArn(String sslCaCertificateArn)

The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
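Putting the certificate-related settings together for `ssl-authentication` (mutual TLS); all ARN values are hypothetical placeholders, and the certificates must already be imported into DMS so that ARNs exist to reference:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class MutualTlsConfig {
    static KafkaSettings sslAuthentication() {
        return new KafkaSettings()
                .withSecurityProtocol("ssl-authentication")
                .withSslClientCertificateArn("arn:aws:dms:us-east-1:111122223333:cert:client-cert")
                .withSslClientKeyArn("arn:aws:dms:us-east-1:111122223333:cert:client-key")
                .withSslClientKeyPassword("key-passphrase") // only if the key is encrypted
                .withSslCaCertificateArn("arn:aws:dms:us-east-1:111122223333:cert:private-ca");
    }
}
```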
public void setSaslUsername(String saslUsername)
public String getSaslUsername()
public KafkaSettings withSaslUsername(String saslUsername)

The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

public void setSaslPassword(String saslPassword)
public String getSaslPassword()
public KafkaSettings withSaslPassword(String saslPassword)

The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
public void setNoHexPrefix(Boolean noHexPrefix)
public Boolean getNoHexPrefix()
public KafkaSettings withNoHexPrefix(Boolean noHexPrefix)
public Boolean isNoHexPrefix()

Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the `NoHexPrefix` endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
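A sketch for the Oracle RAW case described above:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class RawColumnConfig {
    static KafkaSettings oracleRawWithoutHexPrefix() {
        // Default behavior writes hex LOB/RAW values as "0x1A2B..."; with
        // NoHexPrefix=true the same data arrives as "1A2B...".
        return new KafkaSettings().withNoHexPrefix(true);
    }
}
```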
public void setSaslMechanism(String saslMechanism)
public String getSaslMechanism()
public KafkaSettings withSaslMechanism(String saslMechanism)
public KafkaSettings withSaslMechanism(KafkaSaslMechanism saslMechanism)

For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. DMS versions 3.5.0 and later also support the `PLAIN` mechanism. To use the `PLAIN` mechanism, set this parameter to `PLAIN`.

See also: `KafkaSaslMechanism`
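A sketch selecting the `PLAIN` mechanism per the wording above (DMS 3.5.0 and later); whether the service expects `"PLAIN"` or a lowercase wire value is worth confirming against the `KafkaSaslMechanism` enum in your SDK version:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class SaslMechanismConfig {
    static KafkaSettings plainMechanism(String saslUsername, String saslPassword) {
        return new KafkaSettings()
                .withSecurityProtocol("sasl-ssl")
                .withSaslMechanism("PLAIN") // default without this call: SCRAM-SHA-512
                .withSaslUsername(saslUsername)
                .withSaslPassword(saslPassword);
    }
}
```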
public void setSslEndpointIdentificationAlgorithm(String sslEndpointIdentificationAlgorithm)
public String getSslEndpointIdentificationAlgorithm()
public KafkaSettings withSslEndpointIdentificationAlgorithm(String sslEndpointIdentificationAlgorithm)
public KafkaSettings withSslEndpointIdentificationAlgorithm(KafkaSslEndpointIdentificationAlgorithm sslEndpointIdentificationAlgorithm)

Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.

See also: `KafkaSslEndpointIdentificationAlgorithm`
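A sketch enabling verification (DMS 3.5.1 and later). The accepted values are not listed on this page; the `"https"`/`"none"` pair shown is an assumption to confirm against `KafkaSslEndpointIdentificationAlgorithm`:

```java
import com.amazonaws.services.databasemigrationservice.model.KafkaSettings;

class HostnameVerificationConfig {
    static KafkaSettings verifyBrokerHostnames() {
        // Assumed values: "https" enables hostname verification, "none" disables it.
        return new KafkaSettings().withSslEndpointIdentificationAlgorithm("https");
    }
}
```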
public String toString()

Returns a string representation of this object. Overrides `toString` in class `Object`. See also: `Object.toString()`

public KafkaSettings clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Description copied from interface `StructuredPojo`: marshalls this structured data using the given `ProtocolMarshaller`. Specified by `marshall` in interface `StructuredPojo`. Parameters: `protocolMarshaller` - implementation of `ProtocolMarshaller` used to marshall this object's data.