@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class KafkaStreamingSourceOptions extends Object implements Serializable, Cloneable, StructuredPojo
Additional options for streaming.
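A minimal usage sketch, assuming the standard v1 SDK package `com.amazonaws.services.glue.model`; the connection name, topic, and limit values below are illustrative placeholders, not SDK defaults (defaults are noted in comments where documented).

```java
import com.amazonaws.services.glue.model.KafkaStreamingSourceOptions;

public class KafkaOptionsExample {
    public static void main(String[] args) {
        // Build the options with the fluent with* methods; each call returns
        // this object so calls can be chained.
        KafkaStreamingSourceOptions options = new KafkaStreamingSourceOptions()
                .withConnectionName("my-kafka-connection")   // hypothetical connection name
                .withBootstrapServers("b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094")
                .withTopicName("orders")                     // or withAssign / withSubscribePattern
                .withSecurityProtocol("SSL")                 // "SSL" or "PLAINTEXT"
                .withStartingOffsets("earliest")             // default is "latest"
                .withIncludeHeaders(true)                    // Glue 3.0 or later only
                .withMaxOffsetsPerTrigger(10000L);           // illustrative rate limit

        System.out.println(options); // toString() prints a readable representation
    }
}
```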
| Constructor and Description |
|---|
| `KafkaStreamingSourceOptions()` |
| Modifier and Type | Method and Description |
|---|---|
| `KafkaStreamingSourceOptions` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getAddRecordTimestamp()`: When this option is set to 'true', the data output will contain an additional column named "__src_timestamp" that indicates the time when the corresponding record was received by the topic. |
| `String` | `getAssign()`: The specific `TopicPartitions` to consume. |
| `String` | `getBootstrapServers()`: A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. |
| `String` | `getClassification()`: An optional classification. |
| `String` | `getConnectionName()`: The name of the connection. |
| `String` | `getDelimiter()`: Specifies the delimiter character. |
| `String` | `getEmitConsumerLagMetrics()`: When this option is set to 'true', for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. |
| `String` | `getEndingOffsets()`: The end point when a batch query is ended. |
| `Boolean` | `getIncludeHeaders()`: Whether to include the Kafka headers. |
| `Long` | `getMaxOffsetsPerTrigger()`: The rate limit on the maximum number of offsets that are processed per trigger interval. |
| `Integer` | `getMinPartitions()`: The desired minimum number of partitions to read from Kafka. |
| `Integer` | `getNumRetries()`: The number of times to retry before failing to fetch Kafka offsets. |
| `Long` | `getPollTimeoutMs()`: The timeout in milliseconds to poll data from Kafka in Spark job executors. |
| `Long` | `getRetryIntervalMs()`: The time in milliseconds to wait before retrying to fetch Kafka offsets. |
| `String` | `getSecurityProtocol()`: The protocol used to communicate with brokers. |
| `String` | `getStartingOffsets()`: The starting position in the Kafka topic to read data from. |
| `Date` | `getStartingTimestamp()`: The timestamp of the record in the Kafka topic to start reading data from. |
| `String` | `getSubscribePattern()`: A Java regex string that identifies the topic list to subscribe to. |
| `String` | `getTopicName()`: The topic name as specified in Apache Kafka. |
| `int` | `hashCode()` |
| `Boolean` | `isIncludeHeaders()`: Whether to include the Kafka headers. |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)`: Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setAddRecordTimestamp(String addRecordTimestamp)`: When this option is set to 'true', the data output will contain an additional column named "__src_timestamp" that indicates the time when the corresponding record was received by the topic. |
| `void` | `setAssign(String assign)`: The specific `TopicPartitions` to consume. |
| `void` | `setBootstrapServers(String bootstrapServers)`: A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. |
| `void` | `setClassification(String classification)`: An optional classification. |
| `void` | `setConnectionName(String connectionName)`: The name of the connection. |
| `void` | `setDelimiter(String delimiter)`: Specifies the delimiter character. |
| `void` | `setEmitConsumerLagMetrics(String emitConsumerLagMetrics)`: When this option is set to 'true', for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. |
| `void` | `setEndingOffsets(String endingOffsets)`: The end point when a batch query is ended. |
| `void` | `setIncludeHeaders(Boolean includeHeaders)`: Whether to include the Kafka headers. |
| `void` | `setMaxOffsetsPerTrigger(Long maxOffsetsPerTrigger)`: The rate limit on the maximum number of offsets that are processed per trigger interval. |
| `void` | `setMinPartitions(Integer minPartitions)`: The desired minimum number of partitions to read from Kafka. |
| `void` | `setNumRetries(Integer numRetries)`: The number of times to retry before failing to fetch Kafka offsets. |
| `void` | `setPollTimeoutMs(Long pollTimeoutMs)`: The timeout in milliseconds to poll data from Kafka in Spark job executors. |
| `void` | `setRetryIntervalMs(Long retryIntervalMs)`: The time in milliseconds to wait before retrying to fetch Kafka offsets. |
| `void` | `setSecurityProtocol(String securityProtocol)`: The protocol used to communicate with brokers. |
| `void` | `setStartingOffsets(String startingOffsets)`: The starting position in the Kafka topic to read data from. |
| `void` | `setStartingTimestamp(Date startingTimestamp)`: The timestamp of the record in the Kafka topic to start reading data from. |
| `void` | `setSubscribePattern(String subscribePattern)`: A Java regex string that identifies the topic list to subscribe to. |
| `void` | `setTopicName(String topicName)`: The topic name as specified in Apache Kafka. |
| `String` | `toString()`: Returns a string representation of this object. |
| `KafkaStreamingSourceOptions` | `withAddRecordTimestamp(String addRecordTimestamp)`: When this option is set to 'true', the data output will contain an additional column named "__src_timestamp" that indicates the time when the corresponding record was received by the topic. |
| `KafkaStreamingSourceOptions` | `withAssign(String assign)`: The specific `TopicPartitions` to consume. |
| `KafkaStreamingSourceOptions` | `withBootstrapServers(String bootstrapServers)`: A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. |
| `KafkaStreamingSourceOptions` | `withClassification(String classification)`: An optional classification. |
| `KafkaStreamingSourceOptions` | `withConnectionName(String connectionName)`: The name of the connection. |
| `KafkaStreamingSourceOptions` | `withDelimiter(String delimiter)`: Specifies the delimiter character. |
| `KafkaStreamingSourceOptions` | `withEmitConsumerLagMetrics(String emitConsumerLagMetrics)`: When this option is set to 'true', for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. |
| `KafkaStreamingSourceOptions` | `withEndingOffsets(String endingOffsets)`: The end point when a batch query is ended. |
| `KafkaStreamingSourceOptions` | `withIncludeHeaders(Boolean includeHeaders)`: Whether to include the Kafka headers. |
| `KafkaStreamingSourceOptions` | `withMaxOffsetsPerTrigger(Long maxOffsetsPerTrigger)`: The rate limit on the maximum number of offsets that are processed per trigger interval. |
| `KafkaStreamingSourceOptions` | `withMinPartitions(Integer minPartitions)`: The desired minimum number of partitions to read from Kafka. |
| `KafkaStreamingSourceOptions` | `withNumRetries(Integer numRetries)`: The number of times to retry before failing to fetch Kafka offsets. |
| `KafkaStreamingSourceOptions` | `withPollTimeoutMs(Long pollTimeoutMs)`: The timeout in milliseconds to poll data from Kafka in Spark job executors. |
| `KafkaStreamingSourceOptions` | `withRetryIntervalMs(Long retryIntervalMs)`: The time in milliseconds to wait before retrying to fetch Kafka offsets. |
| `KafkaStreamingSourceOptions` | `withSecurityProtocol(String securityProtocol)`: The protocol used to communicate with brokers. |
| `KafkaStreamingSourceOptions` | `withStartingOffsets(String startingOffsets)`: The starting position in the Kafka topic to read data from. |
| `KafkaStreamingSourceOptions` | `withStartingTimestamp(Date startingTimestamp)`: The timestamp of the record in the Kafka topic to start reading data from. |
| `KafkaStreamingSourceOptions` | `withSubscribePattern(String subscribePattern)`: A Java regex string that identifies the topic list to subscribe to. |
| `KafkaStreamingSourceOptions` | `withTopicName(String topicName)`: The topic name as specified in Apache Kafka. |
public void setBootstrapServers(String bootstrapServers)

A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.

Parameters:
bootstrapServers - A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.

public String getBootstrapServers()

A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.

Returns:
A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.

public KafkaStreamingSourceOptions withBootstrapServers(String bootstrapServers)

A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.

Parameters:
bootstrapServers - A list of bootstrap server URLs, for example `b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094`. This option must be specified in the API call or defined in the table metadata in the Data Catalog.
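The option is a single `String`; in Kafka, the analogous `bootstrap.servers` property is a comma-separated `host:port` list. Assuming the same convention here, multiple brokers would look like this sketch (the second broker name is hypothetical):

```java
KafkaStreamingSourceOptions servers = new KafkaStreamingSourceOptions()
        .withBootstrapServers(
                "b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094,"
                + "b-2.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094");
```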
public void setSecurityProtocol(String securityProtocol)

The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.

Parameters:
securityProtocol - The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.

public String getSecurityProtocol()

The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.

Returns:
The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.

public KafkaStreamingSourceOptions withSecurityProtocol(String securityProtocol)

The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.

Parameters:
securityProtocol - The protocol used to communicate with brokers. The possible values are `"SSL"` or `"PLAINTEXT"`.
public void setConnectionName(String connectionName)

The name of the connection.

Parameters:
connectionName - The name of the connection.

public String getConnectionName()

The name of the connection.

Returns:
The name of the connection.

public KafkaStreamingSourceOptions withConnectionName(String connectionName)

The name of the connection.

Parameters:
connectionName - The name of the connection.
public void setTopicName(String topicName)

The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
topicName - The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public String getTopicName()

The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Returns:
The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public KafkaStreamingSourceOptions withTopicName(String topicName)

The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
topicName - The topic name as specified in Apache Kafka. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.
public void setAssign(String assign)

The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
assign - The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public String getAssign()

The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Returns:
The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public KafkaStreamingSourceOptions withAssign(String assign)

The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
assign - The specific `TopicPartitions` to consume. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.
public void setSubscribePattern(String subscribePattern)

A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
subscribePattern - A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public String getSubscribePattern()

A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Returns:
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

public KafkaStreamingSourceOptions withSubscribePattern(String subscribePattern)

A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.

Parameters:
subscribePattern - A Java regex string that identifies the topic list to subscribe to. You must specify at least one of `"topicName"`, `"assign"` or `"subscribePattern"`.
public void setClassification(String classification)

An optional classification.

Parameters:
classification - An optional classification.

public String getClassification()

An optional classification.

Returns:
An optional classification.

public KafkaStreamingSourceOptions withClassification(String classification)

An optional classification.

Parameters:
classification - An optional classification.
public void setDelimiter(String delimiter)

Specifies the delimiter character.

Parameters:
delimiter - Specifies the delimiter character.

public String getDelimiter()

Specifies the delimiter character.

Returns:
Specifies the delimiter character.

public KafkaStreamingSourceOptions withDelimiter(String delimiter)

Specifies the delimiter character.

Parameters:
delimiter - Specifies the delimiter character.
public void setStartingOffsets(String startingOffsets)

The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.

Parameters:
startingOffsets - The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.

public String getStartingOffsets()

The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.

Returns:
The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.

public KafkaStreamingSourceOptions withStartingOffsets(String startingOffsets)

The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.

Parameters:
startingOffsets - The starting position in the Kafka topic to read data from. The possible values are `"earliest"` or `"latest"`. The default value is `"latest"`.
public void setEndingOffsets(String endingOffsets)

The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.

Parameters:
endingOffsets - The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.

public String getEndingOffsets()

The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.

Returns:
The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.

public KafkaStreamingSourceOptions withEndingOffsets(String endingOffsets)

The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.

Parameters:
endingOffsets - The end point when a batch query is ended. Possible values are either `"latest"` or a JSON string that specifies an ending offset for each `TopicPartition`.
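A sketch pairing starting and ending offsets for a bounded read. The per-partition JSON for `endingOffsets` mirrors the Spark Kafka source convention, where `-1` stands for the latest offset; both that shape and the topic name are assumptions, not something this class documents.

```java
KafkaStreamingSourceOptions bounded = new KafkaStreamingSourceOptions()
        .withStartingOffsets("earliest")  // default is "latest"
        // Assumed Spark-style JSON: topic -> { partition -> ending offset }, -1 = latest.
        .withEndingOffsets("{\"orders\":{\"0\":5000,\"1\":-1}}");
```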
public void setPollTimeoutMs(Long pollTimeoutMs)

The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.

Parameters:
pollTimeoutMs - The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.

public Long getPollTimeoutMs()

The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.

Returns:
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.

public KafkaStreamingSourceOptions withPollTimeoutMs(Long pollTimeoutMs)

The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.

Parameters:
pollTimeoutMs - The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is `512`.
public void setNumRetries(Integer numRetries)

The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.

Parameters:
numRetries - The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.

public Integer getNumRetries()

The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.

Returns:
The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.

public KafkaStreamingSourceOptions withNumRetries(Integer numRetries)

The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.

Parameters:
numRetries - The number of times to retry before failing to fetch Kafka offsets. The default value is `3`.
public void setRetryIntervalMs(Long retryIntervalMs)

The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.

Parameters:
retryIntervalMs - The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.

public Long getRetryIntervalMs()

The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.

Returns:
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.

public KafkaStreamingSourceOptions withRetryIntervalMs(Long retryIntervalMs)

The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.

Parameters:
retryIntervalMs - The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is `10`.
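A sketch combining the polling and retry settings; the values are illustrative, chosen only to contrast with the documented defaults of 512, 3, and 10.

```java
KafkaStreamingSourceOptions tuned = new KafkaStreamingSourceOptions()
        .withPollTimeoutMs(1000L)    // default is 512
        .withNumRetries(5)           // default is 3
        .withRetryIntervalMs(100L);  // default is 10
```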
public void setMaxOffsetsPerTrigger(Long maxOffsetsPerTrigger)

The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.

Parameters:
maxOffsetsPerTrigger - The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.

public Long getMaxOffsetsPerTrigger()

The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.

Returns:
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.

public KafkaStreamingSourceOptions withMaxOffsetsPerTrigger(Long maxOffsetsPerTrigger)

The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.

Parameters:
maxOffsetsPerTrigger - The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across `topicPartitions` of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
public void setMinPartitions(Integer minPartitions)

The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.

Parameters:
minPartitions - The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.

public Integer getMinPartitions()

The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.

Returns:
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.

public KafkaStreamingSourceOptions withMinPartitions(Integer minPartitions)

The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.

Parameters:
minPartitions - The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.
public void setIncludeHeaders(Boolean includeHeaders)

Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

Parameters:
includeHeaders - Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

public Boolean getIncludeHeaders()

Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

Returns:
Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

public KafkaStreamingSourceOptions withIncludeHeaders(Boolean includeHeaders)

Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

Parameters:
includeHeaders - Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

public Boolean isIncludeHeaders()

Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.

Returns:
Whether to include the Kafka headers. When the option is set to `"true"`, the data output will contain an additional column named `"glue_streaming_kafka_headers"` with type `Array[Struct(key: String, value: String)]`. The default value is `"false"`. This option is available in Glue version 3.0 or later only.
public void setAddRecordTimestamp(String addRecordTimestamp)

When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Parameters:
addRecordTimestamp - When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

public String getAddRecordTimestamp()

When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Returns:
When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

public KafkaStreamingSourceOptions withAddRecordTimestamp(String addRecordTimestamp)

When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Parameters:
addRecordTimestamp - When this option is set to `'true'`, the data output will contain an additional column named `"__src_timestamp"` that indicates the time when the corresponding record was received by the topic. The default value is `'false'`. This option is supported in Glue version 4.0 or later.
public void setEmitConsumerLagMetrics(String emitConsumerLagMetrics)

When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Parameters:
emitConsumerLagMetrics - When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

public String getEmitConsumerLagMetrics()

When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Returns:
When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

public KafkaStreamingSourceOptions withEmitConsumerLagMetrics(String emitConsumerLagMetrics)

When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.

Parameters:
emitConsumerLagMetrics - When this option is set to `'true'`, for each batch it emits to CloudWatch a metric for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is `"glue.driver.streaming.maxConsumerLagInMs"`. The default value is `'false'`. This option is supported in Glue version 4.0 or later.
public void setStartingTimestamp(Date startingTimestamp)

The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.

Parameters:
startingTimestamp - The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.

public Date getStartingTimestamp()

The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.

Returns:
The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.

public KafkaStreamingSourceOptions withStartingTimestamp(Date startingTimestamp)

The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.

Parameters:
startingTimestamp - The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp string in UTC format of the pattern `yyyy-mm-ddTHH:MM:SSZ`, where Z represents a UTC timezone offset with a +/- (for example, "2023-04-04T08:00:00+08:00"). Only one of `StartingTimestamp` or `StartingOffsets` must be set.
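A sketch of producing the `Date` argument from the documented string format; parsing with `java.time.OffsetDateTime` is one reasonable approach, not something the SDK requires, and the topic name is a placeholder.

```java
import java.time.OffsetDateTime;
import java.util.Date;

// "2023-04-04T08:00:00+08:00" is the example string from the description above.
Date start = Date.from(OffsetDateTime.parse("2023-04-04T08:00:00+08:00").toInstant());

KafkaStreamingSourceOptions fromTimestamp = new KafkaStreamingSourceOptions()
        .withTopicName("orders")
        .withStartingTimestamp(start); // set this or startingOffsets, not both
```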
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object

See Also:
Object.toString()
public KafkaStreamingSourceOptions clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given `ProtocolMarshaller`.

Specified by:
marshall in interface StructuredPojo

Parameters:
protocolMarshaller - Implementation of `ProtocolMarshaller` used to marshall this object's data.