Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink.
Tutorial: Using a Managed Service for Apache Flink Application to Replicate Data from One Topic in an MSK Cluster to Another in a VPC
The following tutorial demonstrates how to create an Amazon VPC with an Amazon MSK cluster and two topics, and how to create a Managed Service for Apache Flink application that reads from one Amazon MSK topic and writes to another.
Note
To set up required prerequisites for this exercise, first complete the Getting Started (DataStream API) exercise.
This tutorial contains the following sections:
- Create an Amazon VPC with an Amazon MSK cluster
- Create the Application Code
- Upload the Apache Flink Streaming Java Code
- Create the Application
- Configure the Application
- Run the Application
- Test the Application
Create an Amazon VPC with an Amazon MSK cluster
To create a sample VPC and Amazon MSK cluster to access from a Managed Service for Apache Flink application, follow the Getting Started Using Amazon MSK tutorial.
When completing the tutorial, note the following:
In Step 3: Create a Topic, repeat the kafka-topics.sh --create command to create a destination topic named AWSKafkaTutorialTopicDestination (if you prefer to create the topic from code, see the sketch after these notes):

bin/kafka-topics.sh --create --zookeeper ZooKeeperConnectionString --replication-factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination

Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn with the ARN of your MSK cluster):

aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn

{
    ...
    "BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094"
}

When following the steps in the tutorials, be sure to use your selected AWS Region in your code, commands, and console entries.
Create the Application Code
In this section, you'll download and compile the application JAR file. We recommend using Java 11.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git. Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
The application code is located in the amazon-kinesis-data-analytics-java-examples/KafkaConnectors/KafkaGettingStartedJob.java file. You can examine the code to familiarize yourself with the structure of Managed Service for Apache Flink application code.

Use either the command-line Maven tool or your preferred development environment to create the JAR file. To compile the JAR file using the command-line Maven tool, enter the following:
mvn package -Dflink.version=1.15.3
If the build is successful, the following file is created:
target/KafkaGettingStartedJob-1.0.jar
Note
The provided source code relies on libraries from Java 11. If you are using a development environment, make sure your project's Java version is 11.
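For orientation, here is a minimal sketch of a Flink job with the same shape as this example: read from one Kafka topic and write to another, using the Flink 1.15 KafkaSource and KafkaSink connector APIs. The class name, broker string, and hard-coded topic names are illustrative; the actual KafkaGettingStartedJob.java wires these values from runtime properties instead (see the Configure the Application section):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TopicCopyJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder: the real job reads the broker list from runtime properties.
        String brokers = "b-1.example.kafka.us-west-2.amazonaws.com:9094";

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers(brokers)
                .setTopics("AWSKafkaTutorialTopic")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setProperty("security.protocol", "SSL")
                .build();

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers(brokers)
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("AWSKafkaTutorialTopicDestination")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setProperty("security.protocol", "SSL")
                .build();

        // Copy records from the source topic to the destination topic unchanged.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka source")
           .sinkTo(sink);

        env.execute("TopicCopyJobSketch");
    }
}
```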
Upload the Apache Flink Streaming Java Code
In this section, you upload your application code to the Amazon S3 bucket you created in the Getting Started (DataStream API) tutorial.
Note
If you deleted the Amazon S3 bucket from the Getting Started tutorial, create the bucket again by following the Upload the Apache Flink Streaming Java Code step in that tutorial.
- In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload.
- In the Select files step, choose Add files. Navigate to the KafkaGettingStartedJob-1.0.jar file that you created in the previous step. You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
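As an alternative to the console, the following is a minimal sketch that uploads the JAR with the AWS SDK for Java v2. The class name is illustrative, and the bucket name and Region are assumptions to replace with your own:

```java
import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class UploadJar {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.builder().region(Region.US_WEST_2).build()) {
            // Upload the compiled JAR to the bucket the application will read from.
            s3.putObject(PutObjectRequest.builder()
                            .bucket("ka-app-code-username")   // replace with your bucket name
                            .key("KafkaGettingStartedJob-1.0.jar")
                            .build(),
                    RequestBody.fromFile(Paths.get("target/KafkaGettingStartedJob-1.0.jar")));
        }
    }
}
```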
Create the Application
- Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
- On the Managed Service for Apache Flink dashboard, choose Create analytics application.
- On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
  - For Application name, enter MyApplication.
  - For Runtime, choose Apache Flink version 1.15.2.
- For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
- Choose Create application.
Note
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
- Policy: kinesis-analytics-service-MyApplication-us-west-2
- Role: kinesis-analytics-MyApplication-us-west-2
Configure the Application
- On the MyApplication page, choose Configure.
- On the Configure application page, provide the Code location:
  - For Amazon S3 bucket, enter ka-app-code-<username>.
  - For Path to Amazon S3 object, enter KafkaGettingStartedJob-1.0.jar.
- Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.

Note
When you specify application resources using the console (such as CloudWatch Logs or an Amazon VPC), the console modifies your application execution role to grant permission to access those resources.
- Under Properties, choose Add Group. Enter the following properties:

Group ID    | Key                      | Value
KafkaSource | topic                    | AWSKafkaTutorialTopic
KafkaSource | bootstrap.servers        | The bootstrap server list you saved previously
KafkaSource | security.protocol        | SSL
KafkaSource | ssl.truststore.location  | /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSource | ssl.truststore.password  | changeit

Note
The ssl.truststore.password for the default certificate is "changeit"; you do not need to change this value if you are using the default certificate.
Choose Add Group again. Enter the following properties:

Group ID  | Key                      | Value
KafkaSink | topic                    | AWSKafkaTutorialTopicDestination
KafkaSink | bootstrap.servers        | The bootstrap server list you saved previously
KafkaSink | security.protocol        | SSL
KafkaSink | ssl.truststore.location  | /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSink | ssl.truststore.password  | changeit
KafkaSink | transaction.timeout.ms   | 1000

The application code reads the above application properties to configure the source and sink used to interact with your VPC and Amazon MSK cluster. For more information about using properties, see Runtime Properties.
- Under Snapshots, choose Disable. This will make it easier to update the application without loading invalid application state data.
- Under Monitoring, ensure that the Monitoring metrics level is set to Application.
- For CloudWatch logging, choose the Enable check box.
- In the Virtual Private Cloud (VPC) section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources.
- Choose Update.
Note
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
- Log group: /aws/kinesis-analytics/MyApplication
- Log stream: kinesis-analytics-log-stream

This log stream is used to monitor the application.
Run the Application
You can view the Flink job graph by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Test the Application
In this section, you write records to the source topic. The application reads records from the source topic and writes them to the destination topic. You verify that the application is working by confirming that the records you write to the source topic appear in the destination topic.
To write and read records from the topics, follow the steps in Step 6: Produce and Consume Data in the Getting Started Using Amazon MSK tutorial.
To read from the destination topic, use the destination topic name instead of the source topic in your second connection to the cluster:
bin/kafka-console-consumer.sh --bootstrap-server BootstrapBrokerString --consumer.config client.properties --topic AWSKafkaTutorialTopicDestination --from-beginning
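If you would rather produce test records from code than from the console producer, the following is a minimal sketch using the Kafka Java client. The class name and broker string are placeholders, and it assumes TLS access to the cluster:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TestProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder: use the TLS bootstrap broker string you recorded earlier.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "b-1.example.kafka.us-west-2.amazonaws.com:9094");
        props.put("security.protocol", "SSL");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Write a few records to the source topic; the application should
            // copy them to AWSKafkaTutorialTopicDestination.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("AWSKafkaTutorialTopic", "test-" + i));
            }
            producer.flush();
        }
    }
}
```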
If no records appear in the destination topic, see the Cannot Access Resources in a VPC section in the Troubleshooting topic.