Amazon S3 sink connector - Amazon Managed Streaming for Apache Kafka


Amazon S3 sink connector

This example shows how to use the Confluent Amazon S3 sink connector and the AWS CLI to create an Amazon S3 sink connector in MSK Connect.

  1. Copy the following JSON and paste it into a new file. Replace the placeholder strings with values that correspond to your Amazon MSK cluster's bootstrap servers connection string and to the IDs of the cluster's subnets and security groups. (For a sketch of how to look these values up with the AWS CLI, see the first example after this procedure.) For information about how to set up a service execution role, see IAM roles and policies for MSK Connect.

    { "connectorConfiguration": { "connector.class": "io.confluent.connect.s3.S3SinkConnector", "s3.region": "us-east-1", "format.class": "io.confluent.connect.s3.format.json.JsonFormat", "flush.size": "1", "schema.compatibility": "NONE", "topics": "my-test-topic", "tasks.max": "2", "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner", "storage.class": "io.confluent.connect.s3.storage.S3Storage", "s3.bucket.name": "my-test-bucket" }, "connectorName": "example-S3-sink-connector", "kafkaCluster": { "apacheKafkaCluster": { "bootstrapServers": "<cluster-bootstrap-servers-string>", "vpc": { "subnets": [ "<cluster-subnet-1>", "<cluster-subnet-2>", "<cluster-subnet-3>" ], "securityGroups": ["<cluster-security-group-id>"] } } }, "capacity": { "provisionedCapacity": { "mcuCount": 2, "workerCount": 4 } }, "kafkaConnectVersion": "2.7.1", "serviceExecutionRoleArn": "<arn-of-a-role-that-msk-connect-can-assume>", "plugins": [ { "customPlugin": { "customPluginArn": "<arn-of-custom-plugin-that-contains-connector-code>", "revision": 1 } } ], "kafkaClusterEncryptionInTransit": {"encryptionType": "PLAINTEXT"}, "kafkaClusterClientAuthentication": {"authenticationType": "NONE"} }
  2. Run the following AWS CLI command in the folder where you saved the JSON file in the previous step.

    aws kafkaconnect create-connector --cli-input-json file://connector-info.json

    The following is an example of the output that you get when you run the command successfully. (A sketch of how to check the connector's state after creation follows this procedure.)

    { "ConnectorArn": "arn:aws:kafkaconnect:us-east-1:123450006789:connector/example-S3-sink-connector/abc12345-abcd-4444-a8b9-123456f513ed-2", "ConnectorState": "CREATING", "ConnectorName": "example-S3-sink-connector" }