Amazon Redshift

The following are the requirements and connection instructions for using Amazon Redshift with Amazon AppFlow.

Note

You can use Amazon Redshift as a destination only.

Requirements

You must provide Amazon AppFlow with the following:

  • The name and prefix of the S3 bucket that Amazon AppFlow will use when moving data into Amazon Redshift.

  • The user name and password of your Amazon Redshift user account.

  • The JDBC URL of your Amazon Redshift cluster. For more information, see Finding your cluster connection string in the Amazon Redshift Cluster Management Guide.
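
    For reference, a Redshift JDBC URL typically takes the following form, where the cluster endpoint, port, and database name shown are placeholders for your own values:

        jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev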

You must also do the following:

  • Ensure that you enter a correct JDBC URL and password when configuring your Amazon Redshift connections. An incorrect JDBC URL or password can return an '[Amazon](500310)' error.

  • Create an AWS Identity and Access Management (IAM) role that grants AmazonS3ReadOnlyAccess and access to the kms:Decrypt action (see the following example). This allows Amazon Redshift to access the encrypted data that Amazon AppFlow stores in the S3 bucket. Attach the role to your cluster (a scripted example follows this list).

    For more information, see Create an IAM role in the Amazon Redshift Getting Started Guide.

    { "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "*" }
  • Ensure that your cluster is publicly accessible. For more information, see How to make a private Redshift cluster publicly accessible in the AWS Knowledge Center.

  • Ensure that your Amazon Redshift cluster is accessible from Amazon AppFlow IP address ranges in your Region.
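
If you prefer to script the role attachment, the following is a minimal sketch using the AWS SDK for Python (Boto3). The cluster identifier and role ARN are hypothetical placeholders:

    import boto3

    redshift = boto3.client("redshift")

    # Attach the IAM role (with AmazonS3ReadOnlyAccess and kms:Decrypt) to the
    # cluster so that Amazon Redshift can read the encrypted objects that
    # Amazon AppFlow writes to the intermediate S3 bucket.
    redshift.modify_cluster_iam_roles(
        ClusterIdentifier="examplecluster",  # placeholder
        AddIamRoles=["arn:aws:iam::111122223333:role/AppFlowRedshiftAccess"],  # placeholder
    )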

Connection instructions

To ensure that your Amazon Redshift cluster is accessible from Amazon AppFlow IP address ranges in your Region

  1. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.

  2. Choose the cluster to modify.

  3. Choose the link next to VPC security groups to open the Amazon Elastic Compute Cloud (Amazon EC2) console.

  4. On the Inbound Rules tab, ensure that inbound traffic is allowed from all Amazon AppFlow IP CIDR blocks for your Region on the port of your Amazon Redshift cluster.
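
As an alternative to editing the inbound rules by hand, the following Boto3 sketch looks up the published Amazon AppFlow CIDR blocks and authorizes them on the cluster's security group. It assumes that the AppFlow ranges appear under the AMAZON_APPFLOW service tag in ip-ranges.json; the Region, security group ID, and port are placeholders:

    import json
    import urllib.request

    import boto3

    REGION = "us-west-2"  # placeholder: the Region of your cluster
    SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder: the cluster's VPC security group
    REDSHIFT_PORT = 5439  # the default port; yours might differ

    # Download the published AWS IP ranges and keep the Amazon AppFlow
    # prefixes for the cluster's Region.
    with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as resp:
        prefixes = json.load(resp)["prefixes"]
    cidrs = [
        p["ip_prefix"]
        for p in prefixes
        if p["service"] == "AMAZON_APPFLOW" and p["region"] == REGION
    ]

    # Allow inbound traffic from those CIDR blocks on the Redshift port.
    ec2 = boto3.client("ec2", region_name=REGION)
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": REDSHIFT_PORT,
            "ToPort": REDSHIFT_PORT,
            "IpRanges": [{"CidrIp": c, "Description": "Amazon AppFlow"} for c in cidrs],
        }],
    )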

To connect to Amazon Redshift while creating a flow

  1. Open the Amazon AppFlow console at https://console.aws.amazon.com/appflow/.

  2. Choose Create flow.

  3. For Flow details, enter a name and description for the flow.

  4. (Optional) To use a customer managed CMK instead of the default AWS managed CMK, choose Data encryption, Customize encryption settings and then choose an existing CMK or create a new one.

  5. (Optional) To add a tag, choose Tags, Add tag and then enter the key name and value.

  6. Choose Next.

  7. Choose Amazon Redshift from the Destination name list.

  8. Choose Connect to open the Connect to Amazon Redshift dialog box.

    1. Under JDBC URL, enter the JDBC URL of your Amazon Redshift cluster.

    2. Under Bucket details, select the S3 bucket where Amazon AppFlow will write data before copying it into Amazon Redshift.

    3. Under Role, select the IAM role that you created when you set up Amazon Redshift for Amazon S3 access.

    4. Under User name, enter the user name that you use to log into Amazon Redshift.

    5. Under Password, enter the password that you use to log into Amazon Redshift.

    6. Under Data encryption, enter your AWS KMS key.

    7. Under Connection name, specify a name for your connection.

  9. Choose Connect.
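
The console dialog above can also be reproduced programmatically. The following is a minimal sketch that creates the same connection with the Boto3 CreateConnectorProfile API; every value shown (name, URL, bucket, role ARN, key ARN, and credentials) is a placeholder:

    import boto3

    appflow = boto3.client("appflow")

    # Create a Redshift connector profile with the same details as the
    # Connect to Amazon Redshift dialog box.
    appflow.create_connector_profile(
        connectorProfileName="my-redshift-connection",  # placeholder
        connectorType="Redshift",
        connectionMode="Public",  # the cluster must be publicly accessible
        kmsArn="arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder
        connectorProfileConfig={
            "connectorProfileProperties": {
                "Redshift": {
                    "databaseUrl": "jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev",
                    "bucketName": "my-appflow-staging-bucket",  # placeholder
                    "bucketPrefix": "redshift",  # placeholder
                    "roleArn": "arn:aws:iam::111122223333:role/AppFlowRedshiftAccess",  # placeholder
                },
            },
            "connectorProfileCredentials": {
                "Redshift": {
                    "username": "awsuser",  # placeholder
                    "password": "example-password",  # placeholder; avoid hard-coding secrets
                },
            },
        },
    )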

Now that you are connected to Amazon Redshift, you can continue with the flow creation steps as described in Getting started with Amazon AppFlow.

Tip

If you can't connect successfully, ensure that you have followed the instructions in the Requirements section.

Notes

  • The default port for Amazon Redshift is 5439, but your port might be different. To find the Amazon AppFlow IP CIDR blocks for your Region, see AWS IP address ranges in the Amazon Web Services General Reference.

  • Amazon AppFlow currently supports the insert action when transferring data into Amazon Redshift, but not the update or upsert action.
