Requirements for Amazon AppFlow

Before you create a flow using Amazon AppFlow, verify that you have the required information about the source and destination and that they meet any configuration requirements.

General information for all applications

Source and destination API limits

The API calls that Amazon AppFlow makes to data sources and destinations count against any API limits for that application. For example, if you set up an hourly flow that pulls 5 pages of data from Salesforce, Amazon AppFlow makes a total of 120 API calls per day (24 × 5 = 120), which count against your 24-hour Salesforce API limit. The exact Salesforce API limit in this example depends on your edition and number of licenses.
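
A quick way to check a schedule against an API quota is to multiply the run frequency by the number of calls per run, as in the following sketch. The quota value shown is illustrative only; look up the actual limit for your Salesforce edition.

    # Rough API-budget check for a scheduled flow. The quota value is
    # illustrative; confirm the real limit for your Salesforce edition.
    runs_per_day = 24        # hourly schedule
    calls_per_run = 5        # pages of data pulled per run
    daily_quota = 15_000     # placeholder quota

    daily_calls = runs_per_day * calls_per_run
    print(f"{daily_calls} calls/day, {daily_calls / daily_quota:.1%} of the quota")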

IP address ranges

Amazon AppFlow operates from the AWS IP address ranges shown in the Amazon Web Services General Reference. Configuring a flow connection with an incorrect URL, URI, or IP address range can return a 'bad gateway' error. If you encounter this error, we recommend that you delete the connection and create a new one with the correct URL, URI, or IP address range.
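
If you automate allow-listing, you can read the published ranges directly from the ip-ranges.json file referenced in that guide. The following sketch filters the ranges for a single Region; the "AMAZON" service tag is an assumption, so confirm which tag applies to Amazon AppFlow in the AWS IP address ranges documentation.

    # Sketch: list published AWS IP ranges for one Region.
    # The "AMAZON" service tag is an assumption; confirm the correct tag
    # for Amazon AppFlow in the AWS IP address ranges documentation.
    import json
    import urllib.request

    with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as resp:
        data = json.load(resp)

    region = "us-east-1"
    cidrs = [
        p["ip_prefix"]
        for p in data["prefixes"]
        if p["region"] == region and p["service"] == "AMAZON"
    ]
    print("\n".join(cidrs))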

Schema changes

Amazon AppFlow can automatically import newly created Salesforce fields into Amazon S3 without requiring you to update your flow configuration. For other source applications, Amazon AppFlow does not currently support schema changes, but you can edit your flows to include new fields.

Note

If the source or destination fields in a flow's configuration are deleted from the source or destination application (including Salesforce), then the flow run will fail. To prevent failed flows, we recommend that you edit your flows to remove deleted fields from the mapping.

Amazon S3

Note

You can use Amazon S3 as a source or a destination.

  • Your Amazon S3 buckets must be in the same AWS Region as your console and flow.

  • If you use Amazon S3 as a source, all source files in the chosen Amazon S3 location must be in CSV format, and the first line of each file must be a header row of comma-separated field names. Before you set up the flow, ensure that the source location contains at least one such file.

  • Each source file should not exceed 25 MB in size. However, you can place multiple CSV files in the source location, and Amazon AppFlow reads from all of them to transfer data in a single flow run (see the example following this list).
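
The following sketch shows one way to pre-check a source location against these requirements with boto3. The bucket name and prefix are placeholders; it flags files over 25 MB and files whose first line does not look like a comma-separated header.

    # Sketch: pre-check CSV source files in an S3 prefix.
    # Bucket name and prefix are placeholders.
    import boto3

    s3 = boto3.client("s3")
    bucket, prefix = "my-source-bucket", "appflow/input/"
    max_bytes = 25 * 1024 * 1024  # 25 MB per-file limit

    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in response.get("Contents", []):
        if obj["Size"] > max_bytes:
            print(f"{obj['Key']}: too large ({obj['Size']} bytes)")
            continue
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
        header = next(body.iter_lines()).decode("utf-8-sig").strip()
        if "," not in header:
            print(f"{obj['Key']}: first line is not a comma-separated header")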

The following additional settings are available when Amazon S3 is selected as a destination:

  • You can choose to add a timestamp to the filename. The filename will end with the file creation timestamp in YYYY-MM-DDThh:mm:ss format. The creation timestamp is in UTC time.

  • You can choose to place the file in a timestamped folder. You can choose your preferred level of granularity (year, month, week, day, or minute). The granularity that you choose determines the naming format of the folder. The timestamp is in UTC time.

  • You can specify your preferred file format for the transferred records. The following options are currently available: JSON (default), CSV, or Parquet (see the configuration example following this list).

    Note

    If you choose Parquet as the format for your destination file in Amazon S3, the option to aggregate all records into one file per flow run will not be available.
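
If you create flows programmatically instead of through the console, these choices map onto the output format settings of the Amazon S3 destination in the CreateFlow API. The following is a minimal sketch of that structure; the field names and values are assumptions and should be verified against the current Amazon AppFlow API reference.

    # Sketch of the S3 destination output format settings for CreateFlow.
    # Field names and values are assumptions; verify them against the
    # Amazon AppFlow API reference before passing this to create_flow.
    s3_output_format_config = {
        "fileType": "PARQUET",      # or "JSON" (default) or "CSV"
        "prefixConfig": {
            "prefixType": "PATH",   # place files in timestamped folders
            "prefixFormat": "DAY",  # granularity of the folder timestamp
        },
        # With PARQUET, aggregating all records into one file per flow run
        # is not available.
    }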

Amazon EventBridge

Note

You can use Amazon EventBridge as a destination only.

Configuring Amazon EventBridge integration in Amazon AppFlow

Amazon AppFlow integrates with Amazon EventBridge to receive events from Salesforce. When you configure a flow that responds to Salesforce events, you can select Amazon EventBridge as a destination. This enables Salesforce events received by Amazon AppFlow to be routed directly to a partner event bus. To configure Amazon EventBridge integration in Amazon AppFlow, you must first create a flow with Amazon EventBridge as the destination and then specify the partner event source.

To create a flow with Amazon EventBridge as the destination

  1. Open the Amazon AppFlow console at https://console.aws.amazon.com/appflow/.

  2. Choose Create flow and enter a name for your flow.

  3. For Source details, choose Salesforce as the source and provide the requested information.

  4. For Destination details, choose Amazon EventBridge as the destination and one of the following partner event sources:

    • Existing partner event source - Amazon AppFlow displays a list of existing partner event sources that are available to you.

    • New partner event source - Amazon AppFlow creates a new partner event source on your behalf. If you choose this option, the partner event source name generated by Amazon AppFlow appears in a dialog box. (Optional) You can modify this name if needed.

    Note

    The actual call to Amazon EventBridge API operations for creating this partner event source happens only when you choose Create flow in step 11 of this procedure.

  5. For Large event handling, specify the Amazon S3 bucket where you want Amazon AppFlow to send large event information.

  6. Ensure that Run flow on event is selected in the Flow trigger section. This setting ensures that the flow is executed when a new Salesforce event occurs.

  7. For field mapping, choose Map all fields directly. Alternatively, you can choose the fields that you're interested in using from the Source field name list.

  8. Choose Next.

  9. (Optional) Configure filters for data fields in Amazon AppFlow.

  10. Choose Next.

  11. Review the settings and then choose Create flow.

Associating the partner event source with the event bus in Amazon EventBridge

Before you can activate the flow you created in the previous procedure, you must go to Amazon EventBridge to associate the partner event source with the event bus. After you complete this association and activate the flow, Salesforce events start flowing to the Amazon EventBridge event bus. You must ensure that the Amazon AppFlow flow that uses Amazon EventBridge as a destination is configured before performing the steps in the following procedure.

To associate the partner event source with the event bus in Amazon EventBridge

  1. Open the Partner event sources view in the Amazon EventBridge console at https://console.aws.amazon.com/events/home?#/partners/.

  2. Choose the partner event source that you created.

  3. Choose Associate with event bus.

  4. Validate the name of the partner event bus.

  5. Choose Associate.

  6. Return to Amazon AppFlow and choose Activate flow to activate the flow.
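
If you prefer to script the association, the same operations are available through the EventBridge API. The following sketch lists the partner event sources shared with your account and associates one by creating an event bus with a matching name; the "aws.partner/appflow" name prefix is an assumption, so check the actual source name in the console or in the ListEventSources output.

    # Sketch: associate an AppFlow partner event source with an event bus.
    # The "aws.partner/appflow" prefix is an assumption; check the actual
    # partner event source name before filtering on it.
    import boto3

    events = boto3.client("events")

    sources = events.list_event_sources(NamePrefix="aws.partner/appflow")
    for source in sources["EventSources"]:
        print(source["Name"], source["State"])

    # Associating a partner event source means creating an event bus whose
    # name exactly matches the event source name.
    source_name = sources["EventSources"][0]["Name"]
    events.create_event_bus(Name=source_name, EventSourceName=source_name)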

The destination service receives all Salesforce events configured for your account. If you need to filter the kinds of events that you want to process, or send different events to different targets, you can use content-based filtering with event patterns.
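
For example, you can attach a rule with an event pattern to the partner event bus so that only matching events reach a given target. In the following sketch, the bus name, rule name, target ARN, and the fields matched in the pattern are placeholders; inspect a sample event from your bus to see which fields your Salesforce payload actually contains.

    # Sketch: content-based filtering on the partner event bus.
    # Bus name, rule name, target ARN, and pattern fields are placeholders.
    import json
    import boto3

    events = boto3.client("events")
    bus_name = "aws.partner/appflow/example"  # your partner event bus name

    events.put_rule(
        Name="salesforce-account-events",
        EventBusName=bus_name,
        EventPattern=json.dumps({
            # Placeholder pattern: match on fields from your own event payload.
            "detail": {"eventType": ["AccountChangeEvent"]},
        }),
    )
    events.put_targets(
        Rule="salesforce-account-events",
        EventBusName=bus_name,
        Targets=[{"Id": "example-target", "Arn": "arn:aws:sqs:us-east-1:111122223333:example-queue"}],
    )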

Note

For events larger than 256 KB, Amazon AppFlow won't send the full event to Amazon EventBridge. Instead, the event payload contains a pointer to an Amazon S3 bucket, where you can get the full event.

Amazon Redshift

Note

You can use Amazon Redshift as a destination only.

You must provide Amazon AppFlow with the following:

  • The name and prefix of the S3 bucket that Amazon AppFlow will use when moving data into Amazon Redshift.

  • The user name and password of your Amazon Redshift user account.

  • The JDBC URL of your Amazon Redshift cluster (an example of the format follows this list). For more information, see Finding your cluster connection string in the Amazon Redshift Cluster Management Guide.
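
A Redshift JDBC URL typically has the form jdbc:redshift://<cluster-endpoint>:<port>/<database>, where all values come from your cluster configuration. For example (placeholder values only):

    jdbc:redshift://examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/dev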

You must also do the following:

  • Ensure that you enter the correct JDBC URL and password when configuring your Amazon Redshift connections. An incorrect JDBC URL or password can return an '[Amazon](500310)' error.

  • Create an AWS Identity and Access Management (IAM) role that grants AmazonS3ReadOnlyAccess and access to the kms:Decrypt action (see below) so that Amazon Redshift can access the encrypted data that Amazon AppFlow stored in the S3 bucket. Attach the role to your cluster. For more information, see Create an IAM role in the Amazon Redshift Getting Started Guide.

    { "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "*" }
  • Ensure that your cluster is publicly accessible. For more information, see How to make a private Redshift cluster publicly accessible in the AWS Knowledge Center.

  • Ensure that your Amazon Redshift cluster is accessible from Amazon AppFlow IP address ranges in your Region.

To ensure that your Amazon Redshift cluster is accessible from Amazon AppFlow IP address ranges in your Region

  1. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.

  2. Choose the cluster to modify.

  3. Choose the link next to VPC security groups to open the Amazon Elastic Compute Cloud (Amazon EC2) console.

  4. On the Inbound Rules tab, ensure that all Amazon AppFlow IP CIDR blocks for your Region are allowed on the port used by your Amazon Redshift cluster.

Note

The default port for Amazon Redshift is 5439, but your port might be different. To find the Amazon AppFlow IP CIDR block for your region, see AWS IP address ranges in the Amazon Web Services General Reference.
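
If you manage the security group programmatically, the rule from step 4 can be added with a call like the following; the security group ID, port, and CIDR block are placeholders, and the CIDR block should come from the published AWS IP address ranges for your Region.

    # Sketch: allow an Amazon AppFlow CIDR block on the cluster port.
    # Security group ID, port, and CIDR block are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",    # the cluster's VPC security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5439,              # your cluster port may differ
            "ToPort": 5439,
            "IpRanges": [{
                "CidrIp": "198.51.100.0/24",   # replace with the AppFlow CIDR block
                "Description": "Amazon AppFlow",
            }],
        }],
    )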

Amplitude

Note

You can use Amplitude as a source only.

You must provide Amazon AppFlow with the API Key and Secret Key for the project with the data. For more information, see Manage Data in the Amplitude documentation.

Datadog

Note

You can use Datadog as a source only.

  • You must provide Amazon AppFlow with an API key and an application key.

  • You must configure your flow with a date range and query filters.

Dynatrace

Note

You can use Dynatrace as a source only.

  • You must provide Amazon AppFlow with an API token.

  • You must configure your flow with a date filter with a date range that does not exceed 30 days.

Google Analytics

Note

You can use Google Analytics as a source only.

Log in to the Google API Console at https://console.developers.google.com and do the following:

  • Activate the Analytics API.

  • Create a new app named AppFlow. Set the user type to "Internal". Add the read-only access scope and add amazon.com as an authorized domain.

  • Create a new OAuth 2.0 client. Set the application type as "Web application". Set the authorized JavaScript origins URL to https://console.aws.amazon.com/. Set the authorized redirect URL to https://console.aws.amazon.com/appflow/oauth for the us-east-1 Region or https://region.console.aws.amazon.com/appflow/oauth for all other Regions.

  • You must provide Amazon AppFlow with your client ID and client secret. After you provide them, you are redirected to the Google login page. When prompted, grant Amazon AppFlow permissions to access your Google Analytics account.

For more information, see Management API - Authorization in the Google Analytics documentation.

Infor Nexus

Note

You can use Infor Nexus as a source only.

Amazon AppFlow uses Hash-based Message Authentication Code (HMAC) to connect to Infor Nexus. You must provide Amazon AppFlow with your access key ID, user ID, secret access key, and data key.

Marketo

Note

You can use Marketo as a source only.

You must provide Amazon AppFlow with your client ID and client secret.

Salesforce

Note

You can use Salesforce as a source or destination.

  • Your Salesforce account must be enabled for API access. API access is enabled by default for Enterprise, Unlimited, Developer, and Performance editions.

  • Your Salesforce account must allow you to install connected apps. If this is disabled, contact your Salesforce administrator. After you create a Salesforce connection in Amazon AppFlow, verify that the connected app named "Amazon AppFlow Embedded Login App" is installed in your Salesforce account.

  • The refresh token policy for the "Amazon AppFlow Embedded Login App" must be set to "Refresh token is valid until revoked". Otherwise, your flows will fail when your refresh token expires.

  • You must enable Change Data Capture in Salesforce to use event-driven flow triggers. From Setup, enter "Change Data Capture" in Quick Find.

  • If your Salesforce app enforces IP address restrictions, you must grant access to the addresses used by Amazon AppFlow. For more information, see AWS IP address ranges in the Amazon Web Services General Reference.

  • If you transfer more than 1 million Salesforce records, you cannot choose any Salesforce compound field. For these transfers, Amazon AppFlow uses the Salesforce Bulk API, which does not support compound fields.

  • To create private connections using AWS PrivateLink, you must enable both the "Manage Metadata" and "Manage External Connections" user permissions in your Salesforce account. Private connections are currently available in the us-east-1 and us-west-2 AWS Regions.

  • Some Salesforce objects can't be updated, such as history objects. For these objects, Amazon AppFlow does not support incremental export (the "Transfer new data only" option) for schedule-triggered flows. Instead, you can choose the "Transfer all data" option and then select the appropriate filter to limit the records you transfer.

ServiceNow

Note

You can use ServiceNow as a source only.

  • You must provide Amazon AppFlow with your user name, password, and instance name.

  • Verify that you have web_service_admin, rest_api_explorer, and admin roles.

Singular

Note

You can use Singular as a source only.

  • You must provide Amazon AppFlow with an API key.

  • The date range for the flow cannot exceed 30 days.

  • The flow cannot return more than 100,000 records.

Slack

Note

You can use Slack as a source only.

  • To create a Slack connection in Amazon AppFlow, create a new Slack app and note your client ID, client secret, and Slack instance name. When you configure Slack as a source, Amazon AppFlow redirects you to the Slack login page. After you log in, you're prompted to grant Amazon AppFlow permission to access your account.

  • Set the redirect URL to https://console.aws.amazon.com/appflow/oauth for the us-east-1 Region or https://region.console.aws.amazon.com/appflow/oauth for all other Regions.

  • Set the following user token scopes:

    • channels:history

    • channels:read

    • groups:history

    • groups:read

    • im:history

    • im:read

    • mpim:history

    • mpim:read

Snowflake

Note

You can use Snowflake as a destination only.

  • Amazon AppFlow uses the Snowflake COPY command to move data using an S3 bucket. To configure the integration, see Configuring Secure Access to Amazon S3 in the Snowflake documentation. You must also add access to the kms:Decrypt action so that Snowflake can access the encrypted data that Amazon AppFlow stored in the S3 bucket.

    { "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "*" }
  • You must provide Amazon AppFlow with the name of the stage and the S3 bucket for the stage.

  • You must provide Amazon AppFlow with the user name and password for your Snowflake account.

Trend Micro

Note

You can use Trend Micro as a source only.

You must provide Amazon AppFlow with an API secret.

Veeva

Note

You can use Veeva as a source only.

You must provide Amazon AppFlow with your user name, password, and Veeva instance name. Your user account must have API access.

Zendesk

Note

You can use Zendesk as a source only.

Create an OAuth client with the following settings:

  • Unique identifier: aws_integration_to_Zendesk

  • Redirect URL: https://console.aws.amazon.com/appflow/oauth (us-east-1) or https://region.console.aws.amazon.com/appflow/oauth (all other Regions)