Streaming Analytics Pipeline


Automated Deployment

Before you launch the automated deployment, please review the architecture, configuration, and other considerations discussed in this guide. Follow the step-by-step instructions in this section to configure and deploy a Streaming Analytics Pipeline into your account.

Time to deploy: Approximately five minutes


If you choose Amazon Redshift or Amazon Elasticsearch Service as the destination for your analyzed data, you must configure them to work with the Streaming Analytics Pipeline solution.

Amazon Redshift

To configure Amazon Redshift, your Amazon Redshift cluster must have a table configured to accept data in the format that the Amazon Kinesis Data Analytics application outputs, and you must have permission to write to that table. If the cluster is located in an Amazon Virtual Private Cloud (Amazon VPC), it must be publicly accessible with a public IP address. The cluster’s Amazon Elastic Compute Cloud (Amazon EC2) security group should allow access from your AWS Region’s Amazon Kinesis Data Firehose IP addresses:
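As a sketch of the first requirement, the destination table's DDL must match the columns the Kinesis Data Analytics application emits. The table and column names below are hypothetical placeholders; substitute your application's actual output schema.

```python
# Build a CREATE TABLE statement for the Amazon Redshift destination table.
# The table name, column names, and types here are hypothetical placeholders;
# replace them with the actual output schema of your Kinesis Data Analytics
# application.

def build_destination_ddl(table_name, columns):
    """Render a CREATE TABLE statement from a list of (name, type) pairs."""
    cols = ",\n    ".join(f"{name} {sql_type}" for name, sql_type in columns)
    return f"CREATE TABLE {table_name} (\n    {cols}\n);"

# Example: a table for a hypothetical aggregated-metrics output stream.
ddl = build_destination_ddl(
    "analytics_output",
    [
        ("event_time", "TIMESTAMP"),
        ("metric_name", "VARCHAR(64)"),
        ("metric_value", "DOUBLE PRECISION"),
    ],
)
print(ddl)
```

Run the generated statement against your cluster (for example, with the Amazon Redshift query editor) before deploying the solution.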

  • US East (N. Virginia) Region:

  • US West (Oregon) Region:

  • US West (N. California) Region:
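One way to open the cluster's security group is the EC2 `authorize_security_group_ingress` API. The sketch below only assembles the request parameters; the security group ID is hypothetical, and the CIDR shown is a placeholder from the reserved documentation range, not an actual Kinesis Data Firehose address range. Substitute the Firehose CIDR published for your AWS Region.

```python
# Build parameters for EC2's authorize_security_group_ingress call, which
# opens the Redshift cluster's security group to Kinesis Data Firehose.
# The group ID and CIDR below are placeholders; use your cluster's security
# group ID and the Firehose CIDR range for your AWS Region.

def firehose_ingress_params(group_id, firehose_cidr, port=5439):
    """Request parameters allowing Firehose to reach Redshift's default port."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{
                "CidrIp": firehose_cidr,
                "Description": "Kinesis Data Firehose",
            }],
        }],
    }

params = firehose_ingress_params("sg-0123456789abcdef0", "203.0.113.0/27")
# Pass to boto3: boto3.client("ec2").authorize_security_group_ingress(**params)
```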

Amazon Elasticsearch Service

To configure Amazon Elasticsearch Service, your Amazon Elasticsearch Service domain should have an existing index and type to which data can be assigned. We also recommend that you create and map your fields to the appropriate data types before you start the Amazon Kinesis Data Analytics application, to ensure that the solution assigns your data to the right types. If you do not map the data types before you deploy the Streaming Analytics Pipeline, the solution will create them for you, but they may not be the types you want.
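To pre-map your fields, you can create the index with an explicit mapping before starting the application. The sketch below builds a create-index body in the type-based mapping format used by the Elasticsearch versions this solution targets; the index type, field names, and types are hypothetical placeholders for your application's output schema.

```python
import json

# Build an Elasticsearch create-index body with explicit field mappings so
# that the solution's output is assigned the types you intend. The type name
# and fields below are hypothetical; adapt them to your output schema.

def index_mapping(type_name, fields):
    """Map each (field, es_type) pair into an explicit type mapping."""
    return {"mappings": {type_name: {"properties": {
        name: {"type": es_type} for name, es_type in fields
    }}}}

body = index_mapping("metrics", [
    ("event_time", "date"),
    ("metric_name", "keyword"),
    ("metric_value", "double"),
])
print(json.dumps(body, indent=2))
# PUT this body to your domain endpoint (e.g. PUT /<index-name>) before you
# start the Kinesis Data Analytics application.
```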

What We'll Cover

The procedure for deploying this architecture on AWS consists of the following steps. For detailed instructions, follow the links for each step.

Step 1. Launch the Stack

  • Launch the AWS CloudFormation template into your AWS account.

  • Enter values for required parameters.

  • Review the other template parameters, and adjust if necessary.
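The console launch above can also be scripted. The sketch below only assembles a request for CloudFormation's `create_stack` API; the template URL and parameter key are hypothetical placeholders, so take the actual values from the solution's launch page.

```python
# Assemble a CloudFormation create_stack request for the solution template.
# The template URL and parameter key below are hypothetical placeholders;
# use the template URL and parameters from the solution's launch page.

def create_stack_request(stack_name, template_url, parameters):
    """Translate a plain dict into CloudFormation's Parameters list format."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": k, "ParameterValue": v}
            for k, v in parameters.items()
        ],
        "Capabilities": ["CAPABILITY_IAM"],  # the stack creates IAM resources
    }

request = create_stack_request(
    "streaming-analytics-pipeline",
    "https://example.com/streaming-analytics-pipeline.template",  # placeholder
    {"DestinationType": "Redshift"},  # hypothetical parameter key
)
# Pass to boto3: boto3.client("cloudformation").create_stack(**request)
```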

Step 2. Validate and Start the Application

  • Verify that the schema and application code are correct.

  • Start the application.
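Starting the application can likewise be done through the Kinesis Data Analytics `StartApplication` API. The sketch below only builds the request body; the application name is hypothetical, and the input ID is assigned by the service (the first input is typically "1.1"), so adjust both to your application.

```python
# Build the request that starts a Kinesis Data Analytics application once
# its schema and application code have been verified. The application name
# is hypothetical; the input ID is assigned by the service (the first input
# is typically "1.1"), so check your application's actual input ID.

def start_application_request(app_name, input_id="1.1",
                              starting_position="NOW"):
    """Request body for the Kinesis Data Analytics StartApplication API."""
    return {
        "ApplicationName": app_name,
        "InputConfigurations": [{
            "Id": input_id,
            "InputStartingPositionConfiguration": {
                "InputStartingPosition": starting_position,
            },
        }],
    }

request = start_application_request("streaming-analytics-app")
# Pass to boto3: boto3.client("kinesisanalytics").start_application(**request)
```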

Step 3. Start Streaming Data

  • Start streaming data to the source Kinesis stream.

  • View results in your external destination.
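Producers send data to the source stream with the Kinesis `PutRecord` API, which requires the payload as bytes. The sketch below encodes a JSON event into a request body; the stream name and event fields are hypothetical, and the record format must match the schema your analytics application expects.

```python
import json

# Encode a JSON event as a Kinesis PutRecord request for the solution's
# source stream. The stream name and event fields are hypothetical; the
# record must match the schema your analytics application expects.

def put_record_request(stream_name, event, partition_key):
    """Request body for the Kinesis PutRecord API (Data must be bytes)."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": partition_key,
    }

request = put_record_request(
    "streaming-analytics-source",  # placeholder stream name
    {"metric_name": "latency", "metric_value": 12.5},
    partition_key="latency",
)
# Pass to boto3: boto3.client("kinesis").put_record(**request)
```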