AWS Data Pipeline
Developer Guide (API Version 2012-10-29)

What is AWS Data Pipeline?

AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.

For example, you can use AWS Data Pipeline to archive your web server's logs to Amazon Simple Storage Service (Amazon S3) each day and then run a weekly Amazon Elastic MapReduce (Amazon EMR) cluster over those logs to generate traffic reports.

(Figure: AWS Data Pipeline functional overview)

In this example, AWS Data Pipeline schedules the daily tasks to copy data and the weekly task to launch the Amazon EMR cluster. AWS Data Pipeline also ensures that Amazon EMR waits for the final day's data to be uploaded to Amazon S3 before it begins its analysis, even if there is an unforeseen delay in uploading the logs.

AWS Data Pipeline handles the ambiguities of real-world data management. You define the parameters of your data transformations and AWS Data Pipeline enforces the logic that you've set up.
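To make the log-archiving example concrete, a pipeline definition (the JSON that the CLI and API accept) might contain objects along these lines. This is an illustrative sketch only: the object types follow the pipeline definition language, but the IDs, bucket names, and command are invented, and the referenced `WeeklySchedule` and `ReportCluster` objects are omitted for brevity.

```json
{
  "objects": [
    {
      "id": "DailySchedule",
      "type": "Schedule",
      "period": "1 day",
      "startDateTime": "2012-10-29T00:00:00"
    },
    {
      "id": "CopyLogsToS3",
      "type": "ShellCommandActivity",
      "schedule": { "ref": "DailySchedule" },
      "command": "aws s3 cp /var/log/httpd s3://my-log-bucket/ --recursive"
    },
    {
      "id": "WeeklyTrafficReport",
      "type": "EmrActivity",
      "schedule": { "ref": "WeeklySchedule" },
      "runsOn": { "ref": "ReportCluster" }
    }
  ]
}
```

The `ref` fields are what make the workflow data-driven: each activity runs on its own schedule, and AWS Data Pipeline resolves the dependencies between objects for you.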

Accessing AWS Data Pipeline

AWS Data Pipeline provides a web-based user interface, the AWS Data Pipeline console. If you've signed up for an AWS account, you can access the console by signing in to the AWS Management Console and selecting Data Pipeline from the console home page. The AWS Data Pipeline console provides several templates, which are preconfigured pipelines for common scenarios. As you build your pipeline, graphical representations of its components appear in the design pane, and the arrows between components indicate the connections between them. For more information, see Working with Pipelines.

If you prefer to use a command line interface or API to automate the process of creating and managing pipelines, you have several options:

  • AWS Command Line Interface (CLI) — Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux/Unix. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for AWS Data Pipeline, see the AWS Command Line Interface Reference.

  • AWS Data Pipeline Command Line Interface (CLI) — An application that you run on your local computer to connect to AWS Data Pipeline and create and manage pipelines. This CLI is written in Ruby and makes calls to the web service on your behalf. To specify a pipeline definition, you pass in a JSON file. To get started, see Install the AWS Data Pipeline Command Line Interface.

  • AWS Software Development Kits (SDKs) — Enable you to build applications that create and manage pipelines using language-specific APIs. Using an SDK is the best option if you want to extend or customize the functionality of AWS Data Pipeline. For more information, see Working with the API.

  • Web Service API — AWS provides a low-level interface that you can use to call the web service directly using JSON. Using the API is the best option if you want to create a custom SDK that calls AWS Data Pipeline. For more information, see Making an HTTP Request to AWS Data Pipeline.
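Whichever programmatic option you choose, the service ultimately receives pipeline objects in a flattened wire format: each object is a list of `{"key": ..., "stringValue": ...}` fields, with `refValue` used when a field points at another pipeline object. As a minimal sketch (the helper function and the example object names are invented for illustration; only the wire-format shape comes from the service API), a nested definition can be converted like this:

```python
import json


def to_pipeline_object(obj_id, name, fields):
    """Flatten a simple {key: value} dict into the pipelineObjects
    wire format: each entry becomes {"key": ..., "stringValue": ...},
    or {"key": ..., "refValue": ...} when the value is a reference
    to another pipeline object (written here as {"ref": "SomeId"})."""
    wire_fields = []
    for key, value in fields.items():
        if isinstance(value, dict) and "ref" in value:
            wire_fields.append({"key": key, "refValue": value["ref"]})
        else:
            wire_fields.append({"key": key, "stringValue": str(value)})
    return {"id": obj_id, "name": name, "fields": wire_fields}


# Hypothetical daily schedule for the log-archiving example.
schedule = to_pipeline_object(
    "DailySchedule", "DailySchedule",
    {"type": "Schedule",
     "period": "1 day",
     "startDateTime": "2012-10-29T00:00:00"})

print(json.dumps(schedule, indent=2))
```

A list of such objects is what an SDK passes to the service's PutPipelineDefinition operation (or what you place in the JSON body of a direct web-service request).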