API for AWS Data Pipeline

ABAP Interface /AWS1/IF_DPI

A "TLA" is a Three Letter Abbreviation that appears in ABAP class names, data dictionary objects, and other ABAP objects throughout the AWS SDK for SAP ABAP. The TLA for AWS Data Pipeline is DPI. Using a TLA helps fit ABAP object names within the 30-character length limit of the ABAP data dictionary.


To install the AWS SDK for SAP ABAP, import the Core transport along with the transport for the Data Pipeline module and any other API modules you are interested in. A few modules are included in the Core transport itself. For more information, see the Developer Guide.

About The Service

AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data.

AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management.

AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
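The task runner flow described above can be sketched in ABAP. This is a hedged sketch only: it assumes a client go_dpi created as described under "Using the SDK" below, and it assumes the SDK's convention of naming methods after the API operations (here PollForTask and SetTaskStatus); the worker group name and the exact getter names are illustrative assumptions.

```abap
" Hypothetical task runner loop: poll for the next ready task,
" process it, then report the final status back to the service.
DATA(lo_task) = go_dpi->pollfortask(
  iv_workergroup = 'MyWorkerGroup'    " assumed worker group name
)->get_taskobject( ).

IF lo_task IS BOUND.
  " ... perform the work described by the task object here ...
  go_dpi->settaskstatus(
    iv_taskid     = lo_task->get_taskid( )
    iv_taskstatus = 'FINISHED' ).     " or 'FAILED' on error
ENDIF.
```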

Using the SDK

In your code, create a client for AWS Data Pipeline with the factory method /AWS1/CL_DPI_FACTORY=>create( ). In this example, we assume you have configured an SDK profile named ZFINANCE in transaction /AWS1/IMG.

DATA(go_session)   = /aws1/cl_rt_session_aws=>create( 'ZFINANCE' ).
DATA(go_dpi)       = /aws1/cl_dpi_factory=>create( go_session ).

Your variable go_dpi is an instance of /AWS1/IF_DPI, and all of the operations in the AWS Data Pipeline service are accessed by calling methods in /AWS1/IF_DPI.
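As a hedged illustration of such a method call, the sketch below invokes the ListPipelines operation via the SDK's naming convention (listpipelines). The getter names on the result object are assumptions based on the API's response shape.

```abap
" Create the session and client as above, then call an operation.
DATA(go_session) = /aws1/cl_rt_session_aws=>create( 'ZFINANCE' ).
DATA(go_dpi)     = /aws1/cl_dpi_factory=>create( go_session ).

" ListPipelines returns the pipeline IDs and names visible to the account.
DATA(lo_result) = go_dpi->listpipelines( ).
LOOP AT lo_result->get_pipelineidlist( ) INTO DATA(lo_id).
  cl_demo_output=>write( lo_id->get_name( ) ).
ENDLOOP.
cl_demo_output=>display( ).
```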

API Operations

For an overview of ABAP method calls corresponding to API operations in AWS Data Pipeline, see the Operation List.

Factory Method

/AWS1/CL_DPI_FACTORY=>create( )

Creates an object of type /AWS1/IF_DPI.


Optional arguments:

/AWS1/IF_DPI represents the ABAP client for the Data Pipeline service, exposing each operation as a method call. For more information, see the API page.

Configuring Programmatically

DATA(lo_config) = go_dpi->get_config( ).

lo_config is a variable of type /AWS1/CL_DPI_CONFIG. See the documentation for /AWS1/CL_DPI_CONFIG for details on the settings that can be configured.


Paginators

Paginators for AWS Data Pipeline can be created via get_paginator( ), which returns a paginator object of type /AWS1/IF_DPI_PAGINATOR. Call the operation method to be paginated on the paginator object, passing any parameters required by the underlying API operation. This returns an iterator object whose has_next( ) and get_next( ) methods can be used to iterate over the paginated results.

Details about the paginator methods available for service AWS Data Pipeline can be found in interface /AWS1/IF_DPI_PAGINATOR.
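The pattern described above can be sketched as follows. This is a hedged example assuming ListPipelines is available as a paginated operation and that page objects expose the same getters as the plain operation result; both are assumptions.

```abap
" Obtain a paginator from the client, then page through ListPipelines.
DATA(lo_paginator) = go_dpi->get_paginator( ).
DATA(lo_iterator)  = lo_paginator->listpipelines( ).

WHILE lo_iterator->has_next( ).
  DATA(lo_page) = lo_iterator->get_next( ).
  LOOP AT lo_page->get_pipelineidlist( ) INTO DATA(lo_id).
    cl_demo_output=>write( lo_id->get_name( ) ).
  ENDLOOP.
ENDWHILE.
cl_demo_output=>display( ).
```

Each call to get_next( ) issues the underlying API request for one page, so large result sets are fetched lazily rather than all at once.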