Building visual ETL jobs with AWS Glue Studio

An AWS Glue job encapsulates a script that connects to your source data, processes it, and then writes it out to your data target. Typically, a job runs extract, transform, and load (ETL) scripts. Jobs can run scripts designed for Apache Spark and Ray runtime environments. Jobs can also run general-purpose Python scripts (Python shell jobs). AWS Glue triggers can start jobs based on a schedule or event, or on demand. You can monitor job runs to understand runtime metrics such as completion status, duration, and start time.
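For example, you can start a job run and check its runtime metrics programmatically with the AWS SDK for Python (Boto3). The following is a minimal sketch, assuming a job named my-etl-job already exists in your account; the job name is an assumption for illustration.

import boto3

# A minimal sketch, assuming a job named "my-etl-job" already exists.
glue = boto3.client("glue")

# Start the job on demand and capture the run ID for monitoring.
run = glue.start_job_run(JobName="my-etl-job")
run_id = run["JobRunId"]

# Inspect the run to see runtime metrics such as status and duration.
status = glue.get_job_run(JobName="my-etl-job", RunId=run_id)
print(status["JobRun"]["JobRunState"], status["JobRun"].get("ExecutionTime"))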

You can use scripts that AWS Glue generates or you can provide your own. With a source schema and target location or schema, the AWS Glue Studio code generator can automatically create an Apache Spark API (PySpark) script. You can use this script as a starting point and edit it to meet your goals.
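The exact script AWS Glue Studio generates depends on your sources, transforms, and target. The following is a hand-written sketch of the kind of PySpark starting point you might edit, not generated output; the Data Catalog database and table names, the selected columns, and the S3 output path are assumptions for illustration.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup: resolve job arguments and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table from the Data Catalog (names are assumptions).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Example transform: keep only the columns the target needs.
trimmed = source.select_fields(["order_id", "customer_id", "amount"])

# Write the result to an assumed S3 location in Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=trimmed,
    connection_type="s3",
    connection_options={"path": "s3://amzn-s3-demo-bucket/output/"},
    format="parquet",
)

job.commit()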

AWS Glue can write output files in several data formats. Each job type may support different output formats. For some data formats, common compression formats can be written.
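Continuing the sketch above, a Spark job could write the same data as compressed JSON instead of Parquet. This is only an illustration; the S3 path and compression value are assumptions, and the formats and compression codecs available depend on the job type.

# Sketch only: write the DynamicFrame from the previous example as
# gzip-compressed JSON to an assumed S3 path.
glue_context.write_dynamic_frame.from_options(
    frame=trimmed,
    connection_type="s3",
    connection_options={
        "path": "s3://amzn-s3-demo-bucket/json-output/",
        "compression": "gzip",
    },
    format="json",
)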

Signing in to the AWS Glue console

A job in AWS Glue consists of the business logic that performs extract, transform, and load (ETL) work. You can create jobs in the ETL section of the AWS Glue console.

To view existing jobs, sign in to the AWS Management Console and open the AWS Glue console at https://console.aws.amazon.com/glue/. Then choose the Jobs tab in AWS Glue. The Jobs list displays the location of the script that is associated with each job, when the job was last modified, and the current job bookmark option.
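If you prefer to inspect jobs outside the console, a short Boto3 sketch like the following lists each job along with the script location and last-modified time that the console Jobs list also displays.

import boto3

# A minimal sketch: list all jobs and print the fields shown in the console.
glue = boto3.client("glue")

paginator = glue.get_paginator("get_jobs")
for page in paginator.paginate():
    for job in page["Jobs"]:
        print(job["Name"], job["Command"]["ScriptLocation"], job["LastModifiedOn"])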

While creating a new job, or after you have saved your job, you can use AWS Glue Studio to modify your ETL jobs. You can do this by editing the nodes in the visual editor or by editing the job script in developer mode. You can also add and remove nodes in the visual editor to create more complex ETL jobs.

Next steps for creating a job in AWS Glue Studio

You use the visual job editor to configure nodes for your job. Each node represents an action, such as reading data from the source location or applying a transform to the data. Each node you add to your job has properties that provide information about either the data location or the transform.

The next steps for creating and managing your jobs are: