Namespace Amazon.CDK.AWS.Glue.Alpha
AWS Glue Construct Library
The APIs of higher level constructs in this module are experimental and under active development.
They are subject to non-backward compatible changes or removal in any future version. These are
not subject to the Semantic Versioning (https://semver.org/) model, and breaking changes will be
announced in the release notes. This means that while you may use them, you may need to update
your source code when upgrading to a newer version of this package.
This module is part of the AWS Cloud Development Kit project.
README
AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
The Glue L2 construct has convenience methods working backwards from common use cases and sets required parameters to defaults that align with recommended best practices for each job type. It also provides customers with a balance between flexibility, via optional parameter overrides, and opinionated interfaces that discourage anti-patterns, resulting in reduced time to develop and deploy new resources.
Create a Glue Job
A Job encapsulates a script that connects to data sources, processes them, and then writes output to a data target. There are four types of Glue Jobs: Spark (ETL and Streaming), Python Shell, Ray, and Flex Jobs. Most of the required parameters for these jobs are common across all types, but there are a few differences depending on the languages supported and features provided by each type. For all job types, the L2 defaults to AWS best practice recommendations.
This iteration of the L2 construct introduces breaking changes to the existing glue-alpha-module, but these changes streamline the developer experience, introduce new constants for defaults, and replace synth-time validations with interface contracts that enforce the parameter combinations Glue supports. As an opinionated construct, the Glue L2 construct does not allow developers to create resources that use non-current versions of Glue or deprecated language dependencies (e.g. deprecated versions of Python). As always, L1s allow you to specify a wider range of parameters if you need or want to use alternative configurations.
Optional and required parameters for each job are enforced via interface rather than validation; see Glue's public documentation for more granular details.
Spark Jobs
ETL jobs support pySpark and Scala languages, for which there are separate but
similar constructors. ETL jobs default to the G.2X worker type, but you can
override this default with other supported worker type values (G.1X, G.2X, G.4X,
and G.8X). ETL jobs default to Glue version 4.0, which you can override to 3.0.
The following ETL features are enabled by default:
--enable-metrics, --enable-spark-ui, and --enable-continuous-cloudwatch-log.
You can find more details about versions, worker types, and other features in
Glue's public documentation.
Reference the pyspark-etl-jobs.test.ts and scalaspark-etl-jobs.test.ts unit tests for examples of required-only and optional job parameters when creating these types of jobs.
For the sake of brevity, examples are shown using the pySpark job variety.
Example with only required parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkEtlJob(stack, "PySparkETLJob", new PySparkEtlJobProps {
Role = role,
Script = script,
JobName = "PySparkETLJob"
});
Example with optional override parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkEtlJob(stack, "PySparkETLJob", new PySparkEtlJobProps {
JobName = "PySparkETLJobCustomName",
Description = "This is a description",
Role = role,
Script = script,
GlueVersion = GlueVersion.V3_0,
ContinuousLogging = new ContinuousLoggingProps { Enabled = false },
WorkerType = WorkerType.G_2X,
MaxConcurrentRuns = 100,
Timeout = Duration.Hours(2),
Connections = new [] { Connection.FromConnectionName(stack, "Connection", "connectionName") },
SecurityConfiguration = SecurityConfiguration.FromSecurityConfigurationName(stack, "SecurityConfig", "securityConfigName"),
Tags = new Dictionary<string, string> {
{ "FirstTagName", "FirstTagValue" },
{ "SecondTagName", "SecondTagValue" },
{ "XTagName", "XTagValue" }
},
NumberOfWorkers = 2,
MaxRetries = 2
});
Streaming Jobs
Streaming jobs are similar to ETL jobs, except that they perform ETL on data
streams using the Apache Spark Structured Streaming framework. Some Spark
job features are not available to streaming ETL jobs. They support Scala
and pySpark languages. PySpark streaming jobs default to Python 3.9,
which you can override with any non-deprecated version of Python. They
default to the G.2X worker type and Glue 4.0, both of which you can override.
The following best practice features are enabled by default:
--enable-metrics, --enable-spark-ui, and --enable-continuous-cloudwatch-log.
Reference the pyspark-streaming-jobs.test.ts and scalaspark-streaming-jobs.test.ts unit tests for examples of required-only and optional job parameters when creating these types of jobs.
Example with only required parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkStreamingJob(stack, "ImportedJob", new PySparkStreamingJobProps { Role = role, Script = script });
Example with optional override parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkStreamingJob(stack, "PySparkStreamingJob", new PySparkStreamingJobProps {
JobName = "PySparkStreamingJobCustomName",
Description = "This is a description",
Role = role,
Script = script,
GlueVersion = GlueVersion.V3_0,
ContinuousLogging = new ContinuousLoggingProps { Enabled = false },
WorkerType = WorkerType.G_2X,
MaxConcurrentRuns = 100,
Timeout = Duration.Hours(2),
Connections = new [] { Connection.FromConnectionName(stack, "Connection", "connectionName") },
SecurityConfiguration = SecurityConfiguration.FromSecurityConfigurationName(stack, "SecurityConfig", "securityConfigName"),
Tags = new Dictionary<string, string> {
{ "FirstTagName", "FirstTagValue" },
{ "SecondTagName", "SecondTagValue" },
{ "XTagName", "XTagValue" }
},
NumberOfWorkers = 2,
MaxRetries = 2
});
Flex Jobs
The flexible execution class is appropriate for non-urgent jobs such as
pre-production jobs, testing, and one-time data loads. Flex jobs default
to Glue version 3.0 and worker type G.2X. The following best practice
features are enabled by default:
--enable-metrics, --enable-spark-ui, and --enable-continuous-cloudwatch-log.
Reference the pyspark-flex-etl-jobs.test.ts and scalaspark-flex-etl-jobs.test.ts unit tests for examples of required-only and optional job parameters when creating these types of jobs.
Example with only required parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkFlexEtlJob(stack, "ImportedJob", new PySparkFlexEtlJobProps { Role = role, Script = script });
Example with optional override parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkEtlJob(stack, "pySparkEtlJob", new PySparkEtlJobProps {
JobName = "pySparkEtlJob",
Description = "This is a description",
Role = role,
Script = script,
GlueVersion = GlueVersion.V3_0,
ContinuousLogging = new ContinuousLoggingProps { Enabled = false },
WorkerType = WorkerType.G_2X,
MaxConcurrentRuns = 100,
Timeout = Duration.Hours(2),
Connections = new [] { Connection.FromConnectionName(stack, "Connection", "connectionName") },
SecurityConfiguration = SecurityConfiguration.FromSecurityConfigurationName(stack, "SecurityConfig", "securityConfigName"),
Tags = new Dictionary<string, string> {
{ "FirstTagName", "FirstTagValue" },
{ "SecondTagName", "SecondTagValue" },
{ "XTagName", "XTagValue" }
},
NumberOfWorkers = 2,
MaxRetries = 2
});
Python Shell Jobs
Python shell jobs support a Python version that depends on the AWS Glue
version you use. They can be used to schedule and run tasks that don't
require an Apache Spark environment. Python shell jobs default to
Python 3.9 and a MaxCapacity of 0.0625 DPU. Python 3.9 supports pre-loaded
analytics libraries using the library-set=analytics flag, which is
enabled by default.
Reference the pyspark-shell-job.test.ts unit tests for examples of required-only and optional job parameters when creating these types of jobs.
Example with only required parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PythonShellJob(stack, "ImportedJob", new PythonShellJobProps { Role = role, Script = script });
Example with optional override parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PythonShellJob(stack, "PythonShellJob", new PythonShellJobProps {
JobName = "PythonShellJobCustomName",
Description = "This is a description",
PythonVersion = PythonVersion.TWO,
MaxCapacity = MaxCapacity.DPU_1,
Role = role,
Script = script,
GlueVersion = GlueVersion.V2_0,
ContinuousLogging = new ContinuousLoggingProps { Enabled = false },
WorkerType = WorkerType.G_2X,
MaxConcurrentRuns = 100,
Timeout = Duration.Hours(2),
Connections = new [] { Connection.FromConnectionName(stack, "Connection", "connectionName") },
SecurityConfiguration = SecurityConfiguration.FromSecurityConfigurationName(stack, "SecurityConfig", "securityConfigName"),
Tags = new Dictionary<string, string> {
{ "FirstTagName", "FirstTagValue" },
{ "SecondTagName", "SecondTagValue" },
{ "XTagName", "XTagValue" }
},
NumberOfWorkers = 2,
MaxRetries = 2
});
Ray Jobs
Glue Ray jobs use worker type Z.2X and Glue version 4.0. These are not overridable, as they are the only configurations that Glue Ray jobs currently support. The runtime defaults to Ray2.4, and the minimum number of workers defaults to 3.
Reference the ray-job.test.ts unit tests for examples of required-only and optional job parameters when creating these types of jobs.
Example with only required parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new RayJob(stack, "ImportedJob", new RayJobProps { Role = role, Script = script });
Example with optional override parameters:
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new RayJob(stack, "ImportedJob", new RayJobProps {
Role = role,
Script = script,
JobName = "RayCustomJobName",
Description = "This is a description",
WorkerType = WorkerType.Z_2X,
NumberOfWorkers = 5,
Runtime = Runtime.RAY_TWO_FOUR,
MaxRetries = 3,
MaxConcurrentRuns = 100,
Timeout = Duration.Hours(2),
Connections = new [] { Connection.FromConnectionName(stack, "Connection", "connectionName") },
SecurityConfiguration = SecurityConfiguration.FromSecurityConfigurationName(stack, "SecurityConfig", "securityConfigName"),
Tags = new Dictionary<string, string> {
{ "FirstTagName", "FirstTagValue" },
{ "SecondTagName", "SecondTagValue" },
{ "XTagName", "XTagValue" }
}
});
Enable Job Run Queuing
AWS Glue job queuing monitors your account-level quotas and limits. If quotas or limits are insufficient to start a Glue job run, AWS Glue automatically queues the job and waits for limits to free up. Once limits become available, AWS Glue retries the job run. Glue jobs queue for limits such as max concurrent job runs per account, max concurrent Data Processing Units (DPU), and resource unavailability due to IP address exhaustion in Amazon Virtual Private Cloud (Amazon VPC).
Enable job run queuing by setting the jobRunQueuingEnabled property to true.
using Amazon.CDK;
using Amazon.CDK.AWS.IAM;
Stack stack;
IRole role;
Code script;
new PySparkEtlJob(stack, "PySparkETLJob", new PySparkEtlJobProps {
Role = role,
Script = script,
JobName = "PySparkETLJob",
JobRunQueuingEnabled = true
});
Uploading scripts from the CDK app repository to S3
Similar to other L2 constructs, the Glue L2 automates uploading and updating scripts to S3 via an optional fromAsset parameter that points to a script in the local file structure. Alternatively, you can provide an existing S3 bucket and the path to a script within it.
Reference the unit tests for examples of repo and S3 code target examples.
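For example, a script can be uploaded from the local repository as a CDK asset, or referenced at an existing S3 location. The following is a minimal sketch; the local path, bucket, and key are placeholder values:
using Amazon.CDK.AWS.S3;
IBucket bucket;
// Upload a local script as a CDK asset (placeholder path)
Code assetScript = Code.FromAsset("jobs/etl_script.py");
// Or reference a script that already exists in S3 (placeholder key)
Code s3Script = Code.FromBucket(bucket, "scripts/etl_script.py");
Either Code object can then be passed as the Script property when constructing a job.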
Workflow Triggers
You can use Glue workflows to create and visualize complex extract, transform, and load (ETL) activities involving multiple crawlers, jobs, and triggers. Standalone triggers are an anti-pattern, so you must create triggers from within a workflow using the L2 construct.
Within a workflow object, there are functions to create different types of triggers with actions and predicates. You then add those triggers to jobs.
StartOnCreation defaults to true for all trigger types, but you can override it if you prefer for your trigger not to start on creation.
Reference the workflow-triggers.test.ts unit tests for examples of creating workflows and triggers.
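For illustration, here is a minimal sketch of creating a workflow with an on-demand job trigger. The AddOnDemandTrigger method name and the option shapes are assumptions inferred from the options classes listed in this module; confirm the exact API against workflow-triggers.test.ts:
Stack stack;
Job myJob; // an existing Glue job to trigger (hypothetical)
Workflow workflow = new Workflow(stack, "MyWorkflow", new WorkflowProps {
Description = "Workflow with an on-demand trigger"
});
// Assumed convenience method; see the unit tests for the exact signature
workflow.AddOnDemandTrigger("OnDemandTrigger", new OnDemandTriggerOptions {
Actions = new [] { new Action { Job = myJob } }
});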
1. On-Demand Triggers
On-demand triggers can start Glue jobs or crawlers. This construct provides convenience functions to create on-demand crawler or job triggers. The constructor takes an optional description parameter, but abstracts the requirement of an actions list using the job or crawler objects using conditional types.
2. Scheduled Triggers
You can create scheduled triggers using cron expressions. This construct provides daily, weekly, and monthly convenience functions, as well as a custom function that allows you to create your own custom timing using the existing event Schedule class without having to build your own cron expressions. The L2 extracts the expression that Glue requires from the Schedule object. The constructor takes an optional description and a list of jobs or crawlers as actions.
3. Notify Event Triggers
There are two types of notify event triggers: batching and non-batching. For batching triggers, you must specify BatchSize. For non-batching triggers, BatchSize defaults to 1. For both trigger types, BatchWindow defaults to 900 seconds, but you can override the window to align with your workload's requirements (see the sketch after this list).
4. Conditional Triggers
Conditional triggers have a predicate and actions associated with them. The trigger actions are executed when the predicateCondition is true.
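As a sketch of the batching parameters above, the following shows a notify event trigger with an explicit batch condition. The AddNotifyEventTrigger method name, the EventBatchingCondition shape, and the Duration type for BatchWindow are assumptions; confirm them against workflow-triggers.test.ts:
Workflow workflow;
Job myJob; // an existing Glue job to trigger (hypothetical)
// Assumed method and option names; see the unit tests for the exact API
workflow.AddNotifyEventTrigger("NotifyEventTrigger", new NotifyEventTriggerOptions {
Actions = new [] { new Action { Job = myJob } },
EventBatchingCondition = new EventBatchingCondition {
BatchSize = 10, // required for batching triggers
BatchWindow = Duration.Seconds(300) // overrides the 900-second default (assumed type)
}
});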
Connection Properties
A Connection allows Glue jobs, crawlers, and development endpoints to access certain types of data stores.
Secrets Management: You must specify JDBC connection credentials in Secrets Manager and provide the Secrets Manager key name as a property to the job connection.
SecurityGroup securityGroup;
Subnet subnet;
new Connection(this, "MyConnection", new ConnectionProps {
Type = ConnectionType.NETWORK,
// The security groups granting AWS Glue inbound access to the data source within the VPC
SecurityGroups = new [] { securityGroup },
// The VPC subnet which contains the data source
Subnet = subnet
});
For an RDS Connection over JDBC, it is recommended to manage credentials using AWS Secrets Manager. To use a Secret, specify SECRET_ID in properties as shown in the following code. Note that in this case, the subnet must have a route to the AWS Secrets Manager VPC endpoint or to the AWS Secrets Manager endpoint through a NAT gateway.
SecurityGroup securityGroup;
Subnet subnet;
DatabaseCluster db;
new Connection(this, "RdsConnection", new ConnectionProps {
Type = ConnectionType.JDBC,
SecurityGroups = new [] { securityGroup },
Subnet = subnet,
Properties = new Dictionary<string, string> {
{ "JDBC_CONNECTION_URL", $"jdbc:mysql://{db.clusterEndpoint.socketAddress}/databasename" },
{ "JDBC_ENFORCE_SSL", "false" },
{ "SECRET_ID", db.Secret.SecretName }
}
});
If you need to use a connection type that doesn't exist as a static member on ConnectionType, you can instantiate a ConnectionType object, e.g. new ConnectionType("NEW_TYPE").
See Adding a Connection to Your Data Store and Connection Structure documentation for more information on the supported data stores and their configurations.
SecurityConfiguration
A SecurityConfiguration is a set of security properties that can be used by AWS Glue to encrypt data at rest.
new SecurityConfiguration(this, "MySecurityConfiguration", new SecurityConfigurationProps {
CloudWatchEncryption = new CloudWatchEncryption {
Mode = CloudWatchEncryptionMode.KMS
},
JobBookmarksEncryption = new JobBookmarksEncryption {
Mode = JobBookmarksEncryptionMode.CLIENT_SIDE_KMS
},
S3Encryption = new S3Encryption {
Mode = S3EncryptionMode.KMS
}
});
By default, a shared KMS key is created for use with the encryption configurations that require one. You can also supply your own key for each encryption config, for example, for CloudWatch encryption:
Key key;
new SecurityConfiguration(this, "MySecurityConfiguration", new SecurityConfigurationProps {
CloudWatchEncryption = new CloudWatchEncryption {
Mode = CloudWatchEncryptionMode.KMS,
KmsKey = key
}
});
See the documentation for more information on how Glue encrypts data written by Crawlers, Jobs, and Development Endpoints.
Database
A Database is a logical grouping of Tables in the Glue Catalog.
new Database(this, "MyDatabase", new DatabaseProps {
DatabaseName = "my_database",
Description = "my_database_description"
});
Table
A Glue table describes a table of data in S3: its structure (column names and types), the location of the data (S3 objects with a common prefix in an S3 bucket), and the format of the files (JSON, Avro, Parquet, etc.):
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
}, new Column {
Name = "col2",
Type = Schema.Array(Schema.STRING),
Comment = "col2 is an array of strings"
} },
DataFormat = DataFormat.JSON
});
By default, an S3 bucket will be created to store the table's data, but you can manually pass the bucket and s3Prefix:
Bucket myBucket;
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Bucket = myBucket,
S3Prefix = "my-table/",
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Glue tables can be configured to contain user-defined properties, to describe the physical storage of table data, through the storageParameters property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
StorageParameters = new [] { StorageParameter.SkipHeaderLineCount(1), StorageParameter.CompressionType(CompressionType.GZIP), StorageParameter.Custom("separatorChar", ",") },
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Glue tables can also be configured to contain user-defined table properties through the parameters property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Parameters = new Dictionary<string, string> {
{ "key1", "val1" },
{ "key2", "val2" }
},
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Partition Keys
To improve query performance, a table can specify partitionKeys
on which data is stored and queried separately. For example, you might partition a table by year
and month
to optimize queries based on a time window:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
DataFormat = DataFormat.JSON
});
Partition Indexes
Another way to improve query performance is to specify partition indexes. If no partition indexes are present on the table, AWS Glue loads all partitions of the table and filters the loaded partitions using the query expression. The query takes more time to run as the number of partitions increases. With an index, the query will try to fetch a subset of the partitions instead of loading all partitions of the table.
The keys of a partition index must be a subset of the partition keys of the table. You can have a maximum of 3 partition indexes per table. To specify a partition index, you can use the partitionIndexes property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
PartitionIndexes = new [] { new PartitionIndex {
IndexName = "my-index", // optional
KeyNames = new [] { "year" }
} }, // supply up to 3 indexes
DataFormat = DataFormat.JSON
});
Alternatively, you can call the addPartitionIndex() function on a table:
Table myTable;
myTable.AddPartitionIndex(new PartitionIndex {
IndexName = "my-index",
KeyNames = new [] { "year" }
});
Partition Filtering
If you have a table with a large number of partitions that grows over time, consider using AWS Glue partition indexing and filtering.
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
DataFormat = DataFormat.JSON,
EnablePartitionFiltering = true
});
Glue Connections
Glue connections allow external data connections to third-party databases and data warehouses. However, these connections can also be assigned to Glue Tables, allowing you to query external data sources using the Glue Data Catalog.
Whereas S3Table will point to (and if needed, create) a bucket to store the table's data, ExternalTable will point to an existing table in a data source. For example, to create a table in Glue that points to a table in Redshift:
Connection myConnection;
Database myDatabase;
new ExternalTable(this, "MyTable", new ExternalTableProps {
Connection = myConnection,
ExternalDataLocation = "default_db_public_example", // A table in Redshift
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Encryption
You can enable encryption on a Table's data:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.S3_MANAGED,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
// KMS key is created automatically
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
// with an explicit KMS key
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS,
EncryptionKey = new Key(this, "MyKey"),
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS_MANAGED,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
// KMS key is created automatically
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.CLIENT_SIDE_KMS,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
// with an explicit KMS key
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.CLIENT_SIDE_KMS,
EncryptionKey = new Key(this, "MyKey"),
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Note: you cannot provide a Bucket when creating the S3Table if you wish to use server-side encryption (KMS, KMS_MANAGED, or S3_MANAGED).
Types
A table's schema is a collection of columns, each of which has a name and a type. Types are recursive structures, consisting of primitive and complex types:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Columns = new [] { new Column {
Name = "primitive_column",
Type = Schema.STRING
}, new Column {
Name = "array_column",
Type = Schema.Array(Schema.INTEGER),
Comment = "array<integer>"
}, new Column {
Name = "map_column",
Type = Schema.Map(Schema.STRING, Schema.TIMESTAMP),
Comment = "map<string,string>"
}, new Column {
Name = "struct_column",
Type = Schema.Struct(new [] { new Column {
Name = "nested_column",
Type = Schema.DATE,
Comment = "nested comment"
} }),
Comment = "struct<nested_column:date COMMENT 'nested comment'>"
} },
// ...
Database = myDatabase,
DataFormat = DataFormat.JSON
});
Public FAQ
What are we launching today?
We’re launching new features to an AWS CDK Glue L2 Construct to provide best-practice defaults and convenience methods to create Glue Jobs, Connections, Triggers, Workflows, and the underlying permissions and configuration.
Why should I use this Construct?
Developers should use this Construct to reduce the amount of boilerplate code and complexity each individual has to navigate, and make it easier to create best-practice Glue resources.
What’s not in scope?
Glue Crawlers and other resources that are now managed by the AWS LakeFormation team are not in scope for this effort. Developers should use existing methods to create these resources, and the new Glue L2 construct assumes they already exist as inputs. While best practice is for application and infrastructure code to be as close as possible for teams using fully-implemented DevOps mechanisms, in practice these ETL scripts are likely managed by a data science team who know Python or Scala and don't necessarily own or manage their own infrastructure deployments. We want to meet developers where they are, and not assume that all of the code resides in the same repository; developers who do own both can automate this themselves via the CDK.
Validating Glue versions and feature availability per AWS region at synth time is also not in scope. AWS' intention is for all features to eventually be propagated to all global regions, so the complexity involved in creating and updating region-specific configuration to match shifting feature sets does not outweigh the likelihood that a developer will use this construct to deploy a feature to a region that doesn't yet support it without first researching availability. The developer will, of course, still get feedback from the underlying Glue APIs as CloudFormation deploys the resources, similar to the current CDK L1 Glue experience.
Classes
Action | (experimental) Represents a trigger action. |
AssetCode | (experimental) Job Code from a local file. |
ClassificationString | (experimental) Classification string given to tables with this data format. |
CloudWatchEncryption | (experimental) CloudWatch Logs encryption configuration. |
CloudWatchEncryptionMode | (experimental) Encryption mode for CloudWatch Logs. |
Code | (experimental) Represents a Glue Job's Code assets (an asset can be a script, a jar, a Python file, or any other file). |
CodeConfig | (experimental) Result of binding |
Column | (experimental) A column of a table. |
ColumnCountMismatchHandlingAction | (experimental) Identifies if the file contains fewer or more values for a row than the number of columns specified in the external table definition. |
CompressionType | (experimental) The compression type. |
Condition | (experimental) Represents a trigger condition. |
ConditionalTriggerOptions | (experimental) Properties for configuring a Condition (Predicate) based Glue Trigger. |
ConditionLogicalOperator | (experimental) Represents the logical operator for evaluating a single condition in the Glue Trigger API. |
Connection | (experimental) An AWS Glue connection to a data source. |
ConnectionOptions | (experimental) Base Connection Options. |
ConnectionProps | (experimental) Construction properties for |
ConnectionType | (experimental) The type of the glue connection. |
ContinuousLoggingProps | (experimental) Properties for enabling Continuous Logging for Glue Jobs. |
CrawlerState | (experimental) Represents the state of a crawler for a condition in the Glue Trigger API. |
CustomScheduledTriggerOptions | (experimental) Properties for configuring a custom-scheduled Glue Trigger. |
DailyScheduleTriggerOptions | (experimental) Properties for configuring a daily-scheduled Glue Trigger. |
Database | (experimental) A Glue database. |
DatabaseProps | |
DataFormat | (experimental) Defines the input/output formats and ser/de for a single DataFormat. |
DataFormatProps | (experimental) Properties of a DataFormat instance. |
DataQualityRuleset | (experimental) A Glue Data Quality ruleset. |
DataQualityRulesetProps | (experimental) Construction properties for |
DataQualityTargetTable | (experimental) Properties of a DataQualityTargetTable. |
EventBatchingCondition | (experimental) Represents event trigger batch condition. |
ExecutionClass | (experimental) The ExecutionClass determines whether the job is run with a standard or flexible execution class. |
ExternalTable | (experimental) A Glue table that targets an external data location (e.g. A table in a Redshift Cluster). |
ExternalTableProps | |
GlueVersion | (experimental) AWS Glue version determines the versions of Apache Spark and Python that are available to the job. |
InputFormat | (experimental) Absolute class name of the Hadoop |
InvalidCharHandlingAction | (experimental) Specifies the action to perform when query results contain invalid UTF-8 character values. |
Job | (experimental) A Glue Job. |
JobAttributes | (experimental) A subset of Job attributes are required for importing an existing job into a CDK project. |
JobBase | (experimental) A base class is needed to be able to import existing Jobs into a CDK app to reference as part of a larger stack or construct. |
JobBookmarksEncryption | (experimental) Job bookmarks encryption configuration. |
JobBookmarksEncryptionMode | (experimental) Encryption mode for Job Bookmarks. |
JobLanguage | (experimental) Runtime language of the Glue job. |
JobProps | (experimental) JobProps will be used to create new Glue Jobs using this L2 Construct. |
JobState | (experimental) Job states emitted by Glue to CloudWatch Events. |
JobType | (experimental) The job type. |
MaxCapacity | (experimental) The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. |
MetricType | (experimental) The Glue CloudWatch metric type. |
NotifyEventTriggerOptions | (experimental) Properties for configuring an Event Bridge based Glue Trigger. |
NumericOverflowHandlingAction | (experimental) Specifies the action to perform when ORC data contains an integer (for example, BIGINT or int64) that is larger than the column definition (for example, SMALLINT or int16). |
OnDemandTriggerOptions | (experimental) Properties for configuring an on-demand Glue Trigger. |
OrcColumnMappingType | (experimental) Specifies how to map columns when the table uses ORC data format. |
OutputFormat | (experimental) Absolute class name of the Hadoop |
PartitionIndex | (experimental) Properties of a Partition Index. |
Predicate | (experimental) Represents a trigger predicate. |
PredicateLogical | |
PySparkEtlJob | (experimental) PySpark ETL Jobs class. |
PySparkEtlJobProps | (experimental) Properties for creating a Python Spark ETL job. |
PySparkFlexEtlJob | (experimental) Flex Jobs class. |
PySparkFlexEtlJobProps | (experimental) Properties for PySparkFlexEtlJob. |
PySparkStreamingJob | (experimental) Python Spark Streaming Jobs class. |
PySparkStreamingJobProps | (experimental) Properties for creating a Python Spark ETL job. |
PythonShellJob | (experimental) Python Shell Jobs class. |
PythonShellJobProps | (experimental) Properties for creating a Python Shell job. |
PythonVersion | (experimental) Python version. |
RayJob | (experimental) Ray Jobs class. |
RayJobProps | (experimental) Properties for creating a Ray Glue job. |
Runtime | (experimental) AWS Glue runtime determines the runtime engine of the job. |
S3Code | (experimental) Glue job Code from an S3 bucket. |
S3Encryption | (experimental) S3 encryption configuration. |
S3EncryptionMode | (experimental) Encryption mode for S3. |
S3Table | (experimental) A Glue table that targets a S3 dataset. |
S3TableProps | |
ScalaSparkEtlJob | (experimental) Spark ETL Jobs class. |
ScalaSparkEtlJobProps | (experimental) Properties for creating a Scala Spark ETL job. |
ScalaSparkFlexEtlJob | (experimental) Spark ETL Jobs class. |
ScalaSparkFlexEtlJobProps | (experimental) Flex Jobs class. |
ScalaSparkStreamingJob | (experimental) Scala Streaming Jobs class. |
ScalaSparkStreamingJobProps | (experimental) Properties for creating a Scala Spark ETL job. |
Schema | |
SecurityConfiguration | (experimental) A security configuration is a set of security properties that can be used by AWS Glue to encrypt data at rest. |
SecurityConfigurationProps | (experimental) Constructions properties of |
SerializationLibrary | (experimental) Serialization library to use when serializing/deserializing (SerDe) table records. |
SparkExtraCodeProps | (experimental) Code props for different {@link Code} assets used by different types of Spark jobs. |
SparkJob | (experimental) Base class for different types of Spark Jobs. |
SparkJobProps | (experimental) Common properties for different types of Spark jobs. |
SparkUILoggingLocation | (experimental) The Spark UI logging location. |
SparkUIProps | (experimental) Properties for enabling Spark UI monitoring feature for Spark-based Glue jobs. |
StorageParameter | (experimental) A storage parameter. The list of storage parameters available is not exhaustive and other keys may be used. |
StorageParameters | (experimental) The storage parameter keys that are currently known, this list is not exhaustive and other keys may be used. |
SurplusBytesHandlingAction | (experimental) Specifies how to handle data being loaded that exceeds the length of the data type defined for columns containing VARBYTE data. |
SurplusCharHandlingAction | (experimental) Specifies how to handle data being loaded that exceeds the length of the data type defined for columns containing VARCHAR, CHAR, or string data. |
Table | (deprecated) A Glue table. |
TableAttributes | |
TableBase | (experimental) A Glue table. |
TableBaseProps | |
TableEncryption | (experimental) Encryption options for a Table. |
TableProps | |
TriggerOptions | (experimental) Properties for configuring a Glue Trigger. |
TriggerSchedule | (experimental) Represents a trigger schedule. |
Type | (experimental) Represents a type of a column in a table schema. |
WeeklyScheduleTriggerOptions | (experimental) Properties for configuring a weekly-scheduled Glue Trigger. |
WorkerType | (experimental) The type of predefined worker that is allocated when a job runs. |
Workflow | (experimental) This module defines a construct for creating and managing AWS Glue Workflows and Triggers. |
WorkflowAttributes | (experimental) Properties for importing a Workflow using its attributes. |
WorkflowBase | (experimental) Base abstract class for Workflow. |
WorkflowProps | (experimental) Properties for defining a Workflow. |
WriteParallel | (experimental) Specifies whether to write data in parallel. |
Interfaces
IAction | (experimental) Represents a trigger action. |
ICloudWatchEncryption | (experimental) CloudWatch Logs encryption configuration. |
ICodeConfig | (experimental) Result of binding |
IColumn | (experimental) A column of a table. |
ICondition | (experimental) Represents a trigger condition. |
IConditionalTriggerOptions | (experimental) Properties for configuring a Condition (Predicate) based Glue Trigger. |
IConnection | (experimental) Interface representing a created or an imported |
IConnectionOptions | (experimental) Base Connection Options. |
IConnectionProps | (experimental) Construction properties for |
IContinuousLoggingProps | (experimental) Properties for enabling Continuous Logging for Glue Jobs. |
ICustomScheduledTriggerOptions | (experimental) Properties for configuring a custom-scheduled Glue Trigger. |
IDailyScheduleTriggerOptions | (experimental) Properties for configuring a daily-scheduled Glue Trigger. |
IDatabase | |
IDatabaseProps | |
IDataFormatProps | (experimental) Properties of a DataFormat instance. |
IDataQualityRuleset | |
IDataQualityRulesetProps | (experimental) Construction properties for |
IEventBatchingCondition | (experimental) Represents event trigger batch condition. |
IExternalTableProps | |
IJob | (experimental) Interface representing a new or an imported Glue Job. |
IJobAttributes | (experimental) A subset of Job attributes are required for importing an existing job into a CDK project. |
IJobBookmarksEncryption | (experimental) Job bookmarks encryption configuration. |
IJobProps | (experimental) JobProps will be used to create new Glue Jobs using this L2 Construct. |
INotifyEventTriggerOptions | (experimental) Properties for configuring an Event Bridge based Glue Trigger. |
IOnDemandTriggerOptions | (experimental) Properties for configuring an on-demand Glue Trigger. |
IPartitionIndex | (experimental) Properties of a Partition Index. |
IPredicate | (experimental) Represents a trigger predicate. |
IPySparkEtlJobProps | (experimental) Properties for creating a Python Spark ETL job. |
IPySparkFlexEtlJobProps | (experimental) Properties for PySparkFlexEtlJob. |
IPySparkStreamingJobProps | (experimental) Properties for creating a Python Spark ETL job. |
IPythonShellJobProps | (experimental) Properties for creating a Python Shell job. |
IRayJobProps | (experimental) Properties for creating a Ray Glue job. |
IS3Encryption | (experimental) S3 encryption configuration. |
IS3TableProps | |
IScalaSparkEtlJobProps | (experimental) Properties for creating a Scala Spark ETL job. |
IScalaSparkFlexEtlJobProps | (experimental) Flex Jobs class. |
IScalaSparkStreamingJobProps | (experimental) Properties for creating a Scala Spark ETL job. |
ISecurityConfiguration | (experimental) Interface representing a created or an imported |
ISecurityConfigurationProps | (experimental) Constructions properties of |
ISparkExtraCodeProps | (experimental) Code props for different {@link Code} assets used by different types of Spark jobs. |
ISparkJobProps | (experimental) Common properties for different types of Spark jobs. |
ISparkUILoggingLocation | (experimental) The Spark UI logging location. |
ISparkUIProps | (experimental) Properties for enabling Spark UI monitoring feature for Spark-based Glue jobs. |
ITable | |
ITableAttributes | |
ITableBaseProps | |
ITableProps | |
ITriggerOptions | (experimental) Properties for configuring a Glue Trigger. |
IType | (experimental) Represents a type of a column in a table schema. |
IWeeklyScheduleTriggerOptions | (experimental) Properties for configuring a weekly-scheduled Glue Trigger. |
IWorkflow | (experimental) The base interface for Glue Workflow. |
IWorkflowAttributes | (experimental) Properties for importing a Workflow using its attributes. |
IWorkflowProps | (experimental) Properties for defining a Workflow. |