Interface | Description |
---|---|
CfnClassifier.CsvClassifierProperty | A classifier for custom `CSV` content. |
CfnClassifier.GrokClassifierProperty | A classifier that uses `grok` patterns. |
CfnClassifier.JsonClassifierProperty | A classifier for `JSON` content. |
CfnClassifier.XMLClassifierProperty | A classifier for `XML` content. |
CfnClassifierProps | Properties for defining a `CfnClassifier`. |
CfnConnection.ConnectionInputProperty | A structure that is used to specify a connection to create or update. |
CfnConnection.PhysicalConnectionRequirementsProperty | Specifies the physical requirements for a connection. |
CfnConnectionProps | Properties for defining a `CfnConnection`. |
CfnCrawler.CatalogTargetProperty | Specifies an AWS Glue Data Catalog target. |
CfnCrawler.DynamoDBTargetProperty | Specifies an Amazon DynamoDB table to crawl. |
CfnCrawler.JdbcTargetProperty | Specifies a JDBC data store to crawl. |
CfnCrawler.MongoDBTargetProperty | Specifies an Amazon DocumentDB or MongoDB data store to crawl. |
CfnCrawler.RecrawlPolicyProperty | When crawling an Amazon S3 data source after the first crawl is complete, specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run. |
CfnCrawler.S3TargetProperty | Specifies a data store in Amazon Simple Storage Service (Amazon S3). |
CfnCrawler.ScheduleProperty | A scheduling object using a `cron` statement to schedule an event. |
CfnCrawler.SchemaChangePolicyProperty | The policy that specifies update and delete behaviors for the crawler. |
CfnCrawler.TargetsProperty | Specifies data stores to crawl. |
CfnCrawlerProps | Properties for defining a `CfnCrawler`. |
CfnDatabase.DatabaseIdentifierProperty | A structure that describes a target database for resource linking. |
CfnDatabase.DatabaseInputProperty | The structure used to create or update a database. |
CfnDatabase.DataLakePrincipalProperty | The AWS Lake Formation principal. |
CfnDatabase.PrincipalPrivilegesProperty | The permissions granted to a principal. |
CfnDatabaseProps | Properties for defining a `CfnDatabase`. |
CfnDataCatalogEncryptionSettings.ConnectionPasswordEncryptionProperty | The data structure used by the Data Catalog to encrypt the password as part of `CreateConnection` or `UpdateConnection` and store it in the `ENCRYPTED_PASSWORD` field in the connection properties. |
CfnDataCatalogEncryptionSettings.DataCatalogEncryptionSettingsProperty | Contains configuration information for maintaining Data Catalog security. |
CfnDataCatalogEncryptionSettings.EncryptionAtRestProperty | Specifies the encryption-at-rest configuration for the Data Catalog. |
CfnDataCatalogEncryptionSettingsProps | Properties for defining a `CfnDataCatalogEncryptionSettings`. |
CfnDevEndpointProps | Properties for defining a `CfnDevEndpoint`. |
CfnJob.ConnectionsListProperty | Specifies the connections used by a job. |
CfnJob.ExecutionPropertyProperty | An execution property of a job. |
CfnJob.JobCommandProperty | Specifies code executed when a job is run. |
CfnJob.NotificationPropertyProperty | Specifies configuration properties of a notification. |
CfnJobProps | Properties for defining a `CfnJob`. |
CfnMLTransform.FindMatchesParametersProperty | The parameters to configure the find matches transform. |
CfnMLTransform.GlueTablesProperty | The database and table in the AWS Glue Data Catalog that is used for input or output data. |
CfnMLTransform.InputRecordTablesProperty | A list of AWS Glue table definitions used by the transform. |
CfnMLTransform.MLUserDataEncryptionProperty | The encryption-at-rest settings of the transform that apply to accessing user data. |
CfnMLTransform.TransformEncryptionProperty | The encryption-at-rest settings of the transform that apply to accessing user data. |
CfnMLTransform.TransformParametersProperty | The algorithm-specific parameters that are associated with the machine learning transform. |
CfnMLTransformProps | Properties for defining a `CfnMLTransform`. |
CfnPartition.ColumnProperty | A column in a `Table`. |
CfnPartition.OrderProperty | Specifies the sort order of a sorted column. |
CfnPartition.PartitionInputProperty | The structure used to create and update a partition. |
CfnPartition.SchemaIdProperty | A structure that contains schema identity fields. |
CfnPartition.SchemaReferenceProperty | An object that references a schema stored in the AWS Glue Schema Registry. |
CfnPartition.SerdeInfoProperty | Information about a serialization/deserialization program (SerDe) that serves as an extractor and loader. |
CfnPartition.SkewedInfoProperty | Specifies skewed values in a table. |
CfnPartition.StorageDescriptorProperty | Describes the physical storage of table data. |
CfnPartitionProps | Properties for defining a `CfnPartition`. |
CfnRegistryProps | Properties for defining a `CfnRegistry`. |
CfnSchema.RegistryProperty | Specifies a registry in the AWS Glue Schema Registry. |
CfnSchema.SchemaVersionProperty | Specifies the version of a schema. |
CfnSchemaProps | Properties for defining a `CfnSchema`. |
CfnSchemaVersion.SchemaProperty | A wrapper structure to contain schema identity fields. |
CfnSchemaVersionMetadataProps | Properties for defining a `CfnSchemaVersionMetadata`. |
CfnSchemaVersionProps | Properties for defining a `CfnSchemaVersion`. |
CfnSecurityConfiguration.CloudWatchEncryptionProperty | Specifies how Amazon CloudWatch data should be encrypted. |
CfnSecurityConfiguration.EncryptionConfigurationProperty | Specifies an encryption configuration. |
CfnSecurityConfiguration.JobBookmarksEncryptionProperty | Specifies how job bookmark data should be encrypted. |
CfnSecurityConfiguration.S3EncryptionProperty | Specifies how Amazon Simple Storage Service (Amazon S3) data should be encrypted. |
CfnSecurityConfigurationProps | Properties for defining a `CfnSecurityConfiguration`. |
CfnTable.ColumnProperty | A column in a `Table`. |
CfnTable.OrderProperty | Specifies the sort order of a sorted column. |
CfnTable.SchemaIdProperty | A structure that contains schema identity fields. |
CfnTable.SchemaReferenceProperty | An object that references a schema stored in the AWS Glue Schema Registry. |
CfnTable.SerdeInfoProperty | Information about a serialization/deserialization program (SerDe) that serves as an extractor and loader. |
CfnTable.SkewedInfoProperty | Specifies skewed values in a table. |
CfnTable.StorageDescriptorProperty | Describes the physical storage of table data. |
CfnTable.TableIdentifierProperty | A structure that describes a target table for resource linking. |
CfnTable.TableInputProperty | A structure used to define a table. |
CfnTableProps | Properties for defining a `CfnTable`. |
CfnTrigger.ActionProperty | Defines an action to be initiated by a trigger. |
CfnTrigger.ConditionProperty | Defines a condition under which a trigger fires. |
CfnTrigger.EventBatchingConditionProperty | Batch condition that must be met (specified number of events received or batch time window expired) before the EventBridge event trigger fires. |
CfnTrigger.NotificationPropertyProperty | Specifies configuration properties of a job run notification. |
CfnTrigger.PredicateProperty | Defines the predicate of the trigger, which determines when it fires. |
CfnTriggerProps | Properties for defining a `CfnTrigger`. |
CfnWorkflowProps | Properties for defining a `CfnWorkflow`. |
CloudWatchEncryption | (experimental) CloudWatch Logs encryption configuration. |
CodeConfig | (experimental) Result of binding `Code` into a `Job`. |
Column | (experimental) A column of a table. |
ConnectionOptions | (experimental) Base Connection Options. |
ConnectionProps | (experimental) Construction properties for `Connection`. |
ContinuousLoggingProps | (experimental) Properties for enabling Continuous Logging for Glue Jobs. |
DatabaseProps | (experimental) Construction properties for `Database`. |
DataFormatProps | (experimental) Properties of a DataFormat instance. |
IConnection | (experimental) Interface representing a created or an imported `Connection`. |
IConnection.Jsii$Default | Internal default implementation for `IConnection`. |
IDatabase | |
IDatabase.Jsii$Default | Internal default implementation for `IDatabase`. |
IJob | (experimental) Interface representing a created or an imported `Job`. |
IJob.Jsii$Default | Internal default implementation for `IJob`. |
ISecurityConfiguration | (experimental) Interface representing a created or an imported `SecurityConfiguration`. |
ISecurityConfiguration.Jsii$Default | Internal default implementation for `ISecurityConfiguration`. |
ITable | |
ITable.Jsii$Default | Internal default implementation for `ITable`. |
JobAttributes | (experimental) Attributes for importing `Job`. |
JobBookmarksEncryption | (experimental) Job bookmarks encryption configuration. |
JobExecutableConfig | (experimental) Result of binding a `JobExecutable` into a `Job`. |
JobProps | (experimental) Construction properties for `Job`. |
PartitionIndex | (experimental) Properties of a Partition Index. |
PythonShellExecutableProps | (experimental) Props for creating a Python shell job executable. |
PythonSparkJobExecutableProps | (experimental) Props for creating a Python Spark (ETL or Streaming) job executable. |
S3Encryption | (experimental) S3 encryption configuration. |
ScalaJobExecutableProps | (experimental) Props for creating a Scala Spark (ETL or Streaming) job executable. |
SecurityConfigurationProps | (experimental) Construction properties of `SecurityConfiguration`. |
SparkUILoggingLocation | (experimental) The Spark UI logging location. |
SparkUIProps | (experimental) Properties for enabling the Spark UI monitoring feature for Spark-based Glue jobs. |
TableAttributes | (experimental) Attributes for importing `Table`. |
TableProps | (experimental) Construction properties for `Table`. |
Type | (experimental) Represents a type of a column in a table schema. |
Enum | Description |
---|---|
CloudWatchEncryptionMode | (experimental) Encryption mode for CloudWatch Logs. |
JobBookmarksEncryptionMode | (experimental) Encryption mode for Job Bookmarks. |
JobLanguage | (experimental) Runtime language of the Glue job. |
JobState | (experimental) Job states emitted by Glue to CloudWatch Events. |
MetricType | (experimental) The Glue CloudWatch metric type. |
PythonVersion | (experimental) Python version. |
S3EncryptionMode | (experimental) Encryption mode for S3. |
TableEncryption | (experimental) Encryption options for a Table. |
---
All classes with the `Cfn` prefix in this module (CFN Resources) are always stable and safe to use.
The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward compatible changes or removal in any future version. These are not subject to the Semantic Versioning model and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.
This module is part of the AWS Cloud Development Kit project.
A `Job` encapsulates a script that connects to data sources, processes them, and then writes output to a data target.
AWS Glue supports three types of jobs: Spark ETL, Spark Streaming, and Python Shell.
The `glue.JobExecutable` allows you to specify the type of job, the language to use, and the code assets required by the job. `glue.Code` allows you to refer to the different code assets required by the job, either from an existing S3 location or from a local file path.
These jobs run in an Apache Spark environment managed by AWS Glue.
An ETL job processes data in batches using Apache Spark.
```java
Bucket bucket;

Job.Builder.create(this, "ScalaSparkEtlJob")
        .executable(JobExecutable.scalaEtl(ScalaJobExecutableProps.builder()
                .glueVersion(GlueVersion.V2_0)
                .script(Code.fromBucket(bucket, "src/com/example/HelloWorld.scala"))
                .className("com.example.HelloWorld")
                .extraJars(List.of(Code.fromBucket(bucket, "jars/HelloWorld.jar")))
                .build()))
        .description("an example Scala ETL job")
        .build();
```
A Streaming job is similar to an ETL job, except that it performs ETL on data streams. It uses the Apache Spark Structured Streaming framework. Some Spark job features are not available to streaming ETL jobs.
```java
Job.Builder.create(this, "PythonSparkStreamingJob")
        .executable(JobExecutable.pythonStreaming(PythonSparkJobExecutableProps.builder()
                .glueVersion(GlueVersion.V2_0)
                .pythonVersion(PythonVersion.THREE)
                .script(Code.fromAsset(join(__dirname, "job-script/hello_world.py")))
                .build()))
        .description("an example Python Streaming job")
        .build();
```
A Python shell job runs Python scripts as a shell and supports a Python version that depends on the AWS Glue version you are using. This can be used to schedule and run tasks that don't require an Apache Spark environment.
```java
Bucket bucket;

Job.Builder.create(this, "PythonShellJob")
        .executable(JobExecutable.pythonShell(PythonShellExecutableProps.builder()
                .glueVersion(GlueVersion.V1_0)
                .pythonVersion(PythonVersion.THREE)
                .script(Code.fromBucket(bucket, "script.py"))
                .build()))
        .description("an example Python Shell job")
        .build();
```
See documentation for more information on adding jobs in Glue.
A `Connection` allows Glue jobs, crawlers and development endpoints to access certain types of data stores. For example, to create a network connection to connect to a data source within a VPC:
```java
SecurityGroup securityGroup;
Subnet subnet;

Connection.Builder.create(this, "MyConnection")
        .type(ConnectionType.NETWORK)
        // The security groups granting AWS Glue inbound access to the data source within the VPC
        .securityGroups(List.of(securityGroup))
        // The VPC subnet which contains the data source
        .subnet(subnet)
        .build();
```
If you need to use a connection type that doesn't exist as a static member on `ConnectionType`, you can instantiate a `ConnectionType` object, e.g. `new glue.ConnectionType('NEW_TYPE')`.
See Adding a Connection to Your Data Store and Connection Structure documentation for more information on the supported data stores and their configurations.
A `SecurityConfiguration` is a set of security properties that can be used by AWS Glue to encrypt data at rest.
```java
SecurityConfiguration.Builder.create(this, "MySecurityConfiguration")
        .securityConfigurationName("name")
        .cloudWatchEncryption(CloudWatchEncryption.builder()
                .mode(CloudWatchEncryptionMode.KMS)
                .build())
        .jobBookmarksEncryption(JobBookmarksEncryption.builder()
                .mode(JobBookmarksEncryptionMode.CLIENT_SIDE_KMS)
                .build())
        .s3Encryption(S3Encryption.builder()
                .mode(S3EncryptionMode.KMS)
                .build())
        .build();
```
By default, a shared KMS key is created for use with the encryption configurations that require one. You can also supply your own key for each encryption config, for example, for CloudWatch encryption:
```java
Key key;

SecurityConfiguration.Builder.create(this, "MySecurityConfiguration")
        .securityConfigurationName("name")
        .cloudWatchEncryption(CloudWatchEncryption.builder()
                .mode(CloudWatchEncryptionMode.KMS)
                .kmsKey(key)
                .build())
        .build();
```
See the documentation for more information on how Glue encrypts data written by crawlers, jobs, and development endpoints.
A `Database` is a logical grouping of `Tables` in the Glue Catalog.
```java
Database.Builder.create(this, "MyDatabase")
        .databaseName("my_database")
        .build();
```
A Glue table describes a table of data in S3: its structure (column names and types), the location of its data (S3 objects with a common prefix in an S3 bucket), and the format of its files (JSON, Avro, Parquet, etc.):
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build(), Column.builder()
                .name("col2")
                .type(Schema.array(Schema.STRING))
                .comment("col2 is an array of strings")
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
By default, an S3 bucket will be created to store the table's data, but you can manually pass the `bucket` and `s3Prefix`:
```java
Bucket myBucket;
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .bucket(myBucket)
        .s3Prefix("my-table/")
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
To improve query performance, a table can specify `partitionKeys` on which data is stored and queried separately. For example, you might partition a table by `year` and `month` to optimize queries based on a time window:
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .partitionKeys(List.of(Column.builder()
                .name("year")
                .type(Schema.SMALL_INT)
                .build(), Column.builder()
                .name("month")
                .type(Schema.SMALL_INT)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
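Partition keys like `year` and `month` map onto Hive-style `key=value` prefixes under the table's S3 location. A minimal plain-Java sketch of that layout (no CDK involved; the bucket and file names in the comments are illustrative only):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionPath {
    // Builds the Hive-style partition prefix used by Glue/Athena. For a table
    // rooted at s3://my-bucket/my-table/, object keys then look like
    // my-table/year=2021/month=5/part-0000.json
    static String partitionPrefix(Map<String, String> partitionValues) {
        return partitionValues.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("/")) + "/";
    }

    public static void main(String[] args) {
        Map<String, String> values = new LinkedHashMap<>(); // preserves key order
        values.put("year", "2021");
        values.put("month", "5");
        System.out.println(partitionPrefix(values)); // year=2021/month=5/
    }
}
```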
Another way to improve query performance is to specify partition indexes. If no partition indexes are present on the table, AWS Glue loads all partitions of the table and filters the loaded partitions using the query expression, so the query takes more time to run as the number of partitions increases. With an index, the query will try to fetch a subset of the partitions instead of loading all partitions of the table.
The keys of a partition index must be a subset of the partition keys of the table. You can have a maximum of 3 partition indexes per table. To specify a partition index, you can use the `partitionIndexes` property:
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .partitionKeys(List.of(Column.builder()
                .name("year")
                .type(Schema.SMALL_INT)
                .build(), Column.builder()
                .name("month")
                .type(Schema.SMALL_INT)
                .build()))
        .partitionIndexes(List.of(PartitionIndex.builder()
                .indexName("my-index") // optional
                .keyNames(List.of("year"))
                .build())) // supply up to 3 indexes
        .dataFormat(DataFormat.JSON)
        .build();
```
Alternatively, you can call the `addPartitionIndex()` function on a table:
```java
Table myTable;

myTable.addPartitionIndex(PartitionIndex.builder()
        .indexName("my-index")
        .keyNames(List.of("year"))
        .build());
```
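The partition-index constraints described above (each index's keys must be a subset of the table's partition keys, and a table supports at most 3 indexes) can be sketched as a small plain-Java check. This is a hypothetical helper for illustration, not part of the CDK API:

```java
import java.util.List;
import java.util.Set;

public class PartitionIndexCheck {
    // Validates a list of proposed indexes (each a list of key names) against
    // the table's partition keys, mirroring the rules stated in the text.
    static void validate(List<List<String>> indexes, Set<String> partitionKeys) {
        if (indexes.size() > 3) {
            throw new IllegalArgumentException("a table supports at most 3 partition indexes");
        }
        for (List<String> keys : indexes) {
            if (!partitionKeys.containsAll(keys)) {
                throw new IllegalArgumentException(
                        "index keys must be a subset of the partition keys: " + keys);
            }
        }
    }
}
```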
You can enable encryption on a Table's data:

`Unencrypted` - files are not encrypted. The default encryption setting.

`S3Managed` - Server-side encryption (`SSE-S3`) with an Amazon S3-managed key:
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.S3_MANAGED)
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
`Kms` - Server-side encryption (`SSE-KMS`) with an AWS KMS key managed by the account owner:
```java
Database myDatabase;

// KMS key is created automatically
Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.KMS)
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();

// with an explicit KMS key
Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.KMS)
        .encryptionKey(new Key(this, "MyKey"))
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
`KmsManaged` - Server-side encryption (`SSE-KMS`), like `Kms`, except with an AWS KMS key managed by the AWS Key Management Service:
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.KMS_MANAGED)
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
`ClientSideKms` - Client-side encryption (`CSE-KMS`) with an AWS KMS key managed by the account owner:
```java
Database myDatabase;

// KMS key is created automatically
Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.CLIENT_SIDE_KMS)
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();

// with an explicit KMS key
Table.Builder.create(this, "MyTable")
        .encryption(TableEncryption.CLIENT_SIDE_KMS)
        .encryptionKey(new Key(this, "MyKey"))
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .columns(List.of(Column.builder()
                .name("col1")
                .type(Schema.STRING)
                .build()))
        .dataFormat(DataFormat.JSON)
        .build();
```
Note: you cannot provide a `Bucket` when creating the `Table` if you wish to use server-side encryption (`KMS`, `KMS_MANAGED` or `S3_MANAGED`).
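This restriction can be sketched as a plain-Java guard (a hypothetical helper; the enum below simply mirrors the `TableEncryption` modes named in this document, and the check is not how the CDK itself reports the error):

```java
import java.util.EnumSet;
import java.util.Set;

public class TableEncryptionCheck {
    enum TableEncryption { UNENCRYPTED, S3_MANAGED, KMS, KMS_MANAGED, CLIENT_SIDE_KMS }

    // The server-side modes are the ones that forbid a user-supplied bucket.
    static final Set<TableEncryption> SERVER_SIDE =
            EnumSet.of(TableEncryption.S3_MANAGED, TableEncryption.KMS, TableEncryption.KMS_MANAGED);

    static void checkBucketAllowed(boolean bucketSupplied, TableEncryption mode) {
        if (bucketSupplied && SERVER_SIDE.contains(mode)) {
            throw new IllegalArgumentException(
                    "cannot supply a bucket when using server-side encryption: " + mode);
        }
    }
}
```

Client-side encryption (`CLIENT_SIDE_KMS`) and `UNENCRYPTED` remain compatible with a user-supplied bucket.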
A table's schema is a collection of columns, each of which has a `name` and a `type`. Types are recursive structures, consisting of primitive and complex types:
```java
Database myDatabase;

Table.Builder.create(this, "MyTable")
        .columns(List.of(Column.builder()
                .name("primitive_column")
                .type(Schema.STRING)
                .build(), Column.builder()
                .name("array_column")
                .type(Schema.array(Schema.INTEGER))
                .comment("array<integer>")
                .build(), Column.builder()
                .name("map_column")
                .type(Schema.map(Schema.STRING, Schema.TIMESTAMP))
                .comment("map<string,timestamp>")
                .build(), Column.builder()
                .name("struct_column")
                .type(Schema.struct(List.of(Column.builder()
                        .name("nested_column")
                        .type(Schema.DATE)
                        .comment("nested comment")
                        .build())))
                .comment("struct<nested_column:date COMMENT 'nested comment'>")
                .build()))
        // ...
        .database(myDatabase)
        .tableName("my_table")
        .dataFormat(DataFormat.JSON)
        .build();
```
| Name | Type | Comments |
|---|---|---|
| FLOAT | Constant | A 32-bit single-precision floating point number |
| INTEGER | Constant | A 32-bit signed value in two's complement format, with a minimum value of -2^31 and a maximum value of 2^31-1 |
| DOUBLE | Constant | A 64-bit double-precision floating point number |
| BIG_INT | Constant | A 64-bit signed INTEGER in two's complement format, with a minimum value of -2^63 and a maximum value of 2^63-1 |
| SMALL_INT | Constant | A 16-bit signed INTEGER in two's complement format, with a minimum value of -2^15 and a maximum value of 2^15-1 |
| TINY_INT | Constant | An 8-bit signed INTEGER in two's complement format, with a minimum value of -2^7 and a maximum value of 2^7-1 |
| Name | Type | Comments |
|---|---|---|
| DATE | Constant | A date in UNIX format, such as YYYY-MM-DD. |
| TIMESTAMP | Constant | Date and time instant in the UNIX format, such as yyyy-mm-dd hh:mm:ss[.f...]. For example, TIMESTAMP '2008-09-15 03:04:05.324'. This format uses the session time zone. |
| Name | Type | Comments |
|---|---|---|
| STRING | Constant | A string literal enclosed in single or double quotes |
| decimal(precision: number, scale?: number) | Function | `precision` is the total number of digits. `scale` (optional) is the number of digits in the fractional part, with a default of 0. For example, use these type definitions: decimal(11,5), decimal(15) |
| char(length: number) | Function | Fixed-length character data, with a specified length between 1 and 255, such as char(10) |
| varchar(length: number) | Function | Variable-length character data, with a specified length between 1 and 65535, such as varchar(10) |
| Name | Type | Comments |
|---|---|---|
| BOOLEAN | Constant | Values are `true` and `false` |
| BINARY | Constant | Value is in binary |
| Name | Type | Comments |
|---|---|---|
| array(itemType: Type) | Function | An array of some other type |
| map(keyType: Type, valueType: Type) | Function | A map of some primitive key type to any value type |
| struct(columns: Column[]) | Function | Nested structure containing individually named and typed columns |
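The `Function` rows in the tables above resolve to catalog type-name strings. A plain-Java sketch of how those names are composed (assumption: these are the standard Glue/Hive type strings, such as `decimal(11,5)` and `array<int>`; the helper class itself is hypothetical):

```java
public class GlueTypeNames {
    // Compose the catalog type-name strings described in the tables above.
    static String decimal(int precision, int scale) {
        return "decimal(" + precision + "," + scale + ")";
    }

    static String array(String itemType) {
        return "array<" + itemType + ">";
    }

    static String map(String keyType, String valueType) {
        return "map<" + keyType + "," + valueType + ">";
    }

    public static void main(String[] args) {
        System.out.println(decimal(11, 5));              // decimal(11,5)
        System.out.println(array("int"));                // array<int>
        System.out.println(map("string", "timestamp"));  // map<string,timestamp>
    }
}
```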