Namespace Amazon.CDK.AWS.Glue.Alpha
AWS Glue Construct Library
---

The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward-compatible changes or removal in any future version. They are not subject to the [Semantic Versioning](https://semver.org/) model, and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.

---
This module is part of the AWS Cloud Development Kit project.
Job
A Job encapsulates a script that connects to data sources, processes them, and then writes output to a data target.

There are three types of jobs supported by AWS Glue: Spark ETL, Spark Streaming, and Python Shell jobs.

The glue.JobExecutable allows you to specify the type of job, the language to use, and the code assets required by the job.

glue.Code allows you to refer to the different code assets required by the job, either from an existing S3 location or from a local file path.

glue.ExecutionClass allows you to specify FLEX or STANDARD. FLEX is appropriate for non-urgent jobs such as pre-production jobs, testing, and one-time data loads.
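For example, a job can opt into the flexible execution class; this is a minimal sketch, and the job name, worker settings, and script path are illustrative placeholders:

```csharp
// Hypothetical example: a non-urgent PySpark ETL job run on the FLEX execution class.
// FLEX requires Glue 3.0+ and G.1X or G.2X workers.
new Job(this, "FlexEtlJob", new JobProps {
    Executable = JobExecutable.PythonEtl(new PythonSparkJobExecutableProps {
        GlueVersion = GlueVersion.V4_0,
        PythonVersion = PythonVersion.THREE,
        Script = Code.FromAsset("job-script/hello_world.py") // placeholder path
    }),
    ExecutionClass = ExecutionClass.FLEX,
    WorkerType = WorkerType.G_1X,
    WorkerCount = 10,
    Description = "an example ETL job using the flexible execution class"
});
```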
Spark Jobs
These jobs run in an Apache Spark environment managed by AWS Glue.
ETL Jobs
An ETL job processes data in batches using Apache Spark.
Bucket bucket;
new Job(this, "ScalaSparkEtlJob", new JobProps {
Executable = JobExecutable.ScalaEtl(new ScalaJobExecutableProps {
GlueVersion = GlueVersion.V4_0,
Script = Code.FromBucket(bucket, "src/com/example/HelloWorld.scala"),
ClassName = "com.example.HelloWorld",
ExtraJars = new [] { Code.FromBucket(bucket, "jars/HelloWorld.jar") }
}),
WorkerType = WorkerType.G_8X,
Description = "an example Scala ETL job"
});
Streaming Jobs
A Streaming job is similar to an ETL job, except that it performs ETL on data streams. It uses the Apache Spark Structured Streaming framework. Some Spark job features are not available to streaming ETL jobs.
new Job(this, "PythonSparkStreamingJob", new JobProps {
Executable = JobExecutable.PythonStreaming(new PythonSparkJobExecutableProps {
GlueVersion = GlueVersion.V4_0,
PythonVersion = PythonVersion.THREE,
Script = Code.FromAsset("job-script/hello_world.py")
}),
Description = "an example Python Streaming job"
});
Python Shell Jobs
A Python shell job runs Python scripts as a shell and supports a Python version that depends on the AWS Glue version you are using. This can be used to schedule and run tasks that don't require an Apache Spark environment.
Bucket bucket;
new Job(this, "PythonShellJob", new JobProps {
Executable = JobExecutable.PythonShell(new PythonShellExecutableProps {
GlueVersion = GlueVersion.V1_0,
PythonVersion = PythonVersion.THREE,
Script = Code.FromBucket(bucket, "script.py")
}),
Description = "an example Python Shell job"
});
Ray Jobs
These jobs run in a Ray environment managed by AWS Glue.
new Job(this, "RayJob", new JobProps {
Executable = JobExecutable.PythonRay(new PythonRayExecutableProps {
GlueVersion = GlueVersion.V4_0,
PythonVersion = PythonVersion.THREE_NINE,
Runtime = Runtime.RAY_TWO_FOUR,
Script = Code.FromAsset("job-script/hello_world.py")
}),
WorkerType = WorkerType.Z_2X,
WorkerCount = 2,
Description = "an example Ray job"
});
Enable Spark UI
Enable the Spark UI by setting the sparkUI property.
new Job(this, "EnableSparkUI", new JobProps {
JobName = "EtlJobWithSparkUIPrefix",
SparkUI = new SparkUIProps {
Enabled = true
},
Executable = JobExecutable.PythonEtl(new PythonSparkJobExecutableProps {
GlueVersion = GlueVersion.V3_0,
PythonVersion = PythonVersion.THREE,
Script = Code.FromAsset("job-script/hello_world.py")
})
});
The sparkUI property also allows the specification of an S3 bucket and a bucket prefix.
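For instance, event logs can be directed to an existing bucket under a given prefix; this is a sketch, and the bucket variable and prefix value are illustrative:

```csharp
Bucket bucket;

// Hypothetical example: writing Spark UI event logs to a specific bucket and prefix.
new Job(this, "SparkUIToCustomBucket", new JobProps {
    SparkUI = new SparkUIProps {
        Enabled = true,
        Bucket = bucket,          // existing bucket for Spark UI event logs
        Prefix = "spark-ui-logs/" // key prefix within that bucket
    },
    Executable = JobExecutable.PythonEtl(new PythonSparkJobExecutableProps {
        GlueVersion = GlueVersion.V3_0,
        PythonVersion = PythonVersion.THREE,
        Script = Code.FromAsset("job-script/hello_world.py") // placeholder path
    })
});
```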
See documentation for more information on adding jobs in Glue.
Connection
A Connection allows Glue jobs, crawlers and development endpoints to access certain types of data stores. For example, to create a network connection to connect to a data source within a VPC:
SecurityGroup securityGroup;
Subnet subnet;
new Connection(this, "MyConnection", new ConnectionProps {
Type = ConnectionType.NETWORK,
// The security groups granting AWS Glue inbound access to the data source within the VPC
SecurityGroups = new [] { securityGroup },
// The VPC subnet which contains the data source
Subnet = subnet
});
For an RDS Connection over JDBC, it is recommended to manage credentials using AWS Secrets Manager. To use a secret, specify SECRET_ID in properties as in the following code. Note that in this case, the subnet must have a route to the AWS Secrets Manager VPC endpoint, or to the AWS Secrets Manager endpoint through a NAT gateway.
SecurityGroup securityGroup;
Subnet subnet;
DatabaseCluster db;
new Connection(this, "RdsConnection", new ConnectionProps {
Type = ConnectionType.JDBC,
SecurityGroups = new [] { securityGroup },
Subnet = subnet,
Properties = new Dictionary<string, string> {
{ "JDBC_CONNECTION_URL", $"jdbc:mysql://{db.ClusterEndpoint.SocketAddress}/databasename" },
{ "JDBC_ENFORCE_SSL", "false" },
{ "SECRET_ID", db.Secret.SecretName }
}
});
If you need to use a connection type that doesn't exist as a static member on ConnectionType, you can instantiate a ConnectionType object, e.g. new ConnectionType("NEW_TYPE").
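A custom connection type can then be used like any built-in one; this is a sketch, and the type name and connection properties are placeholders:

```csharp
// Hypothetical example: a connection whose type is not exposed as a static member.
new Connection(this, "CustomTypedConnection", new ConnectionProps {
    Type = new ConnectionType("NEW_TYPE"), // placeholder type name
    Properties = new Dictionary<string, string> {
        { "CONNECTION_URL", "example://host:1234" } // placeholder property
    }
});
```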
See Adding a Connection to Your Data Store and Connection Structure documentation for more information on the supported data stores and their configurations.
SecurityConfiguration
A SecurityConfiguration is a set of security properties that can be used by AWS Glue to encrypt data at rest.
new SecurityConfiguration(this, "MySecurityConfiguration", new SecurityConfigurationProps {
CloudWatchEncryption = new CloudWatchEncryption {
Mode = CloudWatchEncryptionMode.KMS
},
JobBookmarksEncryption = new JobBookmarksEncryption {
Mode = JobBookmarksEncryptionMode.CLIENT_SIDE_KMS
},
S3Encryption = new S3Encryption {
Mode = S3EncryptionMode.KMS
}
});
By default, a shared KMS key is created for use with the encryption configurations that require one. You can also supply your own key for each encryption config, for example, for CloudWatch encryption:
Key key;
new SecurityConfiguration(this, "MySecurityConfiguration", new SecurityConfigurationProps {
CloudWatchEncryption = new CloudWatchEncryption {
Mode = CloudWatchEncryptionMode.KMS,
KmsKey = key
}
});
See the AWS documentation for more information on how Glue encrypts data written by crawlers, jobs, and development endpoints.
Database
A Database is a logical grouping of Tables in the Glue Catalog.
new Database(this, "MyDatabase", new DatabaseProps {
DatabaseName = "my_database",
Description = "my_database_description"
});
Table
A Glue table describes a table of data in S3: its structure (column names and types), the location of the data (S3 objects with a common prefix in an S3 bucket), and the format of the files (JSON, Avro, Parquet, etc.):
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
}, new Column {
Name = "col2",
Type = Schema.Array(Schema.STRING),
Comment = "col2 is an array of strings"
} },
DataFormat = DataFormat.JSON
});
By default, an S3 bucket will be created to store the table's data, but you can manually pass the bucket and s3Prefix:
Bucket myBucket;
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Bucket = myBucket,
S3Prefix = "my-table/",
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Glue tables can be configured to contain user-defined properties, to describe the physical storage of table data, through the storageParameters property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
StorageParameters = new [] { StorageParameter.SkipHeaderLineCount(1), StorageParameter.CompressionType(CompressionType.GZIP), StorageParameter.Custom("separatorChar", ",") },
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Glue tables can also be configured to contain user-defined table properties through the parameters property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Parameters = new Dictionary<string, string> {
{ "key1", "val1" },
{ "key2", "val2" }
},
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Partition Keys
To improve query performance, a table can specify partitionKeys on which data is stored and queried separately. For example, you might partition a table by year and month to optimize queries based on a time window:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
DataFormat = DataFormat.JSON
});
Partition Indexes
Another way to improve query performance is to specify partition indexes. If no partition indexes are present on the table, AWS Glue loads all partitions of the table and filters the loaded partitions using the query expression. The query takes more time to run as the number of partitions increases. With an index, the query will try to fetch a subset of the partitions instead of loading all partitions of the table.
The keys of a partition index must be a subset of the partition keys of the table. You can have a maximum of 3 partition indexes per table. To specify a partition index, you can use the partitionIndexes property:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
PartitionIndexes = new [] { new PartitionIndex {
IndexName = "my-index", // optional
KeyNames = new [] { "year" }
} }, // supply up to 3 indexes
DataFormat = DataFormat.JSON
});
Alternatively, you can call the addPartitionIndex() function on a table:
Table myTable;
myTable.AddPartitionIndex(new PartitionIndex {
IndexName = "my-index",
KeyNames = new [] { "year" }
});
Partition Filtering
If you have a table with a large number of partitions that grows over time, consider using AWS Glue partition indexing and filtering.
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
PartitionKeys = new [] { new Column {
Name = "year",
Type = Schema.SMALL_INT
}, new Column {
Name = "month",
Type = Schema.SMALL_INT
} },
DataFormat = DataFormat.JSON,
EnablePartitionFiltering = true
});
Glue Connections
Glue connections allow external data connections to third party databases and data warehouses. However, these connections can also be assigned to Glue Tables, allowing you to query external data sources using the Glue Data Catalog.
Whereas S3Table will point to (and, if needed, create) a bucket to store the table's data, ExternalTable will point to an existing table in a data source. For example, to create a table in Glue that points to a table in Redshift:
Connection myConnection;
Database myDatabase;
new ExternalTable(this, "MyTable", new ExternalTableProps {
Connection = myConnection,
ExternalDataLocation = "default_db_public_example", // A table in Redshift
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Encryption
You can enable encryption on a Table's data:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.S3_MANAGED,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
// KMS key is created automatically
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
// with an explicit KMS key
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS,
EncryptionKey = new Key(this, "MyKey"),
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.KMS_MANAGED,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Database myDatabase;
// KMS key is created automatically
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.CLIENT_SIDE_KMS,
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
// with an explicit KMS key
new S3Table(this, "MyTable", new S3TableProps {
Encryption = TableEncryption.CLIENT_SIDE_KMS,
EncryptionKey = new Key(this, "MyKey"),
// ...
Database = myDatabase,
Columns = new [] { new Column {
Name = "col1",
Type = Schema.STRING
} },
DataFormat = DataFormat.JSON
});
Note: you cannot provide a Bucket when creating the S3Table if you wish to use server-side encryption (KMS, KMS_MANAGED or S3_MANAGED).
Types
A table's schema is a collection of columns, each of which has a name and a type. Types are recursive structures, consisting of primitive and complex types:
Database myDatabase;
new S3Table(this, "MyTable", new S3TableProps {
Columns = new [] { new Column {
Name = "primitive_column",
Type = Schema.STRING
}, new Column {
Name = "array_column",
Type = Schema.Array(Schema.INTEGER),
Comment = "array<integer>"
}, new Column {
Name = "map_column",
Type = Schema.Map(Schema.STRING, Schema.TIMESTAMP),
Comment = "map<string,timestamp>"
}, new Column {
Name = "struct_column",
Type = Schema.Struct(new [] { new Column {
Name = "nested_column",
Type = Schema.DATE,
Comment = "nested comment"
} }),
Comment = "struct<nested_column:date COMMENT 'nested comment'>"
} },
// ...
Database = myDatabase,
DataFormat = DataFormat.JSON
});
Primitives
Numeric
Name | Type | Comments |
---|---|---|
FLOAT | Constant | A 32-bit single-precision floating point number |
INTEGER | Constant | A 32-bit signed value in two's complement format, with a minimum value of -2^31 and a maximum value of 2^31-1 |
DOUBLE | Constant | A 64-bit double-precision floating point number |
BIG_INT | Constant | A 64-bit signed INTEGER in two’s complement format, with a minimum value of -2^63 and a maximum value of 2^63 -1 |
SMALL_INT | Constant | A 16-bit signed INTEGER in two’s complement format, with a minimum value of -2^15 and a maximum value of 2^15-1 |
TINY_INT | Constant | An 8-bit signed INTEGER in two's complement format, with a minimum value of -2^7 and a maximum value of 2^7-1 |
Date and time
Name | Type | Comments |
---|---|---|
DATE | Constant | A date in UNIX format, such as YYYY-MM-DD. |
TIMESTAMP | Constant | Date and time instant in the UNIX format, such as yyyy-mm-dd hh:mm:ss[.f...]. For example, TIMESTAMP '2008-09-15 03:04:05.324'. This format uses the session time zone. |
String
Name | Type | Comments |
---|---|---|
STRING | Constant | A string literal enclosed in single or double quotes |
decimal(precision: number, scale?: number) | Function | precision is the total number of digits. scale (optional) is the number of digits in fractional part with a default of 0. For example, use these type definitions: decimal(11,5), decimal(15) |
char(length: number) | Function | Fixed length character data, with a specified length between 1 and 255, such as char(10) |
varchar(length: number) | Function | Variable length character data, with a specified length between 1 and 65535, such as varchar(10) |
Miscellaneous
Name | Type | Comments |
---|---|---|
BOOLEAN | Constant | Values are true and false |
BINARY | Constant | Value is in binary |
Complex
Name | Type | Comments |
---|---|---|
array(itemType: Type) | Function | An array of some other type |
map(keyType: Type, valueType: Type) | Function | A map of some primitive key type to any value type |
struct(columns: Column[]) | Function | Nested structure containing individually named and typed columns |
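The parameterized primitives above are exposed as functions on Schema; a minimal sketch (the column names are illustrative):

```csharp
Database myDatabase;

// Hypothetical example: columns built from the parameterized schema type functions.
new S3Table(this, "TypedTable", new S3TableProps {
    Database = myDatabase,
    Columns = new [] { new Column {
        Name = "price",
        Type = Schema.Decimal(11, 5) // precision 11, scale 5
    }, new Column {
        Name = "country_code",
        Type = Schema.Char(2) // fixed-length character data
    }, new Column {
        Name = "name",
        Type = Schema.Varchar(64) // variable-length character data
    } },
    DataFormat = DataFormat.JSON
});
```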
Data Quality Ruleset
A DataQualityRuleset
specifies a data quality ruleset with DQDL rules applied to a specified AWS Glue table. For example, to create a data quality ruleset for a given table:
new DataQualityRuleset(this, "MyDataQualityRuleset", new DataQualityRulesetProps {
ClientToken = "client_token",
Description = "description",
RulesetName = "ruleset_name",
RulesetDqdl = "ruleset_dqdl",
Tags = new Dictionary<string, string> {
{ "key1", "value1" },
{ "key2", "value2" }
},
TargetTable = new DataQualityTargetTable("database_name", "table_name")
});
For more information, see AWS Glue Data Quality.
Classes
AssetCode | (experimental) Job Code from a local file. |
ClassificationString | (experimental) Classification string given to tables with this data format. |
CloudWatchEncryption | (experimental) CloudWatch Logs encryption configuration. |
CloudWatchEncryptionMode | (experimental) Encryption mode for CloudWatch Logs. |
Code | (experimental) Represents a Glue Job's Code assets (an asset can be a script, a jar, a python file or any other file). |
CodeConfig | (experimental) Result of binding Code into a Job. |
Column | (experimental) A column of a table. |
ColumnCountMismatchHandlingAction | (experimental) Identifies if the file contains less or more values for a row than the number of columns specified in the external table definition. |
CompressionType | (experimental) The compression type. |
Connection | (experimental) An AWS Glue connection to a data source. |
ConnectionOptions | (experimental) Base Connection Options. |
ConnectionProps | (experimental) Construction properties for Connection. |
ConnectionType | (experimental) The type of the glue connection. |
ContinuousLoggingProps | (experimental) Properties for enabling Continuous Logging for Glue Jobs. |
Database | (experimental) A Glue database. |
DatabaseProps | |
DataFormat | (experimental) Defines the input/output formats and ser/de for a single DataFormat. |
DataFormatProps | (experimental) Properties of a DataFormat instance. |
DataQualityRuleset | (experimental) A Glue Data Quality ruleset. |
DataQualityRulesetProps | (experimental) Construction properties for DataQualityRuleset. |
DataQualityTargetTable | (experimental) Properties of a DataQualityTargetTable. |
ExecutionClass | (experimental) The ExecutionClass determines whether the job is run with a standard or flexible execution class. |
ExternalTable | (experimental) A Glue table that targets an external data location (e.g. A table in a Redshift Cluster). |
ExternalTableProps | |
GlueVersion | (experimental) AWS Glue version determines the versions of Apache Spark and Python that are available to the job. |
InputFormat | (experimental) Absolute class name of the Hadoop InputFormat to use when reading table files. |
InvalidCharHandlingAction | (experimental) Specifies the action to perform when query results contain invalid UTF-8 character values. |
Job | (experimental) A Glue Job. |
JobAttributes | (experimental) Attributes for importing Job. |
JobBookmarksEncryption | (experimental) Job bookmarks encryption configuration. |
JobBookmarksEncryptionMode | (experimental) Encryption mode for Job Bookmarks. |
JobExecutable | (experimental) The executable properties related to the Glue job's GlueVersion, JobType and code. |
JobExecutableConfig | (experimental) Result of binding a JobExecutable into a Job. |
JobLanguage | (experimental) Runtime language of the Glue job. |
JobProps | (experimental) Construction properties for Job. |
JobState | (experimental) Job states emitted by Glue to CloudWatch Events. |
JobType | (experimental) The job type. |
MetricType | (experimental) The Glue CloudWatch metric type. |
NumericOverflowHandlingAction | (experimental) Specifies the action to perform when ORC data contains an integer (for example, BIGINT or int64) that is larger than the column definition (for example, SMALLINT or int16). |
OrcColumnMappingType | (experimental) Specifies how to map columns when the table uses ORC data format. |
OutputFormat | (experimental) Absolute class name of the Hadoop OutputFormat to use when writing table files. |
PartitionIndex | (experimental) Properties of a Partition Index. |
PythonRayExecutableProps | (experimental) Props for creating a Python Ray job executable. |
PythonShellExecutableProps | (experimental) Props for creating a Python shell job executable. |
PythonSparkJobExecutableProps | (experimental) Props for creating a Python Spark (ETL or Streaming) job executable. |
PythonVersion | (experimental) Python version. |
Runtime | (experimental) AWS Glue runtime determines the runtime engine of the job. |
S3Code | (experimental) Glue job Code from an S3 bucket. |
S3Encryption | (experimental) S3 encryption configuration. |
S3EncryptionMode | (experimental) Encryption mode for S3. |
S3Table | (experimental) A Glue table that targets a S3 dataset. |
S3TableProps | |
ScalaJobExecutableProps | (experimental) Props for creating a Scala Spark (ETL or Streaming) job executable. |
Schema | |
SecurityConfiguration | (experimental) A security configuration is a set of security properties that can be used by AWS Glue to encrypt data at rest. |
SecurityConfigurationProps | (experimental) Construction properties for SecurityConfiguration. |
SerializationLibrary | (experimental) Serialization library to use when serializing/deserializing (SerDe) table records. |
SparkUILoggingLocation | (experimental) The Spark UI logging location. |
SparkUIProps | (experimental) Properties for enabling Spark UI monitoring feature for Spark-based Glue jobs. |
StorageParameter | (experimental) A storage parameter. The list of storage parameters available is not exhaustive and other keys may be used. |
StorageParameters | (experimental) The storage parameter keys that are currently known, this list is not exhaustive and other keys may be used. |
SurplusBytesHandlingAction | (experimental) Specifies how to handle data being loaded that exceeds the length of the data type defined for columns containing VARBYTE data. |
SurplusCharHandlingAction | (experimental) Specifies how to handle data being loaded that exceeds the length of the data type defined for columns containing VARCHAR, CHAR, or string data. |
Table | (deprecated) A Glue table. |
TableAttributes | |
TableBase | (experimental) A Glue table. |
TableBaseProps | |
TableEncryption | (experimental) Encryption options for a Table. |
TableProps | |
Type | (experimental) Represents a type of a column in a table schema. |
WorkerType | (experimental) The type of predefined worker that is allocated when a job runs. |
WriteParallel | (experimental) Specifies how to handle data being loaded that exceeds the length of the data type defined for columns containing VARCHAR, CHAR, or string data. |
Interfaces
ICloudWatchEncryption | (experimental) CloudWatch Logs encryption configuration. |
ICodeConfig | (experimental) Result of binding Code into a Job. |
IColumn | (experimental) A column of a table. |
IConnection | (experimental) Interface representing a created or an imported Connection. |
IConnectionOptions | (experimental) Base Connection Options. |
IConnectionProps | (experimental) Construction properties for Connection. |
IContinuousLoggingProps | (experimental) Properties for enabling Continuous Logging for Glue Jobs. |
IDatabase | |
IDatabaseProps | |
IDataFormatProps | (experimental) Properties of a DataFormat instance. |
IDataQualityRuleset | |
IDataQualityRulesetProps | (experimental) Construction properties for DataQualityRuleset. |
IExternalTableProps | |
IJob | (experimental) Interface representing a created or an imported Job. |
IJobAttributes | (experimental) Attributes for importing Job. |
IJobBookmarksEncryption | (experimental) Job bookmarks encryption configuration. |
IJobExecutableConfig | (experimental) Result of binding a JobExecutable into a Job. |
IJobProps | (experimental) Construction properties for Job. |
IPartitionIndex | (experimental) Properties of a Partition Index. |
IPythonRayExecutableProps | (experimental) Props for creating a Python Ray job executable. |
IPythonShellExecutableProps | (experimental) Props for creating a Python shell job executable. |
IPythonSparkJobExecutableProps | (experimental) Props for creating a Python Spark (ETL or Streaming) job executable. |
IS3Encryption | (experimental) S3 encryption configuration. |
IS3TableProps | |
IScalaJobExecutableProps | (experimental) Props for creating a Scala Spark (ETL or Streaming) job executable. |
ISecurityConfiguration | (experimental) Interface representing a created or an imported SecurityConfiguration. |
ISecurityConfigurationProps | (experimental) Construction properties for SecurityConfiguration. |
ISparkUILoggingLocation | (experimental) The Spark UI logging location. |
ISparkUIProps | (experimental) Properties for enabling Spark UI monitoring feature for Spark-based Glue jobs. |
ITable | |
ITableAttributes | |
ITableBaseProps | |
ITableProps | |
IType | (experimental) Represents a type of a column in a table schema. |