CfnJob
- class aws_cdk.aws_glue.CfnJob(scope, id, *, command, role, allocated_capacity=None, connections=None, default_arguments=None, description=None, execution_class=None, execution_property=None, glue_version=None, job_mode=None, job_run_queuing_enabled=None, log_uri=None, maintenance_window=None, max_capacity=None, max_retries=None, name=None, non_overridable_arguments=None, notification_property=None, number_of_workers=None, security_configuration=None, tags=None, timeout=None, worker_type=None)
Bases:
CfnResource
The AWS::Glue::Job resource specifies an AWS Glue job in the data catalog. For more information, see Adding Jobs in AWS Glue and Job Structure in the AWS Glue Developer Guide.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-job.html
- CloudformationResource:
AWS::Glue::Job
- ExampleMetadata:
fixture=_generated
Example:
  # The code below shows an example of how to instantiate this type.
  # The values are placeholders you should change.
  from aws_cdk import aws_glue as glue

  # default_arguments: Any
  # non_overridable_arguments: Any
  # tags: Any

  cfn_job = glue.CfnJob(self, "MyCfnJob",
      command=glue.CfnJob.JobCommandProperty(
          name="name",
          python_version="pythonVersion",
          runtime="runtime",
          script_location="scriptLocation"
      ),
      role="role",

      # the properties below are optional
      allocated_capacity=123,
      connections=glue.CfnJob.ConnectionsListProperty(
          connections=["connections"]
      ),
      default_arguments=default_arguments,
      description="description",
      execution_class="executionClass",
      execution_property=glue.CfnJob.ExecutionPropertyProperty(
          max_concurrent_runs=123
      ),
      glue_version="glueVersion",
      job_mode="jobMode",
      job_run_queuing_enabled=False,
      log_uri="logUri",
      maintenance_window="maintenanceWindow",
      max_capacity=123,
      max_retries=123,
      name="name",
      non_overridable_arguments=non_overridable_arguments,
      notification_property=glue.CfnJob.NotificationPropertyProperty(
          notify_delay_after=123
      ),
      number_of_workers=123,
      security_configuration="securityConfiguration",
      tags=tags,
      timeout=123,
      worker_type="workerType"
  )
- Parameters:
  - scope (Construct) – Scope in which this resource is defined.
  - id (str) – Construct identifier for this resource (unique in its scope).
  - command (Union[IResolvable, JobCommandProperty, Dict[str, Any]]) – The code that executes a job.
  - role (str) – The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
  - allocated_capacity (Union[int, float, None]) – This parameter is no longer supported. Use MaxCapacity instead. The number of capacity units that are allocated to this job.
  - connections (Union[IResolvable, ConnectionsListProperty, Dict[str, Any], None]) – The connections used for this job.
  - default_arguments (Any) – The default arguments for this job, specified as name-value pairs. You can specify arguments here that your own job-execution script consumes, in addition to arguments that AWS Glue itself consumes. For information about how to specify and consume your own job arguments, see Calling AWS Glue APIs in Python in the AWS Glue Developer Guide. For information about the key-value pairs that AWS Glue consumes to set up your job, see Special Parameters Used by AWS Glue in the AWS Glue Developer Guide.
  - description (Optional[str]) – A description of the job.
  - execution_class (Optional[str]) – Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary. Only jobs with AWS Glue version 3.0 and above and command type glueetl are allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
  - execution_property (Union[IResolvable, ExecutionPropertyProperty, Dict[str, Any], None]) – The maximum number of concurrent runs that are allowed for this job.
  - glue_version (Optional[str]) – Glue version determines the versions of Apache Spark and Python that AWS Glue supports. The Python version indicates the version supported for jobs of type Spark. For more information about the available AWS Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide. Jobs that are created without specifying a Glue version default to the latest Glue version available.
  - job_mode (Optional[str]) – A mode that describes how a job was created. Valid values are: SCRIPT – the job was created using the AWS Glue Studio script editor; VISUAL – the job was created using the AWS Glue Studio visual editor; NOTEBOOK – the job was created using an interactive sessions notebook. When the JobMode field is missing or null, SCRIPT is assigned as the default value.
  - job_run_queuing_enabled (Union[bool, IResolvable, None]) – Specifies whether job run queuing is enabled for the job runs for this job. A value of true means job run queuing is enabled. If false or not populated, the job runs will not be considered for queueing. If this field does not match the value set in the job run, then the value from the job run field will be used.
  - log_uri (Optional[str]) – This field is reserved for future use.
  - maintenance_window (Optional[str]) – Specifies a day of the week and hour for a maintenance window for streaming jobs. AWS Glue periodically performs maintenance activities; during these maintenance windows, AWS Glue will need to restart your streaming jobs. AWS Glue will restart the job within 3 hours of the specified maintenance window. For instance, if you set up the maintenance window for Monday at 10:00AM GMT, your jobs will be restarted between 10:00AM GMT and 1:00PM GMT.
  - max_capacity (Union[int, float, None]) – The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. Do not set MaxCapacity if using WorkerType and NumberOfWorkers. The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job: when you specify a Python shell job (JobCommand.Name = "pythonshell"), you can allocate either 0.0625 or 1 DPU (the default is 0.0625 DPU); when you specify an Apache Spark ETL job (JobCommand.Name = "glueetl"), you can allocate from 2 to 100 DPUs (the default is 10 DPUs). This job type cannot have a fractional DPU allocation.
  - max_retries (Union[int, float, None]) – The maximum number of times to retry this job after a JobRun fails.
  - name (Optional[str]) – The name you assign to this job definition.
  - non_overridable_arguments (Any) – Non-overridable arguments for this job, specified as name-value pairs.
  - notification_property (Union[IResolvable, NotificationPropertyProperty, Dict[str, Any], None]) – Specifies configuration properties of a notification.
  - number_of_workers (Union[int, float, None]) – The number of workers of a defined workerType that are allocated when a job runs. The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
  - security_configuration (Optional[str]) – The name of the SecurityConfiguration structure to be used with this job.
  - tags (Any) – The tags to use with this job.
  - timeout (Union[int, float, None]) – The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
  - worker_type (Optional[str]) – The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X, or G.025X for Spark jobs, and Z.2X for Ray jobs.
    - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
    - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
    - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for AWS Glue version 3.0 or later Spark ETL jobs in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
    - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for AWS Glue version 3.0 or later Spark ETL jobs, in the same AWS Regions as supported for the G.4X worker type.
    - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for AWS Glue version 3.0 streaming jobs.
    - For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
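The MaxCapacity constraints above can be sketched as a small plain-Python validation helper. This is an illustrative sketch only; validate_capacity is not part of the CDK API, and it covers only the rules quoted in the parameter list:

```python
def validate_capacity(max_capacity=None, worker_type=None,
                      number_of_workers=None, command_name="glueetl"):
    """Check the MaxCapacity rules described above (illustrative sketch only)."""
    if max_capacity is not None and (worker_type is not None or number_of_workers is not None):
        # MaxCapacity must not be combined with WorkerType/NumberOfWorkers
        raise ValueError("Do not set MaxCapacity when using WorkerType and NumberOfWorkers")
    if max_capacity is not None:
        if command_name == "pythonshell":
            # Python shell jobs: either 0.0625 or 1 DPU
            if max_capacity not in (0.0625, 1):
                raise ValueError("Python shell jobs allow 0.0625 or 1 DPU")
        elif command_name == "glueetl":
            # Spark ETL jobs: 2 to 100 DPUs, no fractional allocation
            if max_capacity != int(max_capacity) or not 2 <= max_capacity <= 100:
                raise ValueError("Spark ETL jobs allow whole DPU values from 2 to 100")
    return True
```

A real deployment would rely on the service to enforce these limits; the helper just makes the documented rules concrete.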
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
  path (str) – The path of the value to delete.
- Return type:
  None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stack (or nested stack) boundaries, and the dependency will automatically be transferred to the relevant scope.
- Parameters:
  target (CfnResource) –
- Return type:
  None
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
  target (CfnResource) –
- Deprecated:
  use addDependency
- Stability:
  deprecated
- Return type:
  None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
  key (str) –
  value (Any) –
- See:
  https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
  None

Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:

  cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
  cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")

would add the overrides:

  "Properties": {
    "GlobalSecondaryIndexes": [
      {
        "Projection": {
          "NonKeyAttributes": [ "myattribute" ]
          ...
        }
        ...
      },
      {
        "ProjectionType": "INCLUDE"
        ...
      },
    ]
    ...
  }

The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
  path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
  value (Any) – The value. Could be primitive or complex.
- Return type:
  None
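The dot-notation path behavior described above can be sketched with a small helper that applies an override to a plain nested dict. This is a simplified model of the semantics, not the actual CDK implementation; it does not handle escaped . characters:

```python
def apply_override(template, path, value):
    """Walk a dot-separated path into a nested dict/list structure and set a value."""
    keys = path.split(".")
    node = template
    for key in keys[:-1]:
        if key.isdigit():
            # Numeric path segments index into arrays
            node = node[int(key)]
        else:
            # Intermediate keys are created as needed
            node = node.setdefault(key, {})
    last = keys[-1]
    if last.isdigit():
        node[int(last)] = value
    else:
        node[last] = value
    return template

# Mirror the documentation's GlobalSecondaryIndexes example
template = {"Properties": {"GlobalSecondaryIndexes": [{"Projection": {}}, {}]}}
apply_override(template,
               "Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes",
               ["myattribute"])
apply_override(template,
               "Properties.GlobalSecondaryIndexes.1.ProjectionType",
               "INCLUDE")
```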
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
  property_path (str) – The path to the property.
- Return type:
  None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
  property_path (str) – The path of the property.
  value (Any) – The value.
- Return type:
  None
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
  policy (Optional[RemovalPolicy]) –
  apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
  default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- See:
- Return type:
  None
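The effect on the synthesized template can be sketched as a small mapping from the RemovalPolicy enum names to the CloudFormation policy strings. This is an illustrative model of the behavior described above, not the CDK source:

```python
def render_policies(policy, apply_to_update_replace_policy=True):
    """Map a removal policy name to the CloudFormation attributes it sets (sketch)."""
    deletion = {"DESTROY": "Delete", "RETAIN": "Retain", "SNAPSHOT": "Snapshot"}[policy]
    rendered = {"DeletionPolicy": deletion}
    if apply_to_update_replace_policy:
        # By default the same policy is also applied to UpdateReplacePolicy
        rendered["UpdateReplacePolicy"] = deletion
    return rendered
```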
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
  attribute_name (str) – The name of the attribute.
  type_hint (Optional[ResolutionTypeHint]) –
- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
  key (str) –
- See:
  https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
  Any

Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
  inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
  None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
  List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
  List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
  new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
  None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks), and the dependency will automatically be removed from the relevant scope.
- Parameters:
  target (CfnResource) –
- Return type:
  None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
  target (CfnResource) – The dependency to replace.
  new_target (CfnResource) – The new dependency to add.
- Return type:
  None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::Glue::Job'
- allocated_capacity
This parameter is no longer supported.
Use MaxCapacity instead.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- command
The code that executes a job.
- connections
The connections used for this job.
- creation_stack
- Returns:
  the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- default_arguments
The default arguments for this job, specified as name-value pairs.
- description
A description of the job.
- execution_class
Indicates whether the job is run with a standard or flexible execution class.
- execution_property
The maximum number of concurrent runs that are allowed for this job.
- glue_version
Glue version determines the versions of Apache Spark and Python that AWS Glue supports.
- job_mode
A mode that describes how a job was created.
Valid values are SCRIPT, VISUAL, and NOTEBOOK.
- job_run_queuing_enabled
Specifies whether job run queuing is enabled for the job runs for this job.
- log_uri
This field is reserved for future use.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
  the logical ID as a stringified token. This value will only get resolved during synthesis.
- maintenance_window
This field specifies a day of the week and hour for a maintenance window for streaming jobs.
- max_capacity
The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs.
- max_retries
The maximum number of times to retry this job after a JobRun fails.
- name
The name you assign to this job definition.
- node
The tree node.
- non_overridable_arguments
Non-overridable arguments for this job, specified as name-value pairs.
- notification_property
Specifies configuration properties of a notification.
- number_of_workers
The number of workers of a defined workerType that are allocated when a job runs.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
- security_configuration
The name of the SecurityConfiguration structure to be used with this job.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
The tags to use with this job.
- timeout
The job timeout in minutes.
- worker_type
The type of predefined worker that is allocated when a job runs.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
  x (Any) –
- Return type:
  bool
- Returns:
  The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
  x (Any) –
- Return type:
  bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and to use this type-testing method instead.
- Parameters:
  x (Any) – Any object.
- Return type:
  bool
- Returns:
  true if x is an object created from a class which extends Construct.
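The duck-typing idea behind is_construct can be illustrated in plain Python: instead of an isinstance check, look for a marker attribute that every copy of the library sets on its base class. The marker name below is hypothetical; the real library uses its own internal symbol:

```python
class Construct:
    # Hypothetical marker attribute; every copy of the library would set the same name,
    # so the check works even across independently loaded copies of the class.
    _IS_CONSTRUCT = True

def is_construct(x):
    """Duck-typed check that avoids isinstance (sketch of the technique, not the real code)."""
    return getattr(x, "_IS_CONSTRUCT", False) is True

class MyStack(Construct):
    pass
```

An isinstance check would fail if MyStack were built against a second, separately loaded copy of the library; the attribute check would still succeed.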
ConnectionsListProperty
- class CfnJob.ConnectionsListProperty(*, connections=None)
Bases:
object
Specifies the connections used by a job.
- Parameters:
  connections (Optional[Sequence[str]]) – A list of connections used by the job.
- See:
- ExampleMetadata:
  fixture=_generated

Example:

  # The code below shows an example of how to instantiate this type.
  # The values are placeholders you should change.
  from aws_cdk import aws_glue as glue

  connections_list_property = glue.CfnJob.ConnectionsListProperty(
      connections=["connections"]
  )
Attributes
- connections
A list of connections used by the job.
ExecutionPropertyProperty
- class CfnJob.ExecutionPropertyProperty(*, max_concurrent_runs=None)
Bases:
object
An execution property of a job.
- Parameters:
  max_concurrent_runs (Union[int, float, None]) – The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
- See:
- ExampleMetadata:
  fixture=_generated

Example:

  # The code below shows an example of how to instantiate this type.
  # The values are placeholders you should change.
  from aws_cdk import aws_glue as glue

  execution_property_property = glue.CfnJob.ExecutionPropertyProperty(
      max_concurrent_runs=123
  )
Attributes
- max_concurrent_runs
The maximum number of concurrent runs allowed for the job.
The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
JobCommandProperty
- class CfnJob.JobCommandProperty(*, name=None, python_version=None, runtime=None, script_location=None)
Bases:
object
Specifies code executed when a job is run.
- Parameters:
  name (Optional[str]) – The name of the job command. For an Apache Spark ETL job, this must be glueetl. For a Python shell job, it must be pythonshell. For an Apache Spark streaming ETL job, this must be gluestreaming. For a Ray job, this must be glueray.
  python_version (Optional[str]) – The Python version being used to execute a Python shell job. Allowed values are 3 or 3.9. Version 2 is deprecated.
  runtime (Optional[str]) – In Ray jobs, Runtime is used to specify the versions of Ray, Python, and additional libraries available in your environment. This field is not used in other job types. For supported runtime environment values, see Working with Ray jobs in the AWS Glue Developer Guide.
  script_location (Optional[str]) – Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that executes a job (required).
- See:
- ExampleMetadata:
  fixture=_generated

Example:

  # The code below shows an example of how to instantiate this type.
  # The values are placeholders you should change.
  from aws_cdk import aws_glue as glue

  job_command_property = glue.CfnJob.JobCommandProperty(
      name="name",
      python_version="pythonVersion",
      runtime="runtime",
      script_location="scriptLocation"
  )
Attributes
- name
The name of the job command.
For an Apache Spark ETL job, this must be glueetl. For a Python shell job, it must be pythonshell. For an Apache Spark streaming ETL job, this must be gluestreaming. For a Ray job, this must be glueray.
- python_version
The Python version being used to execute a Python shell job.
Allowed values are 3 or 3.9. Version 2 is deprecated.
- runtime
In Ray jobs, Runtime is used to specify the versions of Ray, Python and additional libraries available in your environment.
This field is not used in other job types. For supported runtime environment values, see Working with Ray jobs in the AWS Glue Developer Guide.
- script_location
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that executes a job (required).
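The job-type-to-command-name rules above can be captured in a small lookup table. The job-type keys below are labels chosen for this sketch; only the command-name values (glueetl, pythonshell, gluestreaming, glueray) come from the documentation:

```python
# Required JobCommand name per job type, per the rules above.
COMMAND_NAMES = {
    "spark_etl": "glueetl",
    "python_shell": "pythonshell",
    "spark_streaming": "gluestreaming",
    "ray": "glueray",
}

def command_name_for(job_type):
    """Return the JobCommand name required for a given job type (illustrative)."""
    try:
        return COMMAND_NAMES[job_type]
    except KeyError:
        raise ValueError(f"unknown job type: {job_type!r}")
```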
NotificationPropertyProperty
- class CfnJob.NotificationPropertyProperty(*, notify_delay_after=None)
Bases:
object
Specifies configuration properties of a notification.
- Parameters:
  notify_delay_after (Union[int, float, None]) – After a job run starts, the number of minutes to wait before sending a job run delay notification.
- See:
- ExampleMetadata:
  fixture=_generated

Example:

  # The code below shows an example of how to instantiate this type.
  # The values are placeholders you should change.
  from aws_cdk import aws_glue as glue

  notification_property_property = glue.CfnJob.NotificationPropertyProperty(
      notify_delay_after=123
  )
Attributes
- notify_delay_after
After a job run starts, the number of minutes to wait before sending a job run delay notification.