-ClientConfig <Amazon.Glue.AmazonGlueConfig>
Amazon.PowerShell.Cmdlets.GLUE.AmazonGlueClientCmdlet.ClientConfig
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Command_Name <String>
Specifies the name of the SessionCommand. Can be 'glueetl' or 'gluestreaming'.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Command_PythonVersion <String>
Specifies the Python version, which indicates the version supported for jobs of type Spark.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
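As a sketch, assuming this reference describes the New-GLUESession cmdlet (the Glue CreateSession operation) and using hypothetical session and role names, the two Command_* parameters map onto the SessionCommand structure:

New-GLUESession -Id 'my-etl-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -Command_PythonVersion '3'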
-Connection <String[]>
A list of connections used by the job.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Connections_Connections |
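For illustration (the connection names here are hypothetical), the list is passed as a plain string array:

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -Connection 'my-jdbc-connection','my-vpc-connection'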
-DefaultArgument <Hashtable>
A map array of key-value pairs. Max is 75 pairs.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | DefaultArguments |
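Following the usual AWS.Tools pattern, the map is supplied as a PowerShell hashtable; the argument keys below are illustrative examples, not an authoritative list:

$arguments = @{ '--enable-glue-datacatalog' = 'true'; '--TempDir' = 's3://my-bucket/temp/' }
New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -DefaultArgument $arguments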
-Description <String>
The description of the session.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-GlueVersion <String>
The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Id <String>
The ID of the session request.
Required? | True |
Position? | 1 |
Accept pipeline input? | True (ByValue, ByPropertyName) |
-IdleTimeout <Int32>
The number of minutes of inactivity before the session times out. The default for Spark ETL jobs is the value of Timeout. Consult the documentation for other job types.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
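A minimal sketch with hypothetical values, requesting a 30-minute idle timeout:

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -IdleTimeout 30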
-MaxCapacity <Double>
The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
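A sketch with hypothetical values; note that in the underlying CreateSession API, capacity is generally specified either as MaxCapacity or as a worker type plus a worker count, not both:

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -MaxCapacity 10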
-NumberOfWorker <Int32>
The number of workers of a defined WorkerType to use for the session.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | NumberOfWorkers |
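The worker count is normally paired with -WorkerType (described below); for example, with hypothetical values:

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -WorkerType G.1X -NumberOfWorker 5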
-PassThru <SwitchParameter>
Changes the cmdlet behavior to return the value passed to the Id parameter. The -PassThru parameter is deprecated; use -Select '^Id' instead. This parameter will be removed in a future version.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
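The same result without the deprecated switch, using the -Select behavior documented below (session and role values hypothetical):

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -Select '^Id'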
-RequestOrigin <String>
The origin of the request.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Role <String>
The IAM Role ARN.
Required? | True |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-SecurityConfiguration <String>
The name of the SecurityConfiguration structure to be used with the session.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Select <String>
Use the -Select parameter to control the cmdlet output. The default value is 'Session'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.Glue.Model.CreateSessionResponse). Specifying the name of a property of type Amazon.Glue.Model.CreateSessionResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
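For example, to capture the entire CreateSessionResponse rather than the default Session object (other values hypothetical):

$response = New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -Select '*'
$response.Session.Id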
-Tag <Hashtable>
The map of key-value pairs (tags) belonging to the session.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Tags |
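Tags are supplied as a hashtable; the keys and values here are placeholders:

New-GLUESession -Id 'my-session' -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' -Command_Name 'glueetl' -Tag @{ team = 'data-eng'; env = 'dev' }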
-Timeout <Int32>
The number of minutes before the session times out. The default for Spark ETL jobs is 48 hours (2880 minutes), which is the maximum session lifetime for this job type. Consult the documentation for other job types.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-WorkerType <Amazon.Glue.WorkerType>
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
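Putting the pieces together, a minimal end-to-end sketch (the account ID, role, and names are hypothetical) that creates a G.1X Spark ETL session and later deletes it with Remove-GLUESession, the corresponding DeleteSession cmdlet:

Import-Module AWS.Tools.Glue
$session = New-GLUESession -Id 'interactive-demo' `
    -Role 'arn:aws:iam::123456789012:role/GlueSessionRole' `
    -Command_Name 'glueetl' -Command_PythonVersion '3' `
    -GlueVersion '4.0' -WorkerType G.1X -NumberOfWorker 5 `
    -IdleTimeout 30 -Timeout 120
$session.Id
# Clean up when finished
Remove-GLUESession -Id $session.Id -Force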