CfnProcessingJob
- class aws_cdk.aws_sagemaker.CfnProcessingJob(scope, id, *, app_specification, processing_resources, role_arn, environment=None, experiment_config=None, network_config=None, processing_inputs=None, processing_job_name=None, processing_output_config=None, stopping_condition=None, tags=None)
Bases:
CfnResource
An Amazon SageMaker processing job that is used to analyze data and evaluate models.
For more information, see Process Data and Evaluate Models.
Also, note the following details specific to processing jobs created using CloudFormation stacks:
When you delete a CloudFormation stack with a processing job resource, the processing job is stopped using the StopProcessingJob API but not deleted. Any tags associated with the processing job are deleted using the DeleteTags API.
If any part of your CloudFormation stack deployment fails and a rollback initiates, processing jobs with a specified ProcessingJobName value might cause the stack to become stuck in a failed state. This occurs because during a rollback, CloudFormation attempts to recreate the stack resources. Processing job names must be unique, so when CloudFormation attempts to recreate a processing job using the already defined name, this results in an AlreadyExists error. To prevent this, we recommend that you don't specify the optional ProcessingJobName property, thereby allowing SageMaker to auto-generate a unique name for your processing job. This ensures successful stack rollbacks when necessary. If you must use custom job names, you have to manually modify the ProcessingJobName and redeploy the stack to recover from a failed rollback. A minimal sketch of the recommended pattern follows.
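The sketch below omits processing_job_name so that SageMaker generates a unique job name; the image URI, role ARN, and instance settings are placeholder assumptions, not values from this document:
# A minimal sketch of the recommended pattern: omit processing_job_name so
# SageMaker auto-generates a unique job name and stack rollbacks can succeed.
# The image URI, role ARN, and instance settings below are placeholders.
from aws_cdk import aws_sagemaker as sagemaker

cfn_processing_job = sagemaker.CfnProcessingJob(self, "MyCfnProcessingJob",
    app_specification=sagemaker.CfnProcessingJob.AppSpecificationProperty(
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-processor:latest"
    ),
    processing_resources=sagemaker.CfnProcessingJob.ProcessingResourcesProperty(
        cluster_config=sagemaker.CfnProcessingJob.ClusterConfigProperty(
            instance_count=1,
            instance_type="ml.m5.xlarge",
            volume_size_in_gb=30
        )
    ),
    role_arn="arn:aws:iam::123456789012:role/MySageMakerProcessingRole"
    # processing_job_name is intentionally omitted
)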
- See:
- CloudformationResource:
AWS::SageMaker::ProcessingJob
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

cfn_processing_job = sagemaker.CfnProcessingJob(self, "MyCfnProcessingJob",
    app_specification=sagemaker.CfnProcessingJob.AppSpecificationProperty(
        image_uri="imageUri",

        # the properties below are optional
        container_arguments=["containerArguments"],
        container_entrypoint=["containerEntrypoint"]
    ),
    processing_resources=sagemaker.CfnProcessingJob.ProcessingResourcesProperty(
        cluster_config=sagemaker.CfnProcessingJob.ClusterConfigProperty(
            instance_count=123,
            instance_type="instanceType",
            volume_size_in_gb=123,

            # the properties below are optional
            volume_kms_key_id="volumeKmsKeyId"
        )
    ),
    role_arn="roleArn",

    # the properties below are optional
    environment={
        "environment_key": "environment"
    },
    experiment_config=sagemaker.CfnProcessingJob.ExperimentConfigProperty(
        experiment_name="experimentName",
        run_name="runName",
        trial_component_display_name="trialComponentDisplayName",
        trial_name="trialName"
    ),
    network_config=sagemaker.CfnProcessingJob.NetworkConfigProperty(
        enable_inter_container_traffic_encryption=False,
        enable_network_isolation=False,
        vpc_config=sagemaker.CfnProcessingJob.VpcConfigProperty(
            security_group_ids=["securityGroupIds"],
            subnets=["subnets"]
        )
    ),
    processing_inputs=[sagemaker.CfnProcessingJob.ProcessingInputsObjectProperty(
        input_name="inputName",

        # the properties below are optional
        app_managed=False,
        dataset_definition=sagemaker.CfnProcessingJob.DatasetDefinitionProperty(
            athena_dataset_definition=sagemaker.CfnProcessingJob.AthenaDatasetDefinitionProperty(
                catalog="catalog",
                database="database",
                output_format="outputFormat",
                output_s3_uri="outputS3Uri",
                query_string="queryString",

                # the properties below are optional
                kms_key_id="kmsKeyId",
                output_compression="outputCompression",
                work_group="workGroup"
            ),
            data_distribution_type="dataDistributionType",
            input_mode="inputMode",
            local_path="localPath",
            redshift_dataset_definition=sagemaker.CfnProcessingJob.RedshiftDatasetDefinitionProperty(
                cluster_id="clusterId",
                cluster_role_arn="clusterRoleArn",
                database="database",
                db_user="dbUser",
                output_format="outputFormat",
                output_s3_uri="outputS3Uri",
                query_string="queryString",

                # the properties below are optional
                kms_key_id="kmsKeyId",
                output_compression="outputCompression"
            )
        ),
        s3_input=sagemaker.CfnProcessingJob.S3InputProperty(
            s3_data_type="s3DataType",
            s3_uri="s3Uri",

            # the properties below are optional
            local_path="localPath",
            s3_compression_type="s3CompressionType",
            s3_data_distribution_type="s3DataDistributionType",
            s3_input_mode="s3InputMode"
        )
    )],
    processing_job_name="processingJobName",
    processing_output_config=sagemaker.CfnProcessingJob.ProcessingOutputConfigProperty(
        outputs=[sagemaker.CfnProcessingJob.ProcessingOutputsObjectProperty(
            output_name="outputName",

            # the properties below are optional
            app_managed=False,
            feature_store_output=sagemaker.CfnProcessingJob.FeatureStoreOutputProperty(
                feature_group_name="featureGroupName"
            ),
            s3_output=sagemaker.CfnProcessingJob.S3OutputProperty(
                s3_upload_mode="s3UploadMode",
                s3_uri="s3Uri",

                # the properties below are optional
                local_path="localPath"
            )
        )],

        # the properties below are optional
        kms_key_id="kmsKeyId"
    ),
    stopping_condition=sagemaker.CfnProcessingJob.StoppingConditionProperty(
        max_runtime_in_seconds=123
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
app_specification (Union[IResolvable, AppSpecificationProperty, Dict[str, Any]]) – Configuration to run a processing job in a specified container image.
processing_resources (Union[IResolvable, ProcessingResourcesProperty, Dict[str, Any]]) – Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
role_arn (str) – The ARN of the role used to create the processing job.
environment (Union[Mapping[str, str], IResolvable, None]) – Sets the environment variables in the Docker container.
experiment_config (Union[IResolvable, ExperimentConfigProperty, Dict[str, Any], None]) – Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the CreateProcessingJob API.
network_config (Union[IResolvable, NetworkConfigProperty, Dict[str, Any], None]) – Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
processing_inputs (Union[IResolvable, Sequence[Union[IResolvable, ProcessingInputsObjectProperty, Dict[str, Any]]], None]) – List of input configurations for the processing job.
processing_job_name (Optional[str]) – The name of the processing job. If you don't provide a job name, a unique name is automatically created for the job.
processing_output_config (Union[IResolvable, ProcessingOutputConfigProperty, Dict[str, Any], None]) – Configuration for uploading output from the processing container.
stopping_condition (Union[IResolvable, StoppingConditionProperty, Dict[str, Any], None]) – Configures conditions under which the processing job should be stopped, such as how long the processing job has been running. After the condition is met, the processing job is stopped.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource)
- Return type:
None
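For example, a hedged sketch (cfn_model is a hypothetical stand-in for any other CfnResource in the app):
# Assume cfn_model is another CfnResource (e.g., a sagemaker.CfnModel) defined
# elsewhere in this app. The processing job is then only provisioned after
# cfn_model has been created, even if the two live in different stacks.
cfn_processing_job.add_dependency(cfn_model)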
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource)
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str)
value (Any)
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides
Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    }
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
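For example, a sketch that bumps the instance count rendered into the template (note the path uses CloudFormation's PascalCase property names, not the Python snake_case keyword arguments):
# Override the synthesized CloudFormation property directly; the path segments
# follow the AWS::SageMaker::ProcessingJob schema (PascalCase).
cfn_processing_job.add_property_override(
    "ProcessingResources.ClusterConfig.InstanceCount", 2)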
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy])
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- See:
- Return type:
None
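For example, a minimal sketch that keeps the resource in the account when it is removed from the stack:
from aws_cdk import RemovalPolicy

# Leave the processing job record in the account instead of deleting it
# together with the stack.
cfn_processing_job.apply_removal_policy(RemovalPolicy.RETAIN)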
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint])
- Return type:
Reference
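For example, a sketch equivalent to reading the generated attr_processing_job_arn accessor:
from aws_cdk import Token

# Resolve the ProcessingJobArn attribute to a string token; equivalent to the
# generated attr_processing_job_arn property on this class.
job_arn = Token.as_string(cfn_processing_job.get_att("ProcessingJobArn"))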
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str)
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource)
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::SageMaker::ProcessingJob'
- app_specification
Configuration to run a processing job in a specified container image.
- attr_auto_ml_job_arn
The Amazon Resource Name (ARN) of the AutoML job associated with this processing job.
- CloudformationAttribute:
AutoMLJobArn
- attr_creation_time
The time the processing job was created.
- CloudformationAttribute:
CreationTime
- attr_exit_message
A string, up to one KB in size, that contains metadata from the processing container when the processing job exits.
- CloudformationAttribute:
ExitMessage
- attr_failure_reason
A string, up to one KB in size, that contains the reason a processing job failed, if it failed.
- CloudformationAttribute:
FailureReason
- attr_last_modified_time
The time the processing job was last modified.
- CloudformationAttribute:
LastModifiedTime
- attr_monitoring_schedule_arn
The ARN of a monitoring schedule for an endpoint associated with this processing job.
- CloudformationAttribute:
MonitoringScheduleArn
- attr_processing_end_time
The time that the processing job ended.
- CloudformationAttribute:
ProcessingEndTime
- attr_processing_job_arn
The ARN of the processing job.
- CloudformationAttribute:
ProcessingJobArn
- attr_processing_job_status
The status of the processing job.
- CloudformationAttribute:
ProcessingJobStatus
- attr_processing_start_time
The time that the processing job started.
- CloudformationAttribute:
ProcessingStartTime
- attr_training_job_arn
The ARN of the training job associated with this processing job.
- CloudformationAttribute:
TrainingJobArn
- cdk_tag_manager
Tag Manager which manages the tags for this resource.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the metadata entry typed aws:cdk:logicalId, and with the bottom-most node internal entries filtered.
- environment
Sets the environment variables in the Docker container.
- experiment_config
Associates a SageMaker job as a trial component with an experiment and trial.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- network_config
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
- node
The tree node.
- processing_inputs
List of input configurations for the processing job.
- processing_job_name
The name of the processing job.
- processing_job_ref
A reference to a ProcessingJob resource.
- processing_output_config
Configuration for uploading output from the processing container.
- processing_resources
Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
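For example, a minimal sketch surfacing the Ref value as a stack output:
from aws_cdk import CfnOutput

# Export the CloudFormation { Ref } of the processing job as a stack output.
CfnOutput(self, "ProcessingJobRef", value=cfn_processing_job.ref)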
- role_arn
The ARN of the role used to create the processing job.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- stopping_condition
Configures conditions under which the processing job should be stopped, such as how long the processing job has been running.
- tags
An array of key-value pairs.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any)
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any)
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and to use this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
AppSpecificationProperty
- class CfnProcessingJob.AppSpecificationProperty(*, image_uri, container_arguments=None, container_entrypoint=None)
Bases:
object
Configuration to run a processing job in a specified container image.
- Parameters:
image_uri (str) – The container image to be run by the processing job.
container_arguments (Optional[Sequence[str]]) – The arguments for a container used to run a processing job.
container_entrypoint (Optional[Sequence[str]]) – The entrypoint for a container used to run a processing job.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

app_specification_property = sagemaker.CfnProcessingJob.AppSpecificationProperty(
    image_uri="imageUri",

    # the properties below are optional
    container_arguments=["containerArguments"],
    container_entrypoint=["containerEntrypoint"]
)
Attributes
- container_arguments
The arguments for a container used to run a processing job.
- container_entrypoint
The entrypoint for a container used to run a processing job.
- image_uri
The container image to be run by the processing job.
AthenaDatasetDefinitionProperty
- class CfnProcessingJob.AthenaDatasetDefinitionProperty(*, catalog, database, output_format, output_s3_uri, query_string, kms_key_id=None, output_compression=None, work_group=None)
Bases:
object
Configuration for Athena Dataset Definition input.
- Parameters:
catalog (str) – The name of the data catalog used in Athena query execution.
database (str) – The name of the database used in the Athena query execution.
output_format (str) – The data storage format for Athena query results.
output_s3_uri (str) – The location in Amazon S3 where Athena query results are stored.
query_string (str) – The SQL query statements to be executed.
kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data generated from an Athena query execution.
output_compression (Optional[str]) – The compression used for Athena query results.
work_group (Optional[str]) – The name of the workgroup in which the Athena query is being started.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

athena_dataset_definition_property = sagemaker.CfnProcessingJob.AthenaDatasetDefinitionProperty(
    catalog="catalog",
    database="database",
    output_format="outputFormat",
    output_s3_uri="outputS3Uri",
    query_string="queryString",

    # the properties below are optional
    kms_key_id="kmsKeyId",
    output_compression="outputCompression",
    work_group="workGroup"
)
Attributes
- catalog
The name of the data catalog used in Athena query execution.
- database
The name of the database used in the Athena query execution.
- kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt data generated from an Athena query execution.
- output_compression
The compression used for Athena query results.
- output_format
The data storage format for Athena query results.
- output_s3_uri
The location in Amazon S3 where Athena query results are stored.
- query_string
The SQL query statements to be executed.
- work_group
The name of the workgroup in which the Athena query is being started.
ClusterConfigProperty
- class CfnProcessingJob.ClusterConfigProperty(*, instance_count, instance_type, volume_size_in_gb, volume_kms_key_id=None)
Bases:
object
Configuration for the cluster used to run a processing job.
- Parameters:
instance_count (Union[int, float]) – The number of ML compute instances to use in the processing job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
instance_type (str) – The ML compute instance type for the processing job.
volume_size_in_gb (Union[int, float]) – The size of the ML storage volume in gigabytes that you want to provision. You must specify sufficient ML storage for your scenario. Note: Certain Nitro-based instances include local storage with a fixed total size, dependent on the instance type. When using these instances for processing, Amazon SageMaker mounts the local instance storage instead of Amazon EBS gp2 storage. You can't request a VolumeSizeInGB greater than the total size of the local instance storage. For a list of instance types that support local instance storage, including the total size per instance type, see Instance Store Volumes.
volume_kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the processing job. Note: Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage. For a list of instance types that support local instance storage, see Instance Store Volumes. For more information about local instance storage encryption, see SSD Instance Store Volumes.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

cluster_config_property = sagemaker.CfnProcessingJob.ClusterConfigProperty(
    instance_count=123,
    instance_type="instanceType",
    volume_size_in_gb=123,

    # the properties below are optional
    volume_kms_key_id="volumeKmsKeyId"
)
Attributes
- instance_count
The number of ML compute instances to use in the processing job.
For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instance_type
The ML compute instance type for the processing job.
- volume_kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the processing job.
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
- volume_size_in_gb
The size of the ML storage volume in gigabytes that you want to provision.
You must specify sufficient ML storage for your scenario.
Note: Certain Nitro-based instances include local storage with a fixed total size, dependent on the instance type. When using these instances for processing, Amazon SageMaker mounts the local instance storage instead of Amazon EBS gp2 storage. You can't request a VolumeSizeInGB greater than the total size of the local instance storage. For a list of instance types that support local instance storage, including the total size per instance type, see Instance Store Volumes (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes).
DatasetDefinitionProperty
- class CfnProcessingJob.DatasetDefinitionProperty(*, athena_dataset_definition=None, data_distribution_type=None, input_mode=None, local_path=None, redshift_dataset_definition=None)
Bases:
object
Configuration for Dataset Definition inputs.
The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.
- Parameters:
athena_dataset_definition (Union[IResolvable, AthenaDatasetDefinitionProperty, Dict[str, Any], None]) – Configuration for Athena Dataset Definition input.
data_distribution_type (Optional[str]) – Whether the generated dataset is FullyReplicated or ShardedByS3Key (default).
input_mode (Optional[str]) – Whether to use File or Pipe input mode. In File (default) mode, Amazon SageMaker copies the data from the input source onto the local Amazon Elastic Block Store (Amazon EBS) volumes before starting your training algorithm. This is the most commonly used input mode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your algorithm without using the EBS volume.
local_path (Optional[str]) – The local path where you want Amazon SageMaker to download the Dataset Definition inputs to run a processing job. LocalPath is an absolute path to the input data. This is a required parameter when AppManaged is False (default).
redshift_dataset_definition (Union[IResolvable, RedshiftDatasetDefinitionProperty, Dict[str, Any], None]) – Configuration for Redshift Dataset Definition input.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

dataset_definition_property = sagemaker.CfnProcessingJob.DatasetDefinitionProperty(
    athena_dataset_definition=sagemaker.CfnProcessingJob.AthenaDatasetDefinitionProperty(
        catalog="catalog",
        database="database",
        output_format="outputFormat",
        output_s3_uri="outputS3Uri",
        query_string="queryString",

        # the properties below are optional
        kms_key_id="kmsKeyId",
        output_compression="outputCompression",
        work_group="workGroup"
    ),
    data_distribution_type="dataDistributionType",
    input_mode="inputMode",
    local_path="localPath",
    redshift_dataset_definition=sagemaker.CfnProcessingJob.RedshiftDatasetDefinitionProperty(
        cluster_id="clusterId",
        cluster_role_arn="clusterRoleArn",
        database="database",
        db_user="dbUser",
        output_format="outputFormat",
        output_s3_uri="outputS3Uri",
        query_string="queryString",

        # the properties below are optional
        kms_key_id="kmsKeyId",
        output_compression="outputCompression"
    )
)
Attributes
- athena_dataset_definition
Configuration for Athena Dataset Definition input.
- data_distribution_type
Whether the generated dataset is FullyReplicated or ShardedByS3Key (default).
- input_mode
Whether to use File or Pipe input mode.
In File (default) mode, Amazon SageMaker copies the data from the input source onto the local Amazon Elastic Block Store (Amazon EBS) volumes before starting your training algorithm. This is the most commonly used input mode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your algorithm without using the EBS volume.
- local_path
The local path where you want Amazon SageMaker to download the Dataset Definition inputs to run a processing job.
LocalPath is an absolute path to the input data. This is a required parameter when AppManaged is False (default).
- redshift_dataset_definition
Configuration for Redshift Dataset Definition input.
ExperimentConfigProperty
- class CfnProcessingJob.ExperimentConfigProperty(*, experiment_name=None, run_name=None, trial_component_display_name=None, trial_name=None)
Bases:
object
Associates a SageMaker job as a trial component with an experiment and trial.
Specified when you call the CreateProcessingJob API.
- Parameters:
experiment_name (Optional[str]) – The name of an existing experiment to associate with the trial component.
run_name (Optional[str]) – The name of the experiment run to associate with the trial component.
trial_component_display_name (Optional[str]) – The display name for the trial component. If this key isn't specified, the display name is the trial component name.
trial_name (Optional[str]) – The name of an existing trial to associate the trial component with. If not specified, a new trial is created.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

experiment_config_property = sagemaker.CfnProcessingJob.ExperimentConfigProperty(
    experiment_name="experimentName",
    run_name="runName",
    trial_component_display_name="trialComponentDisplayName",
    trial_name="trialName"
)
Attributes
- experiment_name
The name of an existing experiment to associate with the trial component.
- run_name
The name of the experiment run to associate with the trial component.
- trial_component_display_name
The display name for the trial component.
If this key isn’t specified, the display name is the trial component name.
- trial_name
The name of an existing trial to associate the trial component with.
If not specified, a new trial is created.
FeatureStoreOutputProperty
- class CfnProcessingJob.FeatureStoreOutputProperty(*, feature_group_name)
Bases:
object
Configuration for processing job outputs in Amazon SageMaker Feature Store.
- Parameters:
feature_group_name (str) – The name of the Amazon SageMaker FeatureGroup to use as the destination for processing job output. Note that your processing script is responsible for putting records into your Feature Store.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

feature_store_output_property = sagemaker.CfnProcessingJob.FeatureStoreOutputProperty(
    feature_group_name="featureGroupName"
)
Attributes
- feature_group_name
The name of the Amazon SageMaker FeatureGroup to use as the destination for processing job output.
Note that your processing script is responsible for putting records into your Feature Store.
NetworkConfigProperty
- class CfnProcessingJob.NetworkConfigProperty(*, enable_inter_container_traffic_encryption=None, enable_network_isolation=None, vpc_config=None)
Bases:
object
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
- Parameters:
enable_inter_container_traffic_encryption (Union[bool, IResolvable, None]) – Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
enable_network_isolation (Union[bool, IResolvable, None]) – Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
vpc_config (Union[IResolvable, VpcConfigProperty, Dict[str, Any], None]) – Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

network_config_property = sagemaker.CfnProcessingJob.NetworkConfigProperty(
    enable_inter_container_traffic_encryption=False,
    enable_network_isolation=False,
    vpc_config=sagemaker.CfnProcessingJob.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
)
Attributes
- enable_inter_container_traffic_encryption
Whether to encrypt all communications between distributed processing jobs.
Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enable_network_isolation
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpc_config
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.
You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
ProcessingInputsObjectProperty
- class CfnProcessingJob.ProcessingInputsObjectProperty(*, input_name, app_managed=None, dataset_definition=None, s3_input=None)
Bases:
object
The inputs for a processing job.
The processing input must specify exactly one of either S3Input or DatasetDefinition types.
- Parameters:
input_name (str) – The name for the processing job input.
app_managed (Union[bool, IResolvable, None]) – When True, input operations such as data download are managed natively by the processing job application. When False (default), input operations are managed by Amazon SageMaker.
dataset_definition (Union[IResolvable, DatasetDefinitionProperty, Dict[str, Any], None]) – Configuration for Dataset Definition inputs. The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.
s3_input (Union[IResolvable, S3InputProperty, Dict[str, Any], None]) – Configuration for downloading input data from Amazon S3 into the processing container.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

processing_inputs_object_property = sagemaker.CfnProcessingJob.ProcessingInputsObjectProperty(
    input_name="inputName",

    # the properties below are optional
    app_managed=False,
    dataset_definition=sagemaker.CfnProcessingJob.DatasetDefinitionProperty(
        athena_dataset_definition=sagemaker.CfnProcessingJob.AthenaDatasetDefinitionProperty(
            catalog="catalog",
            database="database",
            output_format="outputFormat",
            output_s3_uri="outputS3Uri",
            query_string="queryString",

            # the properties below are optional
            kms_key_id="kmsKeyId",
            output_compression="outputCompression",
            work_group="workGroup"
        ),
        data_distribution_type="dataDistributionType",
        input_mode="inputMode",
        local_path="localPath",
        redshift_dataset_definition=sagemaker.CfnProcessingJob.RedshiftDatasetDefinitionProperty(
            cluster_id="clusterId",
            cluster_role_arn="clusterRoleArn",
            database="database",
            db_user="dbUser",
            output_format="outputFormat",
            output_s3_uri="outputS3Uri",
            query_string="queryString",

            # the properties below are optional
            kms_key_id="kmsKeyId",
            output_compression="outputCompression"
        )
    ),
    s3_input=sagemaker.CfnProcessingJob.S3InputProperty(
        s3_data_type="s3DataType",
        s3_uri="s3Uri",

        # the properties below are optional
        local_path="localPath",
        s3_compression_type="s3CompressionType",
        s3_data_distribution_type="s3DataDistributionType",
        s3_input_mode="s3InputMode"
    )
)
Attributes
- app_managed
When True, input operations such as data download are managed natively by the processing job application.
When False (default), input operations are managed by Amazon SageMaker.
- dataset_definition
Configuration for Dataset Definition inputs.
The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.
- input_name
The name for the processing job input.
- s3_input
Configuration for downloading input data from Amazon S3 into the processing container.
ProcessingOutputConfigProperty
- class CfnProcessingJob.ProcessingOutputConfigProperty(*, outputs, kms_key_id=None)
Bases:
object
Configuration for uploading output from the processing container.
- Parameters:
outputs (Union[IResolvable, Sequence[Union[IResolvable, ProcessingOutputsObjectProperty, Dict[str, Any]]]]) – An array of outputs configuring the data to upload from the processing container.
kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the processing job output. KmsKeyId can be an ID of a KMS key, ARN of a KMS key, or alias of a KMS key. The KmsKeyId is applied to all outputs.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

processing_output_config_property = sagemaker.CfnProcessingJob.ProcessingOutputConfigProperty(
    outputs=[sagemaker.CfnProcessingJob.ProcessingOutputsObjectProperty(
        output_name="outputName",

        # the properties below are optional
        app_managed=False,
        feature_store_output=sagemaker.CfnProcessingJob.FeatureStoreOutputProperty(
            feature_group_name="featureGroupName"
        ),
        s3_output=sagemaker.CfnProcessingJob.S3OutputProperty(
            s3_upload_mode="s3UploadMode",
            s3_uri="s3Uri",

            # the properties below are optional
            local_path="localPath"
        )
    )],

    # the properties below are optional
    kms_key_id="kmsKeyId"
)
Attributes
- kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt the processing job output.
KmsKeyId can be an ID of a KMS key, ARN of a KMS key, or alias of a KMS key. The KmsKeyId is applied to all outputs.
- outputs
An array of outputs configuring the data to upload from the processing container.
ProcessingOutputsObjectProperty
- class CfnProcessingJob.ProcessingOutputsObjectProperty(*, output_name, app_managed=None, feature_store_output=None, s3_output=None)
Bases:
object
Describes the results of a processing job.
The processing output must specify exactly one of either S3Output or FeatureStoreOutput types.
- Parameters:
output_name (str) – The name for the processing job output.
app_managed (Union[bool, IResolvable, None]) – When True, output operations such as data upload are managed natively by the processing job application. When False (default), output operations are managed by Amazon SageMaker.
feature_store_output (Union[IResolvable, FeatureStoreOutputProperty, Dict[str, Any], None]) – Configuration for processing job outputs in Amazon SageMaker Feature Store.
s3_output (Union[IResolvable, S3OutputProperty, Dict[str, Any], None]) – Configuration for uploading output data to Amazon S3 from the processing container.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

processing_outputs_object_property = sagemaker.CfnProcessingJob.ProcessingOutputsObjectProperty(
    output_name="outputName",

    # the properties below are optional
    app_managed=False,
    feature_store_output=sagemaker.CfnProcessingJob.FeatureStoreOutputProperty(
        feature_group_name="featureGroupName"
    ),
    s3_output=sagemaker.CfnProcessingJob.S3OutputProperty(
        s3_upload_mode="s3UploadMode",
        s3_uri="s3Uri",

        # the properties below are optional
        local_path="localPath"
    )
)
Attributes
- app_managed
When True, output operations such as data upload are managed natively by the processing job application.
When False (default), output operations are managed by Amazon SageMaker.
- feature_store_output
Configuration for processing job outputs in Amazon SageMaker Feature Store.
- output_name
The name for the processing job output.
- s3_output
Configuration for uploading output data to Amazon S3 from the processing container.
ProcessingResourcesProperty
- class CfnProcessingJob.ProcessingResourcesProperty(*, cluster_config)
Bases:
object
Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job.
In distributed training, you specify more than one instance.
- Parameters:
cluster_config (Union[IResolvable, ClusterConfigProperty, Dict[str, Any]]) – The configuration for the resources in a cluster used to run the processing job.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

processing_resources_property = sagemaker.CfnProcessingJob.ProcessingResourcesProperty(
    cluster_config=sagemaker.CfnProcessingJob.ClusterConfigProperty(
        instance_count=123,
        instance_type="instanceType",
        volume_size_in_gb=123,

        # the properties below are optional
        volume_kms_key_id="volumeKmsKeyId"
    )
)
Attributes
- cluster_config
The configuration for the resources in a cluster used to run the processing job.
RedshiftDatasetDefinitionProperty
- class CfnProcessingJob.RedshiftDatasetDefinitionProperty(*, cluster_id, cluster_role_arn, database, db_user, output_format, output_s3_uri, query_string, kms_key_id=None, output_compression=None)
Bases:
object
Configuration for Redshift Dataset Definition input.
- Parameters:
cluster_id (str) – The Redshift cluster identifier.
cluster_role_arn (str) – The IAM role attached to your Redshift cluster that Amazon SageMaker uses to generate datasets.
database (str) – The name of the Redshift database used in Redshift query execution.
db_user (str) – The database user name used in Redshift query execution.
output_format (str) – The data storage format for Redshift query results.
output_s3_uri (str) – The location in Amazon S3 where the Redshift query results are stored.
query_string (str) – The SQL query statements to be executed.
kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data from a Redshift execution.
output_compression (Optional[str]) – The compression used for Redshift query results.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

redshift_dataset_definition_property = sagemaker.CfnProcessingJob.RedshiftDatasetDefinitionProperty(
    cluster_id="clusterId",
    cluster_role_arn="clusterRoleArn",
    database="database",
    db_user="dbUser",
    output_format="outputFormat",
    output_s3_uri="outputS3Uri",
    query_string="queryString",

    # the properties below are optional
    kms_key_id="kmsKeyId",
    output_compression="outputCompression"
)
Attributes
- cluster_id
The Redshift cluster identifier.
- cluster_role_arn
The IAM role attached to your Redshift cluster that Amazon SageMaker uses to generate datasets.
- database
The name of the Redshift database used in Redshift query execution.
- db_user
The database user name used in Redshift query execution.
- kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt data from a Redshift execution.
- output_compression
The compression used for Redshift query results.
- output_format
The data storage format for Redshift query results.
- output_s3_uri
The location in Amazon S3 where the Redshift query results are stored.
- query_string
The SQL query statements to be executed.
S3InputProperty
- class CfnProcessingJob.S3InputProperty(*, s3_data_type, s3_uri, local_path=None, s3_compression_type=None, s3_data_distribution_type=None, s3_input_mode=None)
Bases:
object
Configuration for downloading input data from Amazon S3 into the processing container.
- Parameters:
s3_data_type (str) – Whether you use an S3Prefix or a ManifestFile for the data type. If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for the processing job. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for the processing job.
s3_uri (str) – The URI of the Amazon S3 prefix from which Amazon SageMaker downloads the data required to run the processing job.
local_path (Optional[str]) – The local path in your container where you want Amazon SageMaker to write input data to. LocalPath is an absolute path to the input data and must begin with /opt/ml/processing/. LocalPath is a required parameter when AppManaged is False (default).
s3_compression_type (Optional[str]) – Whether to GZIP-decompress the data in Amazon S3 as it is streamed into the processing container. Gzip can only be used when Pipe mode is specified as the S3InputMode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your container without using the EBS volume.
s3_data_distribution_type (Optional[str]) – Whether to distribute the data from Amazon S3 to all processing instances with FullyReplicated, or whether the data from Amazon S3 is shared by Amazon S3 key, downloading one shard of data to each processing instance.
s3_input_mode (Optional[str]) – Whether to use File or Pipe input mode. In File mode, Amazon SageMaker copies the data from the input source onto the local ML storage volume before starting your processing container. This is the most commonly used input mode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your processing container into named pipes without using the ML storage volume.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

s3_input_property = sagemaker.CfnProcessingJob.S3InputProperty(
    s3_data_type="s3DataType",
    s3_uri="s3Uri",

    # the properties below are optional
    local_path="localPath",
    s3_compression_type="s3CompressionType",
    s3_data_distribution_type="s3DataDistributionType",
    s3_input_mode="s3InputMode"
)
Attributes
- local_path
The local path in your container where you want Amazon SageMaker to write input data to.
LocalPath is an absolute path to the input data and must begin with /opt/ml/processing/. LocalPath is a required parameter when AppManaged is False (default).
- s3_compression_type
Whether to GZIP-decompress the data in Amazon S3 as it is streamed into the processing container.
Gzip can only be used when Pipe mode is specified as the S3InputMode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your container without using the EBS volume.
- s3_data_distribution_type
Whether to distribute the data from Amazon S3 to all processing instances with FullyReplicated, or whether the data from Amazon S3 is shared by Amazon S3 key, downloading one shard of data to each processing instance.
- s3_data_type
Whether you use an S3Prefix or a ManifestFile for the data type.
If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for the processing job. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for the processing job.
- s3_input_mode
Whether to use File or Pipe input mode.
In File mode, Amazon SageMaker copies the data from the input source onto the local ML storage volume before starting your processing container. This is the most commonly used input mode. In Pipe mode, Amazon SageMaker streams input data from the source directly to your processing container into named pipes without using the ML storage volume.
- s3_uri
The URI of the Amazon S3 prefix from which Amazon SageMaker downloads the data required to run the processing job.
S3OutputProperty
- class CfnProcessingJob.S3OutputProperty(*, s3_upload_mode, s3_uri, local_path=None)
Bases:
object
Configuration for uploading output data to Amazon S3 from the processing container.
- Parameters:
s3_upload_mode (str) – Whether to upload the results of the processing job continuously or after the job completes.
s3_uri (str) – The URI of the Amazon S3 prefix to which Amazon SageMaker uploads the output of the processing job.
local_path (Optional[str]) – The local path of a directory where you want Amazon SageMaker to upload its contents to Amazon S3. LocalPath is an absolute path to a directory containing output files. This directory will be created by the platform and exist when your container's entrypoint is invoked.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

s3_output_property = sagemaker.CfnProcessingJob.S3OutputProperty(
    s3_upload_mode="s3UploadMode",
    s3_uri="s3Uri",

    # the properties below are optional
    local_path="localPath"
)
Attributes
- local_path
The local path of a directory where you want Amazon SageMaker to upload its contents to Amazon S3.
LocalPath is an absolute path to a directory containing output files. This directory will be created by the platform and exist when your container's entrypoint is invoked.
- s3_upload_mode
Whether to upload the results of the processing job continuously or after the job completes.
- s3_uri
The URI of the Amazon S3 prefix to which Amazon SageMaker uploads the output of the processing job.
StoppingConditionProperty
- class CfnProcessingJob.StoppingConditionProperty(*, max_runtime_in_seconds)
Bases:
object
Configures conditions under which the processing job should be stopped, such as how long the processing job has been running.
After the condition is met, the processing job is stopped.
- Parameters:
max_runtime_in_seconds (Union[int, float]) – The maximum length of time, in seconds, that a training or compilation job can run before it is stopped. For compilation jobs, if the job does not complete during this time, a TimeOut error is generated. We recommend starting with 900 seconds and increasing as necessary based on your model. For all other jobs, if the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days. The maximum time that a TrainingJob can run in total, including any time spent publishing metrics or archiving and uploading models after it has been stopped, is 30 days.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

stopping_condition_property = sagemaker.CfnProcessingJob.StoppingConditionProperty(
    max_runtime_in_seconds=123
)
Attributes
- max_runtime_in_seconds
The maximum length of time, in seconds, that a training or compilation job can run before it is stopped.
For compilation jobs, if the job does not complete during this time, a TimeOut error is generated. We recommend starting with 900 seconds and increasing as necessary based on your model.
For all other jobs, if the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.
The maximum time that a TrainingJob can run in total, including any time spent publishing metrics or archiving and uploading models after it has been stopped, is 30 days.
VpcConfigProperty
- class CfnProcessingJob.VpcConfigProperty(*, security_group_ids, subnets)
Bases:
object
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.
You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
- Parameters:
security_group_ids (Sequence[str]) – The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
subnets (Sequence[str]) – The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

vpc_config_property = sagemaker.CfnProcessingJob.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
Attributes
- security_group_ids
The VPC security group IDs, in the form sg-xxxxxxxx.
Specify the security groups for the VPC that is specified in the Subnets field.
- subnets
The IDs of the subnets in the VPC to which you want to connect your training job or model.
For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.