CfnMonitoringSchedule
- class aws_cdk.aws_sagemaker.CfnMonitoringSchedule(scope, id, *, monitoring_schedule_config, monitoring_schedule_name, endpoint_name=None, failure_reason=None, last_monitoring_execution_summary=None, monitoring_schedule_status=None, tags=None)
Bases:
CfnResource
The AWS::SageMaker::MonitoringSchedule resource is an Amazon SageMaker resource type that regularly starts SageMaker processing jobs to monitor the data captured for a SageMaker endpoint.
- CloudformationResource:
AWS::SageMaker::MonitoringSchedule
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag
from aws_cdk import aws_sagemaker as sagemaker

cfn_monitoring_schedule = sagemaker.CfnMonitoringSchedule(self, "MyCfnMonitoringSchedule",
    monitoring_schedule_config=sagemaker.CfnMonitoringSchedule.MonitoringScheduleConfigProperty(
        monitoring_job_definition=sagemaker.CfnMonitoringSchedule.MonitoringJobDefinitionProperty(
            monitoring_app_specification=sagemaker.CfnMonitoringSchedule.MonitoringAppSpecificationProperty(
                image_uri="imageUri",
                # the properties below are optional
                container_arguments=["containerArguments"],
                container_entrypoint=["containerEntrypoint"],
                post_analytics_processor_source_uri="postAnalyticsProcessorSourceUri",
                record_preprocessor_source_uri="recordPreprocessorSourceUri"
            ),
            monitoring_inputs=[sagemaker.CfnMonitoringSchedule.MonitoringInputProperty(
                batch_transform_input=sagemaker.CfnMonitoringSchedule.BatchTransformInputProperty(
                    data_captured_destination_s3_uri="dataCapturedDestinationS3Uri",
                    dataset_format=sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
                        csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
                            header=False
                        ),
                        json=sagemaker.CfnMonitoringSchedule.JsonProperty(
                            line=False
                        ),
                        parquet=False
                    ),
                    local_path="localPath",
                    # the properties below are optional
                    exclude_features_attribute="excludeFeaturesAttribute",
                    s3_data_distribution_type="s3DataDistributionType",
                    s3_input_mode="s3InputMode"
                ),
                endpoint_input=sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
                    endpoint_name="endpointName",
                    local_path="localPath",
                    # the properties below are optional
                    exclude_features_attribute="excludeFeaturesAttribute",
                    s3_data_distribution_type="s3DataDistributionType",
                    s3_input_mode="s3InputMode"
                )
            )],
            monitoring_output_config=sagemaker.CfnMonitoringSchedule.MonitoringOutputConfigProperty(
                monitoring_outputs=[sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
                    s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
                        local_path="localPath",
                        s3_uri="s3Uri",
                        # the properties below are optional
                        s3_upload_mode="s3UploadMode"
                    )
                )],
                # the properties below are optional
                kms_key_id="kmsKeyId"
            ),
            monitoring_resources=sagemaker.CfnMonitoringSchedule.MonitoringResourcesProperty(
                cluster_config=sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
                    instance_count=123,
                    instance_type="instanceType",
                    volume_size_in_gb=123,
                    # the properties below are optional
                    volume_kms_key_id="volumeKmsKeyId"
                )
            ),
            role_arn="roleArn",
            # the properties below are optional
            baseline_config=sagemaker.CfnMonitoringSchedule.BaselineConfigProperty(
                constraints_resource=sagemaker.CfnMonitoringSchedule.ConstraintsResourceProperty(
                    s3_uri="s3Uri"
                ),
                statistics_resource=sagemaker.CfnMonitoringSchedule.StatisticsResourceProperty(
                    s3_uri="s3Uri"
                )
            ),
            environment={
                "environment_key": "environment"
            },
            network_config=sagemaker.CfnMonitoringSchedule.NetworkConfigProperty(
                enable_inter_container_traffic_encryption=False,
                enable_network_isolation=False,
                vpc_config=sagemaker.CfnMonitoringSchedule.VpcConfigProperty(
                    security_group_ids=["securityGroupIds"],
                    subnets=["subnets"]
                )
            ),
            stopping_condition=sagemaker.CfnMonitoringSchedule.StoppingConditionProperty(
                max_runtime_in_seconds=123
            )
        ),
        monitoring_job_definition_name="monitoringJobDefinitionName",
        monitoring_type="monitoringType",
        schedule_config=sagemaker.CfnMonitoringSchedule.ScheduleConfigProperty(
            schedule_expression="scheduleExpression",
            # the properties below are optional
            data_analysis_end_time="dataAnalysisEndTime",
            data_analysis_start_time="dataAnalysisStartTime"
        )
    ),
    monitoring_schedule_name="monitoringScheduleName",
    # the properties below are optional
    endpoint_name="endpointName",
    failure_reason="failureReason",
    last_monitoring_execution_summary=sagemaker.CfnMonitoringSchedule.MonitoringExecutionSummaryProperty(
        creation_time="creationTime",
        last_modified_time="lastModifiedTime",
        monitoring_execution_status="monitoringExecutionStatus",
        monitoring_schedule_name="monitoringScheduleName",
        scheduled_time="scheduledTime",
        # the properties below are optional
        endpoint_name="endpointName",
        failure_reason="failureReason",
        processing_job_arn="processingJobArn"
    ),
    monitoring_schedule_status="monitoringScheduleStatus",
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
monitoring_schedule_config (Union[IResolvable, MonitoringScheduleConfigProperty, Dict[str, Any]]) – The configuration object that specifies the monitoring schedule and defines the monitoring job.
monitoring_schedule_name (str) – The name of the monitoring schedule.
endpoint_name (Optional[str]) – The name of the endpoint using the monitoring schedule.
failure_reason (Optional[str]) – Contains the reason a monitoring job failed, if it failed.
last_monitoring_execution_summary (Union[IResolvable, MonitoringExecutionSummaryProperty, Dict[str, Any], None]) – Describes metadata on the last execution to run, if there was one.
monitoring_schedule_status (Optional[str]) – The status of the monitoring schedule.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – An array of key-value pairs to apply to this resource. For more information, see Tag.
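In practice, most of the placeholders in the generated example above are optional. As a minimal sketch, a data-quality schedule for an already-deployed endpoint might look like the following; the endpoint name, role ARN, image URI, and S3 bucket are hypothetical values you would replace with your own.
Example:
from aws_cdk import aws_sagemaker as sagemaker

# A hypothetical hourly monitoring schedule for an existing endpoint.
schedule = sagemaker.CfnMonitoringSchedule(self, "HourlyMonitor",
    monitoring_schedule_name="my-hourly-monitor",
    monitoring_schedule_config=sagemaker.CfnMonitoringSchedule.MonitoringScheduleConfigProperty(
        monitoring_job_definition=sagemaker.CfnMonitoringSchedule.MonitoringJobDefinitionProperty(
            monitoring_app_specification=sagemaker.CfnMonitoringSchedule.MonitoringAppSpecificationProperty(
                image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-image:latest"  # placeholder
            ),
            monitoring_inputs=[sagemaker.CfnMonitoringSchedule.MonitoringInputProperty(
                endpoint_input=sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
                    endpoint_name="my-endpoint",  # placeholder: endpoint with DataCaptureConfig enabled
                    local_path="/opt/ml/processing/input"
                )
            )],
            monitoring_output_config=sagemaker.CfnMonitoringSchedule.MonitoringOutputConfigProperty(
                monitoring_outputs=[sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
                    s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
                        local_path="/opt/ml/processing/output",
                        s3_uri="s3://my-monitoring-bucket/reports"  # placeholder
                    )
                )]
            ),
            monitoring_resources=sagemaker.CfnMonitoringSchedule.MonitoringResourcesProperty(
                cluster_config=sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
                    instance_count=1,
                    instance_type="ml.m5.xlarge",
                    volume_size_in_gb=20
                )
            ),
            role_arn="arn:aws:iam::123456789012:role/MonitoringRole"  # placeholder
        ),
        schedule_config=sagemaker.CfnMonitoringSchedule.ScheduleConfigProperty(
            schedule_expression="cron(0 * ? * * *)"  # hourly
        )
    )
)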
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
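For instance, a sketch that makes the schedule wait for a hypothetical endpoint resource (cfn_endpoint) defined elsewhere in the stack:
Example:
cfn_monitoring_schedule.add_dependency(cfn_endpoint)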
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
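A sketch of attaching template-level metadata to the synthesized resource; the key and value are hypothetical:
Example:
cfn_monitoring_schedule.add_metadata("Purpose", "model-quality-monitoring")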
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides
Example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
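As a sketch, overriding this schedule’s cron expression in the synthesized template (note the property path uses CloudFormation’s capitalization, not the Python property names):
Example:
cfn_monitoring_schedule.add_property_override(
    "MonitoringScheduleConfig.ScheduleConfig.ScheduleExpression",
    "cron(0 0 ? * * *)"
)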
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
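A minimal sketch that keeps the schedule in the account when the construct is removed from the app:
Example:
from aws_cdk import RemovalPolicy

cfn_monitoring_schedule.apply_removal_policy(RemovalPolicy.RETAIN)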
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
Reference
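A sketch that fetches the same value the generated attr_monitoring_schedule_arn accessor exposes:
Example:
arn_token = cfn_monitoring_schedule.get_att("MonitoringScheduleArn").to_string()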
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::SageMaker::MonitoringSchedule'
- attr_creation_time
The time when the monitoring schedule was created.
- CloudformationAttribute:
CreationTime
- attr_last_modified_time
The last time that the monitoring schedule was modified.
- CloudformationAttribute:
LastModifiedTime
- attr_monitoring_schedule_arn
The Amazon Resource Name (ARN) of the monitoring schedule.
- CloudformationAttribute:
MonitoringScheduleArn
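A sketch that surfaces the ARN as a stack output, assuming the construct from the example above:
Example:
from aws_cdk import CfnOutput

CfnOutput(self, "MonitoringScheduleArn",
    value=cfn_monitoring_schedule.attr_monitoring_schedule_arn
)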
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- endpoint_name
The name of the endpoint using the monitoring schedule.
- failure_reason
Contains the reason a monitoring job failed, if it failed.
- last_monitoring_execution_summary
Describes metadata on the last execution to run, if there was one.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- monitoring_schedule_config
The configuration object that specifies the monitoring schedule and defines the monitoring job.
- monitoring_schedule_name
The name of the monitoring schedule.
- monitoring_schedule_status
The status of the monitoring schedule.
- node
The tree node.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
An array of key-value pairs to apply to this resource.
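A sketch of adding a tag through the tag manager rather than the tags constructor property; the key and value are hypothetical:
Example:
cfn_monitoring_schedule.tags.set_tag("team", "ml-platform")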
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and to use this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
BaselineConfigProperty
- class CfnMonitoringSchedule.BaselineConfigProperty(*, constraints_resource=None, statistics_resource=None)
Bases:
object
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
- Parameters:
constraints_resource (Union[IResolvable, ConstraintsResourceProperty, Dict[str, Any], None]) – The Amazon S3 URI for the constraints resource.
statistics_resource (Union[IResolvable, StatisticsResourceProperty, Dict[str, Any], None]) – The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

baseline_config_property = sagemaker.CfnMonitoringSchedule.BaselineConfigProperty(
    constraints_resource=sagemaker.CfnMonitoringSchedule.ConstraintsResourceProperty(
        s3_uri="s3Uri"
    ),
    statistics_resource=sagemaker.CfnMonitoringSchedule.StatisticsResourceProperty(
        s3_uri="s3Uri"
    )
)
Attributes
- constraints_resource
The Amazon S3 URI for the constraints resource.
- statistics_resource
The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
BatchTransformInputProperty
- class CfnMonitoringSchedule.BatchTransformInputProperty(*, data_captured_destination_s3_uri, dataset_format, local_path, exclude_features_attribute=None, s3_data_distribution_type=None, s3_input_mode=None)
Bases:
object
Input object for the batch transform job.
- Parameters:
data_captured_destination_s3_uri (str) – The Amazon S3 location being used to capture the data.
dataset_format (Union[IResolvable, DatasetFormatProperty, Dict[str, Any]]) – The dataset format for your batch transform job.
local_path (str) – Path to the filesystem where the batch transform data is available to the container.
exclude_features_attribute (Optional[str]) – The attributes of the input data to exclude from the analysis.
s3_data_distribution_type (Optional[str]) – Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
s3_input_mode (Optional[str]) – Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

batch_transform_input_property = sagemaker.CfnMonitoringSchedule.BatchTransformInputProperty(
    data_captured_destination_s3_uri="dataCapturedDestinationS3Uri",
    dataset_format=sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
        csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
            header=False
        ),
        json=sagemaker.CfnMonitoringSchedule.JsonProperty(
            line=False
        ),
        parquet=False
    ),
    local_path="localPath",
    # the properties below are optional
    exclude_features_attribute="excludeFeaturesAttribute",
    s3_data_distribution_type="s3DataDistributionType",
    s3_input_mode="s3InputMode"
)
Attributes
- data_captured_destination_s3_uri
The Amazon S3 location being used to capture the data.
- dataset_format
The dataset format for your batch transform job.
- exclude_features_attribute
The attributes of the input data to exclude from the analysis.
- local_path
Path to the filesystem where the batch transform data is available to the container.
- s3_data_distribution_type
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key.
Defaults to FullyReplicated.
- s3_input_mode
Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
ClusterConfigProperty
- class CfnMonitoringSchedule.ClusterConfigProperty(*, instance_count, instance_type, volume_size_in_gb, volume_kms_key_id=None)
Bases:
object
Configuration for the cluster used to run model monitoring jobs.
- Parameters:
instance_count (Union[int, float]) – The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
instance_type (str) – The ML compute instance type for the processing job.
volume_size_in_gb (Union[int, float]) – The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
volume_kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

cluster_config_property = sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
    instance_count=123,
    instance_type="instanceType",
    volume_size_in_gb=123,
    # the properties below are optional
    volume_kms_key_id="volumeKmsKeyId"
)
Attributes
- instance_count
The number of ML compute instances to use in the model monitoring job.
For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instance_type
The ML compute instance type for the processing job.
- volume_kms_key_id
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- volume_size_in_gb
The size of the ML storage volume, in gigabytes, that you want to provision.
You must specify sufficient ML storage for your scenario.
ConstraintsResourceProperty
- class CfnMonitoringSchedule.ConstraintsResourceProperty(*, s3_uri=None)
Bases:
object
The Amazon S3 URI for the constraints resource.
- Parameters:
s3_uri (Optional[str]) – The Amazon S3 URI for the constraints resource.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

constraints_resource_property = sagemaker.CfnMonitoringSchedule.ConstraintsResourceProperty(
    s3_uri="s3Uri"
)
Attributes
- s3_uri
The Amazon S3 URI for the constraints resource.
CsvProperty
- class CfnMonitoringSchedule.CsvProperty(*, header=None)
Bases:
object
The CSV format.
- Parameters:
header (Union[bool, IResolvable, None]) – A boolean flag indicating if the given CSV has a header.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

csv_property = sagemaker.CfnMonitoringSchedule.CsvProperty(
    header=False
)
Attributes
- header
A boolean flag indicating if the given CSV has a header.
DatasetFormatProperty
- class CfnMonitoringSchedule.DatasetFormatProperty(*, csv=None, json=None, parquet=None)
Bases:
object
The dataset format of the data to monitor.
- Parameters:
csv (Union[IResolvable, CsvProperty, Dict[str, Any], None]) – The CSV format.
json (Union[IResolvable, JsonProperty, Dict[str, Any], None]) – The Json format.
parquet (Union[bool, IResolvable, None]) – A flag indicating if the dataset format is Parquet.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

dataset_format_property = sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
    csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
        header=False
    ),
    json=sagemaker.CfnMonitoringSchedule.JsonProperty(
        line=False
    ),
    parquet=False
)
Attributes
- csv
The CSV format.
- json
The Json format.
- parquet
A flag indicating if the dataset format is Parquet.
EndpointInputProperty
- class CfnMonitoringSchedule.EndpointInputProperty(*, endpoint_name, local_path, exclude_features_attribute=None, s3_data_distribution_type=None, s3_input_mode=None)
Bases:
object
Input object for the endpoint.
- Parameters:
endpoint_name (str) – An endpoint in the customer’s account which has DataCaptureConfig enabled.
local_path (str) – Path to the filesystem where the endpoint data is available to the container.
exclude_features_attribute (Optional[str]) – The attributes of the input data to exclude from the analysis.
s3_data_distribution_type (Optional[str]) – Whether input data distributed in Amazon S3 is fully replicated or sharded by an Amazon S3 key. Defaults to FullyReplicated.
s3_input_mode (Optional[str]) – Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

endpoint_input_property = sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
    endpoint_name="endpointName",
    local_path="localPath",
    # the properties below are optional
    exclude_features_attribute="excludeFeaturesAttribute",
    s3_data_distribution_type="s3DataDistributionType",
    s3_input_mode="s3InputMode"
)
Attributes
- endpoint_name
An endpoint in the customer’s account which has DataCaptureConfig enabled.
- exclude_features_attribute
The attributes of the input data to exclude from the analysis.
- local_path
Path to the filesystem where the endpoint data is available to the container.
- s3_data_distribution_type
Whether input data distributed in Amazon S3 is fully replicated or sharded by an Amazon S3 key.
Defaults to FullyReplicated.
- s3_input_mode
Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
JsonProperty
- class CfnMonitoringSchedule.JsonProperty(*, line=None)
Bases:
object
The Json format.
- Parameters:
line (Union[bool, IResolvable, None]) – A boolean flag indicating if it is JSON line format.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

json_property = sagemaker.CfnMonitoringSchedule.JsonProperty(
    line=False
)
Attributes
- line
A boolean flag indicating if it is JSON line format.
MonitoringAppSpecificationProperty
- class CfnMonitoringSchedule.MonitoringAppSpecificationProperty(*, image_uri, container_arguments=None, container_entrypoint=None, post_analytics_processor_source_uri=None, record_preprocessor_source_uri=None)
Bases:
object
Container image configuration object for the monitoring job.
- Parameters:
image_uri (str) – The container image to be run by the monitoring job.
container_arguments (Optional[Sequence[str]]) – An array of arguments for the container used to run the monitoring job.
container_entrypoint (Optional[Sequence[str]]) – Specifies the entrypoint for a container used to run the monitoring job.
post_analytics_processor_source_uri (Optional[str]) – An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.
record_preprocessor_source_uri (Optional[str]) – An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_app_specification_property = sagemaker.CfnMonitoringSchedule.MonitoringAppSpecificationProperty(
    image_uri="imageUri",
    # the properties below are optional
    container_arguments=["containerArguments"],
    container_entrypoint=["containerEntrypoint"],
    post_analytics_processor_source_uri="postAnalyticsProcessorSourceUri",
    record_preprocessor_source_uri="recordPreprocessorSourceUri"
)
Attributes
- container_arguments
An array of arguments for the container used to run the monitoring job.
- container_entrypoint
Specifies the entrypoint for a container used to run the monitoring job.
- image_uri
The container image to be run by the monitoring job.
- post_analytics_processor_source_uri
An Amazon S3 URI to a script that is called after analysis has been performed.
Applicable only for the built-in (first party) containers.
- record_preprocessor_source_uri
An Amazon S3 URI to a script that is called per row prior to running analysis.
It can base64 decode the payload and convert it into a flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.
MonitoringExecutionSummaryProperty
- class CfnMonitoringSchedule.MonitoringExecutionSummaryProperty(*, creation_time, last_modified_time, monitoring_execution_status, monitoring_schedule_name, scheduled_time, endpoint_name=None, failure_reason=None, processing_job_arn=None)
Bases:
object
Summary of information about the last monitoring job to run.
- Parameters:
creation_time (str) – The time at which the monitoring job was created.
last_modified_time (str) – A timestamp that indicates the last time the monitoring job was modified.
monitoring_execution_status (str) – The status of the monitoring job.
monitoring_schedule_name (str) – The name of the monitoring schedule.
scheduled_time (str) – The time the monitoring job was scheduled.
endpoint_name (Optional[str]) – The name of the endpoint used to run the monitoring job.
failure_reason (Optional[str]) – Contains the reason a monitoring job failed, if it failed.
processing_job_arn (Optional[str]) – The Amazon Resource Name (ARN) of the monitoring job.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_execution_summary_property = sagemaker.CfnMonitoringSchedule.MonitoringExecutionSummaryProperty(
    creation_time="creationTime",
    last_modified_time="lastModifiedTime",
    monitoring_execution_status="monitoringExecutionStatus",
    monitoring_schedule_name="monitoringScheduleName",
    scheduled_time="scheduledTime",
    # the properties below are optional
    endpoint_name="endpointName",
    failure_reason="failureReason",
    processing_job_arn="processingJobArn"
)
Attributes
- creation_time
The time at which the monitoring job was created.
- endpoint_name
The name of the endpoint used to run the monitoring job.
- failure_reason
Contains the reason a monitoring job failed, if it failed.
- last_modified_time
A timestamp that indicates the last time the monitoring job was modified.
- monitoring_execution_status
The status of the monitoring job.
- monitoring_schedule_name
The name of the monitoring schedule.
- processing_job_arn
The Amazon Resource Name (ARN) of the monitoring job.
- scheduled_time
The time the monitoring job was scheduled.
MonitoringInputProperty
- class CfnMonitoringSchedule.MonitoringInputProperty(*, batch_transform_input=None, endpoint_input=None)
Bases:
object
The inputs for a monitoring job.
- Parameters:
batch_transform_input (Union[IResolvable, BatchTransformInputProperty, Dict[str, Any], None]) – Input object for the batch transform job.
endpoint_input (Union[IResolvable, EndpointInputProperty, Dict[str, Any], None]) – The endpoint for a monitoring job.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_input_property = sagemaker.CfnMonitoringSchedule.MonitoringInputProperty(
    batch_transform_input=sagemaker.CfnMonitoringSchedule.BatchTransformInputProperty(
        data_captured_destination_s3_uri="dataCapturedDestinationS3Uri",
        dataset_format=sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
            csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
                header=False
            ),
            json=sagemaker.CfnMonitoringSchedule.JsonProperty(
                line=False
            ),
            parquet=False
        ),
        local_path="localPath",
        # the properties below are optional
        exclude_features_attribute="excludeFeaturesAttribute",
        s3_data_distribution_type="s3DataDistributionType",
        s3_input_mode="s3InputMode"
    ),
    endpoint_input=sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
        endpoint_name="endpointName",
        local_path="localPath",
        # the properties below are optional
        exclude_features_attribute="excludeFeaturesAttribute",
        s3_data_distribution_type="s3DataDistributionType",
        s3_input_mode="s3InputMode"
    )
)
Attributes
- batch_transform_input
Input object for the batch transform job.
- endpoint_input
The endpoint for a monitoring job.
MonitoringJobDefinitionProperty
- class CfnMonitoringSchedule.MonitoringJobDefinitionProperty(*, monitoring_app_specification, monitoring_inputs, monitoring_output_config, monitoring_resources, role_arn, baseline_config=None, environment=None, network_config=None, stopping_condition=None)
Bases:
object
Defines the monitoring job.
- Parameters:
monitoring_app_specification (Union[IResolvable, MonitoringAppSpecificationProperty, Dict[str, Any]]) – Configures the monitoring job to run a specified Docker container image.
monitoring_inputs (Union[IResolvable, Sequence[Union[IResolvable, MonitoringInputProperty, Dict[str, Any]]]]) – The array of inputs for the monitoring job. Currently we support monitoring an Amazon SageMaker Endpoint.
monitoring_output_config (Union[IResolvable, MonitoringOutputConfigProperty, Dict[str, Any]]) – The array of outputs from the monitoring job to be uploaded to Amazon S3.
monitoring_resources (Union[IResolvable, MonitoringResourcesProperty, Dict[str, Any]]) – Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job. In distributed processing, you specify more than one instance.
role_arn (str) – The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
baseline_config (Union[IResolvable, BaselineConfigProperty, Dict[str, Any], None]) – Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
environment (Union[IResolvable, Mapping[str, str], None]) – Sets the environment variables in the Docker container.
network_config (Union[IResolvable, NetworkConfigProperty, Dict[str, Any], None]) – Specifies networking options for a monitoring job.
stopping_condition (Union[IResolvable, StoppingConditionProperty, Dict[str, Any], None]) – Specifies a time limit for how long the monitoring job is allowed to run.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_job_definition_property = sagemaker.CfnMonitoringSchedule.MonitoringJobDefinitionProperty(
    monitoring_app_specification=sagemaker.CfnMonitoringSchedule.MonitoringAppSpecificationProperty(
        image_uri="imageUri",
        # the properties below are optional
        container_arguments=["containerArguments"],
        container_entrypoint=["containerEntrypoint"],
        post_analytics_processor_source_uri="postAnalyticsProcessorSourceUri",
        record_preprocessor_source_uri="recordPreprocessorSourceUri"
    ),
    monitoring_inputs=[sagemaker.CfnMonitoringSchedule.MonitoringInputProperty(
        batch_transform_input=sagemaker.CfnMonitoringSchedule.BatchTransformInputProperty(
            data_captured_destination_s3_uri="dataCapturedDestinationS3Uri",
            dataset_format=sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
                csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
                    header=False
                ),
                json=sagemaker.CfnMonitoringSchedule.JsonProperty(
                    line=False
                ),
                parquet=False
            ),
            local_path="localPath",
            # the properties below are optional
            exclude_features_attribute="excludeFeaturesAttribute",
            s3_data_distribution_type="s3DataDistributionType",
            s3_input_mode="s3InputMode"
        ),
        endpoint_input=sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
            endpoint_name="endpointName",
            local_path="localPath",
            # the properties below are optional
            exclude_features_attribute="excludeFeaturesAttribute",
            s3_data_distribution_type="s3DataDistributionType",
            s3_input_mode="s3InputMode"
        )
    )],
    monitoring_output_config=sagemaker.CfnMonitoringSchedule.MonitoringOutputConfigProperty(
        monitoring_outputs=[sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
            s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
                local_path="localPath",
                s3_uri="s3Uri",
                # the properties below are optional
                s3_upload_mode="s3UploadMode"
            )
        )],
        # the properties below are optional
        kms_key_id="kmsKeyId"
    ),
    monitoring_resources=sagemaker.CfnMonitoringSchedule.MonitoringResourcesProperty(
        cluster_config=sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
            instance_count=123,
            instance_type="instanceType",
            volume_size_in_gb=123,
            # the properties below are optional
            volume_kms_key_id="volumeKmsKeyId"
        )
    ),
    role_arn="roleArn",
    # the properties below are optional
    baseline_config=sagemaker.CfnMonitoringSchedule.BaselineConfigProperty(
        constraints_resource=sagemaker.CfnMonitoringSchedule.ConstraintsResourceProperty(
            s3_uri="s3Uri"
        ),
        statistics_resource=sagemaker.CfnMonitoringSchedule.StatisticsResourceProperty(
            s3_uri="s3Uri"
        )
    ),
    environment={
        "environment_key": "environment"
    },
    network_config=sagemaker.CfnMonitoringSchedule.NetworkConfigProperty(
        enable_inter_container_traffic_encryption=False,
        enable_network_isolation=False,
        vpc_config=sagemaker.CfnMonitoringSchedule.VpcConfigProperty(
            security_group_ids=["securityGroupIds"],
            subnets=["subnets"]
        )
    ),
    stopping_condition=sagemaker.CfnMonitoringSchedule.StoppingConditionProperty(
        max_runtime_in_seconds=123
    )
)
Attributes
- baseline_config
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
- environment
Sets the environment variables in the Docker container.
- monitoring_app_specification
Configures the monitoring job to run a specified Docker container image.
- monitoring_inputs
The array of inputs for the monitoring job.
Currently we support monitoring an Amazon SageMaker Endpoint.
- monitoring_output_config
The array of outputs from the monitoring job to be uploaded to Amazon S3.
- monitoring_resources
Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job.
In distributed processing, you specify more than one instance.
- network_config
Specifies networking options for a monitoring job.
- role_arn
The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- stopping_condition
Specifies a time limit for how long the monitoring job is allowed to run.
MonitoringOutputConfigProperty
- class CfnMonitoringSchedule.MonitoringOutputConfigProperty(*, monitoring_outputs, kms_key_id=None)
Bases:
object
The output configuration for monitoring jobs.
- Parameters:
monitoring_outputs (Union[IResolvable, Sequence[Union[IResolvable, MonitoringOutputProperty, Dict[str, Any]]]]) – Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
kms_key_id (Optional[str]) – The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_output_config_property = sagemaker.CfnMonitoringSchedule.MonitoringOutputConfigProperty(
    monitoring_outputs=[sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
        s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
            local_path="localPath",
            s3_uri="s3Uri",
            # the properties below are optional
            s3_upload_mode="s3UploadMode"
        )
    )],
    # the properties below are optional
    kms_key_id="kmsKeyId"
)
Attributes
- kms_key_id
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoring_outputs
Monitoring outputs for monitoring jobs.
This is where the output of the periodic monitoring jobs is uploaded.
MonitoringOutputProperty
- class CfnMonitoringSchedule.MonitoringOutputProperty(*, s3_output)
Bases:
object
The output object for a monitoring job.
- Parameters:
s3_output (Union[IResolvable, S3OutputProperty, Dict[str, Any]]) – The Amazon S3 storage location where the results of a monitoring job are saved.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_output_property = sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
    s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
        local_path="localPath",
        s3_uri="s3Uri",
        # the properties below are optional
        s3_upload_mode="s3UploadMode"
    )
)
Attributes
- s3_output
The Amazon S3 storage location where the results of a monitoring job are saved.
MonitoringResourcesProperty
- class CfnMonitoringSchedule.MonitoringResourcesProperty(*, cluster_config)
Bases:
object
Identifies the resources to deploy for a monitoring job.
- Parameters:
cluster_config (Union[IResolvable, ClusterConfigProperty, Dict[str, Any]]) – The configuration for the cluster resources used to run the processing job.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_resources_property = sagemaker.CfnMonitoringSchedule.MonitoringResourcesProperty(
    cluster_config=sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
        instance_count=123,
        instance_type="instanceType",
        volume_size_in_gb=123,
        # the properties below are optional
        volume_kms_key_id="volumeKmsKeyId"
    )
)
Attributes
- cluster_config
The configuration for the cluster resources used to run the processing job.
MonitoringScheduleConfigProperty
- class CfnMonitoringSchedule.MonitoringScheduleConfigProperty(*, monitoring_job_definition=None, monitoring_job_definition_name=None, monitoring_type=None, schedule_config=None)
Bases:
object
Configures the monitoring schedule and defines the monitoring job.
- Parameters:
monitoring_job_definition (Union[IResolvable, MonitoringJobDefinitionProperty, Dict[str, Any], None]) – Defines the monitoring job.
monitoring_job_definition_name (Optional[str]) – The name of the monitoring job definition to schedule.
monitoring_type (Optional[str]) – The type of the monitoring job definition to schedule.
schedule_config (Union[IResolvable, ScheduleConfigProperty, Dict[str, Any], None]) – Configures the monitoring schedule.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

monitoring_schedule_config_property = sagemaker.CfnMonitoringSchedule.MonitoringScheduleConfigProperty(
    monitoring_job_definition=sagemaker.CfnMonitoringSchedule.MonitoringJobDefinitionProperty(
        monitoring_app_specification=sagemaker.CfnMonitoringSchedule.MonitoringAppSpecificationProperty(
            image_uri="imageUri",
            # the properties below are optional
            container_arguments=["containerArguments"],
            container_entrypoint=["containerEntrypoint"],
            post_analytics_processor_source_uri="postAnalyticsProcessorSourceUri",
            record_preprocessor_source_uri="recordPreprocessorSourceUri"
        ),
        monitoring_inputs=[sagemaker.CfnMonitoringSchedule.MonitoringInputProperty(
            batch_transform_input=sagemaker.CfnMonitoringSchedule.BatchTransformInputProperty(
                data_captured_destination_s3_uri="dataCapturedDestinationS3Uri",
                dataset_format=sagemaker.CfnMonitoringSchedule.DatasetFormatProperty(
                    csv=sagemaker.CfnMonitoringSchedule.CsvProperty(
                        header=False
                    ),
                    json=sagemaker.CfnMonitoringSchedule.JsonProperty(
                        line=False
                    ),
                    parquet=False
                ),
                local_path="localPath",
                # the properties below are optional
                exclude_features_attribute="excludeFeaturesAttribute",
                s3_data_distribution_type="s3DataDistributionType",
                s3_input_mode="s3InputMode"
            ),
            endpoint_input=sagemaker.CfnMonitoringSchedule.EndpointInputProperty(
                endpoint_name="endpointName",
                local_path="localPath",
                # the properties below are optional
                exclude_features_attribute="excludeFeaturesAttribute",
                s3_data_distribution_type="s3DataDistributionType",
                s3_input_mode="s3InputMode"
            )
        )],
        monitoring_output_config=sagemaker.CfnMonitoringSchedule.MonitoringOutputConfigProperty(
            monitoring_outputs=[sagemaker.CfnMonitoringSchedule.MonitoringOutputProperty(
                s3_output=sagemaker.CfnMonitoringSchedule.S3OutputProperty(
                    local_path="localPath",
                    s3_uri="s3Uri",
                    # the properties below are optional
                    s3_upload_mode="s3UploadMode"
                )
            )],
            # the properties below are optional
            kms_key_id="kmsKeyId"
        ),
        monitoring_resources=sagemaker.CfnMonitoringSchedule.MonitoringResourcesProperty(
            cluster_config=sagemaker.CfnMonitoringSchedule.ClusterConfigProperty(
                instance_count=123,
                instance_type="instanceType",
                volume_size_in_gb=123,
                # the properties below are optional
                volume_kms_key_id="volumeKmsKeyId"
            )
        ),
        role_arn="roleArn",
        # the properties below are optional
        baseline_config=sagemaker.CfnMonitoringSchedule.BaselineConfigProperty(
            constraints_resource=sagemaker.CfnMonitoringSchedule.ConstraintsResourceProperty(
                s3_uri="s3Uri"
            ),
            statistics_resource=sagemaker.CfnMonitoringSchedule.StatisticsResourceProperty(
                s3_uri="s3Uri"
            )
        ),
        environment={
            "environment_key": "environment"
        },
        network_config=sagemaker.CfnMonitoringSchedule.NetworkConfigProperty(
            enable_inter_container_traffic_encryption=False,
            enable_network_isolation=False,
            vpc_config=sagemaker.CfnMonitoringSchedule.VpcConfigProperty(
                security_group_ids=["securityGroupIds"],
                subnets=["subnets"]
            )
        ),
        stopping_condition=sagemaker.CfnMonitoringSchedule.StoppingConditionProperty(
            max_runtime_in_seconds=123
        )
    ),
    monitoring_job_definition_name="monitoringJobDefinitionName",
    monitoring_type="monitoringType",
    schedule_config=sagemaker.CfnMonitoringSchedule.ScheduleConfigProperty(
        schedule_expression="scheduleExpression",
        # the properties below are optional
        data_analysis_end_time="dataAnalysisEndTime",
        data_analysis_start_time="dataAnalysisStartTime"
    )
)
Attributes
- monitoring_job_definition
Defines the monitoring job.
- monitoring_job_definition_name
The name of the monitoring job definition to schedule.
- monitoring_type
The type of the monitoring job definition to schedule.
- schedule_config
Configures the monitoring schedule.
NetworkConfigProperty
- class CfnMonitoringSchedule.NetworkConfigProperty(*, enable_inter_container_traffic_encryption=None, enable_network_isolation=None, vpc_config=None)
Bases:
object
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
- Parameters:
enable_inter_container_traffic_encryption (Union[bool, IResolvable, None]) – Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
enable_network_isolation (Union[bool, IResolvable, None]) – Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
vpc_config (Union[IResolvable, VpcConfigProperty, Dict[str, Any], None]) – Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

network_config_property = sagemaker.CfnMonitoringSchedule.NetworkConfigProperty(
    enable_inter_container_traffic_encryption=False,
    enable_network_isolation=False,
    vpc_config=sagemaker.CfnMonitoringSchedule.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
)
Attributes
- enable_inter_container_traffic_encryption
Whether to encrypt all communications between distributed processing jobs.
Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enable_network_isolation
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpc_config
Specifies a VPC that your training jobs and hosted models have access to.
Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.
S3OutputProperty
- class CfnMonitoringSchedule.S3OutputProperty(*, local_path, s3_uri, s3_upload_mode=None)
Bases:
object
Information about where and how you want to store the results of a monitoring job.
- Parameters:
local_path (str) – The local path to the S3 storage location where SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
s3_uri (str) – A URI that identifies the S3 storage location where SageMaker saves the results of a monitoring job.
s3_upload_mode (Optional[str]) – Whether to upload the results of the monitoring job continuously or after the job completes.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

s3_output_property = sagemaker.CfnMonitoringSchedule.S3OutputProperty(
    local_path="localPath",
    s3_uri="s3Uri",
    # the properties below are optional
    s3_upload_mode="s3UploadMode"
)
Attributes
- local_path
The local path to the S3 storage location where SageMaker saves the results of a monitoring job.
LocalPath is an absolute path for the output data.
- s3_upload_mode
Whether to upload the results of the monitoring job continuously or after the job completes.
- s3_uri
A URI that identifies the S3 storage location where SageMaker saves the results of a monitoring job.
ScheduleConfigProperty
- class CfnMonitoringSchedule.ScheduleConfigProperty(*, schedule_expression, data_analysis_end_time=None, data_analysis_start_time=None)
Bases:
object
Configuration details about the monitoring schedule.
- Parameters:
schedule_expression (str) – A cron expression that describes details about the monitoring schedule. The supported cron expressions are: - If you want to set the job to start every hour, use the following: Hourly: cron(0 * ? * * *) - If you want to start the job daily: cron(0 [00-23] ? * * *) - If you want to run the job one time, immediately, use the following keyword: NOW For example, the following are valid cron expressions: - Daily at noon UTC: cron(0 12 ? * * *) - Daily at midnight UTC: cron(0 0 ? * * *) To support running every 6, 12 hours, the following are also supported: cron(0 [00-23]/[01-24] ? * * *) For example, the following are valid cron expressions: - Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *) - Every two hours starting at midnight: cron(0 0/2 ? * * *) Even though the cron expression is set to start at 5PM UTC, note that there could be a delay of 0-20 minutes from the actual requested time to run the execution. We recommend that if you would like a daily schedule, you do not provide this parameter; Amazon SageMaker will pick a time for running every day. You can also specify the keyword NOW to run the monitoring job immediately, one time, without recurring.
data_analysis_end_time (Optional[str]) – Sets the end time for a monitoring job window. Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to end the window one hour before the start of each monitoring job, you would specify: "-PT1H". The end time that you specify must not follow the start time that you specify by more than 24 hours. You specify the start time with the DataAnalysisStartTime parameter. If you set ScheduleExpression to NOW, this parameter is required.
data_analysis_start_time (Optional[str]) – Sets the start time for a monitoring job window. Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to monitor the five hours of data in your dataset that precede the start of each monitoring job, you would specify: "-PT5H". The start time that you specify must not precede the end time that you specify by more than 24 hours. You specify the end time with the DataAnalysisEndTime parameter. If you set ScheduleExpression to NOW, this parameter is required.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

schedule_config_property = sagemaker.CfnMonitoringSchedule.ScheduleConfigProperty(
    schedule_expression="scheduleExpression",
    # the properties below are optional
    data_analysis_end_time="dataAnalysisEndTime",
    data_analysis_start_time="dataAnalysisStartTime"
)
Attributes
- data_analysis_end_time
Sets the end time for a monitoring job window.
Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to end the window one hour before the start of each monitoring job, you would specify: "-PT1H".
The end time that you specify must not follow the start time that you specify by more than 24 hours. You specify the start time with the DataAnalysisStartTime parameter.
If you set ScheduleExpression to NOW, this parameter is required.
- data_analysis_start_time
Sets the start time for a monitoring job window.
Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to monitor the five hours of data in your dataset that precede the start of each monitoring job, you would specify: "-PT5H".
The start time that you specify must not precede the end time that you specify by more than 24 hours. You specify the end time with the DataAnalysisEndTime parameter.
If you set ScheduleExpression to NOW, this parameter is required.
- schedule_expression
A cron expression that describes details about the monitoring schedule.
The supported cron expressions are:
- If you want to set the job to start every hour, use the following: Hourly: cron(0 * ? * * *)
- If you want to start the job daily: cron(0 [00-23] ? * * *)
- If you want to run the job one time, immediately, use the following keyword: NOW
For example, the following are valid cron expressions:
- Daily at noon UTC: cron(0 12 ? * * *)
- Daily at midnight UTC: cron(0 0 ? * * *)
To support running every 6, 12 hours, the following are also supported: cron(0 [00-23]/[01-24] ? * * *)
For example, the following are valid cron expressions:
- Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *)
- Every two hours starting at midnight: cron(0 0/2 ? * * *)
Even though the cron expression is set to start at 5PM UTC, note that there could be a delay of 0-20 minutes from the actual requested time to run the execution.
We recommend that if you would like a daily schedule, you do not provide this parameter; Amazon SageMaker will pick a time for running every day.
You can also specify the keyword NOW to run the monitoring job immediately, one time, without recurring.
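As a sketch, a schedule configuration that runs daily at midnight UTC and analyzes the preceding day of data; the offsets are hypothetical and must stay within 24 hours of each other:
Example:
schedule_config = sagemaker.CfnMonitoringSchedule.ScheduleConfigProperty(
    schedule_expression="cron(0 0 ? * * *)",  # daily at midnight UTC
    data_analysis_start_time="-PT24H",
    data_analysis_end_time="-PT0H"
)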
StatisticsResourceProperty
- class CfnMonitoringSchedule.StatisticsResourceProperty(*, s3_uri=None)
Bases:
object
The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- Parameters:
s3_uri (Optional[str]) – The S3 URI for the statistics resource.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

statistics_resource_property = sagemaker.CfnMonitoringSchedule.StatisticsResourceProperty(
    s3_uri="s3Uri"
)
Attributes
- s3_uri
The S3 URI for the statistics resource.
StoppingConditionProperty
- class CfnMonitoringSchedule.StoppingConditionProperty(*, max_runtime_in_seconds)
Bases:
object
Specifies a limit to how long a job can run.
When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
To stop a training job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best-effort case as the model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with CreateModel.
The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
- Parameters:
max_runtime_in_seconds (Union[int, float]) – The maximum length of time, in seconds, that a training or compilation job can run before it is stopped. For compilation jobs, if the job does not complete during this time, a TimeOut error is generated. We recommend starting with 900 seconds and increasing as necessary based on your model. For all other jobs, if the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days. The maximum time that a TrainingJob can run in total, including any time spent publishing metrics or archiving and uploading models after it has been stopped, is 30 days.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

stopping_condition_property = sagemaker.CfnMonitoringSchedule.StoppingConditionProperty(
    max_runtime_in_seconds=123
)
Attributes
- max_runtime_in_seconds
The maximum length of time, in seconds, that a training or compilation job can run before it is stopped.
For compilation jobs, if the job does not complete during this time, a TimeOut error is generated. We recommend starting with 900 seconds and increasing as necessary based on your model.
For all other jobs, if the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.
The maximum time that a TrainingJob can run in total, including any time spent publishing metrics or archiving and uploading models after it has been stopped, is 30 days.
VpcConfigProperty
- class CfnMonitoringSchedule.VpcConfigProperty(*, security_group_ids, subnets)
Bases:
object
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.
You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
- Parameters:
security_group_ids (Sequence[str]) – The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
subnets (Sequence[str]) – The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

vpc_config_property = sagemaker.CfnMonitoringSchedule.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
Attributes
- security_group_ids
The VPC security group IDs, in the form sg-xxxxxxxx.
Specify the security groups for the VPC that is specified in the Subnets field.
- subnets
The ID of the subnets in the VPC to which you want to connect your training job or model.
For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.