CfnEndpointConfig
- class aws_cdk.aws_sagemaker.CfnEndpointConfig(scope, id, *, production_variants, async_inference_config=None, data_capture_config=None, enable_network_isolation=None, endpoint_config_name=None, execution_role_arn=None, explainer_config=None, kms_key_id=None, shadow_production_variants=None, tags=None, vpc_config=None)
Bases:
CfnResource
The AWS::SageMaker::EndpointConfig resource creates a configuration for an Amazon SageMaker endpoint. For more information, see CreateEndpointConfig in the SageMaker Developer Guide.
- See:
- CloudformationResource:
AWS::SageMaker::EndpointConfig
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

cfn_endpoint_config = sagemaker.CfnEndpointConfig(self, "MyCfnEndpointConfig",
    production_variants=[sagemaker.CfnEndpointConfig.ProductionVariantProperty(
        variant_name="variantName",

        # the properties below are optional
        accelerator_type="acceleratorType",
        container_startup_health_check_timeout_in_seconds=123,
        enable_ssm_access=False,
        initial_instance_count=123,
        initial_variant_weight=123,
        instance_type="instanceType",
        managed_instance_scaling=sagemaker.CfnEndpointConfig.ManagedInstanceScalingProperty(
            max_instance_count=123,
            min_instance_count=123,
            status="status"
        ),
        model_data_download_timeout_in_seconds=123,
        model_name="modelName",
        routing_config=sagemaker.CfnEndpointConfig.RoutingConfigProperty(
            routing_strategy="routingStrategy"
        ),
        serverless_config=sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
            max_concurrency=123,
            memory_size_in_mb=123,

            # the properties below are optional
            provisioned_concurrency=123
        ),
        volume_size_in_gb=123
    )],

    # the properties below are optional
    async_inference_config=sagemaker.CfnEndpointConfig.AsyncInferenceConfigProperty(
        output_config=sagemaker.CfnEndpointConfig.AsyncInferenceOutputConfigProperty(
            kms_key_id="kmsKeyId",
            notification_config=sagemaker.CfnEndpointConfig.AsyncInferenceNotificationConfigProperty(
                error_topic="errorTopic",
                include_inference_response_in=["includeInferenceResponseIn"],
                success_topic="successTopic"
            ),
            s3_failure_path="s3FailurePath",
            s3_output_path="s3OutputPath"
        ),

        # the properties below are optional
        client_config=sagemaker.CfnEndpointConfig.AsyncInferenceClientConfigProperty(
            max_concurrent_invocations_per_instance=123
        )
    ),
    data_capture_config=sagemaker.CfnEndpointConfig.DataCaptureConfigProperty(
        capture_options=[sagemaker.CfnEndpointConfig.CaptureOptionProperty(
            capture_mode="captureMode"
        )],
        destination_s3_uri="destinationS3Uri",
        initial_sampling_percentage=123,

        # the properties below are optional
        capture_content_type_header=sagemaker.CfnEndpointConfig.CaptureContentTypeHeaderProperty(
            csv_content_types=["csvContentTypes"],
            json_content_types=["jsonContentTypes"]
        ),
        enable_capture=False,
        kms_key_id="kmsKeyId"
    ),
    enable_network_isolation=False,
    endpoint_config_name="endpointConfigName",
    execution_role_arn="executionRoleArn",
    explainer_config=sagemaker.CfnEndpointConfig.ExplainerConfigProperty(
        clarify_explainer_config=sagemaker.CfnEndpointConfig.ClarifyExplainerConfigProperty(
            shap_config=sagemaker.CfnEndpointConfig.ClarifyShapConfigProperty(
                shap_baseline_config=sagemaker.CfnEndpointConfig.ClarifyShapBaselineConfigProperty(
                    mime_type="mimeType",
                    shap_baseline="shapBaseline",
                    shap_baseline_uri="shapBaselineUri"
                ),

                # the properties below are optional
                number_of_samples=123,
                seed=123,
                text_config=sagemaker.CfnEndpointConfig.ClarifyTextConfigProperty(
                    granularity="granularity",
                    language="language"
                ),
                use_logit=False
            ),

            # the properties below are optional
            enable_explanations="enableExplanations",
            inference_config=sagemaker.CfnEndpointConfig.ClarifyInferenceConfigProperty(
                content_template="contentTemplate",
                feature_headers=["featureHeaders"],
                features_attribute="featuresAttribute",
                feature_types=["featureTypes"],
                label_attribute="labelAttribute",
                label_headers=["labelHeaders"],
                label_index=123,
                max_payload_in_mb=123,
                max_record_count=123,
                probability_attribute="probabilityAttribute",
                probability_index=123
            )
        )
    ),
    kms_key_id="kmsKeyId",
    shadow_production_variants=[sagemaker.CfnEndpointConfig.ProductionVariantProperty(
        variant_name="variantName",

        # the properties below are optional
        accelerator_type="acceleratorType",
        container_startup_health_check_timeout_in_seconds=123,
        enable_ssm_access=False,
        initial_instance_count=123,
        initial_variant_weight=123,
        instance_type="instanceType",
        managed_instance_scaling=sagemaker.CfnEndpointConfig.ManagedInstanceScalingProperty(
            max_instance_count=123,
            min_instance_count=123,
            status="status"
        ),
        model_data_download_timeout_in_seconds=123,
        model_name="modelName",
        routing_config=sagemaker.CfnEndpointConfig.RoutingConfigProperty(
            routing_strategy="routingStrategy"
        ),
        serverless_config=sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
            max_concurrency=123,
            memory_size_in_mb=123,

            # the properties below are optional
            provisioned_concurrency=123
        ),
        volume_size_in_gb=123
    )],
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    vpc_config=sagemaker.CfnEndpointConfig.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
)
- Parameters:
  - scope (Construct) – Scope in which this resource is defined.
  - id (str) – Construct identifier for this resource (unique in its scope).
  - production_variants (Union[IResolvable, Sequence[Union[IResolvable, ProductionVariantProperty, Dict[str, Any]]]]) – A list of ProductionVariant objects, one for each model that you want to host at this endpoint.
  - async_inference_config (Union[IResolvable, AsyncInferenceConfigProperty, Dict[str, Any], None]) – Specifies configuration for how an endpoint performs asynchronous inference.
  - data_capture_config (Union[IResolvable, DataCaptureConfigProperty, Dict[str, Any], None]) – Specifies how to capture endpoint data for model monitor. The data capture configuration applies to all production variants hosted at the endpoint.
  - enable_network_isolation (Union[bool, IResolvable, None]) –
  - endpoint_config_name (Optional[str]) – The name of the endpoint configuration.
  - execution_role_arn (Optional[str]) –
  - explainer_config (Union[IResolvable, ExplainerConfigProperty, Dict[str, Any], None]) – A parameter to activate explainers.
  - kms_key_id (Optional[str]) – The Amazon Resource Name (ARN) of an AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
    - Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
    - Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
    - Alias name: alias/ExampleAlias
    - Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
    The KMS key policy must grant permission to the IAM role that you specify in your CreateEndpoint, UpdateEndpoint requests. For more information, refer to the AWS Key Management Service section Using Key Policies in AWS KMS. Note: Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a KmsKeyId when using an instance type with local storage. If any of the models that you specify in the ProductionVariants parameter use Nitro-based instances with local storage, do not specify a value for the KmsKeyId parameter. If you specify a value for KmsKeyId when using any Nitro-based instances with local storage, the call to CreateEndpointConfig fails. For a list of instance types that support local instance storage, see Instance Store Volumes. For more information about local instance storage encryption, see SSD Instance Store Volumes.
  - shadow_production_variants (Union[IResolvable, Sequence[Union[IResolvable, ProductionVariantProperty, Dict[str, Any]]], None]) – Array of ProductionVariant objects. There is one for each model that you want to host at this endpoint in shadow mode with production traffic replicated from the model specified on ProductionVariants. If you use this field, you can only specify one variant for ProductionVariants and one variant for ShadowProductionVariants.
  - tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A list of key-value pairs to apply to this resource. For more information, see Resource Tag and Using Cost Allocation Tags.
  - vpc_config (Union[IResolvable, VpcConfigProperty, Dict[str, Any], None]) –
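For orientation beyond the generated placeholders, the following is a minimal sketch of a single serverless production variant. It is illustrative only: the construct ID "ServerlessEndpointConfig" and the model name "MyModel" are assumptions, not values defined in this reference.
Example:
from aws_cdk import aws_sagemaker as sagemaker

# Minimal sketch: one serverless variant serving an existing model.
# "MyModel" is assumed to be a model already defined elsewhere in the stack.
endpoint_config = sagemaker.CfnEndpointConfig(self, "ServerlessEndpointConfig",
    production_variants=[sagemaker.CfnEndpointConfig.ProductionVariantProperty(
        variant_name="AllTraffic",
        model_name="MyModel",
        serverless_config=sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
            max_concurrency=5,
            memory_size_in_mb=2048
        )
    )]
)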
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
  path (str) – The path of the value to delete.
- Return type:
  None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
  target (CfnResource) –
- Return type:
  None
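A hedged usage sketch, assuming a sagemaker.CfnModel named model exists in the same stack:
Example:
# Ensure the model resource is provisioned before this endpoint configuration.
cfn_endpoint_config.add_dependency(model)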
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
  target (CfnResource) –
- Deprecated:
  use addDependency
- Stability:
  deprecated
- Return type:
  None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
  - key (str) –
  - value (Any) –
- See:
  https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
  None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides
Example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
  - path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
  - value (Any) – The value. Could be primitive or complex.
- Return type:
  None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
  property_path (str) – The path to the property.
- Return type:
  None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
  - property_path (str) – The path of the property.
  - value (Any) – The value.
- Return type:
  None
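As an illustration, the override below pins the synthesized EndpointConfigName property to a fixed value; "my-endpoint-config" is a placeholder, not a value from this reference.
Example:
# Force a specific EndpointConfigName in the synthesized template.
cfn_endpoint_config.add_property_override("EndpointConfigName", "my-endpoint-config")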
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
  - policy (Optional[RemovalPolicy]) –
  - apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
  - default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- See:
- Return type:
  None
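A short usage sketch; whether RETAIN is appropriate depends on your cleanup strategy and is an assumption here.
Example:
from aws_cdk import RemovalPolicy

# Keep the endpoint configuration in the account if the stack is deleted.
cfn_endpoint_config.apply_removal_policy(RemovalPolicy.RETAIN)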
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
  - attribute_name (str) – The name of the attribute.
  - type_hint (Optional[ResolutionTypeHint]) –
- Return type:
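For example, the generated attr_endpoint_config_name accessor is roughly equivalent to the call sketched below; the attribute name follows the CloudFormation return values for this resource type.
Example:
# Resolve the EndpointConfigName return value at deploy time.
name_token = cfn_endpoint_config.get_att("EndpointConfigName").to_string()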
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
  key (str) –
- See:
  https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
  Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
  inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
  None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
  List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
  List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
  new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
  None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
  target (CfnResource) –
- Return type:
  None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
  - target (CfnResource) – The dependency to replace.
  - new_target (CfnResource) – The new dependency to add.
- Return type:
  None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::SageMaker::EndpointConfig'
- async_inference_config
Specifies configuration for how an endpoint performs asynchronous inference.
- attr_endpoint_config_name
The name of the endpoint configuration, such as MyEndpointConfiguration.
- CloudformationAttribute:
EndpointConfigName
- attr_id
Id
- Type:
cloudformationAttribute
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
  the stack trace of the point where this Resource was created from, sourced from the metadata entry typed aws:cdk:logicalId, and with the bottom-most node internal entries filtered.
- data_capture_config
Specifies how to capture endpoint data for model monitor.
- enable_network_isolation
- endpoint_config_name
The name of the endpoint configuration.
- execution_role_arn
- explainer_config
A parameter to activate explainers.
- kms_key_id
The Amazon Resource Name (ARN) of an AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- node
The tree node.
- production_variants
A list of ProductionVariant objects, one for each model that you want to host at this endpoint.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
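A hedged sketch of wiring this configuration into an endpoint; the construct ID "MyEndpoint" is a placeholder, and the name is passed through the attr_endpoint_config_name attribute documented above.
Example:
# Reference the endpoint configuration from a CfnEndpoint in the same stack.
endpoint = sagemaker.CfnEndpoint(self, "MyEndpoint",
    endpoint_config_name=cfn_endpoint_config.attr_endpoint_config_name
)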
- shadow_production_variants
Array of ProductionVariant objects.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
A list of key-value pairs to apply to this resource.
- vpc_config
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized cloudformation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
  x (Any) –
- Return type:
  bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
  x (Any) –
- Return type:
  bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.
- Parameters:
  x (Any) – Any object.
- Return type:
  bool
- Returns:
  true if x is an object created from a class which extends Construct.
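A brief illustrative check, assuming cfn_endpoint_config was created as in the example at the top of this page:
Example:
# Duck-typed check that works even across symlinked copies of the constructs library.
assert sagemaker.CfnEndpointConfig.is_construct(cfn_endpoint_config)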
AsyncInferenceClientConfigProperty
- class CfnEndpointConfig.AsyncInferenceClientConfigProperty(*, max_concurrent_invocations_per_instance=None)
Bases:
object
Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.
- Parameters:
  max_concurrent_invocations_per_instance (Union[int, float, None]) – The maximum number of concurrent requests sent by the SageMaker client to the model container. If no value is provided, SageMaker will choose an optimal value for you.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

async_inference_client_config_property = sagemaker.CfnEndpointConfig.AsyncInferenceClientConfigProperty(
    max_concurrent_invocations_per_instance=123
)
Attributes
- max_concurrent_invocations_per_instance
The maximum number of concurrent requests sent by the SageMaker client to the model container.
If no value is provided, SageMaker will choose an optimal value for you.
AsyncInferenceConfigProperty
- class CfnEndpointConfig.AsyncInferenceConfigProperty(*, output_config, client_config=None)
Bases:
object
Specifies configuration for how an endpoint performs asynchronous inference.
- Parameters:
  - output_config (Union[IResolvable, AsyncInferenceOutputConfigProperty, Dict[str, Any]]) – Specifies the configuration for asynchronous inference invocation outputs.
  - client_config (Union[IResolvable, AsyncInferenceClientConfigProperty, Dict[str, Any], None]) – Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

async_inference_config_property = sagemaker.CfnEndpointConfig.AsyncInferenceConfigProperty(
    output_config=sagemaker.CfnEndpointConfig.AsyncInferenceOutputConfigProperty(
        kms_key_id="kmsKeyId",
        notification_config=sagemaker.CfnEndpointConfig.AsyncInferenceNotificationConfigProperty(
            error_topic="errorTopic",
            include_inference_response_in=["includeInferenceResponseIn"],
            success_topic="successTopic"
        ),
        s3_failure_path="s3FailurePath",
        s3_output_path="s3OutputPath"
    ),

    # the properties below are optional
    client_config=sagemaker.CfnEndpointConfig.AsyncInferenceClientConfigProperty(
        max_concurrent_invocations_per_instance=123
    )
)
Attributes
- client_config
Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.
- output_config
Specifies the configuration for asynchronous inference invocation outputs.
AsyncInferenceNotificationConfigProperty
- class CfnEndpointConfig.AsyncInferenceNotificationConfigProperty(*, error_topic=None, include_inference_response_in=None, success_topic=None)
Bases:
object
Specifies the configuration for notifications of inference results for asynchronous inference.
- Parameters:
  - error_topic (Optional[str]) – Amazon SNS topic to post a notification to when an inference fails. If no topic is provided, no notification is sent on failure.
  - include_inference_response_in (Optional[Sequence[str]]) – The Amazon SNS topics where you want the inference response to be included. The inference response is included only if the response size is less than or equal to 128 KB.
  - success_topic (Optional[str]) – Amazon SNS topic to post a notification to when an inference completes successfully. If no topic is provided, no notification is sent on success.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

async_inference_notification_config_property = sagemaker.CfnEndpointConfig.AsyncInferenceNotificationConfigProperty(
    error_topic="errorTopic",
    include_inference_response_in=["includeInferenceResponseIn"],
    success_topic="successTopic"
)
Attributes
- error_topic
Amazon SNS topic to post a notification to when an inference fails.
If no topic is provided, no notification is sent on failure.
- include_inference_response_in
The Amazon SNS topics where you want the inference response to be included.
The inference response is included only if the response size is less than or equal to 128 KB.
- success_topic
Amazon SNS topic to post a notification to when an inference completes successfully.
If no topic is provided, no notification is sent on success.
AsyncInferenceOutputConfigProperty
- class CfnEndpointConfig.AsyncInferenceOutputConfigProperty(*, kms_key_id=None, notification_config=None, s3_failure_path=None, s3_output_path=None)
Bases:
object
Specifies the configuration for asynchronous inference invocation outputs.
- Parameters:
  - kms_key_id (Optional[str]) – The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt the asynchronous inference output in Amazon S3.
  - notification_config (Union[IResolvable, AsyncInferenceNotificationConfigProperty, Dict[str, Any], None]) – Specifies the configuration for notifications of inference results for asynchronous inference.
  - s3_failure_path (Optional[str]) – The Amazon S3 location to upload failure inference responses to.
  - s3_output_path (Optional[str]) – The Amazon S3 location to upload inference responses to.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

async_inference_output_config_property = sagemaker.CfnEndpointConfig.AsyncInferenceOutputConfigProperty(
    kms_key_id="kmsKeyId",
    notification_config=sagemaker.CfnEndpointConfig.AsyncInferenceNotificationConfigProperty(
        error_topic="errorTopic",
        include_inference_response_in=["includeInferenceResponseIn"],
        success_topic="successTopic"
    ),
    s3_failure_path="s3FailurePath",
    s3_output_path="s3OutputPath"
)
Attributes
- kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt the asynchronous inference output in Amazon S3.
- notification_config
Specifies the configuration for notifications of inference results for asynchronous inference.
- s3_failure_path
The Amazon S3 location to upload failure inference responses to.
- s3_output_path
The Amazon S3 location to upload inference responses to.
CaptureContentTypeHeaderProperty
- class CfnEndpointConfig.CaptureContentTypeHeaderProperty(*, csv_content_types=None, json_content_types=None)
Bases:
object
Specifies the JSON and CSV content types of the data that the endpoint captures.
- Parameters:
  - csv_content_types (Optional[Sequence[str]]) – A list of the CSV content types of the data that the endpoint captures. For the endpoint to capture the data, you must also specify the content type when you invoke the endpoint.
  - json_content_types (Optional[Sequence[str]]) – A list of the JSON content types of the data that the endpoint captures. For the endpoint to capture the data, you must also specify the content type when you invoke the endpoint.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

capture_content_type_header_property = sagemaker.CfnEndpointConfig.CaptureContentTypeHeaderProperty(
    csv_content_types=["csvContentTypes"],
    json_content_types=["jsonContentTypes"]
)
Attributes
- csv_content_types
A list of the CSV content types of the data that the endpoint captures.
For the endpoint to capture the data, you must also specify the content type when you invoke the endpoint.
- json_content_types
A list of the JSON content types of the data that the endpoint captures.
For the endpoint to capture the data, you must also specify the content type when you invoke the endpoint.
CaptureOptionProperty
- class CfnEndpointConfig.CaptureOptionProperty(*, capture_mode)
Bases:
object
Specifies whether the endpoint captures input data or output data.
- Parameters:
  capture_mode (str) – Specifies whether the endpoint captures input data or output data.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

capture_option_property = sagemaker.CfnEndpointConfig.CaptureOptionProperty(
    capture_mode="captureMode"
)
Attributes
- capture_mode
Specifies whether the endpoint captures input data or output data.
ClarifyExplainerConfigProperty
- class CfnEndpointConfig.ClarifyExplainerConfigProperty(*, shap_config, enable_explanations=None, inference_config=None)
Bases:
object
The configuration parameters for the SageMaker Clarify explainer.
- Parameters:
  - shap_config (Union[IResolvable, ClarifyShapConfigProperty, Dict[str, Any]]) – The configuration for SHAP analysis.
  - enable_explanations (Optional[str]) – A JMESPath boolean expression used to filter which records to explain. Explanations are activated by default. See EnableExplanations (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-online-explainability-create-endpoint.html#clarify-online-explainability-create-endpoint-enable) for additional information.
  - inference_config (Union[IResolvable, ClarifyInferenceConfigProperty, Dict[str, Any], None]) – The inference configuration parameter for the model container.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk import aws_sagemaker as sagemaker clarify_explainer_config_property = sagemaker.CfnEndpointConfig.ClarifyExplainerConfigProperty( shap_config=sagemaker.CfnEndpointConfig.ClarifyShapConfigProperty( shap_baseline_config=sagemaker.CfnEndpointConfig.ClarifyShapBaselineConfigProperty( mime_type="mimeType", shap_baseline="shapBaseline", shap_baseline_uri="shapBaselineUri" ), # the properties below are optional number_of_samples=123, seed=123, text_config=sagemaker.CfnEndpointConfig.ClarifyTextConfigProperty( granularity="granularity", language="language" ), use_logit=False ), # the properties below are optional enable_explanations="enableExplanations", inference_config=sagemaker.CfnEndpointConfig.ClarifyInferenceConfigProperty( content_template="contentTemplate", feature_headers=["featureHeaders"], features_attribute="featuresAttribute", feature_types=["featureTypes"], label_attribute="labelAttribute", label_headers=["labelHeaders"], label_index=123, max_payload_in_mb=123, max_record_count=123, probability_attribute="probabilityAttribute", probability_index=123 ) )
Attributes
- enable_explanations
A JMESPath boolean expression used to filter which records to explain.
Explanations are activated by default. See EnableExplanations (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-online-explainability-create-endpoint.html#clarify-online-explainability-create-endpoint-enable) for additional information.
- inference_config
The inference configuration parameter for the model container.
- shap_config
The configuration for SHAP analysis.
ClarifyInferenceConfigProperty
- class CfnEndpointConfig.ClarifyInferenceConfigProperty(*, content_template=None, feature_headers=None, features_attribute=None, feature_types=None, label_attribute=None, label_headers=None, label_index=None, max_payload_in_mb=None, max_record_count=None, probability_attribute=None, probability_index=None)
Bases:
object
The inference configuration parameter for the model container.
- Parameters:
  - content_template (Optional[str]) – A template string used to format a JSON record into an acceptable model container input. For example, a ContentTemplate string '{"myfeatures":$features}' will format a list of features [1,2,3] into the record string '{"myfeatures":[1,2,3]}'. Required only when the model container input is in JSON Lines format.
  - feature_headers (Optional[Sequence[str]]) – The names of the features. If provided, these are included in the endpoint response payload to help readability of the InvokeEndpoint output. See the Response section under Invoke the endpoint in the Developer Guide for more information.
  - features_attribute (Optional[str]) – Provides the JMESPath expression to extract the features from a model container input in JSON Lines format. For example, if FeaturesAttribute is the JMESPath expression 'myfeatures', it extracts a list of features [1,2,3] from request data '{"myfeatures":[1,2,3]}'.
  - feature_types (Optional[Sequence[str]]) – A list of data types of the features (optional). Applicable only to NLP explainability. If provided, FeatureTypes must have at least one 'text' string (for example, ['text']). If FeatureTypes is not provided, the explainer infers the feature types based on the baseline data. The feature types are included in the endpoint response payload. For additional information see the response section under Invoke the endpoint in the Developer Guide for more information.
  - label_attribute (Optional[str]) – A JMESPath expression used to locate the list of label headers in the model container output. Example: If the model container output of a batch request is '{"labels":["cat","dog","fish"],"probability":[0.6,0.3,0.1]}', then set LabelAttribute to 'labels' to extract the list of label headers ["cat","dog","fish"].
  - label_headers (Optional[Sequence[str]]) – For multiclass classification problems, the label headers are the names of the classes. Otherwise, the label header is the name of the predicted label. These are used to help readability for the output of the InvokeEndpoint API. See the response section under Invoke the endpoint in the Developer Guide for more information. If there are no label headers in the model container output, provide them manually using this parameter.
  - label_index (Union[int, float, None]) – A zero-based index used to extract a label header or list of label headers from model container output in CSV format. Example for a multiclass model: If the model container output consists of label headers followed by probabilities: '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set LabelIndex to 0 to select the label headers ['cat','dog','fish'].
  - max_payload_in_mb (Union[int, float, None]) – The maximum payload size (MB) allowed of a request from the explainer to the model container. Defaults to 6 MB.
  - max_record_count (Union[int, float, None]) – The maximum number of records in a request that the model container can process when querying the model container for the predictions of a synthetic dataset. A record is a unit of input data that inference can be made on, for example, a single line in CSV data. If MaxRecordCount is 1, the model container expects one record per request. A value of 2 or greater means that the model expects batch requests, which can reduce overhead and speed up the inferencing process. If this parameter is not provided, the explainer will tune the record count per request according to the model container's capacity at runtime.
  - probability_attribute (Optional[str]) – A JMESPath expression used to extract the probability (or score) from the model container output if the model container is in JSON Lines format. Example: If the model container output of a single request is '{"predicted_label":1,"probability":0.6}', then set ProbabilityAttribute to 'probability'.
  - probability_index (Union[int, float, None]) – A zero-based index used to extract a probability value (score) or list from model container output in CSV format. If this value is not provided, the entire model container output will be treated as a probability value (score) or list. Example for a single class model: If the model container output consists of a string-formatted prediction label followed by its probability: '1,0.6', set ProbabilityIndex to 1 to select the probability value 0.6. Example for a multiclass model: If the model container output consists of a string-formatted prediction label followed by its probability: '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set ProbabilityIndex to 1 to select the probability values [0.1,0.6,0.3].
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

clarify_inference_config_property = sagemaker.CfnEndpointConfig.ClarifyInferenceConfigProperty(
    content_template="contentTemplate",
    feature_headers=["featureHeaders"],
    features_attribute="featuresAttribute",
    feature_types=["featureTypes"],
    label_attribute="labelAttribute",
    label_headers=["labelHeaders"],
    label_index=123,
    max_payload_in_mb=123,
    max_record_count=123,
    probability_attribute="probabilityAttribute",
    probability_index=123
)
Attributes
- content_template
A template string used to format a JSON record into an acceptable model container input.
For example, a
ContentTemplate
string'{"myfeatures":$features}'
will format a list of features[1,2,3]
into the record string'{"myfeatures":[1,2,3]}'
. Required only when the model container input is in JSON Lines format.
- feature_headers
The names of the features.
If provided, these are included in the endpoint response payload to help readability of the
InvokeEndpoint
output. See the Response section under Invoke the endpoint in the Developer Guide for more information.
- feature_types
A list of data types of the features (optional).
Applicable only to NLP explainability. If provided,
FeatureTypes
must have at least one'text'
string (for example,['text']
). IfFeatureTypes
is not provided, the explainer infers the feature types based on the baseline data. The feature types are included in the endpoint response payload. For additional information see the response section under Invoke the endpoint in the Developer Guide for more information.
- features_attribute
Provides the JMESPath expression to extract the features from a model container input in JSON Lines format.
For example, if
FeaturesAttribute
is the JMESPath expression'myfeatures'
, it extracts a list of features[1,2,3]
from request data'{"myfeatures":[1,2,3]}'
.
- label_attribute
A JMESPath expression used to locate the list of label headers in the model container output.
Example : If the model container output of a batch request is
'{"labels":["cat","dog","fish"],"probability":[0.6,0.3,0.1]}'
, then setLabelAttribute
to'labels'
to extract the list of label headers["cat","dog","fish"]
- label_headers
For multiclass classification problems, the label headers are the names of the classes.
Otherwise, the label header is the name of the predicted label. These are used to help readability for the output of the
InvokeEndpoint
API. See the response section under Invoke the endpoint in the Developer Guide for more information. If there are no label headers in the model container output, provide them manually using this parameter.
- label_index
A zero-based index used to extract a label header or list of label headers from model container output in CSV format.
Example for a multiclass model: If the model container output consists of label headers followed by probabilities:
'"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"'
, setLabelIndex
to0
to select the label headers['cat','dog','fish']
.
- max_payload_in_mb
The maximum payload size (MB) allowed of a request from the explainer to the model container.
Defaults to
6
MB.
- max_record_count
The maximum number of records in a request that the model container can process when querying the model container for the predictions of a synthetic dataset (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-online-explainability-create-endpoint.html#clarify-online-explainability-create-endpoint-synthetic). A record is a unit of input data that inference can be made on, for example, a single line in CSV data. If MaxRecordCount is 1, the model container expects one record per request. A value of 2 or greater means that the model expects batch requests, which can reduce overhead and speed up the inferencing process. If this parameter is not provided, the explainer will tune the record count per request according to the model container's capacity at runtime.
- See:
- probability_attribute
A JMESPath expression used to extract the probability (or score) from the model container output if the model container is in JSON Lines format.
Example : If the model container output of a single request is
'{"predicted_label":1,"probability":0.6}'
, then setProbabilityAttribute
to'probability'
.
- probability_index
A zero-based index used to extract a probability value (score) or list from model container output in CSV format.
If this value is not provided, the entire model container output will be treated as a probability value (score) or list.
Example for a single class model: If the model container output consists of a string-formatted prediction label followed by its probability:
'1,0.6'
, setProbabilityIndex
to1
to select the probability value0.6
.Example for a multiclass model: If the model container output consists of a string-formatted prediction label followed by its probability:
'"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"'
, setProbabilityIndex
to1
to select the probability values[0.1,0.6,0.3]
.
ClarifyShapBaselineConfigProperty
- class CfnEndpointConfig.ClarifyShapBaselineConfigProperty(*, mime_type=None, shap_baseline=None, shap_baseline_uri=None)
Bases:
object
The configuration for the SHAP baseline (also called the background or reference dataset) of the Kernel SHAP algorithm.
The number of records in the baseline data determines the size of the synthetic dataset, which has an impact on latency of explainability requests. For more information, see the Synthetic data section of Configure and create an endpoint.
ShapBaseline and ShapBaselineUri are mutually exclusive parameters. One or the other is required to configure a SHAP baseline.
- Parameters:
  - mime_type (Optional[str]) – The MIME type of the baseline data. Choose from 'text/csv' or 'application/jsonlines'. Defaults to 'text/csv'.
  - shap_baseline (Optional[str]) – The inline SHAP baseline data in string format. ShapBaseline can have one or multiple records to be used as the baseline dataset. The format of the SHAP baseline file should be the same format as the training dataset. For example, if the training dataset is in CSV format and each record contains four features, and all features are numerical, then the format of the baseline data should also share these characteristics. For natural language processing (NLP) of text columns, the baseline value should be the value used to replace the unit of text specified by the Granularity of the TextConfig parameter. The size limit for ShapBaseline is 4 KB. Use the ShapBaselineUri parameter if you want to provide more than 4 KB of baseline data.
  - shap_baseline_uri (Optional[str]) – The uniform resource identifier (URI) of the S3 bucket where the SHAP baseline file is stored. The format of the SHAP baseline file should be the same format as the format of the training dataset. For example, if the training dataset is in CSV format, and each record in the training dataset has four features, and all features are numerical, then the baseline file should also have this same format. Each record should contain only the features. If you are using a virtual private cloud (VPC), the ShapBaselineUri should be accessible to the VPC. For more information about setting up endpoints with Amazon Virtual Private Cloud, see Give SageMaker access to Resources in your Amazon Virtual Private Cloud.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

clarify_shap_baseline_config_property = sagemaker.CfnEndpointConfig.ClarifyShapBaselineConfigProperty(
    mime_type="mimeType",
    shap_baseline="shapBaseline",
    shap_baseline_uri="shapBaselineUri"
)
Attributes
- mime_type
The MIME type of the baseline data.
Choose from
'text/csv'
or'application/jsonlines'
. Defaults to'text/csv'
.
- shap_baseline
The inline SHAP baseline data in string format.
ShapBaseline can have one or multiple records to be used as the baseline dataset. The format of the SHAP baseline file should be the same format as the training dataset. For example, if the training dataset is in CSV format and each record contains four features, and all features are numerical, then the format of the baseline data should also share these characteristics. For natural language processing (NLP) of text columns, the baseline value should be the value used to replace the unit of text specified by the Granularity of the TextConfig parameter. The size limit for ShapBaseline is 4 KB. Use the ShapBaselineUri parameter if you want to provide more than 4 KB of baseline data.
- shap_baseline_uri
The uniform resource identifier (URI) of the S3 bucket where the SHAP baseline file is stored.
The format of the SHAP baseline file should be the same format as the format of the training dataset. For example, if the training dataset is in CSV format, and each record in the training dataset has four features, and all features are numerical, then the baseline file should also have this same format. Each record should contain only the features. If you are using a virtual private cloud (VPC), the
ShapBaselineUri
should be accessible to the VPC. For more information about setting up endpoints with Amazon Virtual Private Cloud, see Give SageMaker access to Resources in your Amazon Virtual Private Cloud .
ClarifyShapConfigProperty
- class CfnEndpointConfig.ClarifyShapConfigProperty(*, shap_baseline_config, number_of_samples=None, seed=None, text_config=None, use_logit=None)
Bases:
object
The configuration for SHAP analysis using SageMaker Clarify Explainer.
- Parameters:
  - shap_baseline_config (Union[IResolvable, ClarifyShapBaselineConfigProperty, Dict[str, Any]]) – The configuration for the SHAP baseline of the Kernel SHAP algorithm.
  - number_of_samples (Union[int, float, None]) – The number of samples to be used for analysis by the Kernel SHAP algorithm. The number of samples determines the size of the synthetic dataset, which has an impact on latency of explainability requests. For more information, see the Synthetic data section of Configure and create an endpoint.
  - seed (Union[int, float, None]) – The starting value used to initialize the random number generator in the explainer. Provide a value for this parameter to obtain a deterministic SHAP result.
  - text_config (Union[IResolvable, ClarifyTextConfigProperty, Dict[str, Any], None]) – A parameter that indicates if text features are treated as text and explanations are provided for individual units of text. Required for natural language processing (NLP) explainability only.
  - use_logit (Union[bool, IResolvable, None]) – A Boolean toggle to indicate if you want to use the logit function (true) or log-odds units (false) for model predictions. Defaults to false.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

clarify_shap_config_property = sagemaker.CfnEndpointConfig.ClarifyShapConfigProperty(
    shap_baseline_config=sagemaker.CfnEndpointConfig.ClarifyShapBaselineConfigProperty(
        mime_type="mimeType",
        shap_baseline="shapBaseline",
        shap_baseline_uri="shapBaselineUri"
    ),

    # the properties below are optional
    number_of_samples=123,
    seed=123,
    text_config=sagemaker.CfnEndpointConfig.ClarifyTextConfigProperty(
        granularity="granularity",
        language="language"
    ),
    use_logit=False
)
Attributes
- number_of_samples
The number of samples to be used for analysis by the Kernel SHAP algorithm.
The number of samples determines the size of the synthetic dataset, which has an impact on latency of explainability requests. For more information, see the Synthetic data of Configure and create an endpoint .
- seed
The starting value used to initialize the random number generator in the explainer.
Provide a value for this parameter to obtain a deterministic SHAP result.
- shap_baseline_config
The configuration for the SHAP baseline of the Kernel SHAP algorithm.
- text_config
A parameter that indicates if text features are treated as text and explanations are provided for individual units of text.
Required for natural language processing (NLP) explainability only.
- use_logit
A Boolean toggle to indicate if you want to use the logit function (true) or log-odds units (false) for model predictions.
Defaults to false.
ClarifyTextConfigProperty
- class CfnEndpointConfig.ClarifyTextConfigProperty(*, granularity, language)
Bases:
object
A parameter used to configure the SageMaker Clarify explainer to treat text features as text so that explanations are provided for individual units of text.
Required only for natural language processing (NLP) explainability.
- Parameters:
  - granularity (str) – The unit of granularity for the analysis of text features. For example, if the unit is 'token', then each token (like a word in English) of the text is treated as a feature. SHAP values are computed for each unit/feature.
  - language (str) – Specifies the language of the text features in ISO 639-1 (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) or ISO 639-3 code of a supported language. For a mix of multiple languages, use code 'xx'.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

clarify_text_config_property = sagemaker.CfnEndpointConfig.ClarifyTextConfigProperty(
    granularity="granularity",
    language="language"
)
Attributes
- granularity
The unit of granularity for the analysis of text features.
For example, if the unit is
'token'
, then each token (like a word in English) of the text is treated as a feature. SHAP values are computed for each unit/feature.
DataCaptureConfigProperty
- class CfnEndpointConfig.DataCaptureConfigProperty(*, capture_options, destination_s3_uri, initial_sampling_percentage, capture_content_type_header=None, enable_capture=None, kms_key_id=None)
Bases:
object
Specifies the configuration of your endpoint for model monitor data capture.
- Parameters:
  - capture_options (Union[IResolvable, Sequence[Union[IResolvable, CaptureOptionProperty, Dict[str, Any]]]]) – Specifies whether the endpoint captures input data to your model, output data from your model, or both.
  - destination_s3_uri (str) – The S3 bucket where model monitor stores captured data.
  - initial_sampling_percentage (Union[int, float]) – The percentage of data to capture.
  - capture_content_type_header (Union[IResolvable, CaptureContentTypeHeaderProperty, Dict[str, Any], None]) – A list of the JSON and CSV content type that the endpoint captures.
  - enable_capture (Union[bool, IResolvable, None]) – Set to True to enable data capture.
  - kms_key_id (Optional[str]) – The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt the captured data at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats: Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab; Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab; Alias name: alias/ExampleAlias; Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) in the Amazon Simple Storage Service Developer Guide. The KMS key policy must grant permission to the IAM role that you specify in your CreateModel (https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html) request. For more information, see Using Key Policies in AWS KMS (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the AWS Key Management Service Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

data_capture_config_property = sagemaker.CfnEndpointConfig.DataCaptureConfigProperty(
    capture_options=[sagemaker.CfnEndpointConfig.CaptureOptionProperty(
        capture_mode="captureMode"
    )],
    destination_s3_uri="destinationS3Uri",
    initial_sampling_percentage=123,

    # the properties below are optional
    capture_content_type_header=sagemaker.CfnEndpointConfig.CaptureContentTypeHeaderProperty(
        csv_content_types=["csvContentTypes"],
        json_content_types=["jsonContentTypes"]
    ),
    enable_capture=False,
    kms_key_id="kmsKeyId"
)
Attributes
- capture_content_type_header
A list of the JSON and CSV content type that the endpoint captures.
- capture_options
Specifies whether the endpoint captures input data to your model, output data from your model, or both.
- destination_s3_uri
The S3 bucket where model monitor stores captured data.
- enable_capture
Set to
True
to enable data capture.
- initial_sampling_percentage
The percentage of data to capture.
- kms_key_id
The AWS Key Management Service ( AWS KMS) key that Amazon SageMaker uses to encrypt the captured data at rest using Amazon S3 server-side encryption.
The KmsKeyId can be any of the following formats:
- Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
- Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
- Alias name: alias/ExampleAlias
- Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) in the Amazon Simple Storage Service Developer Guide. The KMS key policy must grant permission to the IAM role that you specify in your CreateModel (https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html) request. For more information, see Using Key Policies in AWS KMS (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the AWS Key Management Service Developer Guide.
ExplainerConfigProperty
- class CfnEndpointConfig.ExplainerConfigProperty(*, clarify_explainer_config=None)
Bases:
object
A parameter to activate explainers.
- Parameters:
  clarify_explainer_config (Union[IResolvable, ClarifyExplainerConfigProperty, Dict[str, Any], None]) – A member of ExplainerConfig that contains configuration parameters for the SageMaker Clarify explainer.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk import aws_sagemaker as sagemaker explainer_config_property = sagemaker.CfnEndpointConfig.ExplainerConfigProperty( clarify_explainer_config=sagemaker.CfnEndpointConfig.ClarifyExplainerConfigProperty( shap_config=sagemaker.CfnEndpointConfig.ClarifyShapConfigProperty( shap_baseline_config=sagemaker.CfnEndpointConfig.ClarifyShapBaselineConfigProperty( mime_type="mimeType", shap_baseline="shapBaseline", shap_baseline_uri="shapBaselineUri" ), # the properties below are optional number_of_samples=123, seed=123, text_config=sagemaker.CfnEndpointConfig.ClarifyTextConfigProperty( granularity="granularity", language="language" ), use_logit=False ), # the properties below are optional enable_explanations="enableExplanations", inference_config=sagemaker.CfnEndpointConfig.ClarifyInferenceConfigProperty( content_template="contentTemplate", feature_headers=["featureHeaders"], features_attribute="featuresAttribute", feature_types=["featureTypes"], label_attribute="labelAttribute", label_headers=["labelHeaders"], label_index=123, max_payload_in_mb=123, max_record_count=123, probability_attribute="probabilityAttribute", probability_index=123 ) ) )
Attributes
- clarify_explainer_config
A member of ExplainerConfig that contains configuration parameters for the SageMaker Clarify explainer.
ManagedInstanceScalingProperty
- class CfnEndpointConfig.ManagedInstanceScalingProperty(*, max_instance_count=None, min_instance_count=None, status=None)
Bases:
object
- Parameters:
  - max_instance_count (Union[int, float, None]) –
  - min_instance_count (Union[int, float, None]) –
  - status (Optional[str]) –
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

managed_instance_scaling_property = sagemaker.CfnEndpointConfig.ManagedInstanceScalingProperty(
    max_instance_count=123,
    min_instance_count=123,
    status="status"
)
Attributes
- max_instance_count
- min_instance_count
ProductionVariantProperty
- class CfnEndpointConfig.ProductionVariantProperty(*, variant_name, accelerator_type=None, container_startup_health_check_timeout_in_seconds=None, enable_ssm_access=None, initial_instance_count=None, initial_variant_weight=None, instance_type=None, managed_instance_scaling=None, model_data_download_timeout_in_seconds=None, model_name=None, routing_config=None, serverless_config=None, volume_size_in_gb=None)
Bases:
object
Specifies a model that you want to host and the resources to deploy for hosting it.
If you are deploying multiple models, tell Amazon SageMaker how to distribute traffic among the models by specifying the InitialVariantWeight objects.
- Parameters:
variant_name (str) – The name of the production variant.
accelerator_type (Optional[str]) – The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker .
container_startup_health_check_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests .
enable_ssm_access (Union[bool, IResolvable, None]) – You can use this parameter to turn on native AWS Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint .
initial_instance_count (Union[int, float, None]) – Number of instances to launch initially.
initial_variant_weight (Union[int, float, None]) – Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
instance_type (Optional[str]) – The ML compute instance type.
managed_instance_scaling (Union[IResolvable, ManagedInstanceScalingProperty, Dict[str, Any], None]) –
model_data_download_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
model_name (Optional[str]) – The name of the model that you want to host. This is the name that you specified when creating the model.
routing_config (Union[IResolvable, RoutingConfigProperty, Dict[str, Any], None]) –
serverless_config (Union[IResolvable, ServerlessConfigProperty, Dict[str, Any], None]) – The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
volume_size_in_gb (Union[int, float, None]) – The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

production_variant_property = sagemaker.CfnEndpointConfig.ProductionVariantProperty(
    variant_name="variantName",

    # the properties below are optional
    accelerator_type="acceleratorType",
    container_startup_health_check_timeout_in_seconds=123,
    enable_ssm_access=False,
    initial_instance_count=123,
    initial_variant_weight=123,
    instance_type="instanceType",
    managed_instance_scaling=sagemaker.CfnEndpointConfig.ManagedInstanceScalingProperty(
        max_instance_count=123,
        min_instance_count=123,
        status="status"
    ),
    model_data_download_timeout_in_seconds=123,
    model_name="modelName",
    routing_config=sagemaker.CfnEndpointConfig.RoutingConfigProperty(
        routing_strategy="routingStrategy"
    ),
    serverless_config=sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
        max_concurrency=123,
        memory_size_in_mb=123,

        # the properties below are optional
        provisioned_concurrency=123
    ),
    volume_size_in_gb=123
)
Attributes
- accelerator_type
The size of the Elastic Inference (EI) instance to use for the production variant.
EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker .
- container_startup_health_check_timeout_in_seconds
The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting.
For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests .
- enable_ssm_access
You can use this parameter to turn on native AWS Systems Manager (SSM) access for a production variant behind an endpoint.
By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint .
- initial_instance_count
Number of instances to launch initially.
- initial_variant_weight
Determines initial traffic distribution among all of the models that you specify in the endpoint configuration.
The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0. A worked example of this ratio follows the attribute list below.
- instance_type
The ML compute instance type.
- managed_instance_scaling
- Type:
see
- model_data_download_timeout_in_seconds
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
- model_name
The name of the model that you want to host.
This is the name that you specified when creating the model.
- routing_config
- Type:
see
- serverless_config
The serverless configuration for an endpoint.
Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
- variant_name
The name of the production variant.
- volume_size_in_gb
The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant.
Currently only Amazon EBS gp2 storage volumes are supported.
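To make the weight arithmetic concrete (variant and model names below are placeholders), two variants with initial_variant_weight values of 3.0 and 1.0 split traffic as 3 / (3 + 1) = 75% and 25%:

from aws_cdk import aws_sagemaker as sagemaker

# Traffic share per variant = its weight / sum of all weights.
variant_a = sagemaker.CfnEndpointConfig.ProductionVariantProperty(
    variant_name="VariantA",
    model_name="modelA",           # placeholder
    instance_type="ml.m5.large",
    initial_instance_count=1,
    initial_variant_weight=3.0     # 3 / (3 + 1) = 75% of requests
)
variant_b = sagemaker.CfnEndpointConfig.ProductionVariantProperty(
    variant_name="VariantB",
    model_name="modelB",           # placeholder
    instance_type="ml.m5.large",
    initial_instance_count=1,
    initial_variant_weight=1.0     # 1 / (3 + 1) = 25% of requests
)

cfn_endpoint_config = sagemaker.CfnEndpointConfig(self, "WeightedEndpointConfig",
    production_variants=[variant_a, variant_b]
)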
RoutingConfigProperty
- class CfnEndpointConfig.RoutingConfigProperty(*, routing_strategy=None)
Bases:
object
- Parameters:
routing_strategy (Optional[str]) –
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

routing_config_property = sagemaker.CfnEndpointConfig.RoutingConfigProperty(
    routing_strategy="routingStrategy"
)
Attributes
ServerlessConfigProperty
- class CfnEndpointConfig.ServerlessConfigProperty(*, max_concurrency, memory_size_in_mb, provisioned_concurrency=None)
Bases:
object
Specifies the serverless configuration for an endpoint variant.
- Parameters:
max_concurrency (Union[int, float]) – The maximum number of concurrent invocations your serverless endpoint can process.
memory_size_in_mb (Union[int, float]) – The memory size of your serverless endpoint. Valid values are in 1 GB increments: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB.
provisioned_concurrency (Union[int, float, None]) – The amount of provisioned concurrency to allocate for the serverless endpoint. Should be less than or equal to MaxConcurrency . This field is not supported for serverless endpoint recommendations for Inference Recommender jobs. For more information about creating an Inference Recommender job, see CreateInferenceRecommendationsJobs .
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

serverless_config_property = sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
    max_concurrency=123,
    memory_size_in_mb=123,

    # the properties below are optional
    provisioned_concurrency=123
)
Attributes
- max_concurrency
The maximum number of concurrent invocations your serverless endpoint can process.
- memory_size_in_mb
The memory size of your serverless endpoint.
Valid values are in 1 GB increments: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB.
- provisioned_concurrency
The amount of provisioned concurrency to allocate for the serverless endpoint.
Should be less than or equal to MaxConcurrency .
This field is not supported for serverless endpoint recommendations for Inference Recommender jobs. For more information about creating an Inference Recommender job, see CreateInferenceRecommendationsJobs (https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateInferenceRecommendationsJob.html). A minimal configuration respecting this constraint is sketched below.
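A minimal sketch with placeholder values: 2048 MB is one of the documented memory increments, and provisioned_concurrency=10 stays below max_concurrency=20.

from aws_cdk import aws_sagemaker as sagemaker

serverless = sagemaker.CfnEndpointConfig.ServerlessConfigProperty(
    memory_size_in_mb=2048,        # one of the documented 1 GB increments
    max_concurrency=20,
    provisioned_concurrency=10     # must stay <= max_concurrency
)

# A serverless variant carries a serverless_config instead of instance settings.
variant = sagemaker.CfnEndpointConfig.ProductionVariantProperty(
    variant_name="ServerlessVariant",
    model_name="modelName",        # placeholder
    serverless_config=serverless
)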
VpcConfigProperty
- class CfnEndpointConfig.VpcConfigProperty(*, security_group_ids, subnets)
Bases:
object
- Parameters:
security_group_ids (Sequence[str]) –
subnets (Sequence[str]) –
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

vpc_config_property = sagemaker.CfnEndpointConfig.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
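As a hedged sketch of how this property is wired into the endpoint configuration (the security group and subnet IDs are placeholders, and the minimal production variant is illustrative only):

from aws_cdk import aws_sagemaker as sagemaker

vpc_config = sagemaker.CfnEndpointConfig.VpcConfigProperty(
    security_group_ids=["sg-0123456789abcdef0"],                          # placeholder ID
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]      # placeholder IDs
)

cfn_endpoint_config = sagemaker.CfnEndpointConfig(self, "MyCfnEndpointConfig",
    production_variants=[sagemaker.CfnEndpointConfig.ProductionVariantProperty(
        variant_name="variantName"
    )],
    vpc_config=vpc_config
)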
Attributes
- security_group_ids
- Type:
see