CfnModel
- class aws_cdk.aws_sagemaker.CfnModel(scope, id, *, containers=None, enable_network_isolation=None, execution_role_arn=None, inference_execution_config=None, model_name=None, primary_container=None, tags=None, vpc_config=None)
Bases:
CfnResource
The AWS::SageMaker::Model resource to create a model to host at an Amazon SageMaker endpoint.
For more information, see Deploying a Model on Amazon SageMaker Hosting Services in the Amazon SageMaker Developer Guide.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-model.html
- CloudformationResource:
AWS::SageMaker::Model
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

# environment: Any

cfn_model = sagemaker.CfnModel(self, "MyCfnModel",
    containers=[sagemaker.CfnModel.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker.CfnModel.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",

            # the properties below are optional
            repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_source=sagemaker.CfnModel.ModelDataSourceProperty(
            s3_data_source=sagemaker.CfnModel.S3DataSourceProperty(
                compression_type="compressionType",
                s3_data_type="s3DataType",
                s3_uri="s3Uri",

                # the properties below are optional
                hub_access_config=sagemaker.CfnModel.HubAccessConfigProperty(
                    hub_content_arn="hubContentArn"
                ),
                model_access_config=sagemaker.CfnModel.ModelAccessConfigProperty(
                    accept_eula=False
                )
            )
        ),
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    )],
    enable_network_isolation=False,
    execution_role_arn="executionRoleArn",
    inference_execution_config=sagemaker.CfnModel.InferenceExecutionConfigProperty(
        mode="mode"
    ),
    model_name="modelName",
    primary_container=sagemaker.CfnModel.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker.CfnModel.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",

            # the properties below are optional
            repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_source=sagemaker.CfnModel.ModelDataSourceProperty(
            s3_data_source=sagemaker.CfnModel.S3DataSourceProperty(
                compression_type="compressionType",
                s3_data_type="s3DataType",
                s3_uri="s3Uri",

                # the properties below are optional
                hub_access_config=sagemaker.CfnModel.HubAccessConfigProperty(
                    hub_content_arn="hubContentArn"
                ),
                model_access_config=sagemaker.CfnModel.ModelAccessConfigProperty(
                    accept_eula=False
                )
            )
        ),
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    vpc_config=sagemaker.CfnModel.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
containers (Union[IResolvable, Sequence[Union[IResolvable, ContainerDefinitionProperty, Dict[str, Any]]], None]) – Specifies the containers in the inference pipeline.
enable_network_isolation (Union[bool, IResolvable, None]) – Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
execution_role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see SageMaker Roles. To be able to pass this role to SageMaker, the caller of this API must have the iam:PassRole permission.
inference_execution_config (Union[IResolvable, InferenceExecutionConfigProperty, Dict[str, Any], None]) – Specifies details of how containers in a multi-container endpoint are called.
model_name (Optional[str]) – The name of the new model.
primary_container (Union[IResolvable, ContainerDefinitionProperty, Dict[str, Any], None]) – The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A list of key-value pairs to apply to this resource. For more information, see Resource Tag and Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
vpc_config (Union[IResolvable, VpcConfigProperty, Dict[str, Any], None]) – A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
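For orientation beyond the placeholder values in the generated example above, a minimal single-container model might look like the following sketch; the image URI, role ARN, bucket, and names are hypothetical placeholders, not values taken from this reference.
from aws_cdk import aws_sagemaker as sagemaker

# Minimal sketch: one primary container served from a hypothetical ECR image,
# with model artifacts in a hypothetical S3 bucket.
model = sagemaker.CfnModel(self, "InferenceModel",
    execution_role_arn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role ARN
    primary_container=sagemaker.CfnModel.ContainerDefinitionProperty(
        image="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",  # hypothetical image
        model_data_url="s3://amzn-s3-demo-bucket/model.tar.gz"  # hypothetical artifact path
    ),
    model_name="my-inference-model"
)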
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
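A minimal usage sketch, assuming cfn_model is the CfnModel instance from the example above and other_resource is some other CfnResource defined in the app (both names are illustrative):
# Ensure the model is provisioned only after the other resource.
cfn_model.add_dependency(other_resource)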
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
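A brief sketch of writing and reading resource metadata, assuming cfn_model is a CfnModel instance; the key and value are illustrative only:
# Attach template-level metadata to the resource and read it back.
cfn_model.add_metadata("Purpose", "realtime-inference")
purpose = cfn_model.get_metadata("Purpose")  # returns "realtime-inference"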
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides
Example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property; you can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
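Applied to this resource type, an override might look like the sketch below; note the CloudFormation-style capitalization of the property path (the image URI and variable name are hypothetical placeholders):
# Override the synthesized PrimaryContainer image directly in the template.
cfn_model.add_override(
    "Properties.PrimaryContainer.Image",
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/patched-image:latest"
)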
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
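A short sketch combining the two property-level helpers above, assuming cfn_model is a CfnModel instance (the paths are illustrative):
# Equivalent to add_override("Properties.EnableNetworkIsolation", True).
cfn_model.add_property_override("EnableNetworkIsolation", True)
# Remove a nested property from the first container definition.
cfn_model.add_property_deletion_override("Containers.0.ModelDataUrl")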
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- See:
- Return type:
None
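For example, to keep the model when its stack is deleted (a sketch assuming cfn_model is a CfnModel instance):
from aws_cdk import RemovalPolicy

# Leave the model in the account instead of deleting it with the stack.
cfn_model.apply_removal_policy(RemovalPolicy.RETAIN)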
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
Reference
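A usage sketch, assuming cfn_model is a CfnModel instance; ModelName is the attribute exposed by this resource type:
# Generated accessor and generic form resolve to the same deploy-time token.
name_token = cfn_model.attr_model_name
same_token = cfn_model.get_att("ModelName")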
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
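A one-line sketch, assuming cfn_model is a CfnModel instance and the new ID is an illustrative choice:
# Pin the logical ID so the synthesized template keeps a stable resource name.
cfn_model.override_logical_id("SageMakerModel")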
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::SageMaker::Model'
- attr_id
- CloudformationAttribute:
Id
- attr_model_name
The name of the model, such as MyModel.
- CloudformationAttribute:
ModelName
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- containers
Specifies the containers in the inference pipeline.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- enable_network_isolation
Isolates the model container.
- execution_role_arn
The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs.
- inference_execution_config
Specifies details of how containers in a multi-container endpoint are called.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- model_name
The name of the new model.
- node
The tree node.
- primary_container
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
A list of key-value pairs to apply to this resource.
- vpc_config
//docs.aws.amazon.com/sagemaker/latest/dg/host-vpc.html>`_ and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud .
- Type:
A `VpcConfig <https
- Type:
//docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html>`_ object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC.
VpcConfig
is used in hosting services and in batch transform. For more information, see `Protect Endpoints by Using an Amazon Virtual Private Cloud <https
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized cloudformation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
ContainerDefinitionProperty
- class CfnModel.ContainerDefinitionProperty(*, container_hostname=None, environment=None, image=None, image_config=None, inference_specification_name=None, mode=None, model_data_source=None, model_data_url=None, model_package_name=None, multi_model_config=None)
Bases:
object
Describes the container, as part of model definition.
- Parameters:
container_hostname (Optional[str]) – This parameter is ignored for models that contain only a PrimaryContainer. When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
environment (Any) – The environment variables to set in the Docker container. Don’t include any sensitive data in your environment variables. The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
image (Optional[str]) – The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker. The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
image_config (Union[IResolvable, ImageConfigProperty, Dict[str, Any], None]) – Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers. The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
inference_specification_name (Optional[str]) – The inference specification name in the model package version.
mode (Optional[str]) – Whether the container hosts a single model or multiple models.
model_data_source (Union[IResolvable, ModelDataSourceProperty, Dict[str, Any], None]) – Specifies the location of ML model data to deploy. Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
model_data_url (Optional[str]) – The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters. The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating. If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide. If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
model_package_name (Optional[str]) – The name or Amazon Resource Name (ARN) of the model package to use to create the model.
multi_model_config (Union[IResolvable, MultiModelConfigProperty, Dict[str, Any], None]) – Specifies additional configuration for multi-model endpoints.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

# environment: Any

container_definition_property = sagemaker.CfnModel.ContainerDefinitionProperty(
    container_hostname="containerHostname",
    environment=environment,
    image="image",
    image_config=sagemaker.CfnModel.ImageConfigProperty(
        repository_access_mode="repositoryAccessMode",

        # the properties below are optional
        repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
            repository_credentials_provider_arn="repositoryCredentialsProviderArn"
        )
    ),
    inference_specification_name="inferenceSpecificationName",
    mode="mode",
    model_data_source=sagemaker.CfnModel.ModelDataSourceProperty(
        s3_data_source=sagemaker.CfnModel.S3DataSourceProperty(
            compression_type="compressionType",
            s3_data_type="s3DataType",
            s3_uri="s3Uri",

            # the properties below are optional
            hub_access_config=sagemaker.CfnModel.HubAccessConfigProperty(
                hub_content_arn="hubContentArn"
            ),
            model_access_config=sagemaker.CfnModel.ModelAccessConfigProperty(
                accept_eula=False
            )
        )
    ),
    model_data_url="modelDataUrl",
    model_package_name="modelPackageName",
    multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
        model_cache_setting="modelCacheSetting"
    )
)
Attributes
- container_hostname
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
- environment
The environment variables to set in the Docker container. Don’t include any sensitive data in your environment variables.
The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
- image
The path where inference code is stored.
This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- image_config
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- inference_specification_name
The inference specification name in the model package version.
- mode
Whether the container hosts a single model or multiple models.
- model_data_source
Specifies the location of ML model data to deploy.
Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
- model_data_url
The S3 path where the model artifacts, which result from model training, are stored.
This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
- model_package_name
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
- multi_model_config
Specifies additional configuration for multi-model endpoints.
HubAccessConfigProperty
- class CfnModel.HubAccessConfigProperty(*, hub_content_arn)
Bases:
object
- Parameters:
hub_content_arn (str) –
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

hub_access_config_property = sagemaker.CfnModel.HubAccessConfigProperty(
    hub_content_arn="hubContentArn"
)
Attributes
ImageConfigProperty
- class CfnModel.ImageConfigProperty(*, repository_access_mode, repository_auth_config=None)
Bases:
object
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
- Parameters:
repository_access_mode (str) – Set this to one of the following values: - Platform - The model image is hosted in Amazon ECR. - Vpc - The model image is hosted in a private Docker registry in your VPC.
repository_auth_config (Union[IResolvable, RepositoryAuthConfigProperty, Dict[str, Any], None]) – (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

image_config_property = sagemaker.CfnModel.ImageConfigProperty(
    repository_access_mode="repositoryAccessMode",

    # the properties below are optional
    repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
        repository_credentials_provider_arn="repositoryCredentialsProviderArn"
    )
)
Attributes
- repository_access_mode
Set this to one of the following values:
Platform - The model image is hosted in Amazon ECR.
Vpc - The model image is hosted in a private Docker registry in your VPC.
- See:
- repository_auth_config
(Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
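A sketch of the private-registry case described above, assuming the aws_sagemaker module is imported as sagemaker; the Lambda function ARN that supplies registry credentials is a hypothetical placeholder:
from aws_cdk import aws_sagemaker as sagemaker

image_config = sagemaker.CfnModel.ImageConfigProperty(
    repository_access_mode="Vpc",  # image lives in a private registry reachable from the VPC
    repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
        repository_credentials_provider_arn="arn:aws:lambda:us-east-1:111122223333:function:EcrCredsProvider"
    )
)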
InferenceExecutionConfigProperty
- class CfnModel.InferenceExecutionConfigProperty(*, mode)
Bases:
object
Specifies details about how containers in a multi-container endpoint are run.
- Parameters:
mode (str) – How containers in a multi-container endpoint are run. The following values are valid: Serial - Containers run as a serial pipeline. Direct - Only the individual container that you specify is run.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

inference_execution_config_property = sagemaker.CfnModel.InferenceExecutionConfigProperty(
    mode="mode"
)
Attributes
- mode
How containers in a multi-container endpoint are run. The following values are valid:
Serial - Containers run as a serial pipeline.
Direct - Only the individual container that you specify is run.
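As a sketch of the Direct mode described above, a two-container model whose containers are invoked individually might look like the following; the image URIs and role ARN are hypothetical placeholders:
from aws_cdk import aws_sagemaker as sagemaker

multi_container_model = sagemaker.CfnModel(self, "MultiContainerModel",
    execution_role_arn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role ARN
    inference_execution_config=sagemaker.CfnModel.InferenceExecutionConfigProperty(
        mode="Direct"  # each container can be invoked on its own
    ),
    containers=[
        sagemaker.CfnModel.ContainerDefinitionProperty(
            container_hostname="preprocessor",
            image="111122223333.dkr.ecr.us-east-1.amazonaws.com/preprocessor:latest"
        ),
        sagemaker.CfnModel.ContainerDefinitionProperty(
            container_hostname="predictor",
            image="111122223333.dkr.ecr.us-east-1.amazonaws.com/predictor:latest"
        )
    ]
)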
ModelAccessConfigProperty
- class CfnModel.ModelAccessConfigProperty(*, accept_eula)
Bases:
object
The access configuration file to control access to the ML model.
You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig.
If you are a Jumpstart user, see the End-user license agreements section for more details on accepting the EULA.
If you are an AutoML user, see the Optional Parameters section of Create an AutoML job to fine-tune text generation models using the API for details on How to set the EULA acceptance when fine-tuning a model using the AutoML API .
- Parameters:
accept_eula (Union[bool, IResolvable]) – Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

model_access_config_property = sagemaker.CfnModel.ModelAccessConfigProperty(
    accept_eula=False
)
Attributes
- accept_eula
Specifies agreement to the model end-user license agreement (EULA).
The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
ModelDataSourceProperty
- class CfnModel.ModelDataSourceProperty(*, s3_data_source)
Bases:
object
Specifies the location of ML model data to deploy.
If specified, you must specify one and only one of the available data sources.
- Parameters:
s3_data_source (Union[IResolvable, S3DataSourceProperty, Dict[str, Any]]) – Specifies the S3 location of ML model data to deploy.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

model_data_source_property = sagemaker.CfnModel.ModelDataSourceProperty(
    s3_data_source=sagemaker.CfnModel.S3DataSourceProperty(
        compression_type="compressionType",
        s3_data_type="s3DataType",
        s3_uri="s3Uri",

        # the properties below are optional
        hub_access_config=sagemaker.CfnModel.HubAccessConfigProperty(
            hub_content_arn="hubContentArn"
        ),
        model_access_config=sagemaker.CfnModel.ModelAccessConfigProperty(
            accept_eula=False
        )
    )
)
Attributes
- s3_data_source
Specifies the S3 location of ML model data to deploy.
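A sketch of pointing a container at uncompressed artifacts under an S3 prefix, assuming the aws_sagemaker module is imported as sagemaker; the bucket and prefix are hypothetical, while "S3Prefix" and "None" are documented S3DataSource values:
from aws_cdk import aws_sagemaker as sagemaker

model_data_source = sagemaker.CfnModel.ModelDataSourceProperty(
    s3_data_source=sagemaker.CfnModel.S3DataSourceProperty(
        s3_uri="s3://amzn-s3-demo-bucket/llm-artifacts/",  # hypothetical prefix
        s3_data_type="S3Prefix",
        compression_type="None"  # artifacts are stored uncompressed
    )
)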
MultiModelConfigProperty
- class CfnModel.MultiModelConfigProperty(*, model_cache_setting=None)
Bases:
object
Specifies additional configuration for hosting multi-model endpoints.
- Parameters:
model_cache_setting (Optional[str]) – Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

multi_model_config_property = sagemaker.CfnModel.MultiModelConfigProperty(
    model_cache_setting="modelCacheSetting"
)
Attributes
- model_cache_setting
Whether to cache models for a multi-model endpoint.
By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
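A sketch of a multi-model container with caching disabled, assuming "MultiModel" is the container mode for this use case and "Disabled" is the cache setting; the image and S3 prefix are hypothetical placeholders:
from aws_cdk import aws_sagemaker as sagemaker

mme_container = sagemaker.CfnModel.ContainerDefinitionProperty(
    image="111122223333.dkr.ecr.us-east-1.amazonaws.com/mme-image:latest",  # hypothetical image
    mode="MultiModel",
    model_data_url="s3://amzn-s3-demo-bucket/models/",  # hypothetical prefix holding many models
    multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
        model_cache_setting="Disabled"  # skip caching for many, rarely invoked models
    )
)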
RepositoryAuthConfigProperty
- class CfnModel.RepositoryAuthConfigProperty(*, repository_credentials_provider_arn)
Bases:
object
Specifies an authentication configuration for the private docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel and the private Docker registry where the model image is hosted requires authentication.
- Parameters:
repository_credentials_provider_arn (str) – The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

repository_auth_config_property = sagemaker.CfnModel.RepositoryAuthConfigProperty(
    repository_credentials_provider_arn="repositoryCredentialsProviderArn"
)
Attributes
- repository_credentials_provider_arn
The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted.
For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide .
S3DataSourceProperty
- class CfnModel.S3DataSourceProperty(*, compression_type, s3_data_type, s3_uri, hub_access_config=None, model_access_config=None)
Bases:
object
Describes the S3 data source.
Your input bucket must be in the same AWS region as your training job.
- Parameters:
compression_type (str) –
s3_data_type (str) – If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel’s input mode is Pipe.
s3_uri (str) – Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example: - A key name prefix might look like this: s3://bucketname/exampleprefix/ - A manifest might look like this: s3://bucketname/example.manifest A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets. The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf. Your input bucket must be located in the same AWS Region as your training job. A valid manifest format is shown in the s3_uri attribute below.
hub_access_config (Union[IResolvable, HubAccessConfigProperty, Dict[str, Any], None]) –
model_access_config (Union[IResolvable, ModelAccessConfigProperty, Dict[str, Any], None]) –
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

s3_data_source_property = sagemaker.CfnModel.S3DataSourceProperty(
    compression_type="compressionType",
    s3_data_type="s3DataType",
    s3_uri="s3Uri",

    # the properties below are optional
    hub_access_config=sagemaker.CfnModel.HubAccessConfigProperty(
        hub_content_arn="hubContentArn"
    ),
    model_access_config=sagemaker.CfnModel.ModelAccessConfigProperty(
        accept_eula=False
    )
)
Attributes
- compression_type
- See:
- hub_access_config
- See:
- model_access_config
- See:
- s3_data_type
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel’s input mode is Pipe.
- s3_uri
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
A key name prefix might look like this: s3://bucketname/exampleprefix/
A manifest might look like this: s3://bucketname/example.manifest
A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets.
The following code example shows a valid manifest format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N"
]
This JSON is equivalent to the following S3Uri list:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.
Your input bucket must be located in the same AWS Region as your training job.
VpcConfigProperty
- class CfnModel.VpcConfigProperty(*, security_group_ids, subnets)
Bases:
object
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.
You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC .
- Parameters:
security_group_ids (Sequence[str]) – The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
subnets (Sequence[str]) – The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_sagemaker as sagemaker

vpc_config_property = sagemaker.CfnModel.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
Attributes
- security_group_ids
The VPC security group IDs, in the form
sg-xxxxxxxx
.Specify the security groups for the VPC that is specified in the
Subnets
field.
- subnets
The ID of the subnets in the VPC to which you want to connect your training job or model.
For information about the availability of specific instance types, see Supported Instance Types and Availability Zones .
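A closing sketch of connecting a model to a VPC, assuming the aws_sagemaker module is imported as sagemaker; the subnet and security group IDs are hypothetical placeholders:
from aws_cdk import aws_sagemaker as sagemaker

vpc_config = sagemaker.CfnModel.VpcConfigProperty(
    security_group_ids=["sg-0123456789abcdef0"],  # hypothetical security group
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]  # hypothetical subnets
)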