Class: Aws::SageMaker::Types::CreateModelInput
- Inherits: Struct
- Defined in: (unknown)
Overview
When passing CreateModelInput as input to an Aws::Client method, you can use a vanilla Hash:
{
  model_name: "ModelName", # required
  primary_container: {
    container_hostname: "ContainerHostname",
    image: "ContainerImage",
    image_config: {
      repository_access_mode: "Platform", # required, accepts Platform, Vpc
    },
    mode: "SingleModel", # accepts SingleModel, MultiModel
    model_data_url: "Url",
    environment: {
      "EnvironmentKey" => "EnvironmentValue",
    },
    model_package_name: "VersionedArnOrName",
  },
  containers: [
    {
      container_hostname: "ContainerHostname",
      image: "ContainerImage",
      image_config: {
        repository_access_mode: "Platform", # required, accepts Platform, Vpc
      },
      mode: "SingleModel", # accepts SingleModel, MultiModel
      model_data_url: "Url",
      environment: {
        "EnvironmentKey" => "EnvironmentValue",
      },
      model_package_name: "VersionedArnOrName",
    },
  ],
  execution_role_arn: "RoleArn", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  vpc_config: {
    security_group_ids: ["SecurityGroupId"], # required
    subnets: ["SubnetId"], # required
  },
  enable_network_isolation: false,
}
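As a minimal sketch of how this hash is used (the region, model name, ECR image URI, S3 location, and role ARN below are placeholder values, not part of this documentation), you might pass it to Aws::SageMaker::Client#create_model:

require 'aws-sdk' # version 2 of the AWS SDK for Ruby

sagemaker = Aws::SageMaker::Client.new(region: 'us-east-1')

resp = sagemaker.create_model(
  model_name: 'my-model',                                                      # hypothetical model name
  primary_container: {
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest',     # placeholder ECR image
    model_data_url: 's3://my-bucket/output/model.tar.gz',                      # placeholder model artifacts
  },
  execution_role_arn: 'arn:aws:iam::123456789012:role/SageMakerExecutionRole', # placeholder role ARN
)

resp.model_arn # ARN of the newly created model

The call returns a CreateModelOutput struct whose model_arn identifies the new model.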
Instance Attribute Summary
- #containers ⇒ Array<Types::ContainerDefinition>
  Specifies the containers in the inference pipeline.
- #enable_network_isolation ⇒ Boolean
  Isolates the model container.
- #execution_role_arn ⇒ String
  The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute instances or for batch transform jobs.
- #model_name ⇒ String
  The name of the new model.
- #primary_container ⇒ Types::ContainerDefinition
  The location of the primary Docker image containing inference code, associated artifacts, and the custom environment map that the inference code uses when the model is deployed for predictions.
- #tags ⇒ Array<Types::Tag>
  An array of key-value pairs.
- #vpc_config ⇒ Types::VpcConfig
  A VpcConfig object that specifies the VPC that you want your model to connect to.
Instance Attribute Details
#containers ⇒ Array<Types::ContainerDefinition>
Specifies the containers in the inference pipeline.
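For an inference pipeline, each container in the array handles one stage and the containers are invoked in the order given. A sketch of the shape of this attribute (the image URIs and S3 paths are hypothetical):

containers: [
  {
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocessor:latest', # placeholder: preprocessing stage
    model_data_url: 's3://my-bucket/preprocessor/model.tar.gz',
  },
  {
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/predictor:latest',    # placeholder: prediction stage
    model_data_url: 's3://my-bucket/predictor/model.tar.gz',
  },
]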
#enable_network_isolation ⇒ Boolean
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
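To request isolation, set the flag when creating the model. A brief sketch (the model name and the my_* variables are placeholders):

sagemaker.create_model(
  model_name: 'my-isolated-model',                                      # hypothetical name
  primary_container: { image: my_image, model_data_url: my_artifacts }, # placeholder values
  execution_role_arn: my_role_arn,                                      # placeholder role ARN
  enable_network_isolation: true,  # the container gets no inbound or outbound network access
)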
#execution_role_arn ⇒ String
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
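The iam:PassRole permission belongs to the identity that calls CreateModel, not to the execution role itself. As a sketch only, assuming the caller is an IAM role named 'MyCallerRole' (all names and ARNs here are hypothetical), an inline policy granting the permission could be attached with Aws::IAM::Client#put_role_policy:

require 'aws-sdk'
require 'json'

iam = Aws::IAM::Client.new

iam.put_role_policy(
  role_name: 'MyCallerRole',                      # hypothetical: the role that calls create_model
  policy_name: 'AllowPassSageMakerExecutionRole', # hypothetical policy name
  policy_document: {
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: 'iam:PassRole',
      Resource: 'arn:aws:iam::123456789012:role/SageMakerExecutionRole', # placeholder execution role ARN
      Condition: { StringEquals: { 'iam:PassedToService' => 'sagemaker.amazonaws.com' } },
    }],
  }.to_json,
)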
#model_name ⇒ String
The name of the new model.
#primary_container ⇒ Types::ContainerDefinition
The location of the primary Docker image containing inference code, associated artifacts, and the custom environment map that the inference code uses when the model is deployed for predictions.
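A sketch of a primary_container that supplies an environment map to the inference code (the image, artifact location, and environment variable are hypothetical, not values defined by this API):

primary_container: {
  image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest', # placeholder image
  model_data_url: 's3://my-bucket/output/model.tar.gz',                            # placeholder artifacts
  environment: {
    'MODEL_SERVER_WORKERS' => '2', # hypothetical variable read by the inference code
  },
}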
#tags ⇒ Array<Types::Tag>
An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
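For example, cost-allocation tags could be attached like this (the keys and values are placeholders):

tags: [
  { key: 'project', value: 'churn-prediction' },
  { key: 'team',    value: 'data-science' },
]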
#vpc_config ⇒ Types::VpcConfig
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
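A sketch of the shape of this attribute (the security group and subnet IDs are placeholders); the subnets would typically be private subnets in the VPC you want the model container to reach:

vpc_config: {
  security_group_ids: ['sg-0123456789abcdef0'],                         # placeholder security group
  subnets: ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],    # placeholder subnets
}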