Creates a model in Amazon SageMaker. In the request, you name the model and describe one or more containers. For each container, you specify the Docker image that contains the inference code, the model artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model into production.
Use this API to create a model only if you want to use Amazon SageMaker hosting services.
To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. Amazon SageMaker then deploys all of the containers that you defined for the model in the hosting environment. In the CreateModel request, you must define a container with the PrimaryContainer parameter.
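The hosting sequence above (create the model, then the endpoint configuration, then the endpoint) can be sketched as three ordered calls. This is an illustrative Python sketch, not the .NET SDK itself; the `call` function is a hypothetical stand-in for issuing each SageMaker API request:

```python
# Sketch of the hosting workflow order: CreateModel, then
# CreateEndpointConfig, then CreateEndpoint. Each step is modeled as a
# name passed to a caller-supplied function; in practice each name
# corresponds to a separate SageMaker API call.

def hosting_workflow(call):
    """Invoke the three hosting steps in the required order."""
    steps = ["CreateModel", "CreateEndpointConfig", "CreateEndpoint"]
    return [call(step) for step in steps]
```

The ordering matters: the endpoint configuration references the model by name, and the endpoint references the configuration, so each step depends on the one before it.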
In the request, you also provide an IAM role that Amazon SageMaker can assume to access the model artifacts and Docker image for deployment on ML compute hosting instances. In addition, you use this role to manage the permissions the inference code needs. For example, if the inference code accesses any other AWS resources, you grant the necessary permissions via this role.
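The pieces described above (model name, container definition, environment map, and execution role) come together in one request body. A minimal sketch in Python, assembling the payload as a plain dict: the field names follow the SageMaker CreateModel API, while the image URI, S3 path, and role ARN are placeholder assumptions:

```python
# Minimal sketch of a CreateModel request payload. Field names follow
# the SageMaker CreateModel API; the image URI, artifact location, and
# role ARN passed in below are placeholders, not real resources.

def build_create_model_request(model_name, image_uri, model_data_url,
                               role_arn, environment=None):
    """Assemble the request body for a CreateModel call."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,                # Docker image with inference code
            "ModelDataUrl": model_data_url,    # S3 location of trained artifacts
            "Environment": environment or {},  # custom environment map
        },
        "ExecutionRoleArn": role_arn,          # role SageMaker assumes for deployment
    }

request = build_create_model_request(
    "example-model",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-image:latest",
    "s3://example-bucket/model.tar.gz",
    "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    environment={"SAGEMAKER_PROGRAM": "inference.py"},
)
```

In the .NET SDK the same shape is expressed through the `CreateModelRequest` object passed to `CreateModel`.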
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to CreateModelAsync.
public virtual CreateModelResponse CreateModel(CreateModelRequest request)
Container for the necessary parameters to execute the CreateModel service method.
| Exception | Condition |
| --- | --- |
| ResourceLimitExceededException | You have exceeded an Amazon SageMaker resource limit. For example, you might have too many training jobs created. |
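A caller typically catches this exception and either surfaces it or retries after freeing resources. A hedged sketch of dispatching on the error code, assuming the generic AWS error-response shape (in the .NET SDK the same condition arrives as a typed `ResourceLimitExceededException`):

```python
# Sketch of recognizing a SageMaker resource-limit failure from the
# generic AWS error-response shape ({"Error": {"Code": ..., "Message": ...}}).
# In the .NET SDK this condition surfaces as a typed exception instead.

def is_resource_limit_error(error_response):
    """Return True when the service reports a SageMaker resource limit."""
    code = error_response.get("Error", {}).get("Code")
    return code == "ResourceLimitExceededException"
```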
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms