Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.
Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.
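As a hedged sketch of the request shape (the model name, image URI, artifact location, and role ARN below are illustrative placeholders, not values from this page):

```csharp
using System.Collections.Generic;
using Amazon.SageMaker;
using Amazon.SageMaker.Model;

var client = new AmazonSageMakerClient();

// Illustrative values; substitute your own image, artifacts, and role.
var response = client.CreateModel(new CreateModelRequest
{
    ModelName = "my-model",
    PrimaryContainer = new ContainerDefinition
    {
        // Docker image containing the inference code.
        Image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
        // Model artifacts from prior training.
        ModelDataUrl = "s3://my-bucket/model/model.tar.gz",
        // Custom environment map passed to the inference code.
        Environment = new Dictionary<string, string>
        {
            ["SAGEMAKER_PROGRAM"] = "inference.py"
        }
    },
    // Role SageMaker assumes to pull the image and read the artifacts.
    ExecutionRoleArn = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
});
```

On .NET Core, where only the asynchronous form is available, you would `await client.CreateModelAsync(...)` with the same request object.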
To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.
For an example that calls this method when deploying a model to SageMaker hosting services, see Create a Model (Amazon Web Services SDK for Python (Boto 3)).
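A minimal hosting sketch under the same assumptions (the model, configuration, endpoint, and variant names are placeholders):

```csharp
using System.Collections.Generic;
using Amazon.SageMaker;
using Amazon.SageMaker.Model;

var client = new AmazonSageMakerClient();

// 1. Endpoint configuration: which model to serve, and on what instances.
client.CreateEndpointConfig(new CreateEndpointConfigRequest
{
    EndpointConfigName = "my-endpoint-config",
    ProductionVariants = new List<ProductionVariant>
    {
        new ProductionVariant
        {
            VariantName = "primary",
            ModelName = "my-model",      // the model created with CreateModel
            InstanceType = "ml.m5.large",
            InitialInstanceCount = 1
        }
    }
});

// 2. Endpoint: SageMaker deploys the model's containers behind it.
client.CreateEndpoint(new CreateEndpointRequest
{
    EndpointName = "my-endpoint",
    EndpointConfigName = "my-endpoint-config"
});
```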
To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences, which are then saved to a specified S3 location.
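A batch transform sketch under the same assumptions (the job name and S3 paths below are illustrative):

```csharp
using Amazon.SageMaker;
using Amazon.SageMaker.Model;

var client = new AmazonSageMakerClient();

// Illustrative names and S3 paths; substitute your own.
client.CreateTransformJob(new CreateTransformJobRequest
{
    TransformJobName = "my-transform-job",
    ModelName = "my-model",              // the model created with CreateModel
    TransformInput = new TransformInput
    {
        DataSource = new TransformDataSource
        {
            S3DataSource = new TransformS3DataSource
            {
                S3DataType = "S3Prefix",
                S3Uri = "s3://my-bucket/batch-input/"
            }
        }
    },
    TransformOutput = new TransformOutput
    {
        // Inferences are saved to this S3 location.
        S3OutputPath = "s3://my-bucket/batch-output/"
    },
    TransformResources = new TransformResources
    {
        InstanceType = "ml.m5.large",
        InstanceCount = 1
    }
});
```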
In the request, you also provide an IAM role that SageMaker can assume to access the model artifacts and Docker image, either for deployment on ML compute hosting instances or for batch transform jobs. You also use this IAM role to manage the permissions the inference code needs. For example, if the inference code accesses any other Amazon Web Services resources, you grant the necessary permissions via this role.
For .NET Core this operation is only available in asynchronous form. Please refer to CreateModelAsync.
public virtual CreateModelResponse CreateModel( CreateModelRequest request )
Container for the necessary parameters to execute the CreateModel service method.
|ResourceLimitExceededException||You have exceeded a SageMaker resource limit. For example, you might have created too many training jobs.|
Supported in: 4.5, 4.0, 3.5