You are viewing documentation for version 2 of the AWS SDK for Ruby.

Class: Aws::SageMaker::Types::InferenceSpecification

Inherits: Struct
Defined in: (unknown)

Overview

Note:

When passing InferenceSpecification as input to an Aws::Client method, you can use a vanilla Hash:

{
  containers: [ # required
    {
      container_hostname: "ContainerHostname",
      image: "Image", # required
      image_digest: "ImageDigest",
      model_data_url: "Url",
      product_id: "ProductId",
    },
  ],
  supported_transform_instance_types: ["ml.m4.xlarge"], # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge
  supported_realtime_inference_instance_types: ["ml.t2.medium"], # required, accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.12xlarge, ml.m5d.24xlarge, ml.c4.large, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.large, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.12xlarge, ml.r5.24xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.12xlarge, ml.r5d.24xlarge
  supported_content_types: ["ContentType"], # required
  supported_response_mime_types: ["ResponseMIMEType"], # required
}

Defines how to perform inference generation after a training job is run.
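As a minimal sketch of how this shape is typically used (the ECR image URI, S3 model artifact URL, bucket, and model package name below are hypothetical placeholders, not values from this document), a vanilla Hash matching the structure above can be built and passed to `Aws::SageMaker::Client#create_model_package`:

```ruby
# A sketch of building an InferenceSpecification hash; every concrete
# identifier (account ID, image tag, bucket, package name) is a placeholder.
inference_spec = {
  containers: [
    {
      # Hypothetical ECR image URI and S3 model artifact location.
      image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
      model_data_url: "s3://my-bucket/model.tar.gz",
    },
  ],
  supported_transform_instance_types: ["ml.m4.xlarge"],
  supported_realtime_inference_instance_types: ["ml.t2.medium"],
  supported_content_types: ["text/csv"],
  supported_response_mime_types: ["text/csv"],
}

# With AWS credentials configured and the SDK installed, the hash is passed
# directly to the client (commented out so the sketch runs offline):
#
# client = Aws::SageMaker::Client.new(region: "us-east-1")
# client.create_model_package(
#   model_package_name: "my-model-package",
#   inference_specification: inference_spec,
# )
```

Symbol keys in the hash correspond one-to-one with the attribute names documented below; the SDK validates required members (such as `:containers` and `:supported_content_types`) when the request is built.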

Instance Attribute Summary

Instance Attribute Details

#containers ⇒ Array<Types::ModelPackageContainerDefinition>

The Amazon ECR registry path of the Docker image that contains the inference code.

Returns:

  • (Array<Types::ModelPackageContainerDefinition>)

    The Amazon ECR registry path of the Docker image that contains the inference code.

#supported_content_types ⇒ Array<String>

The supported MIME types for the input data.

Returns:

  • (Array<String>)

    The supported MIME types for the input data.

#supported_realtime_inference_instance_types ⇒ Array<String>

A list of the instance types that are used to generate inferences in real time.

Returns:

  • (Array<String>)

    A list of the instance types that are used to generate inferences in real time.

#supported_response_mime_types ⇒ Array<String>

The supported MIME types for the output data.

Returns:

  • (Array<String>)

    The supported MIME types for the output data.

#supported_transform_instance_types ⇒ Array<String>

A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed.

Returns:

  • (Array<String>)

    A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed.