
Class: Aws::SageMaker::Types::ContainerDefinition

Inherits:
Struct
  • Object

Overview

Note:

When passing ContainerDefinition as input to an Aws::Client method, you can use a vanilla Hash:

{
  container_hostname: "ContainerHostname",
  image: "ContainerImage",
  image_config: {
    repository_access_mode: "Platform", # required, accepts Platform, Vpc
  },
  mode: "SingleModel", # accepts SingleModel, MultiModel
  model_data_url: "Url",
  environment: {
    "EnvironmentKey" => "EnvironmentValue",
  },
  model_package_name: "VersionedArnOrName",
}

Describes the container, as part of model definition.
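
For example, a minimal sketch of passing this structure as the primary_container of Aws::SageMaker::Client#create_model; the model name, role ARN, image URI, and S3 path below are illustrative placeholders:

require 'aws-sdk'  # version 2 of the AWS SDK for Ruby

sagemaker = Aws::SageMaker::Client.new(region: 'us-east-1')

# All names, ARNs, and URIs below are placeholders.
sagemaker.create_model(
  model_name: 'my-model',
  execution_role_arn: 'arn:aws:iam::123456789012:role/MySageMakerRole',
  primary_container: {
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest',
    model_data_url: 's3://my-bucket/model/model.tar.gz',
    environment: { 'LOG_LEVEL' => 'info' },
  }
)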

Instance Attribute Summary

  • #container_hostname ⇒ String
  • #environment ⇒ Hash<String,String>
  • #image ⇒ String
  • #image_config ⇒ Types::ImageConfig
  • #mode ⇒ String
  • #model_data_url ⇒ String
  • #model_package_name ⇒ String

Instance Attribute Details

#container_hostname ⇒ String

This parameter is ignored for models that contain only a PrimaryContainer.

When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

Returns:

  • (String)

    This parameter is ignored for models that contain only a PrimaryContainer.
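
As a sketch (the hostnames, image URIs, and S3 path are placeholders), naming one container of an inference pipeline means naming all of them, for example in the containers parameter of create_model:

containers: [
  {
    container_hostname: 'preprocessor',
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest',
  },
  {
    container_hostname: 'predictor',
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    model_data_url: 's3://my-bucket/model/model.tar.gz',
  },
]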

#environment ⇒ Hash<String,String>

The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

Returns:

  • (Hash<String,String>)

    The environment variables to set in the Docker container.
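
For example (the keys and values are placeholders), subject to the 16-entry and 1024-character limits noted above:

environment: {
  'LOG_LEVEL'       => 'info',
  'MODEL_CACHE_DIR' => '/opt/ml/cache',
}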

#image ⇒ String

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

Returns:

  • (String)

    The path where inference code is stored.
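
For example, the two supported path formats, using a placeholder Amazon ECR registry and repository (the digest is a placeholder as well):

image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:1.0'              # registry/repository[:tag]
image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo@sha256:<digest>'  # registry/repository[@digest]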

#image_config ⇒ Types::ImageConfig

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

Returns:

  • (Types::ImageConfig)

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
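
A minimal sketch of pointing a container at a private Docker registry reachable from your VPC (the registry URI is a placeholder):

primary_container: {
  image: 'my-private-registry.example.com/my-image:latest',
  image_config: {
    repository_access_mode: 'Vpc',  # 'Platform' for images hosted in Amazon ECR
  },
}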

#mode ⇒ String

Whether the container hosts a single model or multiple models.

Possible values:

  • SingleModel
  • MultiModel

Returns:

  • (String)

    Whether the container hosts a single model or multiple models.
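
As a sketch, assuming the usual multi-model hosting layout in which model_data_url points to an S3 prefix containing the individual model archives (the image URI, bucket, and prefix are placeholders):

primary_container: {
  image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest',
  mode: 'MultiModel',
  model_data_url: 's3://my-bucket/multi-model/',  # prefix under which the .tar.gz archives live
}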

#model_data_url ⇒ String

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.

Returns:

  • (String)

    The S3 path where the model artifacts, which result from model training, are stored.
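
For example (the bucket and key are placeholders), pointing at a single gzip-compressed tar archive in the same region as the model:

model_data_url: 's3://my-bucket/training-output/model.tar.gz'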

#model_package_name ⇒ String

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Returns:

  • (String)

    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
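
A sketch, under the assumption (based on the model-package workflow) that a container created from a model package specifies only model_package_name, with the image and model artifacts coming from the package itself; the names and ARNs are placeholders:

sagemaker.create_model(
  model_name: 'my-packaged-model',
  execution_role_arn: 'arn:aws:iam::123456789012:role/MySageMakerRole',
  primary_container: {
    model_package_name: 'arn:aws:sagemaker:us-east-1:123456789012:model-package/my-package/1',
  }
)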