@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class ContainerDefinition extends Object implements Serializable, Cloneable, StructuredPojo
Describes the container, as part of a model definition.
| Constructor and Description |
|---|
| `ContainerDefinition()` |
| Modifier and Type | Method and Description |
|---|---|
| `ContainerDefinition` | `addEnvironmentEntry(String key, String value)` Add a single Environment entry. |
| `ContainerDefinition` | `clearEnvironmentEntries()` Removes all the entries added into Environment. |
| `ContainerDefinition` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `List<AdditionalModelDataSource>` | `getAdditionalModelDataSources()` Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action. |
| `String` | `getContainerHostname()` This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `Map<String,String>` | `getEnvironment()` The environment variables to set in the Docker container. |
| `String` | `getImage()` The path where inference code is stored. |
| `ImageConfig` | `getImageConfig()` Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `String` | `getInferenceSpecificationName()` The inference specification name in the model package version. |
| `String` | `getMode()` Whether the container hosts a single model or multiple models. |
| `ModelDataSource` | `getModelDataSource()` Specifies the location of ML model data to deploy. |
| `String` | `getModelDataUrl()` The S3 path where the model artifacts, which result from model training, are stored. |
| `String` | `getModelPackageName()` The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `MultiModelConfig` | `getMultiModelConfig()` Specifies additional configuration for multi-model endpoints. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setAdditionalModelDataSources(Collection<AdditionalModelDataSource> additionalModelDataSources)` Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action. |
| `void` | `setContainerHostname(String containerHostname)` This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `void` | `setEnvironment(Map<String,String> environment)` The environment variables to set in the Docker container. |
| `void` | `setImage(String image)` The path where inference code is stored. |
| `void` | `setImageConfig(ImageConfig imageConfig)` Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `void` | `setInferenceSpecificationName(String inferenceSpecificationName)` The inference specification name in the model package version. |
| `void` | `setMode(String mode)` Whether the container hosts a single model or multiple models. |
| `void` | `setModelDataSource(ModelDataSource modelDataSource)` Specifies the location of ML model data to deploy. |
| `void` | `setModelDataUrl(String modelDataUrl)` The S3 path where the model artifacts, which result from model training, are stored. |
| `void` | `setModelPackageName(String modelPackageName)` The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `void` | `setMultiModelConfig(MultiModelConfig multiModelConfig)` Specifies additional configuration for multi-model endpoints. |
| `String` | `toString()` Returns a string representation of this object. |
| `ContainerDefinition` | `withAdditionalModelDataSources(AdditionalModelDataSource... additionalModelDataSources)` Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action. |
| `ContainerDefinition` | `withAdditionalModelDataSources(Collection<AdditionalModelDataSource> additionalModelDataSources)` Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action. |
| `ContainerDefinition` | `withContainerHostname(String containerHostname)` This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `ContainerDefinition` | `withEnvironment(Map<String,String> environment)` The environment variables to set in the Docker container. |
| `ContainerDefinition` | `withImage(String image)` The path where inference code is stored. |
| `ContainerDefinition` | `withImageConfig(ImageConfig imageConfig)` Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `ContainerDefinition` | `withInferenceSpecificationName(String inferenceSpecificationName)` The inference specification name in the model package version. |
| `ContainerDefinition` | `withMode(ContainerMode mode)` Whether the container hosts a single model or multiple models. |
| `ContainerDefinition` | `withMode(String mode)` Whether the container hosts a single model or multiple models. |
| `ContainerDefinition` | `withModelDataSource(ModelDataSource modelDataSource)` Specifies the location of ML model data to deploy. |
| `ContainerDefinition` | `withModelDataUrl(String modelDataUrl)` The S3 path where the model artifacts, which result from model training, are stored. |
| `ContainerDefinition` | `withModelPackageName(String modelPackageName)` The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `ContainerDefinition` | `withMultiModelConfig(MultiModelConfig multiModelConfig)` Specifies additional configuration for multi-model endpoints. |
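To ground the summary above, here is a hedged end-to-end sketch of the fluent pattern these methods follow. Every `with-` method returns the same `ContainerDefinition`, so calls chain; the client setup, image URI, S3 path, role ARN, and names are placeholder assumptions, not values from this page.

```java
import com.amazonaws.services.sagemaker.AmazonSageMaker;
import com.amazonaws.services.sagemaker.AmazonSageMakerClientBuilder;
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.ContainerMode;
import com.amazonaws.services.sagemaker.model.CreateModelRequest;

public class CreateModelExample {
    public static void main(String[] args) {
        // Build the container definition with chained with- methods.
        ContainerDefinition primary = new ContainerDefinition()
                .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:1.0")
                .withModelDataUrl("s3://my-model-bucket/output/model.tar.gz")
                .withMode(ContainerMode.SINGLE_MODEL)
                .addEnvironmentEntry("LOG_LEVEL", "info");

        // Use it as the primary container of a CreateModel request.
        AmazonSageMaker sageMaker = AmazonSageMakerClientBuilder.defaultClient();
        sageMaker.createModel(new CreateModelRequest()
                .withModelName("my-model")
                .withPrimaryContainer(primary)
                .withExecutionRoleArn("arn:aws:iam::123456789012:role/SageMakerRole"));
    }
}
```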
public void setContainerHostname(String containerHostname)

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` for any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Parameters:
containerHostname - The container hostname. Ignored for models that contain only a `PrimaryContainer`; see the description above for the inference pipeline naming rules.
public String getContainerHostname()

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` for any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Returns:
The container hostname. Ignored for models that contain only a `PrimaryContainer`; see the description above for the inference pipeline naming rules.
public ContainerDefinition withContainerHostname(String containerHostname)

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` for any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Parameters:
containerHostname - The container hostname. Ignored for models that contain only a `PrimaryContainer`; see the description above for the inference pipeline naming rules.
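As a hedged illustration of the all-or-nothing naming rule above (the hostnames and image URIs are placeholders), explicit hostnames for an inference pipeline might be set on every container:

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;

// Two stages of an inference pipeline, each with an explicit hostname.
ContainerDefinition preprocess = new ContainerDefinition()
        .withContainerHostname("preprocess")
        .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest");
ContainerDefinition predict = new ContainerDefinition()
        .withContainerHostname("predict")
        .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest");
// Because one ContainerDefinition sets ContainerHostname, all of them must.
```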
public void setImage(String image)

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Parameters:
image - The path where inference code is stored, in either the `registry/repository[:tag]` or the `registry/repository[@digest]` format; see the description above for the registry, algorithm, and region requirements.
public String getImage()

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Returns:
The path where inference code is stored, in either the `registry/repository[:tag]` or the `registry/repository[@digest]` format; see the description above for the registry, algorithm, and region requirements.
public ContainerDefinition withImage(String image)

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Parameters:
image - The path where inference code is stored, in either the `registry/repository[:tag]` or the `registry/repository[@digest]` format; see the description above for the registry, algorithm, and region requirements.
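A minimal sketch of the two supported image path formats (the registry, repository, tag, and digest are placeholders):

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;

ContainerDefinition container = new ContainerDefinition();
// Tag-based reference: re-resolves if the tag is moved to a newer image.
container.setImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:1.0");
// Digest-based reference: pins the container to one immutable image build.
container.setImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference@sha256:<digest>");
```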
public void setImageConfig(ImageConfig imageConfig)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Parameters:
imageConfig - Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC); see the description above for the region requirement.
public ImageConfig getImageConfig()

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Returns:
The image configuration, indicating whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC); see the description above for the region requirement.
public ContainerDefinition withImageConfig(ImageConfig imageConfig)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

Parameters:
imageConfig - Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC); see the description above for the region requirement.
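An illustrative sketch of configuring a private-registry image, under stated assumptions: I'm assuming `ImageConfig` exposes a repository access mode plus an optional `RepositoryAuthConfig`, and the credentials-provider Lambda ARN is a placeholder.

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.ImageConfig;
import com.amazonaws.services.sagemaker.model.RepositoryAuthConfig;

ContainerDefinition container = new ContainerDefinition();
// Pull the image from a private registry reachable through the endpoint's VPC.
container.setImageConfig(new ImageConfig()
        .withRepositoryAccessMode("Vpc")
        // Credentials helper for registries that require authentication
        // (the Lambda ARN below is a placeholder assumption).
        .withRepositoryAuthConfig(new RepositoryAuthConfig()
                .withRepositoryCredentialsProviderArn(
                        "arn:aws:lambda:us-east-1:123456789012:function:ecr-creds")));
```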
public void setMode(String mode)

Whether the container hosts a single model or multiple models.

Parameters:
mode - Whether the container hosts a single model or multiple models.

See Also:
ContainerMode

public String getMode()

Whether the container hosts a single model or multiple models.

See Also:
ContainerMode

public ContainerDefinition withMode(String mode)

Whether the container hosts a single model or multiple models.

Parameters:
mode - Whether the container hosts a single model or multiple models.

See Also:
ContainerMode

public ContainerDefinition withMode(ContainerMode mode)

Whether the container hosts a single model or multiple models.

Parameters:
mode - Whether the container hosts a single model or multiple models.

See Also:
ContainerMode
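For example, a minimal sketch of the two overloads; `ContainerMode` is the typed enum referenced above:

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.ContainerMode;

// Equivalent ways to mark a container as a multi-model host.
ContainerDefinition c = new ContainerDefinition()
        .withMode(ContainerMode.MULTI_MODEL); // type-safe enum overload
c.setMode("MultiModel");                      // raw string overload
```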
public void setModelDataUrl(String modelDataUrl)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Parameters:
modelDataUrl - The S3 path where the model artifacts, which result from model training, are stored; see the description above for the archive format, region, and STS requirements.
public String getModelDataUrl()

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Returns:
The S3 path where the model artifacts, which result from model training, are stored; see the description above for the archive format, region, and STS requirements.
public ContainerDefinition withModelDataUrl(String modelDataUrl)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Parameters:
modelDataUrl - The S3 path where the model artifacts, which result from model training, are stored; see the description above for the archive format, region, and STS requirements.
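A minimal sketch (the bucket and key are placeholders):

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;

ContainerDefinition container = new ContainerDefinition();
// The URL must point at a single gzip-compressed tar archive in the
// same region as the model being created.
container.setModelDataUrl("s3://my-model-bucket/output/model.tar.gz");
```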
public void setModelDataSource(ModelDataSource modelDataSource)

Specifies the location of ML model data to deploy.

Currently you cannot use `ModelDataSource` in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

Parameters:
modelDataSource - Specifies the location of ML model data to deploy; see the note above for the features it cannot be combined with.
public ModelDataSource getModelDataSource()

Specifies the location of ML model data to deploy.

Currently you cannot use `ModelDataSource` in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

Returns:
The location of ML model data to deploy; see the note above for the features it cannot be combined with.
public ContainerDefinition withModelDataSource(ModelDataSource modelDataSource)

Specifies the location of ML model data to deploy.

Currently you cannot use `ModelDataSource` in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

Parameters:
modelDataSource - Specifies the location of ML model data to deploy; see the note above for the features it cannot be combined with.
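An illustrative sketch using the S3 variant of `ModelDataSource` (the `S3ModelDataSource` field names reflect my understanding of the SDK's model classes; the URI is a placeholder):

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.ModelDataSource;
import com.amazonaws.services.sagemaker.model.S3ModelDataSource;

ContainerDefinition container = new ContainerDefinition();
// Deploy uncompressed artifacts stored under an S3 prefix.
container.setModelDataSource(new ModelDataSource()
        .withS3DataSource(new S3ModelDataSource()
                .withS3Uri("s3://my-model-bucket/uncompressed-model/")
                .withS3DataType("S3Prefix")
                .withCompressionType("None")));
```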
public List<AdditionalModelDataSource> getAdditionalModelDataSources()

Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

Returns:
Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

public void setAdditionalModelDataSources(Collection<AdditionalModelDataSource> additionalModelDataSources)

Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

Parameters:
additionalModelDataSources - Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

public ContainerDefinition withAdditionalModelDataSources(AdditionalModelDataSource... additionalModelDataSources)
Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

NOTE: This method appends the values to the existing list (if any). Use `setAdditionalModelDataSources(java.util.Collection)` or `withAdditionalModelDataSources(java.util.Collection)` if you want to override the existing values.

Parameters:
additionalModelDataSources - Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

public ContainerDefinition withAdditionalModelDataSources(Collection<AdditionalModelDataSource> additionalModelDataSources)
Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.

Parameters:
additionalModelDataSources - Data sources that are available to your model in addition to the one that you specify for `ModelDataSource` when you use the `CreateModel` action.
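A sketch under stated assumptions: I'm assuming `AdditionalModelDataSource` exposes a channel name plus an S3 source, and relying on the appending behavior of the varargs overload noted above; the channel name and URI are placeholders.

```java
import com.amazonaws.services.sagemaker.model.AdditionalModelDataSource;
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.S3ModelDataSource;

ContainerDefinition container = new ContainerDefinition();
// Appends to any additional sources already registered on the container.
container.withAdditionalModelDataSources(new AdditionalModelDataSource()
        .withChannelName("draft_model") // assumed channel-name field
        .withS3DataSource(new S3ModelDataSource()
                .withS3Uri("s3://my-model-bucket/draft-model/")
                .withS3DataType("S3Prefix")
                .withCompressionType("None")));
```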
public Map<String,String> getEnvironment()

The environment variables to set in the Docker container.

The maximum length of each key and value in the `Environment` map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a `CreateModel` request, then the maximum length of all of their maps, combined, is also 32 KB.

Returns:
The environment variables to set in the Docker container; see the size limits above.
public void setEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container.

The maximum length of each key and value in the `Environment` map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a `CreateModel` request, then the maximum length of all of their maps, combined, is also 32 KB.

Parameters:
environment - The environment variables to set in the Docker container; see the size limits above.
public ContainerDefinition withEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container.

The maximum length of each key and value in the `Environment` map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a `CreateModel` request, then the maximum length of all of their maps, combined, is also 32 KB.

Parameters:
environment - The environment variables to set in the Docker container; see the size limits above.
public ContainerDefinition addEnvironmentEntry(String key, String value)

Add a single Environment entry.

public ContainerDefinition clearEnvironmentEntries()

Removes all the entries added into Environment.
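A short sketch of the environment mutators (the variable names and values are placeholders; keep the 1024-byte-per-entry and 32 KB-per-map limits above in mind):

```java
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sagemaker.model.ContainerDefinition;

ContainerDefinition container = new ContainerDefinition();

// Replace the whole map at once...
Map<String, String> env = new HashMap<>();
env.put("SAGEMAKER_PROGRAM", "inference.py");
container.setEnvironment(env);

// ...or build it entry by entry, then reset it if needed.
container.addEnvironmentEntry("LOG_LEVEL", "info");
container.clearEnvironmentEntries();
```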
public void setModelPackageName(String modelPackageName)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Parameters:
modelPackageName - The name or Amazon Resource Name (ARN) of the model package to use to create the model.

public String getModelPackageName()

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

public ContainerDefinition withModelPackageName(String modelPackageName)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Parameters:
modelPackageName - The name or Amazon Resource Name (ARN) of the model package to use to create the model.
public void setInferenceSpecificationName(String inferenceSpecificationName)

The inference specification name in the model package version.

Parameters:
inferenceSpecificationName - The inference specification name in the model package version.

public String getInferenceSpecificationName()

The inference specification name in the model package version.

public ContainerDefinition withInferenceSpecificationName(String inferenceSpecificationName)

The inference specification name in the model package version.

Parameters:
inferenceSpecificationName - The inference specification name in the model package version.
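A combined sketch of the two preceding properties (the model package ARN and specification name are placeholders):

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;

// Create a model from a registered model package version, selecting one
// of its named inference specifications.
ContainerDefinition fromPackage = new ContainerDefinition()
        .withModelPackageName(
                "arn:aws:sagemaker:us-east-1:123456789012:model-package/my-pkg/3")
        .withInferenceSpecificationName("cpu-inference");
```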
public void setMultiModelConfig(MultiModelConfig multiModelConfig)

Specifies additional configuration for multi-model endpoints.

Parameters:
multiModelConfig - Specifies additional configuration for multi-model endpoints.

public MultiModelConfig getMultiModelConfig()

Specifies additional configuration for multi-model endpoints.

public ContainerDefinition withMultiModelConfig(MultiModelConfig multiModelConfig)

Specifies additional configuration for multi-model endpoints.

Parameters:
multiModelConfig - Specifies additional configuration for multi-model endpoints.
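As a hedged sketch (I'm assuming `MultiModelConfig` exposes a model-cache setting; this configuration pairs with the multi-model `Mode` documented above):

```java
import com.amazonaws.services.sagemaker.model.ContainerDefinition;
import com.amazonaws.services.sagemaker.model.MultiModelConfig;

ContainerDefinition container = new ContainerDefinition();
// Multi-model containers typically pair Mode=MultiModel with this config.
container.setMode("MultiModel");
container.setMultiModelConfig(new MultiModelConfig()
        .withModelCacheSetting("Disabled")); // assumed setting name and value
```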
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object

See Also:
Object.toString()
public ContainerDefinition clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given `ProtocolMarshaller`.

Specified by:
marshall in interface StructuredPojo

Parameters:
protocolMarshaller - Implementation of `ProtocolMarshaller` used to marshall this object's data.