@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class StartMLModelTrainingJobRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
Fields inherited from class com.amazonaws.AmazonWebServiceRequest: NOOP
Constructor and Description |
---|
StartMLModelTrainingJobRequest() |
Modifier and Type | Method and Description |
---|---|
StartMLModelTrainingJobRequest | clone(): Creates a shallow clone of this object for all fields except the handler context. |
boolean | equals(Object obj) |
String | getBaseProcessingInstanceType(): The type of ML instance used in preparing and managing training of ML models. |
CustomModelTrainingParameters | getCustomModelTrainingParameters(): The configuration for custom model training. |
String | getDataProcessingJobId(): The job ID of the completed data-processing job that created the data that the training will work with. |
Boolean | getEnableManagedSpotTraining(): Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. |
String | getId(): A unique identifier for the new job. |
Integer | getMaxHPONumberOfTrainingJobs(): Maximum total number of training jobs to start for the hyperparameter tuning job. |
Integer | getMaxHPOParallelTrainingJobs(): Maximum number of parallel training jobs to start for the hyperparameter tuning job. |
String | getNeptuneIamRoleArn(): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. |
String | getPreviousModelTrainingJobId(): The job ID of a completed model-training job that you want to update incrementally based on updated data. |
String | getS3OutputEncryptionKMSKey(): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. |
String | getSagemakerIamRoleArn(): The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur. |
List<String> | getSecurityGroupIds(): The VPC security group IDs. |
List<String> | getSubnets(): The IDs of the subnets in the Neptune VPC. |
String | getTrainingInstanceType(): The type of ML instance used for model training. |
Integer | getTrainingInstanceVolumeSizeInGB(): The disk volume size of the training instance. |
Integer | getTrainingTimeOutInSeconds(): Timeout in seconds for the training job. |
String | getTrainModelS3Location(): The location in Amazon S3 where the model artifacts are to be stored. |
String | getVolumeEncryptionKMSKey(): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. |
int | hashCode() |
Boolean | isEnableManagedSpotTraining(): Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. |
void | setBaseProcessingInstanceType(String baseProcessingInstanceType): The type of ML instance used in preparing and managing training of ML models. |
void | setCustomModelTrainingParameters(CustomModelTrainingParameters customModelTrainingParameters): The configuration for custom model training. |
void | setDataProcessingJobId(String dataProcessingJobId): The job ID of the completed data-processing job that created the data that the training will work with. |
void | setEnableManagedSpotTraining(Boolean enableManagedSpotTraining): Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. |
void | setId(String id): A unique identifier for the new job. |
void | setMaxHPONumberOfTrainingJobs(Integer maxHPONumberOfTrainingJobs): Maximum total number of training jobs to start for the hyperparameter tuning job. |
void | setMaxHPOParallelTrainingJobs(Integer maxHPOParallelTrainingJobs): Maximum number of parallel training jobs to start for the hyperparameter tuning job. |
void | setNeptuneIamRoleArn(String neptuneIamRoleArn): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. |
void | setPreviousModelTrainingJobId(String previousModelTrainingJobId): The job ID of a completed model-training job that you want to update incrementally based on updated data. |
void | setS3OutputEncryptionKMSKey(String s3OutputEncryptionKMSKey): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. |
void | setSagemakerIamRoleArn(String sagemakerIamRoleArn): The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur. |
void | setSecurityGroupIds(Collection<String> securityGroupIds): The VPC security group IDs. |
void | setSubnets(Collection<String> subnets): The IDs of the subnets in the Neptune VPC. |
void | setTrainingInstanceType(String trainingInstanceType): The type of ML instance used for model training. |
void | setTrainingInstanceVolumeSizeInGB(Integer trainingInstanceVolumeSizeInGB): The disk volume size of the training instance. |
void | setTrainingTimeOutInSeconds(Integer trainingTimeOutInSeconds): Timeout in seconds for the training job. |
void | setTrainModelS3Location(String trainModelS3Location): The location in Amazon S3 where the model artifacts are to be stored. |
void | setVolumeEncryptionKMSKey(String volumeEncryptionKMSKey): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. |
String | toString(): Returns a string representation of this object. |
StartMLModelTrainingJobRequest | withBaseProcessingInstanceType(String baseProcessingInstanceType): The type of ML instance used in preparing and managing training of ML models. |
StartMLModelTrainingJobRequest | withCustomModelTrainingParameters(CustomModelTrainingParameters customModelTrainingParameters): The configuration for custom model training. |
StartMLModelTrainingJobRequest | withDataProcessingJobId(String dataProcessingJobId): The job ID of the completed data-processing job that created the data that the training will work with. |
StartMLModelTrainingJobRequest | withEnableManagedSpotTraining(Boolean enableManagedSpotTraining): Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. |
StartMLModelTrainingJobRequest | withId(String id): A unique identifier for the new job. |
StartMLModelTrainingJobRequest | withMaxHPONumberOfTrainingJobs(Integer maxHPONumberOfTrainingJobs): Maximum total number of training jobs to start for the hyperparameter tuning job. |
StartMLModelTrainingJobRequest | withMaxHPOParallelTrainingJobs(Integer maxHPOParallelTrainingJobs): Maximum number of parallel training jobs to start for the hyperparameter tuning job. |
StartMLModelTrainingJobRequest | withNeptuneIamRoleArn(String neptuneIamRoleArn): The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. |
StartMLModelTrainingJobRequest | withPreviousModelTrainingJobId(String previousModelTrainingJobId): The job ID of a completed model-training job that you want to update incrementally based on updated data. |
StartMLModelTrainingJobRequest | withS3OutputEncryptionKMSKey(String s3OutputEncryptionKMSKey): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. |
StartMLModelTrainingJobRequest | withSagemakerIamRoleArn(String sagemakerIamRoleArn): The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur. |
StartMLModelTrainingJobRequest | withSecurityGroupIds(Collection<String> securityGroupIds): The VPC security group IDs. |
StartMLModelTrainingJobRequest | withSecurityGroupIds(String... securityGroupIds): The VPC security group IDs. |
StartMLModelTrainingJobRequest | withSubnets(Collection<String> subnets): The IDs of the subnets in the Neptune VPC. |
StartMLModelTrainingJobRequest | withSubnets(String... subnets): The IDs of the subnets in the Neptune VPC. |
StartMLModelTrainingJobRequest | withTrainingInstanceType(String trainingInstanceType): The type of ML instance used for model training. |
StartMLModelTrainingJobRequest | withTrainingInstanceVolumeSizeInGB(Integer trainingInstanceVolumeSizeInGB): The disk volume size of the training instance. |
StartMLModelTrainingJobRequest | withTrainingTimeOutInSeconds(Integer trainingTimeOutInSeconds): Timeout in seconds for the training job. |
StartMLModelTrainingJobRequest | withTrainModelS3Location(String trainModelS3Location): The location in Amazon S3 where the model artifacts are to be stored. |
StartMLModelTrainingJobRequest | withVolumeEncryptionKMSKey(String volumeEncryptionKMSKey): The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest:
addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout
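For orientation, here is a minimal, hedged sketch of how this request class is typically used with the Neptune data API client from the same SDK. It assumes the aws-java-sdk-neptunedata module, the AmazonNeptunedataClientBuilder entry point, a startMLModelTrainingJob client method, and the usual getId accessor on the result; the endpoint, bucket, role ARN, and job ID are placeholders. Verify the client class and method names against your SDK version.

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.neptunedata.AmazonNeptunedata;
import com.amazonaws.services.neptunedata.AmazonNeptunedataClientBuilder;
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobResult;

public class StartTrainingExample {
    public static void main(String[] args) {
        // Assumed client entry point; the Neptune data API is addressed via your cluster endpoint.
        AmazonNeptunedata client = AmazonNeptunedataClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://your-neptune-endpoint:8182", "us-east-1")) // placeholder endpoint/region
                .build();

        // Placeholder job ID, S3 location, and IAM role ARN.
        StartMLModelTrainingJobRequest request = new StartMLModelTrainingJobRequest()
                .withDataProcessingJobId("my-data-processing-job")
                .withTrainModelS3Location("s3://example-bucket/neptune-ml/training-output/")
                .withSagemakerIamRoleArn("arn:aws:iam::123456789012:role/ExampleSageMakerRole");

        // Result accessor assumed; the API response includes the new job's id.
        StartMLModelTrainingJobResult result = client.startMLModelTrainingJob(request);
        System.out.println("Started model training job: " + result.getId());
    }
}
```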
public void setId(String id)
A unique identifier for the new job. The default is an autogenerated UUID.
Parameters:
id - A unique identifier for the new job. The default is an autogenerated UUID.

public String getId()
A unique identifier for the new job. The default is an autogenerated UUID.

public StartMLModelTrainingJobRequest withId(String id)
A unique identifier for the new job. The default is an autogenerated UUID.
Parameters:
id - A unique identifier for the new job. The default is an autogenerated UUID.

public void setPreviousModelTrainingJobId(String previousModelTrainingJobId)
The job ID of a completed model-training job that you want to update incrementally based on updated data.
Parameters:
previousModelTrainingJobId - The job ID of a completed model-training job that you want to update incrementally based on updated data.

public String getPreviousModelTrainingJobId()
The job ID of a completed model-training job that you want to update incrementally based on updated data.

public StartMLModelTrainingJobRequest withPreviousModelTrainingJobId(String previousModelTrainingJobId)
The job ID of a completed model-training job that you want to update incrementally based on updated data.
Parameters:
previousModelTrainingJobId - The job ID of a completed model-training job that you want to update incrementally based on updated data.

public void setDataProcessingJobId(String dataProcessingJobId)
The job ID of the completed data-processing job that created the data that the training will work with.
Parameters:
dataProcessingJobId - The job ID of the completed data-processing job that created the data that the training will work with.

public String getDataProcessingJobId()
The job ID of the completed data-processing job that created the data that the training will work with.

public StartMLModelTrainingJobRequest withDataProcessingJobId(String dataProcessingJobId)
The job ID of the completed data-processing job that created the data that the training will work with.
Parameters:
dataProcessingJobId - The job ID of the completed data-processing job that created the data that the training will work with.

public void setTrainModelS3Location(String trainModelS3Location)
The location in Amazon S3 where the model artifacts are to be stored.
Parameters:
trainModelS3Location - The location in Amazon S3 where the model artifacts are to be stored.

public String getTrainModelS3Location()
The location in Amazon S3 where the model artifacts are to be stored.

public StartMLModelTrainingJobRequest withTrainModelS3Location(String trainModelS3Location)
The location in Amazon S3 where the model artifacts are to be stored.
Parameters:
trainModelS3Location - The location in Amazon S3 where the model artifacts are to be stored.
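As a sketch of how these job-identifier fields fit together, the snippet below starts an incremental training run from an earlier model-training job and a freshly completed data-processing job. The job IDs and S3 path are placeholders, and the package name is assumed from the aws-java-sdk-neptunedata module.

```java
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class IncrementalTrainingSketch {
    public static StartMLModelTrainingJobRequest build() {
        // Hypothetical job IDs and S3 location; substitute your own values.
        return new StartMLModelTrainingJobRequest()
                .withId("my-model-training-job")                          // optional; defaults to an autogenerated UUID
                .withPreviousModelTrainingJobId("earlier-training-job")   // update a completed job incrementally
                .withDataProcessingJobId("updated-data-processing-job")   // the job that produced the new training data
                .withTrainModelS3Location("s3://example-bucket/neptune-ml/model-artifacts/");
    }
}
```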
public void setSagemakerIamRoleArn(String sagemakerIamRoleArn)
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
Parameters:
sagemakerIamRoleArn - The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

public String getSagemakerIamRoleArn()
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

public StartMLModelTrainingJobRequest withSagemakerIamRoleArn(String sagemakerIamRoleArn)
The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.
Parameters:
sagemakerIamRoleArn - The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

public void setNeptuneIamRoleArn(String neptuneIamRoleArn)
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
Parameters:
neptuneIamRoleArn - The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

public String getNeptuneIamRoleArn()
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

public StartMLModelTrainingJobRequest withNeptuneIamRoleArn(String neptuneIamRoleArn)
The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
Parameters:
neptuneIamRoleArn - The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

public void setBaseProcessingInstanceType(String baseProcessingInstanceType)
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
Parameters:
baseProcessingInstanceType - The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

public String getBaseProcessingInstanceType()
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

public StartMLModelTrainingJobRequest withBaseProcessingInstanceType(String baseProcessingInstanceType)
The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.
Parameters:
baseProcessingInstanceType - The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

public void setTrainingInstanceType(String trainingInstanceType)
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.
Parameters:
trainingInstanceType - The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

public String getTrainingInstanceType()
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

public StartMLModelTrainingJobRequest withTrainingInstanceType(String trainingInstanceType)
The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.
Parameters:
trainingInstanceType - The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multiGPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

public void setTrainingInstanceVolumeSizeInGB(Integer trainingInstanceVolumeSizeInGB)
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
Parameters:
trainingInstanceVolumeSizeInGB - The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

public Integer getTrainingInstanceVolumeSizeInGB()
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

public StartMLModelTrainingJobRequest withTrainingInstanceVolumeSizeInGB(Integer trainingInstanceVolumeSizeInGB)
The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.
Parameters:
trainingInstanceVolumeSizeInGB - The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

public void setTrainingTimeOutInSeconds(Integer trainingTimeOutInSeconds)
Timeout in seconds for the training job. The default is 86,400 (1 day).
Parameters:
trainingTimeOutInSeconds - Timeout in seconds for the training job. The default is 86,400 (1 day).

public Integer getTrainingTimeOutInSeconds()
Timeout in seconds for the training job. The default is 86,400 (1 day).

public StartMLModelTrainingJobRequest withTrainingTimeOutInSeconds(Integer trainingTimeOutInSeconds)
Timeout in seconds for the training job. The default is 86,400 (1 day).
Parameters:
trainingTimeOutInSeconds - Timeout in seconds for the training job. The default is 86,400 (1 day).
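As an illustration of the instance, storage, and timeout settings above, the following sketch requests the documented default GPU instance type, an explicit disk volume, and a shorter timeout. The specific values and job identifiers are placeholders, not recommendations, and the package name is assumed from the aws-java-sdk-neptunedata module.

```java
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class TrainingResourcesSketch {
    public static StartMLModelTrainingJobRequest build() {
        return new StartMLModelTrainingJobRequest()
                .withDataProcessingJobId("completed-data-processing-job")        // placeholder
                .withTrainModelS3Location("s3://example-bucket/model-output/")   // placeholder
                .withTrainingInstanceType("ml.p3.2xlarge") // the documented default; choose per task type, graph size, and budget
                .withTrainingInstanceVolumeSizeInGB(100)   // must hold both input data and the output model; 0 lets Neptune ML choose
                .withTrainingTimeOutInSeconds(43200);      // 12 hours instead of the default 86,400 seconds (1 day)
    }
}
```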
public void setMaxHPONumberOfTrainingJobs(Integer maxHPONumberOfTrainingJobs)
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.
Parameters:
maxHPONumberOfTrainingJobs - Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

public Integer getMaxHPONumberOfTrainingJobs()
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

public StartMLModelTrainingJobRequest withMaxHPONumberOfTrainingJobs(Integer maxHPONumberOfTrainingJobs)
Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.
Parameters:
maxHPONumberOfTrainingJobs - Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

public void setMaxHPOParallelTrainingJobs(Integer maxHPOParallelTrainingJobs)
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
Parameters:
maxHPOParallelTrainingJobs - Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

public Integer getMaxHPOParallelTrainingJobs()
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

public StartMLModelTrainingJobRequest withMaxHPOParallelTrainingJobs(Integer maxHPOParallelTrainingJobs)
Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
Parameters:
maxHPOParallelTrainingJobs - Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.
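Following the tuning guidance above (both defaults are 2, and at least 10 total tuning runs are suggested for a well-performing model), here is a hedged sketch with placeholder job identifiers, again assuming the neptunedata model package:

```java
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class HpoSettingsSketch {
    public static StartMLModelTrainingJobRequest build() {
        return new StartMLModelTrainingJobRequest()
                .withDataProcessingJobId("completed-data-processing-job")      // placeholder
                .withTrainModelS3Location("s3://example-bucket/model-output/") // placeholder
                .withMaxHPONumberOfTrainingJobs(10) // at least 10 total tuning runs per the guidance above
                .withMaxHPOParallelTrainingJobs(2); // bounded by the resources available on the training instance
    }
}
```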
public List<String> getSubnets()
The IDs of the subnets in the Neptune VPC. The default is None.

public void setSubnets(Collection<String> subnets)
The IDs of the subnets in the Neptune VPC. The default is None.
Parameters:
subnets - The IDs of the subnets in the Neptune VPC. The default is None.

public StartMLModelTrainingJobRequest withSubnets(String... subnets)
The IDs of the subnets in the Neptune VPC. The default is None.
NOTE: This method appends the values to the existing list (if any). Use setSubnets(java.util.Collection) or withSubnets(java.util.Collection) if you want to override the existing values.
Parameters:
subnets - The IDs of the subnets in the Neptune VPC. The default is None.

public StartMLModelTrainingJobRequest withSubnets(Collection<String> subnets)
The IDs of the subnets in the Neptune VPC. The default is None.
Parameters:
subnets - The IDs of the subnets in the Neptune VPC. The default is None.

public List<String> getSecurityGroupIds()
The VPC security group IDs. The default is None.

public void setSecurityGroupIds(Collection<String> securityGroupIds)
The VPC security group IDs. The default is None.
Parameters:
securityGroupIds - The VPC security group IDs. The default is None.

public StartMLModelTrainingJobRequest withSecurityGroupIds(String... securityGroupIds)
The VPC security group IDs. The default is None.
NOTE: This method appends the values to the existing list (if any). Use setSecurityGroupIds(java.util.Collection) or withSecurityGroupIds(java.util.Collection) if you want to override the existing values.
Parameters:
securityGroupIds - The VPC security group IDs. The default is None.

public StartMLModelTrainingJobRequest withSecurityGroupIds(Collection<String> securityGroupIds)
The VPC security group IDs. The default is None.
Parameters:
securityGroupIds - The VPC security group IDs. The default is None.
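The varargs overloads above append to any existing list, while the Collection overloads replace it. A small sketch with placeholder subnet and security-group IDs, assuming the neptunedata model package:

```java
import java.util.Arrays;

import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class VpcConfigSketch {
    public static void main(String[] args) {
        StartMLModelTrainingJobRequest request = new StartMLModelTrainingJobRequest();

        // Varargs form appends to whatever is already set.
        request.withSubnets("subnet-0123456789abcdef0")
               .withSubnets("subnet-0fedcba9876543210");   // the request now holds two subnets
        request.withSecurityGroupIds("sg-0123456789abcdef0");

        // Collection form replaces the existing values outright.
        request.setSubnets(Arrays.asList("subnet-0123456789abcdef0")); // back to a single subnet

        System.out.println(request.getSubnets());
        System.out.println(request.getSecurityGroupIds());
    }
}
```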
public void setVolumeEncryptionKMSKey(String volumeEncryptionKMSKey)
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
Parameters:
volumeEncryptionKMSKey - The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

public String getVolumeEncryptionKMSKey()
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

public StartMLModelTrainingJobRequest withVolumeEncryptionKMSKey(String volumeEncryptionKMSKey)
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
Parameters:
volumeEncryptionKMSKey - The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

public void setS3OutputEncryptionKMSKey(String s3OutputEncryptionKMSKey)
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
Parameters:
s3OutputEncryptionKMSKey - The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

public String getS3OutputEncryptionKMSKey()
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.

public StartMLModelTrainingJobRequest withS3OutputEncryptionKMSKey(String s3OutputEncryptionKMSKey)
The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
Parameters:
s3OutputEncryptionKMSKey - The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is None.
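Both encryption settings default to None; the sketch below supplies a KMS key for the training volume and another for the S3 output. The key ARNs are placeholders, and the package name is assumed as in the earlier sketches.

```java
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class EncryptionSketch {
    public static StartMLModelTrainingJobRequest build() {
        return new StartMLModelTrainingJobRequest()
                // Placeholder key ARNs; any key identifier accepted by SageMaker can be used here.
                .withVolumeEncryptionKMSKey("arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555")
                .withS3OutputEncryptionKMSKey("arn:aws:kms:us-east-1:123456789012:key/66666666-7777-8888-9999-000000000000");
    }
}
```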
public void setEnableManagedSpotTraining(Boolean enableManagedSpotTraining)
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
Parameters:
enableManagedSpotTraining - Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

public Boolean getEnableManagedSpotTraining()
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

public StartMLModelTrainingJobRequest withEnableManagedSpotTraining(Boolean enableManagedSpotTraining)
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
Parameters:
enableManagedSpotTraining - Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

public Boolean isEnableManagedSpotTraining()
Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.
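Enabling managed spot training is a single flag (off by default). A minimal sketch, assuming the neptunedata model package:

```java
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class SpotTrainingSketch {
    public static StartMLModelTrainingJobRequest build() {
        // Use EC2 spot capacity for training to reduce cost; the default is False.
        return new StartMLModelTrainingJobRequest()
                .withEnableManagedSpotTraining(true);
    }
}
```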
public void setCustomModelTrainingParameters(CustomModelTrainingParameters customModelTrainingParameters)
The configuration for custom model training. This is a JSON object.
Parameters:
customModelTrainingParameters - The configuration for custom model training. This is a JSON object.

public CustomModelTrainingParameters getCustomModelTrainingParameters()
The configuration for custom model training. This is a JSON object.

public StartMLModelTrainingJobRequest withCustomModelTrainingParameters(CustomModelTrainingParameters customModelTrainingParameters)
The configuration for custom model training. This is a JSON object.
Parameters:
customModelTrainingParameters - The configuration for custom model training. This is a JSON object.
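For custom model training, the nested CustomModelTrainingParameters object carries the custom-training configuration. The accessor names below (a source S3 directory and entry-point scripts) come from the Neptune ML custom-model workflow and are an assumption here, not something documented on this page; check the CustomModelTrainingParameters class reference before relying on them.

```java
import com.amazonaws.services.neptunedata.model.CustomModelTrainingParameters;
import com.amazonaws.services.neptunedata.model.StartMLModelTrainingJobRequest;

public class CustomTrainingSketch {
    public static StartMLModelTrainingJobRequest build() {
        // Assumed accessors on CustomModelTrainingParameters; verify against the class reference.
        CustomModelTrainingParameters custom = new CustomModelTrainingParameters()
                .withSourceS3DirectoryPath("s3://example-bucket/custom-model-source/") // placeholder
                .withTrainingEntryPointScript("training.py")                            // placeholder
                .withTransformEntryPointScript("transform.py");                         // placeholder

        return new StartMLModelTrainingJobRequest()
                .withCustomModelTrainingParameters(custom);
    }
}
```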
public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()

public StartMLModelTrainingJobRequest clone()
Description copied from class: AmazonWebServiceRequest
Creates a shallow clone of this object for all fields except the handler context.
Overrides:
clone in class AmazonWebServiceRequest
See Also:
Object.clone()