@Generated(value="com.amazonaws:aws-java-sdk-code-generator")
public class StartModelRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
| Constructor and Description |
|---|
| StartModelRequest() |
| Modifier and Type | Method and Description |
|---|---|
| StartModelRequest | clone() Creates a shallow clone of this object for all fields except the handler context. |
| boolean | equals(Object obj) |
| String | getClientToken() ClientToken is an idempotency token that ensures a call to StartModel completes only once. |
| Integer | getMaxInferenceUnits() The maximum number of inference units to use for auto-scaling the model. |
| Integer | getMinInferenceUnits() The minimum number of inference units to use. |
| String | getModelVersion() The version of the model that you want to start. |
| String | getProjectName() The name of the project that contains the model that you want to start. |
| int | hashCode() |
| void | setClientToken(String clientToken) ClientToken is an idempotency token that ensures a call to StartModel completes only once. |
| void | setMaxInferenceUnits(Integer maxInferenceUnits) The maximum number of inference units to use for auto-scaling the model. |
| void | setMinInferenceUnits(Integer minInferenceUnits) The minimum number of inference units to use. |
| void | setModelVersion(String modelVersion) The version of the model that you want to start. |
| void | setProjectName(String projectName) The name of the project that contains the model that you want to start. |
| String | toString() Returns a string representation of this object. |
| StartModelRequest | withClientToken(String clientToken) ClientToken is an idempotency token that ensures a call to StartModel completes only once. |
| StartModelRequest | withMaxInferenceUnits(Integer maxInferenceUnits) The maximum number of inference units to use for auto-scaling the model. |
| StartModelRequest | withMinInferenceUnits(Integer minInferenceUnits) The minimum number of inference units to use. |
| StartModelRequest | withModelVersion(String modelVersion) The version of the model that you want to start. |
| StartModelRequest | withProjectName(String projectName) The name of the project that contains the model that you want to start. |
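The set/get/with triples above follow the fluent-setter convention used throughout this generated SDK: each withX calls the corresponding setX and returns the request itself, so calls can be chained. A minimal stand-alone sketch of that pattern, using a hypothetical stub class rather than the real SDK type so it compiles without the AWS SDK on the classpath:

```java
// Illustrative stub mirroring the setter/getter/with-er pattern of
// StartModelRequest; not the actual com.amazonaws class.
public class StartModelRequestSketch {
    private String projectName;
    private String modelVersion;
    private Integer minInferenceUnits;

    public void setProjectName(String projectName) { this.projectName = projectName; }
    public String getProjectName() { return projectName; }
    public StartModelRequestSketch withProjectName(String projectName) {
        setProjectName(projectName);
        return this; // returning this enables fluent chaining
    }

    public void setModelVersion(String modelVersion) { this.modelVersion = modelVersion; }
    public String getModelVersion() { return modelVersion; }
    public StartModelRequestSketch withModelVersion(String modelVersion) {
        setModelVersion(modelVersion);
        return this;
    }

    public void setMinInferenceUnits(Integer minInferenceUnits) { this.minInferenceUnits = minInferenceUnits; }
    public Integer getMinInferenceUnits() { return minInferenceUnits; }
    public StartModelRequestSketch withMinInferenceUnits(Integer minInferenceUnits) {
        setMinInferenceUnits(minInferenceUnits);
        return this;
    }

    public static void main(String[] args) {
        // Build the request in one chained expression.
        StartModelRequestSketch req = new StartModelRequestSketch()
                .withProjectName("my-project")
                .withModelVersion("1")
                .withMinInferenceUnits(1);
        System.out.println(req.getProjectName() + " v" + req.getModelVersion());
    }
}
```

With the real class, the chained request is typically handed to the Lookout for Vision client's startModel call.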
Methods inherited from class com.amazonaws.AmazonWebServiceRequest: addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout

public void setProjectName(String projectName)

The name of the project that contains the model that you want to start.

Parameters:
projectName - The name of the project that contains the model that you want to start.

public String getProjectName()
The name of the project that contains the model that you want to start.
public StartModelRequest withProjectName(String projectName)
The name of the project that contains the model that you want to start.
Parameters:
projectName - The name of the project that contains the model that you want to start.

public void setModelVersion(String modelVersion)
The version of the model that you want to start.
Parameters:
modelVersion - The version of the model that you want to start.

public String getModelVersion()
The version of the model that you want to start.
public StartModelRequest withModelVersion(String modelVersion)
The version of the model that you want to start.
Parameters:
modelVersion - The version of the model that you want to start.

public void setMinInferenceUnits(Integer minInferenceUnits)
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
Parameters:
minInferenceUnits - The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.

public Integer getMinInferenceUnits()
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
public StartModelRequest withMinInferenceUnits(Integer minInferenceUnits)
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
Parameters:
minInferenceUnits - The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.

public void setClientToken(String clientToken)
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This
prevents retries after a network error from making multiple start requests. You'll need to provide your own value
for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value
for ClientToken is considered a new call to StartModel. An idempotency token is active
for 8 hours.
Parameters:
clientToken - ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
public String getClientToken()
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This
prevents retries after a network error from making multiple start requests. You'll need to provide your own value
for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value
for ClientToken is considered a new call to StartModel. An idempotency token is active
for 8 hours.
Returns:
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
public StartModelRequest withClientToken(String clientToken)
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This
prevents retries after a network error from making multiple start requests. You'll need to provide your own value
for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value
for ClientToken is considered a new call to StartModel. An idempotency token is active
for 8 hours.
Parameters:
clientToken - ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
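The retry behavior described above can be sketched without the SDK: choose one ClientToken value before the first call, then reuse it on every retry. Here startModel is a hypothetical local stub standing in for the service call (the first attempt simulates a dropped response), and java.util.UUID supplies the caller-chosen token:

```java
import java.util.UUID;

public class StartModelRetry {
    static int attempts = 0;            // how many calls the "service" has seen
    static String lastTokenSeen = null; // token supplied on the most recent call

    // Hypothetical stub for the real StartModel call: the first attempt
    // "fails" the way a lost response would, and the retry succeeds.
    static boolean startModel(String clientToken) {
        attempts++;
        lastTokenSeen = clientToken;
        return attempts > 1;
    }

    public static void main(String[] args) {
        // Choose the idempotency token once, before the first attempt.
        String clientToken = UUID.randomUUID().toString();

        boolean started = false;
        for (int i = 0; i < 3 && !started; i++) {
            // Reusing the same token keeps the retry safe: the service
            // treats it as the same StartModel call, not a new one.
            started = startModel(clientToken);
        }
        System.out.println("started after " + attempts + " attempts");
    }
}
```

Passing a different token on the retry would count as a new StartModel call; per the description above, a token stays active for 8 hours.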
public void setMaxInferenceUnits(Integer maxInferenceUnits)
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
Parameters:
maxInferenceUnits - The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.

public Integer getMaxInferenceUnits()
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
public StartModelRequest withMaxInferenceUnits(Integer maxInferenceUnits)
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
Parameters:
maxInferenceUnits - The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.

public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()

public StartModelRequest clone()

Creates a shallow clone of this object for all fields except the handler context.

Overrides:
clone in class AmazonWebServiceRequest
See Also:
Object.clone()