@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class StartProjectVersionRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
| Constructor | Description |
|---|---|
| `StartProjectVersionRequest()` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `StartProjectVersionRequest` | `clone()` | Creates a shallow clone of this object for all fields except the handler context. |
| `boolean` | `equals(Object obj)` | |
| `Integer` | `getMaxInferenceUnits()` | The maximum number of inference units to use for auto-scaling the model. |
| `Integer` | `getMinInferenceUnits()` | The minimum number of inference units to use. |
| `String` | `getProjectVersionArn()` | The Amazon Resource Name (ARN) of the model version that you want to start. |
| `int` | `hashCode()` | |
| `void` | `setMaxInferenceUnits(Integer maxInferenceUnits)` | The maximum number of inference units to use for auto-scaling the model. |
| `void` | `setMinInferenceUnits(Integer minInferenceUnits)` | The minimum number of inference units to use. |
| `void` | `setProjectVersionArn(String projectVersionArn)` | The Amazon Resource Name (ARN) of the model version that you want to start. |
| `String` | `toString()` | Returns a string representation of this object. |
| `StartProjectVersionRequest` | `withMaxInferenceUnits(Integer maxInferenceUnits)` | The maximum number of inference units to use for auto-scaling the model. |
| `StartProjectVersionRequest` | `withMinInferenceUnits(Integer minInferenceUnits)` | The minimum number of inference units to use. |
| `StartProjectVersionRequest` | `withProjectVersionArn(String projectVersionArn)` | The Amazon Resource Name (ARN) of the model version that you want to start. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest:

addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout
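Because each `with*` method returns the request itself, a request is typically built in a single fluent chain. A minimal sketch, assuming the AWS SDK for Java 1.x is on the classpath; the ARN below is a placeholder, not a real resource:

```java
import com.amazonaws.services.rekognition.model.StartProjectVersionRequest;

public class BuildRequestExample {
    public static void main(String[] args) {
        // Placeholder ARN; substitute the ARN of your trained model version.
        String arn = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-version/1";

        StartProjectVersionRequest request = new StartProjectVersionRequest()
                .withProjectVersionArn(arn)
                .withMinInferenceUnits(1)   // billed capacity floor
                .withMaxInferenceUnits(4);  // auto-scaling ceiling; omit to disable auto-scaling

        System.out.println(request.getMinInferenceUnits()); // 1
        System.out.println(request.getMaxInferenceUnits()); // 4
    }
}
```

The fluent form and the plain setters are equivalent; `setMinInferenceUnits(1)` followed by `setMaxInferenceUnits(4)` produces the same request.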
public void setProjectVersionArn(String projectVersionArn)

The Amazon Resource Name (ARN) of the model version that you want to start.

Parameters:
projectVersionArn - The Amazon Resource Name (ARN) of the model version that you want to start.

public String getProjectVersionArn()

The Amazon Resource Name (ARN) of the model version that you want to start.

public StartProjectVersionRequest withProjectVersionArn(String projectVersionArn)

The Amazon Resource Name (ARN) of the model version that you want to start.

Parameters:
projectVersionArn - The Amazon Resource Name (ARN) of the model version that you want to start.

public void setMinInferenceUnits(Integer minInferenceUnits)
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.

Parameters:
minInferenceUnits - The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.

public Integer getMinInferenceUnits()

The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
public StartProjectVersionRequest withMinInferenceUnits(Integer minInferenceUnits)

The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.

Parameters:
minInferenceUnits - The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
public void setMaxInferenceUnits(Integer maxInferenceUnits)

The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.

Parameters:
maxInferenceUnits - The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.

public Integer getMaxInferenceUnits()
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.
public StartProjectVersionRequest withMaxInferenceUnits(Integer maxInferenceUnits)

The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.

Parameters:
maxInferenceUnits - The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.

public String toString()
Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public StartProjectVersionRequest clone()

Creates a shallow clone of this object for all fields except the handler context.

Overrides:
clone in class AmazonWebServiceRequest
See Also:
Object.clone()
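At a call site, the completed request is passed to an AmazonRekognition client, whose startProjectVersion operation accepts this request type. A hedged sketch, assuming valid AWS credentials and region configuration; the ARN is a placeholder, and starting the model is asynchronous, so the returned status reflects the in-progress state rather than a running model:

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.StartProjectVersionRequest;
import com.amazonaws.services.rekognition.model.StartProjectVersionResult;

public class StartModelExample {
    public static void main(String[] args) {
        // Builds a client from the default credential and region providers.
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        StartProjectVersionRequest request = new StartProjectVersionRequest()
                .withProjectVersionArn("arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-version/1") // placeholder
                .withMinInferenceUnits(1);

        // The call returns immediately; the model continues starting in the background.
        StartProjectVersionResult result = rekognition.startProjectVersion(request);
        System.out.println("Model status: " + result.getStatus());
    }
}
```

Note that inference units are billed while the model runs, so a started model should be stopped (via the corresponding StopProjectVersion operation) when it is no longer needed.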