@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class InferenceMetrics extends Object implements Serializable, Cloneable, StructuredPojo
The metrics for an existing endpoint compared in an Inference Recommender job.
| Constructor and Description |
|---|
| InferenceMetrics() |
| Modifier and Type | Method and Description |
|---|---|
| InferenceMetrics | clone() |
| boolean | equals(Object obj) |
| Integer | getMaxInvocations() - The expected maximum number of requests per minute for the instance. |
| Integer | getModelLatency() - The expected model latency at maximum invocations per minute for the instance. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) - Marshalls this structured data using the given ProtocolMarshaller. |
| void | setMaxInvocations(Integer maxInvocations) - The expected maximum number of requests per minute for the instance. |
| void | setModelLatency(Integer modelLatency) - The expected model latency at maximum invocations per minute for the instance. |
| String | toString() - Returns a string representation of this object. |
| InferenceMetrics | withMaxInvocations(Integer maxInvocations) - The expected maximum number of requests per minute for the instance. |
| InferenceMetrics | withModelLatency(Integer modelLatency) - The expected model latency at maximum invocations per minute for the instance. |
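The setter/wither pairs above follow the AWS SDK for Java fluent-setter convention: each with* method sets the field and returns this so calls can be chained. A minimal, self-contained sketch of that pattern, using a hypothetical stand-in class rather than the SDK's InferenceMetrics itself:

```java
// Hypothetical stand-in illustrating the fluent setter pattern used by
// InferenceMetrics; the real class ships with the AWS SDK for Java.
class FluentMetricsSketch {
    private Integer maxInvocations; // expected max requests per minute
    private Integer modelLatency;   // expected latency at max invocations

    public void setMaxInvocations(Integer maxInvocations) {
        this.maxInvocations = maxInvocations;
    }

    public Integer getMaxInvocations() {
        return maxInvocations;
    }

    // Sets the field, then returns this so further calls can be chained.
    public FluentMetricsSketch withMaxInvocations(Integer maxInvocations) {
        setMaxInvocations(maxInvocations);
        return this;
    }

    public void setModelLatency(Integer modelLatency) {
        this.modelLatency = modelLatency;
    }

    public Integer getModelLatency() {
        return modelLatency;
    }

    public FluentMetricsSketch withModelLatency(Integer modelLatency) {
        setModelLatency(modelLatency);
        return this;
    }

    public static void main(String[] args) {
        // Chained configuration in one expression.
        FluentMetricsSketch metrics = new FluentMetricsSketch()
                .withMaxInvocations(360)
                .withModelLatency(185);
        System.out.println(metrics.getMaxInvocations() + " " + metrics.getModelLatency());
        // prints "360 185"
    }
}
```

Because each with* call returns the same instance, the chain builds up state on one object; the plain set* methods behave identically but return void.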
public void setMaxInvocations(Integer maxInvocations)

The expected maximum number of requests per minute for the instance.

Parameters:
maxInvocations - The expected maximum number of requests per minute for the instance.

public Integer getMaxInvocations()

The expected maximum number of requests per minute for the instance.

Returns:
The expected maximum number of requests per minute for the instance.
public InferenceMetrics withMaxInvocations(Integer maxInvocations)

The expected maximum number of requests per minute for the instance.

Parameters:
maxInvocations - The expected maximum number of requests per minute for the instance.
Returns:
Returns a reference to this object so that method calls can be chained together.

public void setModelLatency(Integer modelLatency)

The expected model latency at maximum invocations per minute for the instance.

Parameters:
modelLatency - The expected model latency at maximum invocations per minute for the instance.

public Integer getModelLatency()

The expected model latency at maximum invocations per minute for the instance.

Returns:
The expected model latency at maximum invocations per minute for the instance.
public InferenceMetrics withModelLatency(Integer modelLatency)

The expected model latency at maximum invocations per minute for the instance.

Parameters:
modelLatency - The expected model latency at maximum invocations per minute for the instance.
Returns:
Returns a reference to this object so that method calls can be chained together.

public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public InferenceMetrics clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.
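The division of labor behind marshall can be sketched with hypothetical stand-in types: the POJO knows its own fields and hands each one to a marshaller, while the marshaller decides how to encode them. The SDK's real ProtocolMarshaller and StructuredPojo machinery is internal and more involved; this is only an illustration of the shape of the contract.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the marshaller side: knows the wire format.
interface SketchMarshaller {
    void marshall(String fieldName, Object value);
}

// Toy marshaller that encodes fields into a map, skipping nulls the way
// unset optional members are omitted from a serialized request.
class MapMarshaller implements SketchMarshaller {
    final Map<String, Object> encoded = new LinkedHashMap<>();

    @Override
    public void marshall(String fieldName, Object value) {
        if (value != null) {
            encoded.put(fieldName, value);
        }
    }
}

// Hypothetical stand-in for the StructuredPojo side: hands each of its
// fields to the marshaller, as InferenceMetrics.marshall does.
class MetricsPojoSketch {
    Integer maxInvocations;
    Integer modelLatency;

    void marshall(SketchMarshaller marshaller) {
        marshaller.marshall("MaxInvocations", maxInvocations);
        marshaller.marshall("ModelLatency", modelLatency);
    }
}
```

The POJO stays ignorant of the output format; swapping in a different SketchMarshaller implementation changes the encoding without touching MetricsPojoSketch.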