@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class InferenceComponentComputeResourceRequirements extends Object implements Serializable, Cloneable, StructuredPojo
Defines the compute resources to allocate to run a model that you assign to an inference component. These resources include CPU cores, accelerators, and memory.
| Constructor and Description |
|---|
| `InferenceComponentComputeResourceRequirements()` |
| Modifier and Type | Method and Description |
|---|---|
| `InferenceComponentComputeResourceRequirements` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `Integer` | `getMaxMemoryRequiredInMb()` The maximum MB of memory to allocate to run a model that you assign to an inference component. |
| `Integer` | `getMinMemoryRequiredInMb()` The minimum MB of memory to allocate to run a model that you assign to an inference component. |
| `Float` | `getNumberOfAcceleratorDevicesRequired()` The number of accelerators to allocate to run a model that you assign to an inference component. |
| `Float` | `getNumberOfCpuCoresRequired()` The number of CPU cores to allocate to run a model that you assign to an inference component. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setMaxMemoryRequiredInMb(Integer maxMemoryRequiredInMb)` The maximum MB of memory to allocate to run a model that you assign to an inference component. |
| `void` | `setMinMemoryRequiredInMb(Integer minMemoryRequiredInMb)` The minimum MB of memory to allocate to run a model that you assign to an inference component. |
| `void` | `setNumberOfAcceleratorDevicesRequired(Float numberOfAcceleratorDevicesRequired)` The number of accelerators to allocate to run a model that you assign to an inference component. |
| `void` | `setNumberOfCpuCoresRequired(Float numberOfCpuCoresRequired)` The number of CPU cores to allocate to run a model that you assign to an inference component. |
| `String` | `toString()` Returns a string representation of this object. |
| `InferenceComponentComputeResourceRequirements` | `withMaxMemoryRequiredInMb(Integer maxMemoryRequiredInMb)` The maximum MB of memory to allocate to run a model that you assign to an inference component. |
| `InferenceComponentComputeResourceRequirements` | `withMinMemoryRequiredInMb(Integer minMemoryRequiredInMb)` The minimum MB of memory to allocate to run a model that you assign to an inference component. |
| `InferenceComponentComputeResourceRequirements` | `withNumberOfAcceleratorDevicesRequired(Float numberOfAcceleratorDevicesRequired)` The number of accelerators to allocate to run a model that you assign to an inference component. |
| `InferenceComponentComputeResourceRequirements` | `withNumberOfCpuCoresRequired(Float numberOfCpuCoresRequired)` The number of CPU cores to allocate to run a model that you assign to an inference component. |
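Because each `with*` method returns the object itself, the resource requirements can be built in a single chained expression rather than with separate `set*` calls. A minimal sketch, assuming the class lives in the `com.amazonaws.services.sagemaker.model` package of the AWS SDK for Java v1 (the values shown are illustrative, not recommendations):

```java
import com.amazonaws.services.sagemaker.model.InferenceComponentComputeResourceRequirements;

public class ResourceRequirementsExample {
    public static void main(String[] args) {
        // Request 2 CPU cores, 1 accelerator device, and a 1024-4096 MB memory range
        // for the model assigned to the inference component.
        InferenceComponentComputeResourceRequirements requirements =
                new InferenceComponentComputeResourceRequirements()
                        .withNumberOfCpuCoresRequired(2.0f)
                        .withNumberOfAcceleratorDevicesRequired(1.0f)
                        .withMinMemoryRequiredInMb(1024)
                        .withMaxMemoryRequiredInMb(4096);

        // The getters read back the values set through the fluent chain.
        System.out.println(requirements.getNumberOfCpuCoresRequired());
        System.out.println(requirements.getMinMemoryRequiredInMb());
    }
}
```

This fluent style is the usual way such request-model objects are populated before being attached to a larger request (for example, an inference component specification).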
public InferenceComponentComputeResourceRequirements()

public void setNumberOfCpuCoresRequired(Float numberOfCpuCoresRequired)
The number of CPU cores to allocate to run a model that you assign to an inference component.
Parameters:
numberOfCpuCoresRequired - The number of CPU cores to allocate to run a model that you assign to an inference component.

public Float getNumberOfCpuCoresRequired()
Returns:
The number of CPU cores to allocate to run a model that you assign to an inference component.

public InferenceComponentComputeResourceRequirements withNumberOfCpuCoresRequired(Float numberOfCpuCoresRequired)
The number of CPU cores to allocate to run a model that you assign to an inference component.
Parameters:
numberOfCpuCoresRequired - The number of CPU cores to allocate to run a model that you assign to an inference component.

public void setNumberOfAcceleratorDevicesRequired(Float numberOfAcceleratorDevicesRequired)
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.
Parameters:
numberOfAcceleratorDevicesRequired - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.

public Float getNumberOfAcceleratorDevicesRequired()
Returns:
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.

public InferenceComponentComputeResourceRequirements withNumberOfAcceleratorDevicesRequired(Float numberOfAcceleratorDevicesRequired)
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.
Parameters:
numberOfAcceleratorDevicesRequired - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.

public void setMinMemoryRequiredInMb(Integer minMemoryRequiredInMb)
The minimum MB of memory to allocate to run a model that you assign to an inference component.
Parameters:
minMemoryRequiredInMb - The minimum MB of memory to allocate to run a model that you assign to an inference component.

public Integer getMinMemoryRequiredInMb()
Returns:
The minimum MB of memory to allocate to run a model that you assign to an inference component.

public InferenceComponentComputeResourceRequirements withMinMemoryRequiredInMb(Integer minMemoryRequiredInMb)
The minimum MB of memory to allocate to run a model that you assign to an inference component.
Parameters:
minMemoryRequiredInMb - The minimum MB of memory to allocate to run a model that you assign to an inference component.

public void setMaxMemoryRequiredInMb(Integer maxMemoryRequiredInMb)
The maximum MB of memory to allocate to run a model that you assign to an inference component.
Parameters:
maxMemoryRequiredInMb - The maximum MB of memory to allocate to run a model that you assign to an inference component.

public Integer getMaxMemoryRequiredInMb()
Returns:
The maximum MB of memory to allocate to run a model that you assign to an inference component.

public InferenceComponentComputeResourceRequirements withMaxMemoryRequiredInMb(Integer maxMemoryRequiredInMb)
The maximum MB of memory to allocate to run a model that you assign to an inference component.
Parameters:
maxMemoryRequiredInMb - The maximum MB of memory to allocate to run a model that you assign to an inference component.

public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()
public InferenceComponentComputeResourceRequirements clone()
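Since the class implements Cloneable and overrides equals and hashCode, a cloned instance is a distinct object that compares equal to the original by value. A short sketch, under the same `com.amazonaws.services.sagemaker.model` package assumption as above:

```java
import com.amazonaws.services.sagemaker.model.InferenceComponentComputeResourceRequirements;

public class CloneExample {
    public static void main(String[] args) {
        InferenceComponentComputeResourceRequirements original =
                new InferenceComponentComputeResourceRequirements()
                        .withNumberOfCpuCoresRequired(4.0f)
                        .withMinMemoryRequiredInMb(2048);

        // clone() yields an independent copy with the same field values.
        InferenceComponentComputeResourceRequirements copy = original.clone();

        System.out.println(copy != original);       // distinct references
        System.out.println(copy.equals(original));  // equal by value
        System.out.println(copy.hashCode() == original.hashCode());
    }
}
```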
public void marshall(ProtocolMarshaller protocolMarshaller)
Marshalls this structured data using the given ProtocolMarshaller.
Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.