enum AcceleratorType
Language | Type name
---|---
.NET | Amazon.CDK.AWS.EC2.AcceleratorType
Go | github.com/aws/aws-cdk-go/awscdk/v2/awsec2#AcceleratorType
Java | software.amazon.awscdk.services.ec2.AcceleratorType
Python | aws_cdk.aws_ec2.AcceleratorType
TypeScript | aws-cdk-lib » aws_ec2 » AcceleratorType
Hardware accelerator categories available for EC2 instances.
Defines the general type of hardware accelerator that can be attached to an instance, typically used in instance requirement specifications (e.g., GPUs for compute-intensive tasks, FPGAs for custom logic, or inference chips for ML workloads).
Example
```ts
import { Size } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as iam from 'aws-cdk-lib/aws-iam';

declare const infrastructureRole: iam.Role;
declare const instanceProfile: iam.InstanceProfile;
declare const vpc: ec2.Vpc;

const miCapacityProvider = new ecs.ManagedInstancesCapacityProvider(this, 'MICapacityProvider', {
  infrastructureRole,
  ec2InstanceProfile: instanceProfile,
  subnets: vpc.privateSubnets,
  instanceRequirements: {
    // Required: CPU and memory constraints
    vCpuCountMin: 2,
    vCpuCountMax: 8,
    memoryMin: Size.gibibytes(4),
    memoryMax: Size.gibibytes(32),

    // CPU preferences
    cpuManufacturers: [ec2.CpuManufacturer.INTEL, ec2.CpuManufacturer.AMD],
    instanceGenerations: [ec2.InstanceGeneration.CURRENT],

    // Instance type filtering
    allowedInstanceTypes: ['m5.*', 'c5.*'],

    // Performance characteristics
    burstablePerformance: ec2.BurstablePerformance.EXCLUDED,
    bareMetal: ec2.BareMetal.EXCLUDED,

    // Accelerator requirements (for ML/AI workloads)
    acceleratorTypes: [ec2.AcceleratorType.GPU],
    acceleratorManufacturers: [ec2.AcceleratorManufacturer.NVIDIA],
    acceleratorNames: [ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
    acceleratorCountMin: 1,

    // Storage requirements
    localStorage: ec2.LocalStorage.REQUIRED,
    localStorageTypes: [ec2.LocalStorageType.SSD],
    totalLocalStorageGBMin: 100,

    // Network requirements
    networkInterfaceCountMin: 2,
    networkBandwidthGbpsMin: 10,

    // Cost optimization
    onDemandMaxPricePercentageOverLowestPrice: 10,
  },
});
```
Members
Name | Description
---|---
GPU | Graphics Processing Unit accelerators, such as NVIDIA GPUs.
FPGA | Field Programmable Gate Array accelerators, such as Xilinx FPGAs.
INFERENCE | Inference accelerators, such as AWS Inferentia.
GPU
Graphics Processing Unit accelerators, such as NVIDIA GPUs.
Commonly used for machine learning training, graphics rendering, or high-performance parallel computing.
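A minimal sketch of a GPU-focused requirements object, reusing the property names and enum values from the example above; the concrete thresholds are illustrative, not recommendations:
```ts
import { Size } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Illustrative GPU requirements for ML training or parallel compute.
// Property names and enum values mirror the example above; values are assumptions.
const gpuRequirements = {
  vCpuCountMin: 8,
  memoryMin: Size.gibibytes(32),
  acceleratorTypes: [ec2.AcceleratorType.GPU],
  acceleratorManufacturers: [ec2.AcceleratorManufacturer.NVIDIA],
  acceleratorNames: [ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
  acceleratorCountMin: 1,
};
```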
FPGA
Field Programmable Gate Array accelerators, such as Xilinx FPGAs.
Used for hardware-level customization and specialized workloads.
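A comparable sketch for FPGA workloads; `allowedInstanceTypes` comes from the example above, and the `f1.*` pattern (the EC2 FPGA instance family) is an illustrative assumption:
```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Illustrative FPGA requirements; the instance-type pattern is an assumption.
const fpgaRequirements = {
  acceleratorTypes: [ec2.AcceleratorType.FPGA],
  acceleratorCountMin: 1,
  allowedInstanceTypes: ['f1.*'],
};
```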
INFERENCE
Inference accelerators, such as AWS Inferentia.
Purpose-built for efficient machine learning inference.
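A corresponding sketch for inference fleets; the `inf1.*`/`inf2.*` patterns (the EC2 Inferentia instance families) and the price ceiling value are illustrative assumptions:
```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Illustrative inference requirements; the instance-type patterns and the
// price ceiling are assumptions, not recommendations.
const inferenceRequirements = {
  acceleratorTypes: [ec2.AcceleratorType.INFERENCE],
  acceleratorCountMin: 1,
  allowedInstanceTypes: ['inf1.*', 'inf2.*'],
  onDemandMaxPricePercentageOverLowestPrice: 20,
};
```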