AcceleratorType

class aws_cdk.aws_ec2.AcceleratorType(*values)

Bases: Enum

Hardware accelerator categories available for EC2 instances.

Defines the general type of hardware accelerator that can be attached to an instance, typically used in instance requirement specifications (e.g., GPUs for compute-intensive tasks, FPGAs for custom logic, or inference chips for ML workloads).

Example:

# infrastructure_role: iam.Role
# instance_profile: iam.InstanceProfile
# vpc: ec2.Vpc


mi_capacity_provider = ecs.ManagedInstancesCapacityProvider(self, "MICapacityProvider",
    infrastructure_role=infrastructure_role,
    ec2_instance_profile=instance_profile,
    subnets=vpc.private_subnets,
    instance_requirements=ec2.InstanceRequirementsConfig(
        # Required: CPU and memory constraints
        v_cpu_count_min=2,
        v_cpu_count_max=8,
        memory_min=Size.gibibytes(4),
        memory_max=Size.gibibytes(32),

        # CPU preferences
        cpu_manufacturers=[ec2.CpuManufacturer.INTEL, ec2.CpuManufacturer.AMD],
        instance_generations=[ec2.InstanceGeneration.CURRENT],

        # Instance type filtering
        allowed_instance_types=["m5.*", "c5.*"],

        # Performance characteristics
        burstable_performance=ec2.BurstablePerformance.EXCLUDED,
        bare_metal=ec2.BareMetal.EXCLUDED,

        # Accelerator requirements (for ML/AI workloads)
        accelerator_types=[ec2.AcceleratorType.GPU],
        accelerator_manufacturers=[ec2.AcceleratorManufacturer.NVIDIA],
        accelerator_names=[ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
        accelerator_count_min=1,

        # Storage requirements
        local_storage=ec2.LocalStorage.REQUIRED,
        local_storage_types=[ec2.LocalStorageType.SSD],
        total_local_storage_gb_min=100,

        # Network requirements
        network_interface_count_min=2,
        network_bandwidth_gbps_min=10,

        # Cost optimization
        on_demand_max_price_percentage_over_lowest_price=10
    )
)

Attributes

FPGA

Field-Programmable Gate Array accelerators, such as Xilinx FPGAs.

Used for hardware-level customization and specialized workloads.

GPU

Graphics Processing Unit accelerators, such as NVIDIA GPUs.

Commonly used for machine learning training, graphics rendering, or high-performance parallel computing.

INFERENCE

Inference accelerators, such as AWS Inferentia.

Purpose-built for efficient machine learning inference.
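
The three members above behave like any standard Python Enum: they can be compared by identity and used to filter instance choices. The sketch below is not the aws_cdk API — it re-creates the enum with the standard library and uses a hypothetical catalog (the family-to-accelerator mapping is illustrative only) to show the kind of filtering that `accelerator_types` performs behind the scenes.

```python
from enum import Enum

# Illustrative stand-in for ec2.AcceleratorType (not imported from aws_cdk).
class AcceleratorType(Enum):
    GPU = "gpu"
    FPGA = "fpga"
    INFERENCE = "inference"

# Hypothetical catalog mapping instance families to their accelerator category.
CATALOG = {
    "p4d": AcceleratorType.GPU,        # NVIDIA GPUs
    "f1": AcceleratorType.FPGA,        # Xilinx FPGAs
    "inf1": AcceleratorType.INFERENCE, # AWS Inferentia chips
}

def families_with(accel: AcceleratorType) -> list[str]:
    """Return the catalog families whose accelerator matches the requested type."""
    return [family for family, kind in CATALOG.items() if kind is accel]

print(families_with(AcceleratorType.GPU))  # ['p4d']
```

Because enum members are singletons, `is` comparison is idiomatic here; the real `InstanceRequirementsConfig` accepts a list of such members, so passing `[AcceleratorType.GPU]` keeps only GPU-equipped instance types in the match set.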