Amazon Elastic Compute Cloud
User Guide for Linux Instances

Storage Optimized Instances

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

D2 Instances

D2 instances are well suited for the following applications:

  • Massively parallel processing (MPP) data warehousing

  • MapReduce and Hadoop distributed computing

  • Log or data processing applications

H1 Instances

H1 instances are well suited for the following applications:

  • Data-intensive workloads such as MapReduce and distributed file systems

  • Applications requiring sequential access to large amounts of data on direct-attached instance storage

  • Applications that require high-throughput access to large quantities of data

I3 Instances

I3 instances are well suited for the following applications:

  • High frequency online transaction processing (OLTP) systems

  • Relational databases

  • NoSQL databases

  • Cache for in-memory databases (for example, Redis)

  • Data warehousing applications

  • Low latency Ad-Tech serving applications

i3.metal instances provide your applications with direct access to physical resources of the host server, such as processors and memory. These instances are well suited for the following:

  • Workloads that require access to low-level hardware features (for example, Intel VT) that are not available or fully supported in virtualized environments

  • Applications that require a non-virtualized environment for licensing or support

Hardware Specifications

The primary data storage for D2 instances is HDD instance store volumes. The primary data storage for I3 instances is non-volatile memory express (NVMe) SSD instance store volumes.

Instance store volumes persist only for the life of the instance. When you stop or terminate an instance, the applications and data in its instance store volumes are erased. We recommend that you regularly back up or replicate important data in your instance store volumes. For more information, see Amazon EC2 Instance Store and SSD Instance Store Volumes.
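
For example, one simple approach is to copy instance store data to durable storage such as Amazon S3 on a schedule. The following is a minimal sketch; it assumes that the AWS CLI is installed, that the instance has permission to write to the bucket, and that /mnt/instance-store and my-backup-bucket are placeholder names that you replace with your own.

    # Copy the contents of an instance store volume to an S3 bucket.
    # /mnt/instance-store and my-backup-bucket are placeholders.
    aws s3 sync /mnt/instance-store s3://my-backup-bucket/instance-store-backup/ \
        --exclude "lost+found/*"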

The following is a summary of the hardware specifications for storage optimized instances.

Instance type Default vCPUs Memory (GiB)
d2.xlarge 4 30.5
d2.2xlarge 8 61
d2.4xlarge 16 122
d2.8xlarge 36 244
h1.2xlarge 8 32
h1.4xlarge 16 64
h1.8xlarge 32 128
h1.16xlarge 64 256
i3.large 2 15.25
i3.xlarge 4 30.5
i3.2xlarge 8 61
i3.4xlarge 16 122
i3.8xlarge 32 244
i3.16xlarge 64 488
i3.metal 72 512
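
If your version of the AWS CLI includes the describe-instance-types command, you can also retrieve these figures programmatically. The following sketch assumes that command is available and that you have credentials configured.

    # List the default vCPU count and memory (in MiB) for selected instance types.
    # Requires an AWS CLI version that supports describe-instance-types.
    aws ec2 describe-instance-types \
        --instance-types d2.xlarge h1.2xlarge i3.large \
        --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
        --output table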

For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instance Types.

For more information about specifying CPU options, see Optimizing CPU Options.

Instance Performance

To ensure the best disk throughput performance from your instance on Linux, we recommend that you use the most recent version of Amazon Linux 2 or the Amazon Linux AMI.

For instances with NVMe instance store volumes, you must use a Linux AMI with kernel version 4.4 or later. Otherwise, your instance will not achieve the maximum IOPS performance available.
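
As a quick check, you can confirm the running kernel version and verify that the NVMe instance store volumes are visible to the operating system. The nvme command in the following sketch is available only if the nvme-cli package is installed.

    # Confirm that the kernel is version 4.4 or later.
    uname -r

    # List block devices; NVMe instance store volumes appear as /dev/nvme*n1.
    lsblk

    # Optional: show NVMe device details (requires the nvme-cli package).
    sudo nvme list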

D2 instances provide the best disk performance when you use a Linux kernel that supports persistent grants, an extension to the Xen block ring protocol that significantly improves disk throughput and scalability. For more information about persistent grants, see this article in the Xen Project Blog.
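
On many kernels you can check whether the Xen block front-end driver negotiated persistent grants by searching the kernel log. The exact message text varies by kernel version, so treat the following sketch as a rough check rather than a definitive test.

    # Look for persistent grant messages from the Xen block front-end driver.
    # The message format differs between kernel versions.
    dmesg | grep -i "persistent grants"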

EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your instance. Some storage optimized instances are EBS-optimized by default at no additional cost. For more information, see Amazon EBS–Optimized Instances.

Some storage optimized instance types provide the ability to control processor C-states and P-states on Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the desired performance (in CPU frequency) from a core. For more information, see Processor State Control for Your EC2 Instance.
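
For example, you can inspect and adjust these states with the cpupower utility, which is typically provided by the kernel-tools or linux-tools package depending on the distribution. The following is a minimal sketch; the available governors depend on your kernel and AMI.

    # Show the C-states (idle states) supported by the processor.
    sudo cpupower idle-info

    # Show the current P-state (frequency scaling) configuration.
    sudo cpupower frequency-info

    # Request the maximum performance state from all cores.
    sudo cpupower frequency-set -g performance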

Network Performance

You can enable enhanced networking capabilities on supported instance types. Enhanced networking provides significantly higher packet-per-second (PPS) performance, lower network jitter, and lower latencies. For more information, see Enhanced Networking on Linux.

Instance types that use the Elastic Network Adapter (ENA) for enhanced networking deliver high packet per second performance with consistently low latencies. Most applications do not consistently need a high level of network performance, but can benefit from having access to increased bandwidth when they send or receive data. Instance types that use the ENA and support up to 10 Gbps of throughput use a network I/O credit mechanism to allocate network bandwidth to instances based on average bandwidth utilization. These instances accrue credits when their network throughput is below their baseline limits, and can use these credits when they perform network data transfers. For workloads that require access to 10 Gbps of bandwidth or more on a sustained basis, we recommend using instance types that support 10 Gbps or 25 Gbps network speeds.
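
From within the instance, you can verify that the ENA module is available and that the network interface is using it. The interface name in the following sketch (eth0) is an assumption and may differ on some AMIs.

    # Confirm that the ENA kernel module is available.
    modinfo ena

    # Check which driver the primary network interface is using.
    ethtool -i eth0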

The following is a summary of network performance for storage optimized instances that support enhanced networking.

Instance type                          Network performance                                 Enhanced networking
i3.4xlarge and smaller                 Up to 10 Gbps, uses a network I/O credit mechanism  ENA
i3.8xlarge | h1.8xlarge                10 Gbps                                             ENA
i3.16xlarge | i3.metal | h1.16xlarge   25 Gbps                                             ENA
d2.xlarge                              Moderate                                            Intel 82599 VF
d2.2xlarge | d2.4xlarge                High                                                Intel 82599 VF
d2.8xlarge                             10 Gbps                                             Intel 82599 VF

SSD I/O Performance

If you use a Linux AMI with kernel version 4.4 or later and use all the SSD-based instance store volumes available to your instance, you get the IOPS (4,096 byte block size) performance listed in the following table (at queue depth saturation). Otherwise, you get lower IOPS performance.

Instance Size   100% Random Read IOPS   Write IOPS
i3.large *      100,125                 35,000
i3.xlarge *     206,250                 70,000
i3.2xlarge      412,500                 180,000
i3.4xlarge      825,000                 360,000
i3.8xlarge      1.65 million            720,000
i3.16xlarge     3.3 million             1.4 million

* For i3.large and i3.xlarge instances, you can get up to the specified performance.
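
As an illustration, you can approximate these figures with a synthetic benchmark such as fio run against a raw instance store device. The following is a minimal sketch only; it assumes that fio is installed, uses /dev/nvme0n1 as a placeholder device name, and writes directly to the device, which destroys any data on it.

    # Measure 4,096-byte random read IOPS at a high queue depth.
    # WARNING: running fio against a raw device destroys the data on it.
    # /dev/nvme0n1 is a placeholder; substitute your instance store device.
    sudo fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=4 --direct=1 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting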

As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that you can achieve decreases. This is due to the extra work the SSD controller must do to find available space, rewrite existing data, and erase unused space so that it can be rewritten. This process of garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write operations to user write operations. This decrease in performance is even larger if the write operations are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller amount of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and store the result in a new location. This pattern results in significantly increased write amplification, increased latency, and dramatically reduced I/O performance.

SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy is to reserve space in the SSD instance storage so that the controller can more efficiently manage the space available for write operations. This is called over-provisioning. The SSD-based instance store volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller can use it for over-provisioning. This decreases the storage that you can use, but increases performance even if the disk is close to full capacity.
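
For example, with a partitioning tool such as parted you can create a single partition that covers 90% of the device and leave the remainder unpartitioned for over-provisioning. The sketch below uses /dev/nvme0n1 as a placeholder device name and erases any existing partition table on that device.

    # Create a GPT label and one partition covering 90% of the device,
    # leaving the remaining 10% unpartitioned for SSD over-provisioning.
    # /dev/nvme0n1 is a placeholder; this erases the existing partition table.
    sudo parted -s /dev/nvme0n1 mklabel gpt mkpart primary 0% 90%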

For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD controller whenever you no longer need data that you've written. This provides the controller with more free space, which can reduce write amplification and increase performance. For more information, see Instance Store Volume TRIM Support.
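
For example, you can mount a file system with the discard option so that TRIM commands are issued as files are deleted, or run fstrim on demand. The device and mount point names below are placeholders.

    # Mount with continuous TRIM (discard) enabled.
    sudo mount -o discard /dev/nvme0n1 /mnt/data

    # Or trim an already mounted file system on demand.
    sudo fstrim -v /mnt/data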

Instance Features

The following is a summary of features for storage optimized instances:

     EBS only   NVMe EBS   Instance store   Placement group
D2                         HDD              Yes
H1                         HDD              Yes
I3                         NVMe *           Yes

* The root device volume must be an Amazon EBS volume.

Support for vCPUs

The d2.8xlarge instance type provides 36 vCPUs, which might cause launch issues in some Linux operating systems that have a vCPU limit of 32. We strongly recommend that you use the latest AMIs when you launch d2.8xlarge instances.

The following Linux AMIs support launching d2.8xlarge instances with 36 vCPUs:

  • Amazon Linux 2 (HVM)

  • Amazon Linux AMI 2018.03 (HVM)

  • Ubuntu Server 14.04 LTS (HVM) or later

  • Red Hat Enterprise Linux 7.1 (HVM)

  • SUSE Linux Enterprise Server 12 (HVM)

If you must use a different AMI for your application, and your d2.8xlarge instance launch does not complete successfully (for example, if your instance status changes to stopped during launch with a Client.InstanceInitiatedShutdown state transition reason), modify your instance as described in the following procedure to support more than 32 vCPUs so that you can use the d2.8xlarge instance type.

To update an instance to support more than 32 vCPUs

  1. Launch a D2 instance using your AMI, choosing any D2 instance type other than d2.8xlarge.

  2. Update the kernel to the latest version by following your operating system-specific instructions. For example, for RHEL 6, use the following command:

    sudo yum update -y kernel
  3. Stop the instance.

  4. (Optional) Create an AMI from the instance that you can use to launch any additional d2.8xlarge instances that you need in the future.

  5. Change the instance type of your stopped instance to d2.8xlarge (choose Actions, Instance Settings, Change Instance Type, and then follow the directions).

  6. Start the instance. If the instance launches properly, you are done; you can confirm the vCPU count with the sketch that follows this procedure. If the instance still does not boot properly, proceed to the next step.

  7. (Optional) If the instance still does not boot properly, the kernel on your instance may not support more than 32 vCPUs. However, you may be able to boot the instance if you limit the vCPUs.

    1. Change the instance type of your stopped instance to any D2 instance type other than d2.8xlarge (choose Actions, Instance Settings, Change Instance Type, and then follow the directions).

    2. Add the maxcpus=32 option to your boot kernel parameters by following your operating system-specific instructions. For example, for RHEL 6, edit the /boot/grub/menu.lst file and add the following option to the most recent and active kernel entry:

      default=0
      timeout=1
      splashimage=(hd0,0)/boot/grub/splash.xpm.gz
      hiddenmenu
      title Red Hat Enterprise Linux Server (2.6.32-504.3.3.el6.x86_64)
              root (hd0,0)
              kernel /boot/vmlinuz-2.6.32-504.3.3.el6.x86_64 maxcpus=32 console=ttyS0 ro root=UUID=9996863e-b964-47d3-a33b-3920974fdbd9 rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 xen_blkfront.sda_is_xvda=1 console=ttyS0,115200n8 console=tty0 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM
              initrd /boot/initramfs-2.6.32-504.3.3.el6.x86_64.img
    3. Stop the instance.

    4. (Optional) Create an AMI from the instance that you can use to launch any additional d2.8xlarge instances that you need in the future.

    5. Change the instance type of your stopped instance to d2.8xlarge (choose Actions, Instance Settings, Change Instance Type, and then follow the directions).

    6. Start the instance.
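
After the instance starts, you can confirm how many vCPUs the operating system sees. The following sketch assumes that you can connect to the instance; if you applied the maxcpus=32 option, expect 32 online vCPUs instead of 36.

    # Show the number of online vCPUs visible to the operating system.
    nproc

    # Show additional detail, including any vCPUs held offline by maxcpus.
    lscpu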

Release Notes

  • You must launch storage optimized instances using an HVM AMI. For more information, see Linux AMI Virtualization Types.

  • You must launch I3 instances using an Amazon EBS-backed AMI.

  • The following are requirements for i3.metal instances:

    • NVMe drivers must be installed. EBS volumes are exposed as NVMe block devices.

    • Elastic Network Adapter (ENA) drivers must be installed.

    The following AMIs meet these requirements:

    • Amazon Linux 2

    • Amazon Linux 2014.03 or later

    • Ubuntu 14.04 or later

    • SUSE Linux Enterprise Server 12 or later

    • Red Hat Enterprise Linux 7.4 or later

    • CentOS 7 or later

    • Windows Server 2008 R2 or later

  • Launching an i3.metal instance boots the underlying server, which includes verifying all hardware and firmware components. This means that it can take 20 minutes from the time the instance enters the running state until it becomes available over the network.

  • Attaching or detaching EBS volumes or secondary network interfaces from an i3.metal instance requires PCIe native hotplug support. Amazon Linux 2 and the latest versions of the Amazon Linux AMI support PCIe native hotplug, but earlier versions do not. You must enable the following Linux kernel configuration options:

    CONFIG_HOTPLUG_PCI_PCIE=y
    CONFIG_PCIEASPM=y
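
    You can check whether the running kernel was built with these options, for example:

    # Check the running kernel's configuration for PCIe native hotplug support.
    # The configuration file location can vary by distribution.
    grep -E "CONFIG_HOTPLUG_PCI_PCIE|CONFIG_PCIEASPM" /boot/config-$(uname -r)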
  • i3.metal instances use a PCI-based serial device rather than an I/O port-based serial device. The upstream Linux kernel and the latest Amazon Linux AMIs support this device. i3.metal instances also provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The latest Windows AMIs automatically use the PCI-based serial device.

  • With FreeBSD AMIs, i3.metal instances take nearly an hour to boot and I/O to the local NVMe storage does not complete. As a workaround, add the following line to /boot/loader.conf and reboot:

    hw.nvme.per_cpu_io_queues="0"
  • The d2.8xlarge instance type has 36 vCPUs, which might cause launch issues in some Linux operating systems that have a vCPU limit of 32. For more information, see Support for vCPUs.

  • There is a limit on the total number of instances that you can launch in a region, and there are additional limits on some instance types. For more information, see How many instances can I run in Amazon EC2?. To request a limit increase, use the Amazon EC2 Instance Request Form.