Amazon Elastic Compute Cloud
User Guide for Linux Instances

Amazon EBS Volume Types

Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. The volume types fall into two categories:

  • SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS

  • HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS

The following table describes the use cases and performance characteristics for each volume type:

Solid-State Drives (SSD)

General Purpose SSD (gp2)*
  Description: General purpose SSD volume that balances price and performance for a wide variety of transactional workloads
  Use Cases:
    • Recommended for most workloads
    • System boot volumes
    • Virtual desktops
    • Low-latency interactive apps
    • Development and test environments
  API Name: gp2
  Volume Size: 1 GiB - 16 TiB
  Max. IOPS**/Volume: 10,000
  Max. Throughput/Volume: 160 MiB/s
  Max. IOPS/Instance: 48,000
  Max. Throughput/Instance: 800 MiB/s
  Dominant Performance Attribute: IOPS

Provisioned IOPS SSD (io1)
  Description: Highest-performance SSD volume designed for mission-critical applications
  Use Cases:
    • Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume
    • Large database workloads, such as:
      • MongoDB
      • Cassandra
      • Microsoft SQL Server
      • MySQL
      • PostgreSQL
      • Oracle
  API Name: io1
  Volume Size: 4 GiB - 16 TiB
  Max. IOPS**/Volume: 20,000
  Max. Throughput/Volume: 320 MiB/s
  Max. IOPS/Instance: 48,000
  Max. Throughput/Instance: 800 MiB/s
  Dominant Performance Attribute: IOPS

Hard Disk Drives (HDD)

Throughput Optimized HDD (st1)
  Description: Low cost HDD volume designed for frequently accessed, throughput-intensive workloads
  Use Cases:
    • Streaming workloads requiring consistent, fast throughput at a low price
    • Big data
    • Data warehouses
    • Log processing
    • Cannot be a boot volume
  API Name: st1
  Volume Size: 500 GiB - 16 TiB
  Max. IOPS**/Volume: 500
  Max. Throughput/Volume: 500 MiB/s
  Max. IOPS/Instance: 48,000
  Max. Throughput/Instance: 800 MiB/s
  Dominant Performance Attribute: MiB/s

Cold HDD (sc1)
  Description: Lowest cost HDD volume designed for less frequently accessed workloads
  Use Cases:
    • Throughput-oriented storage for large volumes of data that is infrequently accessed
    • Scenarios where the lowest storage cost is important
    • Cannot be a boot volume
  API Name: sc1
  Volume Size: 500 GiB - 16 TiB
  Max. IOPS**/Volume: 250
  Max. Throughput/Volume: 250 MiB/s
  Max. IOPS/Instance: 48,000
  Max. Throughput/Instance: 800 MiB/s
  Dominant Performance Attribute: MiB/s

*Default volume type

**gp2/io1 based on 16KiB I/O size, st1/sc1 based on 1 MiB I/O size

The following table describes previous-generation EBS volume types. If you need higher performance or performance consistency than previous-generation volumes can provide, we recommend that you consider using General Purpose SSD (gp2) or other current volume types. For more information, see Previous Generation Volumes.

Previous Generation Volumes

EBS Magnetic
  Description: Previous generation HDD
  Use Cases: Workloads where data is infrequently accessed
  API Name: standard
  Volume Size: 1 GiB - 1 TiB
  Max. IOPS/Volume: 40-200
  Max. Throughput/Volume: 40-90 MiB/s
  Max. IOPS/Instance: 48,000
  Max. Throughput/Instance: 800 MiB/s
  Dominant Performance Attribute: IOPS

Note

Linux AMIs require GPT partition tables and GRUB 2 for boot volumes 2 TiB (2048 GiB) or larger. Many Linux AMIs today use the MBR partitioning scheme, which only supports up to 2047 GiB boot volumes. If your instance does not boot with a boot volume that is 2 TiB or larger, the AMI you are using may be limited to a 2047 GiB boot volume size. Non-boot volumes do not have this limitation on Linux instances.

There are several factors that can affect the performance of EBS volumes, such as instance configuration, I/O characteristics, and workload demand. For more information about getting the most out of your EBS volumes, see Amazon EBS Volume Performance on Linux Instances.

For more information about pricing for these volume types, see Amazon EBS Pricing.

General Purpose SSD (gp2) Volumes

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. A gp2 volume can range in size from 1 GiB to 16 TiB.
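The scaling rule above can be sketched in a few lines of Python. The helper name is illustrative, not part of any AWS API:

```python
def gp2_baseline_iops(size_gib):
    # 3 IOPS per GiB, with a 100 IOPS floor and a 10,000 IOPS ceiling
    return min(max(3 * size_gib, 100), 10_000)

print(gp2_baseline_iops(100))   # a 100 GiB volume has a 300 IOPS baseline
```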

I/O Credits and Burst Performance

The performance of gp2 volumes is tied to volume size, which determines the baseline performance level of the volume and how quickly it accumulates I/O credits; larger volumes have higher baseline performance levels and accumulate I/O credits faster. I/O credits represent the available bandwidth that your gp2 volume can use to burst large amounts of I/O when more than the baseline performance is needed. The more credits your volume has for I/O, the more time it can burst beyond its baseline performance level and the better it performs when more performance is needed. The following diagram shows the burst-bucket behavior for gp2.

gp2 burst bucket

Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications. Volumes earn I/O credits at the baseline performance rate of 3 IOPS per GiB of volume size. For example, a 100 GiB gp2 volume has a baseline performance of 300 IOPS.

Comparing baseline performance and burst IOPS

When your volume requires more than the baseline performance I/O level, it draws on I/O credits in the credit balance to burst to the required performance level, up to a maximum of 3,000 IOPS. Volumes of 1,000 GiB and larger have a baseline performance that is equal to or greater than the maximum burst performance, and their I/O credit balance never depletes. When your volume uses fewer I/O credits than it earns in a second, unused I/O credits are added to the I/O credit balance. The maximum I/O credit balance for a volume is equal to the initial credit balance (5.4 million I/O credits).

The following table lists several volume sizes and the associated baseline performance of the volume (which is also the rate at which it accumulates I/O credits), the burst duration at the 3,000 IOPS maximum (when starting with a full credit balance), and the time in seconds that the volume would take to refill an empty credit balance.

Volume size (GiB)                     Baseline performance (IOPS)    Maximum burst duration @ 3,000 IOPS (seconds)    Seconds to fill empty credit balance
1                                     100                            1,862                                            54,000
100                                   300                            2,000                                            18,000
214 (Min. size for max. throughput)   642                            2,290                                            8,412
250                                   750                            2,400                                            7,200
500                                   1,500                          3,600                                            3,600
750                                   2,250                          7,200                                            2,400
1,000                                 3,000                          N/A*                                             N/A*
3,334 (Min. size for max. IOPS)       10,000                         N/A*                                             N/A*
16,384 (16 TiB, max. volume size)     10,000                         N/A*                                             N/A*

* Bursting and I/O credits are only relevant to volumes under 1,000 GiB, where burst performance exceeds baseline performance.

The burst duration of a volume is dependent on the size of the volume, the burst IOPS required, and the credit balance when the burst begins. This is shown in the following equation:

                             (Credit balance)
Burst duration  =  ------------------------------------
                   (Burst IOPS) - 3(Volume size in GiB)
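The equation and the table above can be reproduced with a short sketch. The helper names are illustrative; note that the baseline term applies the 100 IOPS floor, which is what makes the 1 GiB row (1,862 seconds) come out correctly:

```python
CREDIT_BALANCE = 5_400_000  # initial and maximum I/O credit balance

def gp2_burst_duration_s(size_gib, burst_iops=3_000):
    # Baseline IOPS: 3 per GiB, floored at 100, capped at 10,000
    baseline = min(max(3 * size_gib, 100), 10_000)
    if baseline >= burst_iops:
        return None  # volumes of 1,000 GiB and larger never deplete their credits
    return CREDIT_BALANCE / (burst_iops - baseline)

def gp2_refill_duration_s(size_gib):
    # Time to refill an empty bucket at the baseline (earn) rate
    baseline = min(max(3 * size_gib, 100), 10_000)
    return CREDIT_BALANCE / baseline
```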

What happens if I empty my I/O credit balance?

If your gp2 volume uses all of its I/O credit balance, its maximum IOPS performance remains at the baseline level (the rate at which your volume earns credits), and its throughput limit is reduced to the baseline IOPS multiplied by the maximum throughput and divided by 3,000. Both limits apply until I/O demand drops below the baseline level and unused credits are again added to the I/O credit balance.

                                                    (Max throughput) x (Baseline IOPS)
Throughput limit with empty I/O credit balance  =  ------------------------------------
                                                                 3,000

For example, a 100 GiB gp2 volume with an empty credit balance has a baseline IOPS performance limit of 300 IOPS and a throughput limit of 12.8 MiB/s ((128 MiB/s)*300/3000). The larger a volume is, the greater the baseline performance is and the faster it replenishes the credit balance. For more information about how IOPS are measured, see I/O Characteristics.
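The worked example can be sketched as follows. The helper name is illustrative, and the 128 MiB/s default assumes a volume of 170 GiB or less:

```python
def gp2_empty_bucket_throughput_mibs(baseline_iops, max_throughput_mibs=128):
    # Throughput limit with no credits: scale max throughput by baseline/3,000
    return max_throughput_mibs * baseline_iops / 3_000

print(gp2_empty_bucket_throughput_mibs(300))   # 12.8 MiB/s for a 100 GiB volume
```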

If you notice that your volume performance is frequently limited to the baseline level (due to an empty I/O credit balance), you should consider using a larger gp2 volume (with a higher baseline performance level) or switching to an io1 volume for workloads that require sustained IOPS performance greater than 10,000 IOPS.

Throughput Performance

The throughput limit for gp2 volumes is 128 MiB/s for volumes less than or equal to 170 GiB; for volumes over 170 GiB, this limit increases at the rate of 768 KiB/s per GiB to a maximum of 160 MiB/s (at 214 GiB and larger).
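This piecewise limit can be sketched as (hypothetical helper, 768 KiB/s = 0.75 MiB/s):

```python
def gp2_throughput_limit_mibs(size_gib):
    # 128 MiB/s up to 170 GiB; beyond that, +768 KiB/s per additional GiB,
    # capped at 160 MiB/s (reached at around 214 GiB)
    if size_gib <= 170:
        return 128.0
    return min(128.0 + (size_gib - 170) * 768 / 1024, 160.0)
```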

Throughput performance for gp2 volumes

Provisioned IOPS SSD (io1) Volumes

Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Instead of using a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year.

An io1 volume can range in size from 4 GiB to 16 TiB and you can provision up to 20,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3,000 IOPS must be at least 100 GiB. You can stripe multiple volumes together in a RAID configuration for larger size and greater performance.

The throughput limit of io1 volumes is 256 KiB for each IOPS provisioned, up to a maximum of 320 MiB/s (at 1,280 IOPS).
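As a sketch (illustrative helper name), the io1 limit scales with provisioned IOPS up to the cap:

```python
def io1_throughput_limit_mibs(provisioned_iops):
    # 256 KiB per provisioned IOPS, capped at 320 MiB/s (reached at 1,280 IOPS)
    return min(provisioned_iops * 256 / 1024, 320.0)
```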

Throughput limits for io1 volumes

Your per I/O latency experience depends on the IOPS provisioned and your workload pattern. For the best per I/O latency experience, we recommend that you provision an IOPS-to-GiB ratio greater than 2:1 (for example, a 2,000 IOPS volume would be smaller than 1,000 GiB in this case).

Note

Some AWS accounts created before 2012 might have access to Availability Zones in us-east-1, us-west-1, or ap-northeast-1 that do not support Provisioned IOPS SSD (io1) volumes. If you are unable to create an io1 volume (or launch an instance with an io1 volume in its block device mapping) in one of these regions, try a different Availability Zone in the region. You can verify that an Availability Zone supports io1 volumes by creating a 4 GiB io1 volume in that zone.

Throughput Optimized HDD (st1) Volumes

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.

Note

This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use gp2. For more information, see Inefficiency of Small Read/Writes on HDD.

Throughput Credits and Burst Performance

Like gp2, st1 uses a burst-bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it will be able to drive I/O at the burst level.

The following diagram shows the burst-bucket behavior for st1.

st1 burst bucket

Subject to throughput and throughput-credit caps, the available throughput of an st1 volume is expressed by the following formula:

(Volume size) x (Credit accumulation rate per TiB) = Baseline Throughput

For a 1 TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 500 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 40 MiB/s per TiB.

On volume sizes ranging from 0.5 to 16 TiB, baseline throughput varies from 20 MiB/s to a cap of 500 MiB/s, which is reached at 12.5 TiB because

            40 MiB/s
12.5 TiB x ---------- = 500 MiB/s
             1 TiB                                                                 

Burst throughput varies from 125 MiB/s to a cap of 500 MiB/s, which is reached at 2 TiB because

         250 MiB/s
2 TiB x ---------- = 500 MiB/s
          1 TiB                                                                 

The following table states the full range of base and burst throughput values for st1:

Volume Size (TiB)   ST1 Base Throughput (MiB/s)   ST1 Burst Throughput (MiB/s)
0.5                 20                            125
1                   40                            250
2                   80                            500
3                   120                           500
4                   160                           500
5                   200                           500
6                   240                           500
7                   280                           500
8                   320                           500
9                   360                           500
10                  400                           500
11                  440                           500
12                  480                           500
12.5                500                           500
13                  500                           500
14                  500                           500
15                  500                           500
16                  500                           500
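The st1 base and burst values above can be reproduced with a short sketch (illustrative helper name):

```python
def st1_throughput_mibs(size_tib):
    # Baseline fills the bucket at 40 MiB/s per TiB; burst spends it at
    # 250 MiB/s per TiB; both are capped at 500 MiB/s
    baseline = min(40 * size_tib, 500)
    burst = min(250 * size_tib, 500)
    return baseline, burst
```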

The following diagram plots the table values:

Comparing st1 base and burst throughput

Note

Throughput for an st1 volume is also capped at the baseline while a snapshot is being created.

Cold HDD (sc1) Volumes

Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than st1, sc1 is ideal for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage. Bootable sc1 volumes are not supported.

Note

This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use gp2. For more information, see Inefficiency of Small Read/Writes on HDD.

Throughput Credits and Burst Performance

Like gp2, sc1 uses a burst-bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it will be able to drive I/O at the burst level.

sc1 burst bucket

Subject to throughput and throughput-credit caps, the available throughput of an sc1 volume is expressed by the following formula:

(Volume size) x (Credit accumulation rate per TiB) = Baseline Throughput

For a 1 TiB sc1 volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 250 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 12 MiB/s per TiB.

On volume sizes ranging from 0.5 to 16 TiB, baseline throughput varies from 6 MiB/s to a maximum of 192 MiB/s, which is reached at 16 TiB because

           12 MiB/s
16 TiB x ---------- = 192 MiB/s
            1 TiB                                                                 

Burst throughput varies from 40 MiB/s to a cap of 250 MiB/s, which is reached at 3.125 TiB because

             250 MiB/s
3.125 TiB x ----------- = 250 MiB/s
              1 TiB                                                                 

The following table states the full range of base and burst throughput values for sc1:

Volume Size (TiB)   SC1 Base Throughput (MiB/s)   SC1 Burst Throughput (MiB/s)
0.5                 6                             40
1                   12                            80
2                   24                            160
3                   36                            240
3.125               37.5                          250
4                   48                            250
5                   60                            250
6                   72                            250
7                   84                            250
8                   96                            250
9                   108                           250
10                  120                           250
11                  132                           250
12                  144                           250
13                  156                           250
14                  168                           250
15                  180                           250
16                  192                           250
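As with st1, the sc1 values above follow from two linear rates and a cap (illustrative helper name):

```python
def sc1_throughput_mibs(size_tib):
    # Baseline fills the bucket at 12 MiB/s per TiB; burst spends it at
    # 80 MiB/s per TiB; burst is capped at 250 MiB/s, and baseline tops
    # out at 192 MiB/s at the 16 TiB maximum size
    baseline = min(12 * size_tib, 250)
    burst = min(80 * size_tib, 250)
    return baseline, burst
```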

The following diagram plots the table values:

Comparing sc1 base and burst throughput

Magnetic (standard)

Magnetic volumes are backed by magnetic drives and are suited for workloads where data is accessed infrequently, and scenarios where low-cost storage for small volume sizes is important. These volumes deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 1 GiB to 1 TiB.

Note

Magnetic is a Previous Generation Volume. For new applications, we recommend using one of the newer volume types. For more information, see Previous Generation Volumes.

Performance Considerations When Using HDD Volumes

For optimal throughput results using HDD volumes, plan your workloads with the following considerations in mind.

Throughput Optimized HDD vs. Cold HDD

The st1 and sc1 bucket sizes vary according to volume size, and a full bucket contains enough tokens for a full volume scan. However, larger st1 and sc1 volumes take longer for the volume scan to complete due to per-instance and per-volume throughput limits. Volumes attached to smaller instances are limited to the per-instance throughput rather than the st1 or sc1 throughput limits.

Both st1 and sc1 are designed for performance consistency of 90% of burst throughput 99% of the time. Non-compliant periods are approximately uniformly distributed, targeting 99% of expected total throughput each hour.

The following table shows ideal scan times for volumes of various size, assuming full buckets and sufficient instance throughput.

In general, scan times are expressed by this formula:

 Volume size
------------- = Scan time
 Throughput

For example, taking the performance consistency guarantees and other optimizations into account, an st1 customer with a 5 TiB volume can expect to complete a full volume scan in 2.91 to 3.27 hours.

   5 TiB            5 TiB
----------- = ------------------- = 10,486 s = 2.91 hours (optimal) 
 500 MiB/s     0.00047684 TiB/s


  2.91 hours
-------------- = 3.27 hours (minimum expected)
 (0.90)(0.99) <-- From expected performance of 90% of burst 99% of the time

Similarly, an sc1 customer with a 5 TiB volume can expect to complete a full volume scan in 5.83 to 6.54 hours.

      5 TiB
------------------- = 20972 s = 5.83 hours (optimal) 
 0.000238418 TiB/s


  5.83 hours
-------------- = 6.54 hours (minimum expected)
 (0.90)(0.99)
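Both worked examples can be reproduced with a short sketch (illustrative helper name; 1 TiB = 1,048,576 MiB):

```python
MIB_PER_TIB = 1024 * 1024

def scan_hours(size_tib, burst_mibs):
    # Optimal scan time at full burst throughput, and minimum expected time
    # after derating by the 90%-of-burst, 99%-of-the-time consistency target
    optimal = size_tib * MIB_PER_TIB / burst_mibs / 3600
    expected = optimal / (0.90 * 0.99)
    return optimal, expected
```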

Volume Size (TiB)   ST1 Scan Time with Burst (Hours)*   SC1 Scan Time with Burst (Hours)*
1                   1.17                                3.64
2                   1.17                                3.64
3                   1.75                                3.64
4                   2.33                                4.66
5                   2.91                                5.83
6                   3.50                                6.99
7                   4.08                                8.16
8                   4.66                                9.32
9                   5.24                                10.49
10                  5.83                                11.65
11                  6.41                                12.82
12                  6.99                                13.98
13                  7.57                                15.15
14                  8.16                                16.31
15                  8.74                                17.48
16                  9.32                                18.64

* These scan times assume an average queue depth (rounded to the nearest whole number) of four or more when performing 1 MiB of sequential I/O.

Therefore, if you have a throughput-oriented workload that needs to complete scans quickly (up to 500 MiB/s), or requires several full volume scans a day, use st1. If you are optimizing for cost, your data is relatively infrequently accessed, and you don't need more than 250 MiB/s of scanning performance, then use sc1.

Inefficiency of Small Read/Writes on HDD

The performance model for st1 and sc1 volumes is optimized for sequential I/Os, favoring high-throughput workloads, offering acceptable performance on workloads with mixed IOPS and throughput, and discouraging workloads with small, random I/O.

For example, an I/O request of 1 MiB or less counts as a 1 MiB I/O credit. However, if the I/Os are sequential, they are merged into 1 MiB I/O blocks and count only as a 1 MiB I/O credit.

Limitations on per-Instance Throughput

Throughput for st1 and sc1 volumes is always determined by the smaller of the following:

  • Throughput limits of the volume

  • Throughput limits of the instance

As for all Amazon EBS volumes, we recommend that you select an appropriate EBS-optimized EC2 instance in order to avoid network bottlenecks. For more information, see Amazon EBS-Optimized Instances.
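In other words, the realized throughput is simply the minimum of the two limits (illustrative helper name):

```python
def effective_throughput_mibs(volume_limit_mibs, instance_limit_mibs):
    # Realized throughput is bounded by whichever limit is smaller
    return min(volume_limit_mibs, instance_limit_mibs)

print(effective_throughput_mibs(500, 800))   # a 500 MiB/s st1 volume on an 800 MiB/s instance
```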