Amazon EFS performance - Amazon Elastic File System

The following sections provide an overview of Amazon EFS performance, and describe how your file system configuration impacts key performance dimensions. We also provide some important tips and recommendations for optimizing the performance of your file system.

Performance summary

File system performance is typically measured by using the dimensions of latency, throughput, and Input/Output operations per second (IOPS). Amazon EFS performance across these dimensions depends on your file system's configuration. The following configurations impact the performance of an Amazon EFS file system:

  • Storage class – EFS One Zone or EFS Standard

  • Performance mode – General Purpose or Max I/O

  • Throughput mode – Elastic, Provisioned, or Bursting

The following table illustrates Amazon EFS file system performance for the available combinations of storage class and performance mode settings.

File system performance for storage class and performance mode combinations

One Zone storage and General Purpose performance mode
  • Read latency¹ – as low as 250 microseconds (µs)
  • Write latency¹ – as low as 1.6 milliseconds (ms)
  • Maximum read IOPS – 35,000
  • Maximum write IOPS – 7,000
  • Maximum per-file-system read throughput² – 3 – 10 gibibytes per second (GiBps)
  • Maximum per-file-system write throughput² – 1 – 3 GiBps
  • Maximum per-client read/write throughput – 500 mebibytes per second (MiBps)

Standard storage and General Purpose performance mode
  • Read latency¹ – as low as 250 µs
  • Write latency¹ – as low as 2.7 ms
  • Maximum read IOPS – 55,000
  • Maximum write IOPS – 25,000
  • Maximum per-file-system read throughput² – 3 – 10 GiBps
  • Maximum per-file-system write throughput² – 1 – 3 GiBps
  • Maximum per-client read/write throughput – 500 MiBps


  1. The latency performance is available for file systems and mount targets created on or after December 17, 2022 in all AWS Regions where Amazon EFS is available. To achieve the indicated latency performance on a file system created prior to this date, you need to delete and recreate the mount targets associated with the file system.

    Latencies for file data reads and writes to the cost-optimized storage classes (Standard-IA and One Zone-IA) are low double-digit milliseconds.

  2. Maximum read and write throughput depend on the AWS Region and on the file system's throughput mode. Throughput in excess of an AWS Region's maximum throughput requires a throughput quota increase. Any request for additional throughput is considered on a case-by-case basis by the Amazon EFS service team. Approval might depend on your type of workload. To learn more about requesting quota increases, see Amazon EFS quotas and limits.

Storage classes and performance

Amazon EFS uses the following storage classes:

  • EFS One Zone storage classes – EFS One Zone and EFS One Zone-Infrequent Access (EFS One Zone-IA). The EFS One Zone storage classes replicate data within a single Availability Zone.

  • EFS Standard storage classes – EFS Standard and EFS Standard-IA. The EFS Standard storage classes replicate data across multiple Availability Zones (Multi-AZ).

First-byte latency when reading from or writing to either of the IA storage classes is higher than that for the EFS Standard or EFS One Zone storage classes.

For more information about EFS storage classes, see EFS storage classes.

Performance modes

Amazon EFS offers two performance modes, General Purpose and Max I/O:

  • General Purpose mode (recommended) supports up to 55,000 IOPS, has the lowest per-operation latency, and is the recommended performance mode for file systems. File systems with EFS One Zone storage classes always use General Purpose performance mode. For file systems with EFS Standard storage classes, you can use either the default General Purpose performance mode or the Max I/O performance mode.

  • Max I/O mode is designed for highly parallelized workloads that can tolerate higher latencies than the General Purpose mode. Max I/O mode is not supported for file systems using EFS One Zone storage class or those using Elastic throughput mode.


You cannot change the performance mode for a file system after the file system is created.

We recommend using General Purpose performance mode for the vast majority of applications. If you are not sure which performance mode to use, choose the General Purpose performance mode. To help ensure that your workload stays within the IOPS limit available to file systems using General Purpose mode, you can monitor the PercentIOLimit CloudWatch metric. For more information, see Amazon CloudWatch metrics for Amazon EFS.

Applications can scale their IOPS elastically up to the limit associated with the performance mode. You are not billed separately for IOPS; they are included in a file system's throughput accounting. Every Network File System (NFS) request is accounted for as 4 kilobyte (KB) of throughput, or its actual request and response size, whichever is larger.
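As an illustration of this metering rule, the following sketch (not an official billing formula) computes the metered size of a single NFS request:

```shell
# Illustrative sketch: Amazon EFS meters every NFS request as the
# larger of 4 KB or its actual request-plus-response size.
metered_bytes() {
  local actual_bytes=$1
  local minimum_bytes=4096
  if [ "$actual_bytes" -lt "$minimum_bytes" ]; then
    echo "$minimum_bytes"
  else
    echo "$actual_bytes"
  fi
}

metered_bytes 1024      # a 1 KB request is metered as 4096 bytes
metered_bytes 1048576   # a 1 MiB request is metered at its actual size
```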

Throughput modes

A file system's throughput mode determines the throughput available to your file system. Amazon EFS offers three throughput modes: Elastic, Provisioned, and Bursting. Read throughput is discounted to allow you to drive higher read throughput than write throughput. The maximum throughput available with each throughput mode depends on the AWS Region. For more information about the maximum file system throughput in the different regions, see Amazon EFS quotas and limits.

Your file system can achieve a combined 100% of its read and write throughput. For example, if your file system is using 33% of its read throughput limit, it can simultaneously achieve up to 67% of its write throughput limit. You can monitor your file system's throughput usage in the Throughput utilization (%) graph on the File System Detail page of the console. For more information, see Using CloudWatch metrics to monitor throughput performance.
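As a quick sketch of this combined accounting (illustrative only), the write headroom remaining at a given read utilization is simply the unused share of the combined limit:

```shell
# Illustrative: read and write utilization share a combined 100% cap,
# so write headroom is whatever share reads are not using.
remaining_write_pct() {
  local read_pct=$1
  echo $(( 100 - read_pct ))
}

remaining_write_pct 33   # a file system at 33% read utilization can
                         # still drive up to 67% of its write limit
```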

Choosing the correct throughput mode for a file system

Choosing the correct throughput mode for your file system depends on your workload's performance requirements.

  • Elastic Throughput (Recommended) – Use the default Elastic Throughput when you have spiky or unpredictable workloads and performance requirements that are difficult to forecast, or when your application drives throughput at an average-to-peak ratio of 5% or less. For more information, see Elastic Throughput mode.

  • Provisioned Throughput – Use Provisioned Throughput if you know your workload's performance requirements, or when your application drives throughput at an average-to-peak ratio of 5% or more. For more information, see Provisioned Throughput mode.

  • Bursting Throughput – Use Bursting Throughput when you want throughput that scales with the amount of storage in your file system.

    If, after using Bursting Throughput mode, you find that your application is throughput-constrained (for example, it uses more than 80% of the permitted throughput or you have used all of your burst credits), then you should use either Elastic or Provisioned Throughput mode. For more information, see Bursting Throughput mode.

You can use Amazon CloudWatch to determine your workload's average-to-peak ratio by comparing the MeteredIOBytes metric to the PermittedThroughput metric. For more information about Amazon EFS metrics, see Amazon CloudWatch metrics for Amazon EFS.
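For example, you can retrieve these statistics with the AWS CLI (a sketch; the file system ID and the time range are placeholders to replace with your own):

```shell
# Sketch: fetch MeteredIOBytes at 1-hour resolution for one day.
# Comparing Average to Maximum across the window approximates the
# workload's average-to-peak throughput ratio.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name MeteredIOBytes \
  --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
  --statistics Average Maximum \
  --period 3600 \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z
```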

Elastic Throughput mode

For file systems that are using Elastic Throughput, Amazon EFS automatically scales throughput performance up or down to meet the needs of your workload activity. Elastic Throughput is the best throughput mode for spiky or unpredictable workloads with performance requirements that are difficult to forecast, or for applications that drive throughput at 5% or less of the peak throughput on average (the average-to-peak ratio).

Because throughput performance for file systems with Elastic Throughput scales automatically, you don't need to specify or provision the throughput capacity to meet your application needs. You pay only for the amount of metadata and data read or written, and you don't accrue or consume burst credits while in Elastic Throughput mode.


Elastic Throughput mode is available only for file systems that are configured with the General Purpose performance mode.

For information about per-Region Elastic Throughput limits, see Amazon EFS quotas that you can increase.

Provisioned Throughput mode

With Provisioned Throughput mode, you specify a level of throughput that the file system can drive independent of the file system's size or burst credit balance. Use Provisioned Throughput if you know your workload's performance requirements, or if your application drives throughput at an average-to-peak ratio of 5% or more.

For file systems using Provisioned Throughput, you are charged for the amount of throughput enabled for the file system. The throughput amount billed in a month is based on the throughput provisioned in excess of your file system’s included baseline throughput from Standard storage, up to the prevailing Bursting baseline throughput limits in the AWS Region.

If the file system’s baseline throughput exceeds the Provisioned Throughput amount, then it automatically uses the Bursting Throughput allowed for the file system (up to the prevailing Bursting baseline throughput limits in that AWS Region).

For information about per-Region Provisioned Throughput limits, see Amazon EFS quotas that you can increase.

Bursting Throughput mode

Bursting Throughput mode is recommended for workloads that require throughput that scales with the amount of storage in your file system. In Bursting Throughput mode, the base throughput is proportionate to the file system's size in the EFS Standard storage class, at a rate of 50 KiBps per each GiB of storage. Burst credits accrue when the file system consumes below its base throughput rate, and are deducted when throughput exceeds the base rate.

When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.
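These rates can be sketched as a small calculation (illustrative; it applies the per-TiB rates and minimums described above, working in KiBps to avoid rounding, and ignores the Region-level cap):

```shell
# Illustrative Bursting Throughput rates for a given file system size.
# Baseline: 50 KiBps per GiB (50 MiBps per TiB), minimum 1 MiBps.
# Burst:    100 KiBps per GiB (100 MiBps per TiB), minimum 100 MiBps.
bursting_rates() {
  local size_gib=$1
  local base_kibps=$(( size_gib * 50 ))
  local burst_kibps=$(( size_gib * 100 ))
  if [ "$base_kibps" -lt 1024 ]; then base_kibps=1024; fi        # 1 MiBps floor
  if [ "$burst_kibps" -lt 102400 ]; then burst_kibps=102400; fi  # 100 MiBps floor
  echo "${base_kibps} ${burst_kibps}"
}

bursting_rates 100     # 100 GiB: 5000 KiBps (~5 MiBps) base, 100 MiBps burst
bursting_rates 10240   # 10 TiB: 500 MiBps base, 1000 MiBps (~1 GiBps) burst
```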

For information about per-Region Bursting Throughput limits, see General resource quotas that cannot be changed.

Understanding Amazon EFS burst credits

With Bursting Throughput, each file system earns burst credits over time at a baseline rate that is determined by the amount of data stored in the EFS Standard or EFS One Zone storage class. The baseline rate is 50 MiBps per tebibyte (TiB) of storage (equivalent to 50 KiBps per GiB of storage). Amazon EFS meters read operations at one-third the rate of write operations, permitting the file system to drive a baseline rate of up to 150 KiBps per GiB of read throughput, or 50 KiBps per GiB of write throughput.

A file system can drive throughput at its baseline metered rate continuously. A file system accumulates burst credits whenever it is inactive or driving throughput below its baseline metered rate. Accumulated burst credits give the file system the ability to drive throughput above its baseline rate.

For example, a file system with 100 GiB of metered data in EFS Standard storage has a baseline throughput of 5 MiBps. Over a 24-hour period of inactivity, the file system earns 432,000 MiB worth of credits (5 MiBps × 86,400 seconds = 432,000 MiB), which it can use to burst at 100 MiBps for 72 minutes (432,000 MiB ÷ 100 MiBps = 4,320 seconds, or 72 minutes).
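The arithmetic in this example can be verified directly (a sketch using the rounded 5 MiBps baseline from the example above):

```shell
# Sketch of the burst-credit example: a 100 GiB file system with a
# 5 MiBps baseline that sits idle for 24 hours.
baseline_mibps=5
idle_seconds=$(( 24 * 60 * 60 ))                  # 86,400 seconds
credits_mib=$(( baseline_mibps * idle_seconds ))  # 432,000 MiB of credits

burst_mibps=100
burst_seconds=$(( credits_mib / burst_mibps ))    # 4,320 seconds
burst_minutes=$(( burst_seconds / 60 ))           # 72 minutes

echo "credits=${credits_mib} MiB, burst window=${burst_minutes} minutes"
```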

File systems larger than 1 TiB can always burst for up to 50 percent of the time if they are inactive for the remaining 50 percent of the time.

The following table provides examples of bursting behavior.

File system size Burst throughput Baseline throughput
100 GiB of metered data in Standard storage
  • Burst to 300 MiBps read-only for up to 72 minutes per day, or

  • Burst to 100 MiBps write-only for up to 72 minutes per day

  • Drive up to 15 MiBps read-only continuously

  • Drive up to 5 MiBps write-only continuously

1 TiB of metered data in Standard storage
  • Burst to 300 MiBps read-only for 12 hours per day, or

  • Burst to 100 MiBps write-only for 12 hours per day

  • Drive 150 MiBps read-only continuously

  • Drive 50 MiBps write-only continuously

10 TiB of metered data in Standard storage
  • Burst to 3 GiBps read-only for 12 hours per day, or

  • Burst to 1 GiBps write-only for 12 hours per day

  • Drive 1.5 GiBps read-only continuously

  • Drive 500 MiBps write-only continuously

Generally, larger file systems
  • Burst to 300 MiBps read-only per TiB of storage for 12 hours per day, or

  • Burst to 100 MiBps write-only per TiB of storage for 12 hours per day

  • Drive 150 MiBps read-only per TiB of storage continuously

  • Drive 50 MiBps write-only per TiB of storage continuously


Amazon EFS provides a minimum metered throughput of 1 MiBps to all file systems, even if the baseline rate is lower.

The file system size used to determine the baseline and burst rates is the ValueInStandard metered size available through the DescribeFileSystems API operation.
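For example, you can read this value with the AWS CLI (a sketch; the file system ID is a placeholder):

```shell
# Sketch: retrieve the metered size (in bytes) of data in Standard
# storage, which EFS uses to compute baseline and burst rates.
aws efs describe-file-systems \
  --file-system-id fs-0123456789abcdef0 \
  --query 'FileSystems[0].SizeInBytes.ValueInStandard' \
  --output text
```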

File systems can earn credits up to a maximum credit balance of 2.1 TiB for file systems smaller than 1 TiB, or 2.1 TiB per TiB stored for file systems larger than 1 TiB. This behavior means that file systems can accumulate enough credits to burst for up to 12 hours continuously.

Restrictions on switching throughput modes and changing provisioned amount

You can switch an existing file system's throughput mode and change the throughput amount. However, after switching the throughput mode to Provisioned Throughput or changing the provisioned throughput amount, the following actions are restricted for a 24-hour period:

  • Switching from Provisioned mode to Elastic or Bursting mode.

  • Decreasing the provisioned throughput amount.
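For example, switching a file system to Provisioned Throughput looks like the following with the AWS CLI (a sketch; the file system ID and the 256 MiBps value are placeholders):

```shell
# Sketch: switch to Provisioned Throughput at 256 MiBps. After this
# change, switching back to Elastic or Bursting mode, or lowering the
# provisioned amount, is blocked for 24 hours.
aws efs update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 256
```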

Amazon EFS performance tips

When using Amazon EFS, keep the following performance tips in mind.

Average I/O size

The distributed nature of Amazon EFS enables high levels of availability, durability, and scalability. This distributed architecture results in a small latency overhead for each file operation. Because of this per-operation latency, overall throughput generally increases as the average I/O size increases, because the overhead is amortized over a larger amount of data.

Request model

If you enable asynchronous writes to your file system, pending write operations are buffered on the Amazon EC2 instance before they're written to Amazon EFS asynchronously. Asynchronous writes typically have lower latencies. When performing asynchronous writes, the kernel uses additional memory for caching.

A file system that has enabled synchronous writes, or one that opens files using an option that bypasses the cache (for example, O_DIRECT), issues synchronous requests to Amazon EFS. Every operation goes through a round trip between the client and Amazon EFS.


Your chosen request model has tradeoffs in consistency (if you're using multiple Amazon EC2 instances) and speed. Using synchronous writes provides increased data consistency by completing each write request transaction before processing the next request. Using asynchronous writes provides increased throughput by buffering pending write operations.
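One way to observe this tradeoff is with dd against a mounted file system (a rough sketch; /mnt/efs is a hypothetical mount point, and timings will vary with your setup):

```shell
# Buffered (asynchronous) writes: data is cached on the client and
# flushed to EFS in the background; conv=fsync forces a final flush so
# the measured time includes the actual transfer.
time dd if=/dev/zero of=/mnt/efs/async-test bs=1M count=128 conv=fsync

# Direct (synchronous) writes: oflag=direct bypasses the client cache,
# so every write is a round trip to EFS and per-operation latency
# dominates.
time dd if=/dev/zero of=/mnt/efs/sync-test bs=1M count=128 oflag=direct
```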

NFS client mount settings

Verify that you're using the recommended mount options as outlined in Mounting EFS file systems and in Additional mounting considerations.

When mounting your file systems on Amazon EC2 instances, Amazon EFS supports the Network File System version 4.0 and 4.1 (NFSv4) protocols. NFSv4.1 provides better performance for parallel small-file read operations (greater than 10,000 files per second) compared to NFSv4.0 (less than 1,000 files per second). For Amazon EC2 macOS instances running macOS Big Sur, only NFSv4.0 is supported.

Don't use the following mount options:

  • noac, actimeo=0, acregmax=0, acdirmax=0 – These options disable the attribute cache, which has a very large performance impact.

  • lookupcache=pos, lookupcache=none – These options disable the file name lookup cache, which has a very large impact on performance.

  • fsc – This option enables local file caching, but does not change NFS cache coherency, and does not reduce latencies.


When you mount your file system, consider increasing the size of the read and write buffers for your NFS client to 1 MB.
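For example, when mounting with the stock Linux NFS client instead of the Amazon EFS mount helper, the buffer sizes are controlled by the rsize and wsize mount options (a sketch; the file system DNS name and mount point are placeholders):

```shell
# Sketch: mount an EFS file system with 1 MiB (1048576-byte) read and
# write buffers, using the mount options recommended for EFS.
sudo mount -t nfs \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```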

Optimizing small-file performance

You can improve small-file performance by minimizing file reopens, increasing parallelism, and bundling reference files where possible.

  • Minimize the number of round trips to the server.

    Don't unnecessarily close files if you will need them later in a workflow. Keeping file descriptors open enables direct access to the local copy in the cache. File open, close, and metadata operations generally cannot be made asynchronously or through a pipeline.

    When reading or writing small files, the two additional round trips (open and close) are significant.

    Each round trip (file open, file close) can take as much time as reading or writing megabytes of bulk data. It's more efficient to open an input or output file once, at the beginning of your compute job, and hold it open for the entire length of the job.

  • Use parallelism to reduce the impact of round-trip times.

  • Bundle reference files in a .zip file. Some applications use a large set of small, mostly read-only reference files. Bundling these in a .zip file allows you to read many files with one open-close round trip.

    The .zip format allows for random access to individual files.
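As a sketch of this approach (the file names are hypothetical, and the zip and unzip utilities are assumed to be installed), a set of small reference files can be bundled once and then read individually without per-file open/close round trips to the server:

```shell
# Bundle a directory of small, read-only reference files into a single
# archive stored on the file system.
zip -qr reference.zip reference-data/

# Random access: stream one member to stdout without unpacking the
# archive. Only the single .zip file is opened over NFS.
unzip -p reference.zip reference-data/lookup-table.csv
```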

Optimizing directory performance

When performing a listing (ls) on very large directories (over 100,000 files) that are being modified concurrently, Linux NFS clients can hang without returning a response. This issue is fixed in Linux kernel 5.11, and the fix has been backported to Amazon Linux 2 kernels 4.14, 5.4, and 5.10.

We recommend keeping the number of directories on your file system to less than 10,000, if possible. Use nested subdirectories as much as possible.

When listing a directory, avoid getting file attributes if they are not required, because they are not stored in the directory itself.

Optimizing the NFS read_ahead_kb size

The NFS read_ahead_kb attribute defines the number of kilobytes for the Linux kernel to read ahead or prefetch during a sequential read operation.

For Linux kernel versions prior to 5.4.*, the read_ahead_kb value is set by multiplying NFS_MAX_READAHEAD by the value for rsize (the client configured read buffer size set in the mount options). When using the recommended mount options, this formula sets read_ahead_kb to 15 MB.


Starting with Linux kernel versions 5.4.*, the Linux NFS client uses a default read_ahead_kb value of 128 KB. We recommend increasing this value to 15 MB.

The Amazon EFS mount helper that is available in amazon-efs-utils version 1.33.2 and later automatically modifies the read_ahead_kb value to equal 15 * rsize, or 15 MB, after mounting the file system.

For Linux kernels 5.4 or later, if you do not use the mount helper to mount your file systems, consider manually setting read_ahead_kb to 15 MB for improved performance. After mounting the file system, you can reset the read_ahead_kb value by using the following command. Before using this command, replace the following values:

  • Replace read-ahead-value-kb with the desired size in kilobytes.

  • Replace efs-mount-point with the file system's mount point.

device_number=$(stat -c '%d' efs-mount-point)
((major = ($device_number & 0xFFF00) >> 8))
((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
sudo bash -c "echo read-ahead-value-kb > /sys/class/bdi/$major:$minor/read_ahead_kb"

The following example sets the read_ahead_kb size to 15 MB.

device_number=$(stat -c '%d' efs)
((major = ($device_number & 0xFFF00) >> 8))
((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
sudo bash -c "echo 15000 > /sys/class/bdi/$major:$minor/read_ahead_kb"