
Selecting Your Node Size

This section helps you determine the node instance type you need for your scenarios. Because the two engines, Memcached and Redis, implement clusters differently, the engine you select affects the node size your application needs.

Selecting Your Memcached Node Size

Memcached clusters contain one or more nodes. Because of this, the memory needs of the cluster and the memory of a node are related, but not the same. You can attain your needed cluster memory capacity by having a few large nodes or many smaller nodes. Further, as your needs change, you can add or remove nodes from the cluster and thus pay only for what you need.

The total memory capacity of your cluster is calculated by multiplying the number of cache nodes in the cluster by the RAM capacity of each node. The capacity of each cache node is based on the cache node type.

The number of cache nodes in the cluster is a key factor in the availability of your cluster running Memcached. The failure of a single cache node can impact the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and the replacement is repopulated. You can reduce this potential availability impact by spreading your memory and compute capacity over a larger number of cache nodes, each with smaller capacity, rather than using a smaller number of high-capacity nodes.

In a scenario where you want to have 40 GB of cache memory, you can set it up in any of the following configurations:

  • 13 cache.t2.medium nodes with 3.22 GB of memory and 2 threads each = 41.86 GB and 26 threads.

     

  • 7 cache.m3.large nodes with 6.05 GB of memory and 2 threads each = 42.35 GB and 14 threads.

    7 cache.m4.large nodes with 6.42 GB of memory and 2 threads each = 44.94 GB and 14 threads.

     

  • 3 cache.r3.large nodes with 13.50 GB of memory and 2 threads each = 40.50 GB and 6 threads.

    3 cache.m4.xlarge nodes with 14.28 GB of memory and 4 threads each = 42.84 GB and 12 threads.

Comparing node options

Node type         Memory     Cores   Cost *    Nodes Needed   Total Memory   Total Cores   Monthly Cost †
cache.t2.medium   3.22 GB    2       $0.068    13             41.86 GB       26            $636.48
cache.m3.large    6.05 GB    2       $0.182    7              42.35 GB       14            $917.28
cache.m4.large    6.42 GB    2       $0.156    7              44.94 GB       14            $786.24
cache.r3.large    13.50 GB   2       $0.228    3              40.50 GB       6             $492.48
cache.m4.xlarge   14.28 GB   4       $0.311    3              42.84 GB       12            $671.76
* Hourly cost per node as of August 4, 2016.
† Monthly cost at 100% usage for 30 days (720 hours).

These options each provide similar memory capacity but different computational capacity and cost. To compare the costs of your specific options, see Amazon ElastiCache Pricing.
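
As a quick way to compare such options yourself, the following sketch computes the node count, total memory, total cores, and monthly cost for a target cache size. The node specifications and hourly prices are the August 4, 2016 figures from the table above; treat them as illustrative placeholders rather than current pricing.

    import math

    # Candidate node types: (memory in GB, cores, hourly price in USD).
    # Values are the August 4, 2016 figures from the table above; check the
    # Amazon ElastiCache pricing page for current numbers.
    NODE_TYPES = {
        "cache.t2.medium": (3.22, 2, 0.068),
        "cache.m3.large": (6.05, 2, 0.182),
        "cache.m4.large": (6.42, 2, 0.156),
        "cache.r3.large": (13.50, 2, 0.228),
        "cache.m4.xlarge": (14.28, 4, 0.311),
    }

    HOURS_PER_MONTH = 720  # 30 days at 100% usage

    def compare(target_gb):
        """Print how many nodes of each type cover target_gb of cache memory."""
        for name, (memory_gb, cores, hourly) in NODE_TYPES.items():
            nodes = math.ceil(target_gb / memory_gb)
            print(
                f"{name:16} {nodes:2} nodes  "
                f"{nodes * memory_gb:6.2f} GB  {nodes * cores:2} cores  "
                f"${nodes * hourly * HOURS_PER_MONTH:,.2f}/month"
            )

    compare(40)  # the 40 GB scenario described above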

For clusters running Memcached, some of the available memory on each cache node is used for connection overhead. For more information, see Memcached Connection Overhead.

Using multiple nodes requires spreading your keys across them. Each node has its own endpoint. For easy endpoint management, you can use the ElastiCache Auto Discovery feature, which enables client programs to automatically identify all of the nodes in a cache cluster. For more information, see Node Auto Discovery (Memcached).
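
For illustration only, here is a minimal sketch of a client that spreads keys across several node endpoints, using the third-party pymemcache library and made-up endpoint names. An Auto Discovery capable client can obtain this node list from the cluster's configuration endpoint instead of hard-coding it.

    # Minimal sketch: distribute keys across multiple Memcached node endpoints.
    # Requires the third-party pymemcache library; the hostnames below are
    # placeholders, not real ElastiCache endpoints.
    from pymemcache.client.hash import HashClient

    client = HashClient([
        ("mycluster.0001.cache.example.com", 11211),
        ("mycluster.0002.cache.example.com", 11211),
        ("mycluster.0003.cache.example.com", 11211),
    ])

    client.set("greeting", "hello")   # the key is hashed to one of the nodes
    print(client.get("greeting"))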

If you're unsure how much capacity you need, we recommend starting with one cache.m3.medium node for testing, and monitoring memory usage, CPU utilization, and the cache hit rate with the ElastiCache metrics that are published to CloudWatch. For more information on CloudWatch metrics for ElastiCache, see Monitoring Use with CloudWatch Metrics. For production and larger workloads, the R3 nodes provide the best performance and RAM cost value.
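
As a starting point for that monitoring, a sketch like the following (using boto3, with a placeholder cluster ID) pulls the average CPUUtilization for one node over the last hour; the same pattern works for the memory and hit-rate metrics.

    # Sketch: read an ElastiCache node's average CPU utilization from CloudWatch.
    # Assumes boto3 with credentials configured; the cluster ID is a placeholder.
    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/ElastiCache",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "CacheClusterId", "Value": "my-memcached-cluster"},
            {"Name": "CacheNodeId", "Value": "0001"},
        ],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,          # 5-minute datapoints
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f'{point["Average"]:.1f}%')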

If your cluster does not have the desired hit rate, you can easily add more nodes, thereby increasing the total available memory in your cluster.

If your cluster turns out to be CPU bound but has a sufficient hit rate, try setting up a new cluster with a cache node type that provides more compute power.

Selecting Your Redis Node Size

Answering the following questions will help you determine the minimum node type you need for your Redis implementation.

  • How much total memory do you need for your data?

     

    You can get a general estimate by taking the size of the items you want to cache and multiplying it by the number of items you want to keep in the cache at the same time. To get a reasonable estimate of item size, serialize your cache items and count the characters. If your cluster has multiple shards, divide the total data size by the number of shards to estimate how much data each shard must hold. (A worked sizing sketch follows this list.)

     

  • What version of Redis are you running?

     

    Redis versions prior to 2.8.22 require you to reserve more memory for failover, snapshot, synchronization, and replica-promotion operations. This requirement exists because you must have sufficient memory available for all of the writes that occur during these processes.

    Redis version 2.8.22 and later use a forkless save process that requires less available memory than the earlier process.

    For more information, see Ensuring You Have Sufficient Memory to Create a Redis Snapshot.

     

  • How write-heavy is your application?

     

    Write-heavy applications can require significantly more available memory (memory not used by data) when taking snapshots or failing over. Whenever the BGSAVE process runs (when taking a snapshot, when syncing a primary with a replica, when enabling the append-only file (AOF) feature, or when promoting a replica to primary if you have Multi-AZ with automatic failover enabled), you must have sufficient memory unused by data to accommodate all of the writes that occur during the BGSAVE process. The worst case is when all of your data is rewritten during the process, in which case you need a node instance size with twice as much memory as the data alone requires.

     

    For more detailed information, go to Ensuring You Have Sufficient Memory to Create a Redis Snapshot.

     

  • Will your implementation be a standalone Redis (cluster mode disabled) cluster or a Redis (cluster mode enabled) cluster with multiple shards?

     

    Redis (cluster mode disabled) cluster

    If you're implementing a Redis (cluster mode disabled) cluster, your node type must be able to accommodate all your data plus the necessary overhead as described in the previous bullet.

     

    For example, if you estimate the total size of all your items to be 12 GB, you can use a cache.m3.xlarge node with 13.3 GB of memory or a cache.r3.large node with 13.5 GB of memory. However, you may need more memory for BGSAVE operations. If your application is write heavy, double the memory requirement to at least 24 GB, meaning you should use either a cache.m3.2xlarge with 27.9 GB of memory or a cache.r3.xlarge with 28.4 GB of memory.

     

    Redis (cluster mode enabled) with multiple shards

    If you're implementing a Redis (cluster mode enabled) cluster with multiple shards, then the node type must be able to accommodate approximately (total bytes for data and overhead) / (number of shards) bytes of data per node.

     

    For example, if you estimate the total size of all your items to be 12 GB and you have 2 shards, each node must accommodate about 6 GB, so you can use a cache.m3.large node with 6.05 GB of memory (12 GB / 2). However, you may need more memory for BGSAVE operations. If your application is write heavy, double the memory requirement to at least 12 GB per shard, meaning you should use either a cache.m3.xlarge with 13.3 GB of memory or a cache.r3.large with 13.5 GB of memory.

     

    Currently you cannot add shards to a Redis (cluster mode enabled) cluster. Therefore, you may want to use a somewhat larger node type to accommodate anticipated growth.
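
To make the sizing arithmetic in the questions above concrete, here is a rough sketch that estimates per-node memory needs from a serialized sample item. The sample item, item count, and shard count are made-up inputs, and the factor of 2 for write-heavy workloads follows the BGSAVE guidance above.

    import json

    # Rough Redis sizing sketch. The sample item, item count, and shard count
    # below are illustrative assumptions; substitute your own values.
    sample_item = {"user_id": 1234567, "name": "example", "scores": [98, 87, 91]}
    item_size_bytes = len(json.dumps(sample_item))   # serialize, then count characters

    items_in_cache = 150_000_000                     # items kept in cache at one time
    total_data_gb = item_size_bytes * items_in_cache / (1024 ** 3)

    shards = 2                                       # use 1 for cluster mode disabled
    per_shard_gb = total_data_gb / shards

    # Write-heavy workloads: leave headroom for BGSAVE (worst case, double it).
    write_heavy = True
    required_per_node_gb = per_shard_gb * (2 if write_heavy else 1)

    print(f"~{total_data_gb:.1f} GB of data, ~{per_shard_gb:.1f} GB per shard")
    print(f"choose a node type with at least ~{required_per_node_gb:.1f} GB of memory")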

     

While your cluster is running, you can monitor the memory usage, processor utilization, cache hits, and cache misses metrics that are published to CloudWatch. If your cluster does not have the desired hit rate or you notice that keys are being evicted too often, you can choose a different cache node size with larger CPU and memory specifications.

When monitoring CPU usage, remember that Redis is single-threaded, so you need to multiply the reported CPU usage by the number of CPU cores to get the actual usage of the core Redis runs on. For example, a four-core node reporting 20% CPU utilization is actually running the one core that Redis uses at 80%.
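
As a trivial check of that arithmetic, the following sketch converts the host-level figure reported by CloudWatch into the utilization of the single core Redis runs on; the inputs are the values from the example above.

    def redis_core_utilization(reported_cpu_percent, core_count):
        """Convert host-level CPU utilization to the single Redis core's utilization."""
        return reported_cpu_percent * core_count

    # The example above: a four-core node reporting 20% is one core at 80%.
    print(redis_core_utilization(20, 4))  # 80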