Memcached vs. Redis - Performance at Scale with Amazon ElastiCache


Amazon ElastiCache currently supports two different in-memory key-value engines. You can choose the engine you prefer when launching an ElastiCache cache cluster:

  • Memcached — A widely adopted in-memory key store, and historically the gold standard of web caching. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service. Memcached is also multithreaded, meaning it makes good use of larger Amazon EC2 instance sizes with multiple cores.

  • Redis — An increasingly popular open-source key-value store that supports more advanced data structures, such as sorted sets, hashes, and lists. Unlike Memcached, Redis has disk persistence built in, meaning that you can use it for long-lived data. Redis also supports replication, which can be used to achieve Multi-AZ redundancy, similar to Amazon RDS.
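Whichever engine you choose, the most common usage pattern is the same: cache-aside, where the application checks the cache before querying the database and populates the cache on a miss. A minimal Python sketch of the pattern follows; a plain dict stands in for the cache client, and `slow_database_query` is a hypothetical placeholder for your real data store (a production deployment would use a Memcached or Redis client library instead):

```python
# Cache-aside pattern: check the cache first, fall back to the
# database on a miss, then populate the cache for future reads.
# A plain dict stands in for the Memcached/Redis client here.

cache = {}

def slow_database_query(user_id):
    # Hypothetical placeholder for a real database call.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                       # cache hit
        return cache[key]
    record = slow_database_query(user_id)  # cache miss
    cache[key] = record                    # populate for next time
    return record
```

The second call for the same key is served from memory without touching the database, which is the offloading effect both engines are used for.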

Although Memcached and Redis appear similar on the surface, in that both are in-memory key stores, they are quite different in practice. Because of the replication and persistence features of Redis, ElastiCache manages Redis more like a relational database: ElastiCache Redis clusters are managed as stateful entities that include failover, similar to how Amazon RDS manages database failover.

Conversely, because Memcached is designed as a pure caching solution with no persistence, ElastiCache manages Memcached nodes as a pool that can grow and shrink, similar to an Amazon EC2 Auto Scaling group. Individual nodes are expendable, and ElastiCache provides additional capabilities here such as automatic node replacement and Auto Discovery.

When deciding between Memcached and Redis, here are a few questions to consider:

  • Is object caching your primary goal, for example to offload your database? If so, use Memcached.

  • Are you interested in as simple a caching model as possible? If so, use Memcached.

  • Are you planning on running large cache nodes, and do you require multithreaded performance that takes advantage of multiple cores? If so, use Memcached.

  • Do you want the ability to scale your cache horizontally as you grow? If so, use Memcached.

  • Does your app need to atomically increment or decrement counters? If so, use either Redis or Memcached.

  • Are you looking for more advanced data types, such as lists, hashes, bit arrays, HyperLogLogs, and sets? If so, use Redis.

  • Does sorting and ranking datasets in memory help you, such as with leaderboards? If so, use Redis.

  • Are publish and subscribe (pub/sub) capabilities of use to your application? If so, use Redis.

  • Is persistence of your key store important? If so, use Redis.

  • Do you want to run in multiple AWS Availability Zones (Multi-AZ) with failover? If so, use Redis.

  • Is geospatial support important to your applications? If so, use Redis.

  • Is encryption and compliance to standards, such as PCI DSS, HIPAA, and FedRAMP, required for your business? If so, use Redis.
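The atomic counter question above hinges on a guarantee both engines provide: increments applied concurrently by many clients are serialized server-side (Redis INCR, Memcached incr), so no updates are lost. The sketch below illustrates those semantics in plain Python, with a lock standing in for the server-side atomicity; the `Counter` class is an illustration, not either engine's actual implementation:

```python
import threading

# Sketch of the atomic increment both engines expose (Redis INCR,
# Memcached incr): concurrent increments never lose updates.
# A lock stands in for the cache server's single point of update.

class Counter:
    def __init__(self):
        self._values = {}
        self._lock = threading.Lock()

    def incr(self, key, amount=1):
        # Read-modify-write under the lock, so two clients can
        # never both read the same old value.
        with self._lock:
            self._values[key] = self._values.get(key, 0) + amount
            return self._values[key]

counter = Counter()
workers = [
    threading.Thread(target=lambda: [counter.incr("hits") for _ in range(1000)])
    for _ in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
# counter.incr("hits", 0) now returns 4000: no increments were lost.
```

Without that serialization, interleaved read-modify-write cycles would silently drop increments, which is why page-view and rate-limit counters are kept in the cache tier rather than in application memory.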

Although it's tempting to look at Redis as a more evolved Memcached due to its advanced data types and atomic operations, Memcached has a longer track record and the ability to leverage multiple CPU cores.

Because Memcached and Redis are so different in practice, we're going to address them separately in most of this paper. We will focus on using Memcached as an in-memory cache pool, and using Redis for advanced datasets, such as game leaderboards and activity streams.
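As a preview of the leaderboard use case, Redis sorted sets keep each member ordered by its score: ZADD assigns or updates a score, and ZREVRANGE returns members ranked from highest score down. The following Python sketch illustrates those semantics only; a dict stands in for the Redis server, and the function names mirror the commands rather than any client library:

```python
# Sketch of the Redis sorted-set semantics behind a game leaderboard:
# ZADD sets a member's score; ZREVRANGE returns members in descending
# score order. A dict stands in for the Redis server.

scores = {}

def zadd(member, score):
    scores[member] = score

def zrevrange(start, stop):
    # Highest score first, inclusive stop index, like ZREVRANGE.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[start:stop + 1]

zadd("alice", 3200)
zadd("bob", 4100)
zadd("carol", 2700)
# zrevrange(0, 1) -> ["bob", "alice"]  (the top two players)
```

Because Redis maintains this ordering server-side as scores change, a leaderboard query is a single cheap command rather than a full sort in the application, a point we return to later in the paper.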