Choosing between deployment options - Amazon ElastiCache for Redis

Choosing between deployment options

Amazon ElastiCache has two deployment options:

  • Serverless caching

  • Self-designed clusters

For a list of supported commands for both, see Supported and restricted Redis commands.

Serverless caching

Amazon ElastiCache Serverless simplifies cache creation and instantly scales to support customers' most demanding applications. With ElastiCache Serverless, you can create a highly available and scalable cache in less than a minute, eliminating the need to plan for, provision, and manage cache cluster capacity. ElastiCache Serverless automatically stores data redundantly across three Availability Zones and provides a 99.99% availability Service Level Agreement (SLA). Backups are cross-compatible: they can be exported to and restored from self-designed clusters.

Self-designed clusters

If you need fine-grained control over your ElastiCache for Redis cluster, you can design your own Redis cluster with ElastiCache. ElastiCache lets you operate a node-based cluster by choosing the node type, number of nodes, and node placement across AWS Availability Zones. Because ElastiCache is a fully managed service, it handles hardware provisioning, monitoring, node replacement, and software patching for your cluster. Self-designed clusters can be designed to provide up to a 99.99% availability SLA. Backups are cross-compatible: they can be exported to and restored from serverless caches.

Choosing between deployment options

Choose serverless caching if:

  • You are creating a cache for workloads that are either new or difficult to predict.

  • You have unpredictable application traffic.

  • You want the easiest way to get started with a cache.

Choose to design your own ElastiCache cluster if:

  • You are already running ElastiCache Serverless and want finer-grained control over the node type running Redis, the number of nodes, and the placement of nodes.

  • You expect your application traffic to be relatively predictable, and you want fine-grained control over performance, availability, and cost.

  • You can forecast your capacity requirements to control costs.

Comparing serverless caching and self-designed clusters

The following comparison lists each feature, followed by how it works with serverless caching and with self-designed clusters.

Cache setup

  • Serverless caching: Create a cache with just a name in under a minute.

  • Self-designed clusters: Provide fine-grained control over cache cluster design. You can choose the node type, number of nodes, and placement across AWS Availability Zones.

Supported ElastiCache for Redis version

  • Serverless caching: ElastiCache for Redis version 7.1 and higher.

  • Self-designed clusters: ElastiCache for Redis version 4.0 and higher.

Cluster Mode

  • Serverless caching: Operates Redis in cluster mode enabled only. Redis clients must support cluster mode enabled to connect to ElastiCache Serverless.

  • Self-designed clusters: Can be configured to operate in cluster mode enabled or cluster mode disabled.
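Because cluster mode is always enabled, a cluster-aware client must map each key to one of Redis's 16,384 hash slots. A minimal sketch of that computation, following the Redis cluster specification (CRC16/XMODEM of the key, honoring `{...}` hash tags) — clients such as redis-py do this for you, this is only to show the mechanism:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16 (XMODEM variant, polynomial 0x1021), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of 16384 hash slots, honoring {hash tag} syntax."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1 : end]
    return crc16_xmodem(key) % 16384

# Keys sharing a hash tag land in the same slot, so multi-key
# operations on them remain valid in cluster mode.
print(key_slot(b"{user1000}.following") == key_slot(b"{user1000}.followers"))  # True
```

Placing related keys under one hash tag is the standard way to keep multi-key commands working against a cluster-mode cache.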

Scaling

  • Serverless caching: Automatically scales both vertically and horizontally without any capacity management.

  • Self-designed clusters: Provide control over scaling, but require monitoring to make sure current capacity meets demand. You can scale vertically by increasing or decreasing the cache node size, and horizontally by adding new shards or more replicas to your shards. With the Auto Scaling feature, you can also scale on a schedule or based on metrics such as CPU and memory usage on the cache.
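For self-designed clusters, shard auto scaling is configured through Application Auto Scaling. A hedged sketch using boto3 — the replication group name and shard bounds are placeholders; the service namespace and scalable dimension shown are the ones Application Auto Scaling defines for ElastiCache shard (node group) counts:

```python
def shard_scaling_target(replication_group_id: str,
                         min_shards: int, max_shards: int) -> dict:
    """Build the Application Auto Scaling target for an ElastiCache
    replication group's shard (node group) count."""
    return {
        "ServiceNamespace": "elasticache",
        "ResourceId": f"replication-group/{replication_group_id}",
        "ScalableDimension": "elasticache:replication-group:NodeGroups",
        "MinCapacity": min_shards,
        "MaxCapacity": max_shards,
    }

# With AWS credentials configured, register the target:
#   import boto3
#   boto3.client("application-autoscaling").register_scalable_target(
#       **shard_scaling_target("my-redis-cluster", min_shards=2, max_shards=10))
```

A target-tracking or scheduled scaling policy can then be attached to this target to scale on metrics or on a schedule, as the feature description above notes.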

Client connection

  • Serverless caching: Clients connect to a single endpoint. This enables the underlying cache node topology (scaling, replacements, and upgrades) to change without disconnecting the client.

  • Self-designed clusters: Clients connect to each individual cache node. If a node is replaced, the client rediscovers the cluster topology and re-establishes connections.

Configurability

  • Serverless caching: No fine-grained configuration available. You can configure basic settings, including which subnets can access the cache, whether automatic backups are turned on, and maximum cache usage limits.

  • Self-designed clusters: Provide fine-grained configuration options through parameter groups. For a table of these parameter values by node type, see Redis node-type specific parameters.

Multi-AZ

  • Serverless caching: Data is replicated asynchronously across multiple Availability Zones for higher availability and improved read latency.

  • Self-designed clusters: Provide an option to design the cluster in a single Availability Zone or across multiple Availability Zones (AZs). For Multi-AZ clusters, data is replicated asynchronously across multiple Availability Zones for higher availability and improved read latency.

Encryption at rest

  • Serverless caching: Always enabled. You can use an AWS managed key or a customer managed key in AWS KMS.

  • Self-designed clusters: Option to enable or disable encryption at rest. When enabled, you can use an AWS managed key or a customer managed key in AWS KMS.

Encryption in transit (TLS)

  • Serverless caching: Always enabled. Clients must support TLS connectivity.

  • Self-designed clusters: Option to enable or disable.
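For a self-designed cluster, both encryption settings are chosen when the cluster is created. A sketch of the relevant CreateReplicationGroup parameters via boto3 — the group name, node type, and KMS key ARN are placeholders:

```python
def encrypted_cluster_params(group_id: str, kms_key_arn: str) -> dict:
    """Parameters that enable at-rest and in-transit encryption
    on a self-designed cluster (replication group) at creation time."""
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "cluster with encryption enabled",
        "Engine": "redis",
        "CacheNodeType": "cache.r6g.large",  # placeholder node type
        "AtRestEncryptionEnabled": True,     # cannot be turned on after creation
        "KmsKeyId": kms_key_arn,             # omit to use the AWS managed key
        "TransitEncryptionEnabled": True,    # clients must then connect over TLS
    }

# With AWS credentials configured:
#   import boto3
#   boto3.client("elasticache").create_replication_group(
#       **encrypted_cluster_params("my-redis", "arn:aws:kms:..."))
```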

Backups

  • Serverless caching: Supports automatic and manual backups with no performance impact. Backups are cross-compatible and can be restored into an ElastiCache Serverless cache or a self-designed cluster.

  • Self-designed clusters: Support automatic and manual backups. Clusters may see some performance impact depending on the available reserved memory. For more information, see Managing Reserved Memory. Backups are cross-compatible and can be restored into an ElastiCache Serverless cache or a self-designed cluster.
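Cross-compatibility means a snapshot taken from a self-designed cluster can seed a new serverless cache, and vice versa. A sketch with boto3 — the cluster, snapshot, and cache names are placeholders, and the parameter names are those of the ElastiCache CreateSnapshot and CreateServerlessCache APIs:

```python
def snapshot_params(cluster_id: str, snapshot_name: str) -> dict:
    """Manual backup of a self-designed cluster (replication group)."""
    return {"ReplicationGroupId": cluster_id, "SnapshotName": snapshot_name}

def serverless_restore_params(cache_name: str, snapshot_arn: str) -> dict:
    """New serverless cache seeded from an existing backup."""
    return {
        "ServerlessCacheName": cache_name,
        "Engine": "redis",
        "SnapshotArnsToRestore": [snapshot_arn],
    }

# With AWS credentials configured:
#   import boto3
#   ec = boto3.client("elasticache")
#   ec.create_snapshot(**snapshot_params("my-redis", "my-backup"))
#   ec.create_serverless_cache(**serverless_restore_params(
#       "my-serverless-cache", "arn:aws:elasticache:...:snapshot:my-backup"))
```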

Monitoring

  • Serverless caching: Supports cache-level metrics, including cache hit rate, cache miss rate, data size, and ECPUs consumed. ElastiCache Serverless also sends events using EventBridge when significant events happen on your cache; you can monitor, ingest, transform, and act on these events using Amazon EventBridge. For more information, see Serverless cache events.

  • Self-designed clusters: Emit metrics at the node level, including both host-level metrics and cache metrics, and emit SNS notifications for significant events. See Metrics for Redis.
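To act on serverless cache events, you can match them with an EventBridge rule. A sketch of a minimal event pattern filtering on the ElastiCache event source — the rule name is a placeholder, and you would typically also attach a target (for example, an SNS topic or Lambda function):

```python
import json

def elasticache_event_pattern() -> str:
    """EventBridge event pattern matching events emitted by ElastiCache.
    Filtering only on source is the broadest match; narrow it with
    detail-type or detail fields based on the events you observe."""
    return json.dumps({"source": ["aws.elasticache"]})

# With AWS credentials configured:
#   import boto3
#   boto3.client("events").put_rule(
#       Name="elasticache-cache-events",
#       EventPattern=elasticache_event_pattern())
```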

Availability

  • Serverless caching: 99.99% availability Service Level Agreement (SLA).

  • Self-designed clusters: Can be designed to achieve up to a 99.99% availability Service Level Agreement (SLA), depending on the configuration.

Software upgrades and patching

  • Serverless caching: Automatically upgrades cache software to the latest minor and patch version without application impact. You receive a notification for major version upgrades and can upgrade to the latest major version when you choose.

  • Self-designed clusters: Offer customer-initiated self-service for minor, patch, and major version upgrades. Managed updates are applied automatically during customer-defined maintenance windows, and you can also apply a minor or patch version upgrade on demand.

Global Data Store

  • Serverless caching: Not supported.

  • Self-designed clusters: Support Global Data Store, which enables cross-Region replication with single-Region writes and multi-Region reads.

Data Tiering

  • Serverless caching: Not supported.

  • Self-designed clusters: Clusters designed using nodes from the r6gd family tier data between memory and local SSD (solid state drive) storage. Data tiering provides a price-performance option for Redis workloads by using lower-cost SSDs in each cluster node in addition to storing data in memory.

Pricing model

  • Serverless caching: Pay-per-use, based on data stored in GB-hours and requests in ElastiCache Processing Units (ECPUs). See pricing details here.

  • Self-designed clusters: Pay-per-hour, based on cache node usage. See pricing details here.
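The serverless pay-per-use model can be estimated from its two metered dimensions. A back-of-the-envelope sketch — the rates below are invented placeholders, not real AWS prices; always take actual rates from the ElastiCache pricing page:

```python
def serverless_monthly_estimate(avg_gb_stored: float,
                                monthly_ecpus: float,
                                gb_hour_rate: float,
                                ecpu_rate: float,
                                hours: float = 730.0) -> float:
    """Estimate a monthly serverless bill from the two metered
    dimensions: data stored (GB-hours) and requests (ECPUs)."""
    storage_cost = avg_gb_stored * hours * gb_hour_rate
    request_cost = monthly_ecpus * ecpu_rate
    return storage_cost + request_cost

# Hypothetical rates, for illustration only (NOT real prices):
estimate = serverless_monthly_estimate(
    avg_gb_stored=5.0, monthly_ecpus=50_000_000,
    gb_hour_rate=0.10, ecpu_rate=0.0000025)
```

This kind of estimate is mainly useful for the decision above: if both dimensions are steady and predictable, a self-designed cluster's pay-per-hour node pricing may be easier to forecast.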
