Caching

AWS AppSync’s server-side data caching capabilities reduce the need to directly access data sources by making data available in a high-speed in-memory cache, improving performance and decreasing latency. To take advantage of server-side caching in your AppSync API, refer to this section to define the desired behavior.

AWS AppSync hosts Amazon ElastiCache Redis instances in the AppSync service accounts, in the same AWS Region as your AppSync API.

The following instance types are available:

  • small: 1 vCPU, 1.5 GiB RAM, low to moderate network performance

  • medium: 2 vCPU, 3 GiB RAM, low to moderate network performance

  • large: 2 vCPU, 12.3 GiB RAM, up to 10 Gigabit network performance

  • xlarge: 4 vCPU, 25.05 GiB RAM, up to 10 Gigabit network performance

  • 2xlarge: 8 vCPU, 50.47 GiB RAM, up to 10 Gigabit network performance

  • 4xlarge: 16 vCPU, 101.38 GiB RAM, up to 10 Gigabit network performance

  • 8xlarge: 32 vCPU, 203.26 GiB RAM, 10 Gigabit network performance; not available in all Regions

  • 12xlarge: 48 vCPU, 317.77 GiB RAM, 10 Gigabit network performance

Note

Historically, you specified a specific instance type (such as t2.medium). As of July 2020, these legacy instance types remain available, but their use is deprecated and discouraged. We recommend that you use the generic instance types described here.
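
As an illustration, the following is a minimal sketch of enabling a cache with one of the generic instance types using the AWS SDK for Python (boto3). The API ID is a hypothetical placeholder, and the caching behavior and TTL parameters shown here are described in the sections that follow.

```python
import boto3

appsync = boto3.client("appsync")

# Create a server-side cache for an existing AppSync API.
# "LARGE" is one of the generic instance types listed above;
# the API ID is a hypothetical placeholder.
appsync.create_api_cache(
    apiId="example-api-id",
    type="LARGE",                               # generic instance type
    apiCachingBehavior="FULL_REQUEST_CACHING",  # caching behavior, described below
    ttl=3600,                                   # cache TTL in seconds, described below
)
```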

The following are the behaviors related to caching:

None

No server-side caching.

Full request caching

If the data is not in the cache, it is retrieved from the data source and populates the cache until the TTL expires. All subsequent requests to your API are returned from the cache, which means data sources aren’t contacted directly unless the TTL expires. In this setting, the caching key is composed of the contents of the $context.arguments and $context.identity maps.

Per-resolver caching

With this setting, each resolver must be explicitly opted in for it to cache responses. A TTL and caching keys can be specified on the resolver. Caching keys are values from the $context.arguments, $context.source, and $context.identity maps, for example $context.arguments.id or $context.arguments.InputType.id, $context.source.id, and $context.identity.sub or $context.identity.claims.username. The TTL value is mandatory, but the caching keys are optional. If no caching keys are specified, the defaults are the contents of the $context.arguments, $context.source, and $context.identity maps, and a resolver configured with only a TTL behaves similarly to full request caching (a sketch of per-resolver caching follows the cache time to live entry below).

Cache time to live

This defines the amount of time cached entries will be stored in memory. The maximum TTL is 3600s (1h), after which entries will be automatically deleted.
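
As a rough sketch of per-resolver caching, the following boto3 calls switch the API cache to PER_RESOLVER_CACHING and then opt a single resolver in with a TTL and explicit caching keys. The API ID, type name, and field name are hypothetical placeholders, and existing resolver properties (data source, mapping templates, and so on) may also need to be re-supplied when updating the resolver.

```python
import boto3

appsync = boto3.client("appsync")

# Switch the existing API cache to per-resolver caching
# (the API ID is a hypothetical placeholder).
appsync.update_api_cache(
    apiId="example-api-id",
    type="LARGE",
    apiCachingBehavior="PER_RESOLVER_CACHING",
    ttl=3600,
)

# Opt a single resolver in to caching with explicit caching keys.
# Other resolver properties (data source, templates) are omitted here
# and may need to be re-supplied in a real update.
appsync.update_resolver(
    apiId="example-api-id",
    typeName="Query",      # hypothetical type
    fieldName="getPost",   # hypothetical field
    cachingConfig={
        "ttl": 300,        # seconds; the maximum is 3600
        "cachingKeys": [
            "$context.arguments.id",
            "$context.identity.sub",
        ],
    },
)
```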

Cache encryption comes in two flavors, described below. These are similar to the settings allowed by Amazon ElastiCache for Redis. You can enable the encryption settings only when you first enable caching for your AppSync API.

  • Encryption in transit: requests between AppSync, the cache, and data sources (except insecure HTTP data sources) will be encrypted at the network level. Because there is some processing needed to encrypt and decrypt the data at the endpoints, enabling in-transit encryption can have some performance impact.

  • Encryption at rest: Data saved to disk from memory during swap operations will be encrypted at the cache instance. This setting also carries a performance impact.
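
Because encryption can only be configured when the cache is first created, both settings are passed as flags on the initial create_api_cache call. A minimal sketch with boto3, again using a hypothetical API ID, might look like this:

```python
import boto3

appsync = boto3.client("appsync")

# Encryption settings must be chosen when the cache is first created;
# they cannot be changed on an existing cache.
appsync.create_api_cache(
    apiId="example-api-id",          # hypothetical placeholder
    type="LARGE",
    apiCachingBehavior="FULL_REQUEST_CACHING",
    ttl=3600,
    transitEncryptionEnabled=True,   # encrypt traffic at the network level
    atRestEncryptionEnabled=True,    # encrypt data swapped from memory to disk
)
```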

To invalidate cache entries, a flush cache API call is available. You can call it through the console or through the CLI.
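
The same operation is also exposed in the AWS SDKs; with boto3 the call would look like the following (the API ID is a hypothetical placeholder):

```python
import boto3

appsync = boto3.client("appsync")

# Flush (invalidate) all entries in the API's server-side cache.
appsync.flush_api_cache(apiId="example-api-id")
```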

For more information, see the ApiCache data type in the AWS AppSync API Reference.