Class: Aws::ElastiCache::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb

Overview

An API client for ElastiCache. To construct a client, you need to configure a :region and :credentials.

client = Aws::ElastiCache::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.
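
For instance, a minimal construction sketch using static credentials (the region and key values below are placeholders, not working credentials):

client = Aws::ElastiCache::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
)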

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a background thread polls for endpoints every 60 seconds by default. Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError instead of sleeping and will not be retried.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to true, the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    The maximum number of threads used to poll for endpoints to be cached. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at ~/.aws/credentials. When not specified, 'default' is used.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider as the telemetry provider.
  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS-compatible endpoints will be used if available. When a FIPS region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::ElastiCache::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::ElastiCache::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has the "Expect" header set to "100-continue". Defaults to nil, which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store to verify peer certificate.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 444

def initialize(*args)
  super
end
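
As a hedged illustration of how several of the options above combine, the following sketch configures retries, logging, and timeouts; the values are arbitrary examples rather than recommendations:

require 'logger'
require 'aws-sdk-elasticache'

client = Aws::ElastiCache::Client.new(
  region: 'us-west-2',
  retry_mode: 'standard',       # standardized retry rules with retry quotas
  max_attempts: 5,              # initial attempt plus up to 4 retries
  logger: Logger.new($stdout),  # send request log messages to STDOUT
  log_level: :info,             # level used when logging to :logger
  http_read_timeout: 30         # seconds to wait for response data
)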

Instance Method Details

#add_tags_to_resource(params = {}) ⇒ Types::TagListMessage

A tag is a key-value pair where the key and value are case-sensitive. You can use tags to categorize and track all your ElastiCache resources, with the exception of global replication groups. When you add or remove tags on replication groups, those actions will be replicated to all nodes in the replication group. For more information, see Resource-level permissions.

For example, when you apply cost-allocation tags to your ElastiCache resources, Amazon generates a cost allocation report as a comma-separated value (CSV) file with your usage and costs aggregated by your tags. You can apply tags that represent business categories (such as cost centers, application names, or owners) to organize your costs across multiple services.

For more information, see Using Cost Allocation Tags in Amazon ElastiCache in the ElastiCache User Guide.

Examples:

Example: AddTagsToResource


# Adds up to 10 tags, key/value pairs, to a cluster or snapshot resource.

resp = client.add_tags_to_resource({
  resource_name: "arn:aws:elasticache:us-east-1:1234567890:cluster:my-mem-cluster", 
  tags: [
    {
      key: "APIVersion", 
      value: "20150202", 
    }, 
    {
      key: "Service", 
      value: "ElastiCache", 
    }, 
  ], 
})

resp.to_h outputs the following:
{
  tag_list: [
    {
      key: "APIVersion", 
      value: "20150202", 
    }, 
    {
      key: "Service", 
      value: "ElastiCache", 
    }, 
  ], 
}

Request syntax with placeholder values


resp = client.add_tags_to_resource({
  resource_name: "String", # required
  tags: [ # required
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.tag_list #=> Array
resp.tag_list[0].key #=> String
resp.tag_list[0].value #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_name (required, String)

    The Amazon Resource Name (ARN) of the resource to which the tags are to be added, for example arn:aws:elasticache:us-west-2:0123456789:cluster:myCluster or arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot. ElastiCache resources are cluster and snapshot.

    For more information about ARNs, see Amazon Resource Names (ARNs) and Amazon Service Namespaces.

  • :tags (required, Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 550

def add_tags_to_resource(params = {}, options = {})
  req = build_request(:add_tags_to_resource, params)
  req.send_request(options)
end
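
As a follow-up usage note, the returned tag list can be iterated directly; a hedged sketch, reusing the example ARN and tag from above:

resp = client.add_tags_to_resource({
  resource_name: "arn:aws:elasticache:us-east-1:1234567890:cluster:my-mem-cluster",
  tags: [{ key: "APIVersion", value: "20150202" }],
})

resp.tag_list.each do |tag|
  puts "#{tag.key}=#{tag.value}"  # each entry exposes #key and #value per the response structure
end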

#authorize_cache_security_group_ingress(params = {}) ⇒ Types::AuthorizeCacheSecurityGroupIngressResult

Allows network ingress to a cache security group. Applications using ElastiCache must be running on Amazon EC2, and Amazon EC2 security groups are used as the authorization mechanism.

You cannot authorize ingress from an Amazon EC2 security group in one region to an ElastiCache cluster in another region.

Examples:

Example: AuthorizeCacheSecurityGroupIngress


# Allows network ingress to a cache security group. Applications using ElastiCache must be running on Amazon EC2. Amazon
# EC2 security groups are used as the authorization mechanism.

resp = client.authorize_cache_security_group_ingress({
  cache_security_group_name: "my-sec-grp", 
  ec2_security_group_name: "my-ec2-sec-grp", 
  ec2_security_group_owner_id: "1234567890", 
})

Request syntax with placeholder values


resp = client.authorize_cache_security_group_ingress({
  cache_security_group_name: "String", # required
  ec2_security_group_name: "String", # required
  ec2_security_group_owner_id: "String", # required
})

Response structure


resp.cache_security_group.owner_id #=> String
resp.cache_security_group.cache_security_group_name #=> String
resp.cache_security_group.description #=> String
resp.cache_security_group.ec2_security_groups #=> Array
resp.cache_security_group.ec2_security_groups[0].status #=> String
resp.cache_security_group.ec2_security_groups[0].ec2_security_group_name #=> String
resp.cache_security_group.ec2_security_groups[0].ec2_security_group_owner_id #=> String
resp.cache_security_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_security_group_name (required, String)

    The cache security group that allows network ingress.

  • :ec2_security_group_name (required, String)

    The Amazon EC2 security group to be authorized for ingress to the cache security group.

  • :ec2_security_group_owner_id (required, String)

    The Amazon account number of the Amazon EC2 security group owner. Note that this is not the same thing as an Amazon access key ID - you must provide a valid Amazon account number for this parameter.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 615

def authorize_cache_security_group_ingress(params = {}, options = {})
  req = build_request(:authorize_cache_security_group_ingress, params)
  req.send_request(options)
end

#batch_apply_update_action(params = {}) ⇒ Types::UpdateActionResultsMessage

Apply the service update. For more information on service updates and applying them, see Applying Service Updates.

Examples:

Request syntax with placeholder values


resp = client.batch_apply_update_action({
  replication_group_ids: ["String"],
  cache_cluster_ids: ["String"],
  service_update_name: "String", # required
})

Response structure


resp.processed_update_actions #=> Array
resp.processed_update_actions[0].replication_group_id #=> String
resp.processed_update_actions[0].cache_cluster_id #=> String
resp.processed_update_actions[0].service_update_name #=> String
resp.processed_update_actions[0].update_action_status #=> String, one of "not-applied", "waiting-to-start", "in-progress", "stopping", "stopped", "complete", "scheduling", "scheduled", "not-applicable"
resp.unprocessed_update_actions #=> Array
resp.unprocessed_update_actions[0].replication_group_id #=> String
resp.unprocessed_update_actions[0].cache_cluster_id #=> String
resp.unprocessed_update_actions[0].service_update_name #=> String
resp.unprocessed_update_actions[0].error_type #=> String
resp.unprocessed_update_actions[0].error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_ids (Array<String>)

    The replication group IDs

  • :cache_cluster_ids (Array<String>)

    The cache cluster IDs

  • :service_update_name (required, String)

    The unique ID of the service update

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 667

def batch_apply_update_action(params = {}, options = {})
  req = build_request(:batch_apply_update_action, params)
  req.send_request(options)
end
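
A hedged usage sketch (the replication group ID and service update name below are hypothetical), separating processed from unprocessed actions:

resp = client.batch_apply_update_action({
  replication_group_ids: ["my-replication-group"],  # hypothetical ID
  service_update_name: "my-service-update",         # hypothetical update name
})

resp.processed_update_actions.each do |action|
  puts "applied to #{action.replication_group_id}: #{action.update_action_status}"
end
resp.unprocessed_update_actions.each do |action|
  puts "skipped #{action.replication_group_id}: #{action.error_message}"
end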

#batch_stop_update_action(params = {}) ⇒ Types::UpdateActionResultsMessage

Stop the service update. For more information on service updates and stopping them, see Stopping Service Updates.

Examples:

Request syntax with placeholder values


resp = client.batch_stop_update_action({
  replication_group_ids: ["String"],
  cache_cluster_ids: ["String"],
  service_update_name: "String", # required
})

Response structure


resp.processed_update_actions #=> Array
resp.processed_update_actions[0].replication_group_id #=> String
resp.processed_update_actions[0].cache_cluster_id #=> String
resp.processed_update_actions[0].service_update_name #=> String
resp.processed_update_actions[0].update_action_status #=> String, one of "not-applied", "waiting-to-start", "in-progress", "stopping", "stopped", "complete", "scheduling", "scheduled", "not-applicable"
resp.unprocessed_update_actions #=> Array
resp.unprocessed_update_actions[0].replication_group_id #=> String
resp.unprocessed_update_actions[0].cache_cluster_id #=> String
resp.unprocessed_update_actions[0].service_update_name #=> String
resp.unprocessed_update_actions[0].error_type #=> String
resp.unprocessed_update_actions[0].error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_ids (Array<String>)

    The replication group IDs

  • :cache_cluster_ids (Array<String>)

    The cache cluster IDs

  • :service_update_name (required, String)

    The unique ID of the service update

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 719

def batch_stop_update_action(params = {}, options = {})
  req = build_request(:batch_stop_update_action, params)
  req.send_request(options)
end
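
Similarly, a hedged sketch of stopping an in-progress update on specific cache clusters (IDs hypothetical):

resp = client.batch_stop_update_action({
  cache_cluster_ids: ["my-memcached-cluster"],  # hypothetical cluster ID
  service_update_name: "my-service-update",     # hypothetical update name
})

resp.processed_update_actions.each do |action|
  puts "#{action.cache_cluster_id}: #{action.update_action_status}"  # e.g. "stopping", per the enum above
end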

#complete_migration(params = {}) ⇒ Types::CompleteMigrationResponse

Complete the migration of data.

Examples:

Request syntax with placeholder values


resp = client.complete_migration({
  replication_group_id: "String", # required
  force: false,
})

Response structure


resp.replication_group.replication_group_id #=> String
resp.replication_group.description #=> String
resp.replication_group.global_replication_group_info.global_replication_group_id #=> String
resp.replication_group.global_replication_group_info.global_replication_group_member_role #=> String
resp.replication_group.status #=> String
resp.replication_group.pending_modified_values.primary_cluster_id #=> String
resp.replication_group.pending_modified_values.automatic_failover_status #=> String, one of "enabled", "disabled"
resp.replication_group.pending_modified_values.resharding.slot_migration.progress_percentage #=> Float
resp.replication_group.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add[0] #=> String
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove[0] #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations #=> Array
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.replication_group.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.pending_modified_values.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.member_clusters #=> Array
resp.replication_group.member_clusters[0] #=> String
resp.replication_group.node_groups #=> Array
resp.replication_group.node_groups[0].node_group_id #=> String
resp.replication_group.node_groups[0].status #=> String
resp.replication_group.node_groups[0].primary_endpoint.address #=> String
resp.replication_group.node_groups[0].primary_endpoint.port #=> Integer
resp.replication_group.node_groups[0].reader_endpoint.address #=> String
resp.replication_group.node_groups[0].reader_endpoint.port #=> Integer
resp.replication_group.node_groups[0].slots #=> String
resp.replication_group.node_groups[0].node_group_members #=> Array
resp.replication_group.node_groups[0].node_group_members[0].cache_cluster_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].cache_node_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.address #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.port #=> Integer
resp.replication_group.node_groups[0].node_group_members[0].preferred_availability_zone #=> String
resp.replication_group.node_groups[0].node_group_members[0].preferred_outpost_arn #=> String
resp.replication_group.node_groups[0].node_group_members[0].current_role #=> String
resp.replication_group.snapshotting_cluster_id #=> String
resp.replication_group.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.replication_group.multi_az #=> String, one of "enabled", "disabled"
resp.replication_group.configuration_endpoint.address #=> String
resp.replication_group.configuration_endpoint.port #=> Integer
resp.replication_group.snapshot_retention_limit #=> Integer
resp.replication_group.snapshot_window #=> String
resp.replication_group.cluster_enabled #=> Boolean
resp.replication_group.cache_node_type #=> String
resp.replication_group.auth_token_enabled #=> Boolean
resp.replication_group.auth_token_last_modified_date #=> Time
resp.replication_group.transit_encryption_enabled #=> Boolean
resp.replication_group.at_rest_encryption_enabled #=> Boolean
resp.replication_group.member_clusters_outpost_arns #=> Array
resp.replication_group.member_clusters_outpost_arns[0] #=> String
resp.replication_group.kms_key_id #=> String
resp.replication_group.arn #=> String
resp.replication_group.user_group_ids #=> Array
resp.replication_group.user_group_ids[0] #=> String
resp.replication_group.log_delivery_configurations #=> Array
resp.replication_group.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.replication_group.log_delivery_configurations[0].message #=> String
resp.replication_group.replication_group_create_time #=> Time
resp.replication_group.data_tiering #=> String, one of "enabled", "disabled"
resp.replication_group.auto_minor_version_upgrade #=> Boolean
resp.replication_group.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.replication_group.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.replication_group.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.engine #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_id (required, String)

    The ID of the replication group to which data is being migrated.

  • :force (Boolean)

    Forces the migration to stop without ensuring that data is in sync. Use this option only to abort the migration; it is not recommended when the application wants to continue migrating data to ElastiCache.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 828

def complete_migration(params = {}, options = {})
  req = build_request(:complete_migration, params)
  req.send_request(options)
end
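
A hedged sketch (the replication group ID is hypothetical) of completing a migration and reading back the group's status:

resp = client.complete_migration({
  replication_group_id: "my-replication-group",  # hypothetical ID
})

puts resp.replication_group.status           # current status of the replication group
puts resp.replication_group.member_clusters  # clusters belonging to the group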

#copy_serverless_cache_snapshot(params = {}) ⇒ Types::CopyServerlessCacheSnapshotResponse

Creates a copy of an existing serverless cache’s snapshot. Available for Valkey, Redis OSS and Serverless Memcached only.

Examples:

Request syntax with placeholder values


resp = client.copy_serverless_cache_snapshot({
  source_serverless_cache_snapshot_name: "String", # required
  target_serverless_cache_snapshot_name: "String", # required
  kms_key_id: "String",
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.serverless_cache_snapshot.serverless_cache_snapshot_name #=> String
resp.serverless_cache_snapshot.arn #=> String
resp.serverless_cache_snapshot.kms_key_id #=> String
resp.serverless_cache_snapshot.snapshot_type #=> String
resp.serverless_cache_snapshot.status #=> String
resp.serverless_cache_snapshot.create_time #=> Time
resp.serverless_cache_snapshot.expiry_time #=> Time
resp.serverless_cache_snapshot.bytes_used_for_cache #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.serverless_cache_name #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.engine #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.major_engine_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :source_serverless_cache_snapshot_name (required, String)

    The identifier of the existing serverless cache’s snapshot to be copied. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :target_serverless_cache_snapshot_name (required, String)

    The identifier for the snapshot to be created. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :kms_key_id (String)

    The identifier of the KMS key used to encrypt the target snapshot. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to the target snapshot resource. A tag is a key-value pair. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 889

def copy_serverless_cache_snapshot(params = {}, options = {})
  req = build_request(:copy_serverless_cache_snapshot, params)
  req.send_request(options)
end
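
A hedged usage sketch (the snapshot names and the tag are hypothetical):

resp = client.copy_serverless_cache_snapshot({
  source_serverless_cache_snapshot_name: "my-serverless-snapshot",       # hypothetical
  target_serverless_cache_snapshot_name: "my-serverless-snapshot-copy",  # hypothetical
  tags: [{ key: "Team", value: "caching" }],                             # hypothetical tag
})

snapshot = resp.serverless_cache_snapshot
puts "#{snapshot.serverless_cache_snapshot_name}: #{snapshot.status}"  # name and status of the new copy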

#copy_snapshot(params = {}) ⇒ Types::CopySnapshotResult

Makes a copy of an existing snapshot.

This operation is valid for Valkey or Redis OSS only.

Users or groups that have permissions to use the CopySnapshot operation can create their own Amazon S3 buckets and copy snapshots to them. To control access to your snapshots, use an IAM policy to control who has the ability to use the CopySnapshot operation. For more information about using IAM to control the use of ElastiCache operations, see Exporting Snapshots and Authentication & Access Control.

You could receive the following error messages.

Error Messages

  • Error Message: The S3 bucket %s is outside of the region.

    Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide.

  • Error Message: The S3 bucket %s does not exist.

    Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide.

  • Error Message: The S3 bucket %s is not owned by the authenticated user.

    Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide.

  • Error Message: The authenticated user does not have sufficient permissions to perform the desired activity.

    Solution: Contact your system administrator to get the needed permissions.

  • Error Message: The S3 bucket %s already contains an object with key %s.

    Solution: Give the TargetSnapshotName a new and unique value. If exporting a snapshot, you could alternatively create a new Amazon S3 bucket and use this same value for TargetSnapshotName.

  • Error Message: ElastiCache has not been granted READ permissions %s on the S3 Bucket.

    Solution: Add List and Read permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide.

  • Error Message: ElastiCache has not been granted WRITE permissions %s on the S3 Bucket.

    Solution: Add Upload/Delete permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide.

  • Error Message: ElastiCache has not been granted READ_ACP permissions %s on the S3 Bucket.

    Solution: Add View Permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide.

Examples:

Example: CopySnapshot


# Copies a snapshot to a specified name.

resp = client.copy_snapshot({
  source_snapshot_name: "my-snapshot", 
  target_bucket: "", 
  target_snapshot_name: "my-snapshot-copy", 
})

resp.to_h outputs the following:
{
  snapshot: {
    auto_minor_version_upgrade: true, 
    cache_cluster_create_time: Time.parse("2016-12-21T22:24:04.955Z"), 
    cache_cluster_id: "my-redis4", 
    cache_node_type: "cache.m3.large", 
    cache_parameter_group_name: "default.redis3.2", 
    cache_subnet_group_name: "default", 
    engine: "redis", 
    engine_version: "3.2.4", 
    node_snapshots: [
      {
        cache_node_create_time: Time.parse("2016-12-21T22:24:04.955Z"), 
        cache_node_id: "0001", 
        cache_size: "3 MB", 
        snapshot_create_time: Time.parse("2016-12-28T07:00:52Z"), 
      }, 
    ], 
    num_cache_nodes: 1, 
    port: 6379, 
    preferred_availability_zone: "us-east-1c", 
    preferred_maintenance_window: "tue:09:30-tue:10:30", 
    snapshot_name: "my-snapshot-copy", 
    snapshot_retention_limit: 7, 
    snapshot_source: "manual", 
    snapshot_status: "creating", 
    snapshot_window: "07:00-08:00", 
    vpc_id: "vpc-3820329f3", 
  }, 
}

Request syntax with placeholder values


resp = client.copy_snapshot({
  source_snapshot_name: "String", # required
  target_snapshot_name: "String", # required
  target_bucket: "String",
  kms_key_id: "String",
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.snapshot.snapshot_name #=> String
resp.snapshot.replication_group_id #=> String
resp.snapshot.replication_group_description #=> String
resp.snapshot.cache_cluster_id #=> String
resp.snapshot.snapshot_status #=> String
resp.snapshot.snapshot_source #=> String
resp.snapshot.cache_node_type #=> String
resp.snapshot.engine #=> String
resp.snapshot.engine_version #=> String
resp.snapshot.num_cache_nodes #=> Integer
resp.snapshot.preferred_availability_zone #=> String
resp.snapshot.preferred_outpost_arn #=> String
resp.snapshot.cache_cluster_create_time #=> Time
resp.snapshot.preferred_maintenance_window #=> String
resp.snapshot.topic_arn #=> String
resp.snapshot.port #=> Integer
resp.snapshot.cache_parameter_group_name #=> String
resp.snapshot.cache_subnet_group_name #=> String
resp.snapshot.vpc_id #=> String
resp.snapshot.auto_minor_version_upgrade #=> Boolean
resp.snapshot.snapshot_retention_limit #=> Integer
resp.snapshot.snapshot_window #=> String
resp.snapshot.num_node_groups #=> Integer
resp.snapshot.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.snapshot.node_snapshots #=> Array
resp.snapshot.node_snapshots[0].cache_cluster_id #=> String
resp.snapshot.node_snapshots[0].node_group_id #=> String
resp.snapshot.node_snapshots[0].cache_node_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.node_group_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.slots #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_count #=> Integer
resp.snapshot.node_snapshots[0].node_group_configuration.primary_availability_zone #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones[0] #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.primary_outpost_arn #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns[0] #=> String
resp.snapshot.node_snapshots[0].cache_size #=> String
resp.snapshot.node_snapshots[0].cache_node_create_time #=> Time
resp.snapshot.node_snapshots[0].snapshot_create_time #=> Time
resp.snapshot.kms_key_id #=> String
resp.snapshot.arn #=> String
resp.snapshot.data_tiering #=> String, one of "enabled", "disabled"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :source_snapshot_name (required, String)

    The name of an existing snapshot from which to make a copy.

  • :target_snapshot_name (required, String)

    A name for the snapshot copy. ElastiCache does not permit overwriting a snapshot, therefore this name must be unique within its context - ElastiCache or an Amazon S3 bucket if exporting.

  • :target_bucket (String)

    The Amazon S3 bucket to which the snapshot is exported. This parameter is used only when exporting a snapshot for external access.

    When using this parameter to export a snapshot, be sure Amazon ElastiCache has the needed permissions to this S3 bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the Amazon ElastiCache User Guide.

    For more information, see Exporting a Snapshot in the Amazon ElastiCache User Guide.

  • :kms_key_id (String)

    The ID of the KMS key used to encrypt the target snapshot.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 1117

def copy_snapshot(params = {}, options = {})
  req = build_request(:copy_snapshot, params)
  req.send_request(options)
end
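
Given the error cases listed above, a hedged sketch of guarding the call; the snapshot names are hypothetical, and Aws::ElastiCache::Errors::ServiceError is assumed here to be the SDK's generated base class for this service's errors:

begin
  client.copy_snapshot({
    source_snapshot_name: "my-snapshot",       # hypothetical
    target_snapshot_name: "my-snapshot-copy",  # hypothetical
  })
rescue Aws::ElastiCache::Errors::ServiceError => e
  # e.message carries the error text, e.g. one of the S3 permission messages above
  warn "CopySnapshot failed: #{e.message}"
end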

#create_cache_cluster(params = {}) ⇒ Types::CreateCacheClusterResult

Creates a cluster. All nodes in the cluster run the same protocol-compliant cache engine software, either Memcached, Valkey or Redis OSS.

This operation is not supported for Valkey or Redis OSS (cluster mode enabled) clusters.

Examples:

Example: CreateCacheCluster


# Creates a Memcached cluster with 2 nodes. 

resp = client.create_cache_cluster({
  az_mode: "cross-az", 
  cache_cluster_id: "my-memcached-cluster", 
  cache_node_type: "cache.r3.large", 
  cache_subnet_group_name: "default", 
  engine: "memcached", 
  engine_version: "1.4.24", 
  num_cache_nodes: 2, 
  port: 11211, 
})

resp.to_h outputs the following:
{
  cache_cluster: {
    auto_minor_version_upgrade: true, 
    cache_cluster_id: "my-memcached-cluster", 
    cache_cluster_status: "creating", 
    cache_node_type: "cache.r3.large", 
    cache_parameter_group: {
      cache_node_ids_to_reboot: [
      ], 
      cache_parameter_group_name: "default.memcached1.4", 
      parameter_apply_status: "in-sync", 
    }, 
    cache_security_groups: [
    ], 
    cache_subnet_group_name: "default", 
    client_download_landing_page: "https://console.aws.amazon.com/elasticache/home#client-download:", 
    engine: "memcached", 
    engine_version: "1.4.24", 
    num_cache_nodes: 2, 
    pending_modified_values: {
    }, 
    preferred_availability_zone: "Multiple", 
    preferred_maintenance_window: "wed:09:00-wed:10:00", 
  }, 
}

Example: CreateCacheCluster


# Creates a Redis cluster with 1 node. 

resp = client.create_cache_cluster({
  auto_minor_version_upgrade: true, 
  cache_cluster_id: "my-redis", 
  cache_node_type: "cache.r3.larage", 
  cache_subnet_group_name: "default", 
  engine: "redis", 
  engine_version: "3.2.4", 
  num_cache_nodes: 1, 
  port: 6379, 
  preferred_availability_zone: "us-east-1c", 
  snapshot_retention_limit: 7, 
})

resp.to_h outputs the following:
{
  cache_cluster: {
    auto_minor_version_upgrade: true, 
    cache_cluster_id: "my-redis", 
    cache_cluster_status: "creating", 
    cache_node_type: "cache.m3.large", 
    cache_parameter_group: {
      cache_node_ids_to_reboot: [
      ], 
      cache_parameter_group_name: "default.redis3.2", 
      parameter_apply_status: "in-sync", 
    }, 
    cache_security_groups: [
    ], 
    cache_subnet_group_name: "default", 
    client_download_landing_page: "https://console.aws.amazon.com/elasticache/home#client-download:", 
    engine: "redis", 
    engine_version: "3.2.4", 
    num_cache_nodes: 1, 
    pending_modified_values: {
    }, 
    preferred_availability_zone: "us-east-1c", 
    preferred_maintenance_window: "fri:05:30-fri:06:30", 
    snapshot_retention_limit: 7, 
    snapshot_window: "10:00-11:00", 
  }, 
}

Request syntax with placeholder values


resp = client.create_cache_cluster({
  cache_cluster_id: "String", # required
  replication_group_id: "String",
  az_mode: "single-az", # accepts single-az, cross-az
  preferred_availability_zone: "String",
  preferred_availability_zones: ["String"],
  num_cache_nodes: 1,
  cache_node_type: "String",
  engine: "String",
  engine_version: "String",
  cache_parameter_group_name: "String",
  cache_subnet_group_name: "String",
  cache_security_group_names: ["String"],
  security_group_ids: ["String"],
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
  snapshot_arns: ["String"],
  snapshot_name: "String",
  preferred_maintenance_window: "String",
  port: 1,
  notification_topic_arn: "String",
  auto_minor_version_upgrade: false,
  snapshot_retention_limit: 1,
  snapshot_window: "String",
  auth_token: "String",
  outpost_mode: "single-outpost", # accepts single-outpost, cross-outpost
  preferred_outpost_arn: "String",
  preferred_outpost_arns: ["String"],
  log_delivery_configurations: [
    {
      log_type: "slow-log", # accepts slow-log, engine-log
      destination_type: "cloudwatch-logs", # accepts cloudwatch-logs, kinesis-firehose
      destination_details: {
        cloud_watch_logs_details: {
          log_group: "String",
        },
        kinesis_firehose_details: {
          delivery_stream: "String",
        },
      },
      log_format: "text", # accepts text, json
      enabled: false,
    },
  ],
  transit_encryption_enabled: false,
  network_type: "ipv4", # accepts ipv4, ipv6, dual_stack
  ip_discovery: "ipv4", # accepts ipv4, ipv6
})

Response structure


resp.cache_cluster.cache_cluster_id #=> String
resp.cache_cluster.configuration_endpoint.address #=> String
resp.cache_cluster.configuration_endpoint.port #=> Integer
resp.cache_cluster.client_download_landing_page #=> String
resp.cache_cluster.cache_node_type #=> String
resp.cache_cluster.engine #=> String
resp.cache_cluster.engine_version #=> String
resp.cache_cluster.cache_cluster_status #=> String
resp.cache_cluster.num_cache_nodes #=> Integer
resp.cache_cluster.preferred_availability_zone #=> String
resp.cache_cluster.preferred_outpost_arn #=> String
resp.cache_cluster.cache_cluster_create_time #=> Time
resp.cache_cluster.preferred_maintenance_window #=> String
resp.cache_cluster.pending_modified_values.num_cache_nodes #=> Integer
resp.cache_cluster.pending_modified_values.cache_node_ids_to_remove #=> Array
resp.cache_cluster.pending_modified_values.cache_node_ids_to_remove[0] #=> String
resp.cache_cluster.pending_modified_values.engine_version #=> String
resp.cache_cluster.pending_modified_values.cache_node_type #=> String
resp.cache_cluster.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.cache_cluster.pending_modified_values.log_delivery_configurations #=> Array
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_cluster.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.cache_cluster.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.cache_cluster.notification_configuration.topic_arn #=> String
resp.cache_cluster.notification_configuration.topic_status #=> String
resp.cache_cluster.cache_security_groups #=> Array
resp.cache_cluster.cache_security_groups[0].cache_security_group_name #=> String
resp.cache_cluster.cache_security_groups[0].status #=> String
resp.cache_cluster.cache_parameter_group.cache_parameter_group_name #=> String
resp.cache_cluster.cache_parameter_group.parameter_apply_status #=> String
resp.cache_cluster.cache_parameter_group.cache_node_ids_to_reboot #=> Array
resp.cache_cluster.cache_parameter_group.cache_node_ids_to_reboot[0] #=> String
resp.cache_cluster.cache_subnet_group_name #=> String
resp.cache_cluster.cache_nodes #=> Array
resp.cache_cluster.cache_nodes[0].cache_node_id #=> String
resp.cache_cluster.cache_nodes[0].cache_node_status #=> String
resp.cache_cluster.cache_nodes[0].cache_node_create_time #=> Time
resp.cache_cluster.cache_nodes[0].endpoint.address #=> String
resp.cache_cluster.cache_nodes[0].endpoint.port #=> Integer
resp.cache_cluster.cache_nodes[0].parameter_group_status #=> String
resp.cache_cluster.cache_nodes[0].source_cache_node_id #=> String
resp.cache_cluster.cache_nodes[0].customer_availability_zone #=> String
resp.cache_cluster.cache_nodes[0].customer_outpost_arn #=> String
resp.cache_cluster.auto_minor_version_upgrade #=> Boolean
resp.cache_cluster.security_groups #=> Array
resp.cache_cluster.security_groups[0].security_group_id #=> String
resp.cache_cluster.security_groups[0].status #=> String
resp.cache_cluster.replication_group_id #=> String
resp.cache_cluster.snapshot_retention_limit #=> Integer
resp.cache_cluster.snapshot_window #=> String
resp.cache_cluster.auth_token_enabled #=> Boolean
resp.cache_cluster.auth_token_last_modified_date #=> Time
resp.cache_cluster.transit_encryption_enabled #=> Boolean
resp.cache_cluster.at_rest_encryption_enabled #=> Boolean
resp.cache_cluster.arn #=> String
resp.cache_cluster.replication_group_log_delivery_enabled #=> Boolean
resp.cache_cluster.log_delivery_configurations #=> Array
resp.cache_cluster.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_cluster.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_cluster.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_cluster.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_cluster.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_cluster.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.cache_cluster.log_delivery_configurations[0].message #=> String
resp.cache_cluster.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.cache_cluster.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.cache_cluster.transit_encryption_mode #=> String, one of "preferred", "required"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_cluster_id (required, String)

    The node group (shard) identifier. This parameter is stored as a lowercase string.

    Constraints:

    • A name must contain from 1 to 50 alphanumeric characters or hyphens.

    • The first character must be a letter.

    • A name cannot end with a hyphen or contain two consecutive hyphens.

  • :replication_group_id (String)

    The ID of the replication group to which this cluster should belong. If this parameter is specified, the cluster is added to the specified replication group as a read replica; otherwise, the cluster is a standalone primary that is not part of any replication group.

    If the specified replication group is Multi-AZ enabled and the Availability Zone is not specified, the cluster is created in Availability Zones that provide the best spread of read replicas across Availability Zones.

    This parameter is only valid if the Engine parameter is redis.

  • :az_mode (String)

    Specifies whether the nodes in this Memcached cluster are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region.

    This parameter is only supported for Memcached clusters.

    If the AZMode and PreferredAvailabilityZones are not specified, ElastiCache assumes single-az mode.

  • :preferred_availability_zone (String)

    The EC2 Availability Zone in which the cluster is created.

    All nodes belonging to this cluster are placed in the preferred Availability Zone. If you want to create your nodes across multiple Availability Zones, use PreferredAvailabilityZones.

    Default: System chosen Availability Zone.

  • :preferred_availability_zones (Array<String>)

    A list of the Availability Zones in which cache nodes are created. The order of the zones in the list is not important.

    This option is only supported on Memcached.

    If you are creating your cluster in an Amazon VPC (recommended) you can only locate nodes in Availability Zones that are associated with the subnets in the selected subnet group.

    The number of Availability Zones listed must equal the value of NumCacheNodes.

    If you want all the nodes in the same Availability Zone, use PreferredAvailabilityZone instead, or repeat the Availability Zone multiple times in the list.

    Default: System chosen Availability Zones.

  • :num_cache_nodes (Integer)

    The initial number of cache nodes that the cluster has.

    For clusters running Valkey or Redis OSS, this value must be 1. For clusters running Memcached, this value must be between 1 and 40.

    If you need more than 40 nodes for your Memcached cluster, please fill out the ElastiCache Limit Increase Request form at http://aws.amazon.com/contact-us/elasticache-node-limit-request/.

  • :cache_node_type (String)

    The compute and memory capacity of the nodes in the node group (shard).

    The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

    • General purpose:

      • Current generation:

        M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge

        For region availability, see Supported Node Types

        M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge

        M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge

        M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge

        T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium

        T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium

        T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        T1 node types: cache.t1.micro

        M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge

        M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge

    • Compute optimized:

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        C1 node types: cache.c1.xlarge

    • Memory optimized:

      • Current generation:

        R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge

        For region availability, see Supported Node Types

        R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge

        R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge

        R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge

        R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge

    Additional node type info

    • All current generation instance types are created in Amazon VPC by default.

    • Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances.

    • Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances.

    • The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later.

  • :engine (String)

    The name of the cache engine to be used for this cluster.

    Valid values for this parameter are: memcached | redis

  • :engine_version (String)

    The version number of the cache engine to be used for this cluster. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

    Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster or replication group and create it anew with the earlier engine version.

  • :cache_parameter_group_name (String)

    The name of the parameter group to associate with this cluster. If this argument is omitted, the default parameter group for the specified engine is used. You cannot use any parameter group which has cluster-enabled='yes' when creating a cluster.

  • :cache_subnet_group_name (String)

    The name of the subnet group to be used for the cluster.

    Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (Amazon VPC).

    If you're going to launch your cluster in an Amazon VPC, you need to create a subnet group before you start creating a cluster. For more information, see Subnets and Subnet Groups.

  • :cache_security_group_names (Array<String>)

    A list of security group names to associate with this cluster.

    Use this parameter only when you are creating a cluster outside of an Amazon Virtual Private Cloud (Amazon VPC).

  • :security_group_ids (Array<String>)

    One or more VPC security groups associated with the cluster.

    Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (Amazon VPC).

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource.

  • :snapshot_arns (Array<String>)

    A single-element string list containing an Amazon Resource Name (ARN) that uniquely identifies a Valkey or Redis OSS RDB snapshot file stored in Amazon S3. The snapshot file is used to populate the node group (shard). The Amazon S3 object name in the ARN cannot contain any commas.

    This parameter is only valid if the Engine parameter is redis.

    Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb

  • :snapshot_name (String)

    The name of a Valkey or Redis OSS snapshot from which to restore data into the new node group (shard). The snapshot status changes to restoring while the new node group (shard) is being created.

    This parameter is only valid if the Engine parameter is redis.

  • :preferred_maintenance_window (String)

    Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

  • :port (Integer)

    The port number on which each of the cache nodes accepts connections.

  • :notification_topic_arn (String)

    The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

    The Amazon SNS topic owner must be the same as the cluster owner.

  • :auto_minor_version_upgrade (Boolean)

    If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to true to opt in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions.

  • :snapshot_retention_limit (Integer)

    The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot taken today is retained for 5 days before being deleted.

    This parameter is only valid if the Engine parameter is redis.

    Default: 0 (i.e., automatic backups are disabled for this cache cluster).

  • :snapshot_window (String)

    The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

    Example: 05:00-09:00

    If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

    This parameter is only valid if the Engine parameter is redis.

  • :auth_token (String)

    Reserved parameter. The password used to access a password protected server.

    Password constraints:

    • Must be only printable ASCII characters.

    • Must be at least 16 characters and no more than 128 characters in length.

    • The only permitted printable special characters are !, &, #, $, ^, <, >, and -. Other printable special characters cannot be used in the AUTH token.

    For more information, see AUTH password at http://redis.io/commands/AUTH.

  • :outpost_mode (String)

    Specifies whether the nodes in the cluster are created in a single outpost or across multiple outposts.

  • :preferred_outpost_arn (String)

    The outpost ARN in which the cache cluster is created.

  • :preferred_outpost_arns (Array<String>)

    The outpost ARNs in which the cache cluster is created.

  • :log_delivery_configurations (Array<Types::LogDeliveryConfigurationRequest>)

    Specifies the destination, format and type of the logs.

  • :transit_encryption_enabled (Boolean)

    A flag that enables in-transit encryption when set to true.

  • :network_type (String)

    Must be either ipv4 | ipv6 | dual_stack. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system.

  • :ip_discovery (String)

    The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 1736

def create_cache_cluster(params = {}, options = {})
  req = build_request(:create_cache_cluster, params)
  req.send_request(options)
end
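
The Availability Zone constraints described above (the length of :preferred_availability_zones must equal :num_cache_nodes) can be illustrated with a minimal, hypothetical sketch; the cluster identifier, zone names, and node type below are placeholders, not values taken from this reference.

# Minimal sketch (placeholder values): a 3-node Memcached cluster spread
# across three Availability Zones, one zone per node.
resp = client.create_cache_cluster({
  cache_cluster_id: "example-memcached",   # hypothetical identifier
  engine: "memcached",
  cache_node_type: "cache.t3.small",
  num_cache_nodes: 3,
  preferred_availability_zones: [
    "us-east-1a",
    "us-east-1b",
    "us-east-1c",
  ],
})

resp.cache_cluster.cache_cluster_status #=> String, e.g. "creating"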

#create_cache_parameter_group(params = {}) ⇒ Types::CreateCacheParameterGroupResult

Creates a new Amazon ElastiCache cache parameter group. An ElastiCache cache parameter group is a collection of parameters and their values that are applied to all of the nodes in any cluster or replication group using the CacheParameterGroup.

A newly created CacheParameterGroup is an exact duplicate of the default parameter group for the CacheParameterGroupFamily. To customize the newly created CacheParameterGroup you can change the values of specific parameters. For more information, see:

Examples:

Example: CreateCacheParameterGroup


# Creates the Amazon ElastiCache parameter group custom-redis2-8.

resp = client.create_cache_parameter_group({
  cache_parameter_group_family: "redis2.8", 
  cache_parameter_group_name: "custom-redis2-8", 
  description: "Custom Redis 2.8 parameter group.", 
})

resp.to_h outputs the following:
{
  cache_parameter_group: {
    cache_parameter_group_family: "redis2.8", 
    cache_parameter_group_name: "custom-redis2-8", 
    description: "Custom Redis 2.8 parameter group.", 
  }, 
}

Request syntax with placeholder values


resp = client.create_cache_parameter_group({
  cache_parameter_group_name: "String", # required
  cache_parameter_group_family: "String", # required
  description: "String", # required
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.cache_parameter_group.cache_parameter_group_name #=> String
resp.cache_parameter_group.cache_parameter_group_family #=> String
resp.cache_parameter_group.description #=> String
resp.cache_parameter_group.is_global #=> Boolean
resp.cache_parameter_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_parameter_group_name (required, String)

    A user-specified name for the cache parameter group.

  • :cache_parameter_group_family (required, String)

    The name of the cache parameter group family that the cache parameter group can be used with.

    Valid values are: memcached1.4 | memcached1.5 | memcached1.6 | redis2.6 | redis2.8 | redis3.2 | redis4.0 | redis5.0 | redis6.x | redis7

  • :description (required, String)

    A user-specified description for the cache parameter group.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 1829

def create_cache_parameter_group(params = {}, options = {})
  req = build_request(:create_cache_parameter_group, params)
  req.send_request(options)
end

#create_cache_security_group(params = {}) ⇒ Types::CreateCacheSecurityGroupResult

Creates a new cache security group. Use a cache security group to control access to one or more clusters.

Cache security groups are only used when you are creating a cluster outside of an Amazon Virtual Private Cloud (Amazon VPC). If you are creating a cluster inside of a VPC, use a cache subnet group instead. For more information, see CreateCacheSubnetGroup.

Examples:

Example: CreateCacheSecurityGroup


# Creates an ElastiCache security group. ElastiCache security groups are only for clusters not running in an AWS VPC.

resp = client.create_cache_security_group({
  cache_security_group_name: "my-cache-sec-grp", 
  description: "Example ElastiCache security group.", 
})

Request syntax with placeholder values


resp = client.create_cache_security_group({
  cache_security_group_name: "String", # required
  description: "String", # required
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.cache_security_group.owner_id #=> String
resp.cache_security_group.cache_security_group_name #=> String
resp.cache_security_group.description #=> String
resp.cache_security_group.ec2_security_groups #=> Array
resp.cache_security_group.ec2_security_groups[0].status #=> String
resp.cache_security_group.ec2_security_groups[0].ec2_security_group_name #=> String
resp.cache_security_group.ec2_security_groups[0].ec2_security_group_owner_id #=> String
resp.cache_security_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_security_group_name (required, String)

    A name for the cache security group. This value is stored as a lowercase string.

    Constraints: Must contain no more than 255 alphanumeric characters. Cannot be the word "Default".

    Example: mysecuritygroup

  • :description (required, String)

    A description for the cache security group.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 1905

def create_cache_security_group(params = {}, options = {})
  req = build_request(:create_cache_security_group, params)
  req.send_request(options)
end

#create_cache_subnet_group(params = {}) ⇒ Types::CreateCacheSubnetGroupResult

Creates a new cache subnet group.

Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (Amazon VPC).

Examples:

Example: CreateCacheSubnet


# Creates a new cache subnet group.

resp = client.create_cache_subnet_group({
  cache_subnet_group_description: "Sample subnet group", 
  cache_subnet_group_name: "my-sn-grp2", 
  subnet_ids: [
    "subnet-6f28c982", 
    "subnet-bcd382f3", 
    "subnet-845b3e7c0", 
  ], 
})

resp.to_h outputs the following:
{
  cache_subnet_group: {
    cache_subnet_group_description: "My subnet group.", 
    cache_subnet_group_name: "my-sn-grp", 
    subnets: [
      {
        subnet_availability_zone: {
          name: "us-east-1a", 
        }, 
        subnet_identifier: "subnet-6f28c982", 
      }, 
      {
        subnet_availability_zone: {
          name: "us-east-1c", 
        }, 
        subnet_identifier: "subnet-bcd382f3", 
      }, 
      {
        subnet_availability_zone: {
          name: "us-east-1b", 
        }, 
        subnet_identifier: "subnet-845b3e7c0", 
      }, 
    ], 
    vpc_id: "vpc-91280df6", 
  }, 
}

Request syntax with placeholder values


resp = client.create_cache_subnet_group({
  cache_subnet_group_name: "String", # required
  cache_subnet_group_description: "String", # required
  subnet_ids: ["String"], # required
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.cache_subnet_group.cache_subnet_group_name #=> String
resp.cache_subnet_group.cache_subnet_group_description #=> String
resp.cache_subnet_group.vpc_id #=> String
resp.cache_subnet_group.subnets #=> Array
resp.cache_subnet_group.subnets[0].subnet_identifier #=> String
resp.cache_subnet_group.subnets[0].subnet_availability_zone.name #=> String
resp.cache_subnet_group.subnets[0].subnet_outpost.subnet_outpost_arn #=> String
resp.cache_subnet_group.subnets[0].supported_network_types #=> Array
resp.cache_subnet_group.subnets[0].supported_network_types[0] #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.cache_subnet_group.arn #=> String
resp.cache_subnet_group.supported_network_types #=> Array
resp.cache_subnet_group.supported_network_types[0] #=> String, one of "ipv4", "ipv6", "dual_stack"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_subnet_group_name (required, String)

    A name for the cache subnet group. This value is stored as a lowercase string.

    Constraints: Must contain no more than 255 alphanumeric characters or hyphens.

    Example: mysubnetgroup

  • :cache_subnet_group_description (required, String)

    A description for the cache subnet group.

  • :subnet_ids (required, Array<String>)

    A list of VPC subnet IDs for the cache subnet group.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 2016

def create_cache_subnet_group(params = {}, options = {})
  req = build_request(:create_cache_subnet_group, params)
  req.send_request(options)
end

#create_global_replication_group(params = {}) ⇒ Types::CreateGlobalReplicationGroupResult

Global Datastore offers fully managed, fast, reliable and secure cross-region replication. Using Global Datastore with Valkey or Redis OSS, you can create cross-region read replica clusters for ElastiCache to enable low-latency reads and disaster recovery across regions. For more information, see Replication Across Regions Using Global Datastore.

  • The GlobalReplicationGroupIdSuffix is the name of the Global datastore.

  • The PrimaryReplicationGroupId represents the name of the primary cluster that accepts writes and will replicate updates to the secondary cluster.

Examples:

Request syntax with placeholder values


resp = client.create_global_replication_group({
  global_replication_group_id_suffix: "String", # required
  global_replication_group_description: "String",
  primary_replication_group_id: "String", # required
})

Response structure


resp.global_replication_group.global_replication_group_id #=> String
resp.global_replication_group.global_replication_group_description #=> String
resp.global_replication_group.status #=> String
resp.global_replication_group.cache_node_type #=> String
resp.global_replication_group.engine #=> String
resp.global_replication_group.engine_version #=> String
resp.global_replication_group.members #=> Array
resp.global_replication_group.members[0].replication_group_id #=> String
resp.global_replication_group.members[0].replication_group_region #=> String
resp.global_replication_group.members[0].role #=> String
resp.global_replication_group.members[0].automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.global_replication_group.members[0].status #=> String
resp.global_replication_group.cluster_enabled #=> Boolean
resp.global_replication_group.global_node_groups #=> Array
resp.global_replication_group.global_node_groups[0].global_node_group_id #=> String
resp.global_replication_group.global_node_groups[0].slots #=> String
resp.global_replication_group.auth_token_enabled #=> Boolean
resp.global_replication_group.transit_encryption_enabled #=> Boolean
resp.global_replication_group.at_rest_encryption_enabled #=> Boolean
resp.global_replication_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :global_replication_group_id_suffix (required, String)

    The suffix name of a Global datastore. Amazon ElastiCache automatically applies a prefix to the Global datastore ID when it is created. Each Amazon Region has its own prefix. For instance, a Global datastore ID created in the US-West-1 region will begin with "dsdfu" along with the suffix name you provide. The suffix, combined with the auto-generated prefix, guarantees uniqueness of the Global datastore name across multiple regions.

    For a full list of Amazon Regions and their respective Global datastore ID prefixes, see Using the Amazon CLI with Global datastores.

  • :global_replication_group_description (String)

    Provides details of the Global datastore.

  • :primary_replication_group_id (required, String)

    The name of the primary cluster that accepts writes and will replicate updates to the secondary cluster.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 2102

def create_global_replication_group(params = {}, options = {})
  req = build_request(:create_global_replication_group, params)
  req.send_request(options)
end
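
The request syntax above uses placeholders only; the following minimal sketch shows how the suffix and the primary replication group fit together. The identifiers are hypothetical, and the primary replication group is assumed to already exist and be available in the calling region.

# Minimal sketch (hypothetical identifiers): ElastiCache prepends a
# region-specific prefix to the suffix to form the Global datastore ID.
resp = client.create_global_replication_group({
  global_replication_group_id_suffix: "my-global-ds",
  global_replication_group_description: "Cross-region datastore for my-redis-rg",
  primary_replication_group_id: "my-redis-rg",
})

resp.global_replication_group.global_replication_group_id #=> String, region prefix + "my-global-ds"
resp.global_replication_group.members[0].role #=> String, e.g. "PRIMARY"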

#create_replication_group(params = {}) ⇒ Types::CreateReplicationGroupResult

Creates a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group.

This API can be used to create a standalone regional replication group or a secondary replication group associated with a Global datastore.

A Valkey or Redis OSS (cluster mode disabled) replication group is a collection of nodes, where one of the nodes is a read/write primary and the others are read-only replicas. Writes to the primary are asynchronously propagated to the replicas.

A Valkey or Redis OSS cluster-mode enabled cluster comprises 1 to 90 shards (API/CLI: node groups). Each shard has a primary node and up to 5 read-only replica nodes. The configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed.

The node or shard limit can be increased to a maximum of 500 per cluster if the Valkey or Redis OSS engine version is 5.0.6 or higher. For example, you can choose to configure a 500 node cluster that ranges between 83 shards (one primary and 5 replicas per shard, i.e. 83 x 6 = 498 nodes) and 500 shards (single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include the subnets in the subnet group having too small a CIDR range, or the subnets being shared and heavily used by other clusters. For more information, see Creating a Subnet Group. For versions below 5.0.6, the limit is 250 per cluster.

To request a limit increase, see Amazon Service Limits and choose the limit type Nodes per cluster per instance type.

When a Valkey or Redis OSS (cluster mode disabled) replication group has been successfully created, you can add one or more read replicas to it, up to a total of 5 read replicas. If you need to increase or decrease the number of node groups (console: shards), you can use scaling. For more information, see Scaling self-designed clusters in the ElastiCache User Guide.

This operation is valid for Valkey and Redis OSS only.

Examples:

Example: CreateCacheReplicationGroup


# Creates a Redis replication group with 3 nodes.

resp = client.create_replication_group({
  automatic_failover_enabled: true, 
  cache_node_type: "cache.m3.medium", 
  engine: "redis", 
  engine_version: "2.8.24", 
  num_cache_clusters: 3, 
  replication_group_description: "A Redis replication group.", 
  replication_group_id: "my-redis-rg", 
  snapshot_retention_limit: 30, 
})

resp.to_h outputs the following:
{
  replication_group: {
    automatic_failover: "enabling", 
    description: "A Redis replication group.", 
    member_clusters: [
      "my-redis-rg-001", 
      "my-redis-rg-002", 
      "my-redis-rg-003", 
    ], 
    pending_modified_values: {
    }, 
    replication_group_id: "my-redis-rg", 
    snapshotting_cluster_id: "my-redis-rg-002", 
    status: "creating", 
  }, 
}

Example: CreateReplicationGroup


# Creates a Redis (cluster mode enabled) replication group with two shards. One shard has one read replica node and the
# other shard has two read replicas.

resp = client.create_replication_group({
  auto_minor_version_upgrade: true, 
  cache_node_type: "cache.m3.medium", 
  cache_parameter_group_name: "default.redis3.2.cluster.on", 
  engine: "redis", 
  engine_version: "3.2.4", 
  node_group_configuration: [
    {
      primary_availability_zone: "us-east-1c", 
      replica_availability_zones: [
        "us-east-1b", 
      ], 
      replica_count: 1, 
      slots: "0-8999", 
    }, 
    {
      primary_availability_zone: "us-east-1a", 
      replica_availability_zones: [
        "us-east-1a", 
        "us-east-1c", 
      ], 
      replica_count: 2, 
      slots: "9000-16383", 
    }, 
  ], 
  num_node_groups: 2, 
  replication_group_description: "A multi-sharded replication group", 
  replication_group_id: "clustered-redis-rg", 
  snapshot_retention_limit: 8, 
})

resp.to_h outputs the following:
{
  replication_group: {
    automatic_failover: "enabled", 
    description: "Sharded replication group", 
    member_clusters: [
      "rc-rg3-0001-001", 
      "rc-rg3-0001-002", 
      "rc-rg3-0002-001", 
      "rc-rg3-0002-002", 
      "rc-rg3-0002-003", 
    ], 
    pending_modified_values: {
    }, 
    replication_group_id: "clustered-redis-rg", 
    snapshot_retention_limit: 8, 
    snapshot_window: "05:30-06:30", 
    status: "creating", 
  }, 
}

Request syntax with placeholder values


resp = client.create_replication_group({
  replication_group_id: "String", # required
  replication_group_description: "String", # required
  global_replication_group_id: "String",
  primary_cluster_id: "String",
  automatic_failover_enabled: false,
  multi_az_enabled: false,
  num_cache_clusters: 1,
  preferred_cache_cluster_a_zs: ["String"],
  num_node_groups: 1,
  replicas_per_node_group: 1,
  node_group_configuration: [
    {
      node_group_id: "AllowedNodeGroupId",
      slots: "String",
      replica_count: 1,
      primary_availability_zone: "String",
      replica_availability_zones: ["String"],
      primary_outpost_arn: "String",
      replica_outpost_arns: ["String"],
    },
  ],
  cache_node_type: "String",
  engine: "String",
  engine_version: "String",
  cache_parameter_group_name: "String",
  cache_subnet_group_name: "String",
  cache_security_group_names: ["String"],
  security_group_ids: ["String"],
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
  snapshot_arns: ["String"],
  snapshot_name: "String",
  preferred_maintenance_window: "String",
  port: 1,
  notification_topic_arn: "String",
  auto_minor_version_upgrade: false,
  snapshot_retention_limit: 1,
  snapshot_window: "String",
  auth_token: "String",
  transit_encryption_enabled: false,
  at_rest_encryption_enabled: false,
  kms_key_id: "String",
  user_group_ids: ["UserGroupId"],
  log_delivery_configurations: [
    {
      log_type: "slow-log", # accepts slow-log, engine-log
      destination_type: "cloudwatch-logs", # accepts cloudwatch-logs, kinesis-firehose
      destination_details: {
        cloud_watch_logs_details: {
          log_group: "String",
        },
        kinesis_firehose_details: {
          delivery_stream: "String",
        },
      },
      log_format: "text", # accepts text, json
      enabled: false,
    },
  ],
  data_tiering_enabled: false,
  network_type: "ipv4", # accepts ipv4, ipv6, dual_stack
  ip_discovery: "ipv4", # accepts ipv4, ipv6
  transit_encryption_mode: "preferred", # accepts preferred, required
  cluster_mode: "enabled", # accepts enabled, disabled, compatible
  serverless_cache_snapshot_name: "String",
})

Response structure


resp.replication_group.replication_group_id #=> String
resp.replication_group.description #=> String
resp.replication_group.global_replication_group_info.global_replication_group_id #=> String
resp.replication_group.global_replication_group_info.global_replication_group_member_role #=> String
resp.replication_group.status #=> String
resp.replication_group.pending_modified_values.primary_cluster_id #=> String
resp.replication_group.pending_modified_values.automatic_failover_status #=> String, one of "enabled", "disabled"
resp.replication_group.pending_modified_values.resharding.slot_migration.progress_percentage #=> Float
resp.replication_group.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add[0] #=> String
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove[0] #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations #=> Array
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.replication_group.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.pending_modified_values.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.member_clusters #=> Array
resp.replication_group.member_clusters[0] #=> String
resp.replication_group.node_groups #=> Array
resp.replication_group.node_groups[0].node_group_id #=> String
resp.replication_group.node_groups[0].status #=> String
resp.replication_group.node_groups[0].primary_endpoint.address #=> String
resp.replication_group.node_groups[0].primary_endpoint.port #=> Integer
resp.replication_group.node_groups[0].reader_endpoint.address #=> String
resp.replication_group.node_groups[0].reader_endpoint.port #=> Integer
resp.replication_group.node_groups[0].slots #=> String
resp.replication_group.node_groups[0].node_group_members #=> Array
resp.replication_group.node_groups[0].node_group_members[0].cache_cluster_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].cache_node_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.address #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.port #=> Integer
resp.replication_group.node_groups[0].node_group_members[0].preferred_availability_zone #=> String
resp.replication_group.node_groups[0].node_group_members[0].preferred_outpost_arn #=> String
resp.replication_group.node_groups[0].node_group_members[0].current_role #=> String
resp.replication_group.snapshotting_cluster_id #=> String
resp.replication_group.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.replication_group.multi_az #=> String, one of "enabled", "disabled"
resp.replication_group.configuration_endpoint.address #=> String
resp.replication_group.configuration_endpoint.port #=> Integer
resp.replication_group.snapshot_retention_limit #=> Integer
resp.replication_group.snapshot_window #=> String
resp.replication_group.cluster_enabled #=> Boolean
resp.replication_group.cache_node_type #=> String
resp.replication_group.auth_token_enabled #=> Boolean
resp.replication_group.auth_token_last_modified_date #=> Time
resp.replication_group.transit_encryption_enabled #=> Boolean
resp.replication_group.at_rest_encryption_enabled #=> Boolean
resp.replication_group.member_clusters_outpost_arns #=> Array
resp.replication_group.member_clusters_outpost_arns[0] #=> String
resp.replication_group.kms_key_id #=> String
resp.replication_group.arn #=> String
resp.replication_group.user_group_ids #=> Array
resp.replication_group.user_group_ids[0] #=> String
resp.replication_group.log_delivery_configurations #=> Array
resp.replication_group.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.replication_group.log_delivery_configurations[0].message #=> String
resp.replication_group.replication_group_create_time #=> Time
resp.replication_group.data_tiering #=> String, one of "enabled", "disabled"
resp.replication_group.auto_minor_version_upgrade #=> Boolean
resp.replication_group.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.replication_group.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.replication_group.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.engine #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_id (required, String)

    The replication group identifier. This parameter is stored as a lowercase string.

    Constraints:

    • A name must contain from 1 to 40 alphanumeric characters or hyphens.

    • The first character must be a letter.

    • A name cannot end with a hyphen or contain two consecutive hyphens.

  • :replication_group_description (required, String)

    A user-created description for the replication group.

  • :global_replication_group_id (String)

    The name of the Global datastore

  • :primary_cluster_id (String)

    The identifier of the cluster that serves as the primary for this replication group. This cluster must already exist and have a status of available.

    This parameter is not required if NumCacheClusters, NumNodeGroups, or ReplicasPerNodeGroup is specified.

  • :automatic_failover_enabled (Boolean)

    Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails.

    AutomaticFailoverEnabled must be enabled for Valkey or Redis OSS (cluster mode enabled) replication groups.

    Default: false

  • :multi_az_enabled (Boolean)

    A flag indicating if you have Multi-AZ enabled to enhance fault tolerance. For more information, see Minimizing Downtime: Multi-AZ.

  • :num_cache_clusters (Integer)

    The number of clusters this replication group initially has.

    This parameter is not used if there is more than one node group (shard). You should use ReplicasPerNodeGroup instead.

    If AutomaticFailoverEnabled is true, the value of this parameter must be at least 2. If AutomaticFailoverEnabled is false you can omit this parameter (it will default to 1), or you can explicitly set it to a value between 2 and 6.

    The maximum permitted value for NumCacheClusters is 6 (1 primary plus 5 replicas).

  • :preferred_cache_cluster_a_zs (Array<String>)

    A list of EC2 Availability Zones in which the replication group's clusters are created. The order of the Availability Zones in the list is the order in which clusters are allocated. The primary cluster is created in the first AZ in the list.

    This parameter is not used if there is more than one node group (shard). You should use NodeGroupConfiguration instead.

    If you are creating your replication group in an Amazon VPC (recommended), you can only locate clusters in Availability Zones associated with the subnets in the selected subnet group.

    The number of Availability Zones listed must equal the value of NumCacheClusters.

    Default: system chosen Availability Zones.

  • :num_node_groups (Integer)

    An optional parameter that specifies the number of node groups (shards) for this Valkey or Redis OSS (cluster mode enabled) replication group. For Valkey or Redis OSS (cluster mode disabled) either omit this parameter or set it to 1.

    Default: 1

  • :replicas_per_node_group (Integer)

    An optional parameter that specifies the number of replica nodes in each node group (shard). Valid values are 0 to 5.

  • :node_group_configuration (Array<Types::NodeGroupConfiguration>)

    A list of node group (shard) configuration options. Each node group (shard) configuration has the following members: PrimaryAvailabilityZone, ReplicaAvailabilityZones, ReplicaCount, and Slots.

    If you're creating a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group, you can use this parameter to individually configure each node group (shard), or you can omit this parameter. However, it is required when seeding a Valkey or Redis OSS (cluster mode enabled) cluster from an S3 RDB file. You must configure each node group (shard) using this parameter because you must specify the slots for each node group.

  • :cache_node_type (String)

    The compute and memory capacity of the nodes in the node group (shard).

    The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

    • General purpose:

      • Current generation:

        M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge

        For region availability, see Supported Node Types

        M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge

        M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge

        M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge

        T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium

        T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium

        T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        T1 node types: cache.t1.micro

        M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge

        M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge

    • Compute optimized:

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        C1 node types: cache.c1.xlarge

    • Memory optimized:

      • Current generation:

        R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge

        For region availability, see Supported Node Types

        R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge

        R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge

        R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge

      • Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)

        M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge

        R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge

    Additional node type info

    • All current generation instance types are created in Amazon VPC by default.

    • Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances.

    • Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances.

    • The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later.

  • :engine (String)

    The name of the cache engine to be used for the clusters in this replication group. The value must be set to Redis.

  • :engine_version (String)

    The version number of the cache engine to be used for the clusters in this replication group. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

    Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version) in the ElastiCache User Guide, but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster or replication group and create it anew with the earlier engine version.

  • :cache_parameter_group_name (String)

    The name of the parameter group to associate with this replication group. If this argument is omitted, the default cache parameter group for the specified engine is used.

    If you are running Valkey or Redis OSS version 3.2.4 or later, have only one node group (shard), and want to use a default parameter group, we recommend that you specify the parameter group by name.

    • To create a Valkey or Redis OSS (cluster mode disabled) replication group, use CacheParameterGroupName=default.redis3.2.

    • To create a Valkey or Redis OSS (cluster mode enabled) replication group, use CacheParameterGroupName=default.redis3.2.cluster.on.

  • :cache_subnet_group_name (String)

    The name of the cache subnet group to be used for the replication group.

    If you're going to launch your cluster in an Amazon VPC, you need to create a subnet group before you start creating a cluster. For more information, see Subnets and Subnet Groups.

  • :cache_security_group_names (Array<String>)

    A list of cache security group names to associate with this replication group.

  • :security_group_ids (Array<String>)

    One or more Amazon VPC security groups associated with this replication group.

    Use this parameter only when you are creating a replication group in an Amazon Virtual Private Cloud (Amazon VPC).

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. Tags are comma-separated key,value pairs (e.g. Key=myKey, Value=myKeyValue). You can include multiple tags as shown following: Key=myKey, Value=myKeyValue Key=mySecondKey, Value=mySecondKeyValue. Tags on replication groups will be replicated to all nodes.

  • :snapshot_arns (Array<String>)

    A list of Amazon Resource Names (ARN) that uniquely identify the Valkey or Redis OSS RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. The new replication group will have the number of node groups (console: shards) specified by the parameter NumNodeGroups or the number of node groups configured by NodeGroupConfiguration regardless of the number of ARNs specified here.

    Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb

  • :snapshot_name (String)

    The name of a snapshot from which to restore data into the new replication group. The snapshot status changes to restoring while the new replication group is being created.

  • :preferred_maintenance_window (String)

    Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

    Valid values for ddd are:

    • sun

    • mon

    • tue

    • wed

    • thu

    • fri

    • sat

    Example: sun:23:00-mon:01:30

  • :port (Integer)

    The port number on which each member of the replication group accepts connections.

  • :notification_topic_arn (String)

    The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

    The Amazon SNS topic owner must be the same as the cluster owner.

  • :auto_minor_version_upgrade (Boolean)

    If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to true to opt in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions.

  • :snapshot_retention_limit (Integer)

    The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

    Default: 0 (i.e., automatic backups are disabled for this cluster).

  • :snapshot_window (String)

    The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

    Example: 05:00-09:00

    If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

  • :auth_token (String)

    Reserved parameter. The password used to access a password protected server.

    AuthToken can be specified only on replication groups where TransitEncryptionEnabled is true.

    For HIPAA compliance, you must specify TransitEncryptionEnabled as true, an AuthToken, and a CacheSubnetGroup.

    Password constraints:

    • Must be only printable ASCII characters.

    • Must be at least 16 characters and no more than 128 characters in length.

    • The only permitted printable special characters are !, &, #, $, ^, <, >, and -. Other printable special characters cannot be used in the AUTH token.

    For more information, see AUTH password at http://redis.io/commands/AUTH.

  • :transit_encryption_enabled (Boolean)

    A flag that enables in-transit encryption when set to true.

    This parameter is valid only if the Engine parameter is redis, the EngineVersion parameter is 3.2.6, 4.x or later, and the cluster is being created in an Amazon VPC.

    If you enable in-transit encryption, you must also specify a value for CacheSubnetGroup.

    Required: Only available when creating a replication group in an Amazon VPC using Redis OSS version 3.2.6, 4.x or later.

    Default: false

    For HIPAA compliance, you must specify TransitEncryptionEnabled as true, an AuthToken, and a CacheSubnetGroup.

  • :at_rest_encryption_enabled (Boolean)

    A flag that enables encryption at rest when set to true.

    You cannot modify the value of AtRestEncryptionEnabled after the replication group is created. To enable encryption at rest on a replication group you must set AtRestEncryptionEnabled to true when you create the replication group.

    Required: Only available when creating a replication group in an Amazon VPC using Redis OSS version 3.2.6, 4.x or later.

    Default: false

  • :kms_key_id (String)

    The ID of the KMS key used to encrypt the disk in the cluster.

  • :user_group_ids (Array<String>)

    The user group to associate with the replication group.

  • :log_delivery_configurations (Array<Types::LogDeliveryConfigurationRequest>)

    Specifies the destination, format and type of the logs.

  • :data_tiering_enabled (Boolean)

    Enables data tiering. Data tiering is only supported for replication groups using the r6gd node type. This parameter must be set to true when using r6gd nodes. For more information, see Data tiering.

  • :network_type (String)

    Must be either ipv4 | ipv6 | dual_stack. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system.

  • :ip_discovery (String)

    The network type you choose when creating a replication group, either ipv4 | ipv6. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system.

  • :transit_encryption_mode (String)

    A setting that allows you to migrate your clients to use in-transit encryption, with no downtime.

    When setting TransitEncryptionEnabled to true, you can set TransitEncryptionMode to preferred in the same request to allow both encrypted and unencrypted connections at the same time. Once you have migrated all your Valkey or Redis OSS clients to use encrypted connections, you can modify the value to required to allow encrypted connections only.

    Setting TransitEncryptionMode to required is a two-step process: first set TransitEncryptionMode to preferred, then, after all clients have migrated, set it to required (a minimal sketch of this flow follows this operation).

    This process will not trigger the replacement of the replication group.

  • :cluster_mode (String)

    Enabled or Disabled. To modify cluster mode from Disabled to Enabled, you must first set the cluster mode to Compatible. Compatible mode allows your Valkey or Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. After you migrate all Valkey or Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to Enabled.

  • :serverless_cache_snapshot_name (String)

    The name of the snapshot used to create a replication group. Available for Valkey and Redis OSS only.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 2897

def create_replication_group(params = {}, options = {})
  req = build_request(:create_replication_group, params)
  req.send_request(options)
end
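
The TransitEncryptionMode notes above describe a two-step migration to encrypted-only connections. The following is a minimal sketch of that flow with hypothetical identifiers and a placeholder auth token; it assumes #modify_replication_group (documented elsewhere in this class) accepts the same :transit_encryption_mode values.

# Step 1 (sketch, placeholder values): create the group with in-transit
# encryption enabled in "preferred" mode, so encrypted and unencrypted
# clients can connect while they are migrated.
resp = client.create_replication_group({
  replication_group_id: "example-rg",                    # hypothetical
  replication_group_description: "TLS migration example",
  engine: "redis",
  cache_node_type: "cache.m6g.large",
  num_cache_clusters: 2,
  automatic_failover_enabled: true,
  cache_subnet_group_name: "example-subnet-group",        # hypothetical
  transit_encryption_enabled: true,
  transit_encryption_mode: "preferred",
  auth_token: "example-auth-token-16-chars-min",          # placeholder; see constraints above
})

# Step 2 (after all clients use encrypted connections): tighten the mode.
# Assumes :transit_encryption_mode is accepted by #modify_replication_group.
client.modify_replication_group({
  replication_group_id: "example-rg",
  transit_encryption_mode: "required",
  apply_immediately: true,
})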

#create_serverless_cache(params = {}) ⇒ Types::CreateServerlessCacheResponse

Creates a serverless cache.

Examples:

Request syntax with placeholder values


resp = client.create_serverless_cache({
  serverless_cache_name: "String", # required
  description: "String",
  engine: "String", # required
  major_engine_version: "String",
  cache_usage_limits: {
    data_storage: {
      maximum: 1,
      minimum: 1,
      unit: "GB", # required, accepts GB
    },
    ecpu_per_second: {
      maximum: 1,
      minimum: 1,
    },
  },
  kms_key_id: "String",
  security_group_ids: ["String"],
  snapshot_arns_to_restore: ["String"],
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
  user_group_id: "String",
  subnet_ids: ["String"],
  snapshot_retention_limit: 1,
  daily_snapshot_time: "String",
})

Response structure


resp.serverless_cache.serverless_cache_name #=> String
resp.serverless_cache.description #=> String
resp.serverless_cache.create_time #=> Time
resp.serverless_cache.status #=> String
resp.serverless_cache.engine #=> String
resp.serverless_cache.major_engine_version #=> String
resp.serverless_cache.full_engine_version #=> String
resp.serverless_cache.cache_usage_limits.data_storage.maximum #=> Integer
resp.serverless_cache.cache_usage_limits.data_storage.minimum #=> Integer
resp.serverless_cache.cache_usage_limits.data_storage.unit #=> String, one of "GB"
resp.serverless_cache.cache_usage_limits.ecpu_per_second.maximum #=> Integer
resp.serverless_cache.cache_usage_limits.ecpu_per_second.minimum #=> Integer
resp.serverless_cache.kms_key_id #=> String
resp.serverless_cache.security_group_ids #=> Array
resp.serverless_cache.security_group_ids[0] #=> String
resp.serverless_cache.endpoint.address #=> String
resp.serverless_cache.endpoint.port #=> Integer
resp.serverless_cache.reader_endpoint.address #=> String
resp.serverless_cache.reader_endpoint.port #=> Integer
resp.serverless_cache.arn #=> String
resp.serverless_cache.user_group_id #=> String
resp.serverless_cache.subnet_ids #=> Array
resp.serverless_cache.subnet_ids[0] #=> String
resp.serverless_cache.snapshot_retention_limit #=> Integer
resp.serverless_cache.daily_snapshot_time #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :serverless_cache_name (required, String)

    User-provided identifier for the serverless cache. This parameter is stored as a lowercase string.

  • :description (String)

    User-provided description for the serverless cache. The default is NULL; if no description is provided, an empty string is returned. The maximum length is 255 characters.

  • :engine (required, String)

    The name of the cache engine to be used for creating the serverless cache.

  • :major_engine_version (String)

    The version of the cache engine that will be used to create the serverless cache.

  • :cache_usage_limits (Types::CacheUsageLimits)

    Sets the cache usage limits for storage and ElastiCache Processing Units for the cache.

  • :kms_key_id (String)

    ARN of the customer managed key for encrypting the data at rest. If no KMS key is provided, a default service key is used.

  • :security_group_ids (Array<String>)

    A list of one or more VPC security groups to be associated with the serverless cache. The security group authorizes traffic access for the VPC endpoint (PrivateLink). If none is provided, the VPC's default security group associated with the cluster VPC endpoint is used.

  • :snapshot_arns_to_restore (Array<String>)

    The ARN(s) of the snapshot that the new serverless cache will be created from. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :tags (Array<Types::Tag>)

    The list of tag (key, value) pairs to be added to the serverless cache resource. Default is NULL.

  • :user_group_id (String)

    The identifier of the UserGroup to be associated with the serverless cache. Available for Valkey and Redis OSS only. Default is NULL.

  • :subnet_ids (Array<String>)

    A list of the identifiers of the subnets where the VPC endpoint for the serverless cache will be deployed. All the subnetIds must belong to the same VPC.

  • :snapshot_retention_limit (Integer)

    The number of snapshots that will be retained for the serverless cache that is being created. As new snapshots beyond this limit are added, the oldest snapshots will be deleted on a rolling basis. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :daily_snapshot_time (String)

    The daily time at which snapshots are created from the new serverless cache. By default this is 0, meaning no snapshots are created on an automatic daily basis. Available for Valkey, Redis OSS and Serverless Memcached only.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3035

def create_serverless_cache(params = {}, options = {})
  req = build_request(:create_serverless_cache, params)
  req.send_request(options)
end
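
The placeholder syntax above can be made concrete with a minimal, hypothetical sketch; the cache name, engine string, usage limits, and tag are illustrative values only, not recommendations.

# Minimal sketch (hypothetical values): a serverless cache with usage limits.
resp = client.create_serverless_cache({
  serverless_cache_name: "example-serverless-cache",   # hypothetical
  engine: "redis",
  cache_usage_limits: {
    data_storage: { maximum: 10, unit: "GB" },
    ecpu_per_second: { maximum: 5000 },
  },
  tags: [{ key: "project", value: "example" }],
})

resp.serverless_cache.status           #=> String, e.g. "creating"
resp.serverless_cache.endpoint.address #=> String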

#create_serverless_cache_snapshot(params = {}) ⇒ Types::CreateServerlessCacheSnapshotResponse

This API creates a copy of an entire ServerlessCache at a specific moment in time. Available for Valkey, Redis OSS and Serverless Memcached only.

Examples:

Request syntax with placeholder values


resp = client.create_serverless_cache_snapshot({
  serverless_cache_snapshot_name: "String", # required
  serverless_cache_name: "String", # required
  kms_key_id: "String",
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.serverless_cache_snapshot.serverless_cache_snapshot_name #=> String
resp.serverless_cache_snapshot.arn #=> String
resp.serverless_cache_snapshot.kms_key_id #=> String
resp.serverless_cache_snapshot.snapshot_type #=> String
resp.serverless_cache_snapshot.status #=> String
resp.serverless_cache_snapshot.create_time #=> Time
resp.serverless_cache_snapshot.expiry_time #=> Time
resp.serverless_cache_snapshot.bytes_used_for_cache #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.serverless_cache_name #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.engine #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.major_engine_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :serverless_cache_snapshot_name (required, String)

    The name for the snapshot being created. Must be unique for the customer account. Available for Valkey, Redis OSS and Serverless Memcached only. Must be between 1 and 255 characters.

  • :serverless_cache_name (required, String)

    The name of an existing serverless cache. The snapshot is created from this cache. Available for Valkey, Redis OSS and Serverless Memcached only.

  • :kms_key_id (String)

    The ID of the KMS key used to encrypt the snapshot. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL

  • :tags (Array<Types::Tag>)

    A list of tags to be added to the snapshot resource. A tag is a key-value pair. Available for Valkey, Redis OSS and Serverless Memcached only.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3099

def create_serverless_cache_snapshot(params = {}, options = {})
  req = build_request(:create_serverless_cache_snapshot, params)
  req.send_request(options)
end
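
A minimal sketch with hypothetical names, mirroring the placeholder syntax above; the snapshot and cache names are illustrative, and the cache is assumed to already exist.

# Minimal sketch (hypothetical names): snapshot an existing serverless cache.
resp = client.create_serverless_cache_snapshot({
  serverless_cache_snapshot_name: "example-snapshot",          # hypothetical
  serverless_cache_name: "example-serverless-cache",           # hypothetical
})

resp.serverless_cache_snapshot.status        #=> String, e.g. "creating"
resp.serverless_cache_snapshot.snapshot_type #=> String, e.g. "manual"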

#create_snapshot(params = {}) ⇒ Types::CreateSnapshotResult

Creates a copy of an entire cluster or replication group at a specific moment in time.

This operation is valid for Valkey or Redis OSS only.

Examples:

Example: CreateSnapshot - NonClustered Redis, no read-replicas


# Creates a snapshot of a non-clustered Redis cluster that has only one node.

resp = client.create_snapshot({
  cache_cluster_id: "onenoderedis", 
  snapshot_name: "snapshot-1", 
})

resp.to_h outputs the following:
{
  snapshot: {
    auto_minor_version_upgrade: true, 
    cache_cluster_create_time: Time.parse("2017-02-03T15:43:36.278Z"), 
    cache_cluster_id: "onenoderedis", 
    cache_node_type: "cache.m3.medium", 
    cache_parameter_group_name: "default.redis3.2", 
    cache_subnet_group_name: "default", 
    engine: "redis", 
    engine_version: "3.2.4", 
    node_snapshots: [
      {
        cache_node_create_time: Time.parse("2017-02-03T15:43:36.278Z"), 
        cache_node_id: "0001", 
        cache_size: "", 
      }, 
    ], 
    num_cache_nodes: 1, 
    port: 6379, 
    preferred_availability_zone: "us-west-2c", 
    preferred_maintenance_window: "sat:08:00-sat:09:00", 
    snapshot_name: "snapshot-1", 
    snapshot_retention_limit: 1, 
    snapshot_source: "manual", 
    snapshot_status: "creating", 
    snapshot_window: "00:00-01:00", 
    vpc_id: "vpc-73c3cd17", 
  }, 
}

Example: CreateSnapshot - NonClustered Redis, 2 read-replicas


# Creates a snapshot of a non-clustered Redis cluster that has three nodes: a primary and two read-replicas.
# CacheClusterId must be a specific node in the cluster.

resp = client.create_snapshot({
  cache_cluster_id: "threenoderedis-001", 
  snapshot_name: "snapshot-2", 
})

resp.to_h outputs the following:
{
  snapshot: {
    auto_minor_version_upgrade: true, 
    cache_cluster_create_time: Time.parse("2017-02-03T15:43:36.278Z"), 
    cache_cluster_id: "threenoderedis-001", 
    cache_node_type: "cache.m3.medium", 
    cache_parameter_group_name: "default.redis3.2", 
    cache_subnet_group_name: "default", 
    engine: "redis", 
    engine_version: "3.2.4", 
    node_snapshots: [
      {
        cache_node_create_time: Time.parse("2017-02-03T15:43:36.278Z"), 
        cache_node_id: "0001", 
        cache_size: "", 
      }, 
    ], 
    num_cache_nodes: 1, 
    port: 6379, 
    preferred_availability_zone: "us-west-2c", 
    preferred_maintenance_window: "sat:08:00-sat:09:00", 
    snapshot_name: "snapshot-2", 
    snapshot_retention_limit: 1, 
    snapshot_source: "manual", 
    snapshot_status: "creating", 
    snapshot_window: "00:00-01:00", 
    vpc_id: "vpc-73c3cd17", 
  }, 
}

Example: CreateSnapshot-clustered Redis


# Creates a snapshot of a clustered Redis cluster that has 2 shards, each with a primary and 4 read-replicas.

resp = client.create_snapshot({
  replication_group_id: "clusteredredis", 
  snapshot_name: "snapshot-2x5", 
})

resp.to_h outputs the following:
{
  snapshot: {
    auto_minor_version_upgrade: true, 
    automatic_failover: "enabled", 
    cache_node_type: "cache.m3.medium", 
    cache_parameter_group_name: "default.redis3.2.cluster.on", 
    cache_subnet_group_name: "default", 
    engine: "redis", 
    engine_version: "3.2.4", 
    node_snapshots: [
      {
        cache_size: "", 
        node_group_id: "0001", 
      }, 
      {
        cache_size: "", 
        node_group_id: "0002", 
      }, 
    ], 
    num_node_groups: 2, 
    port: 6379, 
    preferred_maintenance_window: "mon:09:30-mon:10:30", 
    replication_group_description: "Redis cluster with 2 shards.", 
    replication_group_id: "clusteredredis", 
    snapshot_name: "snapshot-2x5", 
    snapshot_retention_limit: 1, 
    snapshot_source: "manual", 
    snapshot_status: "creating", 
    snapshot_window: "12:00-13:00", 
    vpc_id: "vpc-73c3cd17", 
  }, 
}

Request syntax with placeholder values


resp = client.create_snapshot({
  replication_group_id: "String",
  cache_cluster_id: "String",
  snapshot_name: "String", # required
  kms_key_id: "String",
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})

Response structure


resp.snapshot.snapshot_name #=> String
resp.snapshot.replication_group_id #=> String
resp.snapshot.replication_group_description #=> String
resp.snapshot.cache_cluster_id #=> String
resp.snapshot.snapshot_status #=> String
resp.snapshot.snapshot_source #=> String
resp.snapshot.cache_node_type #=> String
resp.snapshot.engine #=> String
resp.snapshot.engine_version #=> String
resp.snapshot.num_cache_nodes #=> Integer
resp.snapshot.preferred_availability_zone #=> String
resp.snapshot.preferred_outpost_arn #=> String
resp.snapshot.cache_cluster_create_time #=> Time
resp.snapshot.preferred_maintenance_window #=> String
resp.snapshot.topic_arn #=> String
resp.snapshot.port #=> Integer
resp.snapshot.cache_parameter_group_name #=> String
resp.snapshot.cache_subnet_group_name #=> String
resp.snapshot.vpc_id #=> String
resp.snapshot.auto_minor_version_upgrade #=> Boolean
resp.snapshot.snapshot_retention_limit #=> Integer
resp.snapshot.snapshot_window #=> String
resp.snapshot.num_node_groups #=> Integer
resp.snapshot.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.snapshot.node_snapshots #=> Array
resp.snapshot.node_snapshots[0].cache_cluster_id #=> String
resp.snapshot.node_snapshots[0].node_group_id #=> String
resp.snapshot.node_snapshots[0].cache_node_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.node_group_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.slots #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_count #=> Integer
resp.snapshot.node_snapshots[0].node_group_configuration.primary_availability_zone #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones[0] #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.primary_outpost_arn #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns[0] #=> String
resp.snapshot.node_snapshots[0].cache_size #=> String
resp.snapshot.node_snapshots[0].cache_node_create_time #=> Time
resp.snapshot.node_snapshots[0].snapshot_create_time #=> Time
resp.snapshot.kms_key_id #=> String
resp.snapshot.arn #=> String
resp.snapshot.data_tiering #=> String, one of "enabled", "disabled"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_id (String)

    The identifier of an existing replication group. The snapshot is created from this replication group.

  • :cache_cluster_id (String)

    The identifier of an existing cluster. The snapshot is created from this cluster.

  • :snapshot_name (required, String)

    A name for the snapshot being created.

  • :kms_key_id (String)

    The ID of the KMS key used to encrypt the snapshot.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3324

def create_snapshot(params = {}, options = {})
  req = build_request(:create_snapshot, params)
  req.send_request(options)
end

#create_user(params = {}) ⇒ Types::User

For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Creates a user. For more information, see Using Role Based Access Control (RBAC).

Examples:

Request syntax with placeholder values


resp = client.create_user({
  user_id: "UserId", # required
  user_name: "UserName", # required
  engine: "EngineType", # required
  passwords: ["String"],
  access_string: "AccessString", # required
  no_password_required: false,
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
  authentication_mode: {
    type: "password", # accepts password, no-password-required, iam
    passwords: ["String"],
  },
})
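
As an illustrative sketch (the user ID, user name, and access string are hypothetical), a call that creates a read-only user authenticated via IAM might look like:

# Hypothetical example: a read-only user that authenticates via IAM instead of passwords.
resp = client.create_user({
  user_id: "app-readonly",
  user_name: "app-readonly",
  engine: "redis",
  access_string: "on ~* +@read",
  authentication_mode: { type: "iam" },
})
resp.user_id #=> "app-readonly"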

Response structure


resp.user_id #=> String
resp.user_name #=> String
resp.status #=> String
resp.engine #=> String
resp.minimum_engine_version #=> String
resp.access_string #=> String
resp.user_group_ids #=> Array
resp.user_group_ids[0] #=> String
resp.authentication.type #=> String, one of "password", "no-password", "iam"
resp.authentication.password_count #=> Integer
resp.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :user_id (required, String)

    The ID of the user.

  • :user_name (required, String)

    The username of the user.

  • :engine (required, String)

    The current supported value is Redis.

  • :passwords (Array<String>)

    Passwords used for this user. You can create up to two passwords for each user.

  • :access_string (required, String)

    Access permissions string used for this user.

  • :no_password_required (Boolean)

    Indicates a password is not required for this user.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted.

  • :authentication_mode (Types::AuthenticationMode)

    Specifies how to authenticate the user.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3415

def create_user(params = {}, options = {})
  req = build_request(:create_user, params)
  req.send_request(options)
end

#create_user_group(params = {}) ⇒ Types::UserGroup

For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Creates a user group. For more information, see Using Role Based Access Control (RBAC).

Examples:

Request syntax with placeholder values


resp = client.create_user_group({
  user_group_id: "String", # required
  engine: "EngineType", # required
  user_ids: ["UserId"],
  tags: [
    {
      key: "String",
      value: "String",
    },
  ],
})
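
A concrete call might look like the following sketch; the group and user IDs are hypothetical.

# Hypothetical example: group the default user with an application user.
resp = client.create_user_group({
  user_group_id: "app-users",
  engine: "redis",
  user_ids: ["default", "app-readonly"],
})
resp.user_ids #=> ["default", "app-readonly"]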

Response structure


resp.user_group_id #=> String
resp.status #=> String
resp.engine #=> String
resp.user_ids #=> Array
resp.user_ids[0] #=> String
resp.minimum_engine_version #=> String
resp.pending_changes.user_ids_to_remove #=> Array
resp.pending_changes.user_ids_to_remove[0] #=> String
resp.pending_changes.user_ids_to_add #=> Array
resp.pending_changes.user_ids_to_add[0] #=> String
resp.replication_groups #=> Array
resp.replication_groups[0] #=> String
resp.serverless_caches #=> Array
resp.serverless_caches[0] #=> String
resp.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :user_group_id (required, String)

    The ID of the user group.

  • :engine (required, String)

    The current supported value is Redis.

  • :user_ids (Array<String>)

    The list of user IDs that belong to the user group.

  • :tags (Array<Types::Tag>)

    A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted. Available for Valkey and Redis OSS only.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3490

def create_user_group(params = {}, options = {})
  req = build_request(:create_user_group, params)
  req.send_request(options)
end

#decrease_node_groups_in_global_replication_group(params = {}) ⇒ Types::DecreaseNodeGroupsInGlobalReplicationGroupResult

Decreases the number of node groups in a Global datastore.

Examples:

Request syntax with placeholder values


resp = client.decrease_node_groups_in_global_replication_group({
  global_replication_group_id: "String", # required
  node_group_count: 1, # required
  global_node_groups_to_remove: ["String"],
  global_node_groups_to_retain: ["String"],
  apply_immediately: false, # required
})
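
As a sketch with hypothetical identifiers, decreasing a Global datastore to two node groups while naming the groups to keep might look like:

# Hypothetical example: retain two named global node groups and apply the change immediately.
resp = client.decrease_node_groups_in_global_replication_group({
  global_replication_group_id: "my-global-datastore",
  node_group_count: 2,
  global_node_groups_to_retain: ["my-global-datastore-0001", "my-global-datastore-0002"],
  apply_immediately: true,
})
resp.global_replication_group.status #=> String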

Response structure


resp.global_replication_group.global_replication_group_id #=> String
resp.global_replication_group.global_replication_group_description #=> String
resp.global_replication_group.status #=> String
resp.global_replication_group.cache_node_type #=> String
resp.global_replication_group.engine #=> String
resp.global_replication_group.engine_version #=> String
resp.global_replication_group.members #=> Array
resp.global_replication_group.members[0].replication_group_id #=> String
resp.global_replication_group.members[0].replication_group_region #=> String
resp.global_replication_group.members[0].role #=> String
resp.global_replication_group.members[0].automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.global_replication_group.members[0].status #=> String
resp.global_replication_group.cluster_enabled #=> Boolean
resp.global_replication_group.global_node_groups #=> Array
resp.global_replication_group.global_node_groups[0].global_node_group_id #=> String
resp.global_replication_group.global_node_groups[0].slots #=> String
resp.global_replication_group.auth_token_enabled #=> Boolean
resp.global_replication_group.transit_encryption_enabled #=> Boolean
resp.global_replication_group.at_rest_encryption_enabled #=> Boolean
resp.global_replication_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :global_replication_group_id (required, String)

    The name of the Global datastore

  • :node_group_count (required, Integer)

    The number of node groups (shards) that results from the modification of the shard configuration

  • :global_node_groups_to_remove (Array<String>)

    If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRemove is a list of NodeGroupIds to remove from the cluster. ElastiCache will attempt to remove all node groups listed by GlobalNodeGroupsToRemove from the cluster.

  • :global_node_groups_to_retain (Array<String>)

    If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRetain is a list of NodeGroupIds to retain from the cluster. ElastiCache will attempt to retain all node groups listed by GlobalNodeGroupsToRetain from the cluster.

  • :apply_immediately (required, Boolean)

    Indicates that the shard reconfiguration process begins immediately. At present, the only permitted value for this parameter is true.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3563

def decrease_node_groups_in_global_replication_group(params = {}, options = {})
  req = build_request(:decrease_node_groups_in_global_replication_group, params)
  req.send_request(options)
end

#decrease_replica_count(params = {}) ⇒ Types::DecreaseReplicaCountResult

Dynamically decreases the number of replicas in a Valkey or Redis OSS (cluster mode disabled) replication group or the number of replica nodes in one or more node groups (shards) of a Valkey or Redis OSS (cluster mode enabled) replication group. This operation is performed with no cluster down time.

Examples:

Request syntax with placeholder values


resp = client.decrease_replica_count({
  replication_group_id: "String", # required
  new_replica_count: 1,
  replica_configuration: [
    {
      node_group_id: "AllowedNodeGroupId", # required
      new_replica_count: 1, # required
      preferred_availability_zones: ["String"],
      preferred_outpost_arns: ["String"],
    },
  ],
  replicas_to_remove: ["String"],
  apply_immediately: false, # required
})
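
For a cluster mode disabled replication group, a minimal concrete call (with a hypothetical replication group ID) might look like:

# Hypothetical example: scale down to a single read replica immediately.
resp = client.decrease_replica_count({
  replication_group_id: "my-redis-rg",
  new_replica_count: 1,
  apply_immediately: true,
})
resp.replication_group.status #=> String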

Response structure


resp.replication_group.replication_group_id #=> String
resp.replication_group.description #=> String
resp.replication_group.global_replication_group_info.global_replication_group_id #=> String
resp.replication_group.global_replication_group_info.global_replication_group_member_role #=> String
resp.replication_group.status #=> String
resp.replication_group.pending_modified_values.primary_cluster_id #=> String
resp.replication_group.pending_modified_values.automatic_failover_status #=> String, one of "enabled", "disabled"
resp.replication_group.pending_modified_values.resharding.slot_migration.progress_percentage #=> Float
resp.replication_group.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add[0] #=> String
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove[0] #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations #=> Array
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.replication_group.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.pending_modified_values.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.member_clusters #=> Array
resp.replication_group.member_clusters[0] #=> String
resp.replication_group.node_groups #=> Array
resp.replication_group.node_groups[0].node_group_id #=> String
resp.replication_group.node_groups[0].status #=> String
resp.replication_group.node_groups[0].primary_endpoint.address #=> String
resp.replication_group.node_groups[0].primary_endpoint.port #=> Integer
resp.replication_group.node_groups[0].reader_endpoint.address #=> String
resp.replication_group.node_groups[0].reader_endpoint.port #=> Integer
resp.replication_group.node_groups[0].slots #=> String
resp.replication_group.node_groups[0].node_group_members #=> Array
resp.replication_group.node_groups[0].node_group_members[0].cache_cluster_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].cache_node_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.address #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.port #=> Integer
resp.replication_group.node_groups[0].node_group_members[0].preferred_availability_zone #=> String
resp.replication_group.node_groups[0].node_group_members[0].preferred_outpost_arn #=> String
resp.replication_group.node_groups[0].node_group_members[0].current_role #=> String
resp.replication_group.snapshotting_cluster_id #=> String
resp.replication_group.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.replication_group.multi_az #=> String, one of "enabled", "disabled"
resp.replication_group.configuration_endpoint.address #=> String
resp.replication_group.configuration_endpoint.port #=> Integer
resp.replication_group.snapshot_retention_limit #=> Integer
resp.replication_group.snapshot_window #=> String
resp.replication_group.cluster_enabled #=> Boolean
resp.replication_group.cache_node_type #=> String
resp.replication_group.auth_token_enabled #=> Boolean
resp.replication_group.auth_token_last_modified_date #=> Time
resp.replication_group.transit_encryption_enabled #=> Boolean
resp.replication_group.at_rest_encryption_enabled #=> Boolean
resp.replication_group.member_clusters_outpost_arns #=> Array
resp.replication_group.member_clusters_outpost_arns[0] #=> String
resp.replication_group.kms_key_id #=> String
resp.replication_group.arn #=> String
resp.replication_group.user_group_ids #=> Array
resp.replication_group.user_group_ids[0] #=> String
resp.replication_group.log_delivery_configurations #=> Array
resp.replication_group.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.replication_group.log_delivery_configurations[0].message #=> String
resp.replication_group.replication_group_create_time #=> Time
resp.replication_group.data_tiering #=> String, one of "enabled", "disabled"
resp.replication_group.auto_minor_version_upgrade #=> Boolean
resp.replication_group.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.replication_group.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.replication_group.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.engine #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_id (required, String)

    The id of the replication group from which you want to remove replica nodes.

  • :new_replica_count (Integer)

    The number of read replica nodes you want at the completion of this operation. For Valkey or Redis OSS (cluster mode disabled) replication groups, this is the number of replica nodes in the replication group. For Valkey or Redis OSS (cluster mode enabled) replication groups, this is the number of replica nodes in each of the replication group's node groups.

    The minimum number of replicas in a shard or replication group is:

    • Valkey or Redis OSS (cluster mode disabled)

      • If Multi-AZ is enabled: 1

      • If Multi-AZ is not enabled: 0

    • Valkey or Redis OSS (cluster mode enabled): 0 (though you will not be able to failover to a replica if your primary node fails)

  • :replica_configuration (Array<Types::ConfigureShard>)

    A list of ConfigureShard objects that can be used to configure each shard in a Valkey or Redis OSS (cluster mode enabled) replication group. The ConfigureShard has three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones.

  • :replicas_to_remove (Array<String>)

    A list of the node ids to remove from the replication group or node group (shard).

  • :apply_immediately (required, Boolean)

    If True, the number of replica nodes is decreased immediately. ApplyImmediately=False is not currently supported.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3713

def decrease_replica_count(params = {}, options = {})
  req = build_request(:decrease_replica_count, params)
  req.send_request(options)
end

#delete_cache_cluster(params = {}) ⇒ Types::DeleteCacheClusterResult

Deletes a previously provisioned cluster. DeleteCacheCluster deletes all associated cache nodes, node endpoints and the cluster itself. When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the cluster; you cannot cancel or revert this operation.

This operation is not valid for:

  • Valkey or Redis OSS (cluster mode enabled) clusters

  • Valkey or Redis OSS (cluster mode disabled) clusters

  • A cluster that is the last read replica of a replication group

  • A cluster that is the primary node of a replication group

  • A node group (shard) that has Multi-AZ mode enabled

  • A cluster from a Valkey or Redis OSS (cluster mode enabled) replication group

  • A cluster that is not in the available state

Examples:

Example: DeleteCacheCluster


# Deletes an Amazon ElastiCache cluster.

resp = client.delete_cache_cluster({
  cache_cluster_id: "my-memcached", 
})

resp.to_h outputs the following:
{
  cache_cluster: {
    auto_minor_version_upgrade: true, 
    cache_cluster_create_time: Time.parse("2016-12-22T16:05:17.314Z"), 
    cache_cluster_id: "my-memcached", 
    cache_cluster_status: "deleting", 
    cache_node_type: "cache.r3.large", 
    cache_parameter_group: {
      cache_node_ids_to_reboot: [
      ], 
      cache_parameter_group_name: "default.memcached1.4", 
      parameter_apply_status: "in-sync", 
    }, 
    cache_security_groups: [
    ], 
    cache_subnet_group_name: "default", 
    client_download_landing_page: "https://console.aws.amazon.com/elasticache/home#client-download:", 
    configuration_endpoint: {
      address: "my-memcached2.ameaqx.cfg.use1.cache.amazonaws.com", 
      port: 11211, 
    }, 
    engine: "memcached", 
    engine_version: "1.4.24", 
    num_cache_nodes: 2, 
    pending_modified_values: {
    }, 
    preferred_availability_zone: "Multiple", 
    preferred_maintenance_window: "tue:07:30-tue:08:30", 
  }, 
}

Request syntax with placeholder values


resp = client.delete_cache_cluster({
  cache_cluster_id: "String", # required
  final_snapshot_identifier: "String",
})

Response structure


resp.cache_cluster.cache_cluster_id #=> String
resp.cache_cluster.configuration_endpoint.address #=> String
resp.cache_cluster.configuration_endpoint.port #=> Integer
resp.cache_cluster.client_download_landing_page #=> String
resp.cache_cluster.cache_node_type #=> String
resp.cache_cluster.engine #=> String
resp.cache_cluster.engine_version #=> String
resp.cache_cluster.cache_cluster_status #=> String
resp.cache_cluster.num_cache_nodes #=> Integer
resp.cache_cluster.preferred_availability_zone #=> String
resp.cache_cluster.preferred_outpost_arn #=> String
resp.cache_cluster.cache_cluster_create_time #=> Time
resp.cache_cluster.preferred_maintenance_window #=> String
resp.cache_cluster.pending_modified_values.num_cache_nodes #=> Integer
resp.cache_cluster.pending_modified_values.cache_node_ids_to_remove #=> Array
resp.cache_cluster.pending_modified_values.cache_node_ids_to_remove[0] #=> String
resp.cache_cluster.pending_modified_values.engine_version #=> String
resp.cache_cluster.pending_modified_values.cache_node_type #=> String
resp.cache_cluster.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.cache_cluster.pending_modified_values.log_delivery_configurations #=> Array
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_cluster.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_cluster.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.cache_cluster.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.cache_cluster.notification_configuration.topic_arn #=> String
resp.cache_cluster.notification_configuration.topic_status #=> String
resp.cache_cluster.cache_security_groups #=> Array
resp.cache_cluster.cache_security_groups[0].cache_security_group_name #=> String
resp.cache_cluster.cache_security_groups[0].status #=> String
resp.cache_cluster.cache_parameter_group.cache_parameter_group_name #=> String
resp.cache_cluster.cache_parameter_group.parameter_apply_status #=> String
resp.cache_cluster.cache_parameter_group.cache_node_ids_to_reboot #=> Array
resp.cache_cluster.cache_parameter_group.cache_node_ids_to_reboot[0] #=> String
resp.cache_cluster.cache_subnet_group_name #=> String
resp.cache_cluster.cache_nodes #=> Array
resp.cache_cluster.cache_nodes[0].cache_node_id #=> String
resp.cache_cluster.cache_nodes[0].cache_node_status #=> String
resp.cache_cluster.cache_nodes[0].cache_node_create_time #=> Time
resp.cache_cluster.cache_nodes[0].endpoint.address #=> String
resp.cache_cluster.cache_nodes[0].endpoint.port #=> Integer
resp.cache_cluster.cache_nodes[0].parameter_group_status #=> String
resp.cache_cluster.cache_nodes[0].source_cache_node_id #=> String
resp.cache_cluster.cache_nodes[0].customer_availability_zone #=> String
resp.cache_cluster.cache_nodes[0].customer_outpost_arn #=> String
resp.cache_cluster.auto_minor_version_upgrade #=> Boolean
resp.cache_cluster.security_groups #=> Array
resp.cache_cluster.security_groups[0].security_group_id #=> String
resp.cache_cluster.security_groups[0].status #=> String
resp.cache_cluster.replication_group_id #=> String
resp.cache_cluster.snapshot_retention_limit #=> Integer
resp.cache_cluster.snapshot_window #=> String
resp.cache_cluster.auth_token_enabled #=> Boolean
resp.cache_cluster.auth_token_last_modified_date #=> Time
resp.cache_cluster.transit_encryption_enabled #=> Boolean
resp.cache_cluster.at_rest_encryption_enabled #=> Boolean
resp.cache_cluster.arn #=> String
resp.cache_cluster.replication_group_log_delivery_enabled #=> Boolean
resp.cache_cluster.log_delivery_configurations #=> Array
resp.cache_cluster.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_cluster.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_cluster.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_cluster.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_cluster.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_cluster.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.cache_cluster.log_delivery_configurations[0].message #=> String
resp.cache_cluster.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.cache_cluster.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.cache_cluster.transit_encryption_mode #=> String, one of "preferred", "required"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_cluster_id (required, String)

    The cluster identifier for the cluster to be deleted. This parameter is not case sensitive.

  • :final_snapshot_identifier (String)

    The user-supplied name of a final cluster snapshot. This is the unique name that identifies the snapshot. ElastiCache creates the snapshot, and then deletes the cluster immediately afterward.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3880

def delete_cache_cluster(params = {}, options = {})
  req = build_request(:delete_cache_cluster, params)
  req.send_request(options)
end

#delete_cache_parameter_group(params = {}) ⇒ Struct

Deletes the specified cache parameter group. You cannot delete a cache parameter group if it is associated with any cache clusters. You cannot delete the default cache parameter groups in your account.

Examples:

Example: DeleteCacheParameterGroup


# Deletes the Amazon ElastiCache parameter group custom-mem1-4.

resp = client.delete_cache_parameter_group({
  cache_parameter_group_name: "custom-mem1-4", 
})

Request syntax with placeholder values


resp = client.delete_cache_parameter_group({
  cache_parameter_group_name: "String", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_parameter_group_name (required, String)

    The name of the cache parameter group to delete.

    The specified cache parameter group must not be associated with any clusters.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3918

def delete_cache_parameter_group(params = {}, options = {})
  req = build_request(:delete_cache_parameter_group, params)
  req.send_request(options)
end

#delete_cache_security_group(params = {}) ⇒ Struct

Deletes a cache security group.

You cannot delete a cache security group if it is associated with any clusters.

Examples:

Example: DeleteCacheSecurityGroup


# Deletes a cache security group.

resp = client.delete_cache_security_group({
  cache_security_group_name: "my-sec-group", 
})

Request syntax with placeholder values


resp = client.delete_cache_security_group({
  cache_security_group_name: "String", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_security_group_name (required, String)

    The name of the cache security group to delete.

    You cannot delete the default security group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3958

def delete_cache_security_group(params = {}, options = {})
  req = build_request(:delete_cache_security_group, params)
  req.send_request(options)
end

#delete_cache_subnet_group(params = {}) ⇒ Struct

Deletes a cache subnet group.

You cannot delete a default cache subnet group or one that is associated with any clusters.

Examples:

Example: DeleteCacheSubnetGroup


# Deletes the Amazon ElastiCache subnet group my-subnet-group.

resp = client.delete_cache_subnet_group({
  cache_subnet_group_name: "my-subnet-group", 
})

Request syntax with placeholder values


resp = client.delete_cache_subnet_group({
  cache_subnet_group_name: "String", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_subnet_group_name (required, String)

    The name of the cache subnet group to delete.

    Constraints: Must contain no more than 255 alphanumeric characters or hyphens.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 3997

def delete_cache_subnet_group(params = {}, options = {})
  req = build_request(:delete_cache_subnet_group, params)
  req.send_request(options)
end

#delete_global_replication_group(params = {}) ⇒ Types::DeleteGlobalReplicationGroupResult

Deleting a Global datastore is a two-step process:

  • First, you must DisassociateGlobalReplicationGroup to remove the secondary clusters in the Global datastore.

  • Once the Global datastore contains only the primary cluster, you can use the DeleteGlobalReplicationGroup API to delete the Global datastore while retaining the primary cluster using RetainPrimaryReplicationGroup=true.

Since the Global Datastore has only a primary cluster, you can delete the Global Datastore while retaining the primary by setting RetainPrimaryReplicationGroup=true. The primary cluster is never deleted when deleting a Global Datastore. It can only be deleted when it no longer is associated with any Global Datastore.

When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the selected resources; you cannot cancel or revert this operation.

Examples:

Request syntax with placeholder values


resp = client.delete_global_replication_group({
  global_replication_group_id: "String", # required
  retain_primary_replication_group: false, # required
})
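
A concrete call might look like the sketch below, with a hypothetical Global datastore name.

# Hypothetical example: delete the Global datastore but keep its primary replication group.
resp = client.delete_global_replication_group({
  global_replication_group_id: "my-global-datastore",
  retain_primary_replication_group: true,
})
resp.global_replication_group.status #=> String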

Response structure


resp.global_replication_group.global_replication_group_id #=> String
resp.global_replication_group.global_replication_group_description #=> String
resp.global_replication_group.status #=> String
resp.global_replication_group.cache_node_type #=> String
resp.global_replication_group.engine #=> String
resp.global_replication_group.engine_version #=> String
resp.global_replication_group.members #=> Array
resp.global_replication_group.members[0].replication_group_id #=> String
resp.global_replication_group.members[0].replication_group_region #=> String
resp.global_replication_group.members[0].role #=> String
resp.global_replication_group.members[0].automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.global_replication_group.members[0].status #=> String
resp.global_replication_group.cluster_enabled #=> Boolean
resp.global_replication_group.global_node_groups #=> Array
resp.global_replication_group.global_node_groups[0].global_node_group_id #=> String
resp.global_replication_group.global_node_groups[0].slots #=> String
resp.global_replication_group.auth_token_enabled #=> Boolean
resp.global_replication_group.transit_encryption_enabled #=> Boolean
resp.global_replication_group.at_rest_encryption_enabled #=> Boolean
resp.global_replication_group.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :global_replication_group_id (required, String)

    The name of the Global datastore

  • :retain_primary_replication_group (required, Boolean)

    The primary replication group is retained as a standalone replication group.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4067

def delete_global_replication_group(params = {}, options = {})
  req = build_request(:delete_global_replication_group, params)
  req.send_request(options)
end

#delete_replication_group(params = {}) ⇒ Types::DeleteReplicationGroupResult

Deletes an existing replication group. By default, this operation deletes the entire replication group, including the primary/primaries and all of the read replicas. If the replication group has only one primary, you can optionally delete only the read replicas, while retaining the primary by setting RetainPrimaryCluster=true.

When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the selected resources; you cannot cancel or revert this operation.

  • CreateSnapshot permission is required to create a final snapshot. Without this permission, the API call will fail with an Access Denied exception.

  • This operation is valid for Redis OSS only.

Examples:

Example: DeleteReplicationGroup


# Deletes the Amazon ElastiCache replication group my-redis-rg.

resp = client.delete_replication_group({
  replication_group_id: "my-redis-rg", 
  retain_primary_cluster: false, 
})

resp.to_h outputs the following:
{
  replication_group: {
    automatic_failover: "disabled", 
    description: "simple redis cluster", 
    pending_modified_values: {
    }, 
    replication_group_id: "my-redis-rg", 
    status: "deleting", 
  }, 
}

Request syntax with placeholder values


resp = client.delete_replication_group({
  replication_group_id: "String", # required
  retain_primary_cluster: false,
  final_snapshot_identifier: "String",
})

Response structure


resp.replication_group.replication_group_id #=> String
resp.replication_group.description #=> String
resp.replication_group.global_replication_group_info.global_replication_group_id #=> String
resp.replication_group.global_replication_group_info.global_replication_group_member_role #=> String
resp.replication_group.status #=> String
resp.replication_group.pending_modified_values.primary_cluster_id #=> String
resp.replication_group.pending_modified_values.automatic_failover_status #=> String, one of "enabled", "disabled"
resp.replication_group.pending_modified_values.resharding.slot_migration.progress_percentage #=> Float
resp.replication_group.pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_add[0] #=> String
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove #=> Array
resp.replication_group.pending_modified_values.user_groups.user_group_ids_to_remove[0] #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations #=> Array
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.pending_modified_values.transit_encryption_enabled #=> Boolean
resp.replication_group.pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.pending_modified_values.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.member_clusters #=> Array
resp.replication_group.member_clusters[0] #=> String
resp.replication_group.node_groups #=> Array
resp.replication_group.node_groups[0].node_group_id #=> String
resp.replication_group.node_groups[0].status #=> String
resp.replication_group.node_groups[0].primary_endpoint.address #=> String
resp.replication_group.node_groups[0].primary_endpoint.port #=> Integer
resp.replication_group.node_groups[0].reader_endpoint.address #=> String
resp.replication_group.node_groups[0].reader_endpoint.port #=> Integer
resp.replication_group.node_groups[0].slots #=> String
resp.replication_group.node_groups[0].node_group_members #=> Array
resp.replication_group.node_groups[0].node_group_members[0].cache_cluster_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].cache_node_id #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.address #=> String
resp.replication_group.node_groups[0].node_group_members[0].read_endpoint.port #=> Integer
resp.replication_group.node_groups[0].node_group_members[0].preferred_availability_zone #=> String
resp.replication_group.node_groups[0].node_group_members[0].preferred_outpost_arn #=> String
resp.replication_group.node_groups[0].node_group_members[0].current_role #=> String
resp.replication_group.snapshotting_cluster_id #=> String
resp.replication_group.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.replication_group.multi_az #=> String, one of "enabled", "disabled"
resp.replication_group.configuration_endpoint.address #=> String
resp.replication_group.configuration_endpoint.port #=> Integer
resp.replication_group.snapshot_retention_limit #=> Integer
resp.replication_group.snapshot_window #=> String
resp.replication_group.cluster_enabled #=> Boolean
resp.replication_group.cache_node_type #=> String
resp.replication_group.auth_token_enabled #=> Boolean
resp.replication_group.auth_token_last_modified_date #=> Time
resp.replication_group.transit_encryption_enabled #=> Boolean
resp.replication_group.at_rest_encryption_enabled #=> Boolean
resp.replication_group.member_clusters_outpost_arns #=> Array
resp.replication_group.member_clusters_outpost_arns[0] #=> String
resp.replication_group.kms_key_id #=> String
resp.replication_group.arn #=> String
resp.replication_group.user_group_ids #=> Array
resp.replication_group.user_group_ids[0] #=> String
resp.replication_group.log_delivery_configurations #=> Array
resp.replication_group.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.replication_group.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.replication_group.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.replication_group.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.replication_group.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.replication_group.log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.replication_group.log_delivery_configurations[0].message #=> String
resp.replication_group.replication_group_create_time #=> Time
resp.replication_group.data_tiering #=> String, one of "enabled", "disabled"
resp.replication_group.auto_minor_version_upgrade #=> Boolean
resp.replication_group.network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.replication_group.ip_discovery #=> String, one of "ipv4", "ipv6"
resp.replication_group.transit_encryption_mode #=> String, one of "preferred", "required"
resp.replication_group.cluster_mode #=> String, one of "enabled", "disabled", "compatible"
resp.replication_group.engine #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replication_group_id (required, String)

    The identifier for the cluster to be deleted. This parameter is not case sensitive.

  • :retain_primary_cluster (Boolean)

    If set to true, all of the read replicas are deleted, but the primary node is retained.

  • :final_snapshot_identifier (String)

    The name of a final node group (shard) snapshot. ElastiCache creates the snapshot from the primary node in the cluster, rather than one of the replicas; this is to ensure that it captures the freshest data. After the final snapshot is taken, the replication group is immediately deleted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4221

def delete_replication_group(params = {}, options = {})
  req = build_request(:delete_replication_group, params)
  req.send_request(options)
end

#delete_serverless_cache(params = {}) ⇒ Types::DeleteServerlessCacheResponse

Deletes a specified existing serverless cache.

CreateServerlessCacheSnapshot permission is required to create a final snapshot. Without this permission, the API call will fail with an Access Denied exception.

Examples:

Request syntax with placeholder values


resp = client.delete_serverless_cache({
  serverless_cache_name: "String", # required
  final_snapshot_name: "String",
})
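
A minimal concrete call, with hypothetical cache and snapshot names, might look like:

# Hypothetical example: take a final snapshot, then delete the serverless cache.
resp = client.delete_serverless_cache({
  serverless_cache_name: "my-cache",
  final_snapshot_name: "my-cache-final",
})
resp.serverless_cache.status #=> String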

Response structure


resp.serverless_cache.serverless_cache_name #=> String
resp.serverless_cache.description #=> String
resp.serverless_cache.create_time #=> Time
resp.serverless_cache.status #=> String
resp.serverless_cache.engine #=> String
resp.serverless_cache.major_engine_version #=> String
resp.serverless_cache.full_engine_version #=> String
resp.serverless_cache.cache_usage_limits.data_storage.maximum #=> Integer
resp.serverless_cache.cache_usage_limits.data_storage.minimum #=> Integer
resp.serverless_cache.cache_usage_limits.data_storage.unit #=> String, one of "GB"
resp.serverless_cache.cache_usage_limits.ecpu_per_second.maximum #=> Integer
resp.serverless_cache.cache_usage_limits.ecpu_per_second.minimum #=> Integer
resp.serverless_cache.kms_key_id #=> String
resp.serverless_cache.security_group_ids #=> Array
resp.serverless_cache.security_group_ids[0] #=> String
resp.serverless_cache.endpoint.address #=> String
resp.serverless_cache.endpoint.port #=> Integer
resp.serverless_cache.reader_endpoint.address #=> String
resp.serverless_cache.reader_endpoint.port #=> Integer
resp.serverless_cache.arn #=> String
resp.serverless_cache.user_group_id #=> String
resp.serverless_cache.subnet_ids #=> Array
resp.serverless_cache.subnet_ids[0] #=> String
resp.serverless_cache.snapshot_retention_limit #=> Integer
resp.serverless_cache.daily_snapshot_time #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :serverless_cache_name (required, String)

    The identifier of the serverless cache to be deleted.

  • :final_snapshot_name (String)

    Name of the final snapshot to be taken before the serverless cache is deleted. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL, i.e. a final snapshot is not taken.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4285

def delete_serverless_cache(params = {}, options = {})
  req = build_request(:delete_serverless_cache, params)
  req.send_request(options)
end

#delete_serverless_cache_snapshot(params = {}) ⇒ Types::DeleteServerlessCacheSnapshotResponse

Deletes an existing serverless cache snapshot. Available for Valkey, Redis OSS and Serverless Memcached only.

Examples:

Request syntax with placeholder values


resp = client.delete_serverless_cache_snapshot({
  serverless_cache_snapshot_name: "String", # required
})
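
A concrete call might look like the sketch below, with a hypothetical snapshot name.

# Hypothetical example: delete a serverless cache snapshot that is no longer needed.
resp = client.delete_serverless_cache_snapshot({
  serverless_cache_snapshot_name: "my-cache-final",
})
resp.serverless_cache_snapshot.status #=> String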

Response structure


resp.serverless_cache_snapshot.serverless_cache_snapshot_name #=> String
resp.serverless_cache_snapshot.arn #=> String
resp.serverless_cache_snapshot.kms_key_id #=> String
resp.serverless_cache_snapshot.snapshot_type #=> String
resp.serverless_cache_snapshot.status #=> String
resp.serverless_cache_snapshot.create_time #=> Time
resp.serverless_cache_snapshot.expiry_time #=> Time
resp.serverless_cache_snapshot.bytes_used_for_cache #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.serverless_cache_name #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.engine #=> String
resp.serverless_cache_snapshot.serverless_cache_configuration.major_engine_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :serverless_cache_snapshot_name (required, String)

    Identifier of the snapshot to be deleted. Available for Valkey, Redis OSS and Serverless Memcached only.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4325

def delete_serverless_cache_snapshot(params = {}, options = {})
  req = build_request(:delete_serverless_cache_snapshot, params)
  req.send_request(options)
end

#delete_snapshot(params = {}) ⇒ Types::DeleteSnapshotResult

Deletes an existing snapshot. When you receive a successful response from this operation, ElastiCache immediately begins deleting the snapshot; you cannot cancel or revert this operation.

This operation is valid for Valkey or Redis OSS only.

Examples:

Example: DeleteSnapshot


# Deletes the Redis snapshot snapshot-20161212.

resp = client.delete_snapshot({
  snapshot_name: "snapshot-20161212", 
})

resp.to_h outputs the following:
{
  snapshot: {
    auto_minor_version_upgrade: true, 
    cache_cluster_create_time: Time.parse("2016-12-21T22:27:12.543Z"), 
    cache_cluster_id: "my-redis5", 
    cache_node_type: "cache.m3.large", 
    cache_parameter_group_name: "default.redis3.2", 
    cache_subnet_group_name: "default", 
    engine: "redis", 
    engine_version: "3.2.4", 
    node_snapshots: [
      {
        cache_node_create_time: Time.parse("2016-12-21T22:27:12.543Z"), 
        cache_node_id: "0001", 
        cache_size: "3 MB", 
        snapshot_create_time: Time.parse("2016-12-21T22:30:26Z"), 
      }, 
    ], 
    num_cache_nodes: 1, 
    port: 6379, 
    preferred_availability_zone: "us-east-1c", 
    preferred_maintenance_window: "fri:05:30-fri:06:30", 
    snapshot_name: "snapshot-20161212", 
    snapshot_retention_limit: 7, 
    snapshot_source: "manual", 
    snapshot_status: "deleting", 
    snapshot_window: "10:00-11:00", 
    vpc_id: "vpc-91280df6", 
  }, 
}

Request syntax with placeholder values


resp = client.delete_snapshot({
  snapshot_name: "String", # required
})

Response structure


resp.snapshot.snapshot_name #=> String
resp.snapshot.replication_group_id #=> String
resp.snapshot.replication_group_description #=> String
resp.snapshot.cache_cluster_id #=> String
resp.snapshot.snapshot_status #=> String
resp.snapshot.snapshot_source #=> String
resp.snapshot.cache_node_type #=> String
resp.snapshot.engine #=> String
resp.snapshot.engine_version #=> String
resp.snapshot.num_cache_nodes #=> Integer
resp.snapshot.preferred_availability_zone #=> String
resp.snapshot.preferred_outpost_arn #=> String
resp.snapshot.cache_cluster_create_time #=> Time
resp.snapshot.preferred_maintenance_window #=> String
resp.snapshot.topic_arn #=> String
resp.snapshot.port #=> Integer
resp.snapshot.cache_parameter_group_name #=> String
resp.snapshot.cache_subnet_group_name #=> String
resp.snapshot.vpc_id #=> String
resp.snapshot.auto_minor_version_upgrade #=> Boolean
resp.snapshot.snapshot_retention_limit #=> Integer
resp.snapshot.snapshot_window #=> String
resp.snapshot.num_node_groups #=> Integer
resp.snapshot.automatic_failover #=> String, one of "enabled", "disabled", "enabling", "disabling"
resp.snapshot.node_snapshots #=> Array
resp.snapshot.node_snapshots[0].cache_cluster_id #=> String
resp.snapshot.node_snapshots[0].node_group_id #=> String
resp.snapshot.node_snapshots[0].cache_node_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.node_group_id #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.slots #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_count #=> Integer
resp.snapshot.node_snapshots[0].node_group_configuration.primary_availability_zone #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_availability_zones[0] #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.primary_outpost_arn #=> String
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns #=> Array
resp.snapshot.node_snapshots[0].node_group_configuration.replica_outpost_arns[0] #=> String
resp.snapshot.node_snapshots[0].cache_size #=> String
resp.snapshot.node_snapshots[0].cache_node_create_time #=> Time
resp.snapshot.node_snapshots[0].snapshot_create_time #=> Time
resp.snapshot.kms_key_id #=> String
resp.snapshot.arn #=> String
resp.snapshot.data_tiering #=> String, one of "enabled", "disabled"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :snapshot_name (required, String)

    The name of the snapshot to be deleted.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4442

def delete_snapshot(params = {}, options = {})
  req = build_request(:delete_snapshot, params)
  req.send_request(options)
end

#delete_user(params = {}) ⇒ Types::User

For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Deletes a user. The user will be removed from all user groups and in turn removed from all replication groups. For more information, see Using Role Based Access Control (RBAC).

Examples:

Request syntax with placeholder values


resp = client.delete_user({
  user_id: "UserId", # required
})
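
A minimal concrete call, with a hypothetical user ID, might look like:

# Hypothetical example: remove a user that is no longer needed.
resp = client.delete_user({
  user_id: "app-readonly",
})
resp.status #=> String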

Response structure


resp.user_id #=> String
resp.user_name #=> String
resp.status #=> String
resp.engine #=> String
resp.minimum_engine_version #=> String
resp.access_string #=> String
resp.user_group_ids #=> Array
resp.user_group_ids[0] #=> String
resp.authentication.type #=> String, one of "password", "no-password", "iam"
resp.authentication.password_count #=> Integer
resp.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :user_id (required, String)

    The ID of the user.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4495

def delete_user(params = {}, options = {})
  req = build_request(:delete_user, params)
  req.send_request(options)
end

#delete_user_group(params = {}) ⇒ Types::UserGroup

For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Deletes a user group. The user group must first be disassociated from the replication group before it can be deleted. For more information, see Using Role Based Access Control (RBAC).

Examples:

Request syntax with placeholder values


resp = client.delete_user_group({
  user_group_id: "String", # required
})
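
A minimal concrete call, with a hypothetical user group ID, might look like:

# Hypothetical example: delete a user group after it has been disassociated from its replication groups.
resp = client.delete_user_group({
  user_group_id: "app-users",
})
resp.status #=> String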

Response structure


resp.user_group_id #=> String
resp.status #=> String
resp.engine #=> String
resp.user_ids #=> Array
resp.user_ids[0] #=> String
resp.minimum_engine_version #=> String
resp.pending_changes.user_ids_to_remove #=> Array
resp.pending_changes.user_ids_to_remove[0] #=> String
resp.pending_changes.user_ids_to_add #=> Array
resp.pending_changes.user_ids_to_add[0] #=> String
resp.replication_groups #=> Array
resp.replication_groups[0] #=> String
resp.serverless_caches #=> Array
resp.serverless_caches[0] #=> String
resp.arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :user_group_id (required, String)

    The ID of the user group.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4552

def delete_user_group(params = {}, options = {})
  req = build_request(:delete_user_group, params)
  req.send_request(options)
end

#describe_cache_clusters(params = {}) ⇒ Types::CacheClusterMessage

Returns information about all provisioned clusters if no cluster identifier is specified, or about a specific cache cluster if a cluster identifier is supplied.

By default, abbreviated information about the clusters is returned. You can use the optional ShowCacheNodeInfo flag to retrieve detailed information about the cache nodes associated with the clusters. These details include the DNS address and port for the cache node endpoint.

If the cluster is in the creating state, only cluster-level information is displayed until all of the nodes are successfully provisioned.

If the cluster is in the deleting state, only cluster-level information is displayed.

If cache nodes are currently being added to the cluster, node endpoint information and creation time for the additional nodes are not displayed until they are completely provisioned. When the cluster state is available, the cluster is ready for use.

If cache nodes are currently being removed from the cluster, no endpoint information for the removed nodes is displayed.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
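
As a minimal sketch of consuming the pages (assuming an already-constructed client; the request parameters are placeholders), the response can be enumerated page by page:


# Each iteration yields one page of the paginated response.
client.describe_cache_clusters(show_cache_node_info: true).each do |page|
  page.cache_clusters.each do |cluster|
    puts "#{cluster.cache_cluster_id}: #{cluster.cache_cluster_status}"
  end
end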

The following waiters are defined for this operation (see #wait_until for detailed usage):

  • cache_cluster_available
  • cache_cluster_deleted
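
A hedged sketch of invoking one of these waiters (the cluster identifier is a placeholder; polling delay and attempt limits fall back to the waiter defaults):


# Blocks until the cluster reaches the "available" state or the waiter gives up.
client.wait_until(:cache_cluster_available, cache_cluster_id: "my-mem-cluster")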

Examples:

Example: DescribeCacheClusters


# Lists the details for up to 50 cache clusters.

resp = client.describe_cache_clusters({
  cache_cluster_id: "my-mem-cluster", 
})

resp.to_h outputs the following:
{
  cache_clusters: [
    {
      auto_minor_version_upgrade: true, 
      cache_cluster_create_time: Time.parse("2016-12-21T21:59:43.794Z"), 
      cache_cluster_id: "my-mem-cluster", 
      cache_cluster_status: "available", 
      cache_node_type: "cache.t2.medium", 
      cache_parameter_group: {
        cache_node_ids_to_reboot: [
        ], 
        cache_parameter_group_name: "default.memcached1.4", 
        parameter_apply_status: "in-sync", 
      }, 
      cache_security_groups: [
      ], 
      cache_subnet_group_name: "default", 
      client_download_landing_page: "https://console.aws.amazon.com/elasticache/home#client-download:", 
      configuration_endpoint: {
        address: "my-mem-cluster.abcdef.cfg.use1.cache.amazonaws.com", 
        port: 11211, 
      }, 
      engine: "memcached", 
      engine_version: "1.4.24", 
      num_cache_nodes: 2, 
      pending_modified_values: {
      }, 
      preferred_availability_zone: "Multiple", 
      preferred_maintenance_window: "wed:06:00-wed:07:00", 
    }, 
  ], 
}

Example: DescribeCacheClusters


# Lists the details for the cache cluster my-mem-cluster.

resp = client.describe_cache_clusters({
  cache_cluster_id: "my-mem-cluster", 
  show_cache_node_info: true, 
})

resp.to_h outputs the following:
{
  cache_clusters: [
    {
      auto_minor_version_upgrade: true, 
      cache_cluster_create_time: Time.parse("2016-12-21T21:59:43.794Z"), 
      cache_cluster_id: "my-mem-cluster", 
      cache_cluster_status: "available", 
      cache_node_type: "cache.t2.medium", 
      cache_nodes: [
        {
          cache_node_create_time: Time.parse("2016-12-21T21:59:43.794Z"), 
          cache_node_id: "0001", 
          cache_node_status: "available", 
          customer_availability_zone: "us-east-1b", 
          endpoint: {
            address: "my-mem-cluster.ameaqx.0001.use1.cache.amazonaws.com", 
            port: 11211, 
          }, 
          parameter_group_status: "in-sync", 
        }, 
        {
          cache_node_create_time: Time.parse("2016-12-21T21:59:43.794Z"), 
          cache_node_id: "0002", 
          cache_node_status: "available", 
          customer_availability_zone: "us-east-1a", 
          endpoint: {
            address: "my-mem-cluster.ameaqx.0002.use1.cache.amazonaws.com", 
            port: 11211, 
          }, 
          parameter_group_status: "in-sync", 
        }, 
      ], 
      cache_parameter_group: {
        cache_node_ids_to_reboot: [
        ], 
        cache_parameter_group_name: "default.memcached1.4", 
        parameter_apply_status: "in-sync", 
      }, 
      cache_security_groups: [
      ], 
      cache_subnet_group_name: "default", 
      client_download_landing_page: "https://console.aws.amazon.com/elasticache/home#client-download:", 
      configuration_endpoint: {
        address: "my-mem-cluster.ameaqx.cfg.use1.cache.amazonaws.com", 
        port: 11211, 
      }, 
      engine: "memcached", 
      engine_version: "1.4.24", 
      num_cache_nodes: 2, 
      pending_modified_values: {
      }, 
      preferred_availability_zone: "Multiple", 
      preferred_maintenance_window: "wed:06:00-wed:07:00", 
    }, 
  ], 
}

Request syntax with placeholder values


resp = client.describe_cache_clusters({
  cache_cluster_id: "String",
  max_records: 1,
  marker: "String",
  show_cache_node_info: false,
  show_cache_clusters_not_in_replication_groups: false,
})

Response structure


resp.marker #=> String
resp.cache_clusters #=> Array
resp.cache_clusters[0].cache_cluster_id #=> String
resp.cache_clusters[0].configuration_endpoint.address #=> String
resp.cache_clusters[0].configuration_endpoint.port #=> Integer
resp.cache_clusters[0].client_download_landing_page #=> String
resp.cache_clusters[0].cache_node_type #=> String
resp.cache_clusters[0].engine #=> String
resp.cache_clusters[0].engine_version #=> String
resp.cache_clusters[0].cache_cluster_status #=> String
resp.cache_clusters[0].num_cache_nodes #=> Integer
resp.cache_clusters[0].preferred_availability_zone #=> String
resp.cache_clusters[0].preferred_outpost_arn #=> String
resp.cache_clusters[0].cache_cluster_create_time #=> Time
resp.cache_clusters[0].preferred_maintenance_window #=> String
resp.cache_clusters[0].pending_modified_values.num_cache_nodes #=> Integer
resp.cache_clusters[0].pending_modified_values.cache_node_ids_to_remove #=> Array
resp.cache_clusters[0].pending_modified_values.cache_node_ids_to_remove[0] #=> String
resp.cache_clusters[0].pending_modified_values.engine_version #=> String
resp.cache_clusters[0].pending_modified_values.cache_node_type #=> String
resp.cache_clusters[0].pending_modified_values.auth_token_status #=> String, one of "SETTING", "ROTATING"
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations #=> Array
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_clusters[0].pending_modified_values.log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_clusters[0].pending_modified_values.transit_encryption_enabled #=> Boolean
resp.cache_clusters[0].pending_modified_values.transit_encryption_mode #=> String, one of "preferred", "required"
resp.cache_clusters[0].notification_configuration.topic_arn #=> String
resp.cache_clusters[0].notification_configuration.topic_status #=> String
resp.cache_clusters[0].cache_security_groups #=> Array
resp.cache_clusters[0].cache_security_groups[0].cache_security_group_name #=> String
resp.cache_clusters[0].cache_security_groups[0].status #=> String
resp.cache_clusters[0].cache_parameter_group.cache_parameter_group_name #=> String
resp.cache_clusters[0].cache_parameter_group.parameter_apply_status #=> String
resp.cache_clusters[0].cache_parameter_group.cache_node_ids_to_reboot #=> Array
resp.cache_clusters[0].cache_parameter_group.cache_node_ids_to_reboot[0] #=> String
resp.cache_clusters[0].cache_subnet_group_name #=> String
resp.cache_clusters[0].cache_nodes #=> Array
resp.cache_clusters[0].cache_nodes[0].cache_node_id #=> String
resp.cache_clusters[0].cache_nodes[0].cache_node_status #=> String
resp.cache_clusters[0].cache_nodes[0].cache_node_create_time #=> Time
resp.cache_clusters[0].cache_nodes[0].endpoint.address #=> String
resp.cache_clusters[0].cache_nodes[0].endpoint.port #=> Integer
resp.cache_clusters[0].cache_nodes[0].parameter_group_status #=> String
resp.cache_clusters[0].cache_nodes[0].source_cache_node_id #=> String
resp.cache_clusters[0].cache_nodes[0].customer_availability_zone #=> String
resp.cache_clusters[0].cache_nodes[0].customer_outpost_arn #=> String
resp.cache_clusters[0].auto_minor_version_upgrade #=> Boolean
resp.cache_clusters[0].security_groups #=> Array
resp.cache_clusters[0].security_groups[0].security_group_id #=> String
resp.cache_clusters[0].security_groups[0].status #=> String
resp.cache_clusters[0].replication_group_id #=> String
resp.cache_clusters[0].snapshot_retention_limit #=> Integer
resp.cache_clusters[0].snapshot_window #=> String
resp.cache_clusters[0].auth_token_enabled #=> Boolean
resp.cache_clusters[0].auth_token_last_modified_date #=> Time
resp.cache_clusters[0].transit_encryption_enabled #=> Boolean
resp.cache_clusters[0].at_rest_encryption_enabled #=> Boolean
resp.cache_clusters[0].arn #=> String
resp.cache_clusters[0].replication_group_log_delivery_enabled #=> Boolean
resp.cache_clusters[0].log_delivery_configurations #=> Array
resp.cache_clusters[0].log_delivery_configurations[0].log_type #=> String, one of "slow-log", "engine-log"
resp.cache_clusters[0].log_delivery_configurations[0].destination_type #=> String, one of "cloudwatch-logs", "kinesis-firehose"
resp.cache_clusters[0].log_delivery_configurations[0].destination_details.cloud_watch_logs_details.log_group #=> String
resp.cache_clusters[0].log_delivery_configurations[0].destination_details.kinesis_firehose_details.delivery_stream #=> String
resp.cache_clusters[0].log_delivery_configurations[0].log_format #=> String, one of "text", "json"
resp.cache_clusters[0].log_delivery_configurations[0].status #=> String, one of "active", "enabling", "modifying", "disabling", "error"
resp.cache_clusters[0].log_delivery_configurations[0].message #=> String
resp.cache_clusters[0].network_type #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.cache_clusters[0].ip_discovery #=> String, one of "ipv4", "ipv6"
resp.cache_clusters[0].transit_encryption_mode #=> String, one of "preferred", "required"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_cluster_id (String)

    The user-supplied cluster identifier. If this parameter is specified, only information about that specific cluster is returned. This parameter isn't case sensitive.

  • :max_records (Integer)

    The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a marker is included in the response so that the remaining results can be retrieved.

    Default: 100

    Constraints: minimum 20; maximum 100.

  • :marker (String)

    An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

  • :show_cache_node_info (Boolean)

    An optional flag that can be included in the DescribeCacheClusters request to retrieve information about the individual cache nodes.

  • :show_cache_clusters_not_in_replication_groups (Boolean)

    An optional flag that can be included in the DescribeCacheClusters request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single-node Valkey or Redis OSS clusters.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 4825

def describe_cache_clusters(params = {}, options = {})
  req = build_request(:describe_cache_clusters, params)
  req.send_request(options)
end

#describe_cache_engine_versions(params = {}) ⇒ Types::CacheEngineVersionMessage

Returns a list of the available cache engines and their versions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
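
For example, a small sketch (the engine filter is an assumption) that collects every engine and version pair across all pages of the response:


# Flatten all pages into a single array of "engine version" strings.
versions = client.describe_cache_engine_versions(engine: "redis").flat_map do |page|
  page.cache_engine_versions.map { |v| "#{v.engine} #{v.engine_version}" }
end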

Examples:

Example: DescribeCacheEngineVersions


# Lists the details for up to 25 Memcached and Redis cache engine versions.

resp = client.describe_cache_engine_versions({
})

resp.to_h outputs the following:
{
  cache_engine_versions: [
    {
      cache_engine_description: "memcached", 
      cache_engine_version_description: "memcached version 1.4.14", 
      cache_parameter_group_family: "memcached1.4", 
      engine: "memcached", 
      engine_version: "1.4.14", 
    }, 
    {
      cache_engine_description: "memcached", 
      cache_engine_version_description: "memcached version 1.4.24", 
      cache_parameter_group_family: "memcached1.4", 
      engine: "memcached", 
      engine_version: "1.4.24", 
    }, 
    {
      cache_engine_description: "memcached", 
      cache_engine_version_description: "memcached version 1.4.33", 
      cache_parameter_group_family: "memcached1.4", 
      engine: "memcached", 
      engine_version: "1.4.33", 
    }, 
    {
      cache_engine_description: "memcached", 
      cache_engine_version_description: "memcached version 1.4.5", 
      cache_parameter_group_family: "memcached1.4", 
      engine: "memcached", 
      engine_version: "1.4.5", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.6.13", 
      cache_parameter_group_family: "redis2.6", 
      engine: "redis", 
      engine_version: "2.6.13", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.19", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.19", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.21", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.21", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.22 R5", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.22", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.23 R4", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.23", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.24 R3", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.24", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.6", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.6", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 3.2.4", 
      cache_parameter_group_family: "redis3.2", 
      engine: "redis", 
      engine_version: "3.2.4", 
    }, 
  ], 
}

Example: DescribeCacheEngineVersions


# Lists the details for up to 50 Redis cache engine versions.

resp = client.describe_cache_engine_versions({
  default_only: false, 
  engine: "redis", 
  max_records: 50, 
})

resp.to_h outputs the following:
{
  cache_engine_versions: [
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.6.13", 
      cache_parameter_group_family: "redis2.6", 
      engine: "redis", 
      engine_version: "2.6.13", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.19", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.19", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.21", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.21", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.22 R5", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.22", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.23 R4", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.23", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.24 R3", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.24", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 2.8.6", 
      cache_parameter_group_family: "redis2.8", 
      engine: "redis", 
      engine_version: "2.8.6", 
    }, 
    {
      cache_engine_description: "Redis", 
      cache_engine_version_description: "redis version 3.2.4", 
      cache_parameter_group_family: "redis3.2", 
      engine: "redis", 
      engine_version: "3.2.4", 
    }, 
  ], 
  marker: "", 
}

Request syntax with placeholder values


resp = client.describe_cache_engine_versions({
  engine: "String",
  engine_version: "String",
  cache_parameter_group_family: "String",
  max_records: 1,
  marker: "String",
  default_only: false,
})

Response structure


resp.marker #=> String
resp.cache_engine_versions #=> Array
resp.cache_engine_versions[0].engine #=> String
resp.cache_engine_versions[0].engine_version #=> String
resp.cache_engine_versions[0].cache_parameter_group_family #=> String
resp.cache_engine_versions[0].cache_engine_description #=> String
resp.cache_engine_versions[0].cache_engine_version_description #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :engine (String)

    The cache engine to return. Valid values: memcached | redis

  • :engine_version (String)

    The cache engine version to return.

    Example: 1.4.14

  • :cache_parameter_group_family (String)

    The name of a specific cache parameter group family to return details for.

    Valid values are: memcached1.4 | memcached1.5 | memcached1.6 | redis2.6 | redis2.8 | redis3.2 | redis4.0 | redis5.0 | redis6.x | redis6.2 | redis7 | valkey7

    Constraints:

    • Must be 1 to 255 alphanumeric characters

    • First character must be a letter

    • Cannot end with a hyphen or contain two consecutive hyphens

  • :max_records (Integer)

    The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a marker is included in the response so that the remaining results can be retrieved.

    Default: 100

    Constraints: minimum 20; maximum 100.

  • :marker (String)

    An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

  • :default_only (Boolean)

    If true, specifies that only the default version of the specified engine or engine and major version combination is to be returned.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 5079

def describe_cache_engine_versions(params = {}, options = {})
  req = build_request(:describe_cache_engine_versions, params)
  req.send_request(options)
end

#describe_cache_parameter_groups(params = {}) ⇒ Types::CacheParameterGroupsMessage

Returns a list of cache parameter group descriptions. If a cache parameter group name is specified, the list contains only the descriptions for that group.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
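
If you prefer to drive pagination yourself with the max_records and marker parameters rather than relying on the Enumerable behavior, a sketch (parameter values are illustrative) could look like this:


# Pass the returned marker back on each request until no marker remains.
params = { max_records: 20 }
loop do
  page = client.describe_cache_parameter_groups(params)
  page.cache_parameter_groups.each { |group| puts group.cache_parameter_group_name }
  break if page.marker.nil? || page.marker.empty?
  params[:marker] = page.marker
end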

Examples:

Example: DescribeCacheParameterGroups


# Returns a list of cache parameter group descriptions. If a cache parameter group name is specified, the list contains
# only the descriptions for that group.

resp = client.describe_cache_parameter_groups({
  cache_parameter_group_name: "custom-mem1-4", 
})

resp.to_h outputs the following:
{
  cache_parameter_groups: [
    {
      cache_parameter_group_family: "memcached1.4", 
      cache_parameter_group_name: "custom-mem1-4", 
      description: "Custom memcache param group", 
    }, 
  ], 
}

Request syntax with placeholder values


resp = client.describe_cache_parameter_groups({
  cache_parameter_group_name: "String",
  max_records: 1,
  marker: "String",
})

Response structure


resp.marker #=> String
resp.cache_parameter_groups #=> Array
resp.cache_parameter_groups[0].cache_parameter_group_name #=> String
resp.cache_parameter_groups[0].cache_parameter_group_family #=> String
resp.cache_parameter_groups[0].description #=> String
resp.cache_parameter_groups[0].is_global #=> Boolean
resp.cache_parameter_groups[0].arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_parameter_group_name (String)

    The name of a specific cache parameter group to return details for.

  • :max_records (Integer)

    The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a marker is included in the response so that the remaining results can be retrieved.

    Default: 100

    Constraints: minimum 20; maximum 100.

  • :marker (String)

    An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 5157

def describe_cache_parameter_groups(params = {}, options = {})
  req = build_request(:describe_cache_parameter_groups, params)
  req.send_request(options)
end

#describe_cache_parameters(params = {}) ⇒ Types::CacheParameterGroupDetails

Returns the detailed parameter list for a particular cache parameter group.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
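
As an illustrative sketch (the group name is a placeholder and only the first page is inspected), the per-parameter fields in the response can be used to pick out values that are modifiable without a reboot:


# Only parameters whose change_type is "immediate" can be applied without a reboot.
resp = client.describe_cache_parameters(cache_parameter_group_name: "custom-redis2-8")
modifiable = resp.parameters.select { |p| p.is_modifiable && p.change_type == "immediate" }
puts modifiable.map(&:parameter_name)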

Examples:

Example: DescribeCacheParameters


# Lists up to 100 user parameter values for the parameter group custom-redis2-8.

resp = client.describe_cache_parameters({
  cache_parameter_group_name: "custom-redis2-8", 
  max_records: 100, 
  source: "user", 
})

resp.to_h outputs the following:
{
  marker: "", 
  parameters: [
    {
      allowed_values: "yes,no", 
      change_type: "requires-reboot", 
      data_type: "string", 
      description: "Apply rehashing or not.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "activerehashing", 
      parameter_value: "yes", 
      source: "system", 
    }, 
    {
      allowed_values: "always,everysec,no", 
      change_type: "immediate", 
      data_type: "string", 
      description: "fsync policy for AOF persistence", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "appendfsync", 
      parameter_value: "everysec", 
      source: "system", 
    }, 
    {
      allowed_values: "yes,no", 
      change_type: "immediate", 
      data_type: "string", 
      description: "Enable Redis persistence.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "appendonly", 
      parameter_value: "no", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Normal client output buffer hard limit in bytes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-normal-hard-limit", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Normal client output buffer soft limit in bytes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-normal-soft-limit", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Normal client output buffer soft limit in seconds.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-normal-soft-seconds", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Pubsub client output buffer hard limit in bytes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-pubsub-hard-limit", 
      parameter_value: "33554432", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Pubsub client output buffer soft limit in bytes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-pubsub-soft-limit", 
      parameter_value: "8388608", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Pubsub client output buffer soft limit in seconds.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-pubsub-soft-seconds", 
      parameter_value: "60", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Slave client output buffer soft limit in seconds.", 
      is_modifiable: false, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "client-output-buffer-limit-slave-soft-seconds", 
      parameter_value: "60", 
      source: "system", 
    }, 
    {
      allowed_values: "yes,no", 
      change_type: "immediate", 
      data_type: "string", 
      description: "If enabled, clients who attempt to write to a read-only slave will be disconnected. Applicable to 2.8.23 and higher.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.23", 
      parameter_name: "close-on-slave-write", 
      parameter_value: "yes", 
      source: "system", 
    }, 
    {
      allowed_values: "1-1200000", 
      change_type: "requires-reboot", 
      data_type: "integer", 
      description: "Set the number of databases.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "databases", 
      parameter_value: "16", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The maximum number of hash entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "hash-max-ziplist-entries", 
      parameter_value: "512", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The threshold of biggest hash entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "hash-max-ziplist-value", 
      parameter_value: "64", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The maximum number of list entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "list-max-ziplist-entries", 
      parameter_value: "512", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The threshold of biggest list entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "list-max-ziplist-value", 
      parameter_value: "64", 
      source: "system", 
    }, 
    {
      allowed_values: "5000", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Max execution time of a Lua script in milliseconds. 0 for unlimited execution without warnings.", 
      is_modifiable: false, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "lua-time-limit", 
      parameter_value: "5000", 
      source: "system", 
    }, 
    {
      allowed_values: "1-65000", 
      change_type: "requires-reboot", 
      data_type: "integer", 
      description: "The maximum number of Redis clients.", 
      is_modifiable: false, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "maxclients", 
      parameter_value: "65000", 
      source: "system", 
    }, 
    {
      allowed_values: "volatile-lru,allkeys-lru,volatile-random,allkeys-random,volatile-ttl,noeviction", 
      change_type: "immediate", 
      data_type: "string", 
      description: "Max memory policy.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "maxmemory-policy", 
      parameter_value: "volatile-lru", 
      source: "system", 
    }, 
    {
      allowed_values: "1-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Max memory samples.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "maxmemory-samples", 
      parameter_value: "3", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Maximum number of seconds within which the master must receive a ping from a slave to take writes. Use this parameter together with min-slaves-to-write to regulate when the master stops accepting writes. Setting this value to 0 means the master always takes writes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "min-slaves-max-lag", 
      parameter_value: "10", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Number of slaves that must be connected in order for master to take writes. Use this parameter together with min-slaves-max-lag to regulate when the master stops accepting writes. Setting this to 0 means the master always takes writes.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "min-slaves-to-write", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      change_type: "immediate", 
      data_type: "string", 
      description: "The keyspace events for Redis to notify Pub/Sub clients about. By default all notifications are disabled", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "notify-keyspace-events", 
      source: "system", 
    }, 
    {
      allowed_values: "16384-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The replication backlog size in bytes for PSYNC. This is the size of the buffer which accumulates slave data when slave is disconnected for some time, so that when slave reconnects again, only transfer the portion of data which the slave missed. Minimum value is 16K.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "repl-backlog-size", 
      parameter_value: "1048576", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The amount of time in seconds after the master no longer have any slaves connected for the master to free the replication backlog. A value of 0 means to never release the backlog.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "repl-backlog-ttl", 
      parameter_value: "3600", 
      source: "system", 
    }, 
    {
      allowed_values: "11-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The timeout in seconds for bulk transfer I/O during sync and master timeout from the perspective of the slave, and slave timeout from the perspective of the master.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "repl-timeout", 
      parameter_value: "60", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The amount of memory reserved for non-cache memory usage, in bytes. You may want to increase this parameter for nodes with read replicas, AOF enabled, etc, to reduce swap usage.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "reserved-memory", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The limit in the size of the set in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "set-max-intset-entries", 
      parameter_value: "512", 
      source: "system", 
    }, 
    {
      allowed_values: "yes,no", 
      change_type: "immediate", 
      data_type: "string", 
      description: "Configures if chaining of slaves is allowed", 
      is_modifiable: false, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "slave-allow-chaining", 
      parameter_value: "no", 
      source: "system", 
    }, 
    {
      allowed_values: "-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The execution time, in microseconds, to exceed in order for the command to get logged. Note that a negative number disables the slow log, while a value of zero forces the logging of every command.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "slowlog-log-slower-than", 
      parameter_value: "10000", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The length of the slow log. There is no limit to this length. Just be aware that it will consume memory. You can reclaim memory used by the slow log with SLOWLOG RESET.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "slowlog-max-len", 
      parameter_value: "128", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "If non-zero, send ACKs every given number of seconds.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "tcp-keepalive", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0,20-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "Close connection if client is idle for a given number of seconds, or never if 0.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "timeout", 
      parameter_value: "0", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The maximum number of sorted set entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "zset-max-ziplist-entries", 
      parameter_value: "128", 
      source: "system", 
    }, 
    {
      allowed_values: "0-", 
      change_type: "immediate", 
      data_type: "integer", 
      description: "The threshold of biggest sorted set entries in order for the dataset to be compressed.", 
      is_modifiable: true, 
      minimum_engine_version: "2.8.6", 
      parameter_name: "zset-max-ziplist-value", 
      parameter_value: "64", 
      source: "system", 
    }, 
  ], 
}

Request syntax with placeholder values


resp = client.describe_cache_parameters({
  cache_parameter_group_name: "String", # required
  source: "String",
  max_records: 1,
  marker: "String",
})

Response structure


resp.marker #=> String
resp.parameters #=> Array
resp.parameters[0].parameter_name #=> String
resp.parameters[0].parameter_value #=> String
resp.parameters[0].description #=> String
resp.parameters[0].source #=> String
resp.parameters[0].data_type #=> String
resp.parameters[0].allowed_values #=> String
resp.parameters[0].is_modifiable #=> Boolean
resp.parameters[0].minimum_engine_version #=> String
resp.parameters[0].change_type #=> String, one of "immediate", "requires-reboot"
resp.cache_node_type_specific_parameters #=> Array
resp.cache_node_type_specific_parameters[0].parameter_name #=> String
resp.cache_node_type_specific_parameters[0].description #=> String
resp.cache_node_type_specific_parameters[0].source #=> String
resp.cache_node_type_specific_parameters[0].data_type #=> String
resp.cache_node_type_specific_parameters[0].allowed_values #=> String
resp.cache_node_type_specific_parameters[0].is_modifiable #=> Boolean
resp.cache_node_type_specific_parameters[0].minimum_engine_version #=> String
resp.cache_node_type_specific_parameters[0].cache_node_type_specific_values #=> Array
resp.cache_node_type_specific_parameters[0].cache_node_type_specific_values[0].cache_node_type #=> String
resp.cache_node_type_specific_parameters[0].cache_node_type_specific_values[0].value #=> String
resp.cache_node_type_specific_parameters[0].change_type #=> String, one of "immediate", "requires-reboot"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_parameter_group_name (required, String)

    The name of a specific cache parameter group to return details for.

  • :source (String)

    The parameter types to return.

    Valid values: user | system | engine-default

  • :max_records (Integer)

    The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a marker is included in the response so that the remaining results can be retrieved.

    Default: 100

    Constraints: minimum 20; maximum 100.

  • :marker (String)

    An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 5637

def describe_cache_parameters(params = {}, options = {})
  req = build_request(:describe_cache_parameters, params)
  req.send_request(options)
end

#describe_cache_security_groups(params = {}) ⇒ Types::CacheSecurityGroupMessage

Returns a list of cache security group descriptions. If a cache security group name is specified, the list contains only the description of that group. This is applicable only when you have ElastiCache in a Classic setup.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: DescribeCacheSecurityGroups


# Returns a list of cache security group descriptions. If a cache security group name is specified, the list contains only
# the description of that group.

resp = client.describe_cache_security_groups({
  cache_security_group_name: "my-sec-group", 
})

Request syntax with placeholder values


resp = client.describe_cache_security_groups({
  cache_security_group_name: "String",
  max_records: 1,
  marker: "String",
})

Response structure


resp.marker #=> String
resp.cache_security_groups #=> Array
resp.cache_security_groups[0].owner_id #=> String
resp.cache_security_groups[0].cache_security_group_name #=> String
resp.cache_security_groups[0].description #=> String
resp.cache_security_groups[0].ec2_security_groups #=> Array
resp.cache_security_groups[0].ec2_security_groups[0].status #=> String
resp.cache_security_groups[0].ec2_security_groups[0].ec2_security_group_name #=> String
resp.cache_security_groups[0].ec2_security_groups[0].ec2_security_group_owner_id #=> String
resp.cache_security_groups[0].arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cache_security_group_name (String)

    The name of the cache security group to return details for.

  • :max_records (Integer)

    The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a marker is included in the response so that the remaining results can be retrieved.

    Default: 100

    Constraints: minimum 20; maximum 100.

  • :marker (String)

    An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

Returns:

See Also:



# File 'gems/aws-sdk-elasticache/lib/aws-sdk-elasticache/client.rb', line 5708

def describe_cache_security_groups(params = {}, options = {})
  req = build_request(:describe_cache_security_groups, params)
  req.send_request(options)
end

#describe_cache_subnet_groups(params = {}) ⇒ Types::CacheSubnetGroupMessage

Returns a list of cache subnet group descriptions. If a subnet group name is specified, the list contains only the description of that group. This is applicable only when you have ElastiCache in a VPC setup. All ElastiCache clusters now launch in a VPC by default.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
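
As a brief sketch of walking the nested response (no request filters are assumed), each subnet and its Availability Zone can be printed across every page:


# Walk groups -> subnets and print each subnet with its Availability Zone.
client.describe_cache_subnet_groups.each do |page|
  page.cache_subnet_groups.each do |group|
    group.subnets.each do |subnet|
      puts "#{group.cache_subnet_group_name}: #{subnet.subnet_identifier} (#{subnet.subnet_availability_zone.name})"
    end
  end
end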

Examples:

Example: DescribeCacheSubnetGroups


# Describes up to 25 cache subnet groups.

resp = client.describe_cache_subnet_groups({
  max_records: 25, 
})

resp.to_h outputs the following:
{
  cache_subnet_groups: [
    {
      cache_subnet_group_description: "Default CacheSubnetGroup", 
      cache_subnet_group_name: "default", 
      subnets: [
        {
          subnet_availability_zone: {
            name: "us-east-1a", 
          }, 
          subnet_identifier: "subnet-1a2b3c4d", 
        }, 
        {
          subnet_availability_zone: {
            name: "us-east-1c", 
          }, 
          subnet_identifier: "subnet-a1b2c3d4", 
        }, 
        {
          subnet_availability_zone: {
            name: "us-east-1e", 
          }, 
          subnet_identifier: "subnet-abcd1234", 
        }, 
        {
          subnet_availability_zone: {
            name: "us-east-1b", 
          }, 
          subnet_identifier: "subnet-1234abcd", 
        }, 
      ], 
      vpc_id: "vpc-91280df6", 
    }, 
  ], 
  marker: "", 
}

Request syntax with placeholder values


resp = client.describe_cache_subnet_groups({
  cache_subnet_group_name: "String",
  max_records: 1,
  marker: "String",
})

Response structure


resp.marker #=> String
resp.cache_subnet_groups #=> Array
resp.cache_subnet_groups[0].cache_subnet_group_name #=> String
resp.cache_subnet_groups[0].cache_subnet_group_description #=> String
resp.cache_subnet_groups[0].vpc_id #=> String
resp.cache_subnet_groups[0].subnets #=> Array
resp.cache_subnet_groups[0].subnets[0].subnet_identifier #=> String
resp.cache_subnet_groups[0].subnets[0].subnet_availability_zone.name #=> String
resp.cache_subnet_groups[0].subnets[0].subnet_outpost.subnet_outpost_arn #=> String
resp.cache_subnet_groups[0].subnets[0].supported_network_types #=> Array
resp.cache_subnet_groups[0].subnets[0].supported_network_types[0] #=> String, one of "ipv4", "ipv6", "dual_stack"
resp.