Class: Aws::Kafka::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb

Overview

An API client for Kafka. To construct a client, you need to configure a :region and :credentials.

client = Aws::Kafka::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring a region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.
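
A minimal construction sketch follows, assuming static credentials; the region, key placeholders, and variable names are illustrative and not part of the API.

require 'aws-sdk-kafka'

# Static, non-refreshing credentials; see the :credentials option below for
# the other supported credential providers.
credentials = Aws::Credentials.new('YOUR_ACCESS_KEY_ID', 'YOUR_SECRET_ACCESS_KEY')

client = Aws::Kafka::Client.new(
  region: 'us-east-1',
  credentials: credentials
)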

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a background thread polls for endpoints every 60 seconds (default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError instead of sleeping and will not retry.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to disable the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to true, the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Sets the maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Sets the maximum number of threads used to poll for endpoints to cache. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.

  • :request_checksum_calculation (String) — default: "when_supported"

    Determines when a checksum will be calculated for request payloads. Values are:

    • when_supported - (default) When set, a checksum will be calculated for all request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true and/or a requestAlgorithmMember is modeled.
    • when_required - When set, a checksum will only be calculated for request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or where a requestAlgorithmMember is modeled and supplied.
  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :response_checksum_validation (String) — default: "when_supported"

    Determines when checksum validation will be performed on response payloads. Values are:

    • when_supported - (default) When set, checksum validation is performed on all response payloads of operations modeled with the httpChecksum trait where responseAlgorithms is modeled, except when no modeled checksum algorithms are supported.
    • when_required - When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the requestValidationModeMember member is set to ENABLED.
  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name (:none, :equal, :full); otherwise, provide a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    @see https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider as the telemetry provider.
  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS-compatible endpoints will be used if available. When a FIPS region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::Kafka::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::Kafka::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has an "Expect" header set to "100-continue". Defaults to nil, which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store used to verify peer certificates.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.
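
As a minimal sketch, the snippet below combines several of the options documented above (:retry_mode, :max_attempts, :logger, :http_read_timeout, and :stub_responses); the values are illustrative, and the stubbed data follows the #delete_cluster response structure shown later on this page.

require 'logger'

client = Aws::Kafka::Client.new(
  region: 'us-east-1',
  retry_mode: 'standard',      # standardized retry rules with retry quotas
  max_attempts: 5,             # initial attempt plus up to 4 retries
  logger: Logger.new($stdout), # log each API request
  log_level: :info,
  http_read_timeout: 120       # seconds to wait for response data
)

# In tests, a stubbed client makes no HTTP requests and disables retries.
stubbed = Aws::Kafka::Client.new(stub_responses: true)
stubbed.stub_responses(:delete_cluster, {
  cluster_arn: 'arn:aws:kafka:us-east-1:123456789012:cluster/example/abc',
  state: 'DELETING',
})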

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 467

def initialize(*args)
  super
end

Instance Method Details

#batch_associate_scram_secret(params = {}) ⇒ Types::BatchAssociateScramSecretResponse

Associates one or more Scram Secrets with an Amazon MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.batch_associate_scram_secret({
  cluster_arn: "__string", # required
  secret_arn_list: ["__string"], # required
})

Response structure


resp.cluster_arn #=> String
resp.unprocessed_scram_secrets #=> Array
resp.unprocessed_scram_secrets[0].error_code #=> String
resp.unprocessed_scram_secrets[0].error_message #=> String
resp.unprocessed_scram_secrets[0].secret_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_arn (required, String)
  • :secret_arn_list (required, Array<String>)

    List of AWS Secrets Manager secret ARNs.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 504

def batch_associate_scram_secret(params = {}, options = {})
  req = build_request(:batch_associate_scram_secret, params)
  req.send_request(options)
end
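
As a usage sketch, the unprocessed_scram_secrets list in the response can be inspected to report secrets that failed to associate; cluster_arn and secret_arns are assumed to hold valid values.

resp = client.batch_associate_scram_secret({
  cluster_arn: cluster_arn,
  secret_arn_list: secret_arns,
})

# Each unprocessed entry carries the failing secret ARN plus an error code and message.
resp.unprocessed_scram_secrets.each do |secret|
  warn "#{secret.secret_arn}: #{secret.error_code} - #{secret.error_message}"
end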

#batch_disassociate_scram_secret(params = {}) ⇒ Types::BatchDisassociateScramSecretResponse

Disassociates one or more Scram Secrets from an Amazon MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.batch_disassociate_scram_secret({
  cluster_arn: "__string", # required
  secret_arn_list: ["__string"], # required
})

Response structure


resp.cluster_arn #=> String
resp.unprocessed_scram_secrets #=> Array
resp.unprocessed_scram_secrets[0].error_code #=> String
resp.unprocessed_scram_secrets[0].error_message #=> String
resp.unprocessed_scram_secrets[0].secret_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_arn (required, String)
  • :secret_arn_list (required, Array<String>)

    List of AWS Secrets Manager secret ARNs.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1814

def batch_disassociate_scram_secret(params = {}, options = {})
  req = build_request(:batch_disassociate_scram_secret, params)
  req.send_request(options)
end

#create_cluster(params = {}) ⇒ Types::CreateClusterResponse

Creates a new MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.create_cluster({
  broker_node_group_info: { # required
    broker_az_distribution: "DEFAULT", # accepts DEFAULT
    client_subnets: ["__string"], # required
    instance_type: "__stringMin5Max32", # required
    security_groups: ["__string"],
    storage_info: {
      ebs_storage_info: {
        provisioned_throughput: {
          enabled: false,
          volume_throughput: 1,
        },
        volume_size: 1,
      },
    },
    connectivity_info: {
      public_access: {
        type: "__string",
      },
      vpc_connectivity: {
        client_authentication: {
          sasl: {
            scram: {
              enabled: false,
            },
            iam: {
              enabled: false,
            },
          },
          tls: {
            enabled: false,
          },
        },
      },
    },
    zone_ids: ["__string"],
  },
  client_authentication: {
    sasl: {
      scram: {
        enabled: false,
      },
      iam: {
        enabled: false,
      },
    },
    tls: {
      certificate_authority_arn_list: ["__string"],
      enabled: false,
    },
    unauthenticated: {
      enabled: false,
    },
  },
  cluster_name: "__stringMin1Max64", # required
  configuration_info: {
    arn: "__string", # required
    revision: 1, # required
  },
  encryption_info: {
    encryption_at_rest: {
      data_volume_kms_key_id: "__string", # required
    },
    encryption_in_transit: {
      client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
      in_cluster: false,
    },
  },
  enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
  kafka_version: "__stringMin1Max128", # required
  logging_info: {
    broker_logs: { # required
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
  number_of_broker_nodes: 1, # required
  open_monitoring: {
    prometheus: { # required
      jmx_exporter: {
        enabled_in_broker: false, # required
      },
      node_exporter: {
        enabled_in_broker: false, # required
      },
    },
  },
  tags: {
    "__string" => "__string",
  },
  storage_mode: "LOCAL", # accepts LOCAL, TIERED
})

Response structure


resp.cluster_arn #=> String
resp.cluster_name #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :broker_node_group_info (required, Types::BrokerNodeGroupInfo)

    Information about the brokers.

  • :client_authentication (Types::ClientAuthentication)

    Includes all client authentication related information.

  • :cluster_name (required, String)

    The name of the cluster.

  • :configuration_info (Types::ConfigurationInfo)

    Represents the configuration that you want MSK to use for the cluster.

  • :encryption_info (Types::EncryptionInfo)

    Includes all encryption-related information.

  • :enhanced_monitoring (String)

    Specifies the level of monitoring for the MSK cluster. The possible values are DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, and PER_TOPIC_PER_PARTITION.

  • :kafka_version (required, String)

    The version of Apache Kafka.

  • :logging_info (Types::LoggingInfo)

    LoggingInfo details.

  • :number_of_broker_nodes (required, Integer)

    The number of Apache Kafka broker nodes in the Amazon MSK cluster.

  • :open_monitoring (Types::OpenMonitoringInfo)

    The settings for open monitoring.

  • :tags (Hash<String,String>)

    Create tags when creating the cluster.

  • :storage_mode (String)

    This controls storage mode for supported storage tiers.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 671

def create_cluster(params = {}, options = {})
  req = build_request(:create_cluster, params)
  req.send_request(options)
end
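
As a minimal sketch of waiting for a newly created cluster, the returned ARN can be polled with #describe_cluster until the state leaves CREATING; broker_node_group_info is assumed to be a hash built as in the request syntax above, and the name, version, and broker count are illustrative.

resp = client.create_cluster({
  broker_node_group_info: broker_node_group_info,
  cluster_name: 'example-cluster',
  kafka_version: '3.6.0',
  number_of_broker_nodes: 3,
})

state = resp.state
until %w[ACTIVE FAILED].include?(state)
  sleep 30 # cluster creation takes time; poll periodically
  state = client.describe_cluster(cluster_arn: resp.cluster_arn).cluster_info.state
end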

#create_cluster_v2(params = {}) ⇒ Types::CreateClusterV2Response

Creates a new Amazon MSK cluster of either the provisioned or the serverless type.

Examples:

Request syntax with placeholder values


resp = client.create_cluster_v2({
  cluster_name: "__stringMin1Max64", # required
  tags: {
    "__string" => "__string",
  },
  provisioned: {
    broker_node_group_info: { # required
      broker_az_distribution: "DEFAULT", # accepts DEFAULT
      client_subnets: ["__string"], # required
      instance_type: "__stringMin5Max32", # required
      security_groups: ["__string"],
      storage_info: {
        ebs_storage_info: {
          provisioned_throughput: {
            enabled: false,
            volume_throughput: 1,
          },
          volume_size: 1,
        },
      },
      connectivity_info: {
        public_access: {
          type: "__string",
        },
        vpc_connectivity: {
          client_authentication: {
            sasl: {
              scram: {
                enabled: false,
              },
              iam: {
                enabled: false,
              },
            },
            tls: {
              enabled: false,
            },
          },
        },
      },
      zone_ids: ["__string"],
    },
    client_authentication: {
      sasl: {
        scram: {
          enabled: false,
        },
        iam: {
          enabled: false,
        },
      },
      tls: {
        certificate_authority_arn_list: ["__string"],
        enabled: false,
      },
      unauthenticated: {
        enabled: false,
      },
    },
    configuration_info: {
      arn: "__string", # required
      revision: 1, # required
    },
    encryption_info: {
      encryption_at_rest: {
        data_volume_kms_key_id: "__string", # required
      },
      encryption_in_transit: {
        client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
        in_cluster: false,
      },
    },
    enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
    open_monitoring: {
      prometheus: { # required
        jmx_exporter: {
          enabled_in_broker: false, # required
        },
        node_exporter: {
          enabled_in_broker: false, # required
        },
      },
    },
    kafka_version: "__stringMin1Max128", # required
    logging_info: {
      broker_logs: { # required
        cloud_watch_logs: {
          enabled: false, # required
          log_group: "__string",
        },
        firehose: {
          delivery_stream: "__string",
          enabled: false, # required
        },
        s3: {
          bucket: "__string",
          enabled: false, # required
          prefix: "__string",
        },
      },
    },
    number_of_broker_nodes: 1, # required
    storage_mode: "LOCAL", # accepts LOCAL, TIERED
  },
  serverless: {
    vpc_configs: [ # required
      {
        subnet_ids: ["__string"], # required
        security_group_ids: ["__string"],
      },
    ],
    client_authentication: {
      sasl: {
        iam: {
          enabled: false,
        },
      },
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_name #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_name (required, String)

    The name of the cluster.

  • :tags (Hash<String,String>)

    A map of tags that you want the cluster to have.

  • :provisioned (Types::ProvisionedRequest)

    Creates a provisioned cluster.

  • :serverless (Types::ServerlessRequest)

    Creates a serverless cluster.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 832

def create_cluster_v2(params = {}, options = {})
  req = build_request(:create_cluster_v2, params)
  req.send_request(options)
end

#create_configuration(params = {}) ⇒ Types::CreateConfigurationResponse

Creates a new MSK configuration.

Examples:

Request syntax with placeholder values


resp = client.create_configuration({
  description: "__string",
  kafka_versions: ["__string"],
  name: "__string", # required
  server_properties: "data", # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer
resp.name #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :description (String)

    The description of the configuration.

  • :kafka_versions (Array<String>)

    The versions of Apache Kafka with which you can use this MSK configuration.

  • :name (required, String)

    The name of the configuration. Configuration names are strings that match the regex "^[0-9A-Za-z-]+$".

  • :server_properties (required, String, StringIO, File)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 883

def create_configuration(params = {}, options = {})
  req = build_request(:create_configuration, params)
  req.send_request(options)
end
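
As a usage sketch, server_properties can be supplied as a plain string of broker settings; the property keys are standard Apache Kafka broker settings, and the configuration name and version list are illustrative.

server_properties = <<~PROPERTIES
  auto.create.topics.enable=true
  log.retention.hours=168
PROPERTIES

resp = client.create_configuration({
  name: 'example-configuration',
  kafka_versions: ['3.6.0'],
  server_properties: server_properties,
})

resp.arn                      #=> String
resp.latest_revision.revision #=> Integer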

#create_replicator(params = {}) ⇒ Types::CreateReplicatorResponse

Creates a new Kafka Replicator.

Examples:

Request syntax with placeholder values


resp = client.create_replicator({
  description: "__stringMax1024",
  kafka_clusters: [ # required
    {
      amazon_msk_cluster: { # required
        msk_cluster_arn: "__string", # required
      },
      vpc_config: { # required
        security_group_ids: ["__string"],
        subnet_ids: ["__string"], # required
      },
    },
  ],
  replication_info_list: [ # required
    {
      consumer_group_replication: { # required
        consumer_groups_to_exclude: ["__stringMax256"],
        consumer_groups_to_replicate: ["__stringMax256"], # required
        detect_and_copy_new_consumer_groups: false,
        synchronise_consumer_group_offsets: false,
      },
      source_kafka_cluster_arn: "__string", # required
      target_compression_type: "NONE", # required, accepts NONE, GZIP, SNAPPY, LZ4, ZSTD
      target_kafka_cluster_arn: "__string", # required
      topic_replication: { # required
        copy_access_control_lists_for_topics: false,
        copy_topic_configurations: false,
        detect_and_copy_new_topics: false,
        starting_position: {
          type: "LATEST", # accepts LATEST, EARLIEST
        },
        topic_name_configuration: {
          type: "PREFIXED_WITH_SOURCE_CLUSTER_ALIAS", # accepts PREFIXED_WITH_SOURCE_CLUSTER_ALIAS, IDENTICAL
        },
        topics_to_exclude: ["__stringMax249"],
        topics_to_replicate: ["__stringMax249"], # required
      },
    },
  ],
  replicator_name: "__stringMin1Max128Pattern09AZaZ09AZaZ0", # required
  service_execution_role_arn: "__string", # required
  tags: {
    "__string" => "__string",
  },
})

Response structure


resp.replicator_arn #=> String
resp.replicator_name #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :description (String)

    A summary description of the replicator.

  • :kafka_clusters (required, Array<Types::KafkaCluster>)

    Kafka Clusters to use in setting up sources / targets for replication.

  • :replication_info_list (required, Array<Types::ReplicationInfo>)

    A list of replication configurations, where each configuration targets a given source cluster to target cluster replication flow.

  • :replicator_name (required, String)

    The name of the replicator. Alpha-numeric characters with '-' are allowed.

  • :service_execution_role_arn (required, String)

    The ARN of the IAM role used by the replicator to access resources in the customer's account (e.g., source and target clusters).

  • :tags (Hash<String,String>)

    List of tags to attach to created Replicator.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 975

def create_replicator(params = {}, options = {})
  req = build_request(:create_replicator, params)
  req.send_request(options)
end

#create_vpc_connection(params = {}) ⇒ Types::CreateVpcConnectionResponse

Creates a new Amazon MSK VPC connection.

Examples:

Request syntax with placeholder values


resp = client.create_vpc_connection({
  target_cluster_arn: "__string", # required
  authentication: "__string", # required
  vpc_id: "__string", # required
  client_subnets: ["__string"], # required
  security_groups: ["__string"], # required
  tags: {
    "__string" => "__string",
  },
})

Response structure


resp.vpc_connection_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.authentication #=> String
resp.vpc_id #=> String
resp.client_subnets #=> Array
resp.client_subnets[0] #=> String
resp.security_groups #=> Array
resp.security_groups[0] #=> String
resp.creation_time #=> Time
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :target_cluster_arn (required, String)

    The Amazon Resource Name (ARN) of the cluster.

  • :authentication (required, String)
  • :vpc_id (required, String)

    The VPC ID of the VPC connection.

  • :client_subnets (required, Array<String>)

    The list of subnets in the client VPC.

  • :security_groups (required, Array<String>)

    The list of security groups to attach to the VPC connection.

  • :tags (Hash<String,String>)

    Create tags when creating the VPC connection.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1041

def create_vpc_connection(params = {}, options = {})
  req = build_request(:create_vpc_connection, params)
  req.send_request(options)
end

#delete_cluster(params = {}) ⇒ Types::DeleteClusterResponse

Deletes the MSK cluster specified by the Amazon Resource Name (ARN) in the request.

Examples:

Request syntax with placeholder values


resp = client.delete_cluster({
  cluster_arn: "__string", # required
  current_version: "__string",
})

Response structure


resp.cluster_arn #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1074

def delete_cluster(params = {}, options = {})
  req = build_request(:delete_cluster, params)
  req.send_request(options)
end
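
As a usage sketch, the optional :current_version can be taken from a prior #describe_cluster call before deleting; cluster_arn is assumed to hold a valid cluster ARN.

current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.delete_cluster({
  cluster_arn: cluster_arn,
  current_version: current_version,
})

resp.state #=> String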

#delete_cluster_policy(params = {}) ⇒ Struct

Deletes the MSK cluster policy specified by the Amazon Resource Name (ARN) in your request.

Examples:

Request syntax with placeholder values


resp = client.delete_cluster_policy({
  cluster_arn: "__string", # required
})

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_arn (required, String)

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2689

def delete_cluster_policy(params = {}, options = {})
  req = build_request(:delete_cluster_policy, params)
  req.send_request(options)
end

#delete_configuration(params = {}) ⇒ Types::DeleteConfigurationResponse

Deletes the specified MSK configuration. The configuration must be in the ACTIVE or DELETE_FAILED state.

Examples:

Request syntax with placeholder values


resp = client.delete_configuration({
  arn: "__string", # required
})

Response structure


resp.arn #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :arn (required, String)

    The Amazon Resource Name (ARN) of the configuration.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1105

def delete_configuration(params = {}, options = {})
  req = build_request(:delete_configuration, params)
  req.send_request(options)
end

#delete_replicator(params = {}) ⇒ Types::DeleteReplicatorResponse

Deletes a replicator.

Examples:

Request syntax with placeholder values


resp = client.delete_replicator({
  current_version: "__string",
  replicator_arn: "__string", # required
})

Response structure


resp.replicator_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :current_version (String)
  • :replicator_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1137

def delete_replicator(params = {}, options = {})
  req = build_request(:delete_replicator, params)
  req.send_request(options)
end

#delete_vpc_connection(params = {}) ⇒ Types::DeleteVpcConnectionResponse

Deletes the Amazon MSK VPC connection specified in your request.

Examples:

Request syntax with placeholder values


resp = client.delete_vpc_connection({
  arn: "__string", # required
})

Response structure


resp.vpc_connection_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1166

def delete_vpc_connection(params = {}, options = {})
  req = build_request(:delete_vpc_connection, params)
  req.send_request(options)
end

#describe_cluster(params = {}) ⇒ Types::DescribeClusterResponse

Returns a description of the MSK cluster whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster({
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_info.active_operation_arn #=> String
resp.cluster_info.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info.broker_node_group_info.client_subnets #=> Array
resp.cluster_info.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info.broker_node_group_info.instance_type #=> String
resp.cluster_info.broker_node_group_info.security_groups #=> Array
resp.cluster_info.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.zone_ids #=> Array
resp.cluster_info.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info.cluster_arn #=> String
resp.cluster_info.cluster_name #=> String
resp.cluster_info.creation_time #=> Time
resp.cluster_info.current_broker_software_info.configuration_arn #=> String
resp.cluster_info.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info.current_broker_software_info.kafka_version #=> String
resp.cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info.current_version #=> String
resp.cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info.state_info.code #=> String
resp.cluster_info.state_info.message #=> String
resp.cluster_info.tags #=> Hash
resp.cluster_info.tags["__string"] #=> String
resp.cluster_info.zookeeper_connect_string #=> String
resp.cluster_info.zookeeper_connect_string_tls #=> String
resp.cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1245

def describe_cluster(params = {}, options = {})
  req = build_request(:describe_cluster, params)
  req.send_request(options)
end

#describe_cluster_operation(params = {}) ⇒ Types::DescribeClusterOperationResponse

Returns a description of the cluster operation specified by the ARN.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_operation({
  cluster_operation_arn: "__string", # required
})

Response structure


resp.cluster_operation_info.client_request_id #=> String
resp.cluster_operation_info.cluster_arn #=> String
resp.cluster_operation_info.creation_time #=> Time
resp.cluster_operation_info.end_time #=> Time
resp.cluster_operation_info.error_info.error_code #=> String
resp.cluster_operation_info.error_info.error_string #=> String
resp.cluster_operation_info.operation_steps #=> Array
resp.cluster_operation_info.operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info.operation_steps[0].step_name #=> String
resp.cluster_operation_info.operation_arn #=> String
resp.cluster_operation_info.operation_state #=> String
resp.cluster_operation_info.operation_type #=> String
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.source_cluster_info.kafka_version #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.source_cluster_info.instance_type #=> String
resp.cluster_operation_info.source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.target_cluster_info.kafka_version #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.target_cluster_info.instance_type #=> String
resp.cluster_operation_info.target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info.vpc_connection_info.owner #=> String
resp.cluster_operation_info.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.vpc_connection_info.creation_time #=> Time

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :cluster_operation_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1452

def describe_cluster_operation(params = {}, options = {})
  req = build_request(:describe_cluster_operation, params)
  req.send_request(options)
end

#describe_cluster_operation_v2(params = {}) ⇒ Types::DescribeClusterOperationV2Response

Returns a description of the cluster operation specified by the ARN.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_operation_v2({
  cluster_operation_arn: "__string", # required
})

Response structure


resp.cluster_operation_info.cluster_arn #=> String
resp.cluster_operation_info.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_operation_info.start_time #=> Time
resp.cluster_operation_info.end_time #=> Time
resp.cluster_operation_info.operation_arn #=> String
resp.cluster_operation_info.operation_state #=> String
resp.cluster_operation_info.operation_type #=> String
resp.cluster_operation_info.provisioned.operation_steps #=> Array
resp.cluster_operation_info.provisioned.operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info.provisioned.operation_steps[0].step_name #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.provisioned.source_cluster_info.kafka_version #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.instance_type #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.provisioned.target_cluster_info.kafka_version #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.instance_type #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.owner #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.provisioned.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.creation_time #=> Time
resp.cluster_operation_info.serverless.vpc_connection_info.creation_time #=> Time
resp.cluster_operation_info.serverless.vpc_connection_info.owner #=> String
resp.cluster_operation_info.serverless.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.serverless.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.serverless.vpc_connection_info.vpc_connection_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_operation_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1574

def describe_cluster_operation_v2(params = {}, options = {})
  req = build_request(:describe_cluster_operation_v2, params)
  req.send_request(options)
end
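
As a usage sketch beyond the placeholder request above, you might poll the operation until it finishes. The operation ARN below is a placeholder, and the specific in-progress state strings checked here are assumptions rather than values guaranteed by this SDK.

# Illustrative polling loop; ARN and state-string check are assumptions.
operation_arn = "arn:aws:kafka:us-east-1:111122223333:cluster-operation/ExampleCluster/abcd1234/op-1" # placeholder
loop do
  op = client.describe_cluster_operation_v2(cluster_operation_arn: operation_arn).cluster_operation_info
  puts "#{op.operation_type}: #{op.operation_state}"
  break unless op.operation_state.end_with?("PENDING", "IN_PROGRESS")
  sleep 30
end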

#describe_cluster_v2(params = {}) ⇒ Types::DescribeClusterV2Response

Returns a description of the MSK cluster of either the provisioned or the serverless type whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_v2({
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_info.active_operation_arn #=> String
resp.cluster_info.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_info.cluster_arn #=> String
resp.cluster_info.cluster_name #=> String
resp.cluster_info.creation_time #=> Time
resp.cluster_info.current_version #=> String
resp.cluster_info.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info.state_info.code #=> String
resp.cluster_info.state_info.message #=> String
resp.cluster_info.tags #=> Hash
resp.cluster_info.tags["__string"] #=> String
resp.cluster_info.provisioned.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info.provisioned.broker_node_group_info.client_subnets #=> Array
resp.cluster_info.provisioned.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info.provisioned.broker_node_group_info.instance_type #=> String
resp.cluster_info.provisioned.broker_node_group_info.security_groups #=> Array
resp.cluster_info.provisioned.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.zone_ids #=> Array
resp.cluster_info.provisioned.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info.provisioned.current_broker_software_info.configuration_arn #=> String
resp.cluster_info.provisioned.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info.provisioned.current_broker_software_info.kafka_version #=> String
resp.cluster_info.provisioned.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info.provisioned.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info.provisioned.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info.provisioned.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info.provisioned.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info.provisioned.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info.provisioned.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info.provisioned.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.provisioned.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info.provisioned.number_of_broker_nodes #=> Integer
resp.cluster_info.provisioned.zookeeper_connect_string #=> String
resp.cluster_info.provisioned.zookeeper_connect_string_tls #=> String
resp.cluster_info.provisioned.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info.provisioned.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.cluster_info.serverless.vpc_configs #=> Array
resp.cluster_info.serverless.vpc_configs[0].subnet_ids #=> Array
resp.cluster_info.serverless.vpc_configs[0].subnet_ids[0] #=> String
resp.cluster_info.serverless.vpc_configs[0].security_group_ids #=> Array
resp.cluster_info.serverless.vpc_configs[0].security_group_ids[0] #=> String
resp.cluster_info.serverless.client_authentication.sasl.iam.enabled #=> Boolean

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

    The Amazon Resource Name (ARN) that uniquely identifies the cluster.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1333

def describe_cluster_v2(params = {}, options = {})
  req = build_request(:describe_cluster_v2, params)
  req.send_request(options)
end
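
A minimal usage sketch that branches on the cluster type; the cluster ARN is a placeholder, and only fields shown in the response structure above are used.

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/ExampleCluster/abcd1234" # placeholder
info = client.describe_cluster_v2(cluster_arn: cluster_arn).cluster_info
case info.cluster_type
when "PROVISIONED"
  software = info.provisioned.current_broker_software_info
  puts "#{info.cluster_name}: #{info.provisioned.number_of_broker_nodes} brokers on Kafka #{software.kafka_version}"
when "SERVERLESS"
  puts "#{info.cluster_name}: serverless (IAM auth enabled: #{info.serverless.client_authentication.sasl.iam.enabled})"
end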

#describe_configuration(params = {}) ⇒ Types::DescribeConfigurationResponse

Returns a description of this MSK configuration.

Examples:

Request syntax with placeholder values


resp = client.describe_configuration({
  arn: "__string", # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.description #=> String
resp.kafka_versions #=> Array
resp.kafka_versions[0] #=> String
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer
resp.name #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1616

def describe_configuration(params = {}, options = {})
  req = build_request(:describe_configuration, params)
  req.send_request(options)
end

#describe_configuration_revision(params = {}) ⇒ Types::DescribeConfigurationRevisionResponse

Returns a description of this revision of the configuration.

Examples:

Request syntax with placeholder values


resp = client.describe_configuration_revision({
  arn: "__string", # required
  revision: 1, # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.description #=> String
resp.revision #=> Integer
resp.server_properties #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)
  • :revision (required, Integer)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1654

def describe_configuration_revision(params = {}, options = {})
  req = build_request(:describe_configuration_revision, params)
  req.send_request(options)
end
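
A minimal sketch that prints a revision's server properties; the configuration ARN is a placeholder, and the assumption that server_properties arrives as readable server.properties text should be verified against your SDK version.

rev = client.describe_configuration_revision(
  arn: "arn:aws:kafka:us-east-1:111122223333:configuration/ExampleConfig/abcd1234", # placeholder
  revision: 1,
)
# Assumes server_properties is returned as plain server.properties text.
rev.server_properties.each_line { |line| puts line }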

#describe_replicator(params = {}) ⇒ Types::DescribeReplicatorResponse

Returns a description of the Kafka Replicator whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_replicator({
  replicator_arn: "__string", # required
})

Response structure


resp.creation_time #=> Time
resp.current_version #=> String
resp.is_replicator_reference #=> Boolean
resp.kafka_clusters #=> Array
resp.kafka_clusters[0].amazon_msk_cluster.msk_cluster_arn #=> String
resp.kafka_clusters[0].kafka_cluster_alias #=> String
resp.kafka_clusters[0].vpc_config.security_group_ids #=> Array
resp.kafka_clusters[0].vpc_config.security_group_ids[0] #=> String
resp.kafka_clusters[0].vpc_config.subnet_ids #=> Array
resp.kafka_clusters[0].vpc_config.subnet_ids[0] #=> String
resp.replication_info_list #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_exclude #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_exclude[0] #=> String
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_replicate #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_replicate[0] #=> String
resp.replication_info_list[0].consumer_group_replication.detect_and_copy_new_consumer_groups #=> Boolean
resp.replication_info_list[0].consumer_group_replication.synchronise_consumer_group_offsets #=> Boolean
resp.replication_info_list[0].source_kafka_cluster_alias #=> String
resp.replication_info_list[0].target_compression_type #=> String, one of "NONE", "GZIP", "SNAPPY", "LZ4", "ZSTD"
resp.replication_info_list[0].target_kafka_cluster_alias #=> String
resp.replication_info_list[0].topic_replication.copy_access_control_lists_for_topics #=> Boolean
resp.replication_info_list[0].topic_replication.copy_topic_configurations #=> Boolean
resp.replication_info_list[0].topic_replication.detect_and_copy_new_topics #=> Boolean
resp.replication_info_list[0].topic_replication.starting_position.type #=> String, one of "LATEST", "EARLIEST"
resp.replication_info_list[0].topic_replication.topic_name_configuration.type #=> String, one of "PREFIXED_WITH_SOURCE_CLUSTER_ALIAS", "IDENTICAL"
resp.replication_info_list[0].topic_replication.topics_to_exclude #=> Array
resp.replication_info_list[0].topic_replication.topics_to_exclude[0] #=> String
resp.replication_info_list[0].topic_replication.topics_to_replicate #=> Array
resp.replication_info_list[0].topic_replication.topics_to_replicate[0] #=> String
resp.replicator_arn #=> String
resp.replicator_description #=> String
resp.replicator_name #=> String
resp.replicator_resource_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"
resp.service_execution_role_arn #=> String
resp.state_info.code #=> String
resp.state_info.message #=> String
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replicator_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1732

def describe_replicator(params = {}, options = {})
  req = build_request(:describe_replicator, params)
  req.send_request(options)
end
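
A sketch that summarizes each replication flow on a replicator, using only fields from the response structure above; the replicator ARN is a placeholder.

replicator_arn = "arn:aws:kafka:us-east-1:111122223333:replicator/ExampleReplicator/abcd1234" # placeholder
rep = client.describe_replicator(replicator_arn: replicator_arn)
puts "#{rep.replicator_name} (#{rep.replicator_state})"
rep.replication_info_list.each do |flow|
  topics = flow.topic_replication.topics_to_replicate.join(", ")
  puts "  #{flow.source_kafka_cluster_alias} -> #{flow.target_kafka_cluster_alias}: #{topics}"
end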

#describe_vpc_connection(params = {}) ⇒ Types::DescribeVpcConnectionResponse

Displays information about the specified Amazon MSK VPC connection.

Examples:

Request syntax with placeholder values


resp = client.describe_vpc_connection({
  arn: "__string", # required
})

Response structure


resp.vpc_connection_arn #=> String
resp.target_cluster_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.authentication #=> String
resp.vpc_id #=> String
resp.subnets #=> Array
resp.subnets[0] #=> String
resp.security_groups #=> Array
resp.security_groups[0] #=> String
resp.creation_time #=> Time
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1778

def describe_vpc_connection(params = {}, options = {})
  req = build_request(:describe_vpc_connection, params)
  req.send_request(options)
end

#get_bootstrap_brokers(params = {}) ⇒ Types::GetBootstrapBrokersResponse

A list of brokers that a client application can use to bootstrap. This list doesn't necessarily include all of the brokers in the cluster. If you don't know the ARN of your cluster, you can use the ListClusters operation to get the ARNs of all the clusters in this account and Region.

Examples:

Request syntax with placeholder values


resp = client.get_bootstrap_brokers({
  cluster_arn: "__string", # required
})

Response structure


resp.bootstrap_broker_string #=> String
resp.bootstrap_broker_string_public_sasl_iam #=> String
resp.bootstrap_broker_string_public_sasl_scram #=> String
resp.bootstrap_broker_string_public_tls #=> String
resp.bootstrap_broker_string_tls #=> String
resp.bootstrap_broker_string_sasl_scram #=> String
resp.bootstrap_broker_string_sasl_iam #=> String
resp.bootstrap_broker_string_vpc_connectivity_tls #=> String
resp.bootstrap_broker_string_vpc_connectivity_sasl_scram #=> String
resp.bootstrap_broker_string_vpc_connectivity_sasl_iam #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1865

def get_bootstrap_brokers(params = {}, options = {})
  req = build_request(:get_bootstrap_brokers, params)
  req.send_request(options)
end
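
A sketch showing one way to pick a connection string from the response. Which fields are populated depends on the cluster's client-authentication and connectivity setup, so the fallback order below is an assumption; the cluster ARN is a placeholder.

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/ExampleCluster/abcd1234" # placeholder
brokers = client.get_bootstrap_brokers(cluster_arn: cluster_arn)
# Fallback order is illustrative; use the field that matches your cluster's auth setup.
bootstrap = brokers.bootstrap_broker_string_sasl_iam ||
            brokers.bootstrap_broker_string_tls ||
            brokers.bootstrap_broker_string
bootstrap.split(",").each { |endpoint| puts endpoint }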

#get_cluster_policy(params = {}) ⇒ Types::GetClusterPolicyResponse

Retrieves the contents of the specified MSK cluster policy.

Examples:

Request syntax with placeholder values


resp = client.get_cluster_policy({
  cluster_arn: "__string", # required
})

Response structure


resp.current_version #=> String
resp.policy #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2718

def get_cluster_policy(params = {}, options = {})
  req = build_request(:get_cluster_policy, params)
  req.send_request(options)
end

#get_compatible_kafka_versions(params = {}) ⇒ Types::GetCompatibleKafkaVersionsResponse

Gets the Apache Kafka versions to which you can update the MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.get_compatible_kafka_versions({
  cluster_arn: "__string",
})

Response structure


resp.compatible_kafka_versions #=> Array
resp.compatible_kafka_versions[0].source_version #=> String
resp.compatible_kafka_versions[0].target_versions #=> Array
resp.compatible_kafka_versions[0].target_versions[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 1896

def get_compatible_kafka_versions(params = {}, options = {})
  req = build_request(:get_compatible_kafka_versions, params)
  req.send_request(options)
end
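
A minimal sketch that prints the upgrade paths returned for a cluster; the cluster ARN is a placeholder.

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/ExampleCluster/abcd1234" # placeholder
client.get_compatible_kafka_versions(cluster_arn: cluster_arn).compatible_kafka_versions.each do |ckv|
  puts "#{ckv.source_version} -> #{ckv.target_versions.join(', ')}"
end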

#list_client_vpc_connections(params = {}) ⇒ Types::ListClientVpcConnectionsResponse

Displays a list of client VPC connections.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_client_vpc_connections({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.client_vpc_connections #=> Array
resp.client_vpc_connections[0].authentication #=> String
resp.client_vpc_connections[0].creation_time #=> Time
resp.client_vpc_connections[0].state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.client_vpc_connections[0].vpc_connection_arn #=> String
resp.client_vpc_connections[0].owner #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2605

def list_client_vpc_connections(params = {}, options = {})
  req = build_request(:list_client_vpc_connections, params)
  req.send_request(options)
end

#list_cluster_operations(params = {}) ⇒ Types::ListClusterOperationsResponse

Returns a list of all the operations that have been performed on the specified MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_cluster_operations({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_operation_info_list #=> Array
resp.cluster_operation_info_list[0].client_request_id #=> String
resp.cluster_operation_info_list[0].cluster_arn #=> String
resp.cluster_operation_info_list[0].creation_time #=> Time
resp.cluster_operation_info_list[0].end_time #=> Time
resp.cluster_operation_info_list[0].error_info.error_code #=> String
resp.cluster_operation_info_list[0].error_info.error_string #=> String
resp.cluster_operation_info_list[0].operation_steps #=> Array
resp.cluster_operation_info_list[0].operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info_list[0].operation_steps[0].step_name #=> String
resp.cluster_operation_info_list[0].operation_arn #=> String
resp.cluster_operation_info_list[0].operation_state #=> String
resp.cluster_operation_info_list[0].operation_type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info_list[0].source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info_list[0].source_cluster_info.kafka_version #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info_list[0].source_cluster_info.instance_type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info_list[0].target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info_list[0].target_cluster_info.kafka_version #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info_list[0].target_cluster_info.instance_type #=> String
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.owner #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info_list[0].vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.creation_time #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2027

def list_cluster_operations(params = {}, options = {})
  req = build_request(:list_cluster_operations, params)
  req.send_request(options)
end

#list_cluster_operations_v2(params = {}) ⇒ Types::ListClusterOperationsV2Response

Returns a list of all the operations that have been performed on the specified MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_cluster_operations_v2({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_operation_info_list #=> Array
resp.cluster_operation_info_list[0].cluster_arn #=> String
resp.cluster_operation_info_list[0].cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_operation_info_list[0].start_time #=> Time
resp.cluster_operation_info_list[0].end_time #=> Time
resp.cluster_operation_info_list[0].operation_arn #=> String
resp.cluster_operation_info_list[0].operation_state #=> String
resp.cluster_operation_info_list[0].operation_type #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2072

def list_cluster_operations_v2(params = {}, options = {})
  req = build_request(:list_cluster_operations_v2, params)
  req.send_request(options)
end
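
Because the response is pageable, iterating it yields one page at a time; a minimal sketch (the cluster ARN is a placeholder):

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/ExampleCluster/abcd1234" # placeholder
client.list_cluster_operations_v2(cluster_arn: cluster_arn, max_results: 20).each do |page|
  page.cluster_operation_info_list.each do |op|
    puts "#{op.operation_type} #{op.operation_state} (started #{op.start_time})"
  end
end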

#list_clusters(params = {}) ⇒ Types::ListClustersResponse

Returns a list of all the MSK clusters in the current Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_clusters({
  cluster_name_filter: "__string",
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_info_list #=> Array
resp.cluster_info_list[0].active_operation_arn #=> String
resp.cluster_info_list[0].broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info_list[0].broker_node_group_info.client_subnets #=> Array
resp.cluster_info_list[0].broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info_list[0].broker_node_group_info.instance_type #=> String
resp.cluster_info_list[0].broker_node_group_info.security_groups #=> Array
resp.cluster_info_list[0].broker_node_group_info.security_groups[0] #=> String
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.zone_ids #=> Array
resp.cluster_info_list[0].broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info_list[0].client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info_list[0].client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info_list[0].client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info_list[0].cluster_arn #=> String
resp.cluster_info_list[0].cluster_name #=> String
resp.cluster_info_list[0].creation_time #=> Time
resp.cluster_info_list[0].current_broker_software_info.configuration_arn #=> String
resp.cluster_info_list[0].current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info_list[0].current_broker_software_info.kafka_version #=> String
resp.cluster_info_list[0].logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info_list[0].logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info_list[0].logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info_list[0].logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info_list[0].current_version #=> String
resp.cluster_info_list[0].encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info_list[0].encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info_list[0].encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info_list[0].enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info_list[0].number_of_broker_nodes #=> Integer
resp.cluster_info_list[0].open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info_list[0].state_info.code #=> String
resp.cluster_info_list[0].state_info.message #=> String
resp.cluster_info_list[0].tags #=> Hash
resp.cluster_info_list[0].tags["__string"] #=> String
resp.cluster_info_list[0].zookeeper_connect_string #=> String
resp.cluster_info_list[0].zookeeper_connect_string_tls #=> String
resp.cluster_info_list[0].storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info_list[0].customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name_filter (String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2161

def list_clusters(params = {}, options = {})
  req = build_request(:list_clusters, params)
  req.send_request(options)
end

#list_clusters_v2(params = {}) ⇒ Types::ListClustersV2Response

Returns a list of all the MSK clusters in the current Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_clusters_v2({
  cluster_name_filter: "__string",
  cluster_type_filter: "__string",
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_info_list #=> Array
resp.cluster_info_list[0].active_operation_arn #=> String
resp.cluster_info_list[0].cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_info_list[0].cluster_arn #=> String
resp.cluster_info_list[0].cluster_name #=> String
resp.cluster_info_list[0].creation_time #=> Time
resp.cluster_info_list[0].current_version #=> String
resp.cluster_info_list[0].state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info_list[0].state_info.code #=> String
resp.cluster_info_list[0].state_info.message #=> String
resp.cluster_info_list[0].tags #=> Hash
resp.cluster_info_list[0].tags["__string"] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info_list[0].provisioned.broker_node_group_info.client_subnets #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.instance_type #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.security_groups #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.zone_ids #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info_list[0].provisioned.current_broker_software_info.configuration_arn #=> String
resp.cluster_info_list[0].provisioned.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info_list[0].provisioned.current_broker_software_info.kafka_version #=> String
resp.cluster_info_list[0].provisioned.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info_list[0].provisioned.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info_list[0].provisioned.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info_list[0].provisioned.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info_list[0].provisioned.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info_list[0].provisioned.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info_list[0].provisioned.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].provisioned.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info_list[0].provisioned.number_of_broker_nodes #=> Integer
resp.cluster_info_list[0].provisioned.zookeeper_connect_string #=> String
resp.cluster_info_list[0].provisioned.zookeeper_connect_string_tls #=> String
resp.cluster_info_list[0].provisioned.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info_list[0].provisioned.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.cluster_info_list[0].serverless.vpc_configs #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].subnet_ids #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].subnet_ids[0] #=> String
resp.cluster_info_list[0].serverless.vpc_configs[0].security_group_ids #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].security_group_ids[0] #=> String
resp.cluster_info_list[0].serverless.client_authentication.sasl.iam.enabled #=> Boolean
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name_filter (String)

    Specify a prefix of the names of the clusters that you want to list. The service lists all the clusters whose names start with this prefix.

  • :cluster_type_filter (String)

    Specify either PROVISIONED or SERVERLESS.

  • :max_results (Integer)

    The maximum number of results to return in the response. If there are more results, the response includes a NextToken parameter.

  • :next_token (String)

    The paginated results marker. When the result of the operation is truncated, the call returns NextToken in the response. To get the next batch, provide this token in your next request.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2268

def list_clusters_v2(params = {}, options = {})
  req = build_request(:list_clusters_v2, params)
  req.send_request(options)
end
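
Since the pageable response is Enumerable over pages, items can be collected across all pages in one pass; a minimal sketch:

# flat_map gathers cluster_info_list entries from every page of results.
provisioned_arns = client.list_clusters_v2(cluster_type_filter: "PROVISIONED")
                         .flat_map(&:cluster_info_list)
                         .map(&:cluster_arn)
puts provisioned_arns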

#list_configuration_revisions(params = {}) ⇒ Types::ListConfigurationRevisionsResponse

Returns a list of all the revisions of an MSK configuration.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_configuration_revisions({
  arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.revisions #=> Array
resp.revisions[0].creation_time #=> Time
resp.revisions[0].description #=> String
resp.revisions[0].revision #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2308

def list_configuration_revisions(params = {}, options = {})
  req = build_request(:list_configuration_revisions, params)
  req.send_request(options)
end

#list_configurations(params = {}) ⇒ Types::ListConfigurationsResponse

Returns a list of all the MSK configurations in this Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_configurations({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.configurations #=> Array
resp.configurations[0].arn #=> String
resp.configurations[0].creation_time #=> Time
resp.configurations[0].description #=> String
resp.configurations[0].kafka_versions #=> Array
resp.configurations[0].kafka_versions[0] #=> String
resp.configurations[0].latest_revision.creation_time #=> Time
resp.configurations[0].latest_revision.description #=> String
resp.configurations[0].latest_revision.revision #=> Integer
resp.configurations[0].name #=> String
resp.configurations[0].state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2352

def list_configurations(params = {}, options = {})
  req = build_request(:list_configurations, params)
  req.send_request(options)
end

#list_kafka_versions(params = {}) ⇒ Types::ListKafkaVersionsResponse

Returns a list of Apache Kafka versions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_kafka_versions({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.kafka_versions #=> Array
resp.kafka_versions[0].version #=> String
resp.kafka_versions[0].status #=> String, one of "ACTIVE", "DEPRECATED"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2388

def list_kafka_versions(params = {}, options = {})
  req = build_request(:list_kafka_versions, params)
  req.send_request(options)
end

#list_nodes(params = {}) ⇒ Types::ListNodesResponse

Returns a list of the broker nodes in the cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_nodes({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.node_info_list #=> Array
resp.node_info_list[0].added_to_cluster_time #=> String
resp.node_info_list[0].broker_node_info.attached_eni_id #=> String
resp.node_info_list[0].broker_node_info.broker_id #=> Float
resp.node_info_list[0].broker_node_info.client_subnet #=> String
resp.node_info_list[0].broker_node_info.client_vpc_ip_address #=> String
resp.node_info_list[0].broker_node_info.current_broker_software_info.configuration_arn #=> String
resp.node_info_list[0].broker_node_info.current_broker_software_info.configuration_revision #=> Integer
resp.node_info_list[0].broker_node_info.current_broker_software_info.kafka_version #=> String
resp.node_info_list[0].broker_node_info.endpoints #=> Array
resp.node_info_list[0].broker_node_info.endpoints[0] #=> String
resp.node_info_list[0].controller_node_info.endpoints #=> Array
resp.node_info_list[0].controller_node_info.endpoints[0] #=> String
resp.node_info_list[0].instance_type #=> String
resp.node_info_list[0].node_arn #=> String
resp.node_info_list[0].node_type #=> String, one of "BROKER"
resp.node_info_list[0].zookeeper_node_info.attached_eni_id #=> String
resp.node_info_list[0].zookeeper_node_info.client_vpc_ip_address #=> String
resp.node_info_list[0].zookeeper_node_info.endpoints #=> Array
resp.node_info_list[0].zookeeper_node_info.endpoints[0] #=> String
resp.node_info_list[0].zookeeper_node_info.zookeeper_id #=> Float
resp.node_info_list[0].zookeeper_node_info.zookeeper_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2446

def list_nodes(params = {}, options = {})
  req = build_request(:list_nodes, params)
  req.send_request(options)
end
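
A sketch that prints each broker's endpoints across all pages; the cluster ARN is a placeholder and the nil guard is purely defensive.

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/ExampleCluster/abcd1234" # placeholder
client.list_nodes(cluster_arn: cluster_arn).each do |page|
  page.node_info_list.each do |node|
    next unless node.broker_node_info # defensive: skip entries without broker details
    puts "broker #{node.broker_node_info.broker_id.to_i}: #{node.broker_node_info.endpoints.join(', ')}"
  end
end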

#list_replicators(params = {}) ⇒ Types::ListReplicatorsResponse

Lists the replicators.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_replicators({
  max_results: 1,
  next_token: "__string",
  replicator_name_filter: "__string",
})

Response structure


resp.next_token #=> String
resp.replicators #=> Array
resp.replicators[0].creation_time #=> Time
resp.replicators[0].current_version #=> String
resp.replicators[0].is_replicator_reference #=> Boolean
resp.replicators[0].kafka_clusters_summary #=> Array
resp.replicators[0].kafka_clusters_summary[0].amazon_msk_cluster.msk_cluster_arn #=> String
resp.replicators[0].kafka_clusters_summary[0].kafka_cluster_alias #=> String
resp.replicators[0].replication_info_summary_list #=> Array
resp.replicators[0].replication_info_summary_list[0].source_kafka_cluster_alias #=> String
resp.replicators[0].replication_info_summary_list[0].target_kafka_cluster_alias #=> String
resp.replicators[0].replicator_arn #=> String
resp.replicators[0].replicator_name #=> String
resp.replicators[0].replicator_resource_arn #=> String
resp.replicators[0].replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)
  • :replicator_name_filter (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2496

def list_replicators(params = {}, options = {})
  req = build_request(:list_replicators, params)
  req.send_request(options)
end

#list_scram_secrets(params = {}) ⇒ Types::ListScramSecretsResponse

Returns a list of the SCRAM secrets associated with an Amazon MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_scram_secrets({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.secret_arn_list #=> Array
resp.secret_arn_list[0] #=> String
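
Illustrative sketch (not part of the generated reference) that collects every SCRAM secret ARN across pages. The region and cluster ARN are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN

secret_arns = []
client.list_scram_secrets(cluster_arn: cluster_arn).each_page do |page|
  secret_arns.concat(page.secret_arn_list)
end
puts "#{secret_arns.size} SCRAM secret(s) attached to the cluster"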

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2535

def list_scram_secrets(params = {}, options = {})
  req = build_request(:list_scram_secrets, params)
  req.send_request(options)
end

#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse

Returns a list of the tags associated with the specified resource.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "__string", # required
})

Response structure


resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2563

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end

#list_vpc_connections(params = {}) ⇒ Types::ListVpcConnectionsResponse

Displays a list of Amazon MSK VPC connections.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_vpc_connections({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.vpc_connections #=> Array
resp.vpc_connections[0].vpc_connection_arn #=> String
resp.vpc_connections[0].target_cluster_arn #=> String
resp.vpc_connections[0].creation_time #=> Time
resp.vpc_connections[0].authentication #=> String
resp.vpc_connections[0].vpc_id #=> String
resp.vpc_connections[0].state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
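
Illustrative pagination sketch (not part of the generated reference) that lists every VPC connection with its state. The region is a placeholder; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region

client.list_vpc_connections.each_page do |page|
  page.vpc_connections.each do |conn|
    puts "#{conn.vpc_connection_arn} -> #{conn.target_cluster_arn} (#{conn.state})"
  end
end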

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2645

def list_vpc_connections(params = {}, options = {})
  req = build_request(:list_vpc_connections, params)
  req.send_request(options)
end

#put_cluster_policy(params = {}) ⇒ Types::PutClusterPolicyResponse

Creates or updates the specified MSK cluster policy. If updating the policy, the currentVersion field is required in the request payload.

Examples:

Request syntax with placeholder values


resp = client.put_cluster_policy({
  cluster_arn: "__string", # required
  current_version: "__string",
  policy: "__string", # required
})

Response structure


resp.current_version #=> String
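
Illustrative sketch (not part of the generated reference): the policy must be passed as a JSON string. The policy document shown (granting another account cross-account VPC connectivity) is an assumption for illustration only, as are the region, ARNs, and account IDs; author the policy your use case actually requires.


require "aws-sdk-kafka"
require "json"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN

# Illustrative policy document; actions and principal are assumptions for this sketch.
policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect"    => "Allow",
    "Principal" => { "AWS" => "arn:aws:iam::111122223333:root" }, # placeholder account
    "Action"    => ["kafka:CreateVpcConnection", "kafka:GetBootstrapBrokers"],
    "Resource"  => cluster_arn,
  }],
}

# current_version is omitted here because a new policy is being created.
resp = client.put_cluster_policy(cluster_arn: cluster_arn, policy: JSON.generate(policy))
puts resp.current_version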

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (String)
  • :policy (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2752

def put_cluster_policy(params = {}, options = {})
  req = build_request(:put_cluster_policy, params)
  req.send_request(options)
end

#reboot_broker(params = {}) ⇒ Types::RebootBrokerResponse

Executes a reboot on a broker.

Examples:

Request syntax with placeholder values


resp = client.reboot_broker({
  broker_ids: ["__string"], # required
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
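
Illustrative sketch (not part of the generated reference): broker IDs are passed as strings. The region, cluster ARN, and broker ID are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN

resp = client.reboot_broker(cluster_arn: cluster_arn, broker_ids: ["1"]) # "1" is a placeholder broker ID
puts resp.cluster_operation_arn # track progress with describe_cluster_operation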

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :broker_ids (required, Array<String>)

The list of broker IDs to be rebooted.

  • :cluster_arn (required, String)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2785

def reboot_broker(params = {}, options = {})
  req = build_request(:reboot_broker, params)
  req.send_request(options)
end

#reject_client_vpc_connection(params = {}) ⇒ Struct

Rejects a client VPC connection request for the specified MSK cluster and returns an empty response.

Examples:

Request syntax with placeholder values


resp = client.reject_client_vpc_connection({
  cluster_arn: "__string", # required
  vpc_connection_arn: "__string", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :vpc_connection_arn (required, String)

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2667

def reject_client_vpc_connection(params = {}, options = {})
  req = build_request(:reject_client_vpc_connection, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Adds tags to the specified MSK resource.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "__string", # required
  tags: { # required
    "__string" => "__string",
  },
})
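
Illustrative sketch (not part of the generated reference) that applies two tags to a cluster. The region, resource ARN, and tag keys/values are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
resource_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN

client.tag_resource(
  resource_arn: resource_arn,
  tags: { "Environment" => "production", "Team" => "data-platform" } # example tags
)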

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)
  • :tags (required, Hash<String,String>)

The key-value pairs to apply to the resource as tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2812

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#untag_resource(params = {}) ⇒ Struct

Removes the tags associated with the keys that are provided in the query.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "__string", # required
  tag_keys: ["__string"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)
  • :tag_keys (required, Array<String>)

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2837

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end

#update_broker_count(params = {}) ⇒ Types::UpdateBrokerCountResponse

Updates the number of broker nodes in the cluster. You can use this operation to increase the number of brokers in an existing cluster. You can't decrease the number of brokers.

Examples:

Request syntax with placeholder values


resp = client.update_broker_count({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_number_of_broker_nodes: 1, # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
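
Illustrative sketch (not part of the generated reference): the cluster's current version is fetched with describe_cluster before requesting a larger broker count. The region, cluster ARN, and target count are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN

# UpdateBrokerCount requires the cluster's current version.
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_broker_count(
  cluster_arn: cluster_arn,
  current_version: current_version,
  target_number_of_broker_nodes: 6 # must be greater than or equal to the current broker count
)
puts resp.cluster_operation_arn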

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The current version of the cluster.

  • :target_number_of_broker_nodes (required, Integer)

    The number of broker nodes that you want the cluster to have after this operation completes successfully.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2877

def update_broker_count(params = {}, options = {})
  req = build_request(:update_broker_count, params)
  req.send_request(options)
end

#update_broker_storage(params = {}) ⇒ Types::UpdateBrokerStorageResponse

Updates the EBS storage associated with MSK brokers.

Examples:

Request syntax with placeholder values


resp = client.update_broker_storage({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_broker_ebs_volume_info: [ # required
    {
      kafka_broker_node_id: "__string", # required
      provisioned_throughput: {
        enabled: false,
        volume_throughput: 1,
      },
      volume_size_gb: 1,
    },
  ],
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
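
Illustrative sketch (not part of the generated reference) that grows broker storage. The region, cluster ARN, and volume size are placeholders, and the "All" broker-ID value used to target every broker is an assumption — verify the accepted values before use. Credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_broker_storage(
  cluster_arn: cluster_arn,
  current_version: current_version,
  target_broker_ebs_volume_info: [
    {
      kafka_broker_node_id: "All", # assumed value for "every broker"; a numeric broker ID string is the documented form
      volume_size_gb: 1100,        # must be > 100 and at most 16384 GiB per broker
    },
  ]
)
puts resp.cluster_operation_arn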

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

The version of the cluster to update from. A successful operation generates a new version.

  • :target_broker_ebs_volume_info (required, Array<Types::BrokerEBSVolumeInfo>)

    Describes the target volume size and the ID of the broker to apply the update to.

The target volume size that you specify must be a whole number of GiB greater than 100.

    The storage per broker after the update operation can't exceed 16384 GiB.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2969

def update_broker_storage(params = {}, options = {})
  req = build_request(:update_broker_storage, params)
  req.send_request(options)
end

#update_broker_type(params = {}) ⇒ Types::UpdateBrokerTypeResponse

Updates all the brokers in the cluster to the specified type.

Examples:

Request syntax with placeholder values


resp = client.update_broker_type({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_instance_type: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
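
Illustrative sketch (not part of the generated reference): changes all brokers to a different instance type. The region, cluster ARN, and the "kafka.m5.xlarge" type string are illustrative values; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_broker_type(
  cluster_arn: cluster_arn,
  current_version: current_version,
  target_instance_type: "kafka.m5.xlarge" # illustrative broker type
)
puts resp.cluster_operation_arn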

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The current version of the cluster.

  • :target_instance_type (required, String)

    The Amazon MSK broker type that you want all of the brokers in this cluster to be.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 2915

def update_broker_type(params = {}, options = {})
  req = build_request(:update_broker_type, params)
  req.send_request(options)
end

#update_cluster_configuration(params = {}) ⇒ Types::UpdateClusterConfigurationResponse

Updates the cluster with the configuration that is specified in the request body.

Examples:

Request syntax with placeholder values


resp = client.update_cluster_configuration({
  cluster_arn: "__string", # required
  configuration_info: { # required
    arn: "__string", # required
    revision: 1, # required
  },
  current_version: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
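
Illustrative sketch (not part of the generated reference): applies a specific revision of an MSK configuration to the cluster. The region, ARNs, and revision number are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234"                 # placeholder ARN
configuration_arn = "arn:aws:kafka:us-east-1:123456789012:configuration/demo-config/ef56"  # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_cluster_configuration(
  cluster_arn: cluster_arn,
  current_version: current_version,
  configuration_info: { arn: configuration_arn, revision: 3 } # revision number is illustrative
)
puts resp.cluster_operation_arn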

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :configuration_info (required, Types::ConfigurationInfo)

    Represents the configuration that you want MSK to use for the cluster.

  • :current_version (required, String)

    The version of the cluster that you want to update.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3050

def update_cluster_configuration(params = {}, options = {})
  req = build_request(:update_cluster_configuration, params)
  req.send_request(options)
end

#update_cluster_kafka_version(params = {}) ⇒ Types::UpdateClusterKafkaVersionResponse

Updates the Apache Kafka version for the cluster.

Examples:

Request syntax with placeholder values


resp = client.update_cluster_kafka_version({
  cluster_arn: "__string", # required
  configuration_info: {
    arn: "__string", # required
    revision: 1, # required
  },
  current_version: "__string", # required
  target_kafka_version: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
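
Illustrative sketch (not part of the generated reference): upgrades the cluster to a newer Apache Kafka version. The region, cluster ARN, and the "3.6.0" version string are illustrative; pick a version the cluster can actually upgrade to. Credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_cluster_kafka_version(
  cluster_arn: cluster_arn,
  current_version: current_version,
  target_kafka_version: "3.6.0" # illustrative target version
)
puts resp.cluster_operation_arn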

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :configuration_info (Types::ConfigurationInfo)

    Specifies the configuration to use for the brokers.

  • :current_version (required, String)

    Current cluster version.

  • :target_kafka_version (required, String)

    Target Apache Kafka version.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3094

def update_cluster_kafka_version(params = {}, options = {})
  req = build_request(:update_cluster_kafka_version, params)
  req.send_request(options)
end

#update_configuration(params = {}) ⇒ Types::UpdateConfigurationResponse

Updates an existing MSK configuration. The configuration must be in the Active state.

Examples:

Request syntax with placeholder values


resp = client.update_configuration({
  arn: "__string", # required
  description: "__string",
  server_properties: "data", # required
})

Response structure


resp.arn #=> String
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

    The Amazon Resource Name (ARN) of the configuration.

  • :description (String)

    The description of the configuration.

  • :server_properties (required, String, StringIO, File)

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3009

def update_configuration(params = {}, options = {})
  req = build_request(:update_configuration, params)
  req.send_request(options)
end

#update_connectivity(params = {}) ⇒ Types::UpdateConnectivityResponse

Updates the connectivity configuration for the MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.update_connectivity({
  cluster_arn: "__string", # required
  connectivity_info: { # required
    public_access: {
      type: "__string",
    },
    vpc_connectivity: {
      client_authentication: {
        sasl: {
          scram: {
            enabled: false,
          },
          iam: {
            enabled: false,
          },
        },
        tls: {
          enabled: false,
        },
      },
    },
  },
  current_version: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
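
Illustrative sketch (not part of the generated reference): turns on public access for the cluster. The "SERVICE_PROVIDED_EIPS" public-access type string is an assumption taken from the MSK public access documentation — verify the supported values before use. The region and cluster ARN are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_connectivity(
  cluster_arn: cluster_arn,
  current_version: current_version,
  connectivity_info: { public_access: { type: "SERVICE_PROVIDED_EIPS" } } # assumed value for enabling public access
)
puts resp.cluster_operation_arn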

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :connectivity_info (required, Types::ConnectivityInfo)

    Information about the broker access configuration.

  • :current_version (required, String)

    The current version of the cluster.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3150

def update_connectivity(params = {}, options = {})
  req = build_request(:update_connectivity, params)
  req.send_request(options)
end

#update_monitoring(params = {}) ⇒ Types::UpdateMonitoringResponse

Updates the monitoring settings for the cluster. You can use this operation to specify which Apache Kafka metrics you want Amazon MSK to send to Amazon CloudWatch. You can also specify settings for open monitoring with Prometheus.

Examples:

Request syntax with placeholder values


resp = client.update_monitoring({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
  open_monitoring: {
    prometheus: { # required
      jmx_exporter: {
        enabled_in_broker: false, # required
      },
      node_exporter: {
        enabled_in_broker: false, # required
      },
    },
  },
  logging_info: {
    broker_logs: { # required
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
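
Illustrative sketch (not part of the generated reference): switches enhanced monitoring to the PER_BROKER level and enables both Prometheus exporters. The region and cluster ARN are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_monitoring(
  cluster_arn: cluster_arn,
  current_version: current_version,
  enhanced_monitoring: "PER_BROKER",
  open_monitoring: {
    prometheus: {
      jmx_exporter: { enabled_in_broker: true },
      node_exporter: { enabled_in_broker: true },
    },
  }
)
puts resp.cluster_operation_arn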

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

The version of the cluster to update from. A successful operation generates a new version.

  • :enhanced_monitoring (String)

    Specifies which Apache Kafka metrics Amazon MSK gathers and sends to Amazon CloudWatch for this cluster.

  • :open_monitoring (Types::OpenMonitoringInfo)

    The settings for open monitoring.

  • :logging_info (Types::LoggingInfo)

    LoggingInfo details.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3225

def update_monitoring(params = {}, options = {})
  req = build_request(:update_monitoring, params)
  req.send_request(options)
end

#update_replication_info(params = {}) ⇒ Types::UpdateReplicationInfoResponse

Updates the replication information of a replicator.

Examples:

Request syntax with placeholder values


resp = client.update_replication_info({
  consumer_group_replication: {
    consumer_groups_to_exclude: ["__stringMax256"], # required
    consumer_groups_to_replicate: ["__stringMax256"], # required
    detect_and_copy_new_consumer_groups: false, # required
    synchronise_consumer_group_offsets: false, # required
  },
  current_version: "__string", # required
  replicator_arn: "__string", # required
  source_kafka_cluster_arn: "__string", # required
  target_kafka_cluster_arn: "__string", # required
  topic_replication: {
    copy_access_control_lists_for_topics: false, # required
    copy_topic_configurations: false, # required
    detect_and_copy_new_topics: false, # required
    topics_to_exclude: ["__stringMax249"], # required
    topics_to_replicate: ["__stringMax249"], # required
  },
})

Response structure


resp.replicator_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"
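
Illustrative sketch (not part of the generated reference): updates only the topic replication settings of an existing replicator. All ARNs, the topic pattern, and the current-version string are placeholders; the current version can be obtained from describe_replicator. Credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region

replicator_arn           = "arn:aws:kafka:us-east-1:123456789012:replicator/demo-replicator/1234" # placeholder
source_kafka_cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/source/abcd"              # placeholder
target_kafka_cluster_arn = "arn:aws:kafka:us-west-2:123456789012:cluster/target/efgh"              # placeholder
current_version          = "K1234567890"                                                            # illustrative value

resp = client.update_replication_info(
  replicator_arn: replicator_arn,
  current_version: current_version,
  source_kafka_cluster_arn: source_kafka_cluster_arn,
  target_kafka_cluster_arn: target_kafka_cluster_arn,
  topic_replication: {
    topics_to_replicate: ["orders.*"], # illustrative topic pattern
    topics_to_exclude: [],
    detect_and_copy_new_topics: true,
    copy_topic_configurations: true,
    copy_access_control_lists_for_topics: false,
  }
)
puts resp.replicator_state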

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :consumer_group_replication (Types::ConsumerGroupReplicationUpdate)

    Updated consumer group replication information.

  • :current_version (required, String)

    Current replicator version.

  • :replicator_arn (required, String)
  • :source_kafka_cluster_arn (required, String)

    The ARN of the source Kafka cluster.

  • :target_kafka_cluster_arn (required, String)

    The ARN of the target Kafka cluster.

  • :topic_replication (Types::TopicReplicationUpdate)

    Updated topic replication information.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3285

def update_replication_info(params = {}, options = {})
  req = build_request(:update_replication_info, params)
  req.send_request(options)
end

#update_security(params = {}) ⇒ Types::UpdateSecurityResponse

You can use this operation to update the encryption and authentication settings for an existing cluster.

Examples:

Request syntax with placeholder values


resp = client.update_security({
  client_authentication: {
    sasl: {
      scram: {
        enabled: false,
      },
      iam: {
        enabled: false,
      },
    },
    tls: {
      certificate_authority_arn_list: ["__string"],
      enabled: false,
    },
    unauthenticated: {
      enabled: false,
    },
  },
  cluster_arn: "__string", # required
  current_version: "__string", # required
  encryption_info: {
    encryption_at_rest: {
      data_volume_kms_key_id: "__string", # required
    },
    encryption_in_transit: {
      client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
      in_cluster: false,
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :client_authentication (Types::ClientAuthentication)

    Includes all client authentication related information.

  • :cluster_arn (required, String)
  • :current_version (required, String)

    You can use the DescribeCluster operation to get the current version of the cluster. After the security update is complete, the cluster will have a new version.

  • :encryption_info (Types::EncryptionInfo)

    Includes all encryption-related information.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3353

def update_security(params = {}, options = {})
  req = build_request(:update_security, params)
  req.send_request(options)
end

#update_storage(params = {}) ⇒ Types::UpdateStorageResponse

Updates the cluster broker volume size, or sets the cluster storage mode to TIERED.

Examples:

Request syntax with placeholder values


resp = client.update_storage({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  provisioned_throughput: {
    enabled: false,
    volume_throughput: 1,
  },
  storage_mode: "LOCAL", # accepts LOCAL, TIERED
  volume_size_gb: 1,
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String
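
Illustrative sketch (not part of the generated reference): switches the cluster to tiered storage without changing the broker volume size. The region and cluster ARN are placeholders; credentials are assumed to come from the environment.


require "aws-sdk-kafka"

client = Aws::Kafka::Client.new(region: "us-east-1") # assumed region
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234" # placeholder ARN
current_version = client.describe_cluster(cluster_arn: cluster_arn).cluster_info.current_version

resp = client.update_storage(
  cluster_arn: cluster_arn,
  current_version: current_version,
  storage_mode: "TIERED" # switch the cluster to tiered storage
)
puts resp.cluster_operation_arn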

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

The version of the cluster to update from. A successful operation generates a new version.

  • :provisioned_throughput (Types::ProvisionedThroughput)

    EBS volume provisioned throughput information.

  • :storage_mode (String)

    Controls storage mode for supported storage tiers.

  • :volume_size_gb (Integer)

The size, in GiB, of the EBS volume to update.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-kafka/lib/aws-sdk-kafka/client.rb', line 3403

def update_storage(params = {}, options = {})
  req = build_request(:update_storage, params)
  req.send_request(options)
end