You are viewing documentation for version 2 of the AWS SDK for Ruby. Documentation for version 3 is published separately.

Class: Aws::IoTAnalytics::Client

Inherits:
Seahorse::Client::Base
Defined in:
(unknown)

Overview

An API client for AWS IoT Analytics. To construct a client, you need to configure a :region and :credentials.

iotanalytics = Aws::IoTAnalytics::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

  • ENV['AWS_REGION']
  • Aws.config[:region]

See the AWS Regions and Endpoints documentation for a list of supported regions.

Credentials

Default credentials are loaded automatically from the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Aws.config[:credentials]
  • The shared credentials ini file at ~/.aws/credentials
  • From an instance profile when running on EC2

You can also construct a credentials object from one of the following classes:

  • Aws::Credentials
  • Aws::SharedCredentials
  • Aws::InstanceProfileCredentials

Alternatively, you can configure credentials with :access_key_id and :secret_access_key:

# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))

Aws::IoTAnalytics::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Constructor

API Operations

Instance Method Summary

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options = {}) ⇒ Aws::IoTAnalytics::Client

Constructs an API client.

Options Hash (options):

  • :access_key_id (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :active_endpoint_cache (Boolean)

    When set to true, a background thread polls for new endpoints every 60 seconds (the default interval). Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.

  • :credentials (required, Credentials)

    Your AWS credentials. The following locations will be searched in order for credentials:

    • :access_key_id, :secret_access_key, and :session_token options
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • HOME/.aws/credentials shared credentials file
    • EC2 instance profile credentials

    See Plugins::RequestSigner for more details.
  • :disable_host_prefix_injection (Boolean)

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when one is available. See Plugins::EndpointPattern for more details.

  • :endpoint (String)

    A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.

  • :endpoint_cache_max_entries (Integer)

    The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_max_threads (Integer)

    The maximum number of threads used to poll for endpoints to cache. Defaults to 10. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_poll_interval (Integer)

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds. See Plugins::EndpointDiscovery for more details.

  • :endpoint_discovery (Boolean)

    When set to true, endpoint discovery will be enabled for operations when available. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :http_continue_timeout (Float) — default: 1

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_idle_timeout (Integer) — default: 5

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_open_timeout (Integer) — default: 15

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_proxy (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_read_timeout (Integer) — default: 60

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_wire_trace (Boolean) — default: false

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :log_level (Symbol) — default: :info

    The log level at which messages are sent to the logger. See Plugins::Logging for more details.

  • :log_formatter (Logging::LogFormatter)

    The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.

  • :logger (Logger) — default: nil

    The Logger instance to send log messages to. If this option is not set, logging will be disabled. See Plugins::Logging for more details.

  • :profile (String)

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.

  • :region (required, String)

    The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks AMAZON_REGION and AWS_DEFAULT_REGION. See Plugins::RegionalEndpoint for more details.

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors and auth errors from expired credentials. See Plugins::RetryErrors for more details.

  • :secret_access_key (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :session_token (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :ssl_ca_bundle (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_directory (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_store (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_verify_peer (Boolean) — default: true

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled. See Plugins::StubResponses for more details.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.

Instance Method Details

#batch_put_message(options = {}) ⇒ Types::BatchPutMessageResponse

Sends messages to a channel.

Examples:

Request syntax with placeholder values


resp = client.batch_put_message({
  channel_name: "ChannelName", # required
  messages: [ # required
    {
      message_id: "MessageId", # required
      payload: "data", # required
    },
  ],
})

Response structure


resp.batch_put_message_error_entries #=> Array
resp.batch_put_message_error_entries[0].message_id #=> String
resp.batch_put_message_error_entries[0].error_code #=> String
resp.batch_put_message_error_entries[0].error_message #=> String

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel where the messages are sent.

  • :messages (required, Array<Types::Message>)

    The list of messages to be sent. Each message has the format: '{ "messageId": "string", "payload": "string" }'.

    Note that the field names of message payloads (data) that you send to AWS IoT Analytics:

    • Must contain only alphanumeric characters and underscores (_); no other special characters are allowed.

    • Must begin with an alphabetic character or single underscore (_).

    • Cannot contain hyphens (-).

    • In regular expression terms: "^[A-Za-z_]([A-Za-z0-9]*|[A-Za-z0-9][A-Za-z0-9_]*)$".

    • Cannot be greater than 255 characters.

    • Are case-insensitive. (Fields named "foo" and "FOO" in the same payload are considered duplicates.)

    For example, {"temp_01": 29} or {"_temp_01": 29} are valid, but {"temp-01": 29}, {"01_temp": 29} or {"__temp_01": 29} are invalid in message payloads.
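The rules above can be checked locally before calling #batch_put_message. A sketch built around the regular expression quoted in the rules; the valid_field_name? helper is ours, not part of the SDK:

```ruby
# Pattern quoted in the BatchPutMessage field-name rules.
FIELD_NAME_PATTERN = /\A[A-Za-z_]([A-Za-z0-9]*|[A-Za-z0-9][A-Za-z0-9_]*)\z/

# Hypothetical helper: true when a payload field name satisfies every
# rule (allowed charset, leading character, maximum length).
def valid_field_name?(name)
  name.length <= 255 && FIELD_NAME_PATTERN.match?(name)
end

valid_field_name?('temp_01')   #=> true
valid_field_name?('_temp_01')  #=> true
valid_field_name?('temp-01')   #=> false (hyphen)
valid_field_name?('01_temp')   #=> false (leading digit)
valid_field_name?('__temp_01') #=> false (double leading underscore)
```

Note this only checks field names; case-insensitive duplicate detection across a payload would be a separate check.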

Returns:

  • (Types::BatchPutMessageResponse)

#cancel_pipeline_reprocessing(options = {}) ⇒ Struct

Cancels the reprocessing of data through the pipeline.

Examples:

Request syntax with placeholder values


resp = client.cancel_pipeline_reprocessing({
  pipeline_name: "PipelineName", # required
  reprocessing_id: "ReprocessingId", # required
})

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline for which data reprocessing is canceled.

  • :reprocessing_id (required, String)

    The ID of the reprocessing task (returned by "StartPipelineReprocessing").

Returns:

  • (Struct)

    Returns an empty response.

#create_channel(options = {}) ⇒ Types::CreateChannelResponse

Creates a channel. A channel collects data from an MQTT topic and archives the raw, unprocessed messages before publishing the data to a pipeline.

Examples:

Request syntax with placeholder values


resp = client.create_channel({
  channel_name: "ChannelName", # required
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.channel_name #=> String
resp.channel_arn #=> String
resp.retention_period.unlimited #=> true/false
resp.retention_period.number_of_days #=> Integer

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel.

  • :retention_period (Types::RetentionPeriod)

    How long, in days, message data is kept for the channel.

  • :tags (Array<Types::Tag>)

    Metadata which can be used to manage the channel.

Returns:

  • (Types::CreateChannelResponse)

#create_dataset(options = {}) ⇒ Types::CreateDatasetResponse

Creates a data set. A data set stores data retrieved from a data store by applying a "queryAction" (a SQL query) or a "containerAction" (executing a containerized application). This operation creates the skeleton of a data set. The data set can be populated manually by calling "CreateDatasetContent" or automatically according to a "trigger" you specify.

Examples:

Request syntax with placeholder values


resp = client.create_dataset({
  dataset_name: "DatasetName", # required
  actions: [ # required
    {
      action_name: "DatasetActionName",
      query_action: {
        sql_query: "SqlQuery", # required
        filters: [
          {
            delta_time: {
              offset_seconds: 1, # required
              time_expression: "TimeExpression", # required
            },
          },
        ],
      },
      container_action: {
        image: "Image", # required
        execution_role_arn: "RoleArn", # required
        resource_configuration: { # required
          compute_type: "ACU_1", # required, accepts ACU_1, ACU_2
          volume_size_in_gb: 1, # required
        },
        variables: [
          {
            name: "VariableName", # required
            string_value: "StringValue",
            double_value: 1.0,
            dataset_content_version_value: {
              dataset_name: "DatasetName", # required
            },
            output_file_uri_value: {
              file_name: "OutputFileName", # required
            },
          },
        ],
      },
    },
  ],
  triggers: [
    {
      schedule: {
        expression: "ScheduleExpression",
      },
      dataset: {
        name: "DatasetName", # required
      },
    },
  ],
  content_delivery_rules: [
    {
      entry_name: "EntryName",
      destination: { # required
        iot_events_destination_configuration: {
          input_name: "IotEventsInputName", # required
          role_arn: "RoleArn", # required
        },
        s3_destination_configuration: {
          bucket: "BucketName", # required
          key: "BucketKeyExpression", # required
          glue_configuration: {
            table_name: "GlueTableName", # required
            database_name: "GlueDatabaseName", # required
          },
          role_arn: "RoleArn", # required
        },
      },
    },
  ],
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
  versioning_configuration: {
    unlimited: false,
    max_versions: 1,
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.dataset_name #=> String
resp.dataset_arn #=> String
resp.retention_period.unlimited #=> true/false
resp.retention_period.number_of_days #=> Integer

Options Hash (options):

Returns:

  • (Types::CreateDatasetResponse)
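As a minimal illustration of the request shape above, the params for a SQL-query data set reduce to a single queryAction; the names used here are placeholders:

```ruby
# Smallest create_dataset request: one SQL action, no triggers, so
# content is produced only by calling create_dataset_content manually.
params = {
  dataset_name: 'my_dataset',                 # placeholder name
  actions: [
    {
      action_name: 'sql_action',
      query_action: { sql_query: 'SELECT * FROM my_datastore' },
    },
  ],
}

# client.create_dataset(params) would return the new data set's
# name and ARN along with its retention period.
```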

#create_dataset_content(options = {}) ⇒ Types::CreateDatasetContentResponse

Creates the content of a data set by applying a "queryAction" (a SQL query) or a "containerAction" (executing a containerized application).

Examples:

Request syntax with placeholder values


resp = client.create_dataset_content({
  dataset_name: "DatasetName", # required
})

Response structure


resp.version_id #=> String

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set.

Returns:

  • (Types::CreateDatasetContentResponse)

#create_datastore(options = {}) ⇒ Types::CreateDatastoreResponse

Creates a data store, which is a repository for messages.

Examples:

Request syntax with placeholder values


resp = client.create_datastore({
  datastore_name: "DatastoreName", # required
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.datastore_name #=> String
resp.datastore_arn #=> String
resp.retention_period.unlimited #=> true/false
resp.retention_period.number_of_days #=> Integer

Options Hash (options):

  • :datastore_name (required, String)

    The name of the data store.

  • :retention_period (Types::RetentionPeriod)

    How long, in days, message data is kept for the data store.

  • :tags (Array<Types::Tag>)

    Metadata which can be used to manage the data store.

Returns:

  • (Types::CreateDatastoreResponse)

#create_pipeline(options = {}) ⇒ Types::CreatePipelineResponse

Creates a pipeline. A pipeline consumes messages from one or more channels and allows you to process the messages before storing them in a data store. You must specify both a channel and a datastore activity and, optionally, as many as 23 additional activities in the pipelineActivities array.

Examples:

Request syntax with placeholder values


resp = client.create_pipeline({
  pipeline_name: "PipelineName", # required
  pipeline_activities: [ # required
    {
      channel: {
        name: "ActivityName", # required
        channel_name: "ChannelName", # required
        next: "ActivityName",
      },
      lambda: {
        name: "ActivityName", # required
        lambda_name: "LambdaName", # required
        batch_size: 1, # required
        next: "ActivityName",
      },
      datastore: {
        name: "ActivityName", # required
        datastore_name: "DatastoreName", # required
      },
      add_attributes: {
        name: "ActivityName", # required
        attributes: { # required
          "AttributeName" => "AttributeName",
        },
        next: "ActivityName",
      },
      remove_attributes: {
        name: "ActivityName", # required
        attributes: ["AttributeName"], # required
        next: "ActivityName",
      },
      select_attributes: {
        name: "ActivityName", # required
        attributes: ["AttributeName"], # required
        next: "ActivityName",
      },
      filter: {
        name: "ActivityName", # required
        filter: "FilterExpression", # required
        next: "ActivityName",
      },
      math: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        math: "MathExpression", # required
        next: "ActivityName",
      },
      device_registry_enrich: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        thing_name: "AttributeName", # required
        role_arn: "RoleArn", # required
        next: "ActivityName",
      },
      device_shadow_enrich: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        thing_name: "AttributeName", # required
        role_arn: "RoleArn", # required
        next: "ActivityName",
      },
    },
  ],
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.pipeline_name #=> String
resp.pipeline_arn #=> String

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline.

  • :pipeline_activities (required, Array<Types::PipelineActivity>)

    A list of "PipelineActivity" objects. Activities perform transformations on your messages, such as removing, renaming or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.

    The list can contain 2-25 PipelineActivity objects and must contain both a channel and a datastore activity. Each entry in the list must contain only one activity, for example:

    pipelineActivities = [ { "channel": { ... } }, { "lambda": { ... } }, ... ]

  • :tags (Array<Types::Tag>)

    Metadata which can be used to manage the pipeline.

Returns:

  • (Types::CreatePipelineResponse)
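The 2-to-25-entry rule above can be illustrated with the smallest valid pipelineActivities array: a channel activity wired to a datastore activity. The names below are placeholders:

```ruby
# Minimal pipeline: every message flows channel -> datastore.
# Each array entry holds exactly one activity, and the channel
# activity's :next names the activity that follows it.
pipeline_activities = [
  {
    channel: {
      name: 'channel_activity',        # placeholder activity name
      channel_name: 'my_channel',      # placeholder channel
      next: 'datastore_activity',
    },
  },
  {
    datastore: {
      name: 'datastore_activity',
      datastore_name: 'my_datastore',  # placeholder data store
    },
  },
]

# The list must contain between 2 and 25 activities.
(2..25).cover?(pipeline_activities.size) #=> true
```

Additional activities (lambda, filter, math, and so on) would be inserted between these two by chaining their :next attributes.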

#delete_channel(options = {}) ⇒ Struct

Deletes the specified channel.

Examples:

Request syntax with placeholder values


resp = client.delete_channel({
  channel_name: "ChannelName", # required
})

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel to delete.

Returns:

  • (Struct)

    Returns an empty response.

#delete_dataset(options = {}) ⇒ Struct

Deletes the specified data set.

You do not have to delete the content of the data set before you perform this operation.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset({
  dataset_name: "DatasetName", # required
})

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set to delete.

Returns:

  • (Struct)

    Returns an empty response.

#delete_dataset_content(options = {}) ⇒ Struct

Deletes the content of the specified data set.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset_content({
  dataset_name: "DatasetName", # required
  version_id: "DatasetContentVersion",
})

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set whose content is deleted.

  • :version_id (String)

    The version of the data set whose content is deleted. You can also use the strings "$LATEST" or "$LATEST_SUCCEEDED" to delete the latest or latest successfully completed data set. If not specified, "$LATEST_SUCCEEDED" is the default.

Returns:

  • (Struct)

    Returns an empty response.

#delete_datastore(options = {}) ⇒ Struct

Deletes the specified data store.

Examples:

Request syntax with placeholder values


resp = client.delete_datastore({
  datastore_name: "DatastoreName", # required
})

Options Hash (options):

  • :datastore_name (required, String)

    The name of the data store to delete.

Returns:

  • (Struct)

    Returns an empty response.

#delete_pipeline(options = {}) ⇒ Struct

Deletes the specified pipeline.

Examples:

Request syntax with placeholder values


resp = client.delete_pipeline({
  pipeline_name: "PipelineName", # required
})

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline to delete.

Returns:

  • (Struct)

    Returns an empty response.

#describe_channel(options = {}) ⇒ Types::DescribeChannelResponse

Retrieves information about a channel.

Examples:

Request syntax with placeholder values


resp = client.describe_channel({
  channel_name: "ChannelName", # required
  include_statistics: false,
})

Response structure


resp.channel.name #=> String
resp.channel.arn #=> String
resp.channel.status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.channel.retention_period.unlimited #=> true/false
resp.channel.retention_period.number_of_days #=> Integer
resp.channel.creation_time #=> Time
resp.channel.last_update_time #=> Time
resp.statistics.size.estimated_size_in_bytes #=> Float
resp.statistics.size.estimated_on #=> Time

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel whose information is retrieved.

  • :include_statistics (Boolean)

    If true, additional statistical information about the channel is included in the response.

Returns:

  • (Types::DescribeChannelResponse)

#describe_dataset(options = {}) ⇒ Types::DescribeDatasetResponse

Retrieves information about a data set.

Examples:

Request syntax with placeholder values


resp = client.describe_dataset({
  dataset_name: "DatasetName", # required
})

Response structure


resp.dataset.name #=> String
resp.dataset.arn #=> String
resp.dataset.actions #=> Array
resp.dataset.actions[0].action_name #=> String
resp.dataset.actions[0].query_action.sql_query #=> String
resp.dataset.actions[0].query_action.filters #=> Array
resp.dataset.actions[0].query_action.filters[0].delta_time.offset_seconds #=> Integer
resp.dataset.actions[0].query_action.filters[0].delta_time.time_expression #=> String
resp.dataset.actions[0].container_action.image #=> String
resp.dataset.actions[0].container_action.execution_role_arn #=> String
resp.dataset.actions[0].container_action.resource_configuration.compute_type #=> String, one of "ACU_1", "ACU_2"
resp.dataset.actions[0].container_action.resource_configuration.volume_size_in_gb #=> Integer
resp.dataset.actions[0].container_action.variables #=> Array
resp.dataset.actions[0].container_action.variables[0].name #=> String
resp.dataset.actions[0].container_action.variables[0].string_value #=> String
resp.dataset.actions[0].container_action.variables[0].double_value #=> Float
resp.dataset.actions[0].container_action.variables[0].dataset_content_version_value.dataset_name #=> String
resp.dataset.actions[0].container_action.variables[0].output_file_uri_value.file_name #=> String
resp.dataset.triggers #=> Array
resp.dataset.triggers[0].schedule.expression #=> String
resp.dataset.triggers[0].dataset.name #=> String
resp.dataset.content_delivery_rules #=> Array
resp.dataset.content_delivery_rules[0].entry_name #=> String
resp.dataset.content_delivery_rules[0].destination.iot_events_destination_configuration.input_name #=> String
resp.dataset.content_delivery_rules[0].destination.iot_events_destination_configuration.role_arn #=> String
resp.dataset.content_delivery_rules[0].destination.s3_destination_configuration.bucket #=> String
resp.dataset.content_delivery_rules[0].destination.s3_destination_configuration.key #=> String
resp.dataset.content_delivery_rules[0].destination.s3_destination_configuration.glue_configuration.table_name #=> String
resp.dataset.content_delivery_rules[0].destination.s3_destination_configuration.glue_configuration.database_name #=> String
resp.dataset.content_delivery_rules[0].destination.s3_destination_configuration.role_arn #=> String
resp.dataset.status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.dataset.creation_time #=> Time
resp.dataset.last_update_time #=> Time
resp.dataset.retention_period.unlimited #=> true/false
resp.dataset.retention_period.number_of_days #=> Integer
resp.dataset.versioning_configuration.unlimited #=> true/false
resp.dataset.versioning_configuration.max_versions #=> Integer

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set whose information is retrieved.

Returns:

  • (Types::DescribeDatasetResponse)

#describe_datastore(options = {}) ⇒ Types::DescribeDatastoreResponse

Retrieves information about a data store.

Examples:

Request syntax with placeholder values


resp = client.describe_datastore({
  datastore_name: "DatastoreName", # required
  include_statistics: false,
})

Response structure


resp.datastore.name #=> String
resp.datastore.arn #=> String
resp.datastore.status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.datastore.retention_period.unlimited #=> true/false
resp.datastore.retention_period.number_of_days #=> Integer
resp.datastore.creation_time #=> Time
resp.datastore.last_update_time #=> Time
resp.statistics.size.estimated_size_in_bytes #=> Float
resp.statistics.size.estimated_on #=> Time

Options Hash (options):

  • :datastore_name (required, String)

    The name of the data store.

  • :include_statistics (Boolean)

    If true, additional statistical information about the datastore is included in the response.

Returns:

  • (Types::DescribeDatastoreResponse)

#describe_logging_options(options = {}) ⇒ Types::DescribeLoggingOptionsResponse

Retrieves the current settings of the AWS IoT Analytics logging options.

Examples:

Request syntax with placeholder values


resp = client.describe_logging_options()

Response structure


resp.logging_options.role_arn #=> String
resp.logging_options.level #=> String, one of "ERROR"
resp.logging_options.enabled #=> true/false

Returns:

  • (Types::DescribeLoggingOptionsResponse)

#describe_pipeline(options = {}) ⇒ Types::DescribePipelineResponse

Retrieves information about a pipeline.

Examples:

Request syntax with placeholder values


resp = client.describe_pipeline({
  pipeline_name: "PipelineName", # required
})

Response structure


resp.pipeline.name #=> String
resp.pipeline.arn #=> String
resp.pipeline.activities #=> Array
resp.pipeline.activities[0].channel.name #=> String
resp.pipeline.activities[0].channel.channel_name #=> String
resp.pipeline.activities[0].channel.next #=> String
resp.pipeline.activities[0].lambda.name #=> String
resp.pipeline.activities[0].lambda.lambda_name #=> String
resp.pipeline.activities[0].lambda.batch_size #=> Integer
resp.pipeline.activities[0].lambda.next #=> String
resp.pipeline.activities[0].datastore.name #=> String
resp.pipeline.activities[0].datastore.datastore_name #=> String
resp.pipeline.activities[0].add_attributes.name #=> String
resp.pipeline.activities[0].add_attributes.attributes #=> Hash
resp.pipeline.activities[0].add_attributes.attributes["AttributeName"] #=> String
resp.pipeline.activities[0].add_attributes.next #=> String
resp.pipeline.activities[0].remove_attributes.name #=> String
resp.pipeline.activities[0].remove_attributes.attributes #=> Array
resp.pipeline.activities[0].remove_attributes.attributes[0] #=> String
resp.pipeline.activities[0].remove_attributes.next #=> String
resp.pipeline.activities[0].select_attributes.name #=> String
resp.pipeline.activities[0].select_attributes.attributes #=> Array
resp.pipeline.activities[0].select_attributes.attributes[0] #=> String
resp.pipeline.activities[0].select_attributes.next #=> String
resp.pipeline.activities[0].filter.name #=> String
resp.pipeline.activities[0].filter.filter #=> String
resp.pipeline.activities[0].filter.next #=> String
resp.pipeline.activities[0].math.name #=> String
resp.pipeline.activities[0].math.attribute #=> String
resp.pipeline.activities[0].math.math #=> String
resp.pipeline.activities[0].math.next #=> String
resp.pipeline.activities[0].device_registry_enrich.name #=> String
resp.pipeline.activities[0].device_registry_enrich.attribute #=> String
resp.pipeline.activities[0].device_registry_enrich.thing_name #=> String
resp.pipeline.activities[0].device_registry_enrich.role_arn #=> String
resp.pipeline.activities[0].device_registry_enrich.next #=> String
resp.pipeline.activities[0].device_shadow_enrich.name #=> String
resp.pipeline.activities[0].device_shadow_enrich.attribute #=> String
resp.pipeline.activities[0].device_shadow_enrich.thing_name #=> String
resp.pipeline.activities[0].device_shadow_enrich.role_arn #=> String
resp.pipeline.activities[0].device_shadow_enrich.next #=> String
resp.pipeline.reprocessing_summaries #=> Array
resp.pipeline.reprocessing_summaries[0].id #=> String
resp.pipeline.reprocessing_summaries[0].status #=> String, one of "RUNNING", "SUCCEEDED", "CANCELLED", "FAILED"
resp.pipeline.reprocessing_summaries[0].creation_time #=> Time
resp.pipeline.creation_time #=> Time
resp.pipeline.last_update_time #=> Time

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline whose information is retrieved.

Returns:

  • (Types::DescribePipelineResponse)

#get_dataset_content(options = {}) ⇒ Types::GetDatasetContentResponse

Retrieves the contents of a data set as pre-signed URIs.

Examples:

Request syntax with placeholder values


resp = client.get_dataset_content({
  dataset_name: "DatasetName", # required
  version_id: "DatasetContentVersion",
})

Response structure


resp.entries #=> Array
resp.entries[0].entry_name #=> String
resp.entries[0].data_uri #=> String
resp.timestamp #=> Time
resp.status.state #=> String, one of "CREATING", "SUCCEEDED", "FAILED"
resp.status.reason #=> String

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set whose contents are retrieved.

  • :version_id (String)

    The version of the data set whose contents are retrieved. You can also use the strings "$LATEST" or "$LATEST_SUCCEEDED" to retrieve the contents of the latest or latest successfully completed data set. If not specified, "$LATEST_SUCCEEDED" is the default.

Returns:

  • (Types::GetDatasetContentResponse)

#list_channels(options = {}) ⇒ Types::ListChannelsResponse

Retrieves a list of channels.

Examples:

Request syntax with placeholder values


resp = client.list_channels({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.channel_summaries #=> Array
resp.channel_summaries[0].channel_name #=> String
resp.channel_summaries[0].status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.channel_summaries[0].creation_time #=> Time
resp.channel_summaries[0].last_update_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    The token for the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

    The default value is 100.

Returns:

  • (Types::ListChannelsResponse)
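The :next_token/:max_results pair above follows the SDK's standard pagination contract: keep calling until next_token comes back nil. A sketch of that loop, using a stand-in client that pages through six channel names (the real Aws::IoTAnalytics::Client responds the same way, with summary objects instead of strings):

```ruby
require 'ostruct'

# Stand-in client that serves three fixed pages, two names at a time,
# mimicking list_channels' :next_token behavior.
class FakeClient
  PAGES = [
    OpenStruct.new(channel_summaries: %w[ch1 ch2], next_token: 't1'),
    OpenStruct.new(channel_summaries: %w[ch3 ch4], next_token: 't2'),
    OpenStruct.new(channel_summaries: %w[ch5 ch6], next_token: nil),
  ].freeze

  def list_channels(next_token: nil, max_results: 2)
    case next_token
    when nil  then PAGES[0]
    when 't1' then PAGES[1]
    when 't2' then PAGES[2]
    end
  end
end

client = FakeClient.new

# The loop below is the part that carries over to the real client:
channels = []
token = nil
loop do
  resp = client.list_channels(next_token: token, max_results: 2)
  channels.concat(resp.channel_summaries)
  token = resp.next_token
  break if token.nil?
end

channels #=> ["ch1", "ch2", "ch3", "ch4", "ch5", "ch6"]
```

The same loop shape applies to #list_datasets, #list_dataset_contents, and #list_datastores.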

#list_dataset_contents(options = {}) ⇒ Types::ListDatasetContentsResponse

Lists information about data set contents that have been created.

Examples:

Request syntax with placeholder values


resp = client.list_dataset_contents({
  dataset_name: "DatasetName", # required
  next_token: "NextToken",
  max_results: 1,
  scheduled_on_or_after: Time.now,
  scheduled_before: Time.now,
})

Response structure


resp.dataset_content_summaries #=> Array
resp.dataset_content_summaries[0].version #=> String
resp.dataset_content_summaries[0].status.state #=> String, one of "CREATING", "SUCCEEDED", "FAILED"
resp.dataset_content_summaries[0].status.reason #=> String
resp.dataset_content_summaries[0].creation_time #=> Time
resp.dataset_content_summaries[0].schedule_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :dataset_name (required, String)

    The name of the data set whose contents information you want to list.

  • :next_token (String)

    The token for the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :scheduled_on_or_after (Time)

    A filter to limit results to those data set contents whose creation is scheduled on or after the given time. See the field triggers.schedule in the CreateDataset request.

  • :scheduled_before (Time)

    A filter to limit results to those data set contents whose creation is scheduled before the given time. See the field triggers.schedule in the CreateDataset request.

Returns:
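The two scheduling filters compose into a time window. As a hedged sketch, the hypothetical helper below (not part of the SDK) builds the options hash for contents scheduled within the last seven days:

```ruby
# Build list_dataset_contents options covering the last seven days.
# :scheduled_on_or_after is the lower bound of the window,
# :scheduled_before the upper bound.
def last_seven_days_options(dataset_name, now = Time.now)
  {
    dataset_name: dataset_name,
    scheduled_on_or_after: now - 7 * 24 * 3600,
    scheduled_before: now
  }
end
```

You would then call client.list_dataset_contents(last_seven_days_options("my_dataset")).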

#list_datasets(options = {}) ⇒ Types::ListDatasetsResponse

Retrieves information about data sets.

Examples:

Request syntax with placeholder values


resp = client.list_datasets({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.dataset_summaries #=> Array
resp.dataset_summaries[0].dataset_name #=> String
resp.dataset_summaries[0].status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.dataset_summaries[0].creation_time #=> Time
resp.dataset_summaries[0].last_update_time #=> Time
resp.dataset_summaries[0].triggers #=> Array
resp.dataset_summaries[0].triggers[0].schedule.expression #=> String
resp.dataset_summaries[0].triggers[0].dataset.name #=> String
resp.dataset_summaries[0].actions #=> Array
resp.dataset_summaries[0].actions[0].action_name #=> String
resp.dataset_summaries[0].actions[0].action_type #=> String, one of "QUERY", "CONTAINER"
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    The token for the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

    The default value is 100.

Returns:

#list_datastores(options = {}) ⇒ Types::ListDatastoresResponse

Retrieves a list of data stores.

Examples:

Request syntax with placeholder values


resp = client.list_datastores({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.datastore_summaries #=> Array
resp.datastore_summaries[0].datastore_name #=> String
resp.datastore_summaries[0].status #=> String, one of "CREATING", "ACTIVE", "DELETING"
resp.datastore_summaries[0].creation_time #=> Time
resp.datastore_summaries[0].last_update_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    The token for the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

    The default value is 100.

Returns:

#list_pipelines(options = {}) ⇒ Types::ListPipelinesResponse

Retrieves a list of pipelines.

Examples:

Request syntax with placeholder values


resp = client.list_pipelines({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.pipeline_summaries #=> Array
resp.pipeline_summaries[0].pipeline_name #=> String
resp.pipeline_summaries[0].reprocessing_summaries #=> Array
resp.pipeline_summaries[0].reprocessing_summaries[0].id #=> String
resp.pipeline_summaries[0].reprocessing_summaries[0].status #=> String, one of "RUNNING", "SUCCEEDED", "CANCELLED", "FAILED"
resp.pipeline_summaries[0].reprocessing_summaries[0].creation_time #=> Time
resp.pipeline_summaries[0].creation_time #=> Time
resp.pipeline_summaries[0].last_update_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    The token for the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

    The default value is 100.

Returns:

#list_tags_for_resource(options = {}) ⇒ Types::ListTagsForResourceResponse

Lists the tags (metadata) which you have assigned to the resource.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "ResourceArn", # required
})

Response structure


resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String

Options Hash (options):

  • :resource_arn (required, String)

    The ARN of the resource whose tags you want to list.

Returns:

#put_logging_options(options = {}) ⇒ Struct

Sets or updates the AWS IoT Analytics logging options.

Note that if you update the value of any loggingOptions field, it takes up to one minute for the change to take effect. Also, if you change the policy attached to the role you specified in the roleArn field (for example, to correct an invalid policy) it takes up to 5 minutes for that change to take effect.

Examples:

Request syntax with placeholder values


resp = client.put_logging_options({
  logging_options: { # required
    role_arn: "RoleArn", # required
    level: "ERROR", # required, accepts ERROR
    enabled: false, # required
  },
})

Options Hash (options):

  • :logging_options (required, Types::LoggingOptions)

    The new values of the AWS IoT Analytics logging options.

Returns:

  • (Struct)

    Returns an empty response.

#run_pipeline_activity(options = {}) ⇒ Types::RunPipelineActivityResponse

Simulates the results of running a pipeline activity on a message payload.

Examples:

Request syntax with placeholder values


resp = client.run_pipeline_activity({
  pipeline_activity: { # required
    channel: {
      name: "ActivityName", # required
      channel_name: "ChannelName", # required
      next: "ActivityName",
    },
    lambda: {
      name: "ActivityName", # required
      lambda_name: "LambdaName", # required
      batch_size: 1, # required
      next: "ActivityName",
    },
    datastore: {
      name: "ActivityName", # required
      datastore_name: "DatastoreName", # required
    },
    add_attributes: {
      name: "ActivityName", # required
      attributes: { # required
        "AttributeName" => "AttributeName",
      },
      next: "ActivityName",
    },
    remove_attributes: {
      name: "ActivityName", # required
      attributes: ["AttributeName"], # required
      next: "ActivityName",
    },
    select_attributes: {
      name: "ActivityName", # required
      attributes: ["AttributeName"], # required
      next: "ActivityName",
    },
    filter: {
      name: "ActivityName", # required
      filter: "FilterExpression", # required
      next: "ActivityName",
    },
    math: {
      name: "ActivityName", # required
      attribute: "AttributeName", # required
      math: "MathExpression", # required
      next: "ActivityName",
    },
    device_registry_enrich: {
      name: "ActivityName", # required
      attribute: "AttributeName", # required
      thing_name: "AttributeName", # required
      role_arn: "RoleArn", # required
      next: "ActivityName",
    },
    device_shadow_enrich: {
      name: "ActivityName", # required
      attribute: "AttributeName", # required
      thing_name: "AttributeName", # required
      role_arn: "RoleArn", # required
      next: "ActivityName",
    },
  },
  payloads: ["data"], # required
})

Response structure


resp.payloads #=> Array
resp.payloads[0] #=> IO
resp.log_result #=> String

Options Hash (options):

  • :pipeline_activity (required, Types::PipelineActivity)

    The pipeline activity that is run. This must not be a 'channel' activity or a 'datastore' activity, because these activities are used in a pipeline only to load the original message and to store the (possibly) transformed message. If a 'lambda' activity is specified, only short-running Lambda functions (those with a timeout of 30 seconds or less) can be used.

  • :payloads (required, Array<String>)

    The sample message payloads on which the pipeline activity is run.

Returns:
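Note the asymmetry in the payload types: request payloads are plain strings (for example, JSON-encoded messages), while response payloads are IO objects. A sketch of round-tripping JSON through that shape — StringIO stands in for the SDK response so the example runs offline:

```ruby
require 'json'
require 'stringio'

# Decode each returned payload IO into a Ruby hash.
def decode_payloads(payloads)
  payloads.map { |io| JSON.parse(io.read) }
end

# Request side: payloads are plain strings, e.g. JSON documents.
request_payloads = [{ temperature: 21 }.to_json]

# Response side: the SDK yields IO objects; StringIO stands in here.
decoded = decode_payloads(request_payloads.map { |s| StringIO.new(s) })
```

With a real response you would call decode_payloads(resp.payloads), and resp.log_result carries any activity log output.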

#sample_channel_data(options = {}) ⇒ Types::SampleChannelDataResponse

Retrieves a sample of messages from the specified channel ingested during the specified timeframe. Up to 10 messages can be retrieved.

Examples:

Request syntax with placeholder values


resp = client.sample_channel_data({
  channel_name: "ChannelName", # required
  max_messages: 1,
  start_time: Time.now,
  end_time: Time.now,
})

Response structure


resp.payloads #=> Array
resp.payloads[0] #=> IO

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel whose message samples are retrieved.

  • :max_messages (Integer)

    The number of sample messages to be retrieved. The limit is 10; the default is also 10.

  • :start_time (Time)

    The start of the time window from which sample messages are retrieved.

  • :end_time (Time)

    The end of the time window from which sample messages are retrieved.

Returns:

#start_pipeline_reprocessing(options = {}) ⇒ Types::StartPipelineReprocessingResponse

Starts the reprocessing of raw message data through the pipeline.

Examples:

Request syntax with placeholder values


resp = client.start_pipeline_reprocessing({
  pipeline_name: "PipelineName", # required
  start_time: Time.now,
  end_time: Time.now,
})

Response structure


resp.reprocessing_id #=> String

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline on which to start reprocessing.

  • :start_time (Time)

    The start time (inclusive) of raw message data that is reprocessed.

  • :end_time (Time)

    The end time (exclusive) of raw message data that is reprocessed.

Returns:
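Because start_time is inclusive and end_time is exclusive, back-to-back windows reprocess each message exactly once. A hypothetical helper (not part of the SDK) that computes the window for the previous full UTC day:

```ruby
# Return [start_time, end_time] spanning the previous full UTC day.
# start_time is inclusive and end_time exclusive, so chaining daily
# windows never reprocesses a message twice.
def previous_utc_day(now = Time.now.utc)
  midnight = Time.utc(now.year, now.month, now.day)
  [midnight - 86_400, midnight]
end
```

You would then pass the pair as start_time and end_time to start_pipeline_reprocessing.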

#tag_resource(options = {}) ⇒ Struct

Adds to or modifies the tags of the given resource. Tags are metadata which can be used to manage a resource.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "ResourceArn", # required
  tags: [ # required
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Options Hash (options):

  • :resource_arn (required, String)

    The ARN of the resource whose tags you want to modify.

  • :tags (required, Array<Types::Tag>)

    The new or modified tags for the resource.

Returns:

  • (Struct)

    Returns an empty response.
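The :tags option takes an array of key/value structs rather than a hash. A hypothetical helper (an assumption, not part of the SDK) to build that array from a plain Ruby hash:

```ruby
# Convert a plain Ruby hash into the :tags array shape the API expects:
# [{ key: "...", value: "..." }, ...]
def to_tag_list(tags)
  tags.map { |k, v| { key: k.to_s, value: v.to_s } }
end
```

For example, client.tag_resource(resource_arn: arn, tags: to_tag_list("env" => "prod")).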

#untag_resource(options = {}) ⇒ Struct

Removes the given tags (metadata) from the resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "ResourceArn", # required
  tag_keys: ["TagKey"], # required
})

Options Hash (options):

  • :resource_arn (required, String)

    The ARN of the resource whose tags you want to remove.

  • :tag_keys (required, Array<String>)

    The keys of the tags you want to remove.

Returns:

  • (Struct)

    Returns an empty response.

#update_channel(options = {}) ⇒ Struct

Updates the settings of a channel.

Examples:

Request syntax with placeholder values


resp = client.update_channel({
  channel_name: "ChannelName", # required
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
})

Options Hash (options):

  • :channel_name (required, String)

    The name of the channel to be updated.

  • :retention_period (Types::RetentionPeriod)

    How long, in days, message data is kept for the channel.

Returns:

  • (Struct)

    Returns an empty response.

#update_dataset(options = {}) ⇒ Struct

Updates the settings of a data set.

Examples:

Request syntax with placeholder values


resp = client.update_dataset({
  dataset_name: "DatasetName", # required
  actions: [ # required
    {
      action_name: "DatasetActionName",
      query_action: {
        sql_query: "SqlQuery", # required
        filters: [
          {
            delta_time: {
              offset_seconds: 1, # required
              time_expression: "TimeExpression", # required
            },
          },
        ],
      },
      container_action: {
        image: "Image", # required
        execution_role_arn: "RoleArn", # required
        resource_configuration: { # required
          compute_type: "ACU_1", # required, accepts ACU_1, ACU_2
          volume_size_in_gb: 1, # required
        },
        variables: [
          {
            name: "VariableName", # required
            string_value: "StringValue",
            double_value: 1.0,
            dataset_content_version_value: {
              dataset_name: "DatasetName", # required
            },
            output_file_uri_value: {
              file_name: "OutputFileName", # required
            },
          },
        ],
      },
    },
  ],
  triggers: [
    {
      schedule: {
        expression: "ScheduleExpression",
      },
      dataset: {
        name: "DatasetName", # required
      },
    },
  ],
  content_delivery_rules: [
    {
      entry_name: "EntryName",
      destination: { # required
        iot_events_destination_configuration: {
          input_name: "IotEventsInputName", # required
          role_arn: "RoleArn", # required
        },
        s3_destination_configuration: {
          bucket: "BucketName", # required
          key: "BucketKeyExpression", # required
          glue_configuration: {
            table_name: "GlueTableName", # required
            database_name: "GlueDatabaseName", # required
          },
          role_arn: "RoleArn", # required
        },
      },
    },
  ],
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
  versioning_configuration: {
    unlimited: false,
    max_versions: 1,
  },
})

Options Hash (options):

Returns:

  • (Struct)

    Returns an empty response.

#update_datastore(options = {}) ⇒ Struct

Updates the settings of a data store.

Examples:

Request syntax with placeholder values


resp = client.update_datastore({
  datastore_name: "DatastoreName", # required
  retention_period: {
    unlimited: false,
    number_of_days: 1,
  },
})

Options Hash (options):

  • :datastore_name (required, String)

    The name of the data store to be updated.

  • :retention_period (Types::RetentionPeriod)

    How long, in days, message data is kept for the data store.

Returns:

  • (Struct)

    Returns an empty response.

#update_pipeline(options = {}) ⇒ Struct

Updates the settings of a pipeline. You must specify both a channel and a datastore activity and, optionally, as many as 23 additional activities in the pipelineActivities array.

Examples:

Request syntax with placeholder values


resp = client.update_pipeline({
  pipeline_name: "PipelineName", # required
  pipeline_activities: [ # required
    {
      channel: {
        name: "ActivityName", # required
        channel_name: "ChannelName", # required
        next: "ActivityName",
      },
      lambda: {
        name: "ActivityName", # required
        lambda_name: "LambdaName", # required
        batch_size: 1, # required
        next: "ActivityName",
      },
      datastore: {
        name: "ActivityName", # required
        datastore_name: "DatastoreName", # required
      },
      add_attributes: {
        name: "ActivityName", # required
        attributes: { # required
          "AttributeName" => "AttributeName",
        },
        next: "ActivityName",
      },
      remove_attributes: {
        name: "ActivityName", # required
        attributes: ["AttributeName"], # required
        next: "ActivityName",
      },
      select_attributes: {
        name: "ActivityName", # required
        attributes: ["AttributeName"], # required
        next: "ActivityName",
      },
      filter: {
        name: "ActivityName", # required
        filter: "FilterExpression", # required
        next: "ActivityName",
      },
      math: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        math: "MathExpression", # required
        next: "ActivityName",
      },
      device_registry_enrich: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        thing_name: "AttributeName", # required
        role_arn: "RoleArn", # required
        next: "ActivityName",
      },
      device_shadow_enrich: {
        name: "ActivityName", # required
        attribute: "AttributeName", # required
        thing_name: "AttributeName", # required
        role_arn: "RoleArn", # required
        next: "ActivityName",
      },
    },
  ],
})

Options Hash (options):

  • :pipeline_name (required, String)

    The name of the pipeline to update.

  • :pipeline_activities (required, Array<Types::PipelineActivity>)

    A list of "PipelineActivity" objects. Activities perform transformations on your messages, such as removing, renaming, or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.

    The list can contain 2-25 PipelineActivity objects and must include both a channel and a datastore activity. Each entry in the list must contain only one activity, for example:

    pipelineActivities = [ { "channel": { ... } }, { "lambda": { ... } }, ... ]

Returns:

  • (Struct)

    Returns an empty response.
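The constraints on pipeline_activities can be checked client-side before calling the API. A hypothetical validation sketch, assuming symbol keys as in the request syntax above:

```ruby
# Check the documented constraints on pipeline_activities:
# 2-25 entries, exactly one activity per entry, and both a
# channel and a datastore activity somewhere in the list.
def valid_pipeline_activities?(activities)
  return false unless (2..25).cover?(activities.size)
  return false unless activities.all? { |a| a.size == 1 }
  keys = activities.flat_map(&:keys)
  keys.include?(:channel) && keys.include?(:datastore)
end
```

Validating locally avoids a round trip that would otherwise fail with a validation error.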

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

A waiter polls an API operation until a resource enters a desired state.

Basic Usage

Waiters poll until they succeed, fail by entering a terminal state, or reach the maximum number of attempts.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Parameters:

  • waiter_name (Symbol)

    The name of the waiter. See #waiter_names for a full list of supported waiters.

  • params (Hash) (defaults to: {})

    Additional request parameters. See #waiter_names for a list of supported waiters and the requests they call. The called request determines the list of accepted parameters.

Yield Parameters:

Returns:

  • (Boolean)

    Returns true if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an unexpected error is encountered while polling for a resource.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

#waiter_namesArray<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter Name | Client Method | Default Delay | Default Max Attempts

Returns:

  • (Array<Symbol>)

    the list of supported waiters.