Class: Aws::ForecastService::Client

Inherits:
Seahorse::Client::Base
Defined in:
(unknown)

Overview

An API client for Amazon Forecast Service. To construct a client, you need to configure a :region and :credentials.

forecastservice = Aws::ForecastService::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

  • ENV['AWS_REGION']
  • Aws.config[:region]

For a list of supported regions, see the AWS Regions and Endpoints documentation.

Credentials

Default credentials are loaded automatically from the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Aws.config[:credentials]
  • The shared credentials ini file at ~/.aws/credentials
  • From an instance profile when running on EC2

You can also construct a credentials object explicitly, for example with the Aws::Credentials, Aws::SharedCredentials, or Aws::InstanceProfileCredentials classes, and pass it to the client with the :credentials option.

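For example, a minimal sketch of constructing a static credentials object and passing it to the client; the key values and region shown are placeholders:

# construct a credentials object explicitly (placeholder values)
creds = Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

forecastservice = Aws::ForecastService::Client.new(
  region: 'us-west-2',
  credentials: creds
)
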
Alternatively, you can configure credentials with :access_key_id and :secret_access_key:

# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))

Aws::ForecastService::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Constructor

API Operations

Instance Method Summary

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options = {}) ⇒ Aws::ForecastService::Client

Constructs an API client.

Options Hash (options):

  • :access_key_id (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :active_endpoint_cache (Boolean)

    When set to true, a background thread polls for endpoints every 60 seconds (by default). Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.

  • :credentials (required, Credentials)

    Your AWS credentials. The following locations will be searched in order for credentials:

    • :access_key_id, :secret_access_key, and :session_token options
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • HOME/.aws/credentials shared credentials file
    • EC2 instance profile credentials

    See Plugins::RequestSigner for more details.
  • :disable_host_prefix_injection (Boolean)

    Set to true to disable SDK automatically adding host prefix to default service endpoint when available. See Plugins::EndpointPattern for more details.

  • :endpoint (String)

    A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.

  • :endpoint_cache_max_entries (Integer)

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_max_threads (Integer)

    Used for the maximum threads in use for polling endpoints to be cached, defaults to 10. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_poll_interval (Integer)

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds. See Plugins::EndpointDiscovery for more details.

  • :endpoint_discovery (Boolean)

    When set to true, endpoint discovery will be enabled for operations when available. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :http_continue_timeout (Float) — default: 1

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_idle_timeout (Integer) — default: 5

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_open_timeout (Integer) — default: 15

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_proxy (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_read_timeout (Integer) — default: 60

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_wire_trace (Boolean) — default: false

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the logger at. See Plugins::Logging for more details.

  • :log_formatter (Logging::LogFormatter)

    The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.

  • :logger (Logger) — default: nil

    The Logger instance to send log messages to. If this option is not set, logging will be disabled. See Plugins::Logging for more details.

  • :profile (String)

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.

  • :region (required, String)

    The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks AMAZON_REGION and AWS_DEFAULT_REGION. See Plugins::RegionalEndpoint for more details.

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only 500-level server errors and certain 400-level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, and auth errors from expired credentials. See Plugins::RetryErrors for more details.

  • :secret_access_key (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :session_token (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. This option is useful when you want to ensure the highest level of performance by avoiding the overhead of walking request parameters and response data structures.

    When :simple_json is enabled, the request parameters hash must be formatted exactly as the service API expects. See Plugins::Protocols::JsonRpc for more details.

  • :ssl_ca_bundle (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_directory (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_store (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_verify_peer (Boolean) — default: true

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled. See Plugins::StubResponses for more details.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.
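
Stubbing is useful in tests. The following is a minimal sketch of the :stub_responses option; the region and the stubbed data are placeholders:

# create a client that returns stubbed responses instead of making HTTP requests
client = Aws::ForecastService::Client.new(
  region: 'us-west-2', # placeholder region
  stub_responses: true
)

# optionally specify the data a stubbed operation should return
client.stub_responses(:list_datasets, datasets: [{ dataset_name: 'example' }])

client.list_datasets.datasets.map(&:dataset_name) #=> ["example"]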

Instance Method Details

#create_dataset(options = {}) ⇒ Types::CreateDatasetResponse

Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:

  • DataFrequency - How frequently your historical time-series data is collected. Amazon Forecast uses this information when training the model and generating a forecast.

  • Domain and DatasetType - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.

  • Schema - A schema specifies the fields of the dataset, including the field name and data type.

After creating a dataset, you import your training data into the dataset and add the dataset to a dataset group. You then use the dataset group to create a predictor. For more information, see howitworks-datasets-groups.

To get a list of all your datasets, use the ListDatasets operation.

The Status of a dataset must be ACTIVE before you can import training data. Use the DescribeDataset operation to get the status.
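
Because no waiters are listed under #waiter_names for this client, one way to wait for the ACTIVE status is a simple polling loop. The following is a hedged sketch; the ARN and the sleep interval are placeholders:

# poll describe_dataset until the dataset is ACTIVE
dataset_arn = 'arn:aws:forecast:us-west-2:123456789012:dataset/example' # placeholder ARN

loop do
  status = client.describe_dataset(dataset_arn: dataset_arn).status
  break if status == 'ACTIVE'
  raise "dataset creation failed: #{status}" if status == 'CREATE_FAILED'
  sleep 10 # placeholder polling interval
end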

Examples:

Request syntax with placeholder values


resp = client.create_dataset({
  dataset_name: "Name", # required
  domain: "RETAIL", # required, accepts RETAIL, CUSTOM, INVENTORY_PLANNING, EC2_CAPACITY, WORK_FORCE, WEB_TRAFFIC, METRICS
  dataset_type: "TARGET_TIME_SERIES", # required, accepts TARGET_TIME_SERIES, RELATED_TIME_SERIES, ITEM_METADATA
  data_frequency: "Frequency",
  schema: { # required
    attributes: [
      {
        attribute_name: "Name",
        attribute_type: "string", # accepts string, integer, float, timestamp
      },
    ],
  },
  encryption_config: {
    role_arn: "Arn", # required
    kms_key_arn: "KMSKeyArn", # required
  },
})

Response structure


resp.dataset_arn #=> String

Options Hash (options):

  • :dataset_name (required, String)

    A name for the dataset.

  • :domain (required, String)

    The domain associated with the dataset. The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see howitworks-datasets-groups.

  • :dataset_type (required, String)

    The dataset type. Valid values depend on the chosen Domain.

  • :data_frequency (String)

    The frequency of data collection.

    Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, "D" indicates every day and "15min" indicates every 15 minutes.

  • :schema (required, Types::Schema)

    The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see howitworks-domains-ds-types.

  • :encryption_config (Types::EncryptionConfig)

    An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

Returns:

See Also:

#create_dataset_group(options = {}) ⇒ Types::CreateDatasetGroupResponse

Creates an Amazon Forecast dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or you can add datasets later with the UpdateDatasetGroup operation.

After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see howitworks-datasets-groups.

To get a list of all your datasets groups, use the ListDatasetGroups operation.

The Status of a dataset group must be ACTIVE before you can create a predictor using the dataset group. Use the DescribeDatasetGroup operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.create_dataset_group({
  dataset_group_name: "Name", # required
  domain: "RETAIL", # required, accepts RETAIL, CUSTOM, INVENTORY_PLANNING, EC2_CAPACITY, WORK_FORCE, WEB_TRAFFIC, METRICS
  dataset_arns: ["Arn"],
})

Response structure


resp.dataset_group_arn #=> String

Options Hash (options):

  • :dataset_group_name (required, String)

    A name for the dataset group.

  • :domain (required, String)

    The domain associated with the dataset group. The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see howitworks-datasets-groups.

  • :dataset_arns (Array<String>)

    An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.

Returns:

See Also:

#create_dataset_import_job(options = {}) ⇒ Types::CreateDatasetImportJobResponse

Imports your training data to an Amazon Forecast dataset. You provide the location of your training data in an Amazon Simple Storage Service (Amazon S3) bucket and the Amazon Resource Name (ARN) of the dataset that you want to import the data to.

You must specify a DataSource object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. For more information, see aws-forecast-iam-roles.

Two properties of the training data are optionally specified:

  • The delimiter that separates the data fields.

    The default delimiter is a comma (,), which is the only supported delimiter in this release.

  • The format of timestamps.

    If the format is not specified, Amazon Forecast expects the format to be "yyyy-MM-dd HH:mm:ss".

When Amazon Forecast uploads your training data, it verifies that the data was collected at the DataFrequency specified when the target dataset was created. For more information, see CreateDataset and howitworks-datasets-groups. Amazon Forecast also verifies the delimiter and timestamp format.

You can use the ListDatasetImportJobs operation to get a list of all your dataset import jobs, filtered by specified criteria.

Examples:

Request syntax with placeholder values


resp = client.create_dataset_import_job({
  dataset_import_job_name: "Name", # required
  dataset_arn: "Arn", # required
  data_source: { # required
    s3_config: { # required
      path: "S3Path", # required
      role_arn: "Arn", # required
      kms_key_arn: "KMSKeyArn",
    },
  },
  timestamp_format: "TimestampFormat",
})

Response structure


resp.dataset_import_job_arn #=> String

Options Hash (options):

  • :dataset_import_job_name (required, String)

    The name for the dataset import job. It is recommended to include the current timestamp in the name to guard against getting a ResourceAlreadyExistsException, for example, 20190721DatasetImport.

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.

  • :data_source (required, Types::DataSource)

    The location of the training data to import and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data.

  • :timestamp_format (String)

    The format of timestamps in the dataset. Two formats are supported, dependent on the DataFrequency specified when the dataset was created.

    • "yyyy-MM-dd"

      For data frequencies: Y, M, W, and D

    • "yyyy-MM-dd HH:mm:ss"

      For data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D

Returns:

See Also:

#create_forecast(options = {}) ⇒ Types::CreateForecastResponse

Creates a forecast for each item in the TARGET_TIME_SERIES dataset that was used to train the predictor. This is known as inference. To retrieve the forecast for a single item at low latency, use the QueryForecast operation. To export the complete forecast into your Amazon Simple Storage Service (Amazon S3) bucket, use the CreateForecastExportJob operation.

The range of the forecast is determined by the ForecastHorizon, specified in the CreatePredictor request, multiplied by the DataFrequency, specified in the CreateDataset request. When you query a forecast, you can request a specific date range within the complete forecast.

To get a list of all your forecasts, use the ListForecasts operation.

The forecasts generated by Amazon Forecast are in the same timezone as the dataset that was used to create the predictor.

For more information, see howitworks-forecast.

The Status of the forecast must be ACTIVE before you can query or export the forecast. Use the DescribeForecast operation to get the status.
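
A hedged sketch of the create-then-export flow described above; the names, ARNs, and the Amazon S3 path are placeholders, and the polling mirrors the loop shown for datasets:

# create a forecast, wait for it to become ACTIVE, then export it to Amazon S3
forecast = client.create_forecast(
  forecast_name: 'example_forecast', # placeholder
  predictor_arn: 'arn:aws:forecast:us-west-2:123456789012:predictor/example' # placeholder
)

sleep 30 until client.describe_forecast(forecast_arn: forecast.forecast_arn).status == 'ACTIVE'

client.create_forecast_export_job(
  forecast_export_job_name: 'example_export', # placeholder
  forecast_arn: forecast.forecast_arn,
  destination: {
    s3_config: {
      path: 's3://example-bucket/forecasts/', # placeholder path
      role_arn: 'arn:aws:iam::123456789012:role/example-forecast-role' # placeholder role
    }
  }
)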

Examples:

Request syntax with placeholder values


resp = client.create_forecast({
  forecast_name: "Name", # required
  predictor_arn: "Arn", # required
})

Response structure


resp.forecast_arn #=> String

Options Hash (options):

  • :forecast_name (required, String)

    The name for the forecast.

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.

Returns:

See Also:

#create_forecast_export_job(options = {}) ⇒ Types::CreateForecastExportJobResponse

Exports a forecast created by the CreateForecast operation to your Amazon Simple Storage Service (Amazon S3) bucket.

You must specify a DataDestination object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the Amazon S3 bucket. For more information, see aws-forecast-iam-roles.

For more information, see howitworks-forecast.

To get a list of all your forecast export jobs, use the ListForecastExportJobs operation.

The Status of the forecast export job must be ACTIVE before you can access the forecast in your Amazon S3 bucket. Use the DescribeForecastExportJob operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.create_forecast_export_job({
  forecast_export_job_name: "Name", # required
  forecast_arn: "Arn", # required
  destination: { # required
    s3_config: { # required
      path: "S3Path", # required
      role_arn: "Arn", # required
      kms_key_arn: "KMSKeyArn",
    },
  },
})

Response structure


resp.forecast_export_job_arn #=> String

Options Hash (options):

  • :forecast_export_job_name (required, String)

    The name for the forecast export job.

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast that you want to export.

  • :destination (required, Types::DataDestination)

    The path to the Amazon S3 bucket where you want to save the forecast and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the bucket.

Returns:

See Also:

#create_predictor(options = {}) ⇒ Types::CreatePredictorResponse

Creates an Amazon Forecast predictor.

In the request, you provide a dataset group and either specify an algorithm or let Amazon Forecast choose the algorithm for you using AutoML. If you specify an algorithm, you also can override algorithm-specific hyperparameters.

Amazon Forecast uses the chosen algorithm to train a model using the latest version of the datasets in the specified dataset group. The result is called a predictor. You then generate a forecast using the CreateForecast operation.

After training a model, the CreatePredictor operation also evaluates it. To see the evaluation metrics, use the GetAccuracyMetrics operation. Always review the evaluation metrics before deciding to use the predictor to generate a forecast.

Optionally, you can specify a featurization configuration to fill and aggregate the data fields in the TARGET_TIME_SERIES dataset to improve model training. For more information, see FeaturizationConfig.

AutoML

If you set PerformAutoML to true, Amazon Forecast evaluates each algorithm and chooses the one that minimizes the objective function. The objective function is defined as the mean of the weighted p10, p50, and p90 quantile losses. For more information, see EvaluationResult.

When AutoML is enabled, the following properties are disallowed:

  • AlgorithmArn

  • HPOConfig

  • PerformHPO

  • TrainingParameters

To get a list of all your predictors, use the ListPredictors operation.

The Status of the predictor must be ACTIVE, signifying that training has completed, before you can use the predictor to create a forecast. Use the DescribePredictor operation to get the status.
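
Because the full request syntax below is extensive, here is a hedged minimal sketch of creating a predictor with AutoML enabled; the name, ARN, horizon, and frequency are placeholders:

# minimal request letting AutoML choose the algorithm (AlgorithmArn must be omitted)
resp = client.create_predictor(
  predictor_name: 'example_predictor', # placeholder name
  forecast_horizon: 10,                # predict 10 time-steps
  perform_auto_ml: true,
  input_data_config: {
    dataset_group_arn: 'arn:aws:forecast:us-west-2:123456789012:dataset-group/example' # placeholder ARN
  },
  featurization_config: {
    forecast_frequency: 'D' # placeholder frequency (daily)
  }
)
resp.predictor_arn #=> String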

Examples:

Request syntax with placeholder values


resp = client.create_predictor({
  predictor_name: "Name", # required
  algorithm_arn: "Arn",
  forecast_horizon: 1, # required
  perform_auto_ml: false,
  perform_hpo: false,
  training_parameters: {
    "ParameterKey" => "ParameterValue",
  },
  evaluation_parameters: {
    number_of_backtest_windows: 1,
    back_test_window_offset: 1,
  },
  hpo_config: {
    parameter_ranges: {
      categorical_parameter_ranges: [
        {
          name: "Name", # required
          values: ["Value"], # required
        },
      ],
      continuous_parameter_ranges: [
        {
          name: "Name", # required
          max_value: 1.0, # required
          min_value: 1.0, # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      integer_parameter_ranges: [
        {
          name: "Name", # required
          max_value: 1, # required
          min_value: 1, # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
    },
  },
  input_data_config: { # required
    dataset_group_arn: "Arn", # required
    supplementary_features: [
      {
        name: "Name", # required
        value: "Value", # required
      },
    ],
  },
  featurization_config: { # required
    forecast_frequency: "Frequency", # required
    forecast_dimensions: ["Name"],
    featurizations: [
      {
        attribute_name: "Name", # required
        featurization_pipeline: [
          {
            featurization_method_name: "filling", # required, accepts filling
            featurization_method_parameters: {
              "ParameterKey" => "ParameterValue",
            },
          },
        ],
      },
    ],
  },
  encryption_config: {
    role_arn: "Arn", # required
    kms_key_arn: "KMSKeyArn", # required
  },
})

Response structure


resp.predictor_arn #=> String

Options Hash (options):

  • :predictor_name (required, String)

    A name for the predictor.

  • :algorithm_arn (String)

    The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true.

    Supported algorithms:

    • arn:aws:forecast:::algorithm/ARIMA

    • arn:aws:forecast:::algorithm/Deep_AR_Plus

      - supports hyperparameter optimization (HPO)

    • arn:aws:forecast:::algorithm/ETS

    • arn:aws:forecast:::algorithm/NPTS

    • arn:aws:forecast:::algorithm/Prophet

  • :forecast_horizon (required, Integer)

    Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length.

    For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days.

  • :perform_auto_ml (Boolean)

    Whether to perform AutoML. The default value is false. In this case, you are required to specify an algorithm.

    If you want Amazon Forecast to evaluate the algorithms it provides and choose the best algorithm and configuration for your training dataset, set PerformAutoML to true. This is a good option if you aren't sure which algorithm is suitable for your application.

  • :perform_hpo (Boolean)

    Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as a hyperparameter tuning job.

    The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm.

    To override the default values, set PerformHPO to true and supply the HyperParameterTuningJobConfig object. The tuning job specifies an objective metric, the hyperparameters to optimize, and the valid range for each hyperparameter.

    The following algorithms support HPO:

    • DeepAR+

  • :training_parameters (Hash<String,String>)

    The training parameters to override for model training. The parameters that you can override are listed in the individual algorithms in aws-forecast-choosing-recipes.

  • :evaluation_parameters (Types::EvaluationParameters)

    Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.

  • :hpo_config (Types::HyperParameterTuningJobConfig)

    Provides hyperparameter override values for the algorithm. If you don't provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes.

  • :input_data_config (required, Types::InputDataConfig)

    Describes the dataset group that contains the data to use to train the predictor.

  • :featurization_config (required, Types::FeaturizationConfig)

    The featurization configuration.

  • :encryption_config (Types::EncryptionConfig)

    An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

Returns:

See Also:

#delete_dataset(options = {}) ⇒ Struct

Deletes an Amazon Forecast dataset created using the CreateDataset operation. To be deleted, the dataset must have a status of ACTIVE or CREATE_FAILED. Use the DescribeDataset operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset({
  dataset_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_dataset_group(options = {}) ⇒ Struct

Deletes a dataset group created using the CreateDatasetGroup operation. To be deleted, the dataset group must have a status of ACTIVE, CREATE_FAILED, or UPDATE_FAILED. Use the DescribeDatasetGroup operation to get the status.

The operation deletes only the dataset group, not the datasets in the group.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset_group({
  dataset_group_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_group_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset group to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_dataset_import_job(options = {}) ⇒ Struct

Deletes a dataset import job created using the CreateDatasetImportJob operation. To be deleted, the import job must have a status of ACTIVE or CREATE_FAILED. Use the DescribeDatasetImportJob operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset_import_job({
  dataset_import_job_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_import_job_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset import job to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_forecast(options = {}) ⇒ Struct

Deletes a forecast created using the CreateForecast operation. To be deleted, the forecast must have a status of ACTIVE or CREATE_FAILED. Use the DescribeForecast operation to get the status.

You can't delete a forecast while it is being exported.

Examples:

Request syntax with placeholder values


resp = client.delete_forecast({
  forecast_arn: "Arn", # required
})

Options Hash (options):

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_forecast_export_job(options = {}) ⇒ Struct

Deletes a forecast export job created using the CreateForecastExportJob operation. To be deleted, the export job must have a status of ACTIVE or CREATE_FAILED. Use the DescribeForecastExportJob operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.delete_forecast_export_job({
  forecast_export_job_arn: "Arn", # required
})

Options Hash (options):

  • :forecast_export_job_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast export job to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_predictor(options = {}) ⇒ Struct

Deletes a predictor created using the CreatePredictor operation. To be deleted, the predictor must have a status of ACTIVE or CREATE_FAILED. Use the DescribePredictor operation to get the status.

Any forecasts generated by the predictor will no longer be available.

Examples:

Request syntax with placeholder values


resp = client.delete_predictor({
  predictor_arn: "Arn", # required
})

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#describe_dataset(options = {}) ⇒ Types::DescribeDatasetResponse

Describes an Amazon Forecast dataset created using the CreateDataset operation.

In addition to listing the properties provided by the user in the CreateDataset request, this operation includes the following properties:

  • CreationTime

  • LastModificationTime

  • Status

Examples:

Request syntax with placeholder values


resp = client.describe_dataset({
  dataset_arn: "Arn", # required
})

Response structure


resp.dataset_arn #=> String
resp.dataset_name #=> String
resp.domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.dataset_type #=> String, one of "TARGET_TIME_SERIES", "RELATED_TIME_SERIES", "ITEM_METADATA"
resp.data_frequency #=> String
resp.schema.attributes #=> Array
resp.schema.attributes[0].attribute_name #=> String
resp.schema.attributes[0].attribute_type #=> String, one of "string", "integer", "float", "timestamp"
resp.encryption_config.role_arn #=> String
resp.encryption_config.kms_key_arn #=> String
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset.

Returns:

See Also:

#describe_dataset_group(options = {}) ⇒ Types::DescribeDatasetGroupResponse

Describes a dataset group created using the CreateDatasetGroup operation.

In addition to listing the properties provided by the user in the CreateDatasetGroup request, this operation includes the following properties:

  • DatasetArns - The datasets belonging to the group.

  • CreationTime

  • LastModificationTime

  • Status

Examples:

Request syntax with placeholder values


resp = client.describe_dataset_group({
  dataset_group_arn: "Arn", # required
})

Response structure


resp.dataset_group_name #=> String
resp.dataset_group_arn #=> String
resp.dataset_arns #=> Array
resp.dataset_arns[0] #=> String
resp.domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_group_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset group.

Returns:

See Also:

#describe_dataset_import_job(options = {}) ⇒ Types::DescribeDatasetImportJobResponse

Describes a dataset import job created using the CreateDatasetImportJob operation.

In addition to listing the properties provided by the user in the CreateDatasetImportJob request, this operation includes the following properties:

  • CreationTime

  • LastModificationTime

  • DataSize

  • FieldStatistics

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_dataset_import_job({
  dataset_import_job_arn: "Arn", # required
})

Response structure


resp.dataset_import_job_name #=> String
resp.dataset_import_job_arn #=> String
resp.dataset_arn #=> String
resp.timestamp_format #=> String
resp.data_source.s3_config.path #=> String
resp.data_source.s3_config.role_arn #=> String
resp.data_source.s3_config.kms_key_arn #=> String
resp.field_statistics #=> Hash
resp.field_statistics["String"].count #=> Integer
resp.field_statistics["String"].count_distinct #=> Integer
resp.field_statistics["String"].count_null #=> Integer
resp.field_statistics["String"].count_nan #=> Integer
resp.field_statistics["String"].min #=> String
resp.field_statistics["String"].max #=> String
resp.field_statistics["String"].avg #=> Float
resp.field_statistics["String"].stddev #=> Float
resp.data_size #=> Float
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_import_job_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset import job.

Returns:

See Also:

#describe_forecast(options = {}) ⇒ Types::DescribeForecastResponse

Describes a forecast created using the CreateForecast operation.

In addition to listing the properties provided by the user in the CreateForecast request, this operation includes the following properties:

  • DatasetGroupArn - The dataset group that provided the training data.

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_forecast({
  forecast_arn: "Arn", # required
})

Response structure


resp.forecast_arn #=> String
resp.forecast_name #=> String
resp.predictor_arn #=> String
resp.dataset_group_arn #=> String
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast.

Returns:

See Also:

#describe_forecast_export_job(options = {}) ⇒ Types::DescribeForecastExportJobResponse

Describes a forecast export job created using the CreateForecastExportJob operation.

In addition to listing the properties provided by the user in the CreateForecastExportJob request, this operation includes the following properties:

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_forecast_export_job({
  forecast_export_job_arn: "Arn", # required
})

Response structure


resp.forecast_export_job_arn #=> String
resp.forecast_export_job_name #=> String
resp.forecast_arn #=> String
resp.destination.s3_config.path #=> String
resp.destination.s3_config.role_arn #=> String
resp.destination.s3_config.kms_key_arn #=> String
resp.message #=> String
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :forecast_export_job_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast export job.

Returns:

See Also:

#describe_predictor(options = {}) ⇒ Types::DescribePredictorResponse

Describes a predictor created using the CreatePredictor operation.

In addition to listing the properties provided by the user in the CreatePredictor request, this operation includes the following properties:

  • DatasetImportJobArns - The dataset import jobs used to import training data.

  • AutoMLAlgorithmArns - If AutoML is performed, the algorithms evaluated.

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_predictor({
  predictor_arn: "Arn", # required
})

Response structure


resp.predictor_arn #=> String
resp.predictor_name #=> String
resp.algorithm_arn #=> String
resp.forecast_horizon #=> Integer
resp.perform_auto_ml #=> true/false
resp.perform_hpo #=> true/false
resp.training_parameters #=> Hash
resp.training_parameters["ParameterKey"] #=> String
resp.evaluation_parameters.number_of_backtest_windows #=> Integer
resp.evaluation_parameters.back_test_window_offset #=> Integer
resp.hpo_config.parameter_ranges.categorical_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].values #=> Array
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].values[0] #=> String
resp.hpo_config.parameter_ranges.continuous_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].max_value #=> Float
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].min_value #=> Float
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].scaling_type #=> String, one of "Auto", "Linear", "Logarithmic", "ReverseLogarithmic"
resp.hpo_config.parameter_ranges.integer_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].max_value #=> Integer
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].min_value #=> Integer
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].scaling_type #=> String, one of "Auto", "Linear", "Logarithmic", "ReverseLogarithmic"
resp.input_data_config.dataset_group_arn #=> String
resp.input_data_config.supplementary_features #=> Array
resp.input_data_config.supplementary_features[0].name #=> String
resp.input_data_config.supplementary_features[0].value #=> String
resp.featurization_config.forecast_frequency #=> String
resp.featurization_config.forecast_dimensions #=> Array
resp.featurization_config.forecast_dimensions[0] #=> String
resp.featurization_config.featurizations #=> Array
resp.featurization_config.featurizations[0].attribute_name #=> String
resp.featurization_config.featurizations[0].featurization_pipeline #=> Array
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_name #=> String, one of "filling"
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_parameters #=> Hash
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_parameters["ParameterKey"] #=> String
resp.encryption_config.role_arn #=> String
resp.encryption_config.kms_key_arn #=> String
resp.dataset_import_job_arns #=> Array
resp.dataset_import_job_arns[0] #=> String
resp.auto_ml_algorithm_arns #=> Array
resp.auto_ml_algorithm_arns[0] #=> String
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor that you want information about.

Returns:

See Also:

#get_accuracy_metrics(options = {}) ⇒ Types::GetAccuracyMetricsResponse

Provides metrics on the accuracy of the models that were trained by the CreatePredictor operation. Use metrics to see how well the model performed and to decide whether to use the predictor to generate a forecast.

Metrics are generated for each backtest window evaluated. For more information, see EvaluationParameters.

The parameters of the filling method determine which items contribute to the metrics. If zero is specified, all items contribute. If nan is specified, only those items that have complete data in the range being evaluated contribute. For more information, see FeaturizationMethod.

For an example of how to train a model and review metrics, see getting-started.
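
As a hedged sketch of reviewing the returned metrics; the field names follow the response structure shown below, and the predictor ARN is a placeholder:

# print the RMSE and weighted quantile losses for each backtest window
resp = client.get_accuracy_metrics(
  predictor_arn: 'arn:aws:forecast:us-west-2:123456789012:predictor/example' # placeholder ARN
)

resp.predictor_evaluation_results.each do |result|
  result.test_windows.each do |window|
    puts "#{result.algorithm_arn} (#{window.evaluation_type}): RMSE=#{window.metrics.rmse}"
    window.metrics.weighted_quantile_losses.each do |wql|
      puts "  p#{(wql.quantile * 100).round} loss: #{wql.loss_value}"
    end
  end
end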

Examples:

Request syntax with placeholder values


resp = client.get_accuracy_metrics({
  predictor_arn: "Arn", # required
})

Response structure


resp.predictor_evaluation_results #=> Array
resp.predictor_evaluation_results[0].algorithm_arn #=> String
resp.predictor_evaluation_results[0].test_windows #=> Array
resp.predictor_evaluation_results[0].test_windows[0].test_window_start #=> Time
resp.predictor_evaluation_results[0].test_windows[0].test_window_end #=> Time
resp.predictor_evaluation_results[0].test_windows[0].item_count #=> Integer
resp.predictor_evaluation_results[0].test_windows[0].evaluation_type #=> String, one of "SUMMARY", "COMPUTED"
resp.predictor_evaluation_results[0].test_windows[0].metrics.rmse #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses #=> Array
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses[0].quantile #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses[0].loss_value #=> Float

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to get metrics for.

Returns:

See Also:

#list_dataset_groups(options = {}) ⇒ Types::ListDatasetGroupsResponse

Returns a list of dataset groups created using the CreateDatasetGroup operation. For each dataset group, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribeDatasetGroup operation.
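
Because results are returned a page at a time with NextToken, the following hedged sketch collects all dataset group ARNs; the page size is arbitrary:

# page through all dataset groups using next_token
arns = []
params = { max_results: 100 }
loop do
  resp = client.list_dataset_groups(params)
  arns.concat(resp.dataset_groups.map(&:dataset_group_arn))
  break if resp.next_token.nil? || resp.next_token.empty?
  params[:next_token] = resp.next_token
end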

Examples:

Request syntax with placeholder values


resp = client.list_dataset_groups({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.dataset_groups #=> Array
resp.dataset_groups[0].dataset_group_arn #=> String
resp.dataset_groups[0].dataset_group_name #=> String
resp.dataset_groups[0].creation_time #=> Time
resp.dataset_groups[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

Returns:

See Also:

#list_dataset_import_jobs(options = {}) ⇒ Types::ListDatasetImportJobsResponse

Returns a list of dataset import jobs created using the CreateDatasetImportJob operation. For each import job, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribeDatasetImportJob operation. You can filter the list by providing an array of Filter objects.

Examples:

Request syntax with placeholder values


resp = client.list_dataset_import_jobs({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.dataset_import_jobs #=> Array
resp.dataset_import_jobs[0].dataset_import_job_arn #=> String
resp.dataset_import_jobs[0].dataset_import_job_name #=> String
resp.dataset_import_jobs[0].data_source.s3_config.path #=> String
resp.dataset_import_jobs[0].data_source.s3_config.role_arn #=> String
resp.dataset_import_jobs[0].data_source.s3_config.kms_key_arn #=> String
resp.dataset_import_jobs[0].status #=> String
resp.dataset_import_jobs[0].message #=> String
resp.dataset_import_jobs[0].creation_time #=> Time
resp.dataset_import_jobs[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude from the list, respectively, the dataset import jobs that match the statement. The match statement consists of a key and a value. In this release, Name is the only valid key, which filters on the DatasetImportJobName property.

    • Condition - IS or IS_NOT

    • Key - Name

    • Value - the value to match

    For example, to list all dataset import jobs named my_dataset_import_job, you would specify:

    "Filters": [ { "Condition": "IS", "Key": "Name", "Value": "my_dataset_import_job" } ]

Returns:

See Also:

#list_datasets(options = {}) ⇒ Types::ListDatasetsResponse

Returns a list of datasets created using the CreateDataset operation. For each dataset, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribeDataset operation.

Examples:

Request syntax with placeholder values


resp = client.list_datasets({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.datasets #=> Array
resp.datasets[0].dataset_arn #=> String
resp.datasets[0].dataset_name #=> String
resp.datasets[0].dataset_type #=> String, one of "TARGET_TIME_SERIES", "RELATED_TIME_SERIES", "ITEM_METADATA"
resp.datasets[0].domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.datasets[0].creation_time #=> Time
resp.datasets[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

Returns:

See Also:

#list_forecast_export_jobs(options = {}) ⇒ Types::ListForecastExportJobsResponse

Returns a list of forecast export jobs created using the CreateForecastExportJob operation. For each forecast export job, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribeForecastExportJob operation. The list can be filtered using an array of Filter objects.

Examples:

Request syntax with placeholder values


resp = client.list_forecast_export_jobs({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.forecast_export_jobs #=> Array
resp.forecast_export_jobs[0].forecast_export_job_arn #=> String
resp.forecast_export_jobs[0].forecast_export_job_name #=> String
resp.forecast_export_jobs[0].destination.s3_config.path #=> String
resp.forecast_export_jobs[0].destination.s3_config.role_arn #=> String
resp.forecast_export_jobs[0].destination.s3_config.kms_key_arn #=> String
resp.forecast_export_jobs[0].status #=> String
resp.forecast_export_jobs[0].message #=> String
resp.forecast_export_jobs[0].creation_time #=> Time
resp.forecast_export_jobs[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude from the list, respectively, the forecast export jobs that match the statement. The match statement consists of a key and a value. In this release, Name is the only valid key, which filters on the ForecastExportJobName property.

    • Condition - IS or IS_NOT

    • Key - Name

    • Value - the value to match

    For example, to list all forecast export jobs named my_forecast_export_job, you would specify:

    "Filters": [ { "Condition": "IS", "Key": "Name", "Value": "my_forecast_export_job" } ]

Returns:

See Also:

#list_forecasts(options = {}) ⇒ Types::ListForecastsResponse

Returns a list of forecasts created using the CreateForecast operation. For each forecast, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribeForecast operation. The list can be filtered using an array of Filter objects.

Examples:

Request syntax with placeholder values


resp = client.list_forecasts({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.forecasts #=> Array
resp.forecasts[0].forecast_arn #=> String
resp.forecasts[0].forecast_name #=> String
resp.forecasts[0].predictor_arn #=> String
resp.forecasts[0].dataset_group_arn #=> String
resp.forecasts[0].status #=> String
resp.forecasts[0].message #=> String
resp.forecasts[0].creation_time #=> Time
resp.forecasts[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude from the list, respectively, the forecasts that match the statement. The match statement consists of a key and a value. In this release, Name is the only valid key, which filters on the ForecastName property.

    • Condition - IS or IS_NOT

    • Key - Name

    • Value - the value to match

    For example, to list all forecasts named my_forecast, you would specify:

    "Filters": [ { "Condition": "IS", "Key": "Name", "Value": "my_forecast" } ]

Returns:

See Also:

#list_predictors(options = {}) ⇒ Types::ListPredictorsResponse

Returns a list of predictors created using the CreatePredictor operation. For each predictor, a summary of its properties, including its Amazon Resource Name (ARN), is returned. You can retrieve the complete set of properties by using the ARN with the DescribePredictor operation. The list can be filtered using an array of Filter objects.

Examples:

Request syntax with placeholder values


resp = client.list_predictors({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.predictors #=> Array
resp.predictors[0].predictor_arn #=> String
resp.predictors[0].predictor_name #=> String
resp.predictors[0].dataset_group_arn #=> String
resp.predictors[0].status #=> String
resp.predictors[0].message #=> String
resp.predictors[0].creation_time #=> Time
resp.predictors[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude, respectively, from the list, the predictors that match the statement. The match statement consists of a key and a value. In this release, Name is the only valid key, which filters on the PredictorName property.

    • Condition - IS or IS_NOT

    • Key - Name

    • Value - the value to match

    For example, to list all predictors named my_predictor, you would specify:

    "Filters": [ { "Condition": "IS", "Key": "Name", "Value": "my_predictor" } ]

Returns:

See Also:

#update_dataset_group(options = {}) ⇒ Struct

Replaces any existing datasets in the dataset group with the specified datasets.

The Status of the dataset group must be ACTIVE before creating a predictor using the dataset group. Use the DescribeDatasetGroup operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.update_dataset_group({
  dataset_group_arn: "Arn", # required
  dataset_arns: ["Arn"], # required
})

Options Hash (options):

  • :dataset_group_arn (required, String)

    The ARN of the dataset group.

  • :dataset_arns (required, Array<String>)

    An array of Amazon Resource Names (ARNs) of the datasets to add to the dataset group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

A waiter polls an API operation until a resource enters a desired state.

Basic Usage

Waiters will poll until they are successful, until they fail by entering a terminal state, or until a maximum number of attempts is made.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Parameters:

  • waiter_name (Symbol)

    The name of the waiter. See #waiter_names for a full list of supported waiters.

  • params (Hash) (defaults to: {})

    Additional request parameters. See the #waiter_names for a list of supported waiters and what request they call. The called request determines the list of accepted parameters.

Yield Parameters:

Returns:

  • (Boolean)

    Returns true if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an unexpected error is encountered while polling for a resource.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

#waiter_names ⇒ Array<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter Name | Client Method | Default Delay | Default Max Attempts

Returns:

  • (Array<Symbol>)

    the list of supported waiters.