You are viewing documentation for version 2 of the AWS SDK for Ruby.

Class: Aws::ForecastService::Client

Inherits:
Seahorse::Client::Base
Defined in:
(unknown)

Overview

An API client for Amazon Forecast Service. To construct a client, you need to configure a :region and :credentials.

forecastservice = Aws::ForecastService::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

  • ENV['AWS_REGION']
  • Aws.config[:region]

See the AWS documentation for a list of supported regions.

Credentials

Default credentials are loaded automatically from the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Aws.config[:credentials]
  • The shared credentials ini file at ~/.aws/credentials
  • From an instance profile when running on EC2

You can also construct a credentials object from one of the following classes and pass it via the :credentials option:

  • Aws::Credentials
  • Aws::SharedCredentials
  • Aws::InstanceProfileCredentials
  • Aws::AssumeRoleCredentials

Alternatively, you can configure credentials statically with :access_key_id and :secret_access_key:

# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))

Aws::ForecastService::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.
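
As noted above, you can also pass an explicit credentials object. A minimal sketch using Aws::SharedCredentials (the profile name is illustrative):

# Load credentials from a named profile in ~/.aws/credentials
creds = Aws::SharedCredentials.new(profile_name: "my_profile")

forecastservice = Aws::ForecastService::Client.new(
  region: "us-west-2",
  credentials: creds
)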

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Constructor

API Operations

Instance Method Summary

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options = {}) ⇒ Aws::ForecastService::Client

Constructs an API client.

Options Hash (options):

  • :access_key_id (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :active_endpoint_cache (Boolean)

    When set to true, a background thread polls for endpoints every 60 seconds by default. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.

  • :credentials (required, Credentials)

    Your AWS credentials. The following locations will be searched in order for credentials:

    • :access_key_id, :secret_access_key, and :session_token options
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • ~/.aws/credentials shared credentials file
    • EC2 instance profile credentials

    See Plugins::RequestSigner for more details.

  • :disable_host_prefix_injection (Boolean)

    Set to true to disable the SDK from automatically adding a host prefix to the default service endpoint when available. See Plugins::EndpointPattern for more details.

  • :endpoint (String)

    A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.

  • :endpoint_cache_max_entries (Integer)

    The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_max_threads (Integer)

    The maximum number of threads used to poll for endpoints to cache. Defaults to 10. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_poll_interval (Integer)

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval in seconds for making requests to fetch endpoint information. Defaults to 60 sec. See Plugins::EndpointDiscovery for more details.

  • :endpoint_discovery (Boolean)

    When set to true, endpoint discovery will be enabled for operations when available. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :http_continue_timeout (Float) — default: 1

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_idle_timeout (Integer) — default: 5

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_open_timeout (Integer) — default: 15

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_proxy (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_read_timeout (Integer) — default: 60

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_wire_trace (Boolean) — default: false

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the logger at. See Plugins::Logging for more details.

  • :log_formatter (Logging::LogFormatter)

    The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.

  • :logger (Logger) — default: nil

    The Logger instance to send log messages to. If this option is not set, logging will be disabled. See Plugins::Logging for more details.

  • :profile (String)

    Used when loading credentials from the shared credentials file at ~/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.

  • :region (required, String)

    The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks ENV['AMAZON_REGION'] and ENV['AWS_DEFAULT_REGION']. See Plugins::RegionalEndpoint for more details.

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500-level server errors and certain ~400-level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, and auth errors from expired credentials. See Plugins::RetryErrors for more details.

  • :secret_access_key (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :session_token (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. This option is useful when you want to ensure the highest level of performance by avoiding the overhead of walking request parameters and response data structures.

    When :simple_json is enabled, the request parameters hash must be formatted exactly as the API expects. See Plugins::Protocols::JsonRpc for more details.

  • :ssl_ca_bundle (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_directory (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_store (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_verify_peer (Boolean) — default: true

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information, and see the sketch after this options list.

    Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled. See Plugins::StubResponses for more details.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.
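
Response stubbing example (a minimal sketch; the stubbed dataset name is illustrative):

# No HTTP requests are made when stub_responses is enabled.
client = Aws::ForecastService::Client.new(
  region: "us-west-2",
  stub_responses: true
)

# Stub the data returned by #list_datasets.
client.stub_responses(:list_datasets, datasets: [{ dataset_name: "my_dataset" }])

client.list_datasets.datasets.map(&:dataset_name) #=> ["my_dataset"]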

Instance Method Details

#create_dataset(options = {}) ⇒ Types::CreateDatasetResponse

Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:

  • DataFrequency - How frequently your historical time-series data is collected.

  • Domain and DatasetType - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.

  • Schema - A schema specifies the fields in the dataset, including the field name and data type.

After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see howitworks-datasets-groups.

To get a list of all your datasets, use the ListDatasets operation.

For example Forecast datasets, see the Amazon Forecast Sample GitHub repository.

The Status of a dataset must be ACTIVE before you can import training data. Use the DescribeDataset operation to get the status.
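
A minimal polling sketch (the ARN and sleep interval are illustrative) for waiting until a newly created dataset is ready for imports:

# Poll DescribeDataset until the dataset becomes ACTIVE.
loop do
  status = client.describe_dataset(dataset_arn: dataset_arn).status
  break if status == "ACTIVE"
  raise "dataset creation failed: #{status}" if status == "CREATE_FAILED"
  sleep 10
end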

Examples:

Request syntax with placeholder values


resp = client.create_dataset({
  dataset_name: "Name", # required
  domain: "RETAIL", # required, accepts RETAIL, CUSTOM, INVENTORY_PLANNING, EC2_CAPACITY, WORK_FORCE, WEB_TRAFFIC, METRICS
  dataset_type: "TARGET_TIME_SERIES", # required, accepts TARGET_TIME_SERIES, RELATED_TIME_SERIES, ITEM_METADATA
  data_frequency: "Frequency",
  schema: { # required
    attributes: [
      {
        attribute_name: "Name",
        attribute_type: "string", # accepts string, integer, float, timestamp
      },
    ],
  },
  encryption_config: {
    role_arn: "Arn", # required
    kms_key_arn: "KMSKeyArn", # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.dataset_arn #=> String

Options Hash (options):

  • :dataset_name (required, String)

    A name for the dataset.

  • :domain (required, String)

    The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.

    The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see howitworks-datasets-groups.

  • :dataset_type (required, String)

    The dataset type. Valid values depend on the chosen Domain.

  • :data_frequency (String)

    The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.

    Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, "D" indicates every day and "15min" indicates every 15 minutes.

  • :schema (required, Types::Schema)

    The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see howitworks-domains-ds-types.

  • :encryption_config (Types::EncryptionConfig)

    An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the dataset to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#create_dataset_group(options = {}) ⇒ Types::CreateDatasetGroupResponse

Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the UpdateDatasetGroup operation.

After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see howitworks-datasets-groups.

To get a list of all your datasets groups, use the ListDatasetGroups operation.

The Status of a dataset group must be ACTIVE before you can use the dataset group to create a predictor. To get the status, use the DescribeDatasetGroup operation.

Examples:

Request syntax with placeholder values


resp = client.create_dataset_group({
  dataset_group_name: "Name", # required
  domain: "RETAIL", # required, accepts RETAIL, CUSTOM, INVENTORY_PLANNING, EC2_CAPACITY, WORK_FORCE, WEB_TRAFFIC, METRICS
  dataset_arns: ["Arn"],
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.dataset_group_arn #=> String

Options Hash (options):

  • :dataset_group_name (required, String)

    A name for the dataset group.

  • :domain (required, String)

    The domain associated with the dataset group. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDataset operation must match.

    The Domain and DatasetType that you choose determine the fields that must be present in training data that you import to a dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires that item_id, timestamp, and demand fields are present in your data. For more information, see howitworks-datasets-groups.

  • :dataset_arns (Array<String>)

    An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the dataset group to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#create_dataset_import_job(options = {}) ⇒ Types::CreateDatasetImportJobResponse

Imports your training data to an Amazon Forecast dataset. You provide the location of your training data in an Amazon Simple Storage Service (Amazon S3) bucket and the Amazon Resource Name (ARN) of the dataset that you want to import the data to.

You must specify a DataSource object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data, as Amazon Forecast makes a copy of your data and processes it in an internal AWS system. For more information, see aws-forecast-iam-roles.

The training data must be in CSV format. The delimiter must be a comma (,).

You can specify the path to a specific CSV file, to an S3 bucket, or to a folder in an S3 bucket. For the latter two cases, Amazon Forecast imports all files up to the limit of 10,000 files.

Because dataset imports are not aggregated, your most recent dataset import is the one that is used when training a predictor or generating a forecast. Make sure that your most recent dataset import contains all of the data you want to model off of, and not just the new data collected since the previous import.

To get a list of all your dataset import jobs, filtered by specified criteria, use the ListDatasetImportJobs operation.
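
A minimal sketch (the ARN is illustrative): once an import job reaches ACTIVE, you can inspect the field statistics that Forecast computed during the import.

job = client.describe_dataset_import_job(dataset_import_job_arn: import_job_arn)

if job.status == "ACTIVE"
  job.field_statistics.each do |field, stats|
    puts "#{field}: #{stats.count} rows, #{stats.count_null} null values"
  end
end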

Examples:

Request syntax with placeholder values


resp = client.create_dataset_import_job({
  dataset_import_job_name: "Name", # required
  dataset_arn: "Arn", # required
  data_source: { # required
    s3_config: { # required
      path: "S3Path", # required
      role_arn: "Arn", # required
      kms_key_arn: "KMSKeyArn",
    },
  },
  timestamp_format: "TimestampFormat",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.dataset_import_job_arn #=> String

Options Hash (options):

  • :dataset_import_job_name (required, String)

    The name for the dataset import job. We recommend including the current timestamp in the name, for example, 20190721DatasetImport. This can help you avoid getting a ResourceAlreadyExistsException exception.

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.

  • :data_source (required, Types::DataSource)

    The location of the training data to import and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. The training data must be stored in an Amazon S3 bucket.

    If encryption is used, DataSource must include an AWS Key Management Service (KMS) key and the IAM role must allow Amazon Forecast permission to access the key. The KMS key and IAM role must match those specified in the EncryptionConfig parameter of the CreateDataset operation.

  • :timestamp_format (String)

    The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported:

    • \"yyyy-MM-dd\"

      For the following data frequencies: Y, M, W, and D

    • \"yyyy-MM-dd HH:mm:ss\"

      For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D

    If the format isn\'t specified, Amazon Forecast expects the format to be \"yyyy-MM-dd HH:mm:ss\".

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the dataset import job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#create_forecast(options = {}) ⇒ Types::CreateForecastResponse

Creates a forecast for each item in the TARGET_TIME_SERIES dataset that was used to train the predictor. This is known as inference. To retrieve the forecast for a single item at low latency, use the QueryForecast operation. To export the complete forecast into your Amazon Simple Storage Service (Amazon S3) bucket, use the CreateForecastExportJob operation.

The range of the forecast is determined by the ForecastHorizon value, which you specify in the CreatePredictor request. When you query a forecast, you can request a specific date range within the forecast.

To get a list of all your forecasts, use the ListForecasts operation.

The forecasts generated by Amazon Forecast are in the same time zone as the dataset that was used to create the predictor.

For more information, see howitworks-forecast.

The Status of the forecast must be ACTIVE before you can query or export the forecast. Use the DescribeForecast operation to get the status.

Examples:

Request syntax with placeholder values


resp = client.create_forecast({
  forecast_name: "Name", # required
  predictor_arn: "Arn", # required
  forecast_types: ["ForecastType"],
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.forecast_arn #=> String

Options Hash (options):

  • :forecast_name (required, String)

    A name for the forecast.

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.

  • :forecast_types (Array<String>)

    The quantiles at which probabilistic forecasts are generated. You can currently specify up to 5 quantiles per forecast. Accepted values include 0.01 to 0.99 (increments of .01 only) and mean. The mean forecast is different from the median (0.50) when the distribution is not symmetric (for example, Beta and Negative Binomial). The default value is ["0.1", "0.5", "0.9"].

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the forecast to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#create_forecast_export_job(options = {}) ⇒ Types::CreateForecastExportJobResponse

Exports a forecast created by the CreateForecast operation to your Amazon Simple Storage Service (Amazon S3) bucket. The forecast file name will match the following conventions:

<ForecastExportJobName><ExportTimestamp><PartNumber>

where the <ExportTimestamp> component is in Java SimpleDateFormat (yyyy-MM-ddTHH-mm-ssZ).

You must specify a DataDestination object that includes an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the Amazon S3 bucket. For more information, see aws-forecast-iam-roles.

For more information, see howitworks-forecast.

To get a list of all your forecast export jobs, use the ListForecastExportJobs operation.

The Status of the forecast export job must be ACTIVE before you can access the forecast in your Amazon S3 bucket. To get the status, use the DescribeForecastExportJob operation.
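
A minimal sketch (names, ARNs, and the S3 path are illustrative): export a forecast to S3 and check the export job status.

resp = client.create_forecast_export_job({
  forecast_export_job_name: "my_forecast_export",
  forecast_arn: forecast_arn,
  destination: {
    s3_config: {
      path: "s3://my-bucket/forecast-exports/",
      role_arn: "arn:aws:iam::123456789012:role/MyForecastRole",
    },
  },
})

client.describe_forecast_export_job(
  forecast_export_job_arn: resp.forecast_export_job_arn
).status #=> String (must be "ACTIVE" before you read the files from S3)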

Examples:

Request syntax with placeholder values


resp = client.create_forecast_export_job({
  forecast_export_job_name: "Name", # required
  forecast_arn: "Arn", # required
  destination: { # required
    s3_config: { # required
      path: "S3Path", # required
      role_arn: "Arn", # required
      kms_key_arn: "KMSKeyArn",
    },
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.forecast_export_job_arn #=> String

Options Hash (options):

  • :forecast_export_job_name (required, String)

    The name for the forecast export job.

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast that you want to export.

  • :destination (required, Types::DataDestination)

    The location where you want to save the forecast and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket.

    If encryption is used, Destination must include an AWS Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the forecast export job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#create_predictor(options = {}) ⇒ Types::CreatePredictorResponse

Creates an Amazon Forecast predictor.

In the request, provide a dataset group and either specify an algorithm or let Amazon Forecast choose an algorithm for you using AutoML. If you specify an algorithm, you also can override algorithm-specific hyperparameters.

Amazon Forecast uses the algorithm to train a predictor using the latest version of the datasets in the specified dataset group. You can then generate a forecast using the CreateForecast operation.

To see the evaluation metrics, use the GetAccuracyMetrics operation.

You can specify a featurization configuration to fill and aggregate the data fields in the TARGET_TIME_SERIES dataset to improve model training. For more information, see FeaturizationConfig.

For RELATED_TIME_SERIES datasets, CreatePredictor verifies that the DataFrequency specified when the dataset was created matches the ForecastFrequency. TARGET_TIME_SERIES datasets don't have this restriction. Amazon Forecast also verifies the delimiter and timestamp format. For more information, see howitworks-datasets-groups.

By default, predictors are trained and evaluated at the 0.1 (P10), 0.5 (P50), and 0.9 (P90) quantiles. You can choose custom forecast types to train and evaluate your predictor by setting the ForecastTypes.

AutoML

If you want Amazon Forecast to evaluate each algorithm and choose the one that minimizes the objective function, set PerformAutoML to true. The objective function is defined as the mean of the weighted losses over the forecast types. By default, these are the p10, p50, and p90 quantile losses. For more information, see EvaluationResult.

When AutoML is enabled, the following properties are disallowed:

  • AlgorithmArn

  • HPOConfig

  • PerformHPO

  • TrainingParameters

To get a list of all of your predictors, use the ListPredictors operation.

Before you can use the predictor to create a forecast, the Status of the predictor must be ACTIVE, signifying that training has completed. To get the status, use the DescribePredictor operation.
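
A minimal AutoML sketch (names and the ARN are illustrative). With PerformAutoML set to true, AlgorithmArn, HPOConfig, PerformHPO, and TrainingParameters are omitted:

resp = client.create_predictor({
  predictor_name: "my_automl_predictor",
  forecast_horizon: 10,
  perform_auto_ml: true,
  input_data_config: {
    dataset_group_arn: "arn:aws:forecast:us-west-2:123456789012:dataset-group/my_dataset_group",
  },
  featurization_config: {
    forecast_frequency: "D",
  },
})

resp.predictor_arn #=> String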

Examples:

Request syntax with placeholder values


resp = client.create_predictor({
  predictor_name: "Name", # required
  algorithm_arn: "Arn",
  forecast_horizon: 1, # required
  forecast_types: ["ForecastType"],
  perform_auto_ml: false,
  perform_hpo: false,
  training_parameters: {
    "ParameterKey" => "ParameterValue",
  },
  evaluation_parameters: {
    number_of_backtest_windows: 1,
    back_test_window_offset: 1,
  },
  hpo_config: {
    parameter_ranges: {
      categorical_parameter_ranges: [
        {
          name: "Name", # required
          values: ["Value"], # required
        },
      ],
      continuous_parameter_ranges: [
        {
          name: "Name", # required
          max_value: 1.0, # required
          min_value: 1.0, # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      integer_parameter_ranges: [
        {
          name: "Name", # required
          max_value: 1, # required
          min_value: 1, # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
    },
  },
  input_data_config: { # required
    dataset_group_arn: "Arn", # required
    supplementary_features: [
      {
        name: "Name", # required
        value: "Value", # required
      },
    ],
  },
  featurization_config: { # required
    forecast_frequency: "Frequency", # required
    forecast_dimensions: ["Name"],
    featurizations: [
      {
        attribute_name: "Name", # required
        featurization_pipeline: [
          {
            featurization_method_name: "filling", # required, accepts filling
            featurization_method_parameters: {
              "ParameterKey" => "ParameterValue",
            },
          },
        ],
      },
    ],
  },
  encryption_config: {
    role_arn: "Arn", # required
    kms_key_arn: "KMSKeyArn", # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.predictor_arn #=> String

Options Hash (options):

  • :predictor_name (required, String)

    A name for the predictor.

  • :algorithm_arn (String)

    The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true.

    Supported algorithms:

    • arn:aws:forecast:::algorithm/ARIMA

    • arn:aws:forecast:::algorithm/CNN-QR

    • arn:aws:forecast:::algorithm/Deep_AR_Plus

    • arn:aws:forecast:::algorithm/ETS

    • arn:aws:forecast:::algorithm/NPTS

    • arn:aws:forecast:::algorithm/Prophet

  • :forecast_horizon (required, Integer)

    Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length.

    For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days.

    The maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.

  • :forecast_types (Array<String>)

    Specifies the forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.

    The default value is ["0.10", "0.50", "0.9"].

  • :perform_auto_ml (Boolean)

    Whether to perform AutoML. When Amazon Forecast performs AutoML, it evaluates the algorithms it provides and chooses the best algorithm and configuration for your training dataset.

    The default value is false. In this case, you are required to specify an algorithm.

    Set PerformAutoML to true to have Amazon Forecast perform AutoML. This is a good option if you aren't sure which algorithm is suitable for your training data. In this case, PerformHPO must be false.

  • :perform_hpo (Boolean)

    Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as running a hyperparameter tuning job.

    The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm.

    To override the default values, set PerformHPO to true and, optionally, supply the HyperParameterTuningJobConfig object. The tuning job specifies a metric to optimize, which hyperparameters participate in tuning, and the valid range for each tunable hyperparameter. In this case, you are required to specify an algorithm and PerformAutoML must be false.

    The following algorithms support HPO:

    • DeepAR+

    • CNN-QR

  • :training_parameters (Hash<String,String>)

    The hyperparameters to override for model training. The hyperparameters that you can override are listed in the individual algorithms. For the list of supported algorithms, see aws-forecast-choosing-recipes.

  • :evaluation_parameters (Types::EvaluationParameters)

    Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.

  • :hpo_config (Types::HyperParameterTuningJobConfig)

    Provides hyperparameter override values for the algorithm. If you don't provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes.

    If you included the HPOConfig object, you must set PerformHPO to true.

  • :input_data_config (required, Types::InputDataConfig)

    Describes the dataset group that contains the data to use to train the predictor.

  • :featurization_config (required, Types::FeaturizationConfig)

    The featurization configuration.

  • :encryption_config (Types::EncryptionConfig)

    An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

  • :tags (Array<Types::Tag>)

    The optional metadata that you apply to the predictor to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a key prefix; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

See Also:

#delete_dataset(options = {}) ⇒ Struct

Deletes an Amazon Forecast dataset that was created using the CreateDataset operation. You can only delete datasets that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeDataset operation.

Forecast does not automatically update any dataset groups that contain the deleted dataset. To update the dataset group, use the UpdateDatasetGroup operation, omitting the deleted dataset's ARN.
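
A minimal sketch (ARNs are illustrative) of removing a deleted dataset's ARN from a dataset group with the UpdateDatasetGroup operation:

group_arn = "arn:aws:forecast:us-west-2:123456789012:dataset-group/my_dataset_group"

# Keep every dataset ARN except the one that was just deleted.
remaining = client.describe_dataset_group(dataset_group_arn: group_arn).dataset_arns - [deleted_dataset_arn]

client.update_dataset_group(
  dataset_group_arn: group_arn,
  dataset_arns: remaining
)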

Examples:

Request syntax with placeholder values


resp = client.delete_dataset({
  dataset_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_dataset_group(options = {}) ⇒ Struct

Deletes a dataset group created using the CreateDatasetGroup operation. You can only delete dataset groups that have a status of ACTIVE, CREATE_FAILED, or UPDATE_FAILED. To get the status, use the DescribeDatasetGroup operation.

This operation deletes only the dataset group, not the datasets in the group.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset_group({
  dataset_group_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_group_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset group to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_dataset_import_job(options = {}) ⇒ Struct

Deletes a dataset import job created using the CreateDatasetImportJob operation. You can delete only dataset import jobs that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeDatasetImportJob operation.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset_import_job({
  dataset_import_job_arn: "Arn", # required
})

Options Hash (options):

  • :dataset_import_job_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset import job to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_forecast(options = {}) ⇒ Struct

Deletes a forecast created using the CreateForecast operation. You can delete only forecasts that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeForecast operation.

You can't delete a forecast while it is being exported. After a forecast is deleted, you can no longer query the forecast.

Examples:

Request syntax with placeholder values


resp = client.delete_forecast({
  forecast_arn: "Arn", # required
})

Options Hash (options):

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_forecast_export_job(options = {}) ⇒ Struct

Deletes a forecast export job created using the CreateForecastExportJob operation. You can delete only export jobs that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribeForecastExportJob operation.

Examples:

Request syntax with placeholder values


resp = client.delete_forecast_export_job({
  forecast_export_job_arn: "Arn", # required
})

Options Hash (options):

  • :forecast_export_job_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast export job to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_predictor(options = {}) ⇒ Struct

Deletes a predictor created using the CreatePredictor operation. You can delete only predictors that have a status of ACTIVE or CREATE_FAILED. To get the status, use the DescribePredictor operation.

Examples:

Request syntax with placeholder values


resp = client.delete_predictor({
  predictor_arn: "Arn", # required
})

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#describe_dataset(options = {}) ⇒ Types::DescribeDatasetResponse

Describes an Amazon Forecast dataset created using the CreateDataset operation.

In addition to listing the parameters specified in the CreateDataset request, this operation includes the following dataset properties:

  • CreationTime

  • LastModificationTime

  • Status

Examples:

Request syntax with placeholder values


resp = client.describe_dataset({
  dataset_arn: "Arn", # required
})

Response structure


resp.dataset_arn #=> String
resp.dataset_name #=> String
resp.domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.dataset_type #=> String, one of "TARGET_TIME_SERIES", "RELATED_TIME_SERIES", "ITEM_METADATA"
resp.data_frequency #=> String
resp.schema.attributes #=> Array
resp.schema.attributes[0].attribute_name #=> String
resp.schema.attributes[0].attribute_type #=> String, one of "string", "integer", "float", "timestamp"
resp.encryption_config.role_arn #=> String
resp.encryption_config.kms_key_arn #=> String
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset.

Returns:

See Also:

#describe_dataset_group(options = {}) ⇒ Types::DescribeDatasetGroupResponse

Describes a dataset group created using the CreateDatasetGroup operation.

In addition to listing the parameters provided in the CreateDatasetGroup request, this operation includes the following properties:

  • DatasetArns - The datasets belonging to the group.

  • CreationTime

  • LastModificationTime

  • Status

Examples:

Request syntax with placeholder values


resp = client.describe_dataset_group({
  dataset_group_arn: "Arn", # required
})

Response structure


resp.dataset_group_name #=> String
resp.dataset_group_arn #=> String
resp.dataset_arns #=> Array
resp.dataset_arns[0] #=> String
resp.domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_group_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset group.

Returns:

See Also:

#describe_dataset_import_job(options = {}) ⇒ Types::DescribeDatasetImportJobResponse

Describes a dataset import job created using the CreateDatasetImportJob operation.

In addition to listing the parameters provided in the CreateDatasetImportJob request, this operation includes the following properties:

  • CreationTime

  • LastModificationTime

  • DataSize

  • FieldStatistics

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_dataset_import_job({
  dataset_import_job_arn: "Arn", # required
})

Response structure


resp.dataset_import_job_name #=> String
resp.dataset_import_job_arn #=> String
resp.dataset_arn #=> String
resp.timestamp_format #=> String
resp.data_source.s3_config.path #=> String
resp.data_source.s3_config.role_arn #=> String
resp.data_source.s3_config.kms_key_arn #=> String
resp.field_statistics #=> Hash
resp.field_statistics["String"].count #=> Integer
resp.field_statistics["String"].count_distinct #=> Integer
resp.field_statistics["String"].count_null #=> Integer
resp.field_statistics["String"].count_nan #=> Integer
resp.field_statistics["String"].min #=> String
resp.field_statistics["String"].max #=> String
resp.field_statistics["String"].avg #=> Float
resp.field_statistics["String"].stddev #=> Float
resp.data_size #=> Float
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :dataset_import_job_arn (required, String)

    The Amazon Resource Name (ARN) of the dataset import job.

Returns:

See Also:

#describe_forecast(options = {}) ⇒ Types::DescribeForecastResponse

Describes a forecast created using the CreateForecast operation.

In addition to listing the properties provided in the CreateForecast request, this operation lists the following properties:

  • DatasetGroupArn - The dataset group that provided the training data.

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_forecast({
  forecast_arn: "Arn", # required
})

Response structure


resp.forecast_arn #=> String
resp.forecast_name #=> String
resp.forecast_types #=> Array
resp.forecast_types[0] #=> String
resp.predictor_arn #=> String
resp.dataset_group_arn #=> String
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :forecast_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast.

Returns:

See Also:

#describe_forecast_export_job(options = {}) ⇒ Types::DescribeForecastExportJobResponse

Describes a forecast export job created using the CreateForecastExportJob operation.

In addition to listing the properties provided by the user in the CreateForecastExportJob request, this operation lists the following properties:

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_forecast_export_job({
  forecast_export_job_arn: "Arn", # required
})

Response structure


resp.forecast_export_job_arn #=> String
resp.forecast_export_job_name #=> String
resp.forecast_arn #=> String
resp.destination.s3_config.path #=> String
resp.destination.s3_config.role_arn #=> String
resp.destination.s3_config.kms_key_arn #=> String
resp.message #=> String
resp.status #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :forecast_export_job_arn (required, String)

    The Amazon Resource Name (ARN) of the forecast export job.

Returns:

See Also:

#describe_predictor(options = {}) ⇒ Types::DescribePredictorResponse

Describes a predictor created using the CreatePredictor operation.

In addition to listing the properties provided in the CreatePredictor request, this operation lists the following properties:

  • DatasetImportJobArns - The dataset import jobs used to import training data.

  • AutoMLAlgorithmArns - If AutoML is performed, the algorithms that were evaluated.

  • CreationTime

  • LastModificationTime

  • Status

  • Message - If an error occurred, information about the error.

Examples:

Request syntax with placeholder values


resp = client.describe_predictor({
  predictor_arn: "Arn", # required
})

Response structure


resp.predictor_arn #=> String
resp.predictor_name #=> String
resp.algorithm_arn #=> String
resp.forecast_horizon #=> Integer
resp.forecast_types #=> Array
resp.forecast_types[0] #=> String
resp.perform_auto_ml #=> true/false
resp.perform_hpo #=> true/false
resp.training_parameters #=> Hash
resp.training_parameters["ParameterKey"] #=> String
resp.evaluation_parameters.number_of_backtest_windows #=> Integer
resp.evaluation_parameters.back_test_window_offset #=> Integer
resp.hpo_config.parameter_ranges.categorical_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].values #=> Array
resp.hpo_config.parameter_ranges.categorical_parameter_ranges[0].values[0] #=> String
resp.hpo_config.parameter_ranges.continuous_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].max_value #=> Float
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].min_value #=> Float
resp.hpo_config.parameter_ranges.continuous_parameter_ranges[0].scaling_type #=> String, one of "Auto", "Linear", "Logarithmic", "ReverseLogarithmic"
resp.hpo_config.parameter_ranges.integer_parameter_ranges #=> Array
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].name #=> String
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].max_value #=> Integer
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].min_value #=> Integer
resp.hpo_config.parameter_ranges.integer_parameter_ranges[0].scaling_type #=> String, one of "Auto", "Linear", "Logarithmic", "ReverseLogarithmic"
resp.input_data_config.dataset_group_arn #=> String
resp.input_data_config.supplementary_features #=> Array
resp.input_data_config.supplementary_features[0].name #=> String
resp.input_data_config.supplementary_features[0].value #=> String
resp.featurization_config.forecast_frequency #=> String
resp.featurization_config.forecast_dimensions #=> Array
resp.featurization_config.forecast_dimensions[0] #=> String
resp.featurization_config.featurizations #=> Array
resp.featurization_config.featurizations[0].attribute_name #=> String
resp.featurization_config.featurizations[0].featurization_pipeline #=> Array
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_name #=> String, one of "filling"
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_parameters #=> Hash
resp.featurization_config.featurizations[0].featurization_pipeline[0].featurization_method_parameters["ParameterKey"] #=> String
resp.encryption_config.role_arn #=> String
resp.encryption_config.kms_key_arn #=> String
resp.predictor_execution_details.predictor_executions #=> Array
resp.predictor_execution_details.predictor_executions[0].algorithm_arn #=> String
resp.predictor_execution_details.predictor_executions[0].test_windows #=> Array
resp.predictor_execution_details.predictor_executions[0].test_windows[0].test_window_start #=> Time
resp.predictor_execution_details.predictor_executions[0].test_windows[0].test_window_end #=> Time
resp.predictor_execution_details.predictor_executions[0].test_windows[0].status #=> String
resp.predictor_execution_details.predictor_executions[0].test_windows[0].message #=> String
resp.dataset_import_job_arns #=> Array
resp.dataset_import_job_arns[0] #=> String
resp.auto_ml_algorithm_arns #=> Array
resp.auto_ml_algorithm_arns[0] #=> String
resp.status #=> String
resp.message #=> String
resp.creation_time #=> Time
resp.last_modification_time #=> Time

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor that you want information about.

Returns:

See Also:

#get_accuracy_metrics(options = {}) ⇒ Types::GetAccuracyMetricsResponse

Provides metrics on the accuracy of the models that were trained by the CreatePredictor operation. Use metrics to see how well the model performed and to decide whether to use the predictor to generate a forecast. For more information, see Predictor Metrics.

This operation generates metrics for each backtest window that was evaluated. The number of backtest windows (NumberOfBacktestWindows) is specified using the EvaluationParameters object, which is optionally included in the CreatePredictor request. If NumberOfBacktestWindows isn't specified, the number defaults to one.

The parameters of the filling method determine which items contribute to the metrics. If you want all items to contribute, specify zero. If you want only those items that have complete data in the range being evaluated to contribute, specify nan. For more information, see FeaturizationMethod.

Before you can get accuracy metrics, the Status of the predictor must be ACTIVE, signifying that training has completed. To get the status, use the DescribePredictor operation.
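
A minimal sketch (the ARN is illustrative): print the RMSE and weighted quantile losses for each backtest window of each evaluated algorithm.

resp = client.get_accuracy_metrics(predictor_arn: predictor_arn)

resp.predictor_evaluation_results.each do |result|
  result.test_windows.each do |window|
    puts "#{result.algorithm_arn} (#{window.evaluation_type}): RMSE #{window.metrics.rmse}"
    window.metrics.weighted_quantile_losses.each do |wql|
      puts "  quantile #{wql.quantile}: loss #{wql.loss_value}"
    end
  end
end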

Examples:

Request syntax with placeholder values


resp = client.get_accuracy_metrics({
  predictor_arn: "Arn", # required
})

Response structure


resp.predictor_evaluation_results #=> Array
resp.predictor_evaluation_results[0].algorithm_arn #=> String
resp.predictor_evaluation_results[0].test_windows #=> Array
resp.predictor_evaluation_results[0].test_windows[0].test_window_start #=> Time
resp.predictor_evaluation_results[0].test_windows[0].test_window_end #=> Time
resp.predictor_evaluation_results[0].test_windows[0].item_count #=> Integer
resp.predictor_evaluation_results[0].test_windows[0].evaluation_type #=> String, one of "SUMMARY", "COMPUTED"
resp.predictor_evaluation_results[0].test_windows[0].metrics.rmse #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses #=> Array
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses[0].quantile #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.weighted_quantile_losses[0].loss_value #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.error_metrics #=> Array
resp.predictor_evaluation_results[0].test_windows[0].metrics.error_metrics[0].forecast_type #=> String
resp.predictor_evaluation_results[0].test_windows[0].metrics.error_metrics[0].wape #=> Float
resp.predictor_evaluation_results[0].test_windows[0].metrics.error_metrics[0].rmse #=> Float

Options Hash (options):

  • :predictor_arn (required, String)

    The Amazon Resource Name (ARN) of the predictor to get metrics for.

Returns:

See Also:

#list_dataset_groups(options = {}) ⇒ Types::ListDatasetGroupsResponse

Returns a list of dataset groups created using the CreateDatasetGroup operation. For each dataset group, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the dataset group ARN with the DescribeDatasetGroup operation.
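
A minimal pagination sketch: collect every dataset group ARN by following next_token until it is absent.

arns = []
params = { max_results: 100 }
loop do
  resp = client.list_dataset_groups(params)
  arns.concat(resp.dataset_groups.map(&:dataset_group_arn))
  break unless resp.next_token
  params[:next_token] = resp.next_token
end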

Examples:

Request syntax with placeholder values


resp = client.list_dataset_groups({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.dataset_groups #=> Array
resp.dataset_groups[0].dataset_group_arn #=> String
resp.dataset_groups[0].dataset_group_name #=> String
resp.dataset_groups[0].creation_time #=> Time
resp.dataset_groups[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

Returns:

See Also:

#list_dataset_import_jobs(options = {}) ⇒ Types::ListDatasetImportJobsResponse

Returns a list of dataset import jobs created using the CreateDatasetImportJob operation. For each import job, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the ARN with the DescribeDatasetImportJob operation. You can filter the list by providing an array of Filter objects.

Examples:

Request syntax with placeholder values


resp = client.list_dataset_import_jobs({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.dataset_import_jobs #=> Array
resp.dataset_import_jobs[0].dataset_import_job_arn #=> String
resp.dataset_import_jobs[0].dataset_import_job_name #=> String
resp.dataset_import_jobs[0].data_source.s3_config.path #=> String
resp.dataset_import_jobs[0].data_source.s3_config.role_arn #=> String
resp.dataset_import_jobs[0].data_source.s3_config.kms_key_arn #=> String
resp.dataset_import_jobs[0].status #=> String
resp.dataset_import_jobs[0].message #=> String
resp.dataset_import_jobs[0].creation_time #=> Time
resp.dataset_import_jobs[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the datasets that match the statement from the list, respectively. The match statement consists of a key and a value.

    Filter properties

    • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the datasets that match the statement, specify IS. To exclude matching datasets, specify IS_NOT.

    • Key - The name of the parameter to filter on. Valid values are DatasetArn and Status.

    • Value - The value to match.

    For example, to list all dataset import jobs whose status is ACTIVE, you specify the following filter:

    "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]

Returns:

See Also:

#list_datasets(options = {}) ⇒ Types::ListDatasetsResponse

Returns a list of datasets created using the CreateDataset operation. For each dataset, a summary of its properties, including its Amazon Resource Name (ARN), is returned. To retrieve the complete set of properties, use the ARN with the DescribeDataset operation.
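
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; pagination is omitted for brevity) that follows each summary ARN with a DescribeDataset call:

client.list_datasets.datasets.each do |summary|
  detail = client.describe_dataset(dataset_arn: summary.dataset_arn)
  puts "#{detail.dataset_name} (#{detail.domain}/#{detail.dataset_type})"
end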

Examples:

Request syntax with placeholder values


resp = client.list_datasets({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.datasets #=> Array
resp.datasets[0].dataset_arn #=> String
resp.datasets[0].dataset_name #=> String
resp.datasets[0].dataset_type #=> String, one of "TARGET_TIME_SERIES", "RELATED_TIME_SERIES", "ITEM_METADATA"
resp.datasets[0].domain #=> String, one of "RETAIL", "CUSTOM", "INVENTORY_PLANNING", "EC2_CAPACITY", "WORK_FORCE", "WEB_TRAFFIC", "METRICS"
resp.datasets[0].creation_time #=> Time
resp.datasets[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

Returns:

See Also:

#list_forecast_export_jobs(options = {}) ⇒ Types::ListForecastExportJobsResponse

Returns a list of forecast export jobs created using the CreateForecastExportJob operation. For each forecast export job, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). To retrieve the complete set of properties, use the ARN with the DescribeForecastExportJob operation. You can filter the list using an array of Filter objects.
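
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; the forecast ARN is hypothetical) that lists the export jobs for a single forecast:

forecast_arn = "arn:aws:forecast:us-west-2:123456789012:forecast/electricityforecast"
resp = client.list_forecast_export_jobs(
  filters: [
    { key: "ForecastArn", value: forecast_arn, condition: "IS" },
  ]
)
resp.forecast_export_jobs.each do |job|
  puts "#{job.forecast_export_job_name}: #{job.status}"
end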

Examples:

Request syntax with placeholder values


resp = client.list_forecast_export_jobs({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.forecast_export_jobs #=> Array
resp.forecast_export_jobs[0].forecast_export_job_arn #=> String
resp.forecast_export_jobs[0].forecast_export_job_name #=> String
resp.forecast_export_jobs[0].destination.s3_config.path #=> String
resp.forecast_export_jobs[0].destination.s3_config.role_arn #=> String
resp.forecast_export_jobs[0].destination.s3_config.kms_key_arn #=> String
resp.forecast_export_jobs[0].status #=> String
resp.forecast_export_jobs[0].message #=> String
resp.forecast_export_jobs[0].creation_time #=> Time
resp.forecast_export_jobs[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.

    Filter properties

    • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecast export jobs that match the statement, specify IS. To exclude matching forecast export jobs, specify IS_NOT.

    • Key - The name of the parameter to filter on. Valid values are ForecastArn and Status.

    • Value - The value to match.

    For example, to list all jobs that export a forecast named electricityforecast, specify the following filter:

    "Filters": [ { "Condition": "IS", "Key": "ForecastArn", "Value": "arn:aws:forecast:us-west-2:<acct-id>:forecast/electricityforecast" } ]

Returns:

See Also:

#list_forecasts(options = {}) ⇒ Types::ListForecastsResponse

Returns a list of forecasts created using the CreateForecast operation. For each forecast, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). To retrieve the complete set of properties, specify the ARN with the DescribeForecast operation. You can filter the list using an array of Filter objects.
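
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client) that lists every forecast whose status is not ACTIVE:

resp = client.list_forecasts(
  filters: [
    { key: "Status", value: "ACTIVE", condition: "IS_NOT" },
  ]
)
resp.forecasts.each do |forecast|
  puts "#{forecast.forecast_name}: #{forecast.status}"
end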

Examples:

Request syntax with placeholder values


resp = client.list_forecasts({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.forecasts #=> Array
resp.forecasts[0].forecast_arn #=> String
resp.forecasts[0].forecast_name #=> String
resp.forecasts[0].predictor_arn #=> String
resp.forecasts[0].dataset_group_arn #=> String
resp.forecasts[0].status #=> String
resp.forecasts[0].message #=> String
resp.forecasts[0].creation_time #=> Time
resp.forecasts[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the forecasts that match the statement from the list, respectively. The match statement consists of a key and a value.

    Filter properties

    • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecasts that match the statement, specify IS. To exclude matching forecasts, specify IS_NOT.

    • Key - The name of the parameter to filter on. Valid values are DatasetGroupArn, PredictorArn, and Status.

    • Value - The value to match.

    For example, to list all forecasts whose status is not ACTIVE, you would specify:

    "Filters": [ { "Condition": "IS_NOT", "Key": "Status", "Value": "ACTIVE" } ]

Returns:

See Also:

#list_predictors(options = {}) ⇒ Types::ListPredictorsResponse

Returns a list of predictors created using the CreatePredictor operation. For each predictor, this operation returns a summary of its properties, including its Amazon Resource Name (ARN). You can retrieve the complete set of properties by using the ARN with the DescribePredictor operation. You can filter the list using an array of Filter objects.
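
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; the dataset group ARN is hypothetical) that lists the predictors trained on a single dataset group:

dataset_group_arn = "arn:aws:forecast:us-west-2:123456789012:dataset-group/my_dataset_group"
resp = client.list_predictors(
  filters: [
    { key: "DatasetGroupArn", value: dataset_group_arn, condition: "IS" },
  ]
)
resp.predictors.each do |predictor|
  puts "#{predictor.predictor_name}: #{predictor.status}"
end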

Examples:

Request syntax with placeholder values


resp = client.list_predictors({
  next_token: "NextToken",
  max_results: 1,
  filters: [
    {
      key: "String", # required
      value: "Arn", # required
      condition: "IS", # required, accepts IS, IS_NOT
    },
  ],
})

Response structure


resp.predictors #=> Array
resp.predictors[0].predictor_arn #=> String
resp.predictors[0].predictor_name #=> String
resp.predictors[0].dataset_group_arn #=> String
resp.predictors[0].status #=> String
resp.predictors[0].message #=> String
resp.predictors[0].creation_time #=> Time
resp.predictors[0].last_modification_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.

  • :max_results (Integer)

    The number of items to return in the response.

  • :filters (Array<Types::Filter>)

    An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the predictors that match the statement from the list, respectively. The match statement consists of a key and a value.

    Filter properties

    • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the predictors that match the statement, specify IS. To exclude matching predictors, specify IS_NOT.

    • Key - The name of the parameter to filter on. Valid values are DatasetGroupArn and Status.

    • Value - The value to match.

    For example, to list all predictors whose status is ACTIVE, you would specify:

    "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]

Returns:

See Also:

#list_tags_for_resource(options = {}) ⇒ Types::ListTagsForResourceResponse

Lists the tags for an Amazon Forecast resource.
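
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; the resource ARN is hypothetical) that reads the tags into a plain Hash:

resp = client.list_tags_for_resource(
  resource_arn: "arn:aws:forecast:us-west-2:123456789012:predictor/my_predictor"
)
tags = resp.tags.map { |tag| [tag.key, tag.value] }.to_h
puts tags.inspect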

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "Arn", # required
})

Response structure


resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String

Options Hash (options):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) that identifies the resource for which to list the tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast export jobs.

Returns:

See Also:

#tag_resource(options = {}) ⇒ Struct

Associates the specified tags with the resource identified by the given resourceArn. Existing tags on the resource that are not specified in the request parameters are left unchanged. When a resource is deleted, the tags associated with that resource are also deleted.
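
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; the resource ARN and tag values are hypothetical):

client.tag_resource(
  resource_arn: "arn:aws:forecast:us-west-2:123456789012:forecast/electricityforecast",
  tags: [
    { key: "project", value: "demand-planning" },
    { key: "environment", value: "staging" },
  ]
)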

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "Arn", # required
  tags: [ # required
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Options Hash (options):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) that identifies the resource to which to add tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast export jobs.

  • :tags (required, Array<Types::Tag>)

    The tags to add to the resource. Each tag is a key-value pair; the tags parameter is an array of such pairs.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys, as this prefix is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags per resource. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#untag_resource(options = {}) ⇒ Struct

Deletes the specified tags from a resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "Arn", # required
  tag_keys: ["TagKey"], # required
})

Options Hash (options):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) that identifies the resource from which to remove tags. Currently, the supported resources are Forecast dataset groups, datasets, dataset import jobs, predictors, forecasts, and forecast export jobs.

  • :tag_keys (required, Array<String>)

    The keys of the tags to be removed.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_dataset_group(options = {}) ⇒ Struct

Replaces the datasets in a dataset group with the specified datasets.

The Status of the dataset group must be ACTIVE before you can use the dataset group to create a predictor. Use the DescribeDatasetGroup operation to get the status.
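
For example, a minimal sketch (assuming `client` is a configured Aws::ForecastService::Client; the ARNs are hypothetical) that replaces the datasets and then waits for the group to become ACTIVE again:

dataset_group_arn = "arn:aws:forecast:us-west-2:123456789012:dataset-group/my_dataset_group"

client.update_dataset_group(
  dataset_group_arn: dataset_group_arn,
  dataset_arns: [
    "arn:aws:forecast:us-west-2:123456789012:dataset/my_target_time_series",
    "arn:aws:forecast:us-west-2:123456789012:dataset/my_related_time_series",
  ]
)

# poll until the dataset group can be used to create a predictor
sleep(10) until client.describe_dataset_group(dataset_group_arn: dataset_group_arn).status == "ACTIVE"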

Examples:

Request syntax with placeholder values


resp = client.update_dataset_group({
  dataset_group_arn: "Arn", # required
  dataset_arns: ["Arn"], # required
})

Options Hash (options):

  • :dataset_group_arn (required, String)

    The ARN of the dataset group.

  • :dataset_arns (required, Array<String>)

    An array of the Amazon Resource Names (ARNs) of the datasets to add to the dataset group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

Waiters poll an API operation until a resource enters a desired state.

Basic Usage

Waiters will poll until they are successful, until they fail by entering a terminal state, or until the maximum number of attempts is made.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Parameters:

  • waiter_name (Symbol)

    The name of the waiter. See #waiter_names for a full list of supported waiters.

  • params (Hash) (defaults to: {})

    Additional request parameters. See #waiter_names for a list of supported waiters and the client operation each one calls. The called operation determines the list of accepted parameters.

Yield Parameters:

Returns:

  • (Boolean)

    Returns true if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an unexpected error is encountered while polling for a resource.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

#waiter_namesArray<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter Name | Client Method | Default Delay | Default Max Attempts

Returns:

  • (Array<Symbol>)

    the list of supported waiters.