
[ aws . forecast ]

get-accuracy-metrics

Description

Provides metrics on the accuracy of the models that were trained by the CreatePredictor operation. Use metrics to see how well the model performed and to decide whether to use the predictor to generate a forecast.

Metrics are generated for each backtest window evaluated. For more information, see EvaluationParameters.

The parameters of the filling method determine which items contribute to the metrics. If zero is specified, all items contribute. If nan is specified, only those items that have complete data in the range being evaluated contribute. For more information, see FeaturizationMethod.

For an example of how to train a model and review metrics, see getting-started.

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

Synopsis

  get-accuracy-metrics
--predictor-arn <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Options

--predictor-arn (string)

The Amazon Resource Name (ARN) of the predictor to get metrics for.

--cli-input-json (string) Performs the service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value, as the string will be taken literally.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
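As a sketch of the --generate-cli-skeleton / --cli-input-json round trip: this command's only input parameter is --predictor-arn, so the input skeleton reduces to a single PredictorArn field. Assuming that shape, an input file can be produced directly (the ARN below is a made-up example):

```python
import json

# Hypothetical input matching the skeleton shape for get-accuracy-metrics.
# The ARN is invented sample data, not a real predictor.
skeleton = {
    "PredictorArn": "arn:aws:forecast:us-west-2:123456789012:predictor/my_predictor"
}

# Write the JSON to a file that could then be passed to the CLI as:
#   aws forecast get-accuracy-metrics --cli-input-json file://input.json
with open("input.json", "w") as f:
    json.dump(skeleton, f, indent=2)

print(json.dumps(skeleton))
```

Supplying the ARN in the file this way is equivalent to passing --predictor-arn on the command line; if both are given, the command-line value wins.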


Output

PredictorEvaluationResults -> (list)

An array of results from evaluating the predictor.

(structure)

The results of evaluating an algorithm. Returned as part of the GetAccuracyMetrics response.

AlgorithmArn -> (string)

The Amazon Resource Name (ARN) of the algorithm that was evaluated.

TestWindows -> (list)

The array of test windows used for evaluating the algorithm. The NumberOfBacktestWindows from the EvaluationParameters object determines the number of windows in the array.

(structure)

The metrics for a time range within the evaluation portion of a dataset. This object is part of the EvaluationResult object.

The TestWindowStart and TestWindowEnd parameters are determined by the BackTestWindowOffset parameter of the EvaluationParameters object.

TestWindowStart -> (timestamp)

The timestamp that defines the start of the window.

TestWindowEnd -> (timestamp)

The timestamp that defines the end of the window.

ItemCount -> (integer)

The number of data points within the window.

EvaluationType -> (string)

The type of evaluation.

  • SUMMARY - The average metrics across all windows.
  • COMPUTED - The metrics for the specified window.

Metrics -> (structure)

Provides metrics used to evaluate the performance of a predictor. This object is part of the WindowSummary object.

RMSE -> (double)

The root mean square error (RMSE).

WeightedQuantileLosses -> (list)

An array of weighted quantile losses. Quantiles divide a probability distribution into regions of equal probability. The distribution in this case is the loss function.

(structure)

The weighted loss value for a quantile. This object is part of the Metrics object.

Quantile -> (double)

The quantile. Quantiles divide a probability distribution into regions of equal probability. For example, if the distribution was divided into 5 regions of equal probability, the quantiles would be 0.2, 0.4, 0.6, and 0.8.

LossValue -> (double)

The difference between the predicted value and actual value over the quantile, weighted (normalized) by dividing by the sum over all quantiles.
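To make the output shape above concrete, here is a minimal sketch of picking the SUMMARY metrics out of a response. The algorithm ARN and all numbers are invented sample data, not real command output:

```python
# Hypothetical GetAccuracyMetrics response, using the field names
# documented above. All values are made-up sample data.
sample_response = {
    "PredictorEvaluationResults": [
        {
            "AlgorithmArn": "arn:aws:forecast:::algorithm/Deep_AR_Plus",
            "TestWindows": [
                {
                    "EvaluationType": "SUMMARY",
                    "Metrics": {
                        "RMSE": 12.3,
                        "WeightedQuantileLosses": [
                            {"Quantile": 0.1, "LossValue": 0.07},
                            {"Quantile": 0.5, "LossValue": 0.11},
                            {"Quantile": 0.9, "LossValue": 0.05},
                        ],
                    },
                }
            ],
        }
    ]
}

# For each evaluated algorithm, pull the SUMMARY window (the average
# across all backtest windows) and report its metrics.
for result in sample_response["PredictorEvaluationResults"]:
    summary = next(
        w for w in result["TestWindows"] if w["EvaluationType"] == "SUMMARY"
    )
    metrics = summary["Metrics"]
    print(result["AlgorithmArn"], "RMSE:", metrics["RMSE"])
    for wql in metrics["WeightedQuantileLosses"]:
        print(f"  wQL[{wql['Quantile']}] = {wql['LossValue']}")
```

A real response may also contain COMPUTED windows (one per backtest window, with TestWindowStart, TestWindowEnd, and ItemCount populated); the same filtering pattern selects those by changing the EvaluationType check.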