Amazon QuickSight
User Guide

Exploring Anomalies

After your anomaly detection is finished running, you can interactively explore the anomalies in your analysis.

Before you begin, look at your anomaly widget. It displays one of the following:

  • A completed run displays the narrative in the widget, with only the v-shaped on-visual menu icon at top right.

  • An alert icon with an exclamation point ( ! ) displays next to the on-visual menu icon. Refresh the page to clear this. Alternatively, you can choose the icon and then choose Update to dismiss it.

  • The Run now button displays. Refresh the page to clear this.

Important

You can't explore the anomalies while anomaly detection is running. However, you can do other work in your analysis, or even close the analysis, while it runs.

When you are ready to begin, open the anomaly exploration screen by choosing the v-shaped on-visual menu and then choosing Explore anomalies. Amazon QuickSight displays the anomalies that were detected during the most recent run of anomaly detection.

The main section of the screen is composed of the following components:

  • Configure alerts – if you are exploring anomalies from a dashboard, you can use this button to subscribe to alerts and to contribution analysis (if it's configured). Alerts can be set up for a level of severity (medium, high, and so on). You can get the top 5 alerts for Higher than expected, Lower than expected, or ALL. Dashboard readers can configure alerts for themselves. The Explore anomalies page doesn't display this button if you opened the page from an analysis.

  • Status – under the Anomalies heading, the status section displays information about the last run: how many metrics were processed, and how long ago. You can choose the link to see more details, for example how many metrics were ignored.

  • Anomalies – the center of the screen displays the anomalies for the most recent data point in the time series. One or more graphs display, each charting variations in a metric over time. To use a graph, select a point along its timeline. The currently selected point in time is highlighted in the graph, and its context menu offers the option to analyze contributions to the current metric. You can also drag the cursor along the timeline, without choosing a specific point, to display the metric value for that point in time.

  • Anomalies by Date – if you choose SHOW ANOMALIES BY DATE, another graph displays to show how many significant anomalies occurred at each point in time. You can see details for each bar on its context menu.

  • Timeline adjustment – each graph has a timeline adjuster tool below the dates, which you can use to compress or expand the timeline, or to choose a period of time to view.

  • Controls – the current settings display at the top of the workspace. You can expand this by using the double arrow icon on the far right. The following settings are available for anomaly exploration:

    • Anomaly threshold – sets how sensitive the anomaly detector is. Expect to see more anomalies when the threshold is set to Low, and fewer when it is set to High. Sensitivity is determined based on standard deviations of the anomaly score generated by the Random Cut Forest (RCF) algorithm.

    • Anomaly direction – sets display to show anomalies that are either higher or lower than expected.

    • Sorting method – choose the method that you want applied when sorting the anomalies. On the screen, the options appear in their preferred order. Each option is described in the following list, and a brief sketch showing how these sort keys can be computed follows the list of controls:

      • Weighted anomaly score – the anomaly score multiplied by the log of the absolute value of the difference between the actual value and the expected value. This is always a positive number.

      • Anomaly score – the actual anomaly score assigned to this data point.

      • Weighted difference from expected value (default) – the anomaly score multiplied by the difference between the actual value and the expected value.

      • Difference from expected value – the difference between the actual value and the expected value (actual − expected).

      • Actual value – the actual value with no formula applied.

    • Categories – one or more category settings appear at right, based on the fields in the category field well. Using these settings, you can limit the data that displays in the screen.
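
The controls described above amount to simple numeric operations on each anomaly's score, actual value, and expected value. The following Python sketch illustrates how the threshold, direction, and sorting settings could be applied to a list of detected anomalies. Only the sort-key formulas come from the descriptions above; the field names, the direction labels, and the standard-deviation cutoffs for Low, Medium, and High are assumptions made for illustration.

    import math
    from dataclasses import dataclass
    from statistics import pstdev

    @dataclass
    class Anomaly:
        # Hypothetical fields for illustration; QuickSight's internal
        # representation isn't documented here.
        timestamp: str
        actual: float
        expected: float
        anomaly_score: float  # score generated by the RCF algorithm

    def sort_key(a: Anomaly, method: str) -> float:
        """Sort keys matching the Sorting method options described above."""
        diff = a.actual - a.expected
        if method == "weighted_anomaly_score":
            # Anomaly score times the log of |actual - expected|.
            return a.anomaly_score * math.log(abs(diff)) if diff != 0 else 0.0
        if method == "anomaly_score":
            return a.anomaly_score
        if method == "weighted_difference_from_expected":  # the default
            return a.anomaly_score * diff
        if method == "difference_from_expected":
            return diff
        return a.actual  # actual value, no formula applied

    def explore(anomalies, threshold="Medium", direction="ALL",
                sort_method="weighted_difference_from_expected"):
        if not anomalies:
            return []
        # Assumed cutoffs, in standard deviations of the anomaly score.
        # A Low threshold surfaces more anomalies, a High threshold fewer.
        cutoffs = {"Low": 1.0, "Medium": 2.0, "High": 3.0}
        scores = [a.anomaly_score for a in anomalies]
        mean = sum(scores) / len(scores)
        sd = pstdev(scores) or 1.0
        selected = [a for a in anomalies
                    if (a.anomaly_score - mean) / sd >= cutoffs[threshold]]
        # Anomaly direction: keep only higher-than-expected or
        # lower-than-expected anomalies, or keep both (ALL).
        if direction == "HIGHER":
            selected = [a for a in selected if a.actual > a.expected]
        elif direction == "LOWER":
            selected = [a for a in selected if a.actual < a.expected]
        return sorted(selected, key=lambda a: sort_key(a, sort_method),
                      reverse=True)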

The left side of the screen displays contributors. It has the following components:

  • Narrative – at top left, a narrative displays to describe any change in the metrics.

  • Top contributors configuration – choose Configure to change the contributors and the date range to use in this section.

  • Sort by – sets the sort order applied to the results that display below. You can choose from the following options (a rough sketch of how such sort keys might be computed appears after this list):

    • Absolute difference

    • Contribution percentage (default)

    • Deviation from expected

    • Percentage difference

  • Top contributor results – displays the results of the top contributor analysis for the point in time selected on the timeline at right.
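
This guide doesn't spell out the formulas behind these contributor sort options, so the following Python sketch is only a rough illustration of how such sort keys might be computed for a single contributor. The definitions and parameter names are assumptions, not QuickSight's documented behavior.

    def contributor_sort_key(actual, expected, total_difference, method):
        """Illustrative sort keys for top contributors (assumed definitions)."""
        diff = actual - expected
        if method == "absolute_difference":
            return abs(diff)
        if method == "contribution_percentage":  # the default
            # Assumed: this contributor's share of the overall change.
            return abs(diff) / abs(total_difference) * 100 if total_difference else 0.0
        if method == "deviation_from_expected":
            return diff
        if method == "percentage_difference":
            return diff / expected * 100 if expected else 0.0
        raise ValueError(f"Unknown sort method: {method}")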

The following screenshot shows the anomaly exploration screen with all sections expanded.

Tip

If the left side of the screen doesn't display the Contributors section, go back to the analysis and choose Explore anomalies again.