Monitor model quality

Model quality monitoring jobs monitor the performance of a model by comparing the predictions that the model makes with the actual Ground Truth labels that the model attempts to predict. To do this, model quality monitoring merges data that is captured from real-time or batch inference with actual labels that you store in an Amazon S3 bucket, and then compares the predictions with the actual labels.

To measure model quality, Model Monitor uses metrics that depend on the ML problem type. For example, if your model addresses a regression problem, one of the metrics evaluated is mean squared error (MSE). For information about all of the metrics used for the different ML problem types, see Model Quality Metrics.
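
For a concrete sense of that metric, the sketch below computes MSE directly in plain Python; the prediction and label values are made up for illustration.

```python
# Plain-Python sketch of mean squared error (MSE): the averaged squared
# difference between predictions and ground truth labels. Values are made up.
predictions = [2.5, 0.0, 2.1, 7.8]
ground_truth = [3.0, -0.5, 2.0, 8.0]

mse = sum((p - y) ** 2 for p, y in zip(predictions, ground_truth)) / len(ground_truth)
print(f"mse: {mse}")  # 0.1375
```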

Model quality monitoring follows the same steps as data quality monitoring, but adds one step: merging the actual labels from Amazon S3 with the predictions captured from the real-time inference endpoint or batch transform job. To monitor model quality, follow these steps:

  • Enable data capture. This captures inference input and output from a real-time inference endpoint or batch transform job and stores the data in Amazon S3. For more information, see Capture data. The first sketch after this list shows this step in code.

  • Create a baseline. In this step, you run a baseline job that compares predictions from the model with Ground Truth labels in a baseline dataset. The baseline job automatically creates baseline statistical rules and constraints that define thresholds against which the model performance is evaluated. For more information, see Create a Model Quality Baseline. The second sketch after this list shows a baseline job.

  • Define and schedule model quality monitoring jobs. For specific information and code samples of model quality monitoring jobs, see Schedule Model Quality Monitoring Jobs. For general information about monitoring jobs, see Schedule monitoring jobs. The third sketch after this list shows a schedule definition.

  • Ingest Ground Truth labels that Model Monitor merges with captured prediction data from a real-time inference endpoint or batch transform job. For more information, see Ingest Ground Truth Labels and Merge Them With Predictions. The fourth sketch after this list shows the expected record format.

  • Integrate model quality monitoring with Amazon CloudWatch. For more information, see Model Quality CloudWatch Metrics. The fifth sketch after this list shows an example alarm.

  • Interpret the results of a monitoring job. For more information, see Interpret results.

  • Use SageMaker Studio to enable model quality monitoring and visualize results. For more information, see Visualize results for real-time endpoints in Amazon SageMaker Studio.
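
The sketches below walk through the coded steps with the SageMaker Python SDK. They are minimal illustrations, not complete solutions: endpoint names, bucket names, column names, and thresholds are placeholder assumptions to replace with your own. First, enabling data capture at deployment time:

```python
from sagemaker.model_monitor import DataCaptureConfig

# Capture all request and response payloads to S3 (bucket/prefix are placeholders).
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri="s3://amzn-s3-demo-bucket/data-capture",
)

# `model` is assumed to be an existing sagemaker.model.Model. Attaching the
# capture config at deployment turns on capture for the endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="my-endpoint",
    data_capture_config=data_capture_config,
)
```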
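Second, a baseline job. This sketch assumes the baseline dataset is a CSV with a `prediction` column and a `label` column; the job writes suggested constraints to the output S3 URI:

```python
from sagemaker import get_execution_role
from sagemaker.model_monitor import ModelQualityMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

model_quality_monitor = ModelQualityMonitor(
    role=get_execution_role(),  # an IAM role with SageMaker permissions
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=1800,
)

# The baseline dataset must contain both the model's predictions and the
# actual labels; column names and S3 URIs here are placeholders.
baseline_job = model_quality_monitor.suggest_baseline(
    job_name="model-quality-baseline",
    baseline_dataset="s3://amzn-s3-demo-bucket/baseline/baseline.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://amzn-s3-demo-bucket/baseline-results",
    problem_type="Regression",
    inference_attribute="prediction",
    ground_truth_attribute="label",
)
baseline_job.wait(logs=False)
```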
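Third, an hourly monitoring schedule that reuses the suggested constraints. This continues the previous sketches: `model_quality_monitor` comes from the baseline step, and the endpoint name and ground truth S3 prefix are made-up placeholders:

```python
from sagemaker.model_monitor import CronExpressionGenerator, EndpointInput

model_quality_monitor.create_monitoring_schedule(
    monitor_schedule_name="model-quality-schedule",
    endpoint_input=EndpointInput(
        endpoint_name="my-endpoint",
        destination="/opt/ml/processing/input_data",
        inference_attribute="prediction",
    ),
    # Prefix where you will upload Ground Truth labels (see the next sketch).
    ground_truth_input="s3://amzn-s3-demo-bucket/ground-truth",
    problem_type="Regression",
    output_s3_uri="s3://amzn-s3-demo-bucket/model-quality-reports",
    constraints=model_quality_monitor.latest_baselining_job.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)
```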
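Fourth, ingesting Ground Truth labels. Labels are uploaded as JSON Lines records whose `eventId` matches the inference ID of the captured prediction, under an hourly `yyyy/MM/dd/HH` prefix. A sketch of the upload (the event ID, label value, and bucket are made up):

```python
import json
from datetime import datetime, timezone

import boto3

# One record per labeled prediction. The eventId must match the inference ID
# recorded in the captured data so the merge job can join label to prediction.
records = [
    {
        "groundTruthData": {"data": "0.85", "encoding": "CSV"},
        "eventMetadata": {"eventId": "aaaa-bbbb-cccc"},
        "eventVersion": "0",
    }
]

# Labels are expected under an hourly yyyy/MM/dd/HH prefix beneath the
# ground_truth_input prefix passed when the schedule was created.
now = datetime.now(timezone.utc)
key = f"ground-truth/{now:%Y/%m/%d/%H}/labels.jsonl"
boto3.client("s3").put_object(
    Bucket="amzn-s3-demo-bucket",
    Key=key,
    Body="\n".join(json.dumps(r) for r in records).encode("utf-8"),
)
```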
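Fifth, because the schedule above enables CloudWatch metrics, you can alarm on a quality metric. The namespace and dimension names below are what real-time endpoint model quality metrics are published under, to the best of this sketch's assumptions; verify them, along with the metric name and threshold, against Model Quality CloudWatch Metrics:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the monitored endpoint's hourly MSE exceeds a chosen threshold.
# Namespace, dimensions, and threshold are assumptions to verify for your setup.
cloudwatch.put_metric_alarm(
    AlarmName="model-quality-mse-breach",
    Namespace="aws/sagemaker/Endpoints/model-metrics",
    MetricName="mse",
    Dimensions=[
        {"Name": "Endpoint", "Value": "my-endpoint"},
        {"Name": "MonitoringSchedule", "Value": "model-quality-schedule"},
    ],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
)
```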