Evaluate, explain, and detect bias in models

Amazon SageMaker offers features to improve your machine learning (ML) models by detecting potential bias and by helping to explain the predictions that your models make from tabular, computer vision, natural language processing, or time series datasets. It helps you identify various types of bias in pre-training data, as well as post-training bias that can emerge during model training or when the model is in production. You can also evaluate a language model for model quality and responsibility metrics using foundation model evaluations.

The following topics describe how to evaluate, explain, and detect bias with Amazon SageMaker.