Pipeline descriptions
Model training pipelines
This solution provides three pipelines to train ML models using Amazon SageMaker built-in algorithms. Deploying a training pipeline creates the following AWS resources:

- An AWS Lambda function to initiate the creation of Amazon SageMaker training, tuning, or Autopilot jobs (a minimal sketch of such a function follows this list)
- A Lambda function to automatically invoke the training Lambda function once the pipeline's CloudFormation template is deployed
- Amazon EventBridge rules to monitor the status of the training jobs
- An Amazon SNS topic to notify the solution's administrators about pipeline changes via email
- All required IAM roles
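For illustration only, the sketch below shows the kind of Lambda handler that could initiate a SageMaker training job with boto3. The event fields (job_name, image_uri, and so on) and the instance settings are hypothetical placeholders, not the solution's actual API contract:

```python
import boto3

sagemaker = boto3.client("sagemaker")

def handler(event, context):
    """Initiate a SageMaker training job from pipeline configuration.

    All event fields used here are illustrative placeholders, not the
    solution's actual API contract.
    """
    response = sagemaker.create_training_job(
        TrainingJobName=event["job_name"],
        AlgorithmSpecification={
            "TrainingImage": event["image_uri"],  # built-in algorithm image URI
            "TrainingInputMode": "File",
        },
        RoleArn=event["role_arn"],
        InputDataConfig=[{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": event["training_data_s3_uri"],
            }},
        }],
        OutputDataConfig={"S3OutputPath": event["output_s3_uri"]},
        ResourceConfig={
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )
    return response["TrainingJobArn"]
```

Tuning and Autopilot jobs can be created analogously through the boto3 create_hyper_parameter_tuning_job and create_auto_ml_job APIs.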
BYOM real-time inference pipelines
This solution allows you to deploy machine learning models, trained using Amazon SageMaker built-in algorithms or custom algorithms, on Amazon SageMaker endpoints that provide real-time inferences. Deploying a real-time inference pipeline creates the following AWS resources:

- An Amazon SageMaker model, endpoint configuration, and endpoint
- An AWS Lambda function that invokes the Amazon SageMaker endpoint and returns inferences on the passed data (a minimal sketch follows this list)
- An Amazon API Gateway connected to the Lambda function, which provides authentication and authorization to securely access the Amazon SageMaker endpoint
- All required IAM roles
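As a rough sketch of the inference Lambda function, the handler below forwards a payload to a SageMaker endpoint through the sagemaker-runtime API and returns the prediction to API Gateway. The request shape, the CSV content type, and the ENDPOINT_NAME environment variable are assumptions made for this example:

```python
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    """Forward the request body from API Gateway to the SageMaker endpoint.

    The payload shape and ENDPOINT_NAME variable are assumptions for this
    sketch, not the solution's actual interface.
    """
    payload = json.loads(event["body"])["payload"]
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="text/csv",
        Body=payload,
    )
    return {
        "statusCode": 200,
        "body": response["Body"].read().decode("utf-8"),
    }
```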
BYOM batch transform pipelines
The batch transform pipelines create transform jobs that use machine learning models trained with Amazon SageMaker built-in algorithms (or custom algorithms) to perform inferences on a batch of data. Deploying a batch transform pipeline creates the following AWS resources:

- An Amazon SageMaker model
- An AWS Lambda function that initiates the creation of the Amazon SageMaker transform job (a minimal sketch follows this list)
- All required IAM roles
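The following is a minimal sketch of such a Lambda handler, using the boto3 create_transform_job API; the event fields, content type, and instance settings are placeholders rather than the solution's actual configuration:

```python
import boto3

sagemaker = boto3.client("sagemaker")

def handler(event, context):
    """Create a SageMaker batch transform job for an existing model.

    All event fields used here are illustrative placeholders.
    """
    response = sagemaker.create_transform_job(
        TransformJobName=event["job_name"],
        ModelName=event["model_name"],
        TransformInput={
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": event["batch_data_s3_uri"],
            }},
            "ContentType": "text/csv",
            "SplitType": "Line",  # send the input file line by line
        },
        TransformOutput={"S3OutputPath": event["output_s3_uri"]},
        TransformResources={
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
        },
    )
    return response["TransformJobArn"]
```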
Custom algorithm image builder pipeline
The custom algorithm image builder pipeline allows you to use custom algorithms by building and registering Docker images in Amazon ECR. This pipeline is deployed in the orchestrator account, where the Amazon ECR repository is located. Deploying this pipeline creates the following AWS resources:

- An AWS CodePipeline with a source stage and a build stage, where the build stage uses AWS CodeBuild to build and register the custom images (a minimal sketch follows this list)
- All required IAM roles
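The pipeline's source stage is what normally triggers a build, but the following sketch shows how the pieces can be exercised programmatically with boto3; the pipeline and repository names are hypothetical:

```python
import boto3

codepipeline = boto3.client("codepipeline")
ecr = boto3.client("ecr")

# Hypothetical names used only for illustration.
PIPELINE_NAME = "custom-image-builder-pipeline"
REPOSITORY_NAME = "custom-algorithms"

# Manually start a run of the image builder pipeline.
execution = codepipeline.start_pipeline_execution(name=PIPELINE_NAME)
print("Started execution:", execution["pipelineExecutionId"])

# Once the build stage finishes, the image should appear in Amazon ECR.
images = ecr.describe_images(repositoryName=REPOSITORY_NAME)
for detail in images["imageDetails"]:
    print(detail.get("imageTags"), detail["imagePushedAt"])
```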
Model monitor pipeline
This solution uses Amazon SageMaker Model Monitor to continuously monitor the quality of deployed machine learning models. The solution supports Amazon SageMaker data quality, model quality, model bias, and model explainability (feature attribution) monitoring. The data from Model Monitor reports can be used to set alerts for violations generated by these monitors. This solution uses the following process to activate continuous model monitoring:
1. The deployed Amazon SageMaker endpoint captures data from incoming requests to the deployed model and the resulting model predictions. The data captured for each deployed model is stored in the S3 bucket location specified by data_capture_location in the API call, under the prefix <endpoint-name>/<model-variant-name>/<year>/<month>/<day>/<hour>/ (see the data capture sketch after this list).
2. For data quality, model bias, and model explainability monitoring, the solution creates baselines from the dataset that was used to train the deployed model. For model quality monitoring, the baseline dataset contains the model's predictions and the ground truth labels. The baseline datasets must be uploaded to the solution's Amazon S3 assets bucket, and the datasets' S3 keys and the baseline output Amazon S3 path must be provided in the API call or the mlops-config.json file.
3. For data quality and model quality, the baseline job computes metrics, suggests constraints for the metrics, and produces two files: constraints.json and statistics.json. For model bias and model explainability, analysis_config.json and analysis.json files are generated.
4. The JSON files generated by the baseline jobs are stored in the Amazon S3 bucket specified by baseline_job_output_location, under the prefix <baseline-job-name>/. These files are passed as input to the Amazon SageMaker monitors.
5. The solution creates a monitoring schedule job based on your configurations in the API call or mlops-config.json file. The monitoring job compares the real-time prediction data (captured in step 1) with the baseline created in step 2. The reports for each deployed model monitor pipeline are stored in the S3 bucket location specified by monitoring_output_location, under the prefix <endpoint-name>/<monitoring-job-name>/<year>/<month>/<day>/<hour>/ (see the monitoring schedule sketch after this list).
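To make step 1 concrete, here is a sketch of enabling data capture with the SageMaker Python SDK when deploying a model. All names and S3 URIs are placeholders; in this solution, the capture destination comes from data_capture_location:

```python
from sagemaker.model import Model
from sagemaker.model_monitor import DataCaptureConfig

# All names below are placeholders for illustration.
model = Model(
    image_uri="<algorithm-image-uri>",
    model_data="s3://example-bucket/model/model.tar.gz",
    role="<sagemaker-execution-role-arn>",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    data_capture_config=DataCaptureConfig(
        enable_capture=True,
        sampling_percentage=100,  # capture every request/response pair
        # The solution derives this URI from data_capture_location.
        destination_s3_uri="s3://example-bucket/datacapture",
    ),
)
```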
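Steps 2 through 5 correspond to the SageMaker Model Monitor APIs. The sketch below covers the data quality case under assumed names and S3 URIs: a baseline job produces statistics.json and constraints.json, which are then attached to a recurring monitoring schedule:

```python
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="<sagemaker-execution-role-arn>",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Steps 2-3: compute baseline statistics and suggested constraints from
# the training dataset, producing statistics.json and constraints.json.
monitor.suggest_baseline(
    baseline_dataset="s3://example-assets-bucket/baselines/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    # Corresponds to baseline_job_output_location in the solution's API.
    output_s3_uri="s3://example-bucket/baseline-output",
)

# Step 5: schedule a recurring job that compares captured data against
# the baseline and writes reports to the monitoring output location.
monitor.create_monitoring_schedule(
    monitor_schedule_name="example-data-quality-schedule",
    endpoint_input="<endpoint-name>",
    # Corresponds to monitoring_output_location in the solution's API.
    output_s3_uri="s3://example-bucket/monitor-output",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```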
Note
For more information, refer to Amazon SageMaker data quality, model quality, model bias, and model explainability (feature attribution) monitoring.