
Step 2: Provision the pipeline and train or deploy the ML model

Use the following procedure to provision the pipeline and train or deploy your ML model. If you are using API provisioning, the body of the API call must contain the information specified in API operations. API endpoints require IAM authentication. For more information, refer to the How do I enable IAM authentication for API Gateway APIs? support topic and the Signing AWS requests with Signature Version 4 topic in the AWS General Reference.

Note

If you are using API provisioning to launch the stack, you must make a POST request to the API Gateway endpoint specified in the stack’s output. The path will be structured as <apigateway_endpoint>/provisionpipeline.
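For example, the following sketch sends a Signature Version 4-signed POST request to the provisioning endpoint from Python. The endpoint URL, Region, and request-body keys are placeholder assumptions: take the real endpoint from the stack's output and the required body attributes for your pipeline type from API operations.

import json

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholder values: take the endpoint from the stack's output and use
# the Region the stack is deployed in.
endpoint = "https://abcd1234.execute-api.us-east-1.amazonaws.com/prod/provisionpipeline"
region = "us-east-1"

# Illustrative body only; the required keys for each pipeline type are
# listed in the API operations section of this guide.
body = json.dumps({
    "pipeline_type": "byom_realtime_builtin",
    "model_name": "my-model",
})

# Sign the request with Signature Version 4 using the current IAM credentials.
credentials = boto3.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="POST", url=endpoint, data=body,
                     headers={"Content-Type": "application/json"})
SigV4Auth(credentials, "execute-api", region).add_auth(request)

response = requests.post(endpoint, data=body, headers=dict(request.headers))
print(response.status_code, response.text)  # the response includes the pipeline_id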

If you are using Git provisioning to launch the stack, you must create a file named mlops-config.json and commit the file to the repository’s main branch.
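A minimal illustration of that file is shown below. The keys are placeholders only; the full set of supported attributes for each pipeline type is listed in API operations.

{
  "pipeline_type": "byom_realtime_builtin",
  "model_name": "my-model",
  "model_artifact_location": "model.tar.gz"
}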

  1. Monitor the progress of the pipeline by calling the <apigateway_endpoint>/pipelinestatus endpoint. The pipeline_id is returned in the response to the initial /provisionpipeline API call (see the status-check sketch after this list).

  2. Run the provisioned pipeline by uploading the model artifacts to the Amazon S3 bucket specified in the output of the pipeline's CloudFormation stack (see the upload sketch after this list).

    When pipeline provisioning is complete, you will receive another apigateway_endpoint that serves as the inference endpoint for the deployed model.
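The status check can be scripted the same way as the provisioning call. The sketch below assumes a SigV4-signed POST with the pipeline_id in the request body, mirroring the /provisionpipeline call; the endpoint URL, Region, and pipeline_id value are placeholders, and the exact request shape is defined in API operations.

import json

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholder endpoint and Region; take both from the stack's output.
endpoint = "https://abcd1234.execute-api.us-east-1.amazonaws.com/prod/pipelinestatus"
region = "us-east-1"

# Use the pipeline_id returned by the initial /provisionpipeline call.
body = json.dumps({"pipeline_id": "<pipeline_id>"})

# Sign and send the status request with the current IAM credentials.
credentials = boto3.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="POST", url=endpoint, data=body,
                     headers={"Content-Type": "application/json"})
SigV4Auth(credentials, "execute-api", region).add_auth(request)

print(requests.post(endpoint, data=body, headers=dict(request.headers)).json())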
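Uploading the model artifact is a standard Amazon S3 upload. In this sketch the bucket name and artifact key are placeholders: use the bucket named in the pipeline stack's CloudFormation outputs and the artifact path your pipeline type expects.

import boto3

# Placeholder names: take the bucket from the pipeline stack's
# CloudFormation outputs; the key is the artifact path the pipeline reads.
s3 = boto3.client("s3")
s3.upload_file("model.tar.gz", "<pipeline-artifact-bucket>", "model.tar.gz")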