Step 2: Provision the pipeline and train or deploy the ML model
Use the following procedure to provision the pipeline and train or deploy your ML model. If you are using API provisioning, the body of the API call must contain the information specified in API operations. API endpoints require IAM authentication. For more information, refer to How do I enable IAM authentication for API Gateway APIs?
Note
If you are using API provisioning to launch the stack, you must make a POST request to the API Gateway endpoint specified in the stack's output. The path is structured as <apigateway_endpoint>/provisionpipeline (see the first sketch following this note).
If you are using Git provisioning to launch the stack, you must create a file named mlops-config.json and commit the file to the repository's main branch (see the second sketch following this note).
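Because the API enforces IAM authentication, the /provisionpipeline call must be SigV4-signed. The following is a minimal sketch, assuming Python with botocore installed; the endpoint URL, AWS Region, and body fields are placeholders, and the actual request body must carry the information listed in API operations:

```python
import json
import urllib.request

from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session


def signed_post(endpoint, path, payload, region="us-east-1"):
    """POST a JSON payload to the solution's API, signed with SigV4 (IAM auth)."""
    body = json.dumps(payload)
    aws_request = AWSRequest(
        method="POST",
        url=endpoint + path,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Sign against the execute-api service so the API Gateway IAM
    # authorizer accepts the request.
    SigV4Auth(Session().get_credentials(), "execute-api", region).add_auth(aws_request)
    http_request = urllib.request.Request(
        aws_request.url,
        data=body.encode("utf-8"),
        headers=dict(aws_request.headers),
        method="POST",
    )
    with urllib.request.urlopen(http_request) as response:
        return json.loads(response.read())


# Placeholders: take the endpoint from the stack's output and build the
# body from the fields listed in API operations ("pipeline_type" here is
# only an illustrative assumption).
endpoint = "https://abcd1234.execute-api.us-east-1.amazonaws.com/prod"
result = signed_post(endpoint, "/provisionpipeline", {"pipeline_type": "..."})
print(result)  # the response includes the pipeline_id referenced later
```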
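For Git provisioning, mlops-config.json carries the same provisioning information in the repository. A hypothetical sketch follows; the field names and values are assumptions, so take the exact schema for your pipeline type from API operations:

```json
{
  "pipeline_type": "byom_realtime_builtin",
  "model_name": "my-model",
  "model_artifact_location": "path/to/model.tar.gz",
  "inference_instance": "ml.m5.large"
}
```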
- Monitor the progress of the pipeline by calling <apigateway_endpoint>/pipelinestatus (see the first sketch after this list). The pipeline_id is displayed in the response of the initial /provisionpipeline API call.
- Run the provisioned pipeline by uploading the model artifacts to the Amazon S3 bucket specified in the output of the pipeline's CloudFormation stack (see the second sketch after this list).
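Checking the status uses the same signed POST mechanics. A short sketch, reusing the hypothetical signed_post helper, endpoint, and result from the provisioning sketch above; the pipeline_id field name is likewise an assumption:

```python
# Assumes signed_post, endpoint, and result from the provisioning sketch.
pipeline_id = result["pipeline_id"]  # hypothetical response field name
status = signed_post(endpoint, "/pipelinestatus", {"pipeline_id": pipeline_id})
print(status)
```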
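Uploading the model artifact is an ordinary Amazon S3 upload. A minimal sketch assuming boto3; the local file name, bucket, and key are placeholders to replace with the values from the pipeline stack's output:

```python
import boto3

s3 = boto3.client("s3")
# Replace the bucket and key with the values from the stack's output.
s3.upload_file(
    Filename="model.tar.gz",
    Bucket="assets-bucket-from-stack-output",
    Key="model.tar.gz",
)
```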
When the pipeline provisioning is complete, you will receive another apigateway_endpoint as the inference endpoint of the deployed model.
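Invoking the deployed model follows the same pattern as the other calls. A sketch reusing the hypothetical signed_post helper from above; the /inference path and the body shape are assumptions to verify against API operations:

```python
# Assumes the signed_post helper defined in the provisioning sketch.
inference_endpoint = "https://wxyz5678.execute-api.us-east-1.amazonaws.com/prod"
prediction = signed_post(
    inference_endpoint,
    "/inference",
    {"payload": "1.0,2.0,3.0", "content_type": "text/csv"},  # illustrative body
)
print(prediction)
```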