
Train a Machine Learning Model

This sample project demonstrates how to use Amazon SageMaker and AWS Step Functions to train a machine learning model and how to batch transform a test dataset. This sample project creates the following:

  • An AWS Lambda function

  • An Amazon Simple Storage Service (Amazon S3) bucket

  • An AWS Step Functions state machine

  • Related AWS Identity and Access Management (IAM) roles

In this project, Step Functions uses a Lambda function to seed an Amazon S3 bucket with a test dataset. It then trains a machine learning model and performs a batch transform, using the Amazon SageMaker service integration.

For more information about Amazon SageMaker and Step Functions service integrations, see Service Integrations with AWS Step Functions.

Note

This sample project may incur charges.

For new AWS users, a free usage tier is available. On this tier, services are free below a certain level of usage. For more information about AWS costs and the Free Tier, see Amazon SageMaker Pricing.

Create the State Machine and Provision Resources

  1. Open the Step Functions console and choose Create a state machine.

  2. Choose Sample Projects, and then choose Train a machine learning model.

    The state machine Code and Visual Workflow are displayed.

    
    (Workflow graph: Training model workflow.)
  3. Choose Next.

    The Deploy resources page is displayed, listing the resources that will be created. For this sample project, the resources include:

    • A Lambda function

    • An Amazon S3 bucket

    • A Step Functions state machine

    • Related IAM roles

  4. Choose Deploy Resources.

    Note

    It can take up to 10 minutes for these resources and related IAM permissions to be created. While the Deploy resources page is displayed, you can open the Stack ID link to see which resources are being provisioned.
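You can also watch the provisioning from the command line. The following Python (AWS SDK for Python, Boto3) sketch polls the AWS CloudFormation stack that the sample project creates; the stack name shown here is a placeholder, so substitute the Stack ID shown on the Deploy resources page.

import time

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder: copy the stack name or Stack ID from the Deploy resources page.
STACK_NAME = "StepFunctionsSample-TrainAndBatchTransform"

while True:
    stack = cloudformation.describe_stacks(StackName=STACK_NAME)["Stacks"][0]
    print(stack["StackStatus"])
    if stack["StackStatus"].endswith("_COMPLETE") or stack["StackStatus"].endswith("_FAILED"):
        break
    time.sleep(30)  # Provisioning can take up to 10 minutes.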

Start a New Execution

  1. Open the Step Functions console.

  2. On the State machines page, choose the TrainAndBatchTransformStateMachine state machine that was created by the sample project, and then choose Start execution.

  3. On the New execution page, enter an execution name (optional), and then choose Start Execution.

  4. (Optional) To help identify your execution, you can specify a name for it in the Enter an execution name box. If you don't enter a name, Step Functions generates a unique ID automatically.

    Note

    Step Functions allows you to create state machine, execution, and activity names that contain non-ASCII characters. These non-ASCII names don't work with Amazon CloudWatch. To ensure that you can track CloudWatch metrics, choose a name that uses only ASCII characters.

  5. (Optional) Go to the newly created state machine on the Step Functions Dashboard, and then choose New execution.

  6. When an execution is complete, you can select states on the Visual workflow and browse the Input and Output under Step details.
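You can also start and monitor an execution programmatically. The following Boto3 sketch starts an execution of the TrainAndBatchTransformStateMachine and polls its status; the state machine ARN below is a placeholder, so copy the actual ARN from the Step Functions console.

import time

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN: copy the real ARN from the Step Functions console.
STATE_MACHINE_ARN = (
    "arn:aws:states:us-west-2:123456789012:stateMachine:"
    "TrainAndBatchTransformStateMachine"
)

# Start the execution. The name is optional; Step Functions generates a
# unique ID if you omit it.
execution = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    name="train-and-transform-sample-1",
    input="{}",
)

# Poll until the execution finishes. The training and batch transform jobs
# can take several minutes.
while True:
    status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
    print(status)
    if status != "RUNNING":
        break
    time.sleep(60)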

Example State Machine Code

The state machine in this sample project integrates with Amazon SageMaker and AWS Lambda by passing parameters directly to those resources, and uses an Amazon S3 bucket for the training data source and output.

Browse through this example state machine to see how Step Functions controls Lambda and Amazon SageMaker.

For more information about how AWS Step Functions can control other AWS services, see Service Integrations with AWS Step Functions.

{ "StartAt": "Generate dataset", "States": { "Generate dataset": { "Resource": "arn:aws:lambda:us-west-2:123456789012:function:TrainAndBatchTransform-SeedingFunction-17RNSOTG97HPV", "Type": "Task", "Next": "Train model (XGBoost)" }, "Train model (XGBoost)": { "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync", "Parameters": { "AlgorithmSpecification": { "TrainingImage": "433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest", "TrainingInputMode": "File" }, "OutputDataConfig": { "S3OutputPath": "s3://trainandbatchtransform-s3bucket-1jn1le6gadwfz/models" }, "StoppingCondition": { "MaxRuntimeInSeconds": 86400 }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 30 }, "RoleArn": "arn:aws:iam::123456789012:role/TrainAndBatchTransform-SageMakerAPIExecutionRole-Y9IX3DLF6EUO", "InputDataConfig": [ { "DataSource": { "S3DataSource": { "S3DataDistributionType": "ShardedByS3Key", "S3DataType": "S3Prefix", "S3Uri": "s3://trainandbatchtransform-s3bucket-1jn1le6gadwfz/csv/train.csv" } }, "ChannelName": "train", "ContentType": "text/csv" } ], "HyperParameters": { "objective": "reg:logistic", "eval_metric": "rmse", "num_round": "5" }, "TrainingJobName.$": "$$.Execution.Name" }, "Type": "Task", "Next": "Save Model" }, "Save Model": { "Parameters": { "PrimaryContainer": { "Image": "433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest", "Environment": {}, "ModelDataUrl.$": "$.ModelArtifacts.S3ModelArtifacts" }, "ExecutionRoleArn": "arn:aws:iam::123456789012:role/TrainAndBatchTransform-SageMakerAPIExecutionRole-Y9IX3DLF6EUO", "ModelName.$": "$.TrainingJobName" }, "Resource": "arn:aws:states:::sagemaker:createModel", "Type": "Task", "Next": "Batch transform" }, "Batch transform": { "Type": "Task", "Resource": "arn:aws:states:::sagemaker:createTransformJob.sync", "Parameters": { "ModelName.$": "$$.Execution.Name", "TransformInput": { "CompressionType": "None", "ContentType": "text/csv", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://trainandbatchtransform-s3bucket-1jn1le6gadwfz/csv/test.csv" } } }, "TransformOutput": { "S3OutputPath": "s3://trainandbatchtransform-s3bucket-1jn1le6gadwfz/output" }, "TransformResources": { "InstanceCount": 1, "InstanceType": "ml.m4.xlarge" }, "TransformJobName.$": "$$.Execution.Name" }, "End": true } } }

For information about how to configure IAM when using Step Functions with other AWS services, see IAM Policies for Integrated Services.

IAM Example

These example AWS Identity and Access Management (IAM) policies generated by the sample project grant the least privilege necessary to execute the state machine and related resources. We recommend that you include only those permissions that are necessary in your IAM policies. The first policy allows writing Amazon CloudWatch metrics and logs, reading and writing objects in the Amazon S3 bucket, and pulling the training image from Amazon ECR.

{ "Version": "2012-10-17", "Statement": [ { "Action": [ "cloudwatch:PutMetricData", "logs:CreateLogStream", "logs:PutLogEvents", "logs:CreateLogGroup", "logs:DescribeLogStreams", "s3:GetObject", "s3:PutObject", "s3:ListBucket", "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage" ], "Resource": "*", "Effect": "Allow" } ] }

The following policy allows the Lambda function to seed the Amazon S3 bucket with sample data.

{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::trainandbatchtransform-s3bucket-1jn1le6gadwfz/*", "Effect": "Allow" } ] }

For information about how to configure IAM when using Step Functions with other AWS services, see IAM Policies for Integrated Services.