SageMaker Roles

Amazon SageMaker performs operations on your behalf using other AWS services. You must grant SageMaker permissions to use these services and the resources they act upon. You grant SageMaker these permissions using an AWS Identity and Access Management (IAM) execution role. For more information on IAM roles, see IAM roles.

To create and use an execution role, you can use the following procedures.

Create execution role

Use the following procedure to create an execution role with the IAM managed policy, AmazonSageMakerFullAccess, attached. If your use case requires more granular permissions, use other sections on this page to create an execution role that meets your business needs. You can create an execution role using the SageMaker console or the AWS CLI.

Important

The IAM managed policy, AmazonSageMakerFullAccess, used in the following procedure only grants the execution role permission to perform certain Amazon S3 actions on buckets or objects with SageMaker, Sagemaker, sagemaker, or aws-glue in the name. To learn how to add an additional policy to an execution role to grant it access to other Amazon S3 buckets and objects, see Add Additional Amazon S3 Permissions to a SageMaker Execution Role.

Note

You can create an execution role directly when you create a SageMaker domain or a notebook instance.

To create a new execution role from the SageMaker console

  1. Open the IAM console at https://console.aws.amazon.com/iam/.

  2. Choose Roles and then choose Create role.

  3. Keep AWS service as the Trusted entity type and then use the down arrow to find SageMaker in Use cases for other AWS services.

  4. Choose SageMaker – Execution and then choose Next.

  5. The IAM managed policy, AmazonSageMakerFullAccess, is automatically attached to the role. To see the permissions included in this policy, choose the plus (+) sign next to the policy name. Choose Next.

  6. Enter a Role name and a Description.

  7. (Optional) Add additional permissions and tags to the role.

  8. Choose Create role.

  9. In the Roles section of the IAM console, find the role you just created. If needed, use the text box to search for the role using the role name.

  10. On the role summary page, make note of the ARN.

To create a new execution role from the AWS CLI

Before you create an execution role using the AWS CLI, make sure to update and configure it by following the instructions in AWS CLI Prerequisites, then continue with the instructions in Onboard from the AWS CLI.
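
While the linked procedures use the AWS CLI, the same two steps can also be performed through the SDKs. The following is a minimal boto3 sketch that creates a role with the SageMaker trust policy and attaches the AmazonSageMakerFullAccess managed policy; the role name is a placeholder.

import json

import boto3

iam = boto3.client("iam")

# Trust policy that allows the SageMaker service principal to assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# "MySageMakerExecutionRole" is a placeholder name; choose your own.
role = iam.create_role(
    RoleName="MySageMakerExecutionRole",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    Description="Execution role for SageMaker",
)

# Attach the same managed policy that the console procedure attaches.
iam.attach_role_policy(
    RoleName="MySageMakerExecutionRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
)

# Note the ARN, as in the console procedure.
print(role["Role"]["Arn"])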

Once you have created an execution role, you can associate it with a SageMaker domain, a user profile, or with a Jupyter notebook instance.

You can also pass the ARN of an execution role to your API call. For example, using the Amazon SageMaker Python SDK, you can pass the ARN of your execution role to an estimator. In the code sample that follows, we create an estimator using the XGBoost algorithm container and pass the ARN of the execution role as a parameter. For the full example on GitHub, see Customer Churn Prediction with XGBoost.

import sagemaker, boto3
from sagemaker import image_uris

sess = sagemaker.Session()
region = sess.boto_region_name
bucket = sess.default_bucket()
prefix = "sagemaker/DEMO-xgboost-churn"

container = sagemaker.image_uris.retrieve("xgboost", region, "1.7-1")

xgb = sagemaker.estimator.Estimator(
    container,
    execution-role-ARN,
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path="s3://{}/{}/output".format(bucket, prefix),
    sagemaker_session=sess,
)
...

Add Additional Amazon S3 Permissions to a SageMaker Execution Role

When you use a SageMaker feature with resources in Amazon S3, such as input data, the execution role you specify in your request (for example CreateTrainingJob) is used to access these resources.

If you attach the IAM managed policy, AmazonSageMakerFullAccess, to an execution role, that role has permission to perform certain Amazon S3 actions on buckets or objects with SageMaker, Sagemaker, sagemaker, or aws-glue in the name. It also has permission to perform the following actions on any Amazon S3 resource:

"s3:CreateBucket", "s3:GetBucketLocation", "s3:ListBucket", "s3:ListAllMyBuckets", "s3:GetBucketCors", "s3:PutBucketCors"

To give an execution role permissions to access one or more specific buckets in Amazon S3, you can attach a policy similar to the following to the role. This policy grants an IAM role permission to perform all actions that AmazonSageMakerFullAccess allows but restricts this access to the buckets DOC-EXAMPLE-BUCKET1 and DOC-EXAMPLE-BUCKET2. Refer to the security documentation for the specific SageMaker feature you are using to learn more about the Amazon S3 permissions required for that feature.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListAllMyBuckets",
        "s3:GetBucketCors",
        "s3:PutBucketCors"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketAcl",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET1",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET2"
      ]
    }
  ]
}
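
To attach a bucket-scoped policy like the preceding one to an existing role, one option is to add it as an inline policy. The following boto3 sketch assumes a placeholder role name and reuses the DOC-EXAMPLE-BUCKET1 placeholder; adjust the statements to the permissions your workload actually needs.

import json

import boto3

iam = boto3.client("iam")

# Bucket-scoped S3 statements; the bucket and role names are placeholders.
s3_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
            ],
            "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET1"],
        },
    ],
}

# Add the statements to the role as an inline policy.
iam.put_role_policy(
    RoleName="MySageMakerExecutionRole",
    PolicyName="SageMakerS3BucketAccess",
    PolicyDocument=json.dumps(s3_access_policy),
)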

Get execution role

You can use the SageMaker console or the AWS CLI to retrieve the ARN of the execution role attached to a SageMaker domain, a user profile, or a notebook instance.

  • To find the ARN of the IAM execution role attached to a SageMaker domain, see View and edit domains.

  • To find the ARN of the IAM execution role attached to a user profile, see View User Profiles and User Profile Details.

  • To find the ARN of the IAM execution role attached to a notebook instance:

    1. Open the Amazon SageMaker console at https://console.aws.amazon.com/sagemaker/.

    2. In the left navigation pane, choose Notebook, then choose Notebook instances.

    3. From the list of notebooks, select the notebook that you want to view.

    4. The ARN is in the Permissions and encryption section.

Alternatively, Amazon SageMaker Python SDK users can retrieve the ARN of the execution role attached to their user profile or a notebook instance by running the following code:

import sagemaker

sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()

Note

The execution role is available only when running a notebook within SageMaker. If you run get_execution_role in a notebook not on SageMaker, expect a "region" error.
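
If your code sometimes runs outside SageMaker, a common workaround is to fall back to looking the role up by name. The following is a minimal sketch, assuming a placeholder role name and that your local AWS credentials and default Region are configured.

import boto3
import sagemaker

try:
    # Available only when running inside SageMaker.
    role = sagemaker.get_execution_role()
except ValueError:
    # Outside SageMaker, resolve the ARN from a known role name instead.
    # "MySageMakerExecutionRole" is a placeholder.
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="MySageMakerExecutionRole")["Role"]["Arn"]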

Passing Roles

Passing a role between services is a common action within SageMaker. You can find more details on Actions, Resources, and Condition Keys for SageMaker in the IAM User Guide.

You pass the role (iam:PassRole) when making these API calls: CreateAutoMLJob, CreateCompilationJob, CreateDomain, CreateFeatureGroup, CreateFlowDefinition, CreateHyperParameterTuningJob, CreateImage, CreateLabelingJob, CreateModel, CreateMonitoringSchedule, CreateNotebookInstance, CreateProcessingJob, CreateTrainingJob, CreateUserProfile, RenderUiTemplate, UpdateImage, and UpdateNotebookInstance.

Attach the following trust policy to the IAM role. It grants the SageMaker service principal permission to assume the role, and it is the same for all of the execution roles:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sagemaker.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
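
If you need to add this trust relationship to a role that already exists, you can replace the role's trust policy. The following is a minimal boto3 sketch with a placeholder role name.

import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Overwrites the role's existing trust policy; the role name is a placeholder.
iam.update_assume_role_policy(
    RoleName="MySageMakerExecutionRole",
    PolicyDocument=json.dumps(trust_policy),
)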

The permissions that you need to grant to the role vary depending on the API that you call. The following sections explain these permissions.

Note

Instead of managing permissions by crafting a permission policy, you can use the AWS-managed AmazonSageMakerFullAccess permission policy. The permissions in this policy are fairly broad, to allow for any actions you might want to perform in SageMaker. For a listing of the policy including information about the reasons for adding many of the permissions, see AWS managed policy: AmazonSageMakerFullAccess. If you prefer to create custom policies and manage permissions to scope the permissions only to the actions you need to perform with the execution role, see the following topics.

Important

If you're running into issues, see Troubleshooting Amazon SageMaker Identity and Access.

For more information about IAM roles, see IAM Roles in the IAM User Guide.

CreateAutoMLJob API: Execution Role Permissions

For an execution role that you can pass in a CreateAutoMLJob API request, you can attach the following minimum permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:DescribeEndpointConfig",
        "sagemaker:DescribeModel",
        "sagemaker:InvokeEndpoint",
        "sagemaker:ListTags",
        "sagemaker:DescribeEndpoint",
        "sagemaker:CreateModel",
        "sagemaker:CreateEndpointConfig",
        "sagemaker:CreateEndpoint",
        "sagemaker:DeleteModel",
        "sagemaker:DeleteEndpointConfig",
        "sagemaker:DeleteEndpoint",
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

If you specify a private VPC for your AutoML job, add the following permissions:

{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}

If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }

If you specify a KMS key in the output configuration of your AutoML job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }

If you specify a volume KMS key in the resource configuration of your AutoML job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
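
With a role like this in place, you pass its ARN as RoleArn when you create the job. The following boto3 sketch shows where the role fits in a CreateAutoMLJob request; the job name, S3 URIs, target column, and role ARN are placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="example-automl-job",
    InputDataConfig=[
        {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://DOC-EXAMPLE-BUCKET1/automl/train/",
                }
            },
            # Placeholder name of the column to predict.
            "TargetAttributeName": "target",
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://DOC-EXAMPLE-BUCKET1/automl/output/"},
    # The execution role described in this section.
    RoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
)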

CreateDomain API: Execution Role Permissions

The execution role for domains with IAM Identity Center and the user/execution role for IAM domains need the following permissions when you pass an AWS KMS customer managed key as the KmsKeyId in the CreateDomain API request. The permissions are enforced during the CreateApp API call.

For an execution role that you can pass in the CreateDomain API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:region:account-id:key/kms-key-id"
    }
  ]
}

Alternatively, if the permissions are specified in a KMS policy, you can attach the following policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::account-id:role/ExecutionRole"
        ]
      },
      "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }
  ]
}
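
For reference, the key is supplied as KmsKeyId and the role as the default ExecutionRole when the domain is created. The following is a minimal boto3 sketch of the CreateDomain request; the domain name, role ARN, KMS key, VPC, and subnet IDs are placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_domain(
    DomainName="example-domain",
    AuthMode="IAM",
    DefaultUserSettings={
        # Execution role that needs the KMS permissions described above.
        "ExecutionRole": "arn:aws:iam::111122223333:role/MySageMakerExecutionRole"
    },
    SubnetIds=["subnet-0123456789abcdef0"],
    VpcId="vpc-0123456789abcdef0",
    # Customer managed key passed as KmsKeyId in the request.
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)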

CreateImage and UpdateImage APIs: Execution Role Permissions

For an execution role that you can pass in a CreateImage or UpdateImage API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    }
  ]
}
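
The role is passed as RoleArn in the request. The following is a minimal boto3 sketch of a CreateImage call; the image name and role ARN are placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_image(
    ImageName="example-custom-image",
    # Role that SageMaker assumes to pull the image from Amazon ECR.
    RoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
    Description="Custom SageMaker image backed by an Amazon ECR repository",
)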

CreateNotebookInstance API: Execution Role Permissions

The permissions that you grant to the execution role for calling the CreateNotebookInstance API depend on what you plan to do with the notebook instance. If you plan to use it to invoke SageMaker APIs and pass the same role when calling the CreateTrainingJob and CreateModel APIs, attach the following permissions policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:*",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:SetRepositoryPolicy",
        "ecr:CompleteLayerUpload",
        "ecr:BatchDeleteImage",
        "ecr:UploadLayerPart",
        "ecr:DeleteRepositoryPolicy",
        "ecr:InitiateLayerUpload",
        "ecr:DeleteRepository",
        "ecr:PutImage",
        "ecr:CreateRepository",
        "cloudwatch:PutMetricData",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents",
        "s3:CreateBucket",
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "robomaker:CreateSimulationApplication",
        "robomaker:DescribeSimulationApplication",
        "robomaker:DeleteSimulationApplication",
        "robomaker:CreateSimulationJob",
        "robomaker:DescribeSimulationJob",
        "robomaker:CancelSimulationJob",
        "ec2:CreateVpcEndpoint",
        "ec2:DescribeRouteTables",
        "elasticfilesystem:DescribeMountTargets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull",
        "codecommit:GitPush"
      ],
      "Resource": [
        "arn:aws:codecommit:*:*:*sagemaker*",
        "arn:aws:codecommit:*:*:*SageMaker*",
        "arn:aws:codecommit:*:*:*Sagemaker*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    }
  ]
}

To tighten the permissions, limit them to specific Amazon S3 and Amazon ECR resources by replacing "Resource": "*" with specific resource ARNs, as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:*",
        "ecr:GetAuthorizationToken",
        "cloudwatch:PutMetricData",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object1",
        "arn:aws:s3:::outputbucket/path",
        "arn:aws:s3:::inputbucket/object2",
        "arn:aws:s3:::inputbucket/object3"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "arn:aws:ecr:region::repository/my-repo1",
        "arn:aws:ecr:region::repository/my-repo2",
        "arn:aws:ecr:region::repository/my-repo3"
      ]
    }
  ]
}

If you plan to access other resources, such as Amazon DynamoDB or Amazon Relational Database Service, add the relevant permissions to this policy.

In the preceding policy, you scope the policy as follows:

  • Scope the s3:ListBucket permission to the specific bucket that you specify as InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.

  • Scope s3:GetObject, s3:PutObject, and s3:DeleteObject permissions as follows:

    • Scope to the following values that you specify in a CreateTrainingJob request:

      InputDataConfig.DataSource.S3DataSource.S3Uri

      OutputDataConfig.S3OutputPath

    • Scope to the following values that you specify in a CreateModel request:

      PrimaryContainer.ModelDataUrl

      SupplementalContainers.ModelDataUrl

  • Scope ecr permissions as follows:

    • Scope to the AlgorithmSpecification.TrainingImage value that you specify in a CreateTrainingJob request.

    • Scope to the PrimaryContainer.Image value that you specify in a CreateModel request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
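
After attaching one of the preceding policies to the role, you pass its ARN as RoleArn when you create the notebook instance. The following is a minimal boto3 sketch; the instance name and role ARN are placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_notebook_instance(
    NotebookInstanceName="example-notebook",
    InstanceType="ml.t3.medium",
    # Execution role with the permissions described in this section.
    RoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
    VolumeSizeInGB=5,
)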

CreateHyperParameterTuningJob API: Execution Role Permissions

For an execution role that you can pass in a CreateHyperParameterTuningJob API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Instead of specifying "Resource": "*", you could scope these permissions to specific Amazon S3, Amazon ECR, and Amazon CloudWatch Logs resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:region::repository/my-repo"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/sagemaker/TrainingJobs*"
    }
  ]
}

If the training container associated with the hyperparameter tuning job needs to access other data sources, such as DynamoDB or Amazon RDS resources, add relevant permissions to this policy.

In the preceding policy, you scope the policy as follows:

  • Scope the s3:ListBucket permission to a specific bucket that you specify as the InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.

  • Scope the s3:GetObject and s3:PutObject permissions to the following objects that you specify in the input and output data configuration in a CreateHyperParameterTuningJob request:

    InputDataConfig.DataSource.S3DataSource.S3Uri

    OutputDataConfig.S3OutputPath

  • Scope Amazon ECR permissions to the registry path (AlgorithmSpecification.TrainingImage) that you specify in a CreateHyperParameterTuningJob request.

  • Scope Amazon CloudWatch Logs permissions to the log group for SageMaker training jobs.

The cloudwatch actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.

If you specify a private VPC for your hyperparameter tuning job, add the following permissions:

{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}

If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }

If you specify a KMS key in the output configuration of your hyperparameter tuning job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }

If you specify a volume KMS key in the resource configuration of your hyperparameter tuning job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
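
Putting this together, the role ARN is supplied in the TrainingJobDefinition of the CreateHyperParameterTuningJob request. The following boto3 sketch reuses the inputbucket and outputbucket placeholders from the scoped policy above; the job name, image URI, metric, hyperparameter range, and role ARN are likewise placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_hyper_parameter_tuning_job(
    HyperParameterTuningJobName="example-tuning-job",
    HyperParameterTuningJobConfig={
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Minimize",
            "MetricName": "validation:rmse",
        },
        "ResourceLimits": {"MaxNumberOfTrainingJobs": 4, "MaxParallelTrainingJobs": 2},
        "ParameterRanges": {
            "ContinuousParameterRanges": [
                {"Name": "eta", "MinValue": "0.01", "MaxValue": "0.5"}
            ]
        },
    },
    TrainingJobDefinition={
        "AlgorithmSpecification": {
            # Registry path that the ECR permissions are scoped to.
            "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
            "TrainingInputMode": "File",
            # Lets the tuner read the objective metric from a custom image's logs.
            "MetricDefinitions": [
                {"Name": "validation:rmse", "Regex": "validation-rmse:([0-9\\.]+)"}
            ],
        },
        # The execution role described in this section.
        "RoleArn": "arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
        "InputDataConfig": [
            {
                "ChannelName": "train",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": "s3://inputbucket/object",
                    }
                },
            }
        ],
        "OutputDataConfig": {"S3OutputPath": "s3://outputbucket/path"},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    },
)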

CreateProcessingJob API: Execution Role Permissions

For an execution role that you can pass in a CreateProcessingJob API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Instead of specifying "Resource": "*", you could scope these permissions to specific Amazon S3 and Amazon ECR resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:region::repository/my-repo"
    }
  ]
}

If CreateProcessingJob.AppSpecification.ImageUri needs to access other data sources, such as DynamoDB or Amazon RDS resources, add relevant permissions to this policy.

In the preceding policy, you scope the policy as follows:

  • Scope the s3:ListBucket permission to a specific bucket that you specify as the ProcessingInputs in a CreateProcessingJob request.

  • Scope the s3:GetObject and s3:PutObject permissions to the objects that will be downloaded or uploaded in the ProcessingInputs and ProcessingOutputConfig in a CreateProcessingJob request.

  • Scope Amazon ECR permissions to the registry path (AppSpecification.ImageUri) that you specify in a CreateProcessingJob request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.

If you specify a private VPC for your processing job, add the following permissions. Don't scope in the policy with any conditions or resource filters. Otherwise, the validation checks that occur during the creation of the processing job fail.

{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}

If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }

If you specify a KMS key in the output configuration of your processing job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }

If you specify a volume KMS key in the resource configuration of your processing job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
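
The role ARN is passed as RoleArn in the CreateProcessingJob request. The following boto3 sketch reuses the inputbucket and outputbucket placeholders from the scoped policy above; the job name, image URI, local paths, and role ARN are likewise placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_processing_job(
    ProcessingJobName="example-processing-job",
    AppSpecification={
        # Container image whose registry path the ECR permissions are scoped to.
        "ImageUri": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest"
    },
    # The execution role described in this section.
    RoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
    ProcessingResources={
        "ClusterConfig": {
            "InstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
            "VolumeSizeInGB": 10,
        }
    },
    ProcessingInputs=[
        {
            "InputName": "input",
            "S3Input": {
                "S3Uri": "s3://inputbucket/object",
                "LocalPath": "/opt/ml/processing/input",
                "S3DataType": "S3Prefix",
                "S3InputMode": "File",
            },
        }
    ],
    ProcessingOutputConfig={
        "Outputs": [
            {
                "OutputName": "output",
                "S3Output": {
                    "S3Uri": "s3://outputbucket/path",
                    "LocalPath": "/opt/ml/processing/output",
                    "S3UploadMode": "EndOfJob",
                },
            }
        ]
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)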

CreateTrainingJob API: Execution Role Permissions

For an execution role that you can pass in a CreateTrainingJob API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Instead of specifying "Resource": "*", you could scope these permissions to specific Amazon S3 and Amazon ECR resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:region::repository/my-repo"
    }
  ]
}

If CreateTrainingJob.AlgorithmSpecification.TrainingImage needs to access other data sources, such as DynamoDB or Amazon RDS resources, add relevant permissions to this policy.

In the preceding policy, you scope the policy as follows:

  • Scope the s3:ListBucket permission to a specific bucket that you specify as the InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.

  • Scope the s3:GetObject and s3:PutObject permissions to the following objects that you specify in the input and output data configuration in a CreateTrainingJob request:

    InputDataConfig.DataSource.S3DataSource.S3Uri

    OutputDataConfig.S3OutputPath

  • Scope Amazon ECR permissions to the registry path (AlgorithmSpecification.TrainingImage) that you specify in a CreateTrainingJob request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.

If you specify a private VPC for your training job, add the following permissions:

{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}

If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }

If you specify a KMS key in the output configuration of your training job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }

If you specify a volume KMS key in the resource configuration of your training job, add the following permissions:

{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
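
The role ARN is passed as RoleArn in the CreateTrainingJob request. The following boto3 sketch reuses the inputbucket and outputbucket placeholders from the scoped policy above; the job name, image URI, and role ARN are likewise placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_training_job(
    TrainingJobName="example-training-job",
    AlgorithmSpecification={
        # Registry path that the ECR permissions are scoped to.
        "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
        "TrainingInputMode": "File",
    },
    # The execution role described in this section.
    RoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
    InputDataConfig=[
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://inputbucket/object",
                }
            },
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://outputbucket/path"},
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)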

CreateModel API: Execution Role Permissions

For an execution role that you can pass in a CreateModel API request, you can attach the following permission policy to the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Instead of specifying "Resource": "*", you can scope these permissions to specific Amazon S3 and Amazon ECR resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "arn:aws:ecr:region::repository/my-repo"
      ]
    }
  ]
}

If CreateModel.PrimaryContainer.Image needs to access other data sources, such as Amazon DynamoDB or Amazon RDS resources, add relevant permissions to this policy.

In the preceding policy, you scope the policy as follows:

  • Scope S3 permissions to objects that you specify in the PrimaryContainer.ModelDataUrl in a CreateModel request.

  • Scope Amazon ECR permissions to a specific registry path that you specify as the PrimaryContainer.Image and SecondaryContainer.Image in a CreateModel request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
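
The role ARN is passed as ExecutionRoleArn in the CreateModel request. The following boto3 sketch reuses the inputbucket and my-repo placeholders from the scoped policy above; the model name and role ARN are likewise placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_model(
    ModelName="example-model",
    PrimaryContainer={
        # Registry path and model artifact that the policy above is scoped to;
        # in practice, ModelDataUrl points to a model.tar.gz in Amazon S3.
        "Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
        "ModelDataUrl": "s3://inputbucket/object",
    },
    # The execution role described in this section.
    ExecutionRoleArn="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",
)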

Note

If you plan to use the SageMaker deployment guardrails feature for model deployment in production, ensure that your execution role has permission to perform the cloudwatch:DescribeAlarms action on your auto-rollback alarms.

If you specify a private VPC for your model, add the following permissions:

{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}