SageMaker Roles
As a managed service, SageMaker performs operations on your behalf on AWS-managed hardware. SageMaker can perform only the operations that the user permits.
A SageMaker user can grant these permissions with an IAM role (referred to as an execution role).
To create and use a locally available execution role, you can use the following procedures.
Get execution role
When you run a notebook within SageMaker, you can access the execution role with the following code:

```python
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
```
The execution role is intended to be available only when running a notebook within SageMaker. If you run get_execution_role in a notebook that is not on SageMaker, expect a "region" error.
To find the IAM role ARN created when you created your notebook instance or Studio application, go to the Notebook instances page in the console and select the relevant notebook from the list of names. In the configuration details page, the IAM role ARN is given in the Permissions and encryption section.
To create a new role:

1. Log onto the console -> IAM -> Roles -> Create Role.
2. Create a service-linked role with sagemaker.amazonaws.com.
3. Give the role the AmazonSageMakerFullAccess policy.
4. Give the role the AmazonS3FullAccess policy (limit the permissions to specific buckets if possible).
5. Make note of the ARN once the role is created.
With a known ARN for your role, you can programmatically check the role when running the notebook locally or on SageMaker. Replace the RoleName value with the name of your role (IAM's get_role takes the role name and returns its ARN):

```python
import boto3
import sagemaker

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='AmazonSageMaker-ExecutionRole-20201200T100000')['Role']['Arn']
```
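When juggling a role name and a role ARN, it is easy to pass the wrong one. A small sanity check on the resolved value can catch this early; the helper below is illustrative and not part of the SageMaker SDK:

```python
def looks_like_role_arn(value):
    """Loose check that a value is an IAM role ARN, not a bare role name."""
    parts = value.split(":")
    return (
        len(parts) == 6
        and parts[0] == "arn"
        and parts[2] == "iam"
        and parts[5].startswith("role/")
    )

# The ARN noted when the role was created passes; the bare role name does not.
assert looks_like_role_arn("arn:aws:iam::111122223333:role/AmazonSageMaker-ExecutionRole-20201200T100000")
assert not looks_like_role_arn("AmazonSageMaker-ExecutionRole-20201200T100000")
```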
Passing Roles
Passing a role between services is a common action within SageMaker. You can find more details on Actions, Resources, and Condition Keys for SageMaker in the IAM User Guide.
You pass the role (iam:PassRole) when making these API calls: CreateAutoMLJob, CreateCompilationJob, CreateDomain, CreateFlowDefinition, CreateHyperParameterTuningJob, CreateImage, CreateLabelingJob, CreateModel, CreateMonitoringSchedule, CreateNotebookInstance, CreateProcessingJob, CreateTrainingJob, CreateUserProfile, RenderUiTemplate, and UpdateImage.
You attach the following trust policy to the IAM role. It grants the SageMaker principal permission to assume the role, and it is the same for all of the execution roles:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sagemaker.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
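The console steps above can also be scripted. The following sketch creates a role with this trust policy and attaches the two managed policies; it assumes boto3 is installed and your credentials permit creating IAM roles, and the role name is a placeholder:

```python
import json

# The trust policy from above: lets the SageMaker service principal assume the role.
SAGEMAKER_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

def create_execution_role(role_name="my-sagemaker-execution-role"):
    """Create an execution role and attach the managed policies from the console steps."""
    import boto3  # imported here so the module loads without AWS credentials

    iam = boto3.client("iam")
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(SAGEMAKER_TRUST_POLICY),
    )
    for policy_arn in (
        "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    ):
        iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)
    return role["Role"]["Arn"]  # make note of this ARN
```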
The permissions that you need to grant to the role vary depending on the API that you call. The following sections explain these permissions.
Instead of managing permissions by crafting a permission policy, you can use the
AWS-managed AmazonSageMakerFullAccess
permission policy. The
permissions in this policy are fairly broad, to allow for any actions you might want
to perform in SageMaker. For a listing of the policy including information about the
reasons for adding many of the permissions, see AmazonSageMakerFullAccess Policy. If you
prefer to create custom policies and manage permissions to scope the permissions
only to the actions you need to perform with the execution role, see the following
topics.
For more information about IAM roles, see IAM Roles in the IAM User Guide.
Topics
- CreateDomain API: Execution Role Permissions
- CreateImage and UpdateImage APIs: Execution Role Permissions
- CreateNotebookInstance API: Execution Role Permissions
- CreateHyperParameterTuningJob API: Execution Role Permissions
- CreateProcessingJob API: Execution Role Permissions
- CreateTrainingJob API: Execution Role Permissions
- CreateModel API: Execution Role Permissions
- AmazonSageMakerFullAccess Policy
CreateDomain API: Execution Role Permissions
The execution role for AWS SSO domains, and the user/execution role for IAM domains, need the following permissions when you pass an AWS KMS customer managed key (CMK) as the KmsKeyId in the CreateDomain API request. The permissions are enforced during the CreateApp API call.

For an execution role that you can pass in the CreateDomain API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:region:account-id:key/kms-key-id"
    }
  ]
}
```
Alternatively, if the permissions are specified in a KMS policy, you can attach the following policy to the role:
```json
{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::account-id:role/ExecutionRole"
    ]
  },
  "Action": [
    "kms:DescribeKey",
    "kms:CreateGrant"
  ],
  "Resource": "*"
}
```
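Putting the pieces together, a CreateDomain request that passes a customer managed key can be built as in the following sketch. All identifiers are placeholders, and the dict would be passed to boto3.client("sagemaker").create_domain(**request):

```python
def domain_request(domain_name, execution_role_arn, vpc_id, subnet_ids, kms_key_id):
    """Build a CreateDomain request that passes an AWS KMS customer managed key."""
    return {
        "DomainName": domain_name,
        "AuthMode": "IAM",
        "DefaultUserSettings": {"ExecutionRole": execution_role_arn},
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        # The key the execution role needs kms:CreateGrant / kms:DescribeKey on:
        "KmsKeyId": kms_key_id,
    }

request = domain_request(
    "my-domain",                                          # placeholder
    "arn:aws:iam::111122223333:role/ExecutionRole",       # placeholder
    "vpc-0123456789abcdef0",                              # placeholder
    ["subnet-0123456789abcdef0"],                         # placeholder
    "arn:aws:kms:us-east-1:111122223333:key/kms-key-id",  # placeholder
)
```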
CreateImage and UpdateImage APIs: Execution Role Permissions
For an execution role that you can pass in a CreateImage or UpdateImage API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    }
  ]
}
```
CreateNotebookInstance API: Execution Role Permissions
The permissions that you grant to the execution role for calling the CreateNotebookInstance API depend on what you plan to do with the notebook instance. If you plan to use it to invoke SageMaker APIs and pass the same role when calling the CreateTrainingJob and CreateModel APIs, attach the following permissions policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:*",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:SetRepositoryPolicy",
        "ecr:CompleteLayerUpload",
        "ecr:BatchDeleteImage",
        "ecr:UploadLayerPart",
        "ecr:DeleteRepositoryPolicy",
        "ecr:InitiateLayerUpload",
        "ecr:DeleteRepository",
        "ecr:PutImage",
        "ecr:CreateRepository",
        "cloudwatch:PutMetricData",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents",
        "s3:CreateBucket",
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "robomaker:CreateSimulationApplication",
        "robomaker:DescribeSimulationApplication",
        "robomaker:DeleteSimulationApplication",
        "robomaker:CreateSimulationJob",
        "robomaker:DescribeSimulationJob",
        "robomaker:CancelSimulationJob",
        "ec2:CreateVpcEndpoint",
        "ec2:DescribeRouteTables",
        "fsx:DescribeFileSystem",
        "elasticfilesystem:DescribeMountTargets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull",
        "codecommit:GitPush"
      ],
      "Resource": [
        "arn:aws:codecommit:*:*:*sagemaker*",
        "arn:aws:codecommit:*:*:*SageMaker*",
        "arn:aws:codecommit:*:*:*Sagemaker*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    }
  ]
}
```
To tighten the permissions, limit them to specific Amazon S3 and Amazon ECR resources by restricting "Resource": "*", as follows:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:*",
        "ecr:GetAuthorizationToken",
        "cloudwatch:PutMetricData",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object1",
        "arn:aws:s3:::outputbucket/path",
        "arn:aws:s3:::inputbucket/object2",
        "arn:aws:s3:::inputbucket/object3"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "arn:aws:ecr:::repository/my-repo1",
        "arn:aws:ecr:::repository/my-repo2",
        "arn:aws:ecr:::repository/my-repo3"
      ]
    }
  ]
}
```
If you plan to access other resources, such as Amazon DynamoDB or Amazon Relational Database Service, add the relevant permissions to this policy.
In the preceding policy, you scope the policy as follows:

- Scope the s3:ListBucket permission to the specific bucket that you specify as InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.
- Scope the s3:GetObject, s3:PutObject, and s3:DeleteObject permissions as follows:
  - Scope to the following values that you specify in a CreateTrainingJob request: InputDataConfig.DataSource.S3DataSource.S3Uri and OutputDataConfig.S3OutputPath.
  - Scope to the following values that you specify in a CreateModel request: PrimaryContainer.ModelDataUrl and SupplementalContainers.ModelDataUrl.
- Scope the ecr permissions as follows:
  - Scope to the AlgorithmSpecification.TrainingImage value that you specify in a CreateTrainingJob request.
  - Scope to the PrimaryContainer.Image value that you specify in a CreateModel request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
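Hand-editing the scoped resource lists above is error prone. A small generator can build the S3 and ECR statements from bucket, object, and repository names; this is a sketch in which all names are placeholders, and the ARN formats follow the policy above:

```python
def scoped_data_statements(input_bucket, objects, output_path, repos):
    """Build the S3 and ECR policy statements scoped to specific resources."""
    return [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{input_bucket}"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{input_bucket}/{o}" for o in objects]
                        + [f"arn:aws:s3:::{output_path}"],
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
            ],
            "Resource": [f"arn:aws:ecr:::repository/{r}" for r in repos],
        },
    ]
```

The returned statements can be appended to the policy's "Statement" list before attaching it to the role.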
CreateHyperParameterTuningJob API: Execution Role Permissions
For an execution role that you can pass in a CreateHyperParameterTuningJob API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
Instead of specifying "Resource": "*", you can scope these permissions to specific Amazon S3 and Amazon ECR resources:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:::repository/my-repo"
    }
  ]
}
```
If the training container associated with the hyperparameter tuning job needs to access other data sources, such as DynamoDB or Amazon RDS resources, add relevant permissions to this policy.
In the preceding policy, you scope the policy as follows:

- Scope the s3:ListBucket permission to a specific bucket that you specify as the InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.
- Scope the s3:GetObject and s3:PutObject permissions to the following objects that you specify in the input and output data configuration in a CreateHyperParameterTuningJob request: InputDataConfig.DataSource.S3DataSource.S3Uri and OutputDataConfig.S3OutputPath.
- Scope Amazon ECR permissions to the registry path (AlgorithmSpecification.TrainingImage) that you specify in a CreateHyperParameterTuningJob request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
If you specify a private VPC for your hyperparameter tuning job, add the following permissions:
```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}
```
If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }
```
If you specify a KMS key in the output configuration of your hyperparameter tuning job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }
```
If you specify a volume KMS key in the resource configuration of your hyperparameter tuning job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
```
CreateProcessingJob API: Execution Role Permissions
For an execution role that you can pass in a CreateProcessingJob API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
Instead of specifying "Resource": "*", you can scope these permissions to specific Amazon S3 and Amazon ECR resources:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:::repository/my-repo"
    }
  ]
}
```
If CreateProcessingJob.AppSpecification.ImageUri needs to access other data sources, such as DynamoDB or Amazon RDS resources, add the relevant permissions to this policy.
In the preceding policy, you scope the policy as follows:

- Scope the s3:ListBucket permission to a specific bucket that you specify in the ProcessingInputs in a CreateProcessingJob request.
- Scope the s3:GetObject and s3:PutObject permissions to the objects that are downloaded or uploaded in the ProcessingInputs and ProcessingOutputConfig in a CreateProcessingJob request.
- Scope Amazon ECR permissions to the registry path (AppSpecification.ImageUri) that you specify in a CreateProcessingJob request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
If you specify a private VPC for your processing job, add the following permissions:
```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}
```
If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }
```
If you specify a KMS key in the output configuration of your processing job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }
```
If you specify a volume KMS key in the resource configuration of your processing job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
```
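The fields that the scoped processing-job policy references map onto a CreateProcessingJob request as in the following sketch. Names, URIs, and instance settings are placeholders, and the dict would be passed to boto3.client("sagemaker").create_processing_job(**request):

```python
def processing_job_request(job_name, role_arn, image_uri, input_s3_uri, output_s3_uri):
    """Minimal CreateProcessingJob request showing the fields the scoped policy references."""
    return {
        "ProcessingJobName": job_name,
        "RoleArn": role_arn,
        "AppSpecification": {"ImageUri": image_uri},  # scope the ECR permissions to this path
        "ProcessingInputs": [
            {
                "InputName": "input-1",
                "S3Input": {
                    "S3Uri": input_s3_uri,  # scope s3:ListBucket / s3:GetObject here
                    "LocalPath": "/opt/ml/processing/input",
                    "S3DataType": "S3Prefix",
                    "S3InputMode": "File",
                },
            }
        ],
        "ProcessingOutputConfig": {
            "Outputs": [
                {
                    "OutputName": "output-1",
                    "S3Output": {
                        "S3Uri": output_s3_uri,  # scope s3:PutObject here
                        "LocalPath": "/opt/ml/processing/output",
                        "S3UploadMode": "EndOfJob",
                    },
                }
            ]
        },
        "ProcessingResources": {
            "ClusterConfig": {
                "InstanceCount": 1,
                "InstanceType": "ml.m5.xlarge",
                "VolumeSizeInGB": 30,
            }
        },
    }

request = processing_job_request(
    "my-processing-job",                                            # placeholder
    "arn:aws:iam::111122223333:role/ExecutionRole",                 # placeholder
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",  # placeholder
    "s3://inputbucket/object",                                      # placeholder
    "s3://outputbucket/path",                                       # placeholder
)
```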
CreateTrainingJob API: Execution Role Permissions
For an execution role that you can pass in a CreateTrainingJob API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
Instead of specifying "Resource": "*", you can scope these permissions to specific Amazon S3 and Amazon ECR resources:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::outputbucket/path"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:::repository/my-repo"
    }
  ]
}
```
If CreateTrainingJob.AlgorithmSpecification.TrainingImage needs to access other data sources, such as DynamoDB or Amazon RDS resources, add the relevant permissions to this policy.
In the preceding policy, you scope the policy as follows:

- Scope the s3:ListBucket permission to a specific bucket that you specify as the InputDataConfig.DataSource.S3DataSource.S3Uri in a CreateTrainingJob request.
- Scope the s3:GetObject and s3:PutObject permissions to the following objects that you specify in the input and output data configuration in a CreateTrainingJob request: InputDataConfig.DataSource.S3DataSource.S3Uri and OutputDataConfig.S3OutputPath.
- Scope Amazon ECR permissions to the registry path (AlgorithmSpecification.TrainingImage) that you specify in a CreateTrainingJob request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
If you specify a private VPC for your training job, add the following permissions:
```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}
```
If your input is encrypted using server-side encryption with an AWS KMS–managed key (SSE-KMS), add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Decrypt" ] }
```
If you specify a KMS key in the output configuration of your training job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:Encrypt" ] }
```
If you specify a volume KMS key in the resource configuration of your training job, add the following permissions:
```json
{ "Effect": "Allow", "Action": [ "kms:CreateGrant" ] }
```
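The fields that the scoped training-job policy references map onto a CreateTrainingJob request as in the following sketch. All names, URIs, and instance settings are placeholders, and the dict would be passed to boto3.client("sagemaker").create_training_job(**request):

```python
def training_job_request(job_name, role_arn, training_image, input_s3_uri, output_s3_path):
    """Minimal CreateTrainingJob request showing the fields the scoped policy references."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": training_image,  # scope the ECR permissions to this path
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [
            {
                "ChannelName": "train",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": input_s3_uri,  # scope s3:ListBucket / s3:GetObject here
                    }
                },
            }
        ],
        "OutputDataConfig": {"S3OutputPath": output_s3_path},  # scope s3:PutObject here
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = training_job_request(
    "my-training-job",                                              # placeholder
    "arn:aws:iam::111122223333:role/ExecutionRole",                 # placeholder
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",  # placeholder
    "s3://inputbucket/object",                                      # placeholder
    "s3://outputbucket/path",                                       # placeholder
)
```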
CreateModel API: Execution Role Permissions
For an execution role that you can pass in a CreateModel API request, you can attach the following permission policy to the role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "s3:GetObject",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
Instead of specifying "Resource": "*", you can scope these permissions to specific Amazon S3 and Amazon ECR resources:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::inputbucket/object",
        "arn:aws:s3:::inputbucket/object"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "arn:aws:ecr:::repository/my-repo",
        "arn:aws:ecr:::repository/my-repo"
      ]
    }
  ]
}
```
If CreateModel.PrimaryContainer.Image needs to access other data sources, such as Amazon DynamoDB or Amazon RDS resources, add the relevant permissions to this policy.
In the preceding policy, you scope the policy as follows:

- Scope S3 permissions to the objects that you specify in the PrimaryContainer.ModelDataUrl in a CreateModel request.
- Scope Amazon ECR permissions to a specific registry path that you specify as the PrimaryContainer.Image and SecondaryContainer.Image in a CreateModel request.

The cloudwatch and logs actions are applicable for "*" resources. For more information, see CloudWatch Resources and Operations in the Amazon CloudWatch User Guide.
If you specify a private VPC for your model, add the following permissions:
```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateNetworkInterface",
    "ec2:CreateNetworkInterfacePermission",
    "ec2:DeleteNetworkInterface",
    "ec2:DeleteNetworkInterfacePermission",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeSubnets",
    "ec2:DescribeSecurityGroups"
  ]
}
```
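The fields that the scoped model policy references map onto a CreateModel request as in the following sketch. All names and URIs are placeholders, and the dict would be passed to boto3.client("sagemaker").create_model(**request):

```python
def model_request(model_name, role_arn, image_uri, model_data_url):
    """Minimal CreateModel request showing the fields the scoped policy references."""
    return {
        "ModelName": model_name,
        "ExecutionRoleArn": role_arn,
        "PrimaryContainer": {
            "Image": image_uri,              # scope the ECR permissions to this path
            "ModelDataUrl": model_data_url,  # scope s3:GetObject to this object
        },
    }

request = model_request(
    "my-model",                                                     # placeholder
    "arn:aws:iam::111122223333:role/ExecutionRole",                 # placeholder
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",  # placeholder
    "s3://inputbucket/model.tar.gz",                                # placeholder
)
```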
AmazonSageMakerFullAccess Policy
The following list explains why some of the categories of permissions in the AmazonSageMakerFullAccess policy are needed.
- application-autoscaling: Needed for automatically scaling a SageMaker real-time inference endpoint.
- aws-marketplace: Needed to view AWS AI Marketplace subscriptions.
- cloudwatch: Needed to post CloudWatch metrics, interact with alarms, and upload logs to CloudWatch Logs in your account.
- codecommit: Needed for AWS CodeCommit integration with SageMaker notebook instances.
- cognito: Needed for SageMaker Ground Truth to define your private workforce and work teams.
- ec2: Needed to manage elastic network interfaces when you specify an Amazon VPC for your SageMaker jobs and notebook instances.
- ec2:DescribeVpcs: All SageMaker services launch Amazon EC2 instances and require this permission set.
- ecr: Needed to pull and store Docker artifacts for training and inference. Required only if you use your own container in SageMaker.
- elastic-inference: Needed to integrate Amazon Elastic Inference with SageMaker.
- glue: Needed for inference pipeline pre-processing from within SageMaker notebook instances.
- groundtruthlabeling: Needed for SageMaker Ground Truth.
- iam:ListRoles: Needed to give the SageMaker console access to list available roles.
- kms: Needed to give the SageMaker console access to list the available AWS KMS keys.
- logs: Needed to allow SageMaker jobs and endpoints to publish log streams.