AWS Tools for Windows PowerShell
Command Reference

Synopsis

Calls the Amazon SageMaker Service CreateCompilationJob API operation.

Syntax

New-SMCompilationJob
-CompilationJobName <String>
-TargetPlatform_Accelerator <TargetPlatformAccelerator>
-TargetPlatform_Arch <TargetPlatformArch>
-OutputConfig_CompilerOption <String>
-InputConfig_DataInputConfig <String>
-InputConfig_Framework <Framework>
-InputConfig_FrameworkVersion <String>
-OutputConfig_KmsKeyId <String>
-StoppingCondition_MaxRuntimeInSecond <Int32>
-StoppingCondition_MaxWaitTimeInSecond <Int32>
-ModelPackageVersionArn <String>
-TargetPlatform_Os <TargetPlatformOs>
-RoleArn <String>
-OutputConfig_S3OutputLocation <String>
-InputConfig_S3Uri <String>
-VpcConfig_SecurityGroupId <String[]>
-VpcConfig_Subnet <String[]>
-Tag <Tag[]>
-OutputConfig_TargetDevice <TargetDevice>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>

Description

Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify. If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with Amazon Web Services IoT Greengrass. In that case, deploy them as an ML resource. In the request body, you provide the following:
  • A name for the compilation job
  • Information about the input model artifacts
  • The output location for the compiled model and the device (target) that the model runs on
  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.
You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compiled job. To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
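A minimal call might look like the following sketch (the job name, role ARN, and S3 paths are placeholders; the shape dictionary matches the TensorFlow example under InputConfig_DataInputConfig):

    New-SMCompilationJob -CompilationJobName "my-compilation-job" `
        -RoleArn "arn:aws:iam::111122223333:role/SageMakerExecutionRole" `
        -InputConfig_S3Uri "s3://amzn-s3-demo-bucket/model/model.tar.gz" `
        -InputConfig_Framework TENSORFLOW `
        -InputConfig_DataInputConfig '{"input":[1,1024,1024,3]}' `
        -OutputConfig_S3OutputLocation "s3://amzn-s3-demo-bucket/compiled" `
        -OutputConfig_TargetDevice ml_c5 `
        -StoppingCondition_MaxRuntimeInSecond 900

The cmdlet returns the new job's CompilationJobArn as a string.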

Parameters

-CompilationJobName <String>
A name for the model compilation job. The name must be unique within the Amazon Web Services Region and within your Amazon Web Services account.
Required? True
Position? 1
Accept pipeline input? True (ByValue, ByPropertyName)
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. Use this parameter with caution.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-InputConfig_DataInputConfig <String>
Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific.
  • TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
    • Examples for one input:
      • If using the console, {"input":[1,1024,1024,3]}
      • If using the CLI, {\"input\":[1,1024,1024,3]}
    • Examples for two inputs:
      • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
      • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
  • KERAS: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
    • Examples for one input:
      • If using the console, {"input_1":[1,3,224,224]}
      • If using the CLI, {\"input_1\":[1,3,224,224]}
    • Examples for two inputs:
      • If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
      • If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
  • MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
    • Examples for one input:
      • If using the console, {"data":[1,3,1024,1024]}
      • If using the CLI, {\"data\":[1,3,1024,1024]}
    • Examples for two inputs:
      • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
      • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
  • PyTorch: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.
    • Examples for one input in dictionary format:
      • If using the console, {"input0":[1,3,224,224]}
      • If using the CLI, {\"input0\":[1,3,224,224]}
    • Example for one input in list format: [[1,3,224,224]]
    • Examples for two inputs in dictionary format:
      • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
      • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
    • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
  • XGBOOST: input data name and shape are not needed.
DataInputConfig supports the following parameters for the CoreML OutputConfig$TargetDevice (ML Model format):
  • shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, CoreML converter supports Flexible input shapes:
    • Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
    • Enumerated shapes. Sometimes, the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
  • default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
  • type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image instead. The Image input type requires additional input parameters, such as bias and scale.
  • bias: If the input type is an Image, you need to provide the bias vector.
  • scale: If the input type is an Image, you need to provide a scale factor.
CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:
  • Tensor type input:
    • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
  • Tensor type input without input name (PyTorch):
    • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
  • Image type input:
    • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}
    • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
  • Image type input without input name (PyTorch):
    • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]
    • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
Depending on the model format, DataInputConfig requires the following parameters for the ml_eia2 OutputConfig:TargetDevice.
  • For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig:CompilerOptions if the model does not use TensorFlow's default signature def key. For example:
    • "DataInputConfig": {"inputs": [1, 224, 224, 3]}
    • "CompilerOptions": {"signature_def_key": "serving_custom"}
  • For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig:CompilerOptions. For example:
    • "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}
    • "CompilerOptions": {"output_names": ["output_tensor:0"]}
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
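From PowerShell, the CLI-style backslash escaping shown above is unnecessary; a hashtable serialized with ConvertTo-Json produces the same dictionary (a sketch; the input names and shapes are illustrative):

    # Build a two-input shape dictionary and serialize it to compact JSON
    $shapes = [ordered]@{ var1 = @(1,1,28,28); var2 = @(1,1,28,28) }
    $dataInputConfig = $shapes | ConvertTo-Json -Compress
    # $dataInputConfig is now {"var1":[1,1,28,28],"var2":[1,1,28,28]}
    # Pass it as: -InputConfig_DataInputConfig $dataInputConfig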
-InputConfig_Framework <Framework>
Identifies the framework in which the model was trained. For example: TENSORFLOW.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-InputConfig_FrameworkVersion <String>
Specifies the framework version to use. This API field is only supported for the PyTorch and TensorFlow frameworks. For information about framework versions supported for cloud targets and edge devices, see Cloud Supported Instance Types and Frameworks and Edge Supported Frameworks.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-InputConfig_S3Uri <String>
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-ModelPackageVersionArn <String>
The Amazon Resource Name (ARN) of a versioned model package. Provide either a ModelPackageVersionArn or an InputConfig object in the request syntax. The presence of both objects in the CreateCompilationJob request will return an exception.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
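For example, a sketch of compiling from a versioned model package instead of supplying InputConfig values (the ARNs and bucket are placeholders):

    New-SMCompilationJob -CompilationJobName "my-pkg-compilation" `
        -ModelPackageVersionArn "arn:aws:sagemaker:us-west-2:111122223333:model-package/my-models/1" `
        -OutputConfig_S3OutputLocation "s3://amzn-s3-demo-bucket/compiled" `
        -OutputConfig_TargetDevice ml_c5 `
        -RoleArn "arn:aws:iam::111122223333:role/SageMakerExecutionRole"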
-OutputConfig_CompilerOption <String>
Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. Compiler options are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, specifying CompilerOptions is optional.
  • DTYPE: Specifies the data type for the input. When compiling for ml_* (except for ml_inf) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for data type are:
    • float32: Use either "float" or "float32".
    • int64: Use either "int64" or "long".
    For example, {"dtype" : "float32"}.
  • CPU: Compilation for CPU supports the following compiler options.
    • mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
    • mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}
  • ARM: Details of ARM CPU compilations.
    • NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.
  • NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.
    • gpu_code: Specifies the targeted architecture.
    • trt-ver: Specifies the TensorRT versions in x.y.z. format.
    • cuda-ver: Specifies the CUDA version in x.y format.
    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}
  • ANDROID: Compilation for the Android OS supports the following compiler options:
    • ANDROID_PLATFORM: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.
    • mattr: Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.
  • INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"". For information about supported compiler options, see Neuron Compiler CLI.
  • CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:
    • class_labels: Specifies the name of the classification labels file inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.
  • EIA: Compilation for the Elastic Inference Accelerator supports the following compiler options:
    • precision_mode: Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32". Default is "FP32".
    • signature_def_key: Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.
    • output_names: Specifies a list of output tensor names for models in FrozenGraph format. Set at most one of the two API fields: signature_def_key or output_names.
    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases OutputConfig_CompilerOptions
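Because the options travel as one JSON string, a single-quoted PowerShell string keeps the embedded double quotes intact; a sketch using the NVIDIA options shown above:

    $compilerOptions = '{"gpu-code": "sm_72", "trt-ver": "6.0.1", "cuda-ver": "10.1"}'
    # Pass it as: -OutputConfig_CompilerOption $compilerOptions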
-OutputConfig_KmsKeyId <String>
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide. The KmsKeyId can be any of the following formats:
  • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
  • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
  • Alias name: alias/ExampleAlias
  • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-OutputConfig_S3OutputLocation <String>
Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.
Required? True
Position? Named
Accept pipeline input? True (ByPropertyName)
-OutputConfig_TargetDevice <TargetDevice>
Identifies the target device or the machine learning instance that you want to run your model on after compilation has completed. Alternatively, you can describe the target using the TargetPlatform_Os, TargetPlatform_Arch, and TargetPlatform_Accelerator fields instead of TargetDevice.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-PassThru <SwitchParameter>
Changes the cmdlet behavior to return the value passed to the CompilationJobName parameter. The -PassThru parameter is deprecated; use -Select '^CompilationJobName' instead. This parameter will be removed in a future version.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-RoleArn <String>
The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf. During model compilation, Amazon SageMaker needs your permission to:
  • Read input data from an S3 bucket
  • Write model artifacts to an S3 bucket
  • Write logs to Amazon CloudWatch Logs
  • Publish metrics to Amazon CloudWatch
You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.
Required? True
Position? Named
Accept pipeline input? True (ByPropertyName)
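If you track the execution role by name, you can resolve its ARN first (a sketch; assumes the IAM cmdlets are available and that a role named SageMakerExecutionRole, a placeholder, exists in your account):

    # Resolve the role's ARN by name
    $role = Get-IAMRole -RoleName "SageMakerExecutionRole"
    # Then pass $role.Arn as the -RoleArn value of New-SMCompilationJob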
-Select <String>
Use the -Select parameter to control the cmdlet output. The default value is 'CompilationJobArn'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.SageMaker.Model.CreateCompilationJobResponse). Specifying the name of a property of type Amazon.SageMaker.Model.CreateCompilationJobResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
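A sketch of the three output modes, assuming $params is a splatting hashtable holding the required parameters (each invocation starts a real job, so vary the job name between calls):

    # Default: returns the CompilationJobArn string
    $arn = New-SMCompilationJob @params
    # Alternatively:
    #   -Select '*'                     returns the whole CreateCompilationJobResponse
    #   -Select '^CompilationJobName'   echoes the value passed to -CompilationJobName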
-StoppingCondition_MaxRuntimeInSecond <Int32>
The maximum length of time, in seconds, that a training or compilation job can run. For compilation jobs, if the job does not complete during this time, a TimeOut error is generated. We recommend starting with 900 seconds and increasing as necessary based on your model. For all other jobs, if the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases StoppingCondition_MaxRuntimeInSeconds
-StoppingCondition_MaxWaitTimeInSecond <Int32>
The maximum length of time, in seconds, that a managed Spot training job has to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the job can run. It must be equal to or greater than MaxRuntimeInSeconds. If the job does not complete during this time, SageMaker ends the job. When RetryStrategy is specified in the job request, MaxWaitTimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases StoppingCondition_MaxWaitTimeInSeconds
-Tag <Tag[]>
An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases Tags
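Tags are Amazon.SageMaker.Model.Tag objects; a sketch of constructing one (the key and value are illustrative):

    $tag = New-Object Amazon.SageMaker.Model.Tag
    $tag.Key = "project"
    $tag.Value = "edge-deployment"
    # Pass a single tag as -Tag $tag, or several as -Tag @($tag1, $tag2)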
-TargetPlatform_Accelerator <TargetPlatformAccelerator>
Specifies a target platform accelerator (optional).
  • NVIDIA: NVIDIA graphics processing unit. It also requires the gpu-code, trt-ver, and cuda-ver compiler options
  • MALI: ARM Mali graphics processor
  • INTEL_GRAPHICS: Integrated Intel graphics
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases OutputConfig_TargetPlatform_Accelerator
-TargetPlatform_Arch <TargetPlatformArch>
Specifies a target platform architecture.
  • X86_64: 64-bit version of the x86 instruction set.
  • X86: 32-bit version of the x86 instruction set.
  • ARM64: ARMv8 64-bit CPU.
  • ARM_EABIHF: ARMv7 32-bit, Hard Float.
  • ARM_EABI: ARMv7 32-bit, Soft Float. Used by Android 32-bit ARM platform.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases OutputConfig_TargetPlatform_Arch
-TargetPlatform_Os <TargetPlatformOs>
Specifies a target platform OS.
  • LINUX: Linux-based operating systems.
  • ANDROID: Android operating systems. Android API level can be specified using the ANDROID_PLATFORM compiler option. For example, "CompilerOptions": {'ANDROID_PLATFORM': 28}
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases OutputConfig_TargetPlatform_Os
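When no predefined TargetDevice fits, the three TargetPlatform_* parameters can describe the platform directly; a sketch for a 64-bit ARM Linux device with a Mali GPU:

    # Other required parameters (name, role, input, output) omitted; see the Description example
    New-SMCompilationJob -TargetPlatform_Os LINUX `
        -TargetPlatform_Arch ARM64 `
        -TargetPlatform_Accelerator MALI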
-VpcConfig_SecurityGroupId <String[]>
The VPC security group IDs. IDs have the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases VpcConfig_SecurityGroupIds
-VpcConfig_Subnet <String[]>
The IDs of the subnets in the VPC that you want to connect the compilation job to for accessing the model in Amazon S3.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases VpcConfig_Subnets
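A sketch of connecting the job to a VPC (the IDs are placeholders; other required parameters omitted for brevity):

    New-SMCompilationJob -VpcConfig_SecurityGroupId @("sg-0123456789abcdef0") `
        -VpcConfig_Subnet @("subnet-0123456789abcdef0", "subnet-0fedcba9876543210")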

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases AK
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required? False
Position? Named
Accept pipeline input? True (ByValue, ByPropertyName)
-EndpointUrl <String>
The endpoint to make the call against. Note: This parameter is primarily for internal AWS use and is not required/should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required? False
Position? Named
Accept pipeline input? True (ByValue, ByPropertyName)
-ProfileLocation <String>
Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs). If this optional parameter is omitted, this cmdlet will first search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio. If the profile is not found, the cmdlet will then search the ini-format credential file at the default location: (user's home directory)\.aws\credentials. If this parameter is specified, this cmdlet will only search the ini-format credential file at the location given. Because the current folder can vary in a shell or during script execution, it is advised that you specify a fully qualified path instead of a relative path.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases AWSProfilesLocation, ProfilesLocation
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases StoredCredentials, AWSProfileName
-Region <Object>
The system name of an AWS region or an AWSRegion instance. This governs the endpoint that will be used when calling service operations. Note that the AWS resources referenced in a call are usually region-specific.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases RegionToCall
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases SK, SecretAccessKey
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
Aliases ST

Outputs

This cmdlet returns a System.String object. The service call response (type Amazon.SageMaker.Model.CreateCompilationJobResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.
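For example (assuming $params holds the parameters from the earlier sketches):

    $arn = New-SMCompilationJob @params
    # The full CreateCompilationJobResponse for the most recent call:
    $AWSHistory.LastServiceResponse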

Supported Version

AWS Tools for PowerShell: 2.x.y.z