AWS Tools for Windows PowerShell
Command Reference

Synopsis

Calls the Amazon SageMaker Service CreateTransformJob API operation.

Syntax

New-SMTransformJob
-TransformJobName <String>
-TransformOutput_Accept <String>
-TransformOutput_AssembleWith <AssemblyType>
-BatchStrategy <BatchStrategy>
-TransformInput_CompressionType <CompressionType>
-TransformInput_ContentType <String>
-Environment <Hashtable>
-TransformResources_InstanceCount <Int32>
-TransformResources_InstanceType <TransformInstanceType>
-TransformOutput_KmsKeyId <String>
-MaxConcurrentTransform <Int32>
-MaxPayloadInMB <Int32>
-ModelName <String>
-S3DataSource_S3DataType <S3DataType>
-TransformOutput_S3OutputPath <String>
-S3DataSource_S3Uri <String>
-TransformInput_SplitType <SplitType>
-Tag <Tag[]>
-TransformResources_VolumeKmsKeyId <String>
-Force <SwitchParameter>

Description

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify. To perform batch transformations, you create a transform job and use the data that you have readily available. In the request body, you provide the following:
  • TransformJobName - Identifies the transform job. The name must be unique within an AWS Region in an AWS account.
  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see CreateModel.
  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.
  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.
  • TransformResources - Identifies the ML compute instances for the transform job.
For more information about how batch transformation works in Amazon SageMaker, see How It Works.
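As a minimal sketch, the following call supplies each of those pieces; every name below (job, model, buckets, prefixes) is a hypothetical placeholder:

  # Hypothetical resources; the model must already exist in the target Region and account.
  New-SMTransformJob -TransformJobName 'my-transform-job' `
      -ModelName 'my-model' `
      -S3DataSource_S3DataType S3Prefix `
      -S3DataSource_S3Uri 's3://my-input-bucket/batch-input/' `
      -TransformInput_ContentType 'text/csv' `
      -TransformOutput_S3OutputPath 's3://my-output-bucket/batch-output/' `
      -TransformResources_InstanceType ml.m4.xlarge `
      -TransformResources_InstanceCount 1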

Parameters

-BatchStrategy <BatchStrategy>
Determines the number of records included in a single mini-batch. SingleRecord means only one record is used per mini-batch. MultiRecord means a mini-batch is set to contain as many records as can fit within the MaxPayloadInMB limit. If you set SplitType to Line and BatchStrategy to MultiRecord, batch transform automatically splits your input data to match the specified payload size. There is no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
Required? False
Position? Named
Accept pipeline input? False
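As an illustration, here is a sketch with hypothetical names (splatted for readability) that lets batch transform split newline-delimited input and pack each request up to the payload limit:

  # MultiRecord + Line: the service packs as many whole lines as fit per request.
  $params = @{
      TransformJobName                 = 'my-multirecord-job'
      ModelName                        = 'my-model'
      BatchStrategy                    = 'MultiRecord'
      TransformInput_SplitType         = 'Line'
      S3DataSource_S3DataType          = 'S3Prefix'
      S3DataSource_S3Uri               = 's3://my-input-bucket/batch-input/'
      TransformOutput_S3OutputPath     = 's3://my-output-bucket/batch-output/'
      TransformResources_InstanceType  = 'ml.m4.xlarge'
      TransformResources_InstanceCount = 1
  }
  New-SMTransformJob @params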
-Environment <Hashtable>
The environment variables to set in the Docker container. Up to 16 key-value entries are supported in the map.
Required? False
Position? Named
Accept pipeline input? False
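The parameter takes an ordinary PowerShell hashtable; for example (the variable and entry names here are hypothetical):

  # Each entry becomes an environment variable in the container; at most 16 entries.
  $containerEnv = @{ LOG_LEVEL = 'INFO'; MODEL_TIMEOUT = '60' }

Pass it on the call as -Environment $containerEnv.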
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required? False
Position? Named
Accept pipeline input? False
-MaxConcurrentTransform <Int32>
The maximum number of parallel requests that can be sent to each instance in a transform job. This is useful for algorithms that implement multiple workers on larger instances. The default value is 1. To allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, set the value to 0.
Required? False
Position? Named
Accept pipeline input? False
-MaxPayloadInMB <Int32>
The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records, then multiplying that value by the number of records you want in a mini-batch. It is recommended to enter a value slightly larger than the result to ensure the records fit within the maximum payload size. The default value is 6 MB. For an unlimited payload size, set the value to 0.
Required? False
Position? Named
Accept pipeline input? False
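A back-of-the-envelope sizing sketch following the guidance above (the dataset figures are hypothetical):

  # Approximate record size = dataset size / record count, scaled by records per mini-batch.
  $datasetBytes    = 2GB
  $recordCount     = 1000000
  $recordsPerBatch = 100
  $approxPayloadMB = [math]::Ceiling($datasetBytes / $recordCount * $recordsPerBatch / 1MB)
  # Add headroom so every mini-batch stays under the limit; pass this to -MaxPayloadInMB.
  $maxPayloadInMB  = $approxPayloadMB + 1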
-ModelName <String>
The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.
Required? False
Position? Named
Accept pipeline input? False
-S3DataSource_S3DataType <S3DataType>
If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.
Required? False
Position? Named
Accept pipeline input? False
-S3DataSource_S3Uri <String>
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
  • A key name prefix might look like this: s3://bucketname/exampleprefix.
  • A manifest might look like this: s3://bucketname/example.manifest. The manifest is an S3 object containing JSON with the following format:
      [ {"prefix": "s3://customer_bucket/some/prefix/"},
        "relative/path/to/custdata-1",
        "relative/path/custdata-2",
        ... ]
    The preceding JSON matches the following S3Uris:
      s3://customer_bucket/some/prefix/relative/path/to/custdata-1
      s3://customer_bucket/some/prefix/relative/path/custdata-2
      ...
    The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uri points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.
Required? False
Position? Named
Accept pipeline input? False
-Tag <Tag[]>
An array of key-value pairs. Adding tags is optional. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
Required? False
Position? Named
Accept pipeline input? False
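A sketch of building one tag; Amazon.SageMaker.Model.Tag is the element type the -Tag parameter expects, and the key and value here are hypothetical:

  $tag = New-Object Amazon.SageMaker.Model.Tag
  $tag.Key   = 'project'
  $tag.Value = 'batch-scoring'
  # Pass one or more Tag objects as an array: -Tag $tag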
-TransformInput_CompressionType <CompressionType>
Compressing data helps save on storage space. If your transform data is compressed, specify the compression type, and Amazon SageMaker automatically decompresses the data for the transform job. The default value is None.
Required? False
Position? Named
Accept pipeline input? False
-TransformInput_ContentType <String>
The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
Required? False
Position? Named
Accept pipeline input? False
-TransformInput_SplitType <SplitType>
The method to use to split the transform job's data into smaller batches. The default value is None. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. Amazon SageMaker sends the maximum number of records per batch in each request, up to the MaxPayloadInMB limit. For information about the RecordIO format, see Data Format.
Required? False
Position? Named
Accept pipeline input? False
-TransformJobName <String>
The name of the transform job. The name must be unique within an AWS Region in an AWS account.
Required? False
Position? 1
Accept pipeline input? True (ByValue)
-TransformOutput_Accept <String>
The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.
Required? False
Position? Named
Accept pipeline input? False
-TransformOutput_AssembleWith <AssemblyType>
Defines how to assemble the results of the transform job as a single S3 object. Select the format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line. To assemble the output in RecordIO format, specify RecordIO. The default value is None. For information about the RecordIO format, see Data Format.
Required? False
Position? Named
Accept pipeline input? False
-TransformOutput_KmsKeyId <String>
The AWS Key Management Service (AWS KMS) key for Amazon S3 server-side encryption that Amazon SageMaker uses to encrypt the transformed data. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide. The KMS key policy must grant permission to the IAM role that you specify in your CreateTransformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.
Required? False
Position? Named
Accept pipeline input? False
-TransformOutput_S3OutputPath <String>
The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix. For every S3 object used as input for the transform job, the transformed data is stored in a corresponding subfolder in the location under the output prefix. For example, the transformed data for the input object s3://bucket-name/input-name-prefix/dataset01/data.csv is stored at s3://bucket-name/key-name-prefix/dataset01/, based on the original name, as a series of .part files (.part0001, .part0002, and so on).
Required? False
Position? Named
Accept pipeline input? False
-TransformResources_InstanceCount <Int32>
The number of ML compute instances to use in the transform job. For distributed transform, provide a value greater than 1. The default value is 1.
Required? False
Position? Named
Accept pipeline input? False
-TransformResources_InstanceType <TransformInstanceType>
The ML compute instance type for the transform job. For using built-in algorithms to transform moderately sized datasets, ml.m4.xlarge or ml.m5.large should suffice. There is no default value for InstanceType.
Required? False
Position? Named
Accept pipeline input? False
-TransformResources_VolumeKmsKeyId <String>
The Amazon Resource Name (ARN) of an AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job.
Required? False
Position? Named
Accept pipeline input? False

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-ProfileLocation <String>

Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs).

If this optional parameter is omitted, this cmdlet will first search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio. If the profile is not found, the cmdlet will then search the ini-format credential file at the default location: (user's home directory)\.aws\credentials. Note that the encrypted credential file is not supported on all platforms; it is skipped when searching for profiles on Windows Nano Server, Mac, and Linux platforms.

If this parameter is specified, this cmdlet will only search the ini-format credential file at the given location.

Because the current folder can vary in a shell or during script execution, it is advised that you specify a fully qualified path instead of a relative path.

Required? False
Position? Named
Accept pipeline input? False
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required? False
Position? Named
Accept pipeline input? False
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required? False
Position? Named
Accept pipeline input? False
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-Region <String>
The system name of the AWS Region in which the operation should be invoked. For example: us-east-1, eu-west-1.
Required? False
Position? Named
Accept pipeline input? False
-EndpointUrl <String>

The endpoint to make the call against.

Note: This parameter is primarily for internal AWS use and is not required/should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified to the -Region parameter or set as default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.

Required? False
Position? Named
Accept pipeline input? False
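As a sketch, the session default mentioned in the note above can be set once instead of passing -Region on every call:

  # Makes us-west-2 the default Region for subsequent cmdlet calls in this session.
  Set-DefaultAWSRegion -Region us-west-2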

Inputs

You can pipe a String object to this cmdlet for the TransformJobName parameter.
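For example, reusing a splatted parameter set like the hypothetical $params hashtable sketched earlier (with its TransformJobName entry removed), the job name can come straight from the pipeline:

  # TransformJobName binds from the pipeline by value.
  'nightly-scoring-job' | New-SMTransformJob @params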

Outputs

This cmdlet returns a String object. The service call response (type Amazon.SageMaker.Model.CreateTransformJobResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.
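As a sketch, assuming the most recent call in the session was New-SMTransformJob, the raw response can be retrieved from that stack:

  # $AWSHistory records the service responses of recent AWS cmdlet calls.
  $response = $AWSHistory.LastServiceResponse
  $response.TransformJobArn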

Supported Version

AWS Tools for PowerShell: 2.x.y.z