[ aws . sagemaker ]

create-training-job

Description

Starts a model training job. After training completes, Amazon SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.

If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a deep learning service other than Amazon SageMaker, provided that you know how to use them for inferences.

In the request body, you provide the following:

  • AlgorithmSpecification - Identifies the training algorithm to use.
  • HyperParameters - Specify these algorithm-specific parameters to influence the quality of the final model. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.
  • InputDataConfig - Describes the training dataset and the Amazon S3 location where it is stored.
  • OutputDataConfig - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results of model training.
  • ResourceConfig - Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.
  • RoleArn - The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete model training.
  • StoppingCondition - Sets a duration for training. Use this parameter to cap model training costs.

For more information about Amazon SageMaker, see How It Works.
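
For illustration only, a minimal request using shorthand syntax might look like the following sketch. Every value shown, including the training image path, role ARN, bucket names, and instance settings, is a placeholder, not a default:

aws sagemaker create-training-job \
    --training-job-name my-training-job \
    --algorithm-specification TrainingImage=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest,TrainingInputMode=File \
    --role-arn arn:aws:iam::123456789012:role/SageMakerExecutionRole \
    --input-data-config 'ChannelName=train,DataSource={S3DataSource={S3DataType=S3Prefix,S3Uri=s3://my-bucket/train/,S3DataDistributionType=FullyReplicated}}' \
    --output-data-config S3OutputPath=s3://my-bucket/training-output/ \
    --resource-config InstanceType=ml.m4.xlarge,InstanceCount=1,VolumeSizeInGB=10 \
    --stopping-condition MaxRuntimeInSeconds=86400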

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

Synopsis

  create-training-job
--training-job-name <value>
[--hyper-parameters <value>]
--algorithm-specification <value>
--role-arn <value>
[--input-data-config <value>]
--output-data-config <value>
--resource-config <value>
[--vpc-config <value>]
--stopping-condition <value>
[--tags <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Options

--training-job-name (string)

The name of the training job. The name must be unique within an AWS Region in an AWS account.

--hyper-parameters (map)

Algorithm-specific parameters that influence the quality of the model. You set hyperparameters before you start the learning process. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.

You can specify a maximum of 100 hyperparameters. Each hyperparameter is a key-value pair. Each key and value is limited to 256 characters, as specified by the Length Constraint.

Shorthand Syntax:

KeyName1=string,KeyName2=string

JSON Syntax:

{"string": "string"
  ...}
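
For example, a hypothetical invocation for an algorithm that accepts k, feature_dim, and mini_batch_size hyperparameters might pass the following. The names and values shown are illustrative only; valid hyperparameters depend entirely on the algorithm you use:

--hyper-parameters k=10,feature_dim=784,mini_batch_size=500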

--algorithm-specification (structure)

The registry path of the Docker image that contains the training algorithm and algorithm-specific metadata, including the input mode. For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about providing your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.

Shorthand Syntax:

TrainingImage=string,TrainingInputMode=string

JSON Syntax:

{
  "TrainingImage": "string",
  "TrainingInputMode": "Pipe"|"File"
}
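
For example, assuming a training image hosted in Amazon ECR. The account ID, Region, and repository below are placeholders; the registry paths for algorithms provided by Amazon SageMaker are listed in the Algorithms documentation:

--algorithm-specification TrainingImage=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest,TrainingInputMode=File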

--role-arn (string)

The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

During model training, Amazon SageMaker needs your permission to read input data from an S3 bucket, download a Docker image that contains training code, write model artifacts to an S3 bucket, write logs to Amazon CloudWatch Logs, and publish metrics to Amazon CloudWatch. You grant permissions for all of these tasks to an IAM role. For more information, see Amazon SageMaker Roles.

Note

To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission.
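
For example, with a placeholder account ID and role name:

--role-arn arn:aws:iam::123456789012:role/SageMakerExecutionRole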

--input-data-config (list)

An array of Channel objects. Each channel is a named input source. InputDataConfig describes the input data and its location.

Algorithms can accept input data from one or more channels. For example, an algorithm might have two channels of input data, training_data and validation_data. The configuration for each channel provides the S3 location where the input data is stored. It also provides information about the stored data: the MIME type, compression method, and whether the data is wrapped in RecordIO format.

Depending on the input mode that the algorithm supports, Amazon SageMaker either copies input data files from an S3 bucket to a local directory in the Docker container or makes them available as input streams.

Shorthand Syntax:

ChannelName=string,DataSource={S3DataSource={S3DataType=string,S3Uri=string,S3DataDistributionType=string}},ContentType=string,CompressionType=string,RecordWrapperType=string,InputMode=string ...

JSON Syntax:

[
  {
    "ChannelName": "string",
    "DataSource": {
      "S3DataSource": {
        "S3DataType": "ManifestFile"|"S3Prefix",
        "S3Uri": "string",
        "S3DataDistributionType": "FullyReplicated"|"ShardedByS3Key"
      }
    },
    "ContentType": "string",
    "CompressionType": "None"|"Gzip",
    "RecordWrapperType": "None"|"RecordIO",
    "InputMode": "Pipe"|"File"
  }
  ...
]
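
For example, a single training channel reading CSV objects under a hypothetical S3 prefix might be configured as follows. The channel name, bucket, and content type are placeholders:

--input-data-config '[
  {
    "ChannelName": "train",
    "DataSource": {
      "S3DataSource": {
        "S3DataType": "S3Prefix",
        "S3Uri": "s3://my-bucket/train/",
        "S3DataDistributionType": "FullyReplicated"
      }
    },
    "ContentType": "text/csv",
    "CompressionType": "None",
    "RecordWrapperType": "None",
    "InputMode": "File"
  }
]'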

--output-data-config (structure)

Specifies the path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.

Shorthand Syntax:

KmsKeyId=string,S3OutputPath=string

JSON Syntax:

{
  "KmsKeyId": "string",
  "S3OutputPath": "string"
}
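
For example, with a placeholder bucket and prefix (KmsKeyId is optional):

--output-data-config S3OutputPath=s3://my-bucket/training-output/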

--resource-config (structure)

The resources, including the ML compute instances and ML storage volumes, to use for model training.

ML storage volumes store model artifacts and incremental states. Training algorithms might also use ML storage volumes for scratch space. If you want Amazon SageMaker to use the ML storage volume to store the training data, choose File as the TrainingInputMode in the algorithm specification. For distributed training algorithms, specify an instance count greater than 1.

Shorthand Syntax:

InstanceType=string,InstanceCount=integer,VolumeSizeInGB=integer,VolumeKmsKeyId=string

JSON Syntax:

{
  "InstanceType": "ml.m4.xlarge"|"ml.m4.2xlarge"|"ml.m4.4xlarge"|"ml.m4.10xlarge"|"ml.m4.16xlarge"|"ml.m5.large"|"ml.m5.xlarge"|"ml.m5.2xlarge"|"ml.m5.4xlarge"|"ml.m5.12xlarge"|"ml.m5.24xlarge"|"ml.c4.xlarge"|"ml.c4.2xlarge"|"ml.c4.4xlarge"|"ml.c4.8xlarge"|"ml.p2.xlarge"|"ml.p2.8xlarge"|"ml.p2.16xlarge"|"ml.p3.2xlarge"|"ml.p3.8xlarge"|"ml.p3.16xlarge"|"ml.c5.xlarge"|"ml.c5.2xlarge"|"ml.c5.4xlarge"|"ml.c5.9xlarge"|"ml.c5.18xlarge",
  "InstanceCount": integer,
  "VolumeSizeInGB": integer,
  "VolumeKmsKeyId": "string"
}
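
For example, a hypothetical two-instance distributed training configuration; the instance type, count, and volume size are placeholders you would size for your own workload:

--resource-config InstanceType=ml.p3.2xlarge,InstanceCount=2,VolumeSizeInGB=50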

--vpc-config (structure)

A VpcConfig object that specifies the VPC that you want your training job to connect to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.

Shorthand Syntax:

SecurityGroupIds=string,string,Subnets=string,string

JSON Syntax:

{
  "SecurityGroupIds": ["string", ...],
  "Subnets": ["string", ...]
}
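
For example, with placeholder security group and subnet IDs from your own VPC:

--vpc-config SecurityGroupIds=sg-0123456789abcdef0,Subnets=subnet-0123456789abcdef0,subnet-0abcdef1234567890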

--stopping-condition (structure)

Sets a duration for training. Use this parameter to cap model training costs. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts.

When Amazon SageMaker terminates a job because the stopping condition has been met, training algorithms provided by Amazon SageMaker save the intermediate results of the job. This intermediate data is a valid model artifact. You can use it to create a model using the CreateModel API.

Shorthand Syntax:

MaxRuntimeInSeconds=integer

JSON Syntax:

{
  "MaxRuntimeInSeconds": integer
}
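
For example, to cap a training job at 24 hours:

--stopping-condition MaxRuntimeInSeconds=86400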

--tags (list)

An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

Shorthand Syntax:

Key=string,Value=string ...

JSON Syntax:

[
  {
    "Key": "string",
    "Value": "string"
  }
  ...
]
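
For example, with placeholder tag keys and values:

--tags Key=project,Value=my-experiment Key=owner,Value=data-science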

--cli-input-json (string) Performs service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
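
For example, you might generate an input skeleton, edit it to describe your training job, and then pass the file back with the file:// prefix (training-job.json is a placeholder file name):

aws sagemaker create-training-job --generate-cli-skeleton > training-job.json
aws sagemaker create-training-job --cli-input-json file://training-job.json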

See 'aws help' for descriptions of global parameters.

Output

TrainingJobArn -> (string)

The Amazon Resource Name (ARN) of the training job.