You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::SageMaker::Types::TransformInput
- Inherits: Struct
  - Object
  - Struct
  - Aws::SageMaker::Types::TransformInput
Overview
When passing TransformInput as input to an Aws::Client method, you can use a vanilla Hash:
{
data_source: { # required
s3_data_source: { # required
s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
s3_uri: "S3Uri", # required
},
},
content_type: "ContentType",
compression_type: "None", # accepts None, Gzip
split_type: "None", # accepts None, Line, RecordIO, TFRecord
}
Describes the input source of a transform job and the way the transform job consumes it.
Instance Attribute Summary
- #compression_type ⇒ String
  If your transform data is compressed, specify the compression type.
- #content_type ⇒ String
  The multipurpose internet mail extension (MIME) type of the data.
- #data_source ⇒ Types::TransformDataSource
  Describes the location of the channel data, which is the S3 location of the input data that the model can consume.
- #split_type ⇒ String
  The method to use to split the transform job's data files into smaller batches.
Instance Attribute Details
#compression_type ⇒ String
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
Possible values:
- None
- Gzip
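For instance, if the input objects in S3 are gzip-compressed CSV files, the request hash might look like the following sketch. The bucket, prefix, and MIME type are illustrative placeholders, not values from this reference:

```ruby
# Hypothetical TransformInput for gzipped CSV objects under an S3 prefix.
# The bucket and prefix are placeholders, not real resources.
transform_input = {
  data_source: {
    s3_data_source: {
      s3_data_type: "S3Prefix",
      s3_uri: "s3://example-bucket/transform-input/",
    },
  },
  content_type: "text/csv",     # MIME type sent with each request
  compression_type: "Gzip",     # SageMaker decompresses each object before inference
}
```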
#content_type ⇒ String
The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
#data_source ⇒ Types::TransformDataSource
Describes the location of the channel data, which is the S3 location of the input data that the model can consume.
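The data source is a nested TransformDataSource structure. A sketch of the two common S3 addressing styles accepted by s3_data_type, with placeholder bucket and object names:

```ruby
# Illustrative only: bucket, prefix, and manifest names are placeholders.

# Address input by key prefix: every object under the prefix is used.
prefix_source = {
  s3_data_source: {
    s3_data_type: "S3Prefix",
    s3_uri: "s3://example-bucket/batch-input/",
  },
}

# Address input by manifest: the manifest file lists the objects to use.
manifest_source = {
  s3_data_source: {
    s3_data_type: "ManifestFile",
    s3_uri: "s3://example-bucket/manifests/input.manifest",
  },
}
```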
#split_type ⇒ String
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
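Putting the pieces together, the following sketch shows how split_type interacts with the job-level batch_strategy and max_payload_in_mb parameters in a create_transform_job request. All resource names are placeholders, and the client call is left commented out so the request hash can be inspected without AWS credentials:

```ruby
# Placeholder names throughout; this illustrates parameter interplay,
# not a runnable job against real AWS resources.
request = {
  transform_job_name: "example-batch-job",
  model_name: "example-model",
  batch_strategy: "MultiRecord",   # pack as many records per request as fit...
  max_payload_in_mb: 6,            # ...up to this payload limit
  transform_input: {
    data_source: {
      s3_data_source: {
        s3_data_type: "S3Prefix",
        s3_uri: "s3://example-bucket/input/",
      },
    },
    content_type: "text/csv",
    compression_type: "None",
    split_type: "Line",            # one record per newline-delimited line
  },
  transform_output: {
    s3_output_path: "s3://example-bucket/output/",
  },
  transform_resources: {
    instance_type: "ml.m5.large",
    instance_count: 1,
  },
}

# With the SDK installed and credentials configured:
# Aws::SageMaker::Client.new.create_transform_job(request)
```

With split_type left at None, each request would instead carry a whole input object, and max_payload_in_mb would have to accommodate the largest object.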