@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class Channel extends Object implements Serializable, Cloneable, StructuredPojo
A channel is a named input source that training algorithms can consume.
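As a sketch of typical usage, the snippet below builds a "train" channel with the fluent with* methods. The DataSource and S3DataSource classes and their setters are assumed to come from the same com.amazonaws.services.sagemaker.model package; the bucket name, prefix, and S3DataType/S3DataDistributionType values are illustrative placeholders, not requirements of this class.

```java
import com.amazonaws.services.sagemaker.model.Channel;
import com.amazonaws.services.sagemaker.model.DataSource;
import com.amazonaws.services.sagemaker.model.S3DataSource;

public class ChannelExample {
    public static void main(String[] args) {
        // Build a "train" channel that reads CSV objects under an S3 prefix.
        // The bucket and prefix below are placeholders.
        Channel train = new Channel()
                .withChannelName("train")
                .withContentType("text/csv")
                .withCompressionType("None")
                .withDataSource(new DataSource()
                        .withS3DataSource(new S3DataSource()
                                .withS3DataType("S3Prefix")
                                .withS3Uri("s3://example-bucket/train/")
                                .withS3DataDistributionType("FullyReplicated")));
        System.out.println(train.getChannelName());
    }
}
```

The with* methods return the Channel itself, so calls can be chained as shown; the equivalent set* methods mutate the object without returning it.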
| Constructor and Description |
|---|
| Channel() |
| Modifier and Type | Method and Description |
|---|---|
| Channel | clone() |
| boolean | equals(Object obj) |
| String | getChannelName() - The name of the channel. |
| String | getCompressionType() - If training data is compressed, the compression type. |
| String | getContentType() - The MIME type of the data. |
| DataSource | getDataSource() - The location of the channel data. |
| String | getInputMode() - (Optional) The input mode to use for the data channel in a training job. |
| String | getRecordWrapperType() - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
| ShuffleConfig | getShuffleConfig() - A configuration for a shuffle option for input data in a channel. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) - Marshalls this structured data using the given ProtocolMarshaller. |
| void | setChannelName(String channelName) - The name of the channel. |
| void | setCompressionType(String compressionType) - If training data is compressed, the compression type. |
| void | setContentType(String contentType) - The MIME type of the data. |
| void | setDataSource(DataSource dataSource) - The location of the channel data. |
| void | setInputMode(String inputMode) - (Optional) The input mode to use for the data channel in a training job. |
| void | setRecordWrapperType(String recordWrapperType) - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
| void | setShuffleConfig(ShuffleConfig shuffleConfig) - A configuration for a shuffle option for input data in a channel. |
| String | toString() - Returns a string representation of this object. |
| Channel | withChannelName(String channelName) - The name of the channel. |
| Channel | withCompressionType(CompressionType compressionType) - If training data is compressed, the compression type. |
| Channel | withCompressionType(String compressionType) - If training data is compressed, the compression type. |
| Channel | withContentType(String contentType) - The MIME type of the data. |
| Channel | withDataSource(DataSource dataSource) - The location of the channel data. |
| Channel | withInputMode(String inputMode) - (Optional) The input mode to use for the data channel in a training job. |
| Channel | withInputMode(TrainingInputMode inputMode) - (Optional) The input mode to use for the data channel in a training job. |
| Channel | withRecordWrapperType(RecordWrapper recordWrapperType) - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
| Channel | withRecordWrapperType(String recordWrapperType) - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. |
| Channel | withShuffleConfig(ShuffleConfig shuffleConfig) - A configuration for a shuffle option for input data in a channel. |
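Several mutators above come in paired String and enum overloads (for example withInputMode(String) and withInputMode(TrainingInputMode)). A minimal sketch of the enum overload, assuming TrainingInputMode exposes a Pipe constant as suggested by the input-mode descriptions below:

```java
import com.amazonaws.services.sagemaker.model.Channel;
import com.amazonaws.services.sagemaker.model.TrainingInputMode;

public class InputModeExample {
    public static void main(String[] args) {
        // Stream this channel's data directly from S3 (Pipe input mode),
        // overriding the job-level TrainingInputMode for this channel only.
        Channel channel = new Channel()
                .withChannelName("train")
                .withInputMode(TrainingInputMode.Pipe);
        System.out.println(channel.getInputMode());
    }
}
```

Either overload stores the mode as a String, so getInputMode() returns the same value regardless of which overload was used to set it.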
public void setChannelName(String channelName)
The name of the channel.
Parameters: channelName - The name of the channel.

public String getChannelName()
The name of the channel.
Returns: The name of the channel.

public Channel withChannelName(String channelName)
The name of the channel.
Parameters: channelName - The name of the channel.

public void setDataSource(DataSource dataSource)
The location of the channel data.
Parameters: dataSource - The location of the channel data.

public DataSource getDataSource()
The location of the channel data.
Returns: The location of the channel data.

public Channel withDataSource(DataSource dataSource)
The location of the channel data.
Parameters: dataSource - The location of the channel data.

public void setContentType(String contentType)
The MIME type of the data.
Parameters: contentType - The MIME type of the data.

public String getContentType()
The MIME type of the data.
Returns: The MIME type of the data.

public Channel withContentType(String contentType)
The MIME type of the data.
Parameters: contentType - The MIME type of the data.

public void setCompressionType(String compressionType)
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Parameters: compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also: CompressionType

public String getCompressionType()
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Returns: If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also: CompressionType

public Channel withCompressionType(String compressionType)
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Parameters: compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also: CompressionType

public Channel withCompressionType(CompressionType compressionType)
If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
Parameters: compressionType - If training data is compressed, the compression type. The default value is None. CompressionType is used only in Pipe input mode. In File mode, leave this field unset or set it to None.
See Also: CompressionType

public void setRecordWrapperType(String recordWrapperType)
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
Parameters: recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also: RecordWrapper

public String getRecordWrapperType()
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
Returns: Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also: RecordWrapper

public Channel withRecordWrapperType(String recordWrapperType)
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
Parameters: recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also: RecordWrapper

public Channel withRecordWrapperType(RecordWrapper recordWrapperType)
Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
Parameters: recordWrapperType - Specify RecordIO as the value when input data is in raw format but the training algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3 object in a RecordIO record. If the input data is already in RecordIO format, you don't need to set this attribute. For more information, see Create a Dataset Using RecordIO. In File mode, leave this field unset or set it to None.
See Also: RecordWrapper

public void setInputMode(String inputMode)
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
Parameters: inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also: TrainingInputMode

public String getInputMode()
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
Returns: (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also: TrainingInputMode

public Channel withInputMode(String inputMode)
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
Parameters: inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also: TrainingInputMode

public Channel withInputMode(TrainingInputMode inputMode)
(Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
Parameters: inputMode - (Optional) The input mode to use for the data channel in a training job. If you don't set a value for InputMode, SageMaker uses the value set for TrainingInputMode. Use this parameter to override the TrainingInputMode setting in an AlgorithmSpecification request when you have a channel that needs a different input mode from the training job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, use File input mode. To stream data directly from Amazon S3 to the container, choose Pipe input mode. To use a model for incremental training, choose File input mode.
See Also: TrainingInputMode

public void setShuffleConfig(ShuffleConfig shuffleConfig)
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.

For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Parameters: shuffleConfig - A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.

public ShuffleConfig getShuffleConfig()
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.

For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Returns: A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.

public Channel withShuffleConfig(ShuffleConfig shuffleConfig)
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.

For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, which helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Parameters: shuffleConfig - A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, this shuffles the results of the S3 key prefix matches. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
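A minimal sketch of attaching a shuffle configuration, assuming ShuffleConfig exposes the Seed value described above via a withSeed setter (that method name is an assumption, not stated on this page):

```java
import com.amazonaws.services.sagemaker.model.Channel;
import com.amazonaws.services.sagemaker.model.ShuffleConfig;

public class ShuffleExample {
    public static void main(String[] args) {
        // Shuffle the channel's input order at the start of each epoch,
        // seeded so the shuffling order is reproducible across runs.
        Channel channel = new Channel()
                .withChannelName("train")
                .withShuffleConfig(new ShuffleConfig().withSeed(1234L));
        System.out.println(channel.getShuffleConfig().getSeed());
    }
}
```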
public String toString()
Returns a string representation of this object.
Overrides: toString in class Object
See Also: Object.toString()

public void marshall(ProtocolMarshaller protocolMarshaller)
Marshalls this structured data using the given ProtocolMarshaller.
Specified by: marshall in interface StructuredPojo
Parameters: protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.