Object Detection - TensorFlow Hyperparameters - Amazon SageMaker

Hyperparameters are parameters that are set before a machine learning model begins learning. The following hyperparameters are supported by the Amazon SageMaker built-in Object Detection - TensorFlow algorithm. See Tune an Object Detection - TensorFlow model for information on hyperparameter tuning.
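As a practical illustration (not part of this reference), the built-in algorithms receive hyperparameters as a dictionary of string values when a training job is configured. The sketch below assumes the SageMaker Python SDK; the Estimator arguments shown are placeholders.

```python
# A minimal sketch (assumed names, not a complete training script).
# SageMaker passes hyperparameters to the training container as strings,
# so every value in the dictionary is quoted.
hyperparameters = {
    "batch_size": "3",
    "optimizer": "adam",
    "learning_rate": "0.001",
    "early_stopping": "True",
    "early_stopping_patience": "5",
}

# With the SageMaker Python SDK, this dictionary is passed to an Estimator,
# roughly as follows (commented out because it requires AWS credentials):
#
#   estimator = sagemaker.estimator.Estimator(
#       image_uri=training_image_uri,   # placeholder: algorithm image URI
#       role=aws_role,                  # placeholder: execution role
#       instance_count=1,
#       instance_type="ml.p3.2xlarge",
#       hyperparameters=hyperparameters,
#   )

# Inside the container the string values are parsed back to their types:
batch_size = int(hyperparameters["batch_size"])
learning_rate = float(hyperparameters["learning_rate"])
```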

batch_size

The batch size for training.

Valid values: positive integer.

Default value: 3.

beta_1

The beta1 for the "adam" optimizer. Represents the exponential decay rate for the first moment estimates. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 0.9.

beta_2

The beta2 for the "adam" optimizer. Represents the exponential decay rate for the second moment estimates. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 0.999.
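To make the roles of beta_1 and beta_2 concrete, the following pure-Python sketch performs one Adam-style update on a single scalar parameter using the standard Adam formulation (the function and variable names are illustrative, not the algorithm's internals; epsilon is described later in this table).

```python
def adam_step(param, grad, m, v, t, learning_rate=0.001,
              beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """One Adam update for a scalar parameter (standard formulation).

    m: exponential moving average of gradients (first moment),
       decayed at rate beta_1.
    v: exponential moving average of squared gradients (second moment),
       decayed at rate beta_2.
    t: 1-based step count, used for bias correction.
    """
    m = beta_1 * m + (1.0 - beta_1) * grad
    v = beta_2 * v + (1.0 - beta_2) * grad * grad
    m_hat = m / (1.0 - beta_1 ** t)   # bias-corrected first moment
    v_hat = v / (1.0 - beta_2 ** t)   # bias-corrected second moment
    param -= learning_rate * m_hat / (v_hat ** 0.5 + epsilon)
    return param, m, v

# First step from param = 1.0 with gradient 0.5:
p, m, v = adam_step(1.0, 0.5, m=0.0, v=0.0, t=1)
```

A beta closer to 1.0 means the moving average forgets past gradients more slowly, so the update direction is smoother but adapts less quickly.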

early_stopping

Set to "True" to use early stopping logic during training. If "False", early stopping is not used.

Valid values: string, either: ("True" or "False").

Default value: "False".

early_stopping_min_delta

The minimum change needed to qualify as an improvement. An absolute change less than the value of early_stopping_min_delta does not qualify as an improvement. Used only when early_stopping is set to "True".

Valid values: float, range: [0.0, 1.0].

Default value: 0.0.

early_stopping_patience

The number of epochs to continue training with no improvement. Used only when early_stopping is set to "True".

Valid values: positive integer.

Default value: 5.
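The interaction of early_stopping_min_delta and early_stopping_patience can be sketched in plain Python. This mirrors common Keras-style early-stopping semantics and is an illustration, not the algorithm's exact internal logic.

```python
def stopping_epoch(losses, early_stopping_min_delta=0.0,
                   early_stopping_patience=5):
    """Return the epoch index at which early stopping would trigger,
    or the last epoch index if it never does (loss: lower is better).
    """
    best = losses[0]
    wait = 0  # epochs since the last qualifying improvement
    for epoch, loss in enumerate(losses[1:], start=1):
        # An improvement must beat the best loss by more than min_delta.
        if best - loss > early_stopping_min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= early_stopping_patience:
                return epoch
    return len(losses) - 1

# Improvements of 0.01 per epoch do not clear a min_delta of 0.02,
# so training stops after two non-improving epochs (patience = 2):
epoch = stopping_epoch([1.0, 0.9, 0.85, 0.84, 0.84, 0.84],
                       early_stopping_min_delta=0.02,
                       early_stopping_patience=2)
```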

epochs

The number of training epochs.

Valid values: positive integer.

Default value: 5 for smaller models, 1 for larger models.

epsilon

The epsilon for "adam", "rmsprop", "adadelta", and "adagrad" optimizers. Usually set to a small value to avoid division by 0. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 1e-7.

initial_accumulator_value

The starting value for the accumulators, or the per-parameter momentum values, for the "adagrad" optimizer. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 0.1.
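The accumulator's role in Adagrad can be illustrated with a scalar update using the standard Adagrad rule (illustrative names, not the algorithm's internals):

```python
def adagrad_step(param, grad, accumulator,
                 learning_rate=0.001, epsilon=1e-7):
    """One Adagrad update for a scalar parameter.

    The accumulator starts at initial_accumulator_value and grows
    monotonically with the sum of squared gradients, so the effective
    step size for each parameter shrinks over time.
    """
    accumulator += grad * grad
    param -= learning_rate * grad / (accumulator ** 0.5 + epsilon)
    return param, accumulator

# First step with the default initial_accumulator_value of 0.1:
p, acc = adagrad_step(1.0, 0.5, accumulator=0.1)
```

A larger starting value damps the earliest updates, since the denominator begins further from zero.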

learning_rate

The optimizer learning rate.

Valid values: float, range: [0.0, 1.0].

Default value: 0.001.

momentum

The momentum for the "sgd" and "nesterov" optimizers. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 0.9.
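A scalar sketch of momentum SGD, in the common formulation, showing how the plain and Nesterov variants differ (an illustration of the hyperparameter's effect, not the algorithm's exact implementation):

```python
def momentum_step(param, grad, velocity, learning_rate=0.001,
                  momentum=0.9, nesterov=False):
    """One momentum-SGD update for a scalar parameter.

    velocity accumulates an exponentially weighted sum of past
    gradients; momentum controls how much of it is retained each step.
    """
    velocity = momentum * velocity - learning_rate * grad
    if nesterov:
        # Nesterov "looks ahead" along the updated velocity.
        param += momentum * velocity - learning_rate * grad
    else:
        param += velocity
    return param, velocity

# One step of each variant from param = 1.0 with gradient 0.5:
p_sgd, v_sgd = momentum_step(1.0, 0.5, velocity=0.0)
p_nag, v_nag = momentum_step(1.0, 0.5, velocity=0.0, nesterov=True)
```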

optimizer

The optimizer type. For more information, see Optimizers in the TensorFlow documentation.

Valid values: string, any of the following: ("adam", "sgd", "nesterov", "rmsprop", "adagrad" , "adadelta").

Default value: "adam".

reinitialize_top_layer

If set to "Auto", the top classification layer parameters are re-initialized during fine-tuning. For incremental training, top classification layer parameters are not re-initialized unless set to "True".

Valid values: string, any of the following: ("Auto", "True" or "False").

Default value: "Auto".

rho

The discounting factor for the gradient of the "adadelta" and "rmsprop" optimizers. Ignored for other optimizers.

Valid values: float, range: [0.0, 1.0].

Default value: 0.95.
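rho plays the same decay-rate role for the squared-gradient average that beta_2 plays in Adam. A scalar RMSProp sketch in the standard formulation (illustrative names, not the algorithm's internals):

```python
def rmsprop_step(param, grad, avg_sq_grad, learning_rate=0.001,
                 rho=0.95, epsilon=1e-7):
    """One RMSProp update for a scalar parameter.

    avg_sq_grad is an exponential moving average of squared gradients;
    rho is its discounting (decay) factor.
    """
    avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * grad * grad
    param -= learning_rate * grad / (avg_sq_grad ** 0.5 + epsilon)
    return param, avg_sq_grad

# First step from param = 1.0 with gradient 0.5:
p, s = rmsprop_step(1.0, 0.5, avg_sq_grad=0.0)
```

A rho closer to 1.0 averages over a longer window of past squared gradients, making the per-parameter step sizes change more slowly.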

train_only_on_top_layer

If "True", only the top classification layer parameters are fine-tuned. If "False", all model parameters are fine-tuned.

Valid values: string, either: ("True" or "False").

Default value: "False".