Linear learner hyperparameters
The following table contains the hyperparameters for the linear learner algorithm. These are parameters that are set by users to facilitate the estimation of model parameters from data. The required hyperparameters that must be set are listed first, in alphabetical order. The optional hyperparameters that can be set are listed next, also in alphabetical order. When a hyperparameter is set to auto, Amazon SageMaker automatically calculates and sets the value of that hyperparameter.
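As an illustration of how the two required hyperparameters in the table below interact, the rules can be sketched as a small plain-Python validator. The helper `validate_required` is hypothetical (it is not part of any SageMaker SDK); it only encodes the constraints stated in the table: predictor_type is always required, and num_classes (an integer from 3 to 1,000,000) is required only for multiclass classification.

```python
# Hypothetical validator sketching the rules from the hyperparameter table:
# predictor_type is always required, and num_classes is required
# (3 to 1,000,000) only when predictor_type is multiclass_classifier.

VALID_PREDICTOR_TYPES = {"binary_classifier", "multiclass_classifier", "regressor"}

def validate_required(hyperparameters):
    """Return a list of problems with the required hyperparameters."""
    problems = []
    ptype = hyperparameters.get("predictor_type")
    if ptype not in VALID_PREDICTOR_TYPES:
        problems.append(
            "predictor_type must be one of %s" % sorted(VALID_PREDICTOR_TYPES)
        )
    if ptype == "multiclass_classifier":
        n = hyperparameters.get("num_classes")
        if not isinstance(n, int) or not 3 <= n <= 1_000_000:
            problems.append("num_classes must be an integer from 3 to 1,000,000")
    return problems

# A multiclass training job that forgot num_classes fails validation:
print(validate_required({"predictor_type": "multiclass_classifier"}))
```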
Parameter Name  Description 

num_classes 
The number of classes for the response variable. The algorithm assumes that classes are labeled 0, ..., num_classes - 1.
Required when predictor_type is multiclass_classifier; otherwise, the algorithm ignores this value.
Valid values: Integers from 3 to 1,000,000 
predictor_type 
Specifies the type of target variable as a binary classification, multiclass classification, or regression.
Required
Valid values: binary_classifier, multiclass_classifier, or regressor 
accuracy_top_k 
When computing the top-k accuracy metric for multiclass classification, the value of k. If the model assigns one of the top k scores to the true label, an example is scored as correct.
Optional
Valid values: Positive integers
Default value: 3 
balance_multiclass_weights 
Specifies whether to use class weights, which give each class equal importance in the loss function. Used only when the predictor_type is multiclass_classifier.
Optional
Valid values: true, false
Default value: false 
beta_1 
The exponential decay rate for first-moment estimates. Applies only when the optimizer value is adam.
Optional
Valid values: auto or a floating-point value between 0 and 1.0
Default value: auto 
beta_2 
The exponential decay rate for second-moment estimates. Applies only when the optimizer value is adam.
Optional
Valid values: auto or a floating-point value between 0 and 1.0
Default value: auto 
bias_lr_mult 
Allows a different learning rate for the bias term. The actual learning rate for the bias is learning_rate multiplied by bias_lr_mult.
Optional
Valid values: auto or a positive floating-point value
Default value: auto 
bias_wd_mult 
Allows different regularization for the bias term. The actual L2 regularization weight for the bias is wd multiplied by bias_wd_mult.
Optional
Valid values: auto or a non-negative floating-point value
Default value: auto 
binary_classifier_model_selection_criteria 
When predictor_type is set to binary_classifier, the model evaluation criteria for the validation dataset (or for the training dataset if a validation dataset is not provided).
Optional
Valid values: accuracy, f_beta, precision_at_target_recall, recall_at_target_precision, cross_entropy_loss, loss_function
Default value: accuracy 
early_stopping_patience 
If no improvement is made in the relevant metric, the number of epochs to wait before ending training. If you have provided a value for binary_classifier_model_selection_criteria, the metric is that value. Otherwise, the metric is the same as the value specified for the loss hyperparameter. The metric is evaluated on the validation data. If you haven't provided validation data, the metric is always the same as the value specified for the loss hyperparameter and is evaluated on the training data.
Optional
Valid values: Positive integer
Default value: 3 
early_stopping_tolerance 
The relative tolerance to measure an improvement in loss. If the ratio of the improvement in loss divided by the previous best loss is smaller than this value, early stopping considers the improvement to be zero.
Optional
Valid values: Positive floating-point value
Default value: 0.001 
epochs 
The maximum number of passes over the training data.
Optional
Valid values: Positive integer
Default value: 15 
f_beta 
The value of beta to use when calculating F-beta score metrics for binary or multiclass classification. Also used if the value specified for binary_classifier_model_selection_criteria is f_beta.
Optional
Valid values: Positive floating-point values
Default value: 1.0 
feature_dim 
The number of features in the input data.
Optional
Valid values: auto or positive integer
Default value: auto 
huber_delta 
The parameter for Huber loss. During training and metric evaluation, compute L2 loss for errors smaller than delta and L1 loss for errors larger than delta.
Optional
Valid values: Positive floating-point value
Default value: 1.0 
init_bias 
Initial weight for the bias term.
Optional
Valid values: Floating-point value
Default value: 0 
init_method 
Sets the initial distribution function used for model weights. Functions include:
uniform: weights are uniformly distributed between -scale and +scale
normal: weights are normally distributed, with mean 0 and standard deviation sigma
Optional
Valid values: uniform, normal
Default value: uniform 
init_scale 
Scales an initial uniform distribution for model weights. Applies only when the init_method hyperparameter is set to uniform.
Optional
Valid values: Positive floating-point value
Default value: 0.07 
init_sigma 
The initial standard deviation for the normal distribution. Applies only when the init_method hyperparameter is set to normal.
Optional
Valid values: Positive floating-point value
Default value: 0.01 
l1 
The L1 regularization parameter. If you don't want to use L1 regularization, set the value to 0.
Optional
Valid values: auto or non-negative floating-point value
Default value: auto 
learning_rate 
The step size used by the optimizer for parameter updates.
Optional
Valid values: auto or positive floating-point value
Default value: auto, whose value depends on the optimizer 
loss 
Specifies the loss function. The available loss functions and their default values depend on the value of predictor_type:
For regressor: auto, squared_loss, absolute_loss, eps_insensitive_squared_loss, eps_insensitive_absolute_loss, quantile_loss, or huber_loss. The auto default is squared_loss.
For binary_classifier: auto, logistic, or hinge_loss. The auto default is logistic.
For multiclass_classifier: auto or softmax_loss. The auto default is softmax_loss.
Optional
Default value: auto 
loss_insensitivity 
The parameter for the epsilon-insensitive loss type. During training and metric evaluation, any error smaller than this value is considered to be zero.
Optional
Valid values: Positive floating-point value
Default value: 0.01 
lr_scheduler_factor 
For every lr_scheduler_step, the learning rate decreases by this quantity. Applies only when the use_lr_scheduler hyperparameter is set to true.
Optional
Valid values: auto or positive floating-point value between 0 and 1
Default value: auto 
lr_scheduler_minimum_lr 
The learning rate never decreases to a value lower than the value set for lr_scheduler_minimum_lr. Applies only when the use_lr_scheduler hyperparameter is set to true.
Optional
Valid values: auto or positive floating-point value
Default value: auto 
lr_scheduler_step 
The number of steps between decreases of the learning rate. Applies only when the use_lr_scheduler hyperparameter is set to true.
Optional
Valid values: auto or positive integer
Default value: auto 
margin 
The margin for the hinge_loss function.
Optional
Valid values: Positive floating-point value
Default value: 1.0 
mini_batch_size 
The number of observations per mini-batch for the data iterator.
Optional
Valid values: Positive integer
Default value: 1000 
momentum 
The momentum of the sgd optimizer.
Optional
Valid values: auto or a floating-point value between 0 and 1.0
Default value: auto 
normalize_data 
Normalizes the feature data before training. Data normalization shifts the data for each feature to have a mean of zero and scales it to have unit standard deviation.
Optional
Valid values: true, false
Default value: true 
normalize_label 
Normalizes the label. Label normalization shifts the label to have a mean of zero and scales it to have unit standard deviation. The auto setting normalizes the label for regression problems but does not normalize it for classification problems.
Optional
Valid values: auto, true, false
Default value: auto 
num_calibration_samples 
The number of observations from the validation dataset to use for model calibration (when finding the best threshold).
Optional
Valid values: auto or positive integer
Default value: auto 
num_models 
The number of models to train in parallel. For the default, auto, the algorithm decides the number of parallel models to train. One model is trained according to the given training parameters (regularization, optimizer, loss), and the rest are trained with close parameters.
Optional
Valid values: auto or positive integer
Default value: auto 
num_point_for_scaler 
The number of data points to use for calculating normalization or unbiasing of terms.
Optional
Valid values: Positive integer
Default value: 10,000 
optimizer 
The optimization algorithm to use.
Optional
Valid values: auto, sgd, adam, or rmsprop. The auto default setting uses adam.
Default value: auto 
positive_example_weight_mult 
The weight assigned to positive examples when training a binary classifier. The weight of negative examples is fixed at 1. If you want the algorithm to choose a weight so that errors in classifying negative vs. positive examples have equal impact on training loss, specify balanced.
Optional
Valid values: balanced or a positive floating-point value
Default value: 1.0 
quantile 
The quantile for quantile loss. For quantile q, the model attempts to produce predictions so that the value of the true label is greater than the prediction with probability q.
Optional
Valid values: Floating-point value between 0 and 1
Default value: 0.5 
target_precision 
The target precision. If binary_classifier_model_selection_criteria is recall_at_target_precision, then precision is held at this value while recall is maximized.
Optional
Valid values: Floating-point value between 0 and 1.0
Default value: 0.8 
target_recall 
The target recall. If binary_classifier_model_selection_criteria is precision_at_target_recall, then recall is held at this value while precision is maximized.
Optional
Valid values: Floating-point value between 0 and 1.0
Default value: 0.8 
unbias_data 
Unbiases the features before training so that the mean is 0. By default, data is unbiased when the use_bias hyperparameter is set to true.
Optional
Valid values: auto, true, false
Default value: auto 
unbias_label 
Unbiases labels before training so that the mean is 0. Applies to regression only if the use_bias hyperparameter is set to true.
Optional
Valid values: auto, true, false
Default value: auto 
use_bias 
Specifies whether the model should include a bias term, which is the intercept term in the linear equation.
Optional
Valid values: true, false
Default value: true 
use_lr_scheduler 
Whether to use a scheduler for the learning rate. If you want to use a scheduler, specify true.
Optional
Valid values: true, false
Default value: true 
wd 
The weight decay parameter, also known as the L2 regularization parameter. If you don't want to use L2 regularization, set the value to 0.
Optional
Valid values: auto or non-negative floating-point value
Default value: auto 
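The interaction between early_stopping_patience and early_stopping_tolerance described in the table can be sketched in plain Python. This is an illustrative reimplementation of the rule as stated (not the algorithm's actual code): an epoch counts as an improvement only if the relative drop in loss exceeds the tolerance, and training stops after the given number of epochs without improvement.

```python
def should_stop(loss_history, patience=3, tolerance=0.001):
    """Sketch of the early-stopping rule: an epoch counts as an
    improvement only if (best - loss) / best exceeds `tolerance`;
    training stops after `patience` epochs without improvement.
    Defaults mirror the documented default values."""
    best = float("inf")
    epochs_without_improvement = 0
    for loss in loss_history:
        if best == float("inf") or (best - loss) / best > tolerance:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return True
    return False

# Tiny relative improvements (below the 0.001 tolerance) are treated
# as no improvement, so this run triggers early stopping:
print(should_stop([1.0, 0.9999, 0.9998, 0.9997]))  # True
```

A run whose loss keeps improving by more than the tolerance (for example, halving every epoch) never triggers the rule, because the patience counter resets on each real improvement.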