Amazon Personalize
Developer Guide


Hyperparameters and HPO

Hyperparameters are used to optimize the trained model and are set before training begins. This contrasts with model parameters whose values are determined during the training process.

Hyperparameters are specified using the algorithmHyperParameters key, which is part of the SolutionConfig object passed to the CreateSolution API.
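As a minimal sketch of where this key sits in a request body (the solution name and dataset group ARN below are placeholders, not values from this guide; hyperparameter values are passed as strings):

```python
# Sketch: position of algorithmHyperParameters inside solutionConfig
# in a CreateSolution request. Values are strings.
solution_config = {
    "algorithmHyperParameters": {
        "hidden_dimension": "55"  # fixed value; not tuned unless HPO covers it
    }
}

create_solution_params = {
    "name": "my-solution",                          # placeholder name
    "datasetGroupArn": "arn:aws:personalize:<region>:<account>:dataset-group/my-group",  # placeholder
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn",
    "solutionConfig": solution_config,
}
```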

Different recipes use different hyperparameters. For the available hyperparameters, see the individual recipes in Using Predefined Recipes.

Hyperparameter optimization (HPO), or tuning, is the task of choosing optimal hyperparameters for a specific learning objective. The optimal hyperparameters are determined by running many training jobs using different values from the specified ranges of possibilities. By default, Amazon Personalize does not perform HPO. To use HPO, set performHPO to true, and include the hpoConfig object.

Hyperparameters can be categorical, continuous, or integer-valued. The hpoConfig object has keys that correspond to each of these types, where you specify the hyperparameters and their ranges. Note that not all hyperparameters can be tuned (see the recipe tables).
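As a sketch, the three hyperparameter types map to three keys inside algorithmHyperParameterRanges. The recency_mask and bptt entries come from the HRNN recipe example in this guide; the continuous entry (learning_rate) is a hypothetical name added here for illustration:

```python
# Sketch of an hpoConfig showing all three range types:
# categorical, continuous, and integer-valued.
hpo_config = {
    "algorithmHyperParameterRanges": {
        "categoricalHyperParameterRanges": [
            # categorical: an explicit list of allowed values (as strings)
            {"name": "recency_mask", "values": ["true", "false"]}
        ],
        "continuousHyperParameterRanges": [
            # continuous: a real-valued range (hypothetical hyperparameter)
            {"name": "learning_rate", "minValue": 0.001, "maxValue": 0.1}
        ],
        "integerHyperParameterRanges": [
            # integer-valued: an integer range
            {"name": "bptt", "minValue": 20, "maxValue": 40}
        ],
    }
}
```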

The following is a partial example of a CreateSolution request using the HRNN recipe.

{
    "performAutoML": false,
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn",
    "performHPO": true,
    "solutionConfig": {
        "algorithmHyperParameters": {
            "hidden_dimension": "55"
        },
        "hpoConfig": {
            "algorithmHyperParameterRanges": {
                "categoricalHyperParameterRanges": [
                    {
                        "name": "recency_mask",
                        "values": [ "true", "false" ]
                    }
                ],
                "integerHyperParameterRanges": [
                    {
                        "name": "bptt",
                        "minValue": 20,
                        "maxValue": 40
                    }
                ]
            },
            "hpoObjective": {
                "metricName": "precision_at_25",
                "metricRegex": "string",
                "type": "float"
            },
            "hpoResourceConfig": {
                "maxNumberOfTrainingJobs": "4",
                "maxParallelTrainingJobs": "2"
            }
        }
    }
}
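A request like this can be sent with the AWS SDK for Python. The sketch below builds the same parameters as a Python dict; the solution name and dataset group ARN are placeholders you would replace with your own, and the actual boto3 call is shown only in a comment because it requires configured AWS credentials:

```python
# Sketch of CreateSolution request parameters mirroring the JSON above.
# "name" and "datasetGroupArn" are placeholders, not values from this guide.
params = {
    "name": "hrnn-hpo-solution",
    "datasetGroupArn": "arn:aws:personalize:<region>:<account>:dataset-group/my-group",
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn",
    "performAutoML": False,
    "performHPO": True,
    "solutionConfig": {
        "algorithmHyperParameters": {"hidden_dimension": "55"},
        "hpoConfig": {
            "algorithmHyperParameterRanges": {
                "categoricalHyperParameterRanges": [
                    {"name": "recency_mask", "values": ["true", "false"]}
                ],
                "integerHyperParameterRanges": [
                    {"name": "bptt", "minValue": 20, "maxValue": 40}
                ],
            },
            # Resource limits are passed as strings in this API.
            "hpoResourceConfig": {
                "maxNumberOfTrainingJobs": "4",
                "maxParallelTrainingJobs": "2",
            },
        },
    },
}

# With boto3 installed and credentials configured, you would submit this as:
#   import boto3
#   response = boto3.client("personalize").create_solution(**params)
#   solution_arn = response["solutionArn"]
```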

For more information, see Automatic Model Tuning.