Iterate Training to Improve AWS DeepRacer Models and Training Performance

After you have successfully trained your AWS DeepRacer model on the simple straight track, you can verify that your AWS DeepRacer vehicle (virtual or physical) can drive itself without going off that track. If you let the vehicle run on a looped track, however, it won't stay on the track, because the reward function ignored the turning actions the agent needs to take to follow the track.

To make your vehicle handle turns, you must enhance the reward function to grant a reward when the agent makes a permissible turn and to impose a penalty when the agent makes an illegal turn. Then you're ready to start another round of training. To take advantage of the prior training, start the new training by cloning the previously trained model, passing along the previously learned knowledge. Following this pattern, you can progressively add more features to the reward function to train your AWS DeepRacer vehicle to drive in increasingly complex environments.
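The following is a minimal sketch of one such enhanced reward function, written in the Python form that the service expects. The input parameter names (all_wheels_on_track, track_width, distance_from_center, steering_angle) follow the simulator's params dictionary; the specific thresholds and reward values are illustrative assumptions, not tuned settings.

def reward_function(params):
    # Read the relevant entries from the simulator's params dictionary.
    all_wheels_on_track = params['all_wheels_on_track']
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering_angle = abs(params['steering_angle'])  # degrees, sign ignored

    # Strong penalty for leaving the track.
    if not all_wheels_on_track:
        return 1e-3

    # Base reward for staying near the center line.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    else:
        reward = 0.1

    # Penalize excessive steering; the threshold is an illustrative value.
    STEERING_THRESHOLD = 15.0  # degrees
    if steering_angle > STEERING_THRESHOLD:
        reward *= 0.8

    return float(reward)

With this shape, the agent still earns most of its reward for staying near the center line, while the steering penalty discourages the oversteering that typically precedes an off-track excursion in a turn.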

You can also apply this iterative process to improve training performance by systematically tuning the hyperparameters used in training. Hyperparameters, such as the learning rate, the discount applied to future rewards, the batch size used for each gradient descent update, the number of episodes in a training session, and the number of steps in an episode, are empirical factors that affect how quickly and how stably the total average expected reward converges to its global maximum. There are no universally optimal values; finding good ones requires systematic experimentation. Cloning a previously trained model as the starting point for a new round of training with modified hyperparameters reuses the already learned knowledge and can improve overall training efficiency.
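For illustration, one round of this tuning might look like the sketch below: a plain Python dictionary holding a candidate configuration, with a single value changed in the cloned training round so that any shift in the average reward curve can be attributed to one hyperparameter. The names and values here are assumptions made for the example, not the console's parameter names or recommended settings.

# Illustrative hyperparameter configuration for a cloned training round.
# Names and values are assumptions for this example, not recommended
# defaults.
base_hyperparameters = {
    "learning_rate": 3e-4,         # step size for gradient descent updates
    "discount_factor": 0.99,       # weight given to future rewards
    "batch_size": 64,              # experiences per gradient descent update
    "episodes_per_iteration": 20,  # episodes collected before each update
    "max_steps_per_episode": 300,  # cap on steps before an episode ends
}

# Vary one hyperparameter at a time between cloned training rounds.
candidate = dict(base_hyperparameters, learning_rate=1e-4)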

In this section, you learn how to clone a trained model with an enhanced reward function or a modified set of hyperparameters. Before walking through the steps to clone a model for continued training, we illustrate how to update the reward function to handle new situations. We also explain the range and meaning of hyperparameter values used in the supported reinforcement learning algorithms.