
Train and Evaluate AWS DeepRacer Models

In general, to train a reinforcement learning model, you describe the entire environment, including its states, actions, and reward, so that the model captures the problem you are trying to solve. The AWS DeepRacer service has defined the states and actions for you and gives you the option of defining the reward function, so that you can focus on learning about reinforcement learning.

For example, in autonomous racing, the agent can be a vehicle with sensors, and a well-marked driving track can be the environment. The vehicle earns a positive reward for staying on the track and a negative reward for going off it.

Let's say that you're interested in having the vehicle drive along a straight track without getting off it. The reward function takes images from the sensors as input and produces a score after working out the vehicle's current position. Speed and the positions of any nearby obstacles might also be taken into account. A simple form of the reward function could assign a score of zero if the vehicle is on the track, -1 if it's off the track, and +1 if it reaches the finish line. With this reward function, the vehicle is penalized for going off the track and rewarded for reaching the destination.
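
The following is a minimal Python sketch of that simple scheme. The input parameter keys it reads, all_wheels_on_track and progress, are assumptions made for illustration rather than a definitive list of what the service passes to your reward function.

def reward_function(params):
    # Sketch of the simple scheme above: 0 while on the track, -1 when off
    # the track, and +1 on reaching the finish line. The 'params' keys used
    # here are assumed for illustration.
    if not params.get("all_wheels_on_track", True):
        return -1.0    # penalize going off the track
    if params.get("progress", 0.0) >= 100.0:
        return 1.0     # reward reaching the finish line
    return 0.0         # otherwise stay neutral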

Training the vehicle involves repeated episodes along the track from start to finish. In each episode, the agent interacts with the track and learns the course of actions that maximizes the expected future reward. At the end, the training produces a reinforcement learning model. After training, the agent follows the model to infer the optimal choice of actions in any given state, whether the model is evaluated in simulation or deployed to run on a physical agent, such as an AWS DeepRacer scale vehicle.
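
To make the idea of episodes concrete, here is a small, self-contained Python sketch, not the AWS DeepRacer training code, that runs repeated episodes over a toy one-dimensional track and scores each episode with the 0/-1/+1 scheme described earlier.

import random

TRACK_LENGTH = 20          # number of steps from start to finish

def run_episode(policy):
    # One episode: the agent acts at each step; drifting too far from the
    # center ends the episode with a penalty, finishing earns a reward.
    position, offset, total_reward = 0, 0.0, 0.0
    while position < TRACK_LENGTH:
        offset += policy(offset)       # state -> action -> new state
        position += 1
        if abs(offset) > 1.0:          # off the track
            return total_reward - 1.0
        total_reward += 0.0            # on the track, neutral reward
    return total_reward + 1.0          # reached the finish line

def random_policy(offset):
    # Placeholder policy; training would gradually improve this mapping
    # from state to action to maximize the expected future reward.
    return random.choice([-0.2, 0.0, 0.2])

returns = [run_episode(random_policy) for _ in range(100)]
print("average return over 100 episodes:", sum(returns) / len(returns))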

To train a reinforcement learning model in practice, you must choose a learning algorithm. For autonomous driving, the proximal policy optimization (PPO) algorithm is a suitable choice. You can then choose a deep-learning framework that supports the chosen algorithm, unless you want to write one from scratch. AWS DeepRacer integrates with Amazon SageMaker to make popular deep-learning frameworks, such as TensorFlow, readily available in the AWS DeepRacer console. Using a framework simplifies configuring and running training jobs and lets you focus on building and enhancing the reward functions specific to your problems.

Training a reinforcement learning model is an iterative process. First, it can be a challenge to define a reward function that covers all of an agent's important behaviors in an environment at once. Second, hyperparameters often must be tuned to ensure satisfactory training performance. Both require experimentation. A prudent approach is to start with a simple reward function and then progressively enhance it. AWS DeepRacer facilitates this iterative process by enabling you to clone a trained model and use it to jump-start the next round of training. At each iteration you can introduce one or a few more sophisticated treatments to the reward function to handle previously ignored variables, or you can systematically adjust hyperparameters until the result converges.
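
For example, a second iteration might keep the on-track/off-track logic but also favor staying near the center line. The Python sketch below is illustrative only; distance_from_center and track_width are assumed parameter names.

def reward_function(params):
    # Iteration 2 (sketch): besides penalizing off-track driving, prefer
    # positions close to the center line. Parameter names are assumptions.
    if not params.get("all_wheels_on_track", True):
        return -1.0
    distance_from_center = params.get("distance_from_center", 0.0)
    track_width = params.get("track_width", 1.0)
    # 1.0 at the center line, tapering toward 0.0 at the track edge.
    return max(0.0, 1.0 - distance_from_center / (0.5 * track_width))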

As is general practice in machine learning, you must evaluate a trained reinforcement learning model to ascertain its efficacy before deploying it to a physical agent to run inference in a real-world situation. For autonomous racing, the evaluation can be based on how often the vehicle stays on a given track from start to finish, or on how fast it can finish the course without going off the track. The AWS DeepRacer simulation runs in the AWS RoboMaker simulator and lets you run the evaluation and post the performance metrics.
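
As a rough illustration of such metrics, the short Python sketch below computes a completion rate and best lap time from a handful of hypothetical trial results; the trial data and field names are made up for the example and do not come from the AWS DeepRacer console.

# Hypothetical evaluation results: whether each trial finished the track
# and, if so, how long the lap took in seconds.
trials = [
    {"completed": True, "time_s": 14.2},
    {"completed": False, "time_s": None},
    {"completed": True, "time_s": 13.5},
]

completion_rate = sum(t["completed"] for t in trials) / len(trials)
best_lap = min(t["time_s"] for t in trials if t["completed"])
print(f"completion rate: {completion_rate:.0%}, best lap: {best_lap} s")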

The following sections discuss these topics in detail.