
Evaluate Trained AWS DeepRacer Models

Evaluating a model means testing the performance of a trained model. In AWS DeepRacer, the standard performance metric is the average time to finish three consecutive laps (a worked example of this calculation follows the task list below). Evaluating an AWS DeepRacer model involves the following tasks:

  1. Configure and start an evaluation job.

  2. Observe the evaluation in progress in the AWS DeepRacer simulator while the job is running.

  3. Inspect the evaluation summary after the evaluation job is done. You can terminate an evaluation job in progress at any time.
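
To make the metric concrete, here is a minimal Python sketch of the average-lap-time calculation. The lap times are hypothetical values chosen for illustration; an actual evaluation job reports its own times.

    # Minimal sketch of AWS DeepRacer's standard performance metric:
    # the average time to finish three consecutive laps.
    # The lap times below are hypothetical, not real evaluation output.

    lap_times_seconds = [12.48, 11.97, 12.31]  # three consecutive laps (hypothetical)

    average_lap_time = sum(lap_times_seconds) / len(lap_times_seconds)
    print(f"Average over {len(lap_times_seconds)} laps: {average_lap_time:.2f} s")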

You can test a model in multiple evaluation jobs, but you must run them one after another. AWS DeepRacer keeps only the status and result of the latest evaluation job.
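
Because only the latest result is retained, you may want to keep your own record when you run several evaluation jobs against the same model. The following is a minimal Python sketch under that assumption; the record fields and values are hypothetical and are not part of any AWS DeepRacer API.

    # Keep a local history of evaluation summaries, since AWS DeepRacer
    # retains only the status and result of the latest evaluation job.
    # Field names and values are hypothetical, for illustration only.

    evaluation_history = []

    def record_evaluation(track_name, average_lap_time_seconds):
        evaluation_history.append({
            "track": track_name,
            "average_lap_time_s": average_lap_time_seconds,
        })

    record_evaluation("re:Invent 2018", 12.25)
    record_evaluation("re:Invent 2018", 11.84)
    print(evaluation_history[-1])  # the most recent summary survives locally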

You can evaluate an AWS DeepRacer model using the AWS DeepRacer simulator as a virtual environment.

For step-by-step instructions to run an AWS DeepRacer evaluation job in simulation, see Evaluate Your AWS DeepRacer Models in Simulation.

For more information about how to run an AWS DeepRacer evaluation job with an AWS DeepRacer vehicle in a physical environment, see Drive Your AWS DeepRacer Vehicle.