Evaluating a custom model

In this topic, you learn how to evaluate the loss function graph on the Models page, and what to do when your training is unsuccessful.

Important

This topic assumes that you have chosen the hyperparameters as documented in the topic on training a custom MuseGAN model. If you chose different hyperparameter values, your results will be different.

To evaluate a custom model
  1. Open the AWS DeepComposer console.

  2. In the navigation pane, choose Models.

  3. On the Models page, choose your custom model.

  4. On the model’s training results page, under Discriminator and generator loss over time, review the Loss function graph.

    
    Figure: An example of the discriminator and generator loss over time graph.

    The generator loss and the discriminator loss are plotted on the same graph but on different scales: the generator loss scale is shown on the left side, and the discriminator loss scale is shown on the right side. If you also log loss values yourself, see the plotting sketch after this procedure.

    In this graph, the generator loss plateaus around the 25th epoch, indicating that the generator has stopped significantly improving its ability to generate realistic music.

  5. Choose Sample output to listen to the accompaniment tracks that would have been generated if inference had been performed at that epoch.

    Sample output is available for every 50th epoch.
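If you also track generator and discriminator loss values outside the console, you can recreate a similar dual-scale graph yourself. The following is a minimal sketch using matplotlib; the loss curves are synthetic placeholders, not values produced by AWS DeepComposer.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic placeholder curves: large early values that flatten out after
# roughly the 25th epoch. Substitute the per-epoch values from your own run.
epochs = np.arange(1, 101)
rng = np.random.default_rng(0)
generator_loss = 5.0 * np.exp(-epochs / 10) + 0.5 + rng.normal(0, 0.05, epochs.size)
discriminator_loss = 1.2 * np.exp(-epochs / 10) + 0.7 + rng.normal(0, 0.02, epochs.size)

fig, ax_gen = plt.subplots()
ax_disc = ax_gen.twinx()  # second y-axis so each loss keeps its own scale

ax_gen.plot(epochs, generator_loss, color="tab:blue", label="Generator loss")
ax_disc.plot(epochs, discriminator_loss, color="tab:orange", label="Discriminator loss")

ax_gen.set_xlabel("Epoch")
ax_gen.set_ylabel("Generator loss (left scale)")
ax_disc.set_ylabel("Discriminator loss (right scale)")
ax_gen.set_title("Discriminator and generator loss over time")
fig.legend(loc="upper right")
plt.show()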

As you can see, both the generator's and the discriminator's losses plateau after the 25th epoch. This plateau indicates that the model was trained successfully.

Evaluating a model when training is unsuccessful

You can determine whether training was unsuccessful by evaluating the quality of the sound in your model's sample output. Early in training, the discriminator and generator loss values fluctuate greatly. As training continues, the fluctuations decrease. Toward the end of training, the losses should converge to a value and remain near it.
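One way to make the idea of convergence concrete is to check how much a loss fluctuates over a recent window of epochs. The following sketch assumes you have a list of per-epoch loss values; the window size and tolerance are illustrative choices, not AWS DeepComposer settings.

import numpy as np

def has_plateaued(loss_values, window=10, tolerance=0.05):
    """Return True if the last `window` loss values fluctuate less than `tolerance`."""
    if len(loss_values) < window:
        return False
    recent = np.asarray(loss_values[-window:])
    return recent.std() < tolerance

# Hypothetical per-epoch generator losses: the last 10 values barely change.
losses = [4.8, 3.1, 2.2, 1.4, 1.0, 0.82, 0.80, 0.79, 0.78, 0.80,
          0.79, 0.78, 0.79, 0.80, 0.78]
print(has_plateaued(losses))  # True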

If the learning rate is set too low, the model won't converge. If the learning rate is set too high, the model trains too quickly and produces disordered sounds. Also note that the U-Net and MuseGAN models use different metrics, so what works for one model might not have the same effect on the other.
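To see why the learning rate matters, consider a toy gradient descent example. This is an illustration only, not the AWS DeepComposer or MuseGAN training code: a rate that is too low barely moves the parameter, while a rate that is too high makes the updates overshoot and diverge.

def gradient_descent(learning_rate, steps=50, x=10.0):
    # Minimize f(x) = x**2; the gradient is 2 * x.
    for _ in range(steps):
        x -= learning_rate * 2 * x
    return x

for lr in (0.0001, 0.1, 1.5):
    print(f"learning rate {lr}: x after 50 steps = {gradient_descent(lr):.4f}")

# Too low (0.0001): x barely moves from its starting point of 10.
# Moderate (0.1):   x approaches the minimum at 0.
# Too high (1.5):   the updates overshoot and x grows without bound.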

Learn more

After creating, training, and evaluating your first custom model, you can continue training more models and composing music. You can train as many models as you like and evaluate them in the music studio.