Selecting the Instance Type for DLAMI - Deep Learning AMI

Selecting the instance type can be another challenge, but we'll make this easier for you with a few pointers on how to choose the best one. Remember, the DLAMI is free, but the underlying compute resources are not.

  • If you're new to deep learning, then you probably want an "entry-level" instance with a single GPU.

  • If you're budget conscious, then you will need to start a bit smaller and look at the CPU-only instances.

  • If you're interested in running a trained model for inference and predictions (and not training), then you might want to use Amazon Elastic Inference, which gives you access to a fraction of a GPU so you can scale affordably. For high-volume inference services, you may find that a CPU instance with a lot of memory, or even a cluster of these, is a better solution for you.

  • If you're using a large model with a lot of data or a high batch size, then you will need a larger instance with more memory. You can also distribute your model across a cluster of GPUs. Alternatively, decreasing your batch size may let you use an instance with less memory, though this can affect your accuracy and training speed.

  • If you're interested in running machine learning applications that use the NVIDIA Collective Communications Library (NCCL) and require high levels of inter-node communication at scale, you might want to use an Elastic Fabric Adapter (EFA).
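The guidance above amounts to a simple use-case-to-instance-type lookup. A minimal sketch follows; the specific instance types named are illustrative examples chosen for this sketch, not official recommendations, so check current-generation offerings and pricing before launching:

```python
# Illustrative mapping from use case to an example EC2 instance type.
# The types named here are examples only -- newer generations may be a
# better fit, and availability varies by region.
EXAMPLE_INSTANCE_TYPES = {
    "entry_level_gpu": "p3.2xlarge",    # single GPU, good for getting started
    "budget_cpu": "c5.large",           # CPU-only, lowest cost
    "inference": "g4dn.xlarge",         # single inference-oriented GPU
    "large_model": "p3dn.24xlarge",     # 8 GPUs, high memory, EFA-capable
}

def suggest_instance_type(use_case: str) -> str:
    """Return an example instance type for a use case; raise on unknown input."""
    try:
        return EXAMPLE_INSTANCE_TYPES[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}")

print(suggest_instance_type("entry_level_gpu"))  # p3.2xlarge
```

In practice you would refine each branch with the bullet points above, for example, preferring a CPU instance with a lot of memory over a GPU for high-volume inference.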

DLAMIs are not available in every region, but it is possible to copy DLAMIs to the region of your choice. See Copying an AMI for more info. Each region supports a different range of instance types, and an instance type often has a slightly different cost in different regions. On each DLAMI main page, you will see a list of instance costs. Note the region selection list and be sure you pick a region that's close to you or your customers. If you plan to use more than one DLAMI and potentially create a cluster, be sure to use the same region for all of the nodes in the cluster.
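Because the same instance type can carry a different hourly rate in different regions, it can help to compare estimated costs before settling on a region. A minimal sketch follows; the rates below are hypothetical placeholders, so look up real on-demand rates on the EC2 pricing page:

```python
# Hypothetical hourly on-demand rates (USD) for one instance type across
# regions -- placeholders only, not real prices.
HOURLY_RATES = {
    "us-east-1": 3.06,
    "us-west-2": 3.06,
    "eu-west-1": 3.31,
}

def monthly_cost(region: str, hours_per_month: float = 730.0) -> float:
    """Estimated cost of running the instance continuously for one month."""
    return HOURLY_RATES[region] * hours_per_month

# Cheapest region for this (hypothetical) rate table.
cheapest = min(HOURLY_RATES, key=HOURLY_RATES.get)
print(cheapest, round(monthly_cost(cheapest), 2))
```

Remember to weigh latency to your users against price, since the cheapest region is not always the closest one.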

With all of those points in mind, make note of the instance type that best fits your use case and budget. The remaining topics in this guide go into more detail.


The Deep Learning AMIs include drivers, software, or toolkits developed, owned, or provided by NVIDIA Corporation. You agree to use these NVIDIA drivers, software, or toolkits only on Amazon EC2 instances that include NVIDIA hardware.

Next Up

Recommended GPU Instances