Supported Frameworks, AWS Regions, Instance Types, and Tested Models
Important
Amazon Web Services (AWS) has announced that there will be no new releases or versions of SageMaker Training Compiler. You can continue to use SageMaker Training Compiler through the existing AWS Deep Learning Containers (DLCs) for SageMaker Training. Note that while the existing DLCs remain accessible, they will no longer receive patches or updates from AWS, in accordance with the AWS Deep Learning Containers Framework Support Policy.
Before using SageMaker Training Compiler, check that your framework of choice is supported, that the instance types are available in your AWS account, and that your AWS account is in one of the supported AWS Regions.
Note
SageMaker Training Compiler is available in the SageMaker Python SDK v2.70.0 or later.
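For orientation, here is a minimal sketch of enabling the compiler through the SageMaker Python SDK. The entry point script and IAM role ARN are placeholders, and the framework versions are taken from the Hugging Face Transformers rows in the PyTorch table below; verify the exact versions against your SDK.

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

# Minimal sketch: a Hugging Face estimator with SageMaker Training Compiler
# enabled. train.py and the role ARN are placeholders for illustration.
estimator = HuggingFace(
    entry_point="train.py",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    transformers_version="4.17.0",
    pytorch_version="1.10.2",
    py_version="py38",
    compiler_config=TrainingCompilerConfig(),  # turns on Training Compiler
)
# estimator.fit({"train": "s3://amzn-s3-demo-bucket/train"})  # hypothetical input
```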
Supported Frameworks
SageMaker Training Compiler supports the following deep learning frameworks and is available through AWS Deep Learning Containers.
PyTorch
Framework | Framework version | Deep Learning Container URI | Extendable for Docker customization
---|---|---|---
PyTorch | PyTorch v1.13.1 | 763104351884.dkr.ecr.<region>.amazonaws.com/pytorch-trcomp-training:1.13.1-gpu-py39-cu117-ubuntu20.04-sagemaker | No
PyTorch | PyTorch v1.12.0 | 763104351884.dkr.ecr.<region>.amazonaws.com/pytorch-trcomp-training:1.12.0-gpu-py38-cu113-ubuntu20.04-sagemaker | No
PyTorch with Hugging Face Transformers | Transformers v4.21.1, PyTorch v1.11.0 | 763104351884.dkr.ecr. | No
PyTorch with Hugging Face Transformers | Transformers v4.17.0, PyTorch v1.10.2 | 763104351884.dkr.ecr. | No
PyTorch with Hugging Face Transformers | Transformers v4.11.0, PyTorch v1.9.0 | 763104351884.dkr.ecr. | No
TensorFlow
Framework | Framework version | Deep Learning Container URI | Extendable for Docker customization
---|---|---|---
TensorFlow | TensorFlow v2.11.0 | 763104351884.dkr.ecr. | Yes
TensorFlow | TensorFlow v2.10.0 | 763104351884.dkr.ecr. | Yes
TensorFlow | TensorFlow v2.9.1 | 763104351884.dkr.ecr. | Yes
TensorFlow with Hugging Face Transformers | Transformers v4.17.0, TensorFlow v2.6.3 | 763104351884.dkr.ecr. | No
TensorFlow with Hugging Face Transformers | Transformers v4.11.0, TensorFlow v2.5.1 | 763104351884.dkr.ecr. | No
For more information, see Available Images.
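Because every URI above follows the same account and Region pattern, a Region-specific URI can be assembled directly. The following is a minimal sketch using the PyTorch v1.13.1 entry from the table; boto3 is used only to look up the current Region.

```python
import boto3

# Assemble the Region-specific DLC URI from the naming pattern shown in the
# tables above (PyTorch v1.13.1 Training Compiler image).
region = boto3.session.Session().region_name  # e.g., "us-west-2"
image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/"
    "pytorch-trcomp-training:1.13.1-gpu-py39-cu117-ubuntu20.04-sagemaker"
)
print(image_uri)
```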
AWS Regions
The SageMaker Training Compiler Containers are available in the AWS Regions where AWS Deep Learning Containers are in service, except the China Regions.
Supported Instance Types
SageMaker Training Compiler is tested on and supports the following ML instance types.
- P4 instances
- P3 instances
- G4dn instances
- G5 instances

For specs of the instance types, see the Accelerated Computing section in the Amazon EC2 Instance Types page.
If you encounter an error message similar to the following, follow the instructions at Request a service quota increase for SageMaker AI resources.
ResourceLimitExceeded: An error occurred (ResourceLimitExceeded) when calling the CreateTrainingJob operation: The account-level service limit 'ml.p3dn.24xlarge for training job usage' is 0 Instances, with current utilization of 0 Instances and a request delta of 1 Instances. Please contact AWS support to request an increase for this limit.
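Before opening a support case, you can check the current value of the quota named in the error through the Service Quotas API. This is a sketch, assuming the quota name matches the message text above; quota names differ by instance type.

```python
import boto3

# Sketch: look up the SageMaker training quota cited in the error message.
# The quota name string is an assumption taken from the message above.
client = boto3.client("service-quotas")
paginator = client.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if quota["QuotaName"] == "ml.p3dn.24xlarge for training job usage":
            print(quota["QuotaName"], "->", quota["Value"])
```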
Tested Models
The following tables list the models that have been tested with SageMaker Training Compiler. For reference, the largest batch size that fits into memory is included alongside other training parameters. SageMaker Training Compiler changes the memory footprint of the model training process; as a result, a larger batch size can often fit during training, further decreasing total training time. In some cases, SageMaker Training Compiler promotes caching, which decreases the largest batch size that can fit on the GPU. You must retune your model hyperparameters and find an optimal batch size for your case. To save time, use the following reference tables to look up a batch size that can be a good starting point for your use case.
Note
The batch sizes are local batch sizes that fit into each individual GPU in the respective instance type. You should also adjust the learning rate when changing the batch size.
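One common heuristic (not an official SageMaker recommendation) is to scale the learning rate linearly with the batch size. The following is a minimal sketch; the baseline learning rate of 5e-5 is an assumption, and the batch sizes come from the gpt2/ml.p3.2xlarge row in the first NLP table below.

```python
def scale_learning_rate(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: grow the learning rate in proportion to the
    batch size, then re-validate convergence on your own workload."""
    return base_lr * new_batch / base_batch

# gpt2 on ml.p3.2xlarge grows from batch size 58 (native) to 164 (compiled);
# 5e-5 is an assumed baseline learning rate for illustration only.
print(scale_learning_rate(5e-5, 58, 164))  # ~1.41e-04
```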
Natural language processing (NLP) models
The following models were tested for training jobs in all combinations of single-node and multi-node, single-GPU and multi-GPU, with Automatic Mixed Precision (AMP) as indicated. A sketch of a multi-node job configuration follows the table.
Single-node/multi-node single-GPU/multi-GPU

Model | Dataset | Instance type | Precision | Sequence Length | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---|---
albert-base-v2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 80 | 192 |
albert-base-v2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 332 |
albert-base-v2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 80 | 224 |
bert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 160 | 288 |
camembert-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 160 | 280 |
distilbert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 240 | 472 |
distilgpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 77 | 128 |
distilgpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 138 | 390 |
distilgpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 96 | 256 |
distilroberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 96 | 192 |
distilroberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 171 | 380 |
distilroberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 112 | 256 |
gpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 52 | 152 |
gpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 84 | 240 |
gpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 58 | 164 |
microsoft/deberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 48 | 128 |
microsoft/deberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 84 | 207 |
microsoft/deberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 53 | 133 |
roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 125 | 224 |
xlm-roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 16 | 31 |
xlm-roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 18 | 50 |
xlnet-base-cased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 240 |
bert-base-uncased | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 29 | 50 |
distilbert-base-uncased | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 45 | 64 |
gpt2 | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 18 | 45 |
roberta-base | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 23 | 44 |
gpt2 | wikitext-103-v1 | p4d.24xlarge | float16 | 512 | 36 | 64 |
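For the multi-node, multi-GPU rows above, the estimator also needs a distribution setting. The following is a hedged sketch with a placeholder script and role, using the gpt2/wikitext-103-v1 batch size from the p4d.24xlarge row; verify the distribution key name against your SDK version.

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

# Sketch of a multi-node compiled training job. run_clm.py, the role ARN,
# and the hyperparameter names are placeholders for illustration.
estimator = HuggingFace(
    entry_point="run_clm.py",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p4d.24xlarge",
    instance_count=2,
    transformers_version="4.21.1",
    pytorch_version="1.11.0",
    py_version="py38",
    hyperparameters={
        "model_name_or_path": "gpt2",
        "per_device_train_batch_size": 64,  # from the p4d.24xlarge row above
    },
    compiler_config=TrainingCompilerConfig(),
    # Assumption: "pytorchxla" is the distribution key for compiled
    # distributed training; confirm against the AWS documentation.
    distribution={"pytorchxla": {"enabled": True}},
)
```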
Computer Vision (CV) models
Tested using TensorFlow Model Garden.

Single/multi-node single/multi-GPU

Model | Dataset | Instance type | Precision | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---
ResNet152 | food101 | g4dn.16xlarge | float16 | 128 | 144 |
ResNet152 | food101 | g5.4xlarge | float16 | 128 | 192 |
ResNet152 | food101 | p3.2xlarge | float16 | 152 | 156 |
ViT | food101 | g4dn.16xlarge | float16 | 512 | 512 |
ViT | food101 | g5.4xlarge | float16 | 992 | 768 |
ViT | food101 | p3.2xlarge | float16 | 848 | 768 |
Natural language processing (NLP) models
The following models were tested for training jobs in all combinations of single-node and multi-node, single-GPU and multi-GPU, with Automatic Mixed Precision (AMP) as indicated.

Single-node/multi-node single-GPU/multi-GPU

Model | Dataset | Instance type | Precision | Sequence Length | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---|---
albert-base-v2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 128 | 248 |
bert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 160 | 288 |
camembert-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 160 | 279 |
camembert-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 105 | 164 |
distilgpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 136 | 256 |
distilgpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 80 | 118 |
gpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 84 | 240 |
gpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 80 | 119 |
microsoft/deberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 93 | 197 |
microsoft/deberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 113 | 130 |
roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 125 | 224 |
roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 78 | 112 |
xlnet-base-cased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 138 | 240 |
bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 | 52 | |
distilbert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 | 160 | |
gpt2 | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 | 25 | |
roberta-base | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 | 64 |
Computer Vision (CV) models
Tested using TensorFlow Model Garden.

Single/multi-node single/multi-GPU

Model | Dataset | Instance type | Precision | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.2xlarge | float16 | 6 | 8 |
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.p3.2xlarge | float16 | 4 | 6 |
ResNet50 | ImageNet | ml.g5.2xlarge | float16 | 192 | 256 |
ResNet50 | ImageNet | ml.p3.2xlarge | float16 | 256 | 256 |
ResNet101 | ImageNet | ml.g5.2xlarge | float16 | 128 | 256 |
ResNet101 | ImageNet | ml.p3.2xlarge | float16 | 128 | 128 |
ResNet152 | ImageNet | ml.g5.2xlarge | float16 | 128 | 224 |
ResNet152 | ImageNet | ml.p3.2xlarge | float16 | 128 | 128 |
VisionTransformer | ImageNet | ml.g5.2xlarge | float16 | 112 | 144 |
VisionTransformer | ImageNet | ml.p3.2xlarge | float16 | 96 | 128 |
Natural Language Processing (NLP) models
Tested using Transformer models with Sequence_Len=128 and Automatic Mixed Precision (AMP) as indicated.

Single/multi-node single/multi-GPU

Model | Dataset | Instance type | Precision | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---
albert-base-v2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 160 | 197 |
albert-base-v2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 95 | 127 |
bert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 160 | 128 |
bert-base-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 104 | 111 |
bert-large-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 65 | 48 |
bert-large-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 40 | 35 |
camembert-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 162 |
camembert-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 105 | 111 |
distilbert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 256 | 264 |
distilbert-base-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 169 |
gpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 120 |
gpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 80 | 83 |
jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 32 | 32 |
jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 32 | 36 |
microsoft/mpnet-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 144 | 160 |
microsoft/mpnet-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 106 | 110 |
roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 128 |
roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 72 | 98 |
albert-base-v2 | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 128 | 192 |
albert-base-v2 | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 95 | 96 |
distilbert-base-uncased | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 256 | 256 |
distilbert-base-uncased | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 140 | 184 |
google/electra-small-discriminator | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 256 | 384 |
google/electra-small-discriminator | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 256 | 268 |
gpt2 | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 116 | 116 |
gpt2 | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 85 | 83 |
gpt2 | wikitext-2-raw-v1 | ml.p4d.24xlarge | float16 | 94 | 110 |
microsoft/mpnet-base | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 187 | 164 |
microsoft/mpnet-base | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 106 | 111 |
Computer Vision (CV) models
Tested using TensorFlow Model Garden.

Single-node single-GPU/multi-GPU

Model | Dataset | Instance type | Precision | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---
DetectionTransformer-ResNet50 | COCO-2017 | ml.g4dn.2xlarge | float32 | 2 | 4 |
DetectionTransformer-ResNet50 | COCO-2017 | ml.g5.2xlarge | float32 | 3 | 6 |
DetectionTransformer-ResNet50 | COCO-2017 | ml.p3.2xlarge | float32 | 2 | 4 |
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g4dn.2xlarge | float16 | 4 | 6 |
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.2xlarge | float16 | 6 | 8 |
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.48xlarge | float16 | 48 | 64 |
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.p3.2xlarge | float16 | 4 | 6 |
ResNet50 | ImageNet | ml.g4dn.2xlarge | float16 | 224 | 256 |
ResNet50 | ImageNet | ml.g5.2xlarge | float16 | 192 | 160 |
ResNet50 | ImageNet | ml.g5.48xlarge | float16 | 2048 | 2048 |
ResNet50 | ImageNet | ml.p3.2xlarge | float16 | 224 | 160 |
ResNet101 | ImageNet | ml.g4dn.2xlarge | float16 | 160 | 128 |
ResNet101 | ImageNet | ml.g5.2xlarge | float16 | 192 | 256 |
ResNet101 | ImageNet | ml.g5.48xlarge | float16 | 2048 | 2048 |
ResNet101 | ImageNet | ml.p3.2xlarge | float16 | 160 | 224 |
ResNet152 | ImageNet | ml.g4dn.2xlarge | float16 | 128 | 128 |
ResNet152 | ImageNet | ml.g5.2xlarge | float16 | 192 | 224 |
ResNet152 | ImageNet | ml.g5.48xlarge | float16 | 1536 | 1792 |
ResNet152 | ImageNet | ml.p3.2xlarge | float16 | 128 | 160 |
VisionTransformer | ImageNet | ml.g4dn.2xlarge | float16 | 80 | 128 |
VisionTransformer | ImageNet | ml.g5.2xlarge | float16 | 112 | 144 |
VisionTransformer | ImageNet | ml.g5.48xlarge | float16 | 896 | 1152 |
VisionTransformer | ImageNet | ml.p3.2xlarge | float16 | 80 | 128 |
Natural Language Processing (NLP) models
Tested using Transformer models with Sequence_Len=128 and Automatic Mixed Precision (AMP) as indicated.

Single-node single-GPU/multi-GPU

Model | Dataset | Instance type | Precision | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---|---
albert-base-v2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 112 |
albert-base-v2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 128 |
albert-base-v2 | wikitext-2-raw-v1 | p3.8xlarge | float16 | 128 | 135 |
albert-base-v2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 191 |
bert-base-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 64 | 94 |
bert-base-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 101 |
bert-base-uncased | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 96 |
bert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 |
bert-large-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 35 | 21 |
bert-large-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 39 | 26 |
bert-large-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 60 | 50 |
camembert-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 96 | 90 |
camembert-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 98 |
camembert-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 96 |
camembert-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 |
distilbert-base-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 256 | 160 |
distilbert-base-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 176 |
distilbert-base-uncased | wikitext-2-raw-v1 | p3.8xlarge | float16 | 128 | 160 |
distilbert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 256 | 258 |
google_electra-small-discriminator | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 256 | 216 |
google_electra-small-discriminator | wikitext-2-raw-v1 | p3.2xlarge | float16 | 256 | 230 |
google_electra-small-discriminator | wikitext-2-raw-v1 | p3.8xlarge | float16 | 256 | 224 |
google_electra-small-discriminator | wikitext-2-raw-v1 | g5.4xlarge | float16 | 256 | 320 |
gpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 80 | 64 |
gpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 80 | 77 |
gpt2 | wikitext-2-raw-v1 | p3.8xlarge | float16 | 80 | 72 |
gpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 120 |
jplu_tf-xlm-roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 28 | 24 |
jplu_tf-xlm-roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 32 | 24 |
jplu_tf-xlm-roberta-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 32 | 26 |
jplu_tf-xlm-roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 66 | 52 |
microsoft_mpnet-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 96 | 92 |
microsoft_mpnet-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 101 |
microsoft_mpnet-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 101 |
microsoft_mpnet-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 152 |
roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 64 | 72 |
roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 64 | 84 |
roberta-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 64 | 86 |
roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 |
Tested using TensorFlow Model Garden.

Single-node single-GPU/multi-GPU

Model | Dataset | Instance type | Batch size for native frameworks | Batch size for SageMaker Training Compiler
---|---|---|---|---
ResNet50 | ImageNet | ml.g4dn.2xlarge | 192 | 256*
ResNet101 | ImageNet | ml.g4dn.2xlarge | 128 | 160
ResNet101 | ImageNet | ml.g5.2xlarge | 224 | 256*
ResNet101 | ImageNet | ml.p3.16xlarge | 1536 | 1792
ResNet152 | ImageNet | ml.g5.2xlarge | 192 | 224
ResNet152 | ImageNet | ml.p3.2xlarge | 160 | 160
ResNet152 | ImageNet | ml.p3.16xlarge | 1024 | 1280
VisionTransformer | ImageNet | ml.g4dn.2xlarge | 80 | 128*
VisionTransformer | ImageNet | ml.g5.2xlarge | 112 | 128*
VisionTransformer | ImageNet | ml.p3.2xlarge | 56 | 128*
VisionTransformer | ImageNet | ml.p3.16xlarge | 640 | 1024*
DetectionTransformer-ResNet50 | COCO-2017 | ml.g4dn.2xlarge | 2 | 2
DetectionTransformer-ResNet50 | COCO-2017 | ml.g5.2xlarge | 3 | 6
DetectionTransformer-ResNet50 | COCO-2017 | ml.p3.2xlarge | 2 | 4
DetectionTransformer-ResNet50 | COCO-2017 | ml.p3.16xlarge | 8 | 32
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g4dn.2xlarge | 4 | 4
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.2xlarge | 6 | 8
MaskRCNN-ResNet50-FPN | COCO-2017 | ml.p3.2xlarge | 4 | 6
* Batch sizes marked with an asterisk (*) indicate the largest batch size tested by the SageMaker Training Compiler developer team. For the marked cells, the instance may be able to fit a larger batch size than indicated.
Tested with Sequence_Len=512 and Automatic Mixed Precision (AMP).

Model | Dataset | Instance type | Instance count | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---|---|---
albert-base-v2 | wikitext-2 | ml.g4dn.2xlarge | 1 | 14 | 28
albert-base-v2 | wikitext-2 | ml.g5.2xlarge | 1 | 18 | 40
albert-base-v2 | wikitext-2 | ml.p3.2xlarge | 1 | 14 | 32
bert-base-cased | wikitext-2 | ml.g4dn.2xlarge | 1 | 12 | 24
bert-base-cased | wikitext-2 | ml.g5.2xlarge | 1 | 28 | 44
bert-base-cased | wikitext-2 | ml.p3.2xlarge | 1 | 16 | 20
camembert-base | wikitext-2 | ml.g4dn.2xlarge | 1 | 16 | 28
camembert-base | wikitext-2 | ml.g5.2xlarge | 1 | 24 | 40
camembert-base | wikitext-2 | ml.p3.2xlarge | 1 | 16 | 24
distilbert-base-uncased | wikitext-2 | ml.g4dn.2xlarge | 1 | 28 | 52
distilbert-base-uncased | wikitext-2 | ml.g5.2xlarge | 1 | 40 | 76
distilbert-base-uncased | wikitext-2 | ml.p3.2xlarge | 1 | 32 | 48
distilbert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | 4 | 82 | 160
distilgpt2 | wikitext-2 | ml.g4dn.2xlarge | 1 | 6 | 18
distilgpt2 | wikitext-2 | ml.g5.2xlarge | 1 | 12 | 28
distilgpt2 | wikitext-2 | ml.p3.2xlarge | 1 | 6 | 16
distilroberta-base | wikitext-2 | ml.g4dn.2xlarge | 1 | 20 | 40
distilroberta-base | wikitext-2 | ml.g5.2xlarge | 1 | 28 | 56
distilroberta-base | wikitext-2 | ml.p3.2xlarge | 1 | 24 | 40
EleutherAI/gpt-neo-125M | wikitext-2 | ml.g4dn.2xlarge | 1 | 4 | 8
EleutherAI/gpt-neo-125M | wikitext-2 | ml.g5.2xlarge | 1 | 6 | 14
EleutherAI/gpt-neo-125M | wikitext-2 | ml.p3.2xlarge | 1 | 4 | 10
gpt2 | wikitext-2 | ml.g4dn.2xlarge | 1 | 4 | 8
gpt2 | wikitext-2 | ml.g5.2xlarge | 1 | 6 | 16
gpt2 | wikitext-2 | ml.p3.2xlarge | 1 | 4 | 10
gpt2 | wikitext-103-v1 | ml.p4d.24xlarge | 4 | 13 | 25
roberta-base | wikitext-2 | ml.g4dn.2xlarge | 1 | 12 | 20
roberta-base | wikitext-2 | ml.g5.2xlarge | 1 | 24 | 36
roberta-base | wikitext-2 | ml.p3.2xlarge | 1 | 12 | 20
roberta-base | wikitext-103-v1 | ml.p4d.24xlarge | 4 | 36 | 64
xlnet-base-cased | wikitext-2 | ml.g4dn.2xlarge | 1 | 2 | 6
xlnet-base-cased | wikitext-2 | ml.g5.2xlarge | 1 | 2 | 10
xlnet-base-cased | wikitext-2 | ml.p3.2xlarge | 1 | 2 | 8
bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | 2 | 32 | 64
bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | 4 | 32 | 64
bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | 8 | 32 | 64
bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | 16 | 32 | 64
roberta-large | wikitext-103-v1 | ml.p4d.24xlarge | 4 | 16 | 24
microsoft/deberta-v3-base | wikitext-103-v1 | ml.p4d.24xlarge | 16 | 9 | 23
Tested with Sequence_Len=512 and Automatic Mixed Precision (AMP).

Single-node single-GPU

Model | Instance type | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---
albert-base-v2 | ml.p3.2xlarge | 14 | 28
albert-base-v2 | ml.g4dn.2xlarge | 14 | 24
bert-base-cased | ml.p3.2xlarge | 16 | 24
bert-base-cased | ml.g4dn.2xlarge | 12 | 24
bert-base-uncased | ml.p3.2xlarge | 16 | 24
bert-base-uncased | ml.g4dn.2xlarge | 12 | 28
camembert-base | ml.p3.2xlarge | 12 | 24
camembert-base | ml.g4dn.2xlarge | 12 | 28
distilbert-base-uncased | ml.p3.2xlarge | 28 | 48
distilbert-base-uncased | ml.g4dn.2xlarge | 24 | 52
distilgpt2 | ml.p3.2xlarge | 6 | 12
distilgpt2 | ml.g4dn.2xlarge | 6 | 14
distilroberta-base | ml.p3.2xlarge | 20 | 40
distilroberta-base | ml.g4dn.2xlarge | 12 | 40
EleutherAI/gpt-neo-125M | ml.p3.2xlarge | 2 | 10
EleutherAI/gpt-neo-125M | ml.g4dn.2xlarge | 2 | 8
facebook/bart-base | ml.p3.2xlarge | 2 | 6
facebook/bart-base | ml.g4dn.2xlarge | 2 | 6
gpt2 | ml.p3.2xlarge | 4 | 8
gpt2 | ml.g4dn.2xlarge | 2 | 8
roberta-base | ml.p3.2xlarge | 12 | 20
roberta-base | ml.g4dn.2xlarge | 12 | 20
xlnet-base-cased | ml.p3.2xlarge | 2 | 8
xlnet-base-cased | ml.g4dn.2xlarge | 4 | 6
Tested with Sequence_Len=512 and Automatic Mixed Precision (AMP).

Single-node single-GPU

Model | Instance type | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---
albert-base-v2 | ml.p3.2xlarge | 12 | 32 |
bert-base-cased | ml.p3.2xlarge | 14 | 24 |
bert-base-chinese | ml.p3.2xlarge | 16 | 24 |
bert-base-multilingual-cased | ml.p3.2xlarge | 4 | 16 |
bert-base-multilingual-uncased | ml.p3.2xlarge | 8 | 16 |
bert-base-uncased | ml.p3.2xlarge | 12 | 24 |
cl-tohoku/bert-base-japanese-whole-word-masking | ml.p3.2xlarge | 12 | 24 |
cl-tohoku/bert-base-japanese | ml.p3.2xlarge | 12 | 24 |
distilbert-base-uncased | ml.p3.2xlarge | 28 | 32 |
distilbert-base-uncased-finetuned-sst-2-english | ml.p3.2xlarge | 28 | 32 |
distilgpt2 | ml.p3.2xlarge | 16 | 32 |
facebook/bart-base | ml.p3.2xlarge | 4 | 8 |
gpt2 | ml.p3.2xlarge | 6 | 20 |
nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large | ml.p3.2xlarge | 20 | 32 |
roberta-base | ml.p3.2xlarge | 12 | 20 |
Single-node multi-GPU

Model | Instance type | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---
bert-base-chinese | ml.p3.8xlarge | 16 | 26 |
bert-base-multilingual-cased | ml.p3.8xlarge | 6 | 16 |
bert-base-multilingual-uncased | ml.p3.8xlarge | 6 | 16 |
bert-base-uncased | ml.p3.8xlarge | 14 | 24 |
distilbert-base-uncased | ml.p3.8xlarge | 14 | 32 |
distilgpt2 | ml.p3.8xlarge | 6 | 32 |
facebook/bart-base | ml.p3.8xlarge | 8 | 16 |
gpt2 | ml.p3.8xlarge | 8 | 20 |
roberta-base | ml.p3.8xlarge | 12 | 20 |
Tested with Sequence_Len=128 and Automatic Mixed Precision (AMP).

Model | Instance type | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---
albert-base-v2 | ml.g4dn.16xlarge | 136 | 208 |
albert-base-v2 | ml.g5.4xlarge | 219 | 312 |
albert-base-v2 | ml.p3.2xlarge | 152 | 208 |
albert-base-v2 | ml.p3.8xlarge | 152 | 192 |
bert-base-uncased | ml.g4dn.16xlarge | 120 | 101 |
bert-base-uncased | ml.g5.4xlarge | 184 | 160 |
bert-base-uncased | ml.p3.2xlarge | 128 | 108 |
bert-large-uncased | ml.g4dn.16xlarge | 37 | 28 |
bert-large-uncased | ml.g5.4xlarge | 64 | 55 |
bert-large-uncased | ml.p3.2xlarge | 40 | 32 |
camembert-base | ml.g4dn.16xlarge | 96 | 100 |
camembert-base | ml.g5.4xlarge | 190 | 160 |
camembert-base | ml.p3.2xlarge | 129 | 108 |
camembert-base | ml.p3.8xlarge | 128 | 104 |
distilbert-base-uncased | ml.g4dn.16xlarge | 210 | 160 |
distilbert-base-uncased | ml.g5.4xlarge | 327 | 288 |
distilbert-base-uncased | ml.p3.2xlarge | 224 | 196 |
distilbert-base-uncased | ml.p3.8xlarge | 192 | 182 |
google_electra-small-discriminator | ml.g4dn.16xlarge | 336 | 288 |
google_electra-small-discriminator | ml.g5.4xlarge | 504 | 384 |
google_electra-small-discriminator | ml.p3.2xlarge | 352 | 323 |
gpt2 | ml.g4dn.16xlarge | 89 | 64 |
gpt2 | ml.g5.4xlarge | 140 | 146 |
gpt2 | ml.p3.2xlarge | 94 | 96 |
gpt2 | ml.p3.8xlarge | 96 | 88 |
jplu_tf-xlm-roberta-base | ml.g4dn.16xlarge | 52 | 16 |
jplu_tf-xlm-roberta-base | ml.g5.4xlarge | 64 | 44 |
microsoft_mpnet-base | ml.g4dn.16xlarge | 120 | 100 |
microsoft_mpnet-base | ml.g5.4xlarge | 192 | 160 |
microsoft_mpnet-base | ml.p3.2xlarge | 128 | 104 |
microsoft_mpnet-base | ml.p3.8xlarge | 130 | 92 |
roberta-base | ml.g4dn.16xlarge | 108 | 64 |
roberta-base | ml.g5.4xlarge | 176 | 142 |
roberta-base | ml.p3.2xlarge | 118 | 100 |
roberta-base | ml.p3.8xlarge | 112 | 88 |
Tested with Sequence_Len=128 and Automatic Mixed Precision (AMP).

Single-node single-GPU

Model | Instance type | Batch size for native frameworks | Batch size for Training Compiler
---|---|---|---
albert-base-v2 | ml.p3.2xlarge | 128 | 128 |
bart-base | ml.p3.2xlarge | 12 | 64 |
bart-large | ml.p3.2xlarge | 4 | 28 |
bert-base-cased | ml.p3.2xlarge | 16 | 128 |
bert-base-chinese | ml.p3.2xlarge | 16 | 128 |
bert-base-multilingual-cased | ml.p3.2xlarge | 12 | 64 |
bert-base-multilingual-uncased | ml.p3.2xlarge | 16 | 96 |
bert-base-uncased | ml.p3.2xlarge | 16 | 96 |
bert-large-uncased | ml.p3.2xlarge | 4 | 24 |
cl-tohoku/bert-base-japanese | ml.p3.2xlarge | 16 | 128 |
cl-tohoku/bert-base-japanese-whole-word-masking | ml.p3.2xlarge | 16 | 128 |
distilbert-base-sst2 | ml.p3.2xlarge | 32 | 128 |
distilbert-base-uncased | ml.p3.2xlarge | 32 | 128 |
distilgpt2 | ml.p3.2xlarge | 32 | 128 |
gpt2 | ml.p3.2xlarge | 12 | 64 |
gpt2-large | ml.p3.2xlarge | 2 | 24 |
jplu/tf-xlm-roberta-base | ml.p3.2xlarge | 12 | 32 |
roberta-base | ml.p3.2xlarge | 4 | 64 |
roberta-large | ml.p3.2xlarge | 4 | 64 |
t5-base | ml.p3.2xlarge | 64 | 64 |
t5-small | ml.p3.2xlarge | 128 | 128 |