If you require high parallel processing capability, you'll benefit from using GPU instances, which provide access to NVIDIA GPUs with up to 1,536 CUDA cores and 4 GB of video memory. You can use GPU instances to accelerate many scientific, engineering, and rendering applications by leveraging the Compute Unified Device Architecture (CUDA) or OpenCL parallel computing frameworks. You can also use them for graphics applications, including game streaming, 3-D application streaming, and other graphics workloads.
GPU instances run as HVM-based instances. Hardware virtual machine (HVM) virtualization uses hardware-assist technology provided by the AWS platform. With HVM virtualization, the guest VM runs as if it were on a native hardware platform, except that it still uses paravirtual (PV) network and storage drivers for improved performance. This enables Amazon EC2 to provide dedicated access to one or more discrete GPUs in each GPU instance.
You can cluster GPU instances into a placement group. Placement groups provide low latency and high-bandwidth connectivity between the instances within a single Availability Zone. For more information, see Placement Groups.
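As a sketch, creating a cluster placement group and launching GPU instances into it with the AWS CLI might look like the following. The group name, AMI ID, and instance count are placeholders, and the RUN=echo guard just prints the commands so the sketch is safe to run without AWS credentials:

```shell
# Hypothetical sketch: cluster placement group for GPU instances.
# RUN defaults to echo so the commands are printed, not executed;
# set RUN="aws" to run them for real against your account.
RUN=${RUN:-echo}

# Create a placement group with the cluster strategy
# (instances land in a single Availability Zone with low-latency networking).
$RUN ec2 create-placement-group --group-name gpu-cluster --strategy cluster

# Launch two G2 instances into the group (ami-xxxxxxxx is a placeholder HVM AMI).
$RUN ec2 run-instances --image-id ami-xxxxxxxx --instance-type g2.2xlarge \
  --count 2 --placement GroupName=gpu-cluster
```

With RUN left at its default, this prints each AWS CLI invocation so you can review it before running anything against your account.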
For more information about the hardware specifications for each Amazon EC2 instance type, see Instance Type Details.
GPU instances currently have the following limitations:
They aren't available in every region.
They must be launched from HVM AMIs.
They can't access the GPU unless the NVIDIA drivers are installed.
They aren't available for use with Amazon DevPay.
We limit the number of instances that you can run. For more information, see How many instances can I run in Amazon EC2? in the Amazon EC2 FAQ. To request an increase in these limits, use the following form: Request to Increase Amazon EC2 Instance Limit.
To help you get started, NVIDIA provides AMIs for GPU instances for Amazon Linux and Windows. These reference AMIs include the NVIDIA driver, which enables full functionality and performance of the NVIDIA GPUs. For a list of AMIs with the NVIDIA driver, see AWS Marketplace (NVIDIA GRID).
You can launch a CG1 instance using any HVM AMI.
You can launch a G2 instance using Windows Server 2012 and Windows Server 2008 R2 AMIs. In addition, you can launch Linux HVM AMIs with the following operating systems: Amazon Linux, SUSE Enterprise Linux, and Ubuntu. If you encounter the following error when launching a G2 instance from an AMI for a different operating system, contact Customer Service or reach out through the Amazon EC2 forum.
Client.UnsupportedOperation: Instances of type 'g2.2xlarge' may not be launched from AMI <ami-id>.
After you launch a G2 instance, you can create your own AMI from the instance. However, if you create a snapshot of the root volume of the instance, register it as an AMI, and then launch a G2 instance from that AMI, you'll get the Client.UnsupportedOperation error. To launch a G2 instance from your own AMI, you must create the AMI from a G2 instance using the console (select the instance, click Actions, and then click Create Image), create-image (AWS CLI), or ec2-create-image (Amazon EC2 CLI).
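For reference, the create-image path can be sketched with the AWS CLI as follows; the instance ID, image name, and description are placeholders, and RUN=echo prints the command instead of executing it:

```shell
# Hypothetical sketch: create an AMI directly from a running G2 instance,
# rather than snapshotting the root volume and registering it yourself.
# RUN defaults to echo so this only prints; set RUN="aws" to execute.
RUN=${RUN:-echo}
$RUN ec2 create-image --instance-id i-xxxxxxxx --name "my-g2-ami" \
  --description "AMI created from a G2 instance"
```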
A GPU instance must have the appropriate NVIDIA driver. The NVIDIA driver you install must be compiled against the kernel that you intend to run on your instance.
Amazon provides updated and compatible builds of the NVIDIA kernel drivers for each official kernel upgrade. If you decide to use a different NVIDIA driver version than the one Amazon provides, or decide to use a kernel that's not an official Amazon build, you must uninstall the Amazon-provided NVIDIA packages from your system to avoid conflicts with the versions of the drivers you are trying to install.
Use this command to uninstall Amazon-provided NVIDIA packages:
$ sudo yum erase nvidia cudatoolkit
The Amazon-provided CUDA toolkit package has dependencies on the NVIDIA drivers. Uninstalling the NVIDIA packages erases the CUDA toolkit. You must reinstall the CUDA toolkit after installing the NVIDIA driver.
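The replacement order above can be sketched as a short sequence; the installer file name is an assumption (it varies by driver version), and RUN=echo prints each step instead of executing it:

```shell
# Hypothetical sketch of replacing the Amazon-provided driver packages.
# RUN defaults to echo so the steps are printed; set RUN="sudo" to execute.
RUN=${RUN:-echo}

# 1. Remove the Amazon-provided driver packages; this also erases the
#    CUDA toolkit, because the toolkit package depends on the driver.
$RUN yum erase -y nvidia cudatoolkit

# 2. Install the driver you downloaded from NVIDIA
#    (the .run file name varies by driver version).
$RUN sh ./NVIDIA-Linux-x86_64-*.run

# 3. Reinstall the CUDA toolkit afterward (for example, using the
#    toolkit installer from NVIDIA's download site).
```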
You can download NVIDIA drivers from http://www.nvidia.com/Download/Find.aspx. Select a driver for the NVIDIA GRID K520 (G2 instances) or Tesla M-Class M2050 (CG1 instances) for Linux 64-bit systems. For more information about installing and configuring the driver, open the ADDITIONAL INFORMATION tab on the download page for the driver on the NVIDIA website and click the README link.
To install the driver for an Amazon Linux AMI
Make sure the kernel-devel package is installed and matches the version of the kernel you are currently running.
$ sudo yum install kernel-devel-`uname -r`
Run the self-install script that you downloaded from NVIDIA to install the NVIDIA driver. For example (the file name varies with the driver version):
$ sudo /home/ec2-user/NVIDIA-Linux-x86_64-*.run
Reboot the instance. For more information, see Reboot Your Instance.
Confirm that the driver is functional. The response for the following command lists the installed NVIDIA driver version and details about the GPUs.
$ /usr/bin/nvidia-smi -q -a
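As a quick sanity check, you can pull the driver version out of that report. The sketch below parses a canned sample of nvidia-smi output; the sample text and version number are made up for illustration, and on a real GPU instance you would pipe the live command instead:

```shell
# Hypothetical sample of `nvidia-smi -q -a` output; the version shown is
# illustrative only. On an instance, use: /usr/bin/nvidia-smi -q -a | awk ...
sample='Driver Version                      : 331.00
Attached GPUs                       : 1'

# Extract the value after the "Driver Version" label, splitting on the
# colon and its surrounding spaces.
version=$(printf '%s\n' "$sample" | awk -F' *: *' '/Driver Version/ {print $2}')
echo "Driver version: $version"
```

If the command returns no driver version at all, the driver is not loaded and you should revisit the installation steps above.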
To install the NVIDIA driver on your Windows instance, log on to your instance as Administrator using Remote Desktop. You can download NVIDIA drivers from http://www.nvidia.com/Download/Find.aspx; select a driver for the NVIDIA GRID K520 (G2 instances) or Tesla M-Class M2050 (CG1 instances) for your version of Windows Server. Open the folder where you downloaded the driver and double-click the installation file to launch it. Follow the instructions to install the driver, and reboot your instance as required. To verify that the GPU is working properly, check Device Manager.
When using Remote Desktop, GPUs that use the WDDM driver model are replaced with a non-accelerated Remote Desktop display driver. In order to access your GPU hardware, you must use a different remote access tool, such as VNC. You can also use one of the GPU AMIs from the AWS Marketplace because they provide remote access tools that support 3-D acceleration.