How to use All of Us GPUs


Graphics Processing Units (GPUs)

The All of Us Researcher Workbench now supports the use of Graphics Processing Units (GPUs) when using Jupyter notebook cloud environments. GPUs are a kind of computer processor that can dramatically accelerate certain tasks, providing faster turnaround and cost savings compared to virtual machines (VMs) that only use traditional CPUs.

The base Docker image from which the All of Us cloud environment images are extended is based on Google's Deep Learning platform. This image installs the NVIDIA drivers necessary for GPU support within the All of Us cloud analysis environment.
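As a quick sanity check, the line below is a minimal sketch (assuming a GPU-enabled environment where the NVIDIA utilities are on the PATH, as the Deep Learning base image typically provides) for confirming the drivers from a notebook cell:

# "!" runs a shell command from a Jupyter cell.
# Prints the detected GPUs, the driver version, and the supported CUDA version.
!nvidia-smi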

How to add GPUs to your Cloud Environment

To add GPUs to your cloud environment's compute configuration, navigate to a workspace, click the Cloud analysis environment button in the right navigation bar, then click the Enable GPUs checkbox in the Cloud compute profile section:

[Screenshot: the Cloud compute profile section with the Enable GPUs checkbox]

If you already have an existing environment, the checkbox will be unavailable, and you must first delete that environment. You can do this by clicking the delete button at the bottom of the Cloud analysis environment panel. The GPU checkbox will then be available when you create a fresh environment.

Once you check Enable GPUs, you can choose your desired GPU type and the number of GPUs:

[Screenshot: selecting the GPU type and the number of GPUs]

 

If you want to modify the GPU configuration of an existing environment (for example, increase the number of GPUs or change the GPU type), your environment will be recreated to apply that change.

You can read about GPUs on Google Compute Engine in more depth here; that documentation outlines the number of GPUs, GPU memory, available vCPUs, and available memory for each GPU model. You can see more detail about GPU pricing here.
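As an illustrative aside, the snippet below is a sketch (assuming gcloud is installed and authenticated in your environment and your project permits Compute Engine API calls; the zone name is only an example) for listing the GPU types offered in a given zone from a notebook cell:

# List the accelerator (GPU) types available in an example zone.
# Requires gcloud access; the zone name is an illustrative placeholder.
!gcloud compute accelerator-types list --filter="zone:us-central1-a"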

GPU Limitations

  • As with other interactive analysis compute resources in Terra, only the n1 family of machines is supported.

  • All of Us only supports GPU use with the standard VM. If you select the Hail Genomic Analysis recommended environment or switch from a standard VM to a Dataproc Cluster, you will no longer have the option to configure GPUs.

  • You may experience a runtime creation failure in one of the following circumstances:

    • You run up against your quota limitation. See this article to find out how to check and change your quotas to fix this issue.

    • You see a ZONE_RESOURCE_POOL_EXHAUSTED error, in which case you can wait a day or two and try again.

How to check that you've successfully enabled GPUs

You can check that you have successfully enabled GPUs by running the following code in a Jupyter notebook, using PyTorch:

# Install PyTorch and torchvision in the notebook (the "!" runs a shell command)
!pip install torch torchvision

import torch

print(torch.cuda.is_available())      # True if a GPU is detected
print(torch.version.cuda)             # CUDA version PyTorch was built with
print(torch.cuda.current_device())    # index of the active GPU device
print(torch.cuda.get_device_name(0))  # name of GPU 0
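Once the checks above pass, the snippet below is a minimal sketch of running a computation on the GPU with PyTorch (the matrix sizes are arbitrary and chosen only for illustration):

import torch

# Use the GPU if it is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Allocate two random matrices directly on the chosen device
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# The matrix multiplication runs on the GPU when device is "cuda"
c = a @ b
print(c.device)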
