Compute profile configuration options
Prior reading: Cloud apps overview
Purpose: This document provides detailed instructions for customizing the compute resources allocated to a cloud app through the Workbench UI.
Note: This article does not apply to Dataproc cluster apps. To learn more about configuring a Dataproc cluster app, please see Use Dataproc (Spark) and Hail on Verily Workbench.
Introduction
When creating a cloud app, you have the ability to customize it with the following compute options:
- Virtual machine type
- Number and type of CPUs
- Number and type of GPUs
- Memory
- Disk storage
The configurations you select factor into the cost of running your app. These values are configurable at the time of app creation via the Workbench UI or the Workbench CLI (command-line interface). You can also edit the compute profile configurations after an app has been created if your resource needs have changed.
Note
This article focuses on setting and updating compute options via the Workbench UI and the Workbench CLI. For information about updating compute options via the Google Cloud console, see Manage your apps in Google Cloud console.
Understand compute options
Virtual machines
A virtual machine (VM) is an emulation of a physical computer and can perform the same functions, such as running applications and other software. VM instances are created from pools of resources in cloud data centers. You can specify your VM's geographic region, compute, and storage resources to meet your job's requirements.
You can create and destroy virtual machines at will to run applications of interest to you, such as interactive analysis environments or data processing algorithms. Virtual machines underlie Verily Workbench’s cloud apps.
By default, Workbench uses N1 high-memory machine types. To learn more, see the N1 machine types documentation.
Be aware
The type of VM you select determines the number of CPUs and the amount of memory used, so it can have a major effect on your app costs.
CPUs (central processing units)
The central processing unit (CPU), or simply processor, can be considered the “brain” of a computer. Every computational machine will have at least one CPU, which is connected to every part of the computer system. It’s the operational center that receives, executes, and delegates the instructions received from programs. CPUs also handle computational calculations and logic. Increasing the number of CPUs accelerates the processing of these tasks. Other types of processors (GPUs or TPUs) may be better suited for processing specialized tasks, such as parallel computing and machine learning.
If you think you'll need more than two CPUs, you can choose a different machine type via the Workbench UI at the time of creation.
If you need to modify the machine type after you've created the app, you can update it via the Editing dialog or Google Cloud console as described above.
In some Google Cloud Platform (GCP) zones, the N1 machine type families may be temporarily unavailable due to capacity constraints. If the app fails to create, you can either try a different zone or switch to an N2 machine type as a substitute. If the app fails to restart due to zonal capacity constraints, you can't move the existing VM to a different zone, but you can try switching to a different machine type. To create apps with machine types other than N1 or N2, you can use the Workbench CLI.
Key differences between N1 & N2
Both are general-purpose machine types. N2 is the second-generation machine series, powered by the 2nd Generation Intel Xeon Scalable Processor (Cascade Lake) CPU platform. N2 offers slightly higher memory bandwidth per vCPU, at slightly higher hourly rates, and has higher availability in some zones. However, only N1 machines support GPU workloads with NVIDIA GPUs.
GPUs (graphical processing units)
A graphical processing unit (GPU) is a specialized processor that excels at parallel computing, which means processing many tasks at once. While a central processing unit (CPU) must process tasks one by one, a GPU can split complex tasks into many pieces and work through them simultaneously. GPUs were traditionally used to accelerate video rendering, but their parallel computing ability also makes them ideal for tasks such as sequence alignment, AI, and machine learning.
GPUs can be added when you create your app via the Workbench UI or Workbench CLI. You can also change GPUs (e.g., switch to a different GPU type) via the Workbench UI, Workbench CLI, or the Google Cloud console after the app has been created.
Be aware
Use of GPUs increases the hourly running cost of a VM, which makes it particularly important to stop a GPU-enabled app when you're not using it.
You can learn more about GPUs on Google Compute Engine, including details about GPU pricing, in the Google Cloud documentation.
Memory
Memory, also known as random access memory (RAM), is where programs and data that are currently in use are temporarily stored. The central processing unit (CPU) receives instructions and data from programs, which are kept in the computer’s memory while being used. Once the instructions are completed, or the program is no longer in use, the memory is freed up. If the computer system doesn’t have enough memory for all of the CPU’s instructions, the system’s performance will diminish and slow down. While the CPU is commonly thought of as a computer’s brain, you can think of memory as the attention span.
The memory amount scales with the number of CPUs. Configure the memory via the machine type selection.
Disk storage
The data disk for your cloud app is mounted at /var/lib/docker to persist container images and volumes. By default, your app's data disk size is 500 GB for Compute Engine instances.
You can set the disk size when creating a new app via the UI. The data disk can be as small as 50 GB, but the minimum recommended size is 100 GB. The maximum allowed is 65,536 GB (64 TB).
If you select a shared configuration template in the Choose configuration step, the app will inherit the specified data disk size. However, this amount can be updated in the Compute options step.
Autostop
Another configurable app option is an autostop feature that automatically stops running apps after a specified idle time. The default idle time is four hours, but this can be changed at any time.
Set compute options for a new app
Virtual machines & CPUs
New apps based on the JupyterLab, R Analysis Environment, and Visual Studio Code apps include an N1 high-memory VM, two CPUs, and 13 GB of total memory by default. The machine type for NVIDIA apps depends on the GPU type selected.
You can select a different machine type if you need more or less computing power:
On the Compute options step during app creation, choose an option from the Machine type dropdown. You can see the number of CPUs and memory for each machine type. The Cost estimate will also update when you select an option.
Using the wb app create CLI command, you can specify the VM type via the --machine-type option.
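For illustration, a create command might look like the following. Only the --machine-type flag is documented above; the APP_TYPE placeholder and the machine type value shown are assumptions, since the exact arguments depend on your Workbench CLI version:

```shell
# Sketch only: replace APP_TYPE with the identifier your CLI version expects.
# n1-highmem-4 is an assumed example value (an N1 high-memory machine type).
wb app create APP_TYPE \
  --machine-type=n1-highmem-4
```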
GPUs
With the exception of NVIDIA apps, GPUs are not included in new apps by default. However, if you anticipate heavier workloads, you can add them when you create your app.
On the Compute options step during app creation, select GPUs to show the GPU configuration fields. Select a GPU type, number of GPUs, and machine type. The Cost estimate will update when you configure your GPUs.
Using the wb app create command, you can add GPUs via the --accelerator-type and --accelerator-core-count options.
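As a sketch, a GPU-enabled create command might look like this. The two accelerator flags are documented above; the APP_TYPE placeholder and the GPU type and count values are assumptions:

```shell
# Assumed example values: a single NVIDIA T4 GPU.
# Recall that only N1 machine types support NVIDIA GPU workloads.
wb app create APP_TYPE \
  --accelerator-type=nvidia-tesla-t4 \
  --accelerator-core-count=1
```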
Note
Available GPU types vary by region. You may have to change your workspace's default resource region if the GPU type you need is unavailable. See GPU regions and zones for more details.
Disk storage
New Compute Engine apps come with 500 GB of disk storage space by default. For most apps, we recommend a minimum of 100 GB. However, disk storage can be as small as 50 GB or as large as 65,536 GB (64 TB).
When selecting the disk size, consider both the amount of data you’ll download for analysis and the size of the container images. For example, if your analysis requires downloading 50 GB of data and your container image is 20 GB, allocate a data disk of at least 100 GB to allow for intermediate files and Docker overhead.
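The sizing logic above can be sketched as simple arithmetic. The 30 GB headroom figure is an assumption standing in for intermediate files and Docker overhead:

```shell
data_gb=50      # data downloaded for analysis
image_gb=20     # container image size
headroom_gb=30  # assumed allowance for intermediate files and Docker overhead
disk_gb=$(( data_gb + image_gb + headroom_gb ))
echo "Allocate at least ${disk_gb} GB"   # prints "Allocate at least 100 GB"
```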
Be aware
Data disk size can only be increased after an app is created. See Disk storage below for more details.
On the Compute options step during app creation, enter the desired data disk size. The Cost estimate will update accordingly.
Using the wb app create command, you can set the data disk size via the --data-disk-size option.
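For example (the size value is an assumption, and the option is presumed to take gigabytes, matching the UI; APP_TYPE is a placeholder):

```shell
# Assumed example: request a 200 GB data disk at creation time.
wb app create APP_TYPE \
  --data-disk-size=200
```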
Autostop
The autostop idle time is set to four hours by default. This can be changed to any length from 1 hour to 14 days. You can also disable the autostop feature.
On the Compute options step during app creation, enter the desired autostop idle time. Deselect the checkbox if you'd like to disable autostop.
Using the wb app create command, you can set the autostop idle time via the --idle-timeout-seconds option. If this option is not included, the app will have a default autostop idle time of four hours.
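A sketch of setting a shorter idle time at creation (the value shown is an assumed example; APP_TYPE is a placeholder):

```shell
# Assumed example: set autostop to two hours (2 * 3600 = 7200 seconds).
wb app create APP_TYPE \
  --idle-timeout-seconds=7200
```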
Update compute options for an existing app
You can use the Workbench UI or Workbench CLI to update the compute options of your app.
To update your app's configurations in the Workbench UI, select Edit in the action menu of the app card. This will open the Editing dialog.
You can update the following compute options via the Editing dialog:
- Machine type
- GPUs
- Data disk size
You can also change the autostop idle time in the Editing dialog.
Be aware
The VM must be stopped to edit machine type, GPUs, and data disk size.
Virtual machines & CPUs
By default, N1 standard, high memory, and high CPU machine types can be selected.
In the Editing dialog, choose an option from the Machine type dropdown. You can see the number of CPUs and memory for each machine type. The Cost estimate will update to show you how much more or less the new machine type would cost.
If the current machine type is not N1 standard, high memory, or high CPU, machine types with the same series and predefined configuration will also be listed.
Using the wb app update command, you can change the machine type with the --new-machine-type option.
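For illustration (APP_NAME and the machine type value are assumptions; only the --new-machine-type flag is documented above):

```shell
# Assumed example: move a stopped app to an N2 high-memory machine type.
# The new machine type must be compatible with the app's current one.
wb app update APP_NAME \
  --new-machine-type=n2-highmem-4
```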
When updating via the Workbench CLI, the new machine type is not constrained to be within the current series or N1 series. However, it must be compatible with the app’s current machine type. See this machine series comparison to learn more about machine type compatibility.
GPUs
For heavier workloads, you can configure GPUs on your app. Select GPUs in the Compute section to configure GPU options. You can select the GPU type, the number of GPUs, and the machine type. The available machine types are determined by the selected GPU type and GPUs.
You can only use the Workbench UI to update GPUs in an existing app. In the Editing dialog, select GPUs to see GPU types that are compatible with your app's machine type and your workspace's region.
Disk storage
Once an app has been created, the data disk size can only be increased. If you need a smaller data disk, create a new app.
The maximum allowed disk size is 65,536 GB (64 TB).
When the VM restarts, it will be configured to use the additional disk space. The configuration may take several minutes. The app can be accessed while this takes place.
In the Editing dialog, enter the new data disk size. The Cost estimate will update accordingly.
Using the wb app update command, you can change the data disk size with the --new-data-disk-size option.
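A sketch of growing the disk (APP_NAME and the size value are assumptions; the option is presumed to take gigabytes, matching the UI):

```shell
# Assumed example: grow the data disk to 1,000 GB.
# Disk size can only be increased, never decreased, on an existing app.
wb app update APP_NAME \
  --new-data-disk-size=1000
```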
Autostop
Adjust the autostop idle time via the Workbench UI or Workbench CLI.
In the Editing dialog, enter the desired autostop idle time. Deselect the checkbox if you'd like to disable autostop.
Using the wb app update command, you can set the autostop idle time via the --new-idle-timeout-seconds option.
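For example (APP_NAME and the value shown are assumptions; only the --new-idle-timeout-seconds flag is documented above):

```shell
# Assumed example: extend autostop idle time to eight hours (8 * 3600 = 28800 seconds).
wb app update APP_NAME \
  --new-idle-timeout-seconds=28800
```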
Last Modified: 27 October 2025