Troubleshooting

Troubleshooting issues with your Verily Workbench configuration.

This article lists tips for diagnosing and resolving common issues.

Workbench CLI

The Workbench CLI rename

The Workbench CLI has been renamed from terra to wb. In Verily Workbench cloud environments, the wb executable is also copied to terra, so you may use either name for the tool; backward compatibility is preserved.

When you install the CLI on a local machine, only the wb executable is created; however, if you like, you can copy it to terra and add that copy to your path as well. This may be helpful if you have existing scripts or notebooks that reference terra.
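
For example, a minimal sketch, assuming wb is already on your PATH and that $HOME/bin exists and is on your PATH (adjust the paths to match your setup):

cp "$(which wb)" "$HOME/bin/terra"   # copy the wb executable under the name terra
terra status                         # the copy now works anywhere wb does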

Check and set the current workspace

Especially if you have created multiple workspaces, it is useful to double-check your current workspace before you run any Workbench CLI commands that affect it (such as creating a new resource).

Run wb status to see information about your current workspace. If one is not set, or if you want to change workspaces, run:

wb workspace set --id=<your-workspace-id>

Note: You can also find this command on the Overview page for a given workspace in the Verily Workbench web UI.

To see a list of your workspaces and their IDs, run:

wb workspace list
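
Putting these together, a typical check-and-set sequence looks like the following (the ID is a placeholder, as above):

wb status                                   # show the current workspace, if any
wb workspace list                           # list your workspaces and their IDs
wb workspace set --id=<your-workspace-id>   # point the CLI at the workspace you want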

Clear context

If you are having credentials-related issues, it may help to clear the context file and all credentials. This will require you to log in and select a workspace again.

cd $HOME/.workbench
rm context.json
rm StoredCredential

You’ll then need to rerun wb auth login and select a workspace as described above.
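
Putting the steps together, a full reset looks like this; the context.json.bak backup is optional and the filename is illustrative:

cd $HOME/.workbench
cp context.json context.json.bak            # optional: keep a copy of the old context
rm context.json StoredCredential            # clear the context file and credentials
wb auth login                               # log in again
wb workspace set --id=<your-workspace-id>   # reselect your workspace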

Manual uninstall of the Workbench CLI

There is not yet an uninstaller. Instead, you can clear the entire context directory, which includes the context file, all credentials, and all JARs. You will then need to reinstall the CLI.

rm -R $HOME/.workbench

Workspace buckets mounted in cloud environments

See the “Troubleshooting” section in Accessing workspace files and folders from your Cloud Environment for issues with bucket automounting.

Dataproc (managed Spark) clusters

If you have any secondary workers running as part of a Dataproc cluster, the cluster cannot be stopped (paused). You will first need to edit your cluster configuration, as described in the next section, to set the number of secondary workers to 0. Note that if the cluster is in the “UPDATING” state (e.g., if autoscaling is adding or removing nodes), you will need to wait until the update finishes before making edits.
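
If you manage the underlying cluster directly with the gcloud CLI rather than through the Workbench configuration flow described in the next section, a sketch of scaling the secondary workers to 0 might look like the following; the cluster name and region are placeholders:

gcloud dataproc clusters update <your-cluster-name> \
    --region=<your-region> \
    --num-secondary-workers=0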

Last Modified: 12 May 2024