wb workflow job run

Name

wb-workflow-job-run - Start a job.

Synopsis

wb workflow job run [--description=<description>] [--format=<format>] [--job-id=<jobId>] --output-bucket-id=<outputBucketId> [--output-path=<outputPath>] --workflow=<id> [--workspace=<id>] [[[--inputs=<inputs>] [--inputs-uri=<inputsUri>]] [--batch-input-bucket-id=<inputsBucketId> --batch-input-csv-path=<inputsPath> [--row-selection=<rowSelection>] [--column-mapping=<String=String>[,<String=String>...] [--column-mapping=<String=String>[,<String=String>...]]... | --column-mapping-uri=<columnMappingUri>]]] [[--delete-intermediate-outputs] [--read-from-cache] [--write-to-cache]] [[--storage-type=<storageType>] [--storage-capacity=<storageCapacity>] [--enable-call-caching]]

Description

Start a job.
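
A minimal invocation sketch: only --workflow and --output-bucket-id are required, and every ID below is an illustrative placeholder.

```shell
# Start a job from a workflow, writing results to a workspace bucket.
# "my-workflow-id" and "my-output-bucket" are placeholder IDs.
wb workflow job run \
  --workflow=my-workflow-id \
  --output-bucket-id=my-output-bucket \
  --description="nightly QC run"
```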

Options

  • --delete-intermediate-outputs
    Delete the job's intermediate output files.

  • --description=<description>
    Description for the job.

  • --enable-call-caching
    (AWS Only) Enable call caching to reuse previous results with matching inputs.

  • --format=<format>
    Set the format for printing command output. Defaults to the config format property.

    Valid values: JSON, TEXT

    Default: null
    
  • --job-id=<jobId>
    Display name for the job.

  • --output-bucket-id=<outputBucketId>
    ID of the bucket, scoped to the workspace, where the output is written.

  • --output-path=<outputPath>
    Path to the outputs of the workflow in the bucket.

  • --read-from-cache
    (GCP Only) Read from cache.

  • --storage-capacity=<storageCapacity>
    Storage capacity in GB. Only specify if storage type is STATIC.

    Default: 500
    
  • --storage-type=<storageType>
    Storage type.

    Valid values: DYNAMIC, STATIC

    Default: DYNAMIC
    
  • --workflow=<id>
    Workflow ID.

  • --workspace=<id>
    Workspace ID to use for this command only.

  • --write-to-cache
    (GCP Only) Write to cache.
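
The storage and caching flags above combine with a normal invocation; a sketch for a GCP run (workflow and bucket IDs are placeholders):

```shell
# Fixed 1000 GB disk; reuse cached results and write new ones to the cache.
wb workflow job run \
  --workflow=my-workflow-id \
  --output-bucket-id=my-output-bucket \
  --storage-type=STATIC \
  --storage-capacity=1000 \
  --read-from-cache \
  --write-to-cache
```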

WDL Workflow Options:

Inputs:

  • --inputs=<inputs>
    Inputs to the workflow as a JSON object. Example: --inputs '{"key1":"value1","key2":123}'
  • --inputs-uri=<inputsUri>
    URI (gs:// or s3://) of a JSON file where the inputs are defined.
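
These two options are alternatives: pass a small input set inline, or point at a JSON file in a bucket. A sketch with placeholder IDs, keys, and paths:

```shell
# Inline JSON inputs (keys are workflow input names).
wb workflow job run \
  --workflow=my-workflow-id \
  --output-bucket-id=my-output-bucket \
  --inputs '{"my_workflow.sample_name":"s1","my_workflow.read_count":123}'

# Equivalent call reading the same JSON from a bucket object.
wb workflow job run \
  --workflow=my-workflow-id \
  --output-bucket-id=my-output-bucket \
  --inputs-uri gs://my-bucket/inputs.json
```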

Input Table:

  • --batch-input-bucket-id=<inputsBucketId>
    ID of the bucket where the inputs are read from, scoped to the workspace.
  • --batch-input-csv-path=<inputsPath>
    Path to the input CSV file in the bucket.
  • --column-mapping=<String=String>[,<String=String>...]
    Column mapping for the input CSV file, e.g., my_workflow.key1=col1,my_subworkflow.key2=col2. The left-hand value is the workflow input field, and the right-hand value is the column name in the CSV file.
  • --column-mapping-uri=<columnMappingUri>
    URI (gs:// or s3://) of a JSON file where the input-key-to-CSV-column mapping is defined.
  • --row-selection=<rowSelection>
    Row selection for the input CSV file as start:end; either bound may be omitted, e.g., 1:100, 10:, :100
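
A batch sketch, assuming the CSV drives one workflow invocation per selected row; all IDs, paths, and column names below are placeholders:

```shell
# samples.csv (stored in the input bucket) might look like:
#   sample_col,bam_col
#   s1,inputs/s1.bam
#   s2,inputs/s2.bam
wb workflow job run \
  --workflow=my-workflow-id \
  --output-bucket-id=my-output-bucket \
  --batch-input-bucket-id=my-input-bucket \
  --batch-input-csv-path=samples.csv \
  --column-mapping=my_workflow.sample=sample_col,my_workflow.bam=bam_col \
  --row-selection=1:100
```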

Last Modified: 20 April 2026