wb workflow job run

Name

wb-workflow-job-run - Start a job.

Synopsis

wb workflow job run [--description=<description>] [--format=<format>] [--job-id=<jobId>] --output-bucket-id=<outputBucketId> [--output-path=<outputPath>] --workflow=<id> [--workspace=<id>] [--inputs=<String=Object>[, <String=Object>...]]... [--batch-input-bucket-id=<inputsBucketId> --batch-input-csv-path=<inputsPath> [--row-selection=<rowSelection>] [--column-mapping=<String=String>[,<String=String>...] [--column-mapping=<String=String>[,<String=String>...]]... | --column-mapping-uri=<columnMappingUri>]] [[--delete-intermediate-outputs] [--read-from-cache] [--write-to-cache]] [[--storage-type=<storageType>] [--storage-capacity=<storageCapacity>]]

Description

Start a job.

Options

  • --batch-input-bucket-id=<inputsBucketId>
    ID of the bucket where the inputs are read from, scoped to the workspace.

  • --batch-input-csv-path=<inputsPath>
    Path to the input CSV file in the bucket.

  • --column-mapping=<String=String>[,<String=String>...]
    Column mapping for the input CSV file, e.g., my_workflow.key1=col1,my_subworkflow.key2=col2. The left-hand value is the workflow input field, and the right-hand value is the column name in the CSV file.
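
    As a sketch, a batch run driven by a CSV file might look like the following. All names here (my_workflow, my-input-bucket, my-output-bucket, the CSV path, and the column names) are placeholder values, not real resources:

    ```shell
    # Hypothetical batch run: read rows from a CSV in a workspace bucket and
    # map CSV columns onto workflow input fields. All IDs are placeholders.
    wb workflow job run \
      --workflow=my_workflow \
      --output-bucket-id=my-output-bucket \
      --batch-input-bucket-id=my-input-bucket \
      --batch-input-csv-path=batches/samples.csv \
      --column-mapping=my_workflow.sample_id=sample_id,my_workflow.bam=bam_path
    ```

    Here each selected CSV row produces one workflow invocation, with the left-hand side of each --column-mapping entry naming the workflow input field and the right-hand side naming the CSV column that supplies its value.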

  • --column-mapping-uri=<columnMappingUri>
    gs:// URI of a JSON file that defines the mapping from workflow input keys to CSV column names.

  • --delete-intermediate-outputs
    Delete the job's intermediate output files.

  • --description=<description>
    Description for the job.

  • --format=<format>
    Set the format for printing command output. Defaults to the config format property.

    Valid values: JSON, TEXT

    Default: null
    
  • --inputs=<String=Object>[,<String=Object>...]
    Inputs to the workflow, e.g., --inputs=key=value. Separate multiple inputs with commas.
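
    For a single (non-batch) run, inputs can be passed inline with --inputs. This is a sketch using placeholder workflow, bucket, and input names:

    ```shell
    # Hypothetical single run with two inline inputs; IDs are placeholders.
    wb workflow job run \
      --workflow=my_workflow \
      --output-bucket-id=my-output-bucket \
      --inputs=my_workflow.sample_id=NA12878,my_workflow.min_depth=10 \
      --description="smoke test run"
    ```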

  • --job-id=<jobId>
    Display name for the job.

  • --output-bucket-id=<outputBucketId>
    ID of the bucket where the output is written to, scoped to the workspace.

  • --output-path=<outputPath>
    Path to the outputs of the workflow in the bucket.

  • --read-from-cache
    Read from cache.

  • --row-selection=<rowSelection>
    Rows of the input CSV file to process, given as a range, e.g., 1:100, 10: (row 10 onward), or :100 (through row 100).

  • --storage-capacity=<storageCapacity>
    Storage capacity in GB. Only specify if storage type is STATIC.

    Default: 500
    
  • --storage-type=<storageType>
    Storage type.

    Valid values: DYNAMIC, STATIC

    Default: DYNAMIC
    
  • --workflow=<id>
    Workflow ID.

  • --workspace=<id>
    Workspace ID to use for this command only.

  • --write-to-cache
    Write to cache.

Last Modified: 3 December 2025