wb resource create dataproc-cluster

Name

wb-resource-create-dataproc-cluster - Add a controlled GCP Dataproc cluster resource with Jupyter. For a detailed explanation of parameters, see https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#Cluster

Synopsis

wb resource create dataproc-cluster [--quiet] [--autoscaling-policy=<autoscalingPolicy>] [--bucket=<configBucket>] [--cluster-id=<clusterId>] [--description=<description>] [--format=<format>] [--idle-delete-ttl=<idleDeleteTtl>] [--image-version=<imageVersion>] [--region=<region>] [--software-framework=<softwareFrameworkType>] [--temp-bucket=<tempBucket>] [--workspace=<id>] [--components=<components>[,<components>…]]… [--initialization-actions=<initializationActions>[,<initializationActions>…]]… [-M=<String=String>[,<String=String>…]]… [--properties=<String=String>[,<String=String>…]]… (--id=<id>) [[--manager-machine-type=<machineType>] [--manager-image-uri=<imageUri>] [[--manager-accelerator-type=<type>] [--manager-accelerator-count=<count>]] [[--manager-boot-disk-type=<bootDiskType>] [--manager-boot-disk-size=<bootDiskSizeGb>] [--manager-num-local-ssds=<numLocalSsds>] [--manager-local-ssd-interface=<localSsdInterface>]]] [[--num-workers=<numNodes>] [--worker-machine-type=<machineType>] [--worker-image-uri=<imageUri>] [[--worker-accelerator-type=<type>] [--worker-accelerator-count=<count>]] [[--worker-boot-disk-type=<bootDiskType>] [--worker-boot-disk-size=<bootDiskSizeGb>] [--worker-num-local-ssds=<numLocalSsds>] [--worker-local-ssd-interface=<localSsdInterface>]]] [[--num-secondary-workers=<numNodes>] [--secondary-worker-machine-type=<machineType>] [--secondary-worker-image-uri=<imageUri>] [--secondary-worker-type=<type>] [[--secondary-worker-accelerator-type=<type>] [--secondary-worker-accelerator-count=<count>]] [[--secondary-worker-boot-disk-type=<bootDiskType>] [--secondary-worker-boot-disk-size=<bootDiskSizeGb>] [--secondary-worker-num-local-ssds=<numLocalSsds>] [--secondary-worker-local-ssd-interface=<localSsdInterface>]]]

Description

Add a controlled GCP Dataproc cluster resource with Jupyter. For a detailed explanation of parameters, see https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#Cluster
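
For example, a minimal invocation needs only the resource ID; the ID, region, and description values below are illustrative placeholders:

    # Create a controlled Dataproc cluster resource with the default node configuration.
    wb resource create dataproc-cluster \
      --id=my-dataproc-cluster \
      --region=us-central1 \
      --description="Example analysis cluster"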

Options

  • --id=<id>
    ID of the resource, scoped to the workspace. Only use letters, numbers, dashes, and underscores.

  • --description=<description>
    Description of the resource.

  • --workspace=<id>
    Workspace id to use for this command only.

  • --format=<format>
    Set the format for printing command output: JSON, TEXT. Defaults to the config format property.

    Default: null
    
  • --quiet
    Suppress interactive prompt.

  • --cluster-id=<clusterId>
    The unique name to give to the Dataproc cluster. Cannot be changed later. The cluster name must be 1 to 52 characters long and contain only lowercase letters, numbers, and dashes. The first character must be a lowercase letter and the last character cannot be a dash. If not specified, a value will be auto-generated for you.

  • --region=<region>
    The Google Cloud region of the cluster.

  • --image-version=<imageVersion>
    The Dataproc image version, which determines the versions of the software components bundled with the cluster. See https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-version-clusters for the full list of image versions and their bundled software components.

  • --initialization-actions=<initializationActions>[,<initializationActions>…]
    A comma-separated list of initialization scripts to run during cluster creation. Each path must be a URL or Cloud Storage path, e.g. 'gs://path-to-file/file-name'.

  • --components=<components>[,<components>…]
    Comma-separated list of optional Dataproc components to enable on the cluster.

  • --properties=<String=String>[,<String=String>…]
    Properties in the format key=value.

  • --software-framework=<softwareFrameworkType>
    Software framework for the cluster. Available frameworks are: NONE, HAIL.

    Default: NONE
    
  • --bucket=<configBucket>
    Resource name of the cluster staging bucket. If not specified, a default staging bucket will be created.

  • --temp-bucket=<tempBucket>
    Resource name of the cluster temp bucket. If not specified, a default temp bucket will be created.

  • --autoscaling-policy=<autoscalingPolicy>
    Autoscaling policy id to attach to the cluster.

  • -M, --metadata=<String=String>[,<String=String>…]
    Custom metadata to apply to this cluster.

    Specify multiple metadata entries either by repeating the flag, e.g. --metadata="key1=value1" -Mkey2=value2, or as a comma-separated list, e.g. --metadata=key1=value1,key2=value2. See the example after this list.

    By default, the Workbench CLI server (terra-cli-server=[CLI_SERVER_ID]) and the Workbench workspace id (terra-workspace-id=[WORKSPACE_ID]) are set.

  • --idle-delete-ttl=<idleDeleteTtl>
    The duration of idle time after which the cluster is automatically deleted.
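
The sketch below combines several of the options above; the cluster ID, metadata keys, and property values are illustrative placeholders, and the property key follows Dataproc's prefix:key=value naming convention:

    # Attach custom metadata and a Spark property, and enable the Hail framework.
    wb resource create dataproc-cluster \
      --id=my-analysis-cluster \
      --metadata="team=genomics" -Menv=dev \
      --properties="spark:spark.executor.memory=4g" \
      --software-framework=HAIL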

Manager node configurations

  • --manager-machine-type=<machineType>
    The machine type of the manager node.

    Default: n2-standard-2
    
  • --manager-image-uri=<imageUri>
    The image URI for the manager node.

  • --manager-accelerator-type=<type>
    The type of accelerator for the manager.

  • --manager-accelerator-count=<count>
    The count of accelerators for the manager.

    Default: 0
    
  • --manager-boot-disk-type=<bootDiskType>
    The type of boot disk for the manager node.

  • --manager-boot-disk-size=<bootDiskSizeGb>
    The size of the boot disk in GB for the manager node.

    Default: 500
    
  • --manager-num-local-ssds=<numLocalSsds>
    The number of local SSDs for the manager node.

    Default: 0
    
  • --manager-local-ssd-interface=<localSsdInterface>
    The interface type of local SSDs for the manager node.

    Default: scsi
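
A sketch of overriding the manager node defaults; the machine type and boot disk values are illustrative placeholders:

    # Use a larger manager node with an SSD boot disk.
    wb resource create dataproc-cluster \
      --id=big-manager-cluster \
      --manager-machine-type=n2-standard-8 \
      --manager-boot-disk-type=pd-ssd \
      --manager-boot-disk-size=1000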
    

Worker node configurations

  • --num-workers=<numNodes>
    The number of worker nodes.

    Default: 2
    
  • --worker-machine-type=<machineType>
    The machine type of the worker node.

    Default: n2-standard-2
    
  • --worker-image-uri=<imageUri>
    The image URI for the worker node.

  • --worker-accelerator-type=<type>
    The type of accelerator for the worker.

  • --worker-accelerator-count=<count>
    The count of accelerators for the worker.

    Default: 0
    
  • --worker-boot-disk-type=<bootDiskType>
    The type of boot disk for the worker node.

  • --worker-boot-disk-size=<bootDiskSizeGb>
    The size of the boot disk in GB for the worker node.

    Default: 500
    
  • --worker-num-local-ssds=<numLocalSsds>
    The number of local SSDs for the worker node.

    Default: 0
    
  • --worker-local-ssd-interface=<localSsdInterface>
    The interface type of local SSDs for the worker node.

    Default: scsi
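
A sketch of sizing the primary workers; the node count, machine type, and local SSD settings are illustrative placeholders:

    # Run four workers, each with one NVMe local SSD.
    wb resource create dataproc-cluster \
      --id=scaled-worker-cluster \
      --num-workers=4 \
      --worker-machine-type=n2-highmem-8 \
      --worker-num-local-ssds=1 \
      --worker-local-ssd-interface=nvme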
    

Secondary worker node configurations

  • --num-secondary-workers=<numNodes>
    The number of secondary worker nodes.

    Default: 0
    
  • --secondary-worker-machine-type=<machineType>
    The machine type of the secondary worker node.

    Default: n2-standard-2
    
  • --secondary-worker-image-uri=<imageUri>
    The image URI for the secondary worker node.

  • --secondary-worker-type=<type>
    The type of the secondary worker. Valid values are preemptible, non-preemptible, and spot.

    Default: spot
    
  • --secondary-worker-accelerator-type=<type>
    The type of accelerator for the secondary worker.

  • --secondary-worker-accelerator-count=<count>
    The count of accelerators for the secondary worker.

    Default: 0
    
  • --secondary-worker-boot-disk-type=<bootDiskType>
    The type of boot disk for the secondary worker node.

  • --secondary-worker-boot-disk-size=<bootDiskSizeGb>
    The size of the boot disk in GB for the secondary worker node.

    Default: 500
    
  • --secondary-worker-num-local-ssds=<numLocalSsds>
    The number of local SSDs for the secondary worker node.

    Default: 0
    
  • --secondary-worker-local-ssd-interface=<localSsdInterface>
    The interface type of local SSDs for the secondary worker node.

    Default: scsi
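
A sketch adding spot secondary workers for burst capacity; the node counts and machine type are illustrative placeholders:

    # Keep two primary workers and add ten spot secondary workers.
    wb resource create dataproc-cluster \
      --id=burst-cluster \
      --num-workers=2 \
      --num-secondary-workers=10 \
      --secondary-worker-type=spot \
      --secondary-worker-machine-type=n2-standard-4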
    
