Creates a node pool for a cluster.

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: container1 --scope <scope> projects locations-clusters-node-pools-create ...
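
For example, passing the default scope explicitly looks like this (the trailing dots stand for the method's remaining arguments):

  container1 --scope https://www.googleapis.com/auth/cloud-platform projects locations-clusters-node-pools-create ...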

Required Scalar Argument

  • <parent> (string)
    • The parent (project, location, cluster name) where the node pool will be created. Specified in the format projects/*/locations/*/clusters/*.
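
For example, a parent for a node pool created in cluster my-cluster, location us-central1 of project my-project would be written as follows (all three names are placeholders):

  projects/my-project/locations/us-central1/clusters/my-cluster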

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case it is advised to set the field-cursor to the data-structure's field to specify values more concisely.

For example, a structure like this:

CreateNodePoolRequest:
  cluster-id: string
  node-pool:
    autoscaling:
      autoprovisioned: boolean
      enabled: boolean
      location-policy: string
      max-node-count: integer
      min-node-count: integer
      total-max-node-count: integer
      total-min-node-count: integer
    best-effort-provisioning:
      enabled: boolean
      min-provision-nodes: integer
    config:
      advanced-machine-features:
        threads-per-core: string
      boot-disk-kms-key: string
      confidential-nodes:
        enabled: boolean
      disk-size-gb: integer
      disk-type: string
      enable-confidential-storage: boolean
      ephemeral-storage-local-ssd-config:
        local-ssd-count: integer
      fast-socket:
        enabled: boolean
      gcfs-config:
        enabled: boolean
      gvnic:
        enabled: boolean
      image-type: string
      kubelet-config:
        cpu-cfs-quota: boolean
        cpu-cfs-quota-period: string
        cpu-manager-policy: string
        insecure-kubelet-readonly-port-enabled: boolean
        pod-pids-limit: string
      labels: { string: string }
      linux-node-config:
        cgroup-mode: string
        sysctls: { string: string }
      local-nvme-ssd-block-config:
        local-ssd-count: integer
      local-ssd-count: integer
      logging-config:
        variant-config:
          variant: string
      machine-type: string
      metadata: { string: string }
      min-cpu-platform: string
      node-group: string
      oauth-scopes: [string]
      preemptible: boolean
      reservation-affinity:
        consume-reservation-type: string
        key: string
        values: [string]
      resource-labels: { string: string }
      resource-manager-tags:
        tags: { string: string }
      sandbox-config:
        type: string
      service-account: string
      shielded-instance-config:
        enable-integrity-monitoring: boolean
        enable-secure-boot: boolean
      spot: boolean
      tags: [string]
      windows-node-config:
        os-version: string
      workload-metadata-config:
        mode: string
    etag: string
    initial-node-count: integer
    instance-group-urls: [string]
    locations: [string]
    management:
      auto-repair: boolean
      auto-upgrade: boolean
      upgrade-options:
        auto-upgrade-start-time: string
        description: string
    max-pods-constraint:
      max-pods-per-node: string
    name: string
    network-config:
      create-pod-range: boolean
      enable-private-nodes: boolean
      network-performance-config:
        total-egress-bandwidth-tier: string
      pod-cidr-overprovision-config:
        disable: boolean
      pod-ipv4-cidr-block: string
      pod-ipv4-range-utilization: number
      pod-range: string
    placement-policy:
      policy-name: string
      tpu-topology: string
      type: string
    pod-ipv4-cidr-size: integer
    queued-provisioning:
      enabled: boolean
    self-link: string
    status: string
    status-message: string
    update-info:
      blue-green-info:
        blue-instance-group-urls: [string]
        blue-pool-deletion-start-time: string
        green-instance-group-urls: [string]
        green-pool-version: string
        phase: string
    upgrade-settings:
      blue-green-settings:
        node-pool-soak-duration: string
        standard-rollout-policy:
          batch-node-count: integer
          batch-percentage: number
          batch-soak-duration: string
      max-surge: integer
      max-unavailable: integer
      strategy: string
    version: string
  parent: string
  project-id: string
  zone: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time. A short end-to-end invocation combining a few of these fields is sketched after the list.

  • -r . cluster-id=at
    • Deprecated. The name of the cluster. This field has been deprecated and replaced by the parent field.
  • node-pool.autoscaling autoprovisioned=false
    • Can this node pool be deleted automatically.
  • enabled=false
    • Is autoscaling enabled for this node pool.
  • location-policy=ipsum
    • Location policy used when scaling up a nodepool.
  • max-node-count=28
    • Maximum number of nodes for one location in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
  • min-node-count=82
    • Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= max_node_count.
  • total-max-node-count=5
    • Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
  • total-min-node-count=90

    • Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
  • ..best-effort-provisioning enabled=true

    • When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockout in order to provision as many nodes as possible right now, and will eventually bring up the full target number of nodes.
  • min-provision-nodes=59

    • Minimum number of nodes that must be provisioned for the operation to be considered successful. The remaining nodes will be provisioned gradually once the stockout issue has been resolved.
  • ..config.advanced-machine-features threads-per-core=sea

    • The number of threads per physical core. To disable simultaneous multithreading (SMT) set this to 1. If unset, the maximum number of threads supported per core by the underlying processor is assumed.
  • .. boot-disk-kms-key=ipsum

    • The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
  • confidential-nodes enabled=true

    • Whether Confidential Nodes feature is enabled.
  • .. disk-size-gb=96

    • Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
  • disk-type=no
    • Type of the disk attached to each node (e.g. 'pd-standard', 'pd-ssd' or 'pd-balanced'). If unspecified, the default disk type is 'pd-standard'.
  • enable-confidential-storage=false
    • Optional. Reserved for future use.
  • ephemeral-storage-local-ssd-config local-ssd-count=88

    • Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. A zero (or unset) value has different meanings depending on the machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local ssds, zero (or unset) means to disable using local SSDs as ephemeral storage. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. 2. For Gen3 machines which dictate a specific number of local ssds, zero (or unset) means to use the default number of local ssds that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local ssds would be provisioned. For c3-standard-8 (which doesn't support local ssds), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info.
  • ..fast-socket enabled=true

    • Whether Fast Socket features are enabled in the node pool.
  • ..gcfs-config enabled=true

    • Whether to use GCFS.
  • ..gvnic enabled=false

    • Whether gVNIC features are enabled in the node pool.
  • .. image-type=sed

    • The image type to use for this node. Note that for a given image type, the latest version of it will be used. Please see https://cloud.google.com/kubernetes-engine/docs/concepts/node-images for available image types.
  • kubelet-config cpu-cfs-quota=false
    • Enable CPU CFS quota enforcement for containers that specify CPU limits. This option is enabled by default, which makes kubelet use CFS quota (https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt) to enforce container CPU limits. Otherwise, CPU limits will not be enforced at all. Disable this option to mitigate CPU throttling problems while still having your pods be in the Guaranteed QoS class by specifying the CPU limits. The default value is 'true' if unspecified.
  • cpu-cfs-quota-period=sea
    • Set the CPU CFS quota period value 'cpu.cfs_period_us'. The string must be a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". The value must be a positive duration.
  • cpu-manager-policy=ipsum
    • Control the CPU management policy on the node. See https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/ The following values are allowed. * "none": the default, which represents the existing scheduling behavior. * "static": allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. The default value is 'none' if unspecified.
  • insecure-kubelet-readonly-port-enabled=true
    • Enable or disable Kubelet read only port.
  • pod-pids-limit=justo

    • Set the Pod PID limits. See https://kubernetes.io/docs/concepts/policy/pid-limiting/#pod-pid-limits Controls the maximum number of processes allowed to run in a pod. The value must be greater than or equal to 1024 and less than 4194304.
  • .. labels=key=ea

    • The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version -- it's best to assume the behavior is undefined and conflicts should be avoided. For more information, including usage and the valid values, see: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
    • the value will be associated with the given key
  • linux-node-config cgroup-mode=at
    • cgroup_mode specifies the cgroup mode to be used on the node.
  • sysctls=key=erat

    • The Linux kernel parameters to be applied to the nodes and all pods running on the nodes. The following parameters are supported. net.core.busy_poll net.core.busy_read net.core.netdev_max_backlog net.core.rmem_max net.core.wmem_default net.core.wmem_max net.core.optmem_max net.core.somaxconn net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.ipv4.tcp_tw_reuse
    • the value will be associated with the given key
  • ..local-nvme-ssd-block-config local-ssd-count=87

    • Number of local NVMe SSDs to use. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on the machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local ssds, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines which dictate a specific number of local ssds, zero (or unset) means to use the default number of local ssds that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local ssds would be provisioned. For c3-standard-8 (which doesn't support local ssds), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info.
  • .. local-ssd-count=25

    • The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information.
  • logging-config.variant-config variant=invidunt

    • Logging variant deployed on nodes.
  • ... machine-type=nonumy

    • The name of a Google Compute Engine machine type If unspecified, the default machine type is e2-medium.
  • metadata=key=erat
    • The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the reserved keys: - "cluster-location" - "cluster-name" - "cluster-uid" - "configure-sh" - "containerd-configure-sh" - "enable-os-login" - "gci-ensure-gke-docker" - "gci-metrics-enabled" - "gci-update-strategy" - "instance-template" - "kube-env" - "startup-script" - "user-data" - "disable-address-manager" - "windows-startup-script-ps1" - "common-psm1" - "k8s-node-setup-psm1" - "install-ssh-psm1" - "user-profile-psm1" Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB.
    • the value will be associated with the given key
  • min-cpu-platform=erat
    • Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as minCpuPlatform: "Intel Haswell" or minCpuPlatform: "Intel Sandy Bridge". For more information, read how to specify min CPU platform.
  • node-group=dolores
    • Setting this field will assign instances of this pool to run on the specified node group. This is useful for running workloads on sole tenant nodes.
  • oauth-scopes=ipsum
    • The set of Google API scopes to be made available on all of the node VMs under the "default" service account. The following scopes are recommended, but not required, and by default are not included: * https://www.googleapis.com/auth/compute is required for mounting persistent storage on your nodes. * https://www.googleapis.com/auth/devstorage.read_only is required for communicating with gcr.io (the Google Container Registry). If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added.
    • Each invocation of this argument appends the given value to the array.
  • preemptible=false
    • Whether the nodes are created as preemptible VM instances. See: https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances.
  • reservation-affinity consume-reservation-type=elitr
    • Corresponds to the type of reservation consumption.
  • key=consetetur
    • Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, specify "compute.googleapis.com/reservation-name" as the key and specify the name of your reservation as its value.
  • values=et

    • Corresponds to the label value(s) of reservation resource(s).
    • Each invocation of this argument appends the given value to the array.
  • .. resource-labels=key=clita

    • The resource labels for the node pool to use to annotate any related Google Compute Engine resources.
    • the value will be associated with the given key
  • resource-manager-tags tags=key=sit

    • TagKeyValue must be in one of the following formats ([KEY]=[VALUE]) 1. tagKeys/{tag_key_id}=tagValues/{tag_value_id} 2. {org_id}/{tag_key_name}={tag_value_name} 3. {project_id}/{tag_key_name}={tag_value_name}
    • the value will be associated with the given key
  • ..sandbox-config type=takimata

    • Type of the sandbox to use for the node.
  • .. service-account=erat

    • The Google Cloud Platform Service Account to be used by the node VMs. Specify the email address of the Service Account; otherwise, if no Service Account is specified, the "default" service account is used.
  • shielded-instance-config enable-integrity-monitoring=true
    • Defines whether the instance has integrity monitoring enabled. Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created.
  • enable-secure-boot=true

    • Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
  • .. spot=false

    • Spot flag for enabling Spot VM, which is a rebrand of the existing preemptible flag.
  • tags=diam
    • The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035.
    • Each invocation of this argument appends the given value to the array.
  • windows-node-config os-version=diam

    • OSVersion specifies the Windows node config to be used on the node
  • ..workload-metadata-config mode=sed

    • Mode is the configuration for how to expose metadata to workloads running on the node pool.
  • ... etag=et

    • This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding.
  • initial-node-count=84
    • The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota.
  • instance-group-urls=dolore
    • [Output only] The resource URLs of the managed instance groups associated with this node pool. During the node pool blue-green upgrade operation, the URLs contain both blue and green resources.
    • Each invocation of this argument appends the given value to the array.
  • locations=ipsum
    • The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used, instead. Warning: changing node pool locations will result in nodes being added and/or removed.
    • Each invocation of this argument appends the given value to the array.
  • management auto-repair=true
    • A flag that specifies whether the node auto-repair is enabled for the node pool. If enabled, the nodes in this node pool will be monitored and, if they fail health checks too many times, an automatic repair action will be triggered.
  • auto-upgrade=true
    • A flag that specifies whether node auto-upgrade is enabled for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes.
  • upgrade-options auto-upgrade-start-time=sit
    • [Output only] This field is set when upgrades are about to commence with the approximate start time for the upgrades, in RFC3339 text format.
  • description=lorem

    • [Output only] This field is set when upgrades are about to commence with the description of the upgrade.
  • ...max-pods-constraint max-pods-per-node=stet

    • Constraint enforced on the maximum number of pods per node.
  • .. name=duo

    • The name of the node pool.
  • network-config create-pod-range=false
    • Input only. Whether to create a new range for pod IPs in this node pool. Defaults are provided for pod_range and pod_ipv4_cidr_block if they are not specified. If neither create_pod_range nor pod_range is specified, the cluster-level default (ip_allocation_policy.cluster_ipv4_cidr_block) is used. Only applicable if ip_allocation_policy.use_ip_aliases is true. This field cannot be changed after the node pool has been created.
  • enable-private-nodes=false
    • Whether nodes have internal IP addresses only. If enable_private_nodes is not specified, then the value is derived from cluster.privateClusterConfig.enablePrivateNodes
  • network-performance-config total-egress-bandwidth-tier=et

    • Specifies the total network bandwidth tier for the NodePool.
  • ..pod-cidr-overprovision-config disable=true

    • Whether Pod CIDR overprovisioning is disabled. Note: Pod CIDR overprovisioning is enabled by default.
  • .. pod-ipv4-cidr-block=rebum.

    • The IP address range for pod IPs in this node pool. Only applicable if create_pod_range is true. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) to pick a specific range to use. Only applicable if ip_allocation_policy.use_ip_aliases is true. This field cannot be changed after the node pool has been created.
  • pod-ipv4-range-utilization=0.5220589381617724
    • [Output only] The utilization of the IPv4 range for the pod. The ratio is Usage/[Total number of IPs in the secondary range], where Usage = numNodes * numZones * podIPsPerNode.
  • pod-range=stet

    • The ID of the secondary range for pod IPs. If create_pod_range is true, this ID is used for the new range. If create_pod_range is false, uses an existing secondary range with this ID. Only applicable if ip_allocation_policy.use_ip_aliases is true. This field cannot be changed after the node pool has been created.
  • ..placement-policy policy-name=aliquyam

    • If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If not found, InvalidArgument error is returned.
  • tpu-topology=kasd
    • Optional. TPU placement topology for pod slice node pool. https://cloud.google.com/tpu/docs/types-topologies#tpu_topologies
  • type=lorem

    • The type of placement.
  • .. pod-ipv4-cidr-size=53

    • [Output only] The pod CIDR block size per node in this node pool.
  • queued-provisioning enabled=true

    • Denotes that this node pool is QRM-specific, meaning nodes can only be obtained through queuing via the Cluster Autoscaler ProvisioningRequest API.
  • .. self-link=tempor

    • [Output only] Server-defined URL for the resource.
  • status=dolor
    • [Output only] The status of the nodes in this pool instance.
  • status-message=amet
    • [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
  • update-info.blue-green-info blue-instance-group-urls=sit
    • The resource URLs of the [managed instance groups] (/compute/docs/instance-groups/creating-groups-of-managed-instances) associated with blue pool.
    • Each invocation of this argument appends the given value to the array.
  • blue-pool-deletion-start-time=rebum.
    • Time to start deleting blue pool to complete blue-green upgrade, in RFC3339 text format.
  • green-instance-group-urls=sea
    • The resource URLs of the [managed instance groups] (/compute/docs/instance-groups/creating-groups-of-managed-instances) associated with green pool.
    • Each invocation of this argument appends the given value to the array.
  • green-pool-version=ipsum
    • Version of green pool.
  • phase=ipsum

    • Current blue-green upgrade phase.
  • ...upgrade-settings.blue-green-settings node-pool-soak-duration=et

    • Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up.
  • standard-rollout-policy batch-node-count=7
    • Number of blue nodes to drain in a batch.
  • batch-percentage=0.08920170932611438
    • Percentage of the blue pool nodes to drain in a batch. The range of this field should be (0.0, 1.0].
  • batch-soak-duration=sadipscing

    • Soak time after each batch gets drained. Defaults to zero.
  • ... max-surge=97

    • The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
  • max-unavailable=47
    • The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
  • strategy=consetetur

    • Update strategy of the node pool.
  • .. version=et

    • The version of Kubernetes running on this NodePool's nodes. If unspecified, it defaults as described here.
  • .. parent=sit

    • The parent (project, location, cluster name) where the node pool will be created. Specified in the format projects/*/locations/*/clusters/*.
  • project-id=lorem
    • Deprecated. The Google Developers Console project ID or project number. This field has been deprecated and replaced by the parent field.
  • zone=nonumy
    • Deprecated. The name of the Google Compute Engine zone in which the cluster resides. This field has been deprecated and replaced by the parent field.
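
Putting a few of these together, a sketch of a complete invocation might look like the one below. It creates a small autoscaled node pool; the parent, pool name, machine type and node counts are placeholder values, not a tested configuration:

  container1 projects locations-clusters-node-pools-create \
    projects/my-project/locations/us-central1/clusters/my-cluster \
    -r .node-pool name=example-pool initial-node-count=3 \
       config machine-type=e2-standard-4 disk-size-gb=100 \
       ..autoscaling enabled=true min-node-count=1 max-node-count=3

Note how .node-pool positions the cursor absolutely, config descends relative to it, and ..autoscaling moves one level back up before descending again.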

About Cursors

The cursor position is key to comfortably set complex nested structures. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using .. (two dots). Each additional . moves it up one additional level. E.g. ... would go two levels up.
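
As an illustration of these rules (field names are taken from the request structure above, the values are placeholders), the following fragment positions the cursor at node-pool.config, sets two of its fields, and then moves one level up to set a field on node-pool itself:

  -r .node-pool.config machine-type=e2-medium disk-size-gb=50 .. initial-node-count=3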

Optional Output Flags

The method's return value is a JSON-encoded structure, which will be written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.
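
For example, to write the response to a file instead of standard output (the file name is arbitrary):

  -o result.json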

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").
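
For example, assuming each property is passed with its own -p flag as listed above, the response can be pretty-printed and trimmed to selected fields like this (the field selector is illustrative only):

  -p pretty-print=true -p fields=name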