Use this method to create a stream.

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: datastream1 --scope <scope> projects locations-streams-create ...

Required Scalar Argument

  • <parent> (string)
    • Required. The parent that owns the collection of streams.

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case it is advised to set the field-cursor to the data-structure's field to specify values more concisely.

For example, a structure like this:

Stream:
  create-time: string
  customer-managed-encryption-key: string
  destination-config:
    bigquery-destination-config:
      data-freshness: string
      single-target-dataset:
        dataset-id: string
      source-hierarchy-datasets:
        dataset-template:
          dataset-id-prefix: string
          kms-key-name: string
          location: string
    destination-connection-profile: string
    gcs-destination-config:
      file-rotation-interval: string
      file-rotation-mb: integer
      json-file-format:
        compression: string
        schema-file-format: string
      path: string
  display-name: string
  labels: { string: string }
  last-recovery-time: string
  name: string
  source-config:
    mysql-source-config:
      max-concurrent-backfill-tasks: integer
      max-concurrent-cdc-tasks: integer
    oracle-source-config:
      max-concurrent-backfill-tasks: integer
      max-concurrent-cdc-tasks: integer
    postgresql-source-config:
      max-concurrent-backfill-tasks: integer
      publication: string
      replication-slot: string
    source-connection-profile: string
    sql-server-source-config:
      max-concurrent-backfill-tasks: integer
      max-concurrent-cdc-tasks: integer
  state: string
  update-time: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r . create-time=sea
    • Output only. The creation time of the stream.
  • customer-managed-encryption-key=et
    • Immutable. A reference to a KMS encryption key. If provided, it will be used to encrypt the data. If left blank, data will be encrypted using an internal Stream-specific encryption key provisioned through KMS.
  • destination-config.bigquery-destination-config data-freshness=at
    • The guaranteed data freshness (in seconds) when querying tables created by the stream. Editing this field affects only tables created in the future; existing tables are not impacted. Lower values mean that queries return fresher data, but may result in higher cost.
  • single-target-dataset dataset-id=dolore
    • The dataset ID of the target dataset. For the characters allowed in dataset IDs, see https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#datasetreference.
  • ..source-hierarchy-datasets.dataset-template dataset-id-prefix=eirmod

    • If supplied, every created dataset will have its name prefixed by the provided value. The prefix and name will be separated by an underscore. i.e. _.
  • kms-key-name=lorem
    • Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key. i.e. projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}. See https://cloud.google.com/bigquery/docs/customer-managed-encryption for more information.
  • location=accusam

    • Required. The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations.
  • .... destination-connection-profile=amet

    • Required. Destination connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}
  • gcs-destination-config file-rotation-interval=erat
    • The maximum duration for which new events are added before a file is closed and a new file is created. Values within the range of 15-60 seconds are allowed.
  • file-rotation-mb=32
    • The maximum file size to be saved in the bucket.
  • json-file-format compression=erat
    • Compression of the loaded JSON file.
  • schema-file-format=accusam

    • The schema file format written alongside the JSON data files.
  • .. path=sea

    • Path inside the Cloud Storage bucket to write data to.
  • ... display-name=takimata

    • Required. Display name.
  • labels=key=lorem
    • Labels.
    • the value will be associated with the given key
  • last-recovery-time=et
    • Output only. If the stream was recovered, the time of the last recovery. Note: This field is currently experimental.
  • name=at
    • Output only. The stream's name.
  • source-config.mysql-source-config max-concurrent-backfill-tasks=97
    • Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
  • max-concurrent-cdc-tasks=79

    • Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
  • ..oracle-source-config max-concurrent-backfill-tasks=53

    • Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
  • max-concurrent-cdc-tasks=20

    • Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
  • ..postgresql-source-config max-concurrent-backfill-tasks=91

    • Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
  • publication=nonumy
    • Required. The name of the publication that includes the set of all tables that are defined in the stream's include_objects.
  • replication-slot=et

    • Required. Immutable. The name of the logical replication slot that's configured with the pgoutput plugin.
  • .. source-connection-profile=gubergren

    • Required. Source connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}
  • sql-server-source-config max-concurrent-backfill-tasks=80
    • Max concurrent backfill tasks.
  • max-concurrent-cdc-tasks=41

    • Max concurrent CDC tasks.
  • ... state=consetetur

    • The state of the stream.
  • update-time=sit
    • Output only. The last update time of the stream.
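
Putting the above together, a complete invocation might look like the following sketch. The project, location, connection-profile names, and all field values are placeholders, and a real stream will usually also need the source- and destination-specific sub-configurations shown above:

```bash
datastream1 --scope https://www.googleapis.com/auth/cloud-platform \
  projects locations-streams-create \
  projects/my-project/locations/us-central1 \
  -r . display-name=my-stream \
    destination-config destination-connection-profile=projects/my-project/locations/us-central1/connectionProfiles/dest-profile \
    .source-config source-connection-profile=projects/my-project/locations/us-central1/connectionProfiles/src-profile \
  -p stream-id=my-stream \
  -o -
```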

About Cursors

The cursor position is key to comfortably set complex nested structures. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using .. (two dots). Each additional dot moves it up one more level; e.g. .... moves three levels up.
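
As a short worked example of these rules, consider the following -r argument tokens; the field values are placeholders:

```bash
# One argument token per line; the comments show where the cursor ends up.
#   destination-config.gcs-destination-config  -> cursor moves into gcs-destination-config
#   path=events/                               -> sets path at the current cursor
#   json-file-format compression=GZIP          -> cursor moves into json-file-format
#   .... display-name=my-stream                -> .... moves up three levels, back to the top-level structure
datastream1 projects locations-streams-create ... -r \
    destination-config.gcs-destination-config path=events/ \
    json-file-format compression=GZIP \
    .... display-name=my-stream
```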

Optional Output Flags

The method's return value is a JSON-encoded structure, which will be written to standard output by default.

  • -o out
    • out specifies the destination to which to write the server's result. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that will contain the received bytes. If unset, it defaults to standard output.
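
For example, to capture the result in a file rather than on standard output (the filename is arbitrary):

```bash
datastream1 projects locations-streams-create ... -o stream-result.json
```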

Optional Method Properties

You may set the following properties to further configure the call. Please note that -p is followed by one or more key-value pairs and is invoked like this: -p k1=v1 k2=v2, even though the listing below repeats the -p for completeness.

  • -p force=boolean

    • Optional. Create the stream without validating it.
  • -p request-id=string

    • Optional. A request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server knows to ignore the request if it has already been completed. The server guarantees that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check whether the original operation with the same request ID was received and, if so, ignores the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID, with the exception that the zero UUID is not supported (00000000-0000-0000-0000-000000000000).
  • -p stream-id=string

    • Required. The stream identifier.
  • -p validate-only=boolean

    • Optional. Only validate the stream, but don't create any resources. The default is false.
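
For example, to dry-run the call, validating the stream definition without creating any resources (the identifiers are placeholders):

```bash
datastream1 projects locations-streams-create projects/my-project/locations/us-central1 \
  -r . display-name=my-stream \
  -p stream-id=my-stream validate-only=true
```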

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").
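
For instance, to request an indented response that contains only the operation name (assuming the method returns a long-running operation with a name field):

```bash
datastream1 projects locations-streams-create ... -p fields=name pretty-print=true
```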