Updates a transfer job. Updating a job's transfer spec does not affect transfer operations that are running already. Note: The job's status field can be modified using this RPC (for example, to set a job's status to DELETED, DISABLED, or ENABLED).

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: storagetransfer1 --scope <scope> transfer-jobs patch ...

Required Scalar Argument

  • <job-name> (string)
    • Required. The name of the job to update.

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case, it is advisable to move the field cursor to that data-structure's field so that values can be specified more concisely.

For example, a structure like this:

UpdateTransferJobRequest:
  project-id: string
  transfer-job:
    creation-time: string
    deletion-time: string
    description: string
    event-stream:
      event-stream-expiration-time: string
      event-stream-start-time: string
      name: string
    last-modification-time: string
    latest-operation-name: string
    logging-config:
      enable-onprem-gcs-transfer-logs: boolean
      log-action-states: [string]
      log-actions: [string]
    name: string
    notification-config:
      event-types: [string]
      payload-format: string
      pubsub-topic: string
    project-id: string
    schedule:
      end-time-of-day:
        hours: integer
        minutes: integer
        nanos: integer
        seconds: integer
      repeat-interval: string
      schedule-end-date:
        day: integer
        month: integer
        year: integer
      schedule-start-date:
        day: integer
        month: integer
        year: integer
      start-time-of-day:
        hours: integer
        minutes: integer
        nanos: integer
        seconds: integer
    status: string
    transfer-spec:
      aws-s3-compatible-data-source:
        bucket-name: string
        endpoint: string
        path: string
        region: string
        s3-metadata:
          auth-method: string
          list-api: string
          protocol: string
          request-model: string
      aws-s3-data-source:
        aws-access-key:
          access-key-id: string
          secret-access-key: string
        bucket-name: string
        cloudfront-domain: string
        credentials-secret: string
        path: string
        role-arn: string
      azure-blob-storage-data-source:
        azure-credentials:
          sas-token: string
        container: string
        credentials-secret: string
        path: string
        storage-account: string
      gcs-data-sink:
        bucket-name: string
        managed-folder-transfer-enabled: boolean
        path: string
      gcs-data-source:
        bucket-name: string
        managed-folder-transfer-enabled: boolean
        path: string
      gcs-intermediate-data-location:
        bucket-name: string
        managed-folder-transfer-enabled: boolean
        path: string
      hdfs-data-source:
        path: string
      http-data-source:
        list-url: string
      object-conditions:
        exclude-prefixes: [string]
        include-prefixes: [string]
        last-modified-before: string
        last-modified-since: string
        max-time-elapsed-since-last-modification: string
        min-time-elapsed-since-last-modification: string
      posix-data-sink:
        root-directory: string
      posix-data-source:
        root-directory: string
      sink-agent-pool-name: string
      source-agent-pool-name: string
      transfer-manifest:
        location: string
      transfer-options:
        delete-objects-from-source-after-transfer: boolean
        delete-objects-unique-in-sink: boolean
        metadata-options:
          acl: string
          gid: string
          kms-key: string
          mode: string
          storage-class: string
          symlink: string
          temporary-hold: string
          time-created: string
          uid: string
        overwrite-objects-already-existing-in-sink: boolean
        overwrite-when: string
  update-transfer-job-field-mask: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r . project-id=est
    • Required. The ID of the Google Cloud project that owns the job.
  • transfer-job creation-time=at
    • Output only. The time that the transfer job was created.
  • deletion-time=sed
    • Output only. The time that the transfer job was deleted.
  • description=sit
    • A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
  • event-stream event-stream-expiration-time=et
    • Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
  • event-stream-start-time=tempor
    • Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
  • name=aliquyam

    • Required. Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
  • .. last-modification-time=ipsum

    • Output only. The time that the transfer job was last modified.
  • latest-operation-name=et
    • The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
  • logging-config enable-onprem-gcs-transfer-logs=true
    • For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
  • log-action-states=est
    • States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
    • Each invocation of this argument appends the given value to the array.
  • log-actions=sed

    • Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
    • Each invocation of this argument appends the given value to the array.
  • .. name=diam

    • A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with the "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
  • notification-config event-types=dolores
    • Event types for which a notification is desired. If empty, send notifications for all event types.
    • Each invocation of this argument appends the given value to the array.
  • payload-format=dolores
    • Required. The desired format of the notification message payloads.
  • pubsub-topic=et

    • Required. The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
  • .. project-id=sed

    • The ID of the Google Cloud project that owns the job.
  • schedule.end-time-of-day hours=90
    • Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
  • minutes=16
    • Minutes of hour of day. Must be from 0 to 59.
  • nanos=7
    • Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
  • seconds=21

    • Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
  • .. repeat-interval=no

    • Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
  • schedule-end-date day=10
    • Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
  • month=24
    • Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
  • year=56

    • Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
  • ..schedule-start-date day=69

    • Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
  • month=32
    • Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
  • year=6

    • Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
  • ..start-time-of-day hours=70

    • Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
  • minutes=19
    • Minutes of hour of day. Must be from 0 to 59.
  • nanos=54
    • Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
  • seconds=44

    • Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
  • ... status=et

    • Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
  • transfer-spec.aws-s3-compatible-data-source bucket-name=sea
    • Required. Specifies the name of the bucket.
  • endpoint=consetetur
    • Required. Specifies the endpoint of the storage service.
  • path=consetetur
    • Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
  • region=stet
    • Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
  • s3-metadata auth-method=est
    • Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
  • list-api=aliquyam
    • The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
  • protocol=elitr
    • Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
  • request-model=duo

    • Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
  • ...aws-s3-data-source.aws-access-key access-key-id=diam

    • Required. AWS access key ID.
  • secret-access-key=est

    • Required. AWS secret access key. This field is not returned in RPC responses.
  • .. bucket-name=sit

    • Required. S3 Bucket name (see Creating a bucket).
  • cloudfront-domain=sed
    • Optional. CloudFront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
  • credentials-secret=eos
    • Optional. The Resource name of a secret in Secret Manager. AWS credentials must be stored in Secret Manager in JSON format: { "access_key_id": "ACCESS_KEY_ID", "secret_access_key": "SECRET_ACCESS_KEY" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Amazon S3] (https://cloud.google.com/storage-transfer/docs/source-amazon-s3#secret_manager) for more information. If credentials_secret is specified, do not specify role_arn or aws_access_key. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
  • path=lorem
    • Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
  • role-arn=ea

    • The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
  • ..azure-blob-storage-data-source.azure-credentials sas-token=stet

    • Required. Azure shared access signature (SAS).
  • .. container=dolores

    • Required. The container to transfer from the Azure Storage account.
  • credentials-secret=eos
    • Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage] (https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
  • path=et
    • Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
  • storage-account=sea

    • Required. The name of the Azure Storage account.
  • ..gcs-data-sink bucket-name=et

    • Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
  • managed-folder-transfer-enabled=false
    • Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
  • path=dolore

    • Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
  • ..gcs-data-source bucket-name=eirmod

    • Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
  • managed-folder-transfer-enabled=true
    • Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
  • path=accusam

    • Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
  • ..gcs-intermediate-data-location bucket-name=amet

    • Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
  • managed-folder-transfer-enabled=true
    • Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
  • path=erat

    • Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
  • ..hdfs-data-source path=accusam

    • Root path to transfer files.
  • ..http-data-source list-url=sea

    • Required. The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
  • ..object-conditions exclude-prefixes=takimata

    • If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
    • Each invocation of this argument appends the given value to the array.
  • include-prefixes=lorem
    • If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
    • Each invocation of this argument appends the given value to the array.
  • last-modified-before=et
    • If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
  • last-modified-since=at
    • If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_since and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before to the end of the day
  • max-time-elapsed-since-last-modification=dolor
    • Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
  • min-time-elapsed-since-last-modification=et

    • Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
  • ..posix-data-sink root-directory=sit

    • Root directory path to the filesystem.
  • ..posix-data-source root-directory=erat

    • Root directory path to the filesystem.
  • .. sink-agent-pool-name=sea

    • Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
  • source-agent-pool-name=nonumy
    • Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
  • transfer-manifest location=et

    • Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
  • ..transfer-options delete-objects-from-source-after-transfer=true

    • Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
  • delete-objects-unique-in-sink=false
    • Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
  • metadata-options acl=sit
    • Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
  • gid=aliquyam
    • Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
  • kms-key=eos
    • Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
  • mode=at
    • Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
  • storage-class=dolores
    • Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
  • symlink=consetetur
    • Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
  • temporary-hold=gubergren
    • Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
  • time-created=dolor
    • Specifies how each object's timeCreated metadata is preserved for transfers. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
  • uid=aliquyam

    • Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
  • .. overwrite-objects-already-existing-in-sink=true

    • When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
  • overwrite-when=amet.

    • When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
  • .... update-transfer-job-field-mask=ipsum

    • The field mask of the fields in transferJob that are to be updated in this request. Fields in transferJob that can be updated are: description, transfer_spec, notification_config, logging_config, and status. To update the transfer_spec of the job, a complete transfer specification must be provided. An incomplete specification missing any required fields is rejected with the error INVALID_ARGUMENT.
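
Taken together, the arguments above correspond to the JSON body of the underlying transferJobs.patch REST call. The following is a minimal illustrative sketch of an equivalent request body; the project ID, description, status, and field mask shown here are placeholder assumptions, not values from this page:

```python
import json

# Hypothetical values; substitute your own project and job details.
body = {
    "projectId": "my-project",
    "transferJob": {
        "description": "Nightly sync",
        "status": "ENABLED",
    },
    # Only the fields named in the mask are updated. As noted above,
    # updating transfer_spec requires the complete specification.
    "updateTransferJobFieldMask": "description,status",
}

print(json.dumps(body, indent=2))
```

The CLI assembles this same structure from the cursor-based arguments, with `-r . project-id=...` and `update-transfer-job-field-mask=...` filling the top-level fields.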

About Cursors

The cursor position is key to setting complex nested structures comfortably. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using .. on its own. Each additional . moves it up one additional level. E.g. .... would go three levels up.
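
The rules above can be sketched as a tiny Python model of the cursor. This is illustrative only, not the CLI's actual implementation, and it assumes field names never contain dots:

```python
def move_cursor(cursor, spec):
    """Illustrative model of the field-cursor rules.

    cursor: current position as a list of field names
    spec:   a cursor argument such as "..", ".s.s", or "struct.sub-struct"
    """
    cursor = list(cursor)
    if spec.startswith(".."):
        # ".." moves one level up; each additional "." moves one more.
        dots = len(spec) - len(spec.lstrip("."))
        for _ in range(dots - 1):
            cursor.pop()
        rest = spec.lstrip(".")
    elif spec.startswith("."):
        # A single leading "." makes the path relative to the top level.
        cursor = []
        rest = spec[1:]
    else:
        rest = spec
    if rest:
        cursor.extend(rest.split("."))
    return cursor
```

For example, from the position transfer-job > schedule > end-time-of-day used in the invocation sequence above, the argument .. moves the cursor back to schedule, and ...aws-s3-data-source.aws-access-key moves up two levels before descending.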

Optional Output Flags

The method's return value is a JSON-encoded structure, which is written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result is written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").