Creates a transfer job that runs periodically.
Scopes
You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.
If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform.
You can set the scope for this method like this: storagetransfer1 --scope <scope> transfer-jobs create ...
Required Request Value
The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case it is advised to set the field-cursor to the data-structure's field to specify values more concisely.
For example, a structure like this:
TransferJob:
  creation-time: string
  deletion-time: string
  description: string
  event-stream:
    event-stream-expiration-time: string
    event-stream-start-time: string
    name: string
  last-modification-time: string
  latest-operation-name: string
  logging-config:
    enable-onprem-gcs-transfer-logs: boolean
    log-action-states: [string]
    log-actions: [string]
  name: string
  notification-config:
    event-types: [string]
    payload-format: string
    pubsub-topic: string
  project-id: string
  schedule:
    end-time-of-day:
      hours: integer
      minutes: integer
      nanos: integer
      seconds: integer
    repeat-interval: string
    schedule-end-date:
      day: integer
      month: integer
      year: integer
    schedule-start-date:
      day: integer
      month: integer
      year: integer
    start-time-of-day:
      hours: integer
      minutes: integer
      nanos: integer
      seconds: integer
  status: string
  transfer-spec:
    aws-s3-compatible-data-source:
      bucket-name: string
      endpoint: string
      path: string
      region: string
      s3-metadata:
        auth-method: string
        list-api: string
        protocol: string
        request-model: string
    aws-s3-data-source:
      aws-access-key:
        access-key-id: string
        secret-access-key: string
      bucket-name: string
      cloudfront-domain: string
      credentials-secret: string
      path: string
      role-arn: string
    azure-blob-storage-data-source:
      azure-credentials:
        sas-token: string
      container: string
      credentials-secret: string
      path: string
      storage-account: string
    gcs-data-sink:
      bucket-name: string
      managed-folder-transfer-enabled: boolean
      path: string
    gcs-data-source:
      bucket-name: string
      managed-folder-transfer-enabled: boolean
      path: string
    gcs-intermediate-data-location:
      bucket-name: string
      managed-folder-transfer-enabled: boolean
      path: string
    hdfs-data-source:
      path: string
    http-data-source:
      list-url: string
    object-conditions:
      exclude-prefixes: [string]
      include-prefixes: [string]
      last-modified-before: string
      last-modified-since: string
      max-time-elapsed-since-last-modification: string
      min-time-elapsed-since-last-modification: string
    posix-data-sink:
      root-directory: string
    posix-data-source:
      root-directory: string
    sink-agent-pool-name: string
    source-agent-pool-name: string
    transfer-manifest:
      location: string
    transfer-options:
      delete-objects-from-source-after-transfer: boolean
      delete-objects-unique-in-sink: boolean
      metadata-options:
        acl: string
        gid: string
        kms-key: string
        mode: string
        storage-class: string
        symlink: string
        temporary-hold: string
        time-created: string
        uid: string
      overwrite-objects-already-existing-in-sink: boolean
      overwrite-when: string
can be set completely with the following arguments which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.
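For orientation, the same structure can be sketched as the JSON request body that such a call ultimately assembles. Note the REST API uses camelCase field names while the CLI arguments use the kebab-case spellings shown above; all concrete values below (project ID, bucket names, description) are illustrative placeholders, not values from this document:

```python
import json

# A minimal TransferJob request body mirroring the structure above.
# Every concrete value here is a hypothetical placeholder.
transfer_job = {
    "description": "nightly bucket sync",      # placeholder
    "projectId": "my-project",                 # placeholder
    "status": "ENABLED",
    "schedule": {
        "scheduleStartDate": {"year": 2024, "month": 5, "day": 1},
        "startTimeOfDay": {"hours": 2, "minutes": 0, "seconds": 0, "nanos": 0},
    },
    "transferSpec": {
        "gcsDataSource": {"bucketName": "source-bucket", "path": "logs/"},
        "gcsDataSink": {"bucketName": "sink-bucket", "path": "archive/"},
    },
}

body = json.dumps(transfer_job, indent=2)
print(body)
```

The cursor-based arguments below are simply a flat way of populating a nested body like this one.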
-r . creation-time=amet.
- Output only. The time that the transfer job was created.
deletion-time=takimata
- Output only. The time that the transfer job was deleted.
description=amet.
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
event-stream event-stream-expiration-time=duo
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
event-stream-start-time=ipsum
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
name=gubergren
- Required. Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
.. last-modification-time=lorem
- Output only. The time that the transfer job was last modified.
latest-operation-name=gubergren
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
logging-config enable-onprem-gcs-transfer-logs=false
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
log-action-states=dolor
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- Each invocation of this argument appends the given value to the array.
log-actions=ea
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- Each invocation of this argument appends the given value to the array.
.. name=ipsum
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with the "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$" PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$" Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
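The two naming patterns above can be exercised with Python's re module. This is a sketch under one assumption: the ^/$ anchors embedded in the examples are read as meaning the bracketed pattern applies to the portion after the transferJobs/ prefix.

```python
import re

# Non-PosixFilesystem job names: must NOT continue with "OPI".
NON_POSIX = re.compile(r"transferJobs/(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$")
# PosixFilesystem job names: MUST continue with "OPI".
POSIX = re.compile(r"transferJobs/OPI[A-Za-z0-9-._~]*[A-Za-z0-9]$")

assert NON_POSIX.fullmatch("transferJobs/my-sync-job1")
assert not NON_POSIX.fullmatch("transferJobs/OPIjob1")   # reserved for POSIX
assert POSIX.fullmatch("transferJobs/OPIjob1")
assert not POSIX.fullmatch("transferJobs/my-sync-job1")
```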
notification-config event-types=invidunt
- Event types for which a notification is desired. If empty, send notifications for all event types.
- Each invocation of this argument appends the given value to the array.
payload-format=amet
- Required. The desired format of the notification message payloads.
pubsub-topic=duo
- Required. The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. Not matching this format results in an INVALID_ARGUMENT error.
.. project-id=ipsum
- The ID of the Google Cloud project that owns the job.
schedule.end-time-of-day hours=8
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
minutes=64
- Minutes of hour of day. Must be from 0 to 59.
nanos=89
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
seconds=85
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
.. repeat-interval=est
- Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
schedule-end-date day=51
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
month=51
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
year=94
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
..schedule-start-date day=39
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
month=84
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
year=2
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
..start-time-of-day hours=45
- Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
minutes=76
- Minutes of hour of day. Must be from 0 to 59.
nanos=15
- Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
seconds=58
- Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
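The Date and TimeOfDay structures used by the schedule map naturally onto Python's datetime types. A small sketch (the helper names are my own, not part of the CLI or API):

```python
from datetime import date, time

def to_schedule_date(d: date) -> dict:
    """Convert a datetime.date to the schedule's Date structure."""
    return {"year": d.year, "month": d.month, "day": d.day}

def to_time_of_day(t: time) -> dict:
    """Convert a datetime.time to the TimeOfDay structure (24-hour clock)."""
    return {"hours": t.hour, "minutes": t.minute,
            "seconds": t.second, "nanos": t.microsecond * 1000}

schedule = {
    "schedule-start-date": to_schedule_date(date(2024, 5, 1)),
    "schedule-end-date": to_schedule_date(date(2024, 5, 31)),
    "start-time-of-day": to_time_of_day(time(2, 30)),
}
print(schedule)
```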
... status=duo
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
transfer-spec.aws-s3-compatible-data-source bucket-name=sed
- Required. Specifies the name of the bucket.
endpoint=no
- Required. Specifies the endpoint of the storage service.
path=stet
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
region=kasd
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
s3-metadata auth-method=et
- Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
list-api=sed
- The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
protocol=et
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
request-model=et
- Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
...aws-s3-data-source.aws-access-key access-key-id=vero
- Required. AWS access key ID.
secret-access-key=erat
- Required. AWS secret access key. This field is not returned in RPC responses.
.. bucket-name=sed
- Required. S3 Bucket name (see Creating a bucket).
cloudfront-domain=duo
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
credentials-secret=dolore
- Optional. The Resource name of a secret in Secret Manager. AWS credentials must be stored in Secret Manager in JSON format: { "access_key_id": "ACCESS_KEY_ID", "secret_access_key": "SECRET_ACCESS_KEY" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Amazon S3](https://cloud.google.com/storage-transfer/docs/source-amazon-s3#secret_manager) for more information. If credentials_secret is specified, do not specify role_arn or aws_access_key. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
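The Secret Manager payload described above is plain JSON; producing it is a one-liner. A minimal sketch (both key values are placeholders):

```python
import json

# Placeholder credentials -- never hard-code real keys.
secret_payload = json.dumps({
    "access_key_id": "ACCESS_KEY_ID",
    "secret_access_key": "SECRET_ACCESS_KEY",
})
# This string is what gets stored as the secret's data under
# projects/{project_number}/secrets/{secret_name}.
print(secret_payload)
```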
path=et
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
role-arn=voluptua.
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using an AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
..azure-blob-storage-data-source.azure-credentials sas-token=amet.
- Required. Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
.. container=consetetur
- Required. The container to transfer from the Azure Storage account.
credentials-secret=diam
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
path=dolor
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
storage-account=et
- Required. The name of the Azure Storage account.
..gcs-data-sink bucket-name=et
- Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
managed-folder-transfer-enabled=false
- Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
path=stet
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
..gcs-data-source bucket-name=dolor
- Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
managed-folder-transfer-enabled=false
- Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
path=vero
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
..gcs-intermediate-data-location bucket-name=invidunt
- Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.
managed-folder-transfer-enabled=true
- Transferring managed folders is in public preview. This option is only applicable to the Cloud Storage source bucket. If set to true, the source managed folder is transferred to the destination bucket, and the destination managed folder is always overwritten; other OVERWRITE options are not supported.
path=vero
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
..hdfs-data-source path=elitr
- Root path to transfer files.
..http-data-source list-url=lorem
- Required. The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
..object-conditions exclude-prefixes=diam
- If you specify exclude_prefixes, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes is 1000. For more information, see Filtering objects from transfers.
- Each invocation of this argument appends the given value to the array.
include-prefixes=no
- If you specify include_prefixes, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes is 1000. For more information, see Filtering objects from transfers.
- Each invocation of this argument appends the given value to the array.
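The prefix requirements above are mechanical enough to check locally before submitting a job. A hedged sketch (the function name is mine, and it covers only the rules quoted here, not every server-side check):

```python
def check_prefixes(prefixes):
    """Validate include/exclude prefixes against the documented rules."""
    if len(prefixes) > 1000:
        raise ValueError("at most 1000 prefixes are allowed")
    for p in prefixes:
        if not p:
            raise ValueError("prefixes must not be empty")
        if p.startswith("/"):
            raise ValueError(f"prefix must omit the leading slash: {p!r}")
        if len(p.encode("utf-8")) > 1024:
            raise ValueError("prefix exceeds 1024 bytes when UTF8-encoded")
        if "\r" in p or "\n" in p:
            raise ValueError("prefix contains Carriage Return or Line Feed")
    # No prefix may be a prefix of another prefix.
    for a in prefixes:
        for b in prefixes:
            if a != b and b.startswith(a):
                raise ValueError(f"{a!r} is a prefix of {b!r}")

check_prefixes(["logs/y=2015/requests.gz", "images/"])  # passes silently
```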
last-modified-before=ipsum
- If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
last-modified-since=accusam
- If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The last_modified_since and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before to the end of the day
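The chunked, day-at-a-time pattern described above amounts to computing a UTC day window and formatting it as RFC 3339 timestamps. A sketch (the helper name is mine):

```python
from datetime import datetime, timedelta, timezone

def day_window(day: datetime):
    """Return (last_modified_since, last_modified_before) covering one UTC day."""
    start = day.replace(hour=0, minute=0, second=0, microsecond=0,
                        tzinfo=timezone.utc)
    end = start + timedelta(days=1)
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # RFC 3339, UTC
    return start.strftime(fmt), end.strftime(fmt)

since, before = day_window(datetime(2024, 5, 1))
print(since, before)  # 2024-05-01T00:00:00Z 2024-05-02T00:00:00Z
```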
max-time-elapsed-since-last-modification=takimata
- Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
min-time-elapsed-since-last-modification=consetetur
- Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the TransferOperation and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
..posix-data-sink root-directory=voluptua.
- Root directory path to the filesystem.
..posix-data-source root-directory=et
- Root directory path to the filesystem.
.. sink-agent-pool-name=erat
- Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
source-agent-pool-name=consetetur
- Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
transfer-manifest location=amet.
- Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
..transfer-options delete-objects-from-source-after-transfer=true
- Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
delete-objects-unique-in-sink=false
- Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
metadata-options acl=accusam
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
gid=voluptua.
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
kms-key=dolore
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
mode=dolore
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
storage-class=dolore
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
symlink=voluptua.
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
temporary-hold=amet.
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
time-created=ea
- Specifies how each object's
timeCreated
metadata is preserved for transfers. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- Specifies how each object's
-
uid=sadipscing
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
-
.. overwrite-objects-already-existing-in-sink=true
- When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
overwrite-when=no
- When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
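Two of the delete options above are documented as mutually exclusive; a tiny hedged check (the function name is mine) makes the constraint concrete before a job is submitted:

```python
def validate_transfer_options(opts: dict):
    """Reject option combinations the docs describe as mutually exclusive."""
    if (opts.get("delete-objects-from-source-after-transfer")
            and opts.get("delete-objects-unique-in-sink")):
        raise ValueError(
            "delete-objects-from-source-after-transfer and "
            "delete-objects-unique-in-sink are mutually exclusive")

# Either option alone is fine; setting both raises.
validate_transfer_options({"delete-objects-from-source-after-transfer": True})
```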
About Cursors
The cursor position is key to comfortably set complex nested structures. The following rules apply:
- The cursor position is always set relative to the current one, unless the field name starts with the '.' character. Fields can be nested, such as in -r f.s.o.
- The cursor position is set relative to the top-level structure if it starts with '.', e.g. -r .s.s.
- You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
- You can move the cursor one level up by using '..'. Each additional '.' moves it up one additional level. E.g. '....' would go three levels up.
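The rules above can be made precise with a small simulation. This resolver is my own sketch of how I read the semantics, not the CLI's actual implementation:

```python
def resolve(cursor, spec):
    """Apply one cursor movement 'spec' to the current cursor path.

    cursor: list of field names (the current position).
    spec:   a field path, optionally prefixed with dots. One leading '.'
            resets to the top level; n >= 2 leading dots move up n-1
            levels before descending into any remaining path.
    """
    dots = len(spec) - len(spec.lstrip("."))
    rest = spec[dots:]
    if dots == 1:
        cursor = []                                  # absolute: top level
    elif dots >= 2:
        cursor = cursor[:len(cursor) - (dots - 1)]   # up dots-1 levels
    else:
        cursor = list(cursor)                        # relative: stay put
    if rest:
        cursor = cursor + rest.split(".")
    return cursor

# Mirror a few movements from the walkthrough above:
c = resolve([], "schedule.end-time-of-day")  # descend two levels
c = resolve(c, "..")                         # back up to 'schedule'
c = resolve([], ".transfer-spec.aws-s3-compatible-data-source.s3-metadata")
c = resolve(c, "...aws-s3-data-source.aws-access-key")
print(c)  # ['transfer-spec', 'aws-s3-data-source', 'aws-access-key']
```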
Optional Output Flags
The method's return value is a JSON-encoded structure, which will be written to standard output by default.
-o out
- out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be '-' to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.
Optional General Properties
The following properties can configure any call, and are not specific to this method.
-p $-xgafv=string
- V1 error format.
-p access-token=string
- OAuth access token.
-p alt=string
- Data format for response.
-p callback=string
- JSONP
-p fields=string
- Selector specifying which fields to include in a partial response.
-p key=string
- API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
-p oauth-token=string
- OAuth 2.0 token for the current user.
-p pretty-print=boolean
- Returns response with indentations and line breaks.
-p quota-user=string
- Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
-p upload-type=string
- Legacy upload protocol for media (e.g. "media", "multipart").
-p upload-protocol=string
- Upload protocol for media (e.g. "raw", "multipart").