Imports data into a Cloud SQL instance from a SQL dump or CSV file in Cloud Storage.
Scopes
You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.
If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform.
You can set the scope for this method like this: sqladmin1-beta4 --scope <scope> instances import ...
Required Scalar Arguments
- <project> (string)
- Project ID of the project that contains the instance.
- <instance> (string)
- Cloud SQL instance ID. This does not include the project ID.
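Both arguments are given positionally right after the method name. A minimal sketch of the call shape, using hypothetical placeholder values my-project and my-instance:

    sqladmin1-beta4 instances import my-project my-instance ...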
Required Request Value
The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case it is advised to set the field-cursor to the data-structure's field to specify values more concisely.
For example, a structure like this:
InstancesImportRequest:
  import-context:
    bak-import-options:
      bak-type: string
      encryption-options:
        cert-path: string
        pvk-password: string
        pvk-path: string
      no-recovery: boolean
      recovery-only: boolean
      stop-at: string
      stop-at-mark: string
      striped: boolean
    csv-import-options:
      columns: [string]
      escape-character: string
      fields-terminated-by: string
      lines-terminated-by: string
      quote-character: string
      table: string
    database: string
    file-type: string
    import-user: string
    kind: string
    uri: string
can be set completely with the following arguments which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.
-r .import-context.bak-import-options bak-type=et
- Type of the bak content, FULL or DIFF.
encryption-options cert-path=sanctus
- Path to the Certificate (.cer) in Cloud Storage, in the form gs://bucketName/fileName. The instance must have write permissions to the bucket and read access to the file.
pvk-password=lorem
- Password that encrypts the private key
pvk-path=est
- Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form gs://bucketName/fileName. The instance must have write permissions to the bucket and read access to the file.
.. no-recovery=true
- Whether or not the backup import will restore the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server.
recovery-only=true
- Whether or not the backup import request will just bring the database online without downloading BAK content. Only one of "no_recovery" and "recovery_only" can be true; otherwise, an error is returned. Applies only to Cloud SQL for SQL Server.
stop-at=sed
- Optional. The timestamp when the import should stop. This timestamp is in the RFC 3339 format (for example, 2023-10-01T16:19:00.094). This field is equivalent to the STOPAT keyword and applies to Cloud SQL for SQL Server only.
stop-at-mark=no
- Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only.
striped=false
- Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server.
..csv-import-options columns=elitr
- The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
- Each invocation of this argument appends the given value to the array.
escape-character=sed
- Specifies the character that should appear before a data character that needs to be escaped.
fields-terminated-by=no
- Specifies the character that separates columns within each row (line) of the file.
lines-terminated-by=nonumy
- This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values.
quote-character=at
- Specifies the quoting character to be used when a data value is quoted.
table=sadipscing
- The table to which CSV data is imported.
.. database=aliquyam
- The target database for the import. If fileType is SQL, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If fileType is CSV, one database must be specified.
file-type=dolores
- The file type for the specified uri. * SQL: The file contains SQL statements. * CSV: The file contains CSV data. * BAK: The file contains backup data for a SQL Server instance.
import-user=sadipscing
- The PostgreSQL user for this import operation. PostgreSQL instances only.
kind=erat
- This is always sql#importContext.
uri=aliquyam
- Path to the import file in Cloud Storage, in the form gs://bucketName/fileName. Compressed gzip files (.gz) are supported when fileType is SQL. The instance must have write permissions to the bucket and read access to the file.
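Putting the pieces together, here is a hedged sketch of a complete CSV import call; the project, instance, bucket, database, and table names are all hypothetical placeholders, and the cursor mechanics follow the rules described below:

    sqladmin1-beta4 instances import my-project my-instance \
      -r .import-context database=mydb file-type=CSV uri=gs://my-bucket/data.csv \
         csv-import-options table=mytable columns=id columns=name

The leading .import-context positions the cursor at the top-level structure's import-context field; the scalar assignments apply there, and the bare csv-import-options token then moves the cursor down one level before table and columns are set.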
About Cursors
The cursor position is key to comfortably set complex nested structures. The following rules apply:
- The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested, as in -r f.s.o.
- The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s.
- You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
- You can move the cursor one level up by using ..; each additional . moves it up one additional level. E.g. .... would go three levels up.
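As a short sketch of these rules, reusing the field names from the example above (the values are placeholders):

    -r .import-context.bak-import-options bak-type=FULL
    encryption-options cert-path=gs://my-bucket/cert.cer
    .. no-recovery=true

The first token after -r starts with . and therefore positions the cursor absolutely at bak-import-options; encryption-options then moves it down one level relative to that, and .. moves it back up before no-recovery is set.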
Optional Output Flags
The method's return value is a JSON-encoded structure, which will be written to standard output by default.
- -o out
- out specifies the destination to which to write the server's result. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.
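For example, to write the result to a file rather than standard output (result.json is a hypothetical path; the request arguments are elided):

    sqladmin1-beta4 instances import my-project my-instance ... -o result.json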
Optional General Properties
The following properties can configure any call, and are not specific to this method.
- -p $-xgafv=string
- V1 error format.
- -p access-token=string
- OAuth access token.
- -p alt=string
- Data format for response.
- -p callback=string
- JSONP
- -p fields=string
- Selector specifying which fields to include in a partial response.
- -p key=string
- API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
- -p oauth-token=string
- OAuth 2.0 token for the current user.
- -p pretty-print=boolean
- Returns response with indentations and line breaks.
- -p quota-user=string
- Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- -p upload-type=string
- Legacy upload protocol for media (e.g. "media", "multipart").
- -p upload-protocol=string
- Upload protocol for media (e.g. "raw", "multipart").