Imports data into a Cloud SQL instance from a SQL dump or CSV file in Cloud Storage.

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: sql1-beta4 --scope <scope> instances import ...

Required Scalar Arguments

  • <project> (string)
    • Project ID of the project that contains the instance.
  • <instance> (string)
    • Cloud SQL instance ID. This does not include the project ID.

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case, it is advisable to move the field cursor to the nested data-structure so that its values can be specified more concisely.

For example, a structure like this:

InstancesImportRequest:
  import-context:
    bak-import-options:
      encryption-options:
        cert-path: string
        pvk-password: string
        pvk-path: string
    csv-import-options:
      columns: [string]
      table: string
    database: string
    file-type: string
    import-user: string
    kind: string
    uri: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r .import-context.bak-import-options.encryption-options cert-path=voluptua.
    • Path to the Certificate (.cer) in Cloud Storage, in the form <code>gs://bucketName/fileName</code>. The instance must have write permissions to the bucket and read access to the file.
  • pvk-password=et
    • Password that encrypts the private key
  • pvk-path=erat
    • Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form <code>gs://bucketName/fileName</code>. The instance must have write permissions to the bucket and read access to the file.
  • ...csv-import-options columns=consetetur
    • The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
    • Each invocation of this argument appends the given value to the array.
  • table=amet.
    • The table to which CSV data is imported.
  • .. database=sed
    • The target database for the import. If <code>fileType</code> is <code>SQL</code>, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If <code>fileType</code> is <code>CSV</code>, one database must be specified.
  • file-type=takimata
    • The file type for the specified uri. <br><code>SQL</code>: The file contains SQL statements. <br><code>CSV</code>: The file contains CSV data.
  • import-user=dolores
    • The PostgreSQL user for this import operation. PostgreSQL instances only.
  • kind=gubergren
    • This is always <code>sql#importContext</code>.
  • uri=et
    • Path to the import file in Cloud Storage, in the form <code>gs://bucketName/fileName</code>. Compressed gzip files (.gz) are supported when <code>fileType</code> is <code>SQL</code>. The instance must have write permissions to the bucket and read access to the file.
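
Putting the arguments above together, a complete invocation might look like the following sketch. The project, instance, database, and Cloud Storage values are placeholders, not values from this document:

```shell
# Import a gzip-compressed SQL dump into a Cloud SQL instance.
# 'my-project', 'my-instance', 'mydb', and the gs:// URI are placeholders.
sql1-beta4 instances import my-project my-instance \
    -r .import-context \
        database=mydb \
        file-type=SQL \
        uri=gs://my-bucket/dump.sql.gz \
    -o result.json
```

Note how a single absolute cursor move (.import-context) lets all remaining fields be set by their simple names.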

About Cursors

The cursor position is key to comfortably set complex nested structures. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using .. (two dots). Each additional . moves it up one more level. E.g. .... would go three levels up.
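
As a sketch, the cursor rules above can be traced through the structure shown earlier. All scalar values here are placeholders:

```shell
# Absolute cursor move: '.import-context.csv-import-options' positions the
# cursor at the csv-import-options structure, so 'table' can be set by its
# simple name. '..' then moves the cursor up one level to import-context,
# where 'database', 'file-type', and 'uri' are set.
sql1-beta4 instances import my-project my-instance \
    -r .import-context.csv-import-options table=mytable \
    .. database=mydb file-type=CSV uri=gs://my-bucket/data.csv
```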

Optional Output Flags

The method's return value is a JSON-encoded structure, which will be written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").
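
Any of these properties can be combined with the method call. For example, the following sketch (with placeholder project, instance, and request values) asks for a pretty-printed response restricted to a subset of fields:

```shell
# '-p' properties apply to the call as a whole; 'name' and 'status' are
# assumed field names used here purely for illustration.
sql1-beta4 instances import my-project my-instance \
    -r .import-context database=mydb file-type=SQL uri=gs://my-bucket/dump.sql \
    -p pretty-print=true \
    -p fields=name,status
```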