Creates an entry. You can create entries only with 'FILESET', 'CLUSTER', 'DATA_STREAM', or custom types. Data Catalog automatically creates entries with other types during metadata ingestion from integrated systems. You must enable the Data Catalog API in the project identified by the parent parameter. For more information, see Data Catalog resource project. An entry group can have a maximum of 100,000 entries.

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: datacatalog1 --scope <scope> projects locations-entry-groups-entries-create ...

Required Scalar Argument

  • <parent> (string)
    • Required. The name of the entry group this entry belongs to. Note: The entry itself and its child resources might not be stored in the location specified in its name.
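For illustration, the parent value follows the standard resource-name pattern for entry groups. The project, location, and entry group IDs below are placeholders:

```python
# Hypothetical IDs; substitute your own project, location, and entry group.
project, location, entry_group = "my-project", "us-central1", "my_group"
parent = f"projects/{project}/locations/{location}/entryGroups/{entry_group}"
```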

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case, it is advisable to move the field cursor into that data-structure so that its values can be specified more concisely.

For example, a structure like this:

GoogleCloudDatacatalogV1Entry:
  bigquery-date-sharded-spec:
    dataset: string
    latest-shard-resource: string
    shard-count: int64
    table-prefix: string
  bigquery-table-spec:
    table-source-type: string
    table-spec:
      grouped-entry: string
    view-spec:
      view-query: string
  business-context:
    entry-overview:
      overview: string
  cloud-bigtable-system-spec:
    instance-display-name: string
  data-source:
    resource: string
    service: string
    source-entry: string
    storage-properties:
      file-pattern: [string]
      file-type: string
  data-source-connection-spec:
    bigquery-connection-spec:
      cloud-sql:
        database: string
        instance-id: string
        type: string
      connection-type: string
      has-credential: boolean
  database-table-spec:
    database-view-spec:
      base-table: string
      sql-query: string
      view-type: string
    dataplex-table:
      dataplex-spec:
        asset: string
        compression-format: string
        data-format:
          avro:
            text: string
          protobuf:
            text: string
          thrift:
            text: string
        project-id: string
      user-managed: boolean
    type: string
  dataset-spec:
    vertex-dataset-spec:
      data-item-count: int64
      data-type: string
  description: string
  display-name: string
  feature-online-store-spec:
    storage-type: string
  fileset-spec:
    dataplex-fileset:
      dataplex-spec:
        asset: string
        compression-format: string
        data-format:
          avro:
            text: string
          protobuf:
            text: string
          thrift:
            text: string
        project-id: string
  fully-qualified-name: string
  gcs-fileset-spec:
    file-patterns: [string]
  integrated-system: string
  labels: { string: string }
  linked-resource: string
  looker-system-spec:
    parent-instance-display-name: string
    parent-instance-id: string
    parent-model-display-name: string
    parent-model-id: string
    parent-view-display-name: string
    parent-view-id: string
  model-spec:
    vertex-model-spec:
      container-image-uri: string
      version-aliases: [string]
      version-description: string
      version-id: string
      vertex-model-source-info:
        copy: boolean
        source-type: string
  name: string
  personal-details:
    star-time: string
    starred: boolean
  routine-spec:
    bigquery-routine-spec:
      imported-libraries: [string]
    definition-body: string
    language: string
    return-type: string
    routine-type: string
  source-system-timestamps:
    create-time: string
    expire-time: string
    update-time: string
  sql-database-system-spec:
    database-version: string
    instance-host: string
    sql-engine: string
  type: string
  usage-signal:
    favorite-count: int64
    update-time: string
  user-specified-system: string
  user-specified-type: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r .bigquery-date-sharded-spec dataset=amet
    • Output only. The Data Catalog resource name of the dataset entry the current table belongs to. For example: projects/{PROJECT_ID}/locations/{LOCATION}/entrygroups/{ENTRY_GROUP_ID}/entries/{ENTRY_ID}.
  • latest-shard-resource=duo
    • Output only. BigQuery resource name of the latest shard.
  • shard-count=-50
    • Output only. Total number of shards.
  • table-prefix=sed

    • Output only. The table name prefix of the shards. The name of any given shard is [table_prefix]YYYYMMDD. For example, for the MyTable20180101 shard, the table_prefix is MyTable.
  • ..bigquery-table-spec table-source-type=ut

    • Output only. The table source type.
  • table-spec grouped-entry=gubergren

    • Output only. If the table is date-sharded, that is, it matches the [prefix]YYYYMMDD name pattern, this field is the Data Catalog resource name of the date-sharded grouped entry. For example: projects/{PROJECT_ID}/locations/{LOCATION}/entrygroups/{ENTRY_GROUP_ID}/entries/{ENTRY_ID}. Otherwise, grouped_entry is empty.
  • ..view-spec view-query=rebum.

    • Output only. The query that defines the table view.
  • ...business-context.entry-overview overview=est

    • Entry overview with support for rich text. The overview must only contain Unicode characters, and should be formatted using HTML. The maximum length is 10 MiB as this value holds HTML descriptions including encoded images. The maximum length of the text without images is 100 KiB.
  • ...cloud-bigtable-system-spec instance-display-name=ipsum

    • Display name of the Instance. This is user specified and different from the resource name.
  • ..data-source resource=ipsum

    • Full name of a resource as defined by the service. For example: //bigquery.googleapis.com/projects/{PROJECT_ID}/locations/{LOCATION}/datasets/{DATASET_ID}/tables/{TABLE_ID}
  • service=est
    • Service that physically stores the data.
  • source-entry=gubergren
    • Output only. Data Catalog entry name, if applicable.
  • storage-properties file-pattern=ea
    • Patterns to identify a set of files for this fileset. Examples of a valid file_pattern: * gs://bucket_name/dir/*: matches all files in the bucket_name/dir directory * gs://bucket_name/dir/**: matches all files in the bucket_name/dir and all subdirectories recursively * gs://bucket_name/file*: matches files prefixed by file in bucket_name * gs://bucket_name/??.txt: matches files with two characters followed by .txt in bucket_name * gs://bucket_name/[aeiou].txt: matches files that contain a single vowel character followed by .txt in bucket_name * gs://bucket_name/[a-m].txt: matches files that contain a, b, ... or m followed by .txt in bucket_name * gs://bucket_name/a/*/b: matches all files in bucket_name that match the a/*/b pattern, such as a/c/b, a/d/b * gs://another_bucket/a.txt: matches gs://another_bucket/a.txt
    • Each invocation of this argument appends the given value to the array.
  • file-type=dolor

    • File type in MIME format, for example, text/plain.
  • ...data-source-connection-spec.bigquery-connection-spec.cloud-sql database=lorem

    • Database name.
  • instance-id=eos
    • Cloud SQL instance ID in the format of project:location:instance.
  • type=labore

    • Type of the Cloud SQL database.
  • .. connection-type=sed

    • The type of the BigQuery connection.
  • has-credential=false

    • True if there are credentials attached to the BigQuery connection; false otherwise.
  • ...database-table-spec.database-view-spec base-table=sed

    • Name of a singular table this view reflects one to one.
  • sql-query=no
    • SQL query used to generate this view.
  • view-type=stet

    • Type of this view.
  • ..dataplex-table.dataplex-spec asset=kasd

    • Fully qualified resource name of an asset in Dataplex, to which the underlying data source (Cloud Storage bucket or BigQuery dataset) of the entity is attached.
  • compression-format=et
    • Compression format of the data, e.g., zip, gzip etc.
  • data-format.avro text=sed

    • JSON source of the Avro schema.
  • ..protobuf text=et

    • Protocol buffer source of the schema.
  • ..thrift text=et

    • Thrift IDL source of the schema.
  • ... project-id=vero

    • Project ID of the underlying Cloud Storage or BigQuery data. Note that this may not be the same project as the corresponding Dataplex lake / zone / asset.
  • .. user-managed=false

    • Indicates if the table schema is managed by the user or not.
  • .. type=duo

    • Type of this table.
  • ..dataset-spec.vertex-dataset-spec data-item-count=-34

    • The number of DataItems in this Dataset. Only applies to non-structured Datasets.
  • data-type=et

    • Type of the dataset.
  • ... description=voluptua.

    • Entry description that can consist of several sentences or paragraphs that describe entry contents. The description must not contain Unicode non-characters as well as C0 and C1 control codes except tabs (HT), new lines (LF), carriage returns (CR), and page breaks (FF). The maximum size is 2000 bytes when encoded in UTF-8. Default value is an empty string.
  • display-name=amet.
    • Display name of an entry. The maximum size is 500 bytes when encoded in UTF-8. Default value is an empty string.
  • feature-online-store-spec storage-type=consetetur

    • Output only. Type of underlying storage for the FeatureOnlineStore.
  • ..fileset-spec.dataplex-fileset.dataplex-spec asset=diam

    • Fully qualified resource name of an asset in Dataplex, to which the underlying data source (Cloud Storage bucket or BigQuery dataset) of the entity is attached.
  • compression-format=dolor
    • Compression format of the data, e.g., zip, gzip etc.
  • data-format.avro text=et

    • JSON source of the Avro schema.
  • ..protobuf text=et

    • Protocol buffer source of the schema.
  • ..thrift text=sadipscing

    • Thrift IDL source of the schema.
  • ... project-id=stet

    • Project ID of the underlying Cloud Storage or BigQuery data. Note that this may not be the same project as the corresponding Dataplex lake / zone / asset.
  • .... fully-qualified-name=dolor

    • Fully Qualified Name (FQN) of the resource. Set automatically for entries representing resources from synced systems. Settable only during creation, and read-only later. Can be used for search and lookup of the entries.
  • gcs-fileset-spec file-patterns=duo

    • Required. Patterns to identify a set of files in Google Cloud Storage. For more information, see [Wildcard Names] (https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames). Note: Currently, bucket wildcards are not supported. Examples of valid file_patterns: * gs://bucket_name/dir/*: matches all files in bucket_name/dir directory * gs://bucket_name/dir/**: matches all files in bucket_name/dir and all subdirectories * gs://bucket_name/file*: matches files prefixed by file in bucket_name * gs://bucket_name/??.txt: matches files with two characters followed by .txt in bucket_name * gs://bucket_name/[aeiou].txt: matches files that contain a single vowel character followed by .txt in bucket_name * gs://bucket_name/[a-m].txt: matches files that contain a, b, ... or m followed by .txt in bucket_name * gs://bucket_name/a/*/b: matches all files in bucket_name that match the a/*/b pattern, such as a/c/b, a/d/b * gs://another_bucket/a.txt: matches gs://another_bucket/a.txt You can combine wildcards to match complex sets of files, for example: gs://bucket_name/[a-m]??.j*g
    • Each invocation of this argument appends the given value to the array.
  • .. integrated-system=vero

    • Output only. Indicates the entry's source system that Data Catalog integrates with, such as BigQuery, Pub/Sub, or Dataproc Metastore.
  • labels=key=vero
    • Cloud labels attached to the entry. In Data Catalog, you can create and modify labels attached only to custom entries. Synced entries have unmodifiable labels that come from the source system.
    • the value will be associated with the given key
  • linked-resource=invidunt
    • The resource this metadata entry refers to. For Google Cloud Platform resources, linked_resource is the [Full Resource Name] (https://cloud.google.com/apis/design/resource_names#full_resource_name). For example, the linked_resource for a table resource from BigQuery is: //bigquery.googleapis.com/projects/{PROJECT_ID}/datasets/{DATASET_ID}/tables/{TABLE_ID} Output only when the entry is one of the types in the EntryType enum. For entries with a user_specified_type, this field is optional and defaults to an empty string. The resource string must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), periods (.), colons (:), slashes (/), dashes (-), and hashes (#). The maximum size is 200 bytes when encoded in UTF-8.
  • looker-system-spec parent-instance-display-name=stet
    • Name of the parent Looker Instance. Empty if it does not exist.
  • parent-instance-id=vero
    • ID of the parent Looker Instance. Empty if it does not exist. Example value: someinstance.looker.com
  • parent-model-display-name=elitr
    • Name of the parent Model. Empty if it does not exist.
  • parent-model-id=lorem
    • ID of the parent Model. Empty if it does not exist.
  • parent-view-display-name=diam
    • Name of the parent View. Empty if it does not exist.
  • parent-view-id=no

    • ID of the parent View. Empty if it does not exist.
  • ..model-spec.vertex-model-spec container-image-uri=ipsum

    • URI of the Docker image to be used as the custom container for serving predictions.
  • version-aliases=accusam
    • User-provided version aliases so that a model version can be referenced via an alias.
    • Each invocation of this argument appends the given value to the array.
  • version-description=takimata
    • The description of this version.
  • version-id=consetetur
    • The version ID of the model.
  • vertex-model-source-info copy=false
    • Whether this Model is a copy of another Model. If true, source_type pertains to the original Model.
  • source-type=erat

    • Type of the model source.
  • .... name=consetetur

    • Output only. The resource name of an entry in URL format. Note: The entry itself and its child resources might not be stored in the location specified in its name.
  • personal-details star-time=amet.
    • Set if the entry is starred; unset otherwise.
  • starred=true

    • True if the entry is starred by the user; false otherwise.
  • ..routine-spec.bigquery-routine-spec imported-libraries=et

    • Paths of the imported libraries.
    • Each invocation of this argument appends the given value to the array.
  • .. definition-body=accusam

    • The body of the routine.
  • language=voluptua.
    • The language the routine is written in. The exact value depends on the source system. For BigQuery routines, possible values are: * SQL * JAVASCRIPT
  • return-type=dolore
    • Return type of the argument. The exact value depends on the source system and the language.
  • routine-type=dolore

    • The type of the routine.
  • ..source-system-timestamps create-time=dolore

    • Creation timestamp of the resource within the given system.
  • expire-time=voluptua.
    • Output only. Expiration timestamp of the resource within the given system. Currently only applicable to BigQuery resources.
  • update-time=amet.

    • Timestamp of the last modification of the resource or its metadata within a given system. Note: Depending on the source system, not every modification updates this timestamp. For example, BigQuery timestamps every metadata modification but not data or permission changes.
  • ..sql-database-system-spec database-version=ea

    • Version of the database engine.
  • instance-host=sadipscing
    • Host of the SQL database. enum InstanceHost { UNDEFINED = 0; SELF_HOSTED = 1; CLOUD_SQL = 2; AMAZON_RDS = 3; AZURE_SQL = 4; } Host of the enclosing database instance.
  • sql-engine=lorem

    • SQL Database Engine. enum SqlEngine { UNDEFINED = 0; MY_SQL = 1; POSTGRE_SQL = 2; SQL_SERVER = 3; } Engine of the enclosing database instance.
  • .. type=invidunt

    • The type of the entry. For details, see EntryType.
  • usage-signal favorite-count=-11
    • Favorite count in the source system.
  • update-time=est

    • The end timestamp of the duration of usage statistics.
  • .. user-specified-system=at

    • Indicates the entry's source system that Data Catalog doesn't automatically integrate with. The user_specified_system string has the following limitations: * Is case insensitive. * Must begin with a letter or underscore. * Can only contain letters, numbers, and underscores. * Must be at least 1 character and at most 64 characters long.
  • user-specified-type=sed
    • Custom entry type that doesn't match any of the values allowed for input and listed in the EntryType enum. When creating an entry, first check the type values in the enum. If there are no appropriate types for the new entry, provide a custom value, for example, my_special_type. The user_specified_type string has the following limitations: * Is case insensitive. * Must begin with a letter or underscore. * Can only contain letters, numbers, and underscores. * Must be at least 1 character and at most 64 characters long.
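Taken together, arguments like these assemble a JSON request body. As a sketch, the following Python builds the equivalent body for a minimal FILESET entry; the field names use the REST API's camelCase, and all values are hypothetical:

```python
import json

# Rough equivalent of:
#   -r . type=FILESET display-name='My fileset'
#      gcs-fileset-spec file-patterns='gs://my-bucket/dir/*'
entry = {
    "type": "FILESET",
    "displayName": "My fileset",
    "gcsFilesetSpec": {
        # file-patterns is a repeated field; each CLI invocation appends one value.
        "filePatterns": ["gs://my-bucket/dir/*"],
    },
}
body = json.dumps(entry)
```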

About Cursors

The cursor position is key to comfortably set complex nested structures. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested, as in -r f.s.o.
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using '..'. Each additional . moves it up one additional level. E.g. '....' would go up three levels.
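
The rules above can be modeled as a small path manipulation. This is a toy sketch of the cursor semantics, not the CLI's actual implementation:

```python
def move_cursor(cursor, spec):
    """Apply one cursor/field spec to a cursor path (a list of field names)."""
    dots = len(spec) - len(spec.lstrip("."))
    fields = spec.lstrip(".")
    if dots == 1:
        cursor = []  # a single leading '.': relative to the top-level structure
    elif dots >= 2:
        # '..' moves up one level; each additional '.' moves up one more
        cursor = cursor[: len(cursor) - (dots - 1)]
    if fields:
        cursor = cursor + fields.split(".")
    return cursor
```

For example, from ['bigquery-table-spec', 'view-spec'], the spec '...business-context.entry-overview' goes up two levels to the top and descends again, matching the walkthrough above.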

Optional Output Flags

The method's return value is a JSON-encoded structure, which is written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result is written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that will contain the received bytes. If unset, it defaults to standard output.

Optional Method Properties

You may set the following properties to further configure the call. Note that -p is followed by one or more key-value pairs and is invoked like this: -p k1=v1 k2=v2, even though the listing below repeats -p for completeness.

  • -p entry-id=string
    • Required. The ID of the entry to create. The ID must contain only letters (a-z, A-Z), numbers (0-9), and underscores (_). The maximum size is 64 bytes when encoded in UTF-8.
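
The constraints on entry-id can be checked locally with a simple pattern match. This is a sketch only; the server performs the authoritative validation:

```python
import re

def valid_entry_id(entry_id: str) -> bool:
    # Letters (a-z, A-Z), numbers (0-9), and underscores only;
    # at most 64 bytes when encoded in UTF-8.
    return (
        re.fullmatch(r"[A-Za-z0-9_]+", entry_id) is not None
        and len(entry_id.encode("utf-8")) <= 64
    )
```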

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").