Creates an evaluation job.

Scopes

You will need authorization for the https://www.googleapis.com/auth/cloud-platform scope to make a valid call.

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: datalabeling1-beta1 --scope <scope> projects evaluation-jobs-create ...
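
For example, to spell the default scope out explicitly (a sketch: projects/my-project is a placeholder, and the required parent argument is assumed to directly follow the method name, as with other scalar arguments):

  datalabeling1-beta1 --scope https://www.googleapis.com/auth/cloud-platform projects evaluation-jobs-create projects/my-project ...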

Required Scalar Argument

  • <parent> (string)
    • Required. Evaluation job resource parent. Format: "projects/{project_id}"

Required Request Value

The request value is a data structure with various fields. Each field may be a simple scalar or another data structure. In the latter case, it is advisable to move the field cursor to that nested structure so that its values can be set more concisely.

For example, a structure like this:

GoogleCloudDatalabelingV1beta1CreateEvaluationJobRequest:
  job:
    annotation-spec-set: string
    create-time: string
    description: string
    evaluation-job-config:
      bigquery-import-keys: { string: string }
      bounding-poly-config:
        annotation-spec-set: string
        instruction-message: string
      evaluation-config:
        bounding-box-evaluation-options:
          iou-threshold: number
      evaluation-job-alert-config:
        email: string
        min-acceptable-mean-average-precision: number
      example-count: integer
      example-sample-percentage: number
      human-annotation-config:
        annotated-dataset-description: string
        annotated-dataset-display-name: string
        contributor-emails: [string]
        instruction: string
        label-group: string
        language-code: string
        question-duration: string
        replica-count: integer
        user-email-address: string
      image-classification-config:
        allow-multi-label: boolean
        annotation-spec-set: string
        answer-aggregation-type: string
      input-config:
        annotation-type: string
        bigquery-source:
          input-uri: string
        classification-metadata:
          is-multi-label: boolean
        data-type: string
        gcs-source:
          input-uri: string
          mime-type: string
        text-metadata:
          language-code: string
      text-classification-config:
        allow-multi-label: boolean
        annotation-spec-set: string
        sentiment-config:
          enable-label-sentiment-selection: boolean
    label-missing-ground-truth: boolean
    model-version: string
    name: string
    schedule: string
    state: string

can be set completely with the following arguments which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r .job annotation-spec-set=et
    • Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
  • create-time=accusam
    • Output only. Timestamp of when this evaluation job was created.
  • description=voluptua.
    • Required. Description of the job. The description can be up to 25,000 characters long.
  • evaluation-job-config bigquery-import-keys=key=dolore
    • Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    • the value will be associated with the given key
  • bounding-poly-config annotation-spec-set=dolore
    • Required. Annotation spec set resource name.
  • instruction-message=dolore
    • Optional. Instruction message shown on the contributors' UI.
  • ..evaluation-config.bounding-box-evaluation-options iou-threshold=0.18498741667957075
    • Minimum intersection-over-union (IOU) required for two bounding boxes to be considered a match. Must be a number between 0 and 1.
  • ...evaluation-job-alert-config email=ea
    • Required. An email address to send alerts to.
  • min-acceptable-mean-average-precision=0.05043422241780038
    • Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
  • .. example-count=63
    • Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
  • example-sample-percentage=0.7036393712200139
    • Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
  • human-annotation-config annotated-dataset-description=at
    • Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
  • annotated-dataset-display-name=sed
    • Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
  • contributor-emails=sit
    • Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    • Each invocation of this argument appends the given value to the array.
  • instruction=et
    • Required. Instruction resource name.
  • label-group=tempor
    • Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\d_-]{0,128}.
  • language-code=aliquyam
    • Optional. The language of this question, as a BCP-47 language code. Default value is en-US. You only need to set this when the task is language-related, for example French text classification.
  • question-duration=ipsum
    • Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
  • replica-count=83
    • Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
  • user-email-address=sanctus
    • Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
  • ..image-classification-config allow-multi-label=true
    • Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
  • annotation-spec-set=est
    • Required. Annotation spec set resource name.
  • answer-aggregation-type=sed
    • Optional. How to aggregate answers.
  • ..input-config annotation-type=diam
    • Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
  • bigquery-source input-uri=dolores
    • Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
  • ..classification-metadata is-multi-label=true
    • Whether the classification task is multi-label or not.
  • .. data-type=et
    • Required. Data type must be specified when a user tries to import data.
  • gcs-source input-uri=sed
    • Required. The input URI of the source file. This must be a Cloud Storage path (gs://...).
  • mime-type=no
    • Required. The format of the source file. Only "text/csv" is supported.
  • ..text-metadata language-code=et
    • The language of this text, as a BCP-47. Default value is en-US.
  • ...text-classification-config allow-multi-label=false
    • Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
  • annotation-spec-set=sed
    • Required. Annotation spec set resource name.
  • sentiment-config enable-label-sentiment-selection=true
    • If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
  • .... label-missing-ground-truth=false
    • Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
  • model-version=at
    • Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
  • name=sadipscing
    • Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
  • schedule=aliquyam
    • Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
  • state=dolores
    • Output only. Describes the current state of the job.
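
Putting it together, here is a minimal sketch of a complete invocation. All values are placeholders and the request is abridged; a real call must also supply the remaining required fields documented above:

  datalabeling1-beta1 projects evaluation-jobs-create projects/my-project \
      -r .job description="Nightly evaluation of my model" \
      model-version=projects/my-project/models/my-model/versions/v1 \
      schedule="every 24 hours" \
      evaluation-job-config example-count=100 example-sample-percentage=0.1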

About Cursors

The cursor position is key to setting complex nested structures comfortably. The following rules apply (an example follows the list):

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o.
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using "..". Each additional "." moves it up one additional level. E.g. "...." would go three levels up.
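
To illustrate with hypothetical values from the structure above, the following arguments set a field three levels down, move the cursor up one level, and then set a field of the parent structure:

  -r .job.evaluation-job-config.human-annotation-config instruction=my-instruction .. example-count=63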

Optional Output Flags

The method's return value is a JSON-encoded structure, which will be written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.
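
For example, to write the created job to a file instead of standard output (result.json is an arbitrary path):

  datalabeling1-beta1 projects evaluation-jobs-create projects/my-project ... -o result.json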

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string
    • V1 error format.
  • -p access-token=string
    • OAuth access token.
  • -p alt=string
    • Data format for response.
  • -p callback=string
    • JSONP
  • -p fields=string
    • Selector specifying which fields to include in a partial response.
  • -p key=string
    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string
    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean
    • Returns response with indentations and line breaks.
  • -p quota-user=string
    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string
    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string
    • Upload protocol for media (e.g. "raw", "multipart").
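
For instance, a sketch requesting a pretty-printed partial response containing only the job's name and state (field names taken from the structure above; the fields selector syntax is the standard Google API partial-response format):

  datalabeling1-beta1 projects evaluation-jobs-create projects/my-project ... -p fields=name,state -p pretty-print=true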