Adds a text (chat, for example) or audio (phone recording, for example) message from a participant into the conversation. Note: Always use agent versions for production traffic sent to virtual agents. See Versions and environments.

Scopes

You will need authorization for at least one of the following scopes to make a valid call:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: dialogflow2-beta1 --scope <scope> projects locations-conversations-participants-analyze-content ...
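
For example, to request only the narrower Dialogflow scope, the invocation above becomes (a sketch; the trailing arguments are elided here just as they are above):

  dialogflow2-beta1 --scope https://www.googleapis.com/auth/dialogflow \
      projects locations-conversations-participants-analyze-content ...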

Required Scalar Argument

  • <participant> (string)
    • Required. The name of the participant this text comes from. Format: projects//locations//conversations//participants/. An illustrative invocation follows.
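
For illustration only, with entirely hypothetical project, location, conversation, and participant IDs, the fully qualified name passed as the positional argument might look like:

  dialogflow2-beta1 projects locations-conversations-participants-analyze-content \
      projects/my-project/locations/global/conversations/my-conversation/participants/my-participant \
      ...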

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case, it is advisable to set the field-cursor to the data-structure's field so that its values can be specified more concisely.

For example, a structure like this:

GoogleCloudDialogflowV2beta1AnalyzeContentRequest:
  assist-query-params:
    documents-metadata-filters: { string: string }
  audio-input:
    audio: string
    config:
      audio-encoding: string
      barge-in-config:
        no-barge-in-duration: string
        total-duration: string
      disable-no-speech-recognized-event: boolean
      enable-automatic-punctuation: boolean
      enable-word-info: boolean
      language-code: string
      model: string
      model-variant: string
      opt-out-conformer-model-migration: boolean
      phrase-hints: [string]
      sample-rate-hertz: integer
      single-utterance: boolean
  cx-current-page: string
  event-input:
    language-code: string
    name: string
  intent-input:
    intent: string
    language-code: string
  message-send-time: string
  query-params:
    geo-location:
      latitude: number
      longitude: number
    knowledge-base-names: [string]
    platform: string
    reset-contexts: boolean
    sentiment-analysis-request-config:
      analyze-query-text-sentiment: boolean
    time-zone: string
    webhook-headers: { string: string }
  reply-audio-config:
    audio-encoding: string
    sample-rate-hertz: integer
    synthesize-speech-config:
      effects-profile-id: [string]
      pitch: number
      speaking-rate: number
      voice:
        name: string
        ssml-gender: string
      volume-gain-db: number
  request-id: string
  suggestion-input:
    answer-record: string
    intent-input:
      intent: string
      language-code: string
    text-override:
      language-code: string
      text: string
  text-input:
    language-code: string
    text: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time. A condensed example follows the list.

  • -r .assist-query-params documents-metadata-filters=key=labore

    • Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be documents_metadata_filters { key: "market" value: "US,CA" } documents_metadata_filters { key: "user" value: "agent" }
    • the value will be associated with the given key
  • ..audio-input audio=eos

    • Required. The natural language speech audio to be processed. A single request can contain up to 1 minute of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
  • config audio-encoding=invidunt
    • Required. Audio encoding of the audio content to process.
  • barge-in-config no-barge-in-duration=at
    • Duration that is not eligible for barge-in at the beginning of the input audio.
  • total-duration=sea

    • Total duration for the playback at the beginning of the input audio.
  • .. disable-no-speech-recognized-event=true

    • Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
  • enable-automatic-punctuation=true
    • Enable automatic punctuation option at the speech backend.
  • enable-word-info=false
    • If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
  • language-code=et
    • Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • model=et
    • Optional. Which Speech model to select for the given request. For more information, see Speech models.
  • model-variant=duo
    • Which variant of the Speech model to use.
  • opt-out-conformer-model-migration=false
    • If true, the request will opt out of the STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
  • phrase-hints=voluptua.
    • A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
    • Each invocation of this argument appends the given value to the array.
  • sample-rate-hertz=68
    • Required. Sample rate (in Hertz) of the submitted audio.
  • single-utterance=true

    • If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
  • ... cx-current-page=voluptua.

    • The unique identifier of the CX page to override the current_page in the session. Format: projects//locations//agents//flows//pages/. If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the previous page and the previous session parameters. In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
  • event-input language-code=voluptua.
    • Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
  • name=tempor

    • Required. The unique identifier of the event.
  • ..intent-input intent=takimata

    • Required. The unique identifier of the intent in V3 agent. Format: projects//locations//agents//intents/.
  • language-code=ut

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.
  • .. message-send-time=no

    • Optional. The send time of the message from the end user's or human agent's perspective. It is used for identifying the same message under one participant. Given two messages under the same participant: if the send times are different, the conversation will regard them as two distinct messages, regardless of whether their content is exactly the same; if the send times are the same, the conversation will regard them as the same message and ignore the message received later, regardless of whether their content is exactly the same. If the value is not provided, a new request will always be regarded as a new message without any de-duplication.
  • query-params.geo-location latitude=0.3048795618896566
    • The latitude in degrees. It must be in the range [-90.0, +90.0].
  • longitude=0.8769893092921054

    • The longitude in degrees. It must be in the range [-180.0, +180.0].
  • .. knowledge-base-names=stet

    • KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through UI) will be used. Format: projects//knowledgeBases/.
    • Each invocation of this argument appends the given value to the array.
  • platform=lorem
    • The platform of the virtual agent response messages. If not empty, only emits messages from this platform in the response. Valid values are the enum names of platform.
  • reset-contexts=false
    • Specifies whether to delete all contexts in the current session before the new ones are activated.
  • sentiment-analysis-request-config analyze-query-text-sentiment=true

    • Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text.
  • .. time-zone=rebum.

    • The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.
  • webhook-headers=key=eirmod

    • This field can be used to pass HTTP headers for a webhook call. These headers will be sent to webhook along with the headers that have been configured through Dialogflow web console. The headers defined within this field will overwrite the headers configured through Dialogflow console if there is a conflict. Header names are case-insensitive. Google's specified headers are not allowed. Including: "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc.
    • the value will be associated with the given key
  • ..reply-audio-config audio-encoding=sit

    • Required. Audio encoding of the synthesized audio content.
  • sample-rate-hertz=40
    • The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
  • synthesize-speech-config effects-profile-id=kasd
    • Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.
    • Each invocation of this argument appends the given value to the array.
  • pitch=0.6742454262168728
    • Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
  • speaking-rate=0.743127749321243
    • Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other values < 0.25 or > 4.0 will return an error.
  • voice name=nonumy
    • Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and ssml_gender. For the list of available voices, please refer to Supported voices and languages.
  • ssml-gender=kasd

    • Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
  • .. volume-gain-db=0.6224708663432337

    • Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not exceeding +10 (dB), as there's usually no effective increase in loudness for any value greater than that.
  • ... request-id=et

    • A unique identifier for this request. Restricted to 36 ASCII characters. A random UUID is recommended. This request is only idempotent if a request_id is provided.
  • suggestion-input answer-record=dolor
    • Required. The ID of a suggestion selected by the human agent. The suggestion(s) were generated in a previous call to request Dialogflow assist. The format is: projects//locations//answerRecords/ where is an alphanumeric string.
  • intent-input intent=elitr
    • Required. The unique identifier of the intent in V3 agent. Format: projects//locations//agents//intents/.
  • language-code=sanctus

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.
  • ..text-override language-code=dolor

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • text=sea

    • Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters for virtual agent interactions.
  • ...text-input language-code=sanctus

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • text=sit
    • Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters for virtual agent interactions.
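
The condensed example referenced above: a minimal sketch that sends a single text message, setting only the required text-input fields and an optional request-id. The participant name and all field values are hypothetical.

  dialogflow2-beta1 projects locations-conversations-participants-analyze-content \
      projects/my-project/locations/global/conversations/my-conversation/participants/my-participant \
      -r .text-input language-code=en-US text='I need help with my order' \
         .. request-id=123e4567-e89b-42d3-a456-426614174000

The single -r value first moves the cursor into text-input, sets its two required fields, then uses .. to return to the top level before setting request-id.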

About Cursors

The cursor position is key to comfortably setting complex nested structures. The following rules apply, with a short demonstration after the list:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using .. and each additional . moves it up one more level. For example, ... (three dots) moves the cursor two levels up.
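
The demonstration referenced above, exercising each rule against the request structure from earlier (all field values are placeholders):

  -r .audio-input.config language-code=en-US \
       barge-in-config no-barge-in-duration=5s \
       .. model-variant=duo \
       ... message-send-time=2024-01-01T00:00:00Z

The first token uses an absolute nested path starting with . to land on audio-input.config; barge-in-config then moves relative to that cursor; .. climbs back to config; and ... climbs two levels to the top-level structure, where message-send-time lives.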

Optional Output Flags

The method's return value is a JSON-encoded structure, which is written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output. A short example follows.
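
The short example referenced above, writing the response to a file instead of standard output (the participant and request arguments are elided, and the filename is arbitrary):

  dialogflow2-beta1 projects locations-conversations-participants-analyze-content ... \
      -o response.json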

Optional General Properties

The following properties can configure any call, and are not specific to this method. An example follows the list.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").
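
The example referenced above mixes two of these properties into a call, pretty-printing the response and attributing the request to an arbitrary quota user (both values are illustrative):

  dialogflow2-beta1 projects locations-conversations-participants-analyze-content ... \
      -p pretty-print=true -p quota-user=my-app-user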