Adds a text (for example, chat) or audio (for example, a phone recording) message from a participant into the conversation. Note: Always use agent versions for production traffic sent to virtual agents. See Versions and environments.

Scopes

You will need authorization for at least one of the following scopes to make a valid call:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

If unset, the scope for this method defaults to https://www.googleapis.com/auth/cloud-platform. You can set the scope for this method like this: dialogflow2-beta1 --scope <scope> projects conversations-participants-analyze-content ...

Required Scalar Argument

  • <participant> (string)
    • Required. The name of the participant this text comes from. Format: projects//locations//conversations//participants/.

Required Request Value

The request value is a data-structure with various fields. Each field may be a simple scalar or another data-structure. In the latter case, it is advisable to set the field-cursor to the nested data-structure so that values can be specified more concisely.

For example, a structure like this:

GoogleCloudDialogflowV2beta1AnalyzeContentRequest:
  assist-query-params:
    documents-metadata-filters: { string: string }
  audio-input:
    audio: string
    config:
      audio-encoding: string
      barge-in-config:
        no-barge-in-duration: string
        total-duration: string
      disable-no-speech-recognized-event: boolean
      enable-automatic-punctuation: boolean
      enable-word-info: boolean
      language-code: string
      model: string
      model-variant: string
      opt-out-conformer-model-migration: boolean
      phrase-hints: [string]
      sample-rate-hertz: integer
      single-utterance: boolean
  cx-current-page: string
  event-input:
    language-code: string
    name: string
  intent-input:
    intent: string
    language-code: string
  message-send-time: string
  query-params:
    geo-location:
      latitude: number
      longitude: number
    knowledge-base-names: [string]
    platform: string
    reset-contexts: boolean
    sentiment-analysis-request-config:
      analyze-query-text-sentiment: boolean
    time-zone: string
    webhook-headers: { string: string }
  reply-audio-config:
    audio-encoding: string
    sample-rate-hertz: integer
    synthesize-speech-config:
      effects-profile-id: [string]
      pitch: number
      speaking-rate: number
      voice:
        name: string
        ssml-gender: string
      volume-gain-db: number
  request-id: string
  suggestion-input:
    answer-record: string
    intent-input:
      intent: string
      language-code: string
    text-override:
      language-code: string
      text: string
  text-input:
    language-code: string
    text: string

can be set completely with the following arguments, which are assumed to be executed in the given order. Note how the cursor position is adjusted to the respective structures, allowing simple field names to be used most of the time.

  • -r .assist-query-params documents-metadata-filters=key=amet

    • Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be documents_metadata_filters { key: "market" value: "US,CA" } documents_metadata_filters { key: "user" value: "agent" }
    • the value will be associated with the given key
  • ..audio-input audio=magna

    • Required. The natural language speech audio to be processed. A single request can contain up to 1 minute of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
  • config audio-encoding=magna
    • Required. Audio encoding of the audio content to process.
  • barge-in-config no-barge-in-duration=invidunt
    • Duration that is not eligible for barge-in at the beginning of the input audio.
  • total-duration=et

    • Total duration for the playback at the beginning of the input audio.
  • .. disable-no-speech-recognized-event=true

    • Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
  • enable-automatic-punctuation=false
    • Enable automatic punctuation option at the speech backend.
  • enable-word-info=false
    • If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
  • language-code=vero
    • Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • model=ea
    • Optional. Which Speech model to select for the given request. For more information, see Speech models.
  • model-variant=et
    • Which variant of the Speech model to use.
  • opt-out-conformer-model-migration=true
    • If true, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
  • phrase-hints=eirmod
  • sample-rate-hertz=43
  • single-utterance=false

    • If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
  • ... cx-current-page=dolor

    • The unique identifier of the CX page to override the current_page in the session. Format: projects//locations//agents//flows//pages/. If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the previous page and the previous session parameters. In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
  • event-input language-code=et
    • Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
  • name=et

    • Required. The unique identifier of the event.
  • ..intent-input intent=erat

    • Required. The unique identifier of the intent in V3 agent. Format: projects//locations//agents//intents/.
  • language-code=eos

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.
  • .. message-send-time=nonumy

    • Optional. The send time of the message from end user or human agent's perspective. It is used for identifying the same message under one participant. Given two messages under the same participant: * If send time are different regardless of whether the content of the messages are exactly the same, the conversation will regard them as two distinct messages sent by the participant. * If send time is the same regardless of whether the content of the messages are exactly the same, the conversation will regard them as same message, and ignore the message received later. If the value is not provided, a new request will always be regarded as a new message without any de-duplication.
  • query-params.geo-location latitude=0.26872582587862837
    • The latitude in degrees. It must be in the range [-90.0, +90.0].
  • longitude=0.5409905970250275

    • The longitude in degrees. It must be in the range [-180.0, +180.0].
  • .. knowledge-base-names=nonumy

    • KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through UI) will be used. Format: projects//knowledgeBases/.
    • Each invocation of this argument appends the given value to the array.
  • platform=stet
    • The platform of the virtual agent response messages. If not empty, only emits messages from this platform in the response. Valid values are the enum names of platform.
  • reset-contexts=true
    • Specifies whether to delete all contexts in the current session before the new ones are activated.
  • sentiment-analysis-request-config analyze-query-text-sentiment=true

    • Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text.
  • .. time-zone=dolores

    • The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.
  • webhook-headers=key=aliquyam

    • This field can be used to pass HTTP headers for a webhook call. These headers will be sent to webhook along with the headers that have been configured through Dialogflow web console. The headers defined within this field will overwrite the headers configured through Dialogflow console if there is a conflict. Header names are case-insensitive. Google's specified headers are not allowed. Including: "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc.
    • the value will be associated with the given key
  • ..reply-audio-config audio-encoding=sanctus

    • Required. Audio encoding of the synthesized audio content.
  • sample-rate-hertz=13
    • The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
  • synthesize-speech-config effects-profile-id=dolor
    • Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.
    • Each invocation of this argument appends the given value to the array.
  • pitch=0.2047944506633067
    • Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
  • speaking-rate=0.8996515210360224
    • Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other values < 0.25 or > 4.0 will return an error.
  • voice name=no
    • Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and ssml_gender. For the list of available voices, please refer to Supported voices and languages.
  • ssml-gender=gubergren

    • Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
  • .. volume-gain-db=0.1631162699452855

    • Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that.
  • ... request-id=consetetur

    • A unique identifier for this request. Restricted to 36 ASCII characters. A random UUID is recommended. This request is only idempotent if a request_id is provided.
  • suggestion-input answer-record=ea
    • Required. The ID of a suggestion selected by the human agent. The suggestion(s) were generated in a previous call to request Dialogflow assist. The format is: projects//locations//answerRecords/ where the answer record ID is an alphanumeric string.
  • intent-input intent=lorem
    • Required. The unique identifier of the intent in V3 agent. Format: projects//locations//agents//intents/.
  • language-code=elitr

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.
  • ..text-override language-code=justo

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • text=lorem

    • Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters for virtual agent interactions.
  • ...text-input language-code=labore

    • Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • text=gubergren
    • Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters for virtual agent interactions.
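
The documents-metadata-filters semantics described above (match all filters; a comma-separated value lists the allowed alternatives for one key) can be sketched like this. This is an illustration of the documented behavior, not Dialogflow's implementation, and the helper name is hypothetical:

```python
# Sketch of documents-metadata-filters matching: a suggested document
# matches only if, for every filter key, its metadata value is one of
# the comma-separated alternatives given for that key.

def matches_filters(document_metadata, filters):
    """True if the document satisfies ALL metadata filters."""
    return all(
        document_metadata.get(key) in allowed.split(",")
        for key, allowed in filters.items()
    )

filters = {"market": "US,CA", "user": "agent"}
```

Under these filters, a document with metadata `{"market": "US", "user": "agent"}` matches, while one with `{"market": "UK", "user": "agent"}` does not, since "UK" is not among the alternatives "US,CA".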

About Cursors

The cursor position is key to comfortably set complex nested structures. The following rules apply:

  • The cursor position is always set relative to the current one, unless the field name starts with the . character. Fields can be nested such as in -r f.s.o .
  • The cursor position is set relative to the top-level structure if it starts with ., e.g. -r .s.s
  • You can also set nested fields without setting the cursor explicitly. For example, to set a value relative to the current cursor position, you would specify -r struct.sub_struct=bar.
  • You can move the cursor one level up by using '..'. Each additional '.' moves it up one additional level, e.g. '....' would go three levels up.
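
The cursor rules can be modeled with a small interpreter. The sketch below is not the tool's actual implementation and its helper names are hypothetical; it simply applies the documented rules (tokens without '=' move the cursor, tokens with '=' set a value relative to it) to build a nested structure:

```python
# Hypothetical model of the cursor semantics: a cursor is a list of field
# names; path tokens move it, field=value tokens write into the structure.

def move(cursor, token):
    """Return a new cursor after applying a path token."""
    if token.startswith(".."):
        # '..' moves up one level; each additional '.' moves up one more.
        dots = len(token) - len(token.lstrip("."))
        cursor = cursor[: max(0, len(cursor) - (dots - 1))]
        rest = token.lstrip(".")
    elif token.startswith("."):
        # A single leading '.' makes the path absolute (top-level).
        cursor, rest = [], token[1:]
    else:
        rest = token  # plain names are relative to the current cursor
    return cursor + [part for part in rest.split(".") if part]

def apply_args(tokens):
    """Assemble a nested dict from cursor moves and field=value tokens."""
    root, cursor = {}, []
    for token in tokens:
        if "=" not in token:
            cursor = move(cursor, token)
            continue
        field, _, value = token.partition("=")
        path = move(cursor, field)
        node = root
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return root

request = apply_args([
    ".text-input", "text=Hi", "language-code=en-US",
    "..query-params", "reset-contexts=true",
])
```

Here `.text-input` jumps to the top-level text-input structure, the two assignments fill its fields, and `..query-params` moves one level back up before descending into query-params, mirroring the argument sequence shown earlier.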

Optional Output Flags

The method's return value is a JSON-encoded structure, which will be written to standard output by default.

  • -o out
    • out specifies the destination to which the server's result will be written. It will be a JSON-encoded structure. The destination may be - to indicate standard output, or a filepath that is to contain the received bytes. If unset, it defaults to standard output.
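
The '-' vs. filepath convention for the destination can be sketched as follows. This is an illustration of the convention, not the tool's code, and the function name is hypothetical:

```python
# Sketch of the '-o out' destination convention: '-' means standard
# output, anything else is treated as a file path to write to.
import json
import sys

def write_result(result, dest="-"):
    """Write the JSON-encoded result to stdout ('-') or to a file path."""
    payload = json.dumps(result)
    if dest == "-":
        sys.stdout.write(payload + "\n")
    else:
        with open(dest, "w") as fh:
            fh.write(payload)
```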

Optional General Properties

The following properties can configure any call, and are not specific to this method.

  • -p $-xgafv=string

    • V1 error format.
  • -p access-token=string

    • OAuth access token.
  • -p alt=string

    • Data format for response.
  • -p callback=string

    • JSONP
  • -p fields=string

    • Selector specifying which fields to include in a partial response.
  • -p key=string

    • API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  • -p oauth-token=string

    • OAuth 2.0 token for the current user.
  • -p pretty-print=boolean

    • Returns response with indentations and line breaks.
  • -p quota-user=string

    • Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  • -p upload-type=string

    • Legacy upload protocol for media (e.g. "media", "multipart").
  • -p upload-protocol=string

    • Upload protocol for media (e.g. "raw", "multipart").