Dialogue Cloud

How to connect to Google DialogFlow in Dialogue Studio

With this scenario you will integrate your DialogFlow (Google) Natural Language Understanding (NLU) with AnywhereNow Dialogue Studio. This means you can integrate a smart assistant into your IVR (Interactive Voice Response: a telephone application to take orders via the telephone keypad or voice; by choosing menu options the caller receives information without the intervention of a human operator, or is forwarded to the appropriate Agent). The assistant can handle questions, such as case information, and perform intelligent routing based on the user's speech input. AnywhereNow Dialogue Studio contains specific nodes that can connect with DialogFlow (please see the node explanations under Components for more details).

Using a smart assistant in your IVR adds an extra layer that can gather information. The assistant can route your users to specific skills based on their voice input or provide a self-service experience.

Preparation

This scenario will make use of Google Dialogflow.

DialogFlow can be configured at https://dialogflow.cloud.google.com/.

Intents capture the goal of the user's input: what does the user want to achieve? For example, ‘I need help with insuring my new car’ or ‘I have a question about my current ticket’. Create a DialogFlow agent with at least two or three specific intents; start off with three intents that are not too similar. Each intent needs around 15-20 training phrases (more is better, but too many can make your assistant too specialized in one intent), and every intent should have roughly the same number of training phrases.

Entities are specific topics, such as the type of insurance. You can define synonyms which the assistant can recognize (instead of ‘car’ you could, for example, introduce car brands). The content of an entity does not determine the intent, but the presence of an entity will. Don’t forget to highlight the entities in the training phrases if DialogFlow doesn’t do this automatically.

Prerequisites

For this scenario the following prerequisites must be in place:

Configuration

Bot greeting

Our first step is to initiate the bot, greet the customer, and wait for their response.

We are going to start with an Incoming Call node, which only listens for audio/video sessions. This node is connected to our server using SignalR.

Steps:

  1. Drag and drop Incoming Call Node

  2. Open Node

    1. Select / Configure server

    2. Filter on: Audio/Video

Next, greet the customer and ask what their question is. This can be done with a Say node. Here you can type the text, which will be converted to speech using a Speech Engine (see: Speech Engines for AnywhereNow).

Steps:

  1. Drag and drop a Say Node

  2. Open Node

    1. Enter text you want to play

  3. Connect end of the Incoming Call node with begin of the Say Node

Start Transcriptor

For the Voice-bot we want to listen to the customer. This can be done with the Transcriptor node.

Steps:

  1. Drag and drop a Transcriptor node

  2. Open Node

    1. Optionally, change the language of the node

  3. Connect the end of the Say node (or the otherwise option of the Switch node) with the begin of the Transcriptor node.

By default the Transcriptor node will keep listening to the customer and restarting the flow. This is useful when integrating with a natural language understanding API that can hold a conversation (for example DialogFlow). In this case we need to stop after the first output. This can be done by stopping the Transcriptor with a stop payload.

Steps:

  1. Drag and drop Change node

  2. Open Node

    1. Create rule

      1. Set: msg.payload.stop

        to: true (Boolean)

  3. Connect "events with transcriptor" end with begin of the Change node

  4. Connect end of the Change node with begin of the Transcriptor node
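
If you prefer a Function node over the Change node, the same stop signal can be set in a few lines of JavaScript. This is a minimal sketch, assuming (as in the Change rule above) that the Transcriptor node stops when it receives a payload with stop set to true.

  Function
  // Mimic the Change rule above: tell the Transcriptor node to stop listening.
  msg.payload = msg.payload || {};   // keep any existing payload fields
  msg.payload.stop = true;           // Set msg.payload.stop to true (Boolean)
  return msg;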

Finally we need to store the output in an object we can use later. This can be done with a Change node.

Steps:

  1. Drag and drop a Change node

  2. Open Node

    1. Create rule

      1. Set: msg.payload

        to: msg.payload.transcriptor.transcript

  3. Connect "events with transcriptor" end with begin of the Change node

Configure DialogFlow

Next is the DialogFlow node, the node that connects your IVR to the smart assistant. This node needs to be configured, but AnywhereNow Dialogue Studio presents the required information to the creator. To configure this node correctly you need a working DialogFlow assistant and access to the Google Cloud console. Also set the identifier to msg.session.id; this makes sure the user stays in the same session and does not start over after one run through DialogFlow.

Steps:

  • Drag and drop a DialogFlow node

  • Open Node:

  • Connect the output of the previous node(s) (Change node) to the begin of the DialogFlow node.

Now whenever a user says something in the IVR, this message is sent to DialogFlow, which detects the intent and entities and, if needed, continues the dialogue to obtain the information that is needed to route the user to the correct Skill or handle their questions.
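
To inspect what the DialogFlow node returns, you can temporarily connect a Function (or Debug) node behind it. The sketch below only logs the fields this scenario uses later on (the intent name and the first fulfillment text); these property paths are taken from the steps that follow, while the rest of the response structure depends on your agent.

  Function
  // Log the detected intent and the first response text for troubleshooting.
  const intent = (msg.payload.intent || {}).displayName;
  const messages = msg.payload.fulfillmentMessages || [];
  const reply = messages.length ? messages[0].text.text[0] : "";
  node.warn("Detected intent: " + intent + " | Response: " + reply);
  return msg;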

Handle response

Now that we have our response, we need to handle it appropriately. First we need to store the output in an object we can use later. This can be done with a Change node.

Steps:

  1. Drag and drop a Change node

  2. Open Node

    1. Create 2 rules

      1. Set: msg.complete

        to: msg.payload.diagnosticInfo.fields.end_conversation.boolValue

      2. Set: msg.intent

        to: msg.payload.intent.displayName

      Note

      Additionally, if you have parameters you can find them in:

      Payload
      msg.payload.parameters.fields.[parametername].stringValue

  3. Connect end of the DialogFlow node with begin of the Change node
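
The same two assignments can also be combined in a single Function node; a minimal sketch using the same property paths as the Change rules above:

  Function
  // Store the completion flag and the detected intent for the nodes that follow.
  msg.complete = msg.payload.diagnosticInfo.fields.end_conversation.boolValue;
  msg.intent = msg.payload.intent.displayName;
  return msg;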

With this information you can introduce a Switch node, which lets you route the flow based on e.g. intents, parameters or entities. In this example we will switch on intent.

Steps:

  1. Drag and drop a Switch node

  2. Open node

    1. Change Property to msg.intent

    2. Add an option per intent, for example:

      == ticket_status

    3. Add the option otherwise, in case DialogFlow didn't detect the intent.

  3. Connect end of the Change node to begin of the Switch node
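
If you have many intents, a Function node with multiple outputs can replace the Switch node. The sketch below assumes a Function node configured with three outputs; ticket_status comes from the example above, the second intent name is a placeholder for one of your own intents.

  Function
  // Route the message to one of three outputs based on the detected intent.
  switch (msg.intent) {
      case "ticket_status":      // output 1
          return [msg, null, null];
      case "new_insurance":      // output 2 - placeholder, replace with your own intent
          return [null, msg, null];
      default:                   // output 3 - otherwise (intent not detected or not handled)
          return [null, null, msg];
  }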

You can now continue with your flow, based on the intent of the customer. Below are some scenarios you may encounter when using DialogFlow.

(Optional) Play DialogFlow Response

In DialogFlow you can also write responses for when an Intent has been detected, or a prompt for when a parameter is missing. These responses can be used by Dialogue Studio; this is done using a Say node.

Steps:

  1. Drag and drop a Say node

  2. Open node

    1. Change to expression

    2. Use the following expression:

      Expression
      payload.fulfillmentMessages[0].text.text[0]

  3. Connect end of previous node to begin of the Say node
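
If you want to guard against an empty response, you can place a Function node in front of the Say node instead of using the expression directly. A minimal sketch; the fallback sentence is an assumption, so replace it with your own wording. The Say node expression then becomes simply payload.

  Function
  // Take the first fulfillment text from DialogFlow, or fall back to a generic prompt.
  const messages = msg.payload.fulfillmentMessages || [];
  const texts = (messages[0] && messages[0].text && messages[0].text.text) || [];
  msg.payload = texts[0] || "Sorry, I didn't catch that. Could you repeat your question?";
  return msg;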

(Optional) Handle end of DialogFlow conversation

In DialogFlow you can set whether a conversation is completed. This will tell you if you need to send another message to DialogFlow or if you can continue with your own flow. When handling the response we stored the completed state in msg.complete. Now we need to check the value; this can be done with a Switch node.

Steps:

  1. Drag and drop a Switch node

  2. Open node

    1. Change Property to msg.complete

    2. Add an option to check if completed

      is true

    3. Add the option otherwise, in case DialogFlow isn't finished yet.

  3. Connect end of the previous node to begin of the Switch node

If not completed you can continue with (Optional) Send another response

(Optional) Handle missing parameter

In DialogFlow you can configure that some parameters are required. For example, when you want to schedule a meeting, you need to know the date and time.

An intent with missing parameters will be marked as incomplete (msg.complete is false). To verify our parameter we need to check its value; this can be done with a Switch node.

Steps:

  1. Drag and drop a Switch node

  2. Open node

    1. Change Property to msg.payload.parameters.fields.[parametername].stringValue

    2. Add two options to check if the parameter is filled

      is empty

      is not empty

  3. Connect end of the previous node to begin of the Switch node
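
The same check can be done in a Function node with two outputs. A sketch assuming a hypothetical required parameter called date; replace it with your own parameter name:

  Function
  // Output 1: parameter missing (prompt the customer again); output 2: parameter filled.
  const fields = (msg.payload.parameters || {}).fields || {};
  const value = (fields.date || {}).stringValue || "";
  return value === "" ? [msg, null] : [null, msg];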

If missing, you can continue with (Optional) Send another response

(Optional) Handle unknown intent

When DialogFlow cannot detect the intent, it will select the "Default Fallback Intent". This will also contain a response, which can be sent to the customer. This will prompt the customer to send another response. By default this will be marked as an incomplete conversation (msg.complete is false).

During fallback, you can continue with (Optional) Send another response

(Optional) Send another response

When making a Voice-bot, you don't need to do anything. The Transcriptor node (added in Start Transcriptor) keeps listening and sends a new output to DialogFlow (unless you have stopped your transcriptor, in which case you need to start it again).