
How to build an FAQ chatbot on Dialogflow?

 

After Google I/O I was inspired and excited to try new APIs and learn new things from Google. This time I decided to try Dialogflow and build a Flutter chatbot app that answers some frequently asked questions about Dialogflow. In this post I'll focus more on Dialogflow than on Flutter.

First, go to the Dialogflow ES console, create a new agent, specify the agent's name, choose English as the language, and click "Create".



Once you've created the agent, go to Settings, enable beta features and APIs, and click Save.



Now let’s model our Dialogflow agent

When you create a new Dialogflow agent, two default intents are created automatically. The Default Welcome Intent is the first flow you reach when you start a conversation with the agent. The Default Fallback Intent is the flow triggered when the agent can't understand you or can't match any intent to what you just said.
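The two default intents act as the entry flow and the catch-all. As a rough sketch in plain Python (not the Dialogflow API — the real agent matches intents with ML, not exact lookups), routing an utterance works like this:

```python
# Minimal sketch of intent routing: match an utterance against each
# intent's training phrases, otherwise fall through to the fallback.
# Intent names and phrases here are illustrative placeholders.

INTENTS = {
    "Default Welcome Intent": {"hi", "hello", "hey"},
    "Ask About Pricing": {"how much does it cost", "pricing"},
}
FALLBACK = "Default Fallback Intent"

def match_intent(utterance: str) -> str:
    normalized = utterance.strip().lower()
    for intent, phrases in INTENTS.items():
        if normalized in phrases:
            return intent
    return FALLBACK  # nothing matched, so the fallback flow answers

print(match_intent("Hello"))      # Default Welcome Intent
print(match_intent("gibberish"))  # Default Fallback Intent
```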

  1. Click Intents > Default Welcome Intent
  2. Scroll down to Responses.
  3. Clear all Text Responses.
  4. In the default tab, create the following 2 responses, for example:
  • Hi, I am the Dialogflow FAQ bot, I can answer questions on Dialogflow. What would you like to know?
  • Howdy, I am the Dialogflow FAQ bot, do you have questions about Dialogflow? How can I help?

This should look like this:



Now, edit the Default Fallback Intent

  1. Click Intents > Default Fallback Intent
  2. Scroll down to Responses.
  3. Clear all Text Responses.
  4. In the default tab, create the following response:
  • Unfortunately, I don’t know the answer to this question. Have you checked our website, http://www.dialogflow.com?
  5. Click Save
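When an intent defines several text responses, Dialogflow picks one variant at random on each turn, which keeps the bot from sounding repetitive. A quick sketch of that behavior, using the two welcome responses defined above:

```python
import random

# The welcome intent's response variants from the steps above; one of
# these is returned at random each time the intent matches.
WELCOME_RESPONSES = [
    "Hi, I am the Dialogflow FAQ bot, I can answer questions on "
    "Dialogflow. What would you like to know?",
    "Howdy, I am the Dialogflow FAQ bot, do you have questions about "
    "Dialogflow? How can I help?",
]

def welcome_reply() -> str:
    return random.choice(WELCOME_RESPONSES)

print(welcome_reply())
```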

Lastly, connect the agent to an online knowledge base! I’m using the Google Cloud docs for this.

Knowledge connectors complement defined intents. They parse knowledge documents (for example, FAQs or articles from CSV files, online websites, or even PDF files!) to find automated responses. To configure them, you define one or more knowledge bases, which are collections of knowledge documents.
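To make the document side concrete: an FAQ knowledge document can be as simple as a two-column (question, answer) CSV. The file below is a hypothetical example, parsed locally just to show the structure a knowledge document carries:

```python
import csv
import io

# A made-up FAQ in a two-column (question, answer) CSV layout, the kind
# of content a knowledge connector can ingest. Parsed here only to
# illustrate the question/answer pairing.
faq_csv = """\
What is Dialogflow?,A natural-language platform for building chatbots.
Is there a free tier?,"Yes, the ES edition has a free tier."
"""

faq = {question: answer for question, answer in csv.reader(io.StringIO(faq_csv))}
print(faq["What is Dialogflow?"])
```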

  1. Select Knowledge (beta) in the menu.
  2. Click the blue Create Knowledge Base button on the right.
  3. Enter “Dialogflow FAQ” as the knowledge base name and hit Save.
  4. Click the “Create the first one” link.


And click Create

When you click “View detail” next to your document name, you can see that the Google APIs correctly extracted questions and answers from the web. And there you have it! If you want to use and test the agent outside the console, you have to enable the Dialogflow API in the Google Cloud console: search for “Dialogflow API” and enable it for the project that was created automatically when you created your agent in Dialogflow.
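Once the API is enabled, you can query the agent programmatically with the official google-cloud-dialogflow Python client. The sketch below uses placeholder project and session IDs and needs `pip install google-cloud-dialogflow` plus GCP credentials to actually run:

```python
def build_session_name(project_id: str, session_id: str) -> str:
    # Dialogflow ES session names follow this resource path; the
    # client's session_path() builds the same string.
    return f"projects/{project_id}/agent/sessions/{session_id}"

def detect_intent(project_id: str, session_id: str, text: str) -> str:
    # Imported lazily so the sketch loads even without the library
    # installed; requires google-cloud-dialogflow and credentials.
    from google.cloud import dialogflow

    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Example call (needs real credentials and your own project ID):
# print(detect_intent("my-gcp-project", "test-session", "What is Dialogflow?"))
```

The fulfillment text returned for an FAQ question will be the answer the knowledge connector extracted from your document.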


