Photo by Arseny Togulev on Unsplash

Create a very smart chatbot with BlenderBot

Samuel Ronce · Published in Chatbots Life · Jul 29, 2021

Released in April 2020 and created by Facebook, BlenderBot makes for a super-chatbot: it is able to converse on open-ended topics.

Version 2 was released recently, but this article sticks to version 1 for simplicity.

Our goal: create a server with Flask that will expose an API to communicate with the chatbot.

1. Reset the conversation (clear the context)
2. Set an initial context, which gives the chatbot a personality
3. Send a message and get the chatbot’s answer

Prerequisites

  1. Have Python 3 and Pip installed on your machine
  2. Have some knowledge of the Python language

Note that a good computer is necessary: the less powerful your machine, the slower the chatbot will respond.

Installation

1. Create a file named main.py

2. Go ahead and install the packages:

pip install flask transformers torch

Remember to create a virtual environment when initializing a Python project.

Get Started

We import the libraries:

1. transformers and its classes for the conversation
2. flask for the server, and jsonify because we will return JSON to the client

We get the pre-trained model (line 5) and the tokenizer (line 6)

A tokenizer is a tool whose algorithm is based either on a set of rules or on learning from a manually tagged corpus. It splits the text into words (or subword units).

The model will be downloaded the first time our application starts.

We take the model with 400M parameters because its results are rather good. You can take larger models, but they will consume more resources on your machine.

We create a variable “nlp” that will later let us generate text.
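The original gist is not reproduced here, so below is a sketch of what the top of main.py might look like (with the model and tokenizer on lines 5 and 6, as referenced above; the model name matches the 400M version mentioned):

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration, Conversation, pipeline
from flask import Flask, request, jsonify

app = Flask(__name__)
model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")  # line 5: pre-trained model
tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")             # line 6: tokenizer
nlp = pipeline("conversational", model=model, tokenizer=tokenizer)  # used later to generate text
conversation = Conversation()  # holds the dialogue context
```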


Endpoint to process a message with AI

Our endpoint will be /add_input, which we can call with the POST method.

* (Line 3) We retrieve the text from the body of the request
* (Line 4) Add the user’s input to the conversation
* (Line 5) Process the message to get a response from the chatbot
* (Line 7 to 11) We loop over the result to build a list of messages (and form our dictionary to return)

Just with the two code blocks above, you can already test:

1. Start the server:

By default, the server runs on port 5000
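For example (assuming main.py ends with the usual app.run() stanza, which the gist presumably contains):

```shell
# start the server directly:
python main.py
# or via the Flask CLI:
FLASK_APP=main flask run
```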

2. Test
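For instance, with curl (the message text here is just an example):

```shell
curl -X POST http://127.0.0.1:5000/add_input \
     -H "Content-Type: application/json" \
     -d '{"text": "Hello! How are you today?"}'
```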

On my end, here is the return:

Fun :)

Endpoint to reset the conversation

Why? Because if you continue the conversation, it will keep the context. Ideally, we’d like to start from scratch.

We add an endpoint for that, which I call /reset:

Endpoint to give a personality

What can be interesting is to give the chatbot a personality. How? By adding our own context.

* (Line 4) — We add a default text (we assume the user says hello at the beginning)
* (Line 5) — We give a default text, which represents the personality of the chatbot
* (Line 7) — We “archive” the previous messages and consider them as a context
And if we test with the commands:
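For example (the /set_personality route name and the personality text are my own illustrations):

```shell
# clear any existing context
curl -X POST http://127.0.0.1:5000/reset

# give the chatbot a personality
curl -X POST http://127.0.0.1:5000/set_personality \
     -H "Content-Type: application/json" \
     -d '{"text": "My name is Boti and I love video games."}'

# ask it something
curl -X POST http://127.0.0.1:5000/add_input \
     -H "Content-Type: application/json" \
     -d '{"text": "What is your name?"}'
```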

Here is the return

Bonus

Now that we have the API, we can create the frontend.

I’m not going to explain it here (because I’m focusing on BlenderBot), but if you want an explanation, tell me in the comments.

Giving a personality:

And this is the beginning of a conversation:

Our friend is not a very good developer :D

Done :)


I’m Samuel Ronce, a web developer specializing in artificial intelligence. I experiment with concepts in order to use them in real life.