4 DO’s and 3 DON’Ts for Chatbot Testing Strategies

Florian Treml · Published in Chatbots Life · Feb 9, 2021 · 4 min read

A quick summary of 7 important DO's and DON'Ts for designing a chatbot testing strategy. We keep seeing teams ignore these rather simple rules.

DO’s and DON’Ts

✅ DO: plan for iterations

In German we say that Rome was not built in a day, and the same applies to your chatbot training data. A robust chatbot is built through multiple iterations, training and testing cycles, and ongoing monitoring and performance tuning: CODE, TEST, DEPLOY, REPEAT

❌ DON'T: underestimate the need for constant performance measurement

Without measuring performance with real user conversations, you will never know if your chatbot is really working for your users.
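As a rough illustration (not Botium's own reporting), here is a minimal sketch of such a measurement. It assumes you periodically export real conversations as records holding the intent your NLU predicted and a human-reviewed correct intent; the field names are illustrative and depend on how your platform exports logs.

```python
# Minimal sketch: score NLU performance on real, human-labelled conversation logs.
# The record fields ("predicted_intent", "true_intent") are illustrative; adapt
# them to however your chatbot platform exports conversations.
from collections import Counter

def score_conversations(records):
    total = correct = fallbacks = 0
    confusions = Counter()
    for rec in records:
        total += 1
        if rec["predicted_intent"] == "fallback":
            fallbacks += 1
        if rec["predicted_intent"] == rec["true_intent"]:
            correct += 1
        else:
            confusions[(rec["true_intent"], rec["predicted_intent"])] += 1
    return {
        "accuracy": correct / total if total else 0.0,
        "fallback_rate": fallbacks / total if total else 0.0,
        "top_confusions": confusions.most_common(5),
    }
```

Tracking accuracy and fallback rate week over week tells you whether your changes actually improve the experience for real users.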

✅ DO: apply the 80/20 rule for testing utterances

Most teams are tempted to use 100% of the available data for training. Do not do this: if the same utterances are used for both training and testing, the test results tell you nothing about how the chatbot handles input it has never seen. The rule of thumb is to use 80% of the data for training and 20% for testing.

If, and only if, the amount of available data is very small, you may try k-fold cross-validation to get some insight into the quality of your data.
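A minimal sketch of both approaches using scikit-learn, assuming the utterances and their intent labels are available as two parallel Python lists (how you export them depends on your NLU platform):

```python
# Minimal sketch: 80/20 hold-out split, plus stratified k-fold for small datasets.
from sklearn.model_selection import train_test_split, StratifiedKFold

def split_80_20(utterances, intents):
    # 80% goes into NLU training, 20% is held back purely for testing.
    # stratify keeps the intent distribution similar in both sets.
    return train_test_split(
        utterances, intents, test_size=0.2, stratify=intents, random_state=42
    )

def kfold_splits(utterances, intents, k=5):
    # For very small datasets: every utterance ends up in a test fold exactly once.
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(utterances, intents):
        train = [(utterances[i], intents[i]) for i in train_idx]
        test = [(utterances[i], intents[i]) for i in test_idx]
        yield train, test
```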

❌ DON'T: rely on smoke tests or happy path tests

Again the 80/20 rule: 20% of the work covers the comfort zone, while 80% goes into testing and bugfixing. Likewise, 20% of your users will follow the happy path and 80% will break out of it. Prepare for this.

✅ DO: spend a reasonable amount of time on exploratory testing

Automated regression testing is great at finding defects you already know can happen, but it won't help you find the defects you don't know about. Spend some time on exploratory (manual) testing: try to push your chatbot to its limits and beyond.

❌ DON’T: ignore the need to re-test after training

You can never know what effect adding training data at one end of your fine-tuned NLU model will have at the other end until you try it out. Do a full regression test of your NLU model every single time you make changes.
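One way to make this routine is to wire the held-out test utterances into an automated regression test that runs after every training change. The sketch below is only illustrative: predict_intent() is a hypothetical wrapper around your NLU engine's API, and held_out.csv stands for your 20% test set with columns for the utterance and its expected intent.

```python
# Illustrative regression test: run it (e.g. with pytest) after every NLU change.
import csv

def predict_intent(text: str) -> str:
    # Hypothetical: replace with a call to your NLU engine / chatbot platform.
    raise NotImplementedError("call your NLU engine here")

def test_nlu_regression():
    failures = []
    with open("held_out.csv", newline="") as f:
        for row in csv.DictReader(f):
            predicted = predict_intent(row["utterance"])
            if predicted != row["expected_intent"]:
                failures.append((row["utterance"], row["expected_intent"], predicted))
    assert not failures, f"{len(failures)} utterances regressed, e.g. {failures[:5]}"
```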

✅ DO: test processing of out-of-order messages

One of the most human-like behaviours is to scroll up the conversation history in the chat window and resume from a previous step. Most chatbots out there will fail this challenge if they are not prepared for it.
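A hedged sketch of what such a test could look like: BotClient is a hypothetical wrapper around your chatbot's messaging endpoint, and the flight-booking flow is just an example. The point is to replay an answer to an earlier step after the conversation has already moved on.

```python
# Illustrative out-of-order test: the user jumps back to an earlier step mid-flow.
class BotClient:
    # Hypothetical client; connect it to your chatbot's messaging API.
    def send(self, text: str) -> str:
        raise NotImplementedError("send the message and return the bot's reply")

def test_resume_from_earlier_step():
    bot = BotClient()
    bot.send("I want to book a flight")
    bot.send("Vienna")                      # answers the destination question
    bot.send("next Friday")                 # answers the date question
    # The user scrolls up and re-answers the destination question out of order.
    reply = bot.send("Berlin")
    # The bot should update the destination (or ask for confirmation),
    # not crash or restart the whole flow.
    assert "berlin" in reply.lower()
```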

Action Plan

Here are suggestions to address the DO’s and DON’Ts.

Establish a Continuous Testing Mindset

Testing is a crucial part of the development process. There is no such thing as a single testing phase when bringing a chatbot to life. Testing has to be part of the team’s daily business, just like coding, design and monitoring.

Holistic Testing

For chatbots, as for software products in general, there is more to testing than the unit tests written by programmers.

Botium Test Project Types

Get the Right Tools In Place

Without the right tools you will be lost. Botium Box prepares you for the challenge of putting these tests in place and integrating them into your chatbot development lifecycle.

Get your free Botium Box Mini instance here

See this article in Spanish here! 🇪🇸
