Deep logic for support AI chatbots

Martin Lambert
Published in Chatbots Life · Mar 2, 2018 · 7 min read


Deep logic #2: How to move beyond trivial chatbot logic?

This is article #2 in a series about the challenges of adding deep logic (expertise) to product support chatbots. I work for eXvisory.ai, an Oxford, UK startup that provides visual dev tools to add deep logic to chatbots scalably and maintainably. The aim of this article is to provide insight into how leading AI chatbot frameworks work, so you can form educated opinions on big questions like — How intelligent can product support chatbots be? What can and can’t they do? How hard is it to build useful support chatbots?

What is AI?

Here’s my utilitarian definition of artificial intelligence — software able to independently do something non-obvious that otherwise requires human intelligence. So filling out a web form is not AI (the human is doing the work). Google search is AI. Answering trivial questions is … trivial AI.

What are AI chatbots?

Instead of chatting with a human to obtain some service, we chat with an artificially intelligent software entity called a chatbot. Within this article I mean text chat (messaging) but it can also be voice chat (like Amazon Alexa or Google Assistant). So by my definition chatbots are AI, if the conversational service they offer previously required reasonably intelligent humans.


What are good applications for AI chatbots?

The most enthusiastic adoption of AI chatbots seems to be in sales and customer service. Sales chatbots are used for customer engagement (the little chat messages that pounce from the corner of modern websites, like greeters in clothes stores) through to fully automating straightforward purchases (like selecting and paying for concert seats). But product support (within customer service) is the area I’m interested in, because it has the most disruptive potential. It costs money to hire humans to do sales, but ideally they generate revenue. Product support just costs money, which is why it is often done so badly by minimum-wage humans in out-sourced call centres. Can AI chatbots finally improve product support as well as reducing its cost?

Utterances, entities, intents and actions

Let’s look at the chatbot development frameworks of the big platform vendors: Microsoft’s Bot Service, IBM Watson, Oracle Intelligent Bots, Amazon Lex, SAP Recast.ai, etc. Broadly speaking, they all work the same way.

IBM Watson (https://console.bluemix.net/docs/services/conversation/intents.html#defining-intents)

A chat conversation is built from question and answer pairs. Each possible customer input is called an utterance (e.g. “Can I buy an iPhone?”). Chatbot developers manually program intents that capture the essential meaning of every foreseeable customer question (e.g. “Buy a phone?”) and identify entities within the question that focus it on specifics (e.g. iPhone). Because each question can be articulated in thousands of different ways (e.g. “Do you sell Samsung S9s?”) chatbot developers provide 10–20 sample utterances that map to the same intent and entities. Each intent is then associated with an action, which can be another question. If further questions are required, based on previous answers, the various intents are connected together in programming code that looks suspiciously like chained IF…THEN statements but with reassuring names like dialogs and flows.

IBM Watson (https://console.bluemix.net/docs/services/conversation/dialog-overview.html#dialog-overview)
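The concepts above — intents with sample utterances, entities and actions — can be sketched in a few lines of code. This is a toy illustration with invented names, not IBM Watson’s (or any vendor’s) actual API:

```python
# Toy illustration of chatbot framework concepts: intents, sample
# utterances, entities and actions. All names here are invented.

INTENTS = {
    "buy_phone": {
        "samples": [
            "Can I buy an iPhone?",
            "Do you sell Samsung S9s?",
            "I'd like to purchase a new phone",
        ],
        "entities": ["phone_model"],   # slots extracted from the utterance
        "action": "ask_which_model",   # next step in the dialog flow
    },
    "reset_password": {
        "samples": ["I've forgotten my password", "Reset my login"],
        "entities": [],
        "action": "send_reset_link",
    },
}

PHONE_MODELS = ["iphone", "samsung s9", "pixel"]  # a toy entity dictionary

def extract_entities(utterance: str) -> list:
    """Return any known phone models mentioned in the utterance."""
    text = utterance.lower()
    return [m for m in PHONE_MODELS if m in text]

print(extract_entities("Do you sell Samsung S9s?"))  # ['samsung s9']
```

Real frameworks store much the same structure (often as JSON) and do far more sophisticated entity extraction, but the developer-facing shape — intents, samples, entities, actions — is essentially this.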

If this sounds disappointingly laborious and old-school to you, and not very AI, I can see why. It is. But powerful AI is present in most chatbot frameworks. Natural language machine learning is trained against the created intents, linking thousands of ‘similar’ utterances to the 10–20 sample utterances provided for each intent. IBM Watson and the other leading chatbot frameworks effectively learn how to map almost any natural language utterance to one of each intent’s 10–20 sample utterances, which is why modern AI chatbots are so much better at intelligently (and reliably) extracting customer intents and entities.
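To make the utterance-to-intent mapping concrete, here is a minimal stand-in: a bag-of-words cosine similarity against each intent’s sample utterances. Real frameworks use trained language models rather than word counting, so treat this purely as a sketch of the idea:

```python
# Minimal sketch: classify an utterance by its similarity to each
# intent's sample utterances. A stand-in for real NLU models.
import math
from collections import Counter

SAMPLES = {
    "buy_phone": ["can i buy an iphone", "do you sell samsung s9s"],
    "reset_password": ["i forgot my password", "reset my login"],
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify(utterance: str) -> str:
    """Return the intent whose samples best match the utterance."""
    words = Counter(utterance.lower().split())
    return max(
        SAMPLES,
        key=lambda intent: max(
            cosine(words, Counter(s.split())) for s in SAMPLES[intent]
        ),
    )

print(classify("could i purchase an iphone please"))  # buy_phone
```

Even this crude version maps an unseen phrasing to the right intent; the machine-learning models in production frameworks do the same job across vastly more variation.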

How intelligent are AI chatbots?

This is a loaded question, for two reasons. Firstly, when computers ‘talk’ the obvious implication is that they are intelligent. AI chatbots dramatically raise customer expectations, compared to conventional user interfaces (no one ever accused a web form of being clever), so if they don’t deliver intelligence they disappoint and frustrate. Secondly, there are two AI components essential to truly intelligent chatbots, one of which is always overlooked (because it’s still hard). The AI component generating all the buzz is machine learning AI, which enables the chatbot to understand natural language. And this side of AI is genuinely rocket science and wonderful, not least because it works without explicit programming (don’t worry programmers — there’s still plenty of work defining intents, entities, actions and dialog flows). But it’s very limited without the other AI component, which is a scalable way to encode the business process logic the AI chatbot is trying to automate. It’s all very well conversing with an AI chatbot, but what if it’s an idiot?

IF…THEN spaghetti code

What’s wrong with IF…THEN logic?

You’ll probably need some programming experience to fully understand why linking intents together via IF…THEN statements (aka dialogs or flows) to capture more complex business logic isn’t going to get very far. But you’ve probably heard of spaghetti code. The problem with IF…THEN statements is that they have no intrinsic structure, nothing to guide a developer in adding ever deeper logic. Five IF…THEN statements chained together are OK, but twenty are incomprehensible. The problem is combinatorial complexity, and it’s the same reason why web-based product support sites only ever take you through a few half-hearted linked troubleshooting pages before accepting defeat and connecting you to a call centre.
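Here is a caricature of dialog-flow logic written as chained IF…THEN statements. With four questions it is just about readable; every additional question multiplies the number of paths to maintain, which is the combinatorial problem in miniature:

```python
# A caricature of dialog-flow logic as chained IF...THEN statements.
# Four yes/no questions already produce a thicket of branches; real
# troubleshooting needs dozens, and the structure gives no guidance
# on where new logic should go.

def troubleshoot(answers: dict) -> str:
    if answers.get("powers_on"):
        if answers.get("has_signal"):
            if answers.get("can_call"):
                return "No fault found"
            else:
                return "Check account status"
        else:
            if answers.get("airplane_mode"):
                return "Disable airplane mode"
            else:
                return "Escalate to call centre"  # the logic runs out here
    else:
        if answers.get("charged"):
            return "Hardware fault - return device"
        else:
            return "Charge the battery"

print(troubleshoot({"powers_on": True, "has_signal": False,
                    "airplane_mode": True}))  # Disable airplane mode
```

Notice that adding a fifth question means touching several existing branches, and there is no principled place to put it — exactly the maintenance trap described above.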

Why can’t machine learning do product support logic?

Machine learning AI learns-by-example from large volumes of training data without requiring manual programming. So why can’t it self-learn product support ‘expertise’ (logic) by training against large incident databases of historical support questions and answers? The answer to this lies in the machine learning version of Murphy’s Law — “garbage in, garbage out”. In existing support incident databases it’s possible that the simplest questions (“I’ve forgotten my password?”) are answered correctly most of the time. So it’s reasonable to assume that a machine learning AI can self-learn to automate simple incidents. It’s also eminently possible that there are more simple incidents than complex ones. So machine learning AI should be able to automate a sizeable chunk of more mundane product support issues.

But what about more complex support problems that can only be resolved today by deeper logical expertise? For example troubleshooting mobile phones, or central heating systems, or any complex technological product? The ugly economics of human-based support for complicated, high-volume, low-margin products means that the data quality in their support incident databases is abysmal (check out your cellular provider’s support forums). Machine learning is amazing but not magic. Without quality training data it cannot, and never will be able to, extract deep troubleshooting expertise.

Don’t worry, be happy

This limited ability to capture deep troubleshooting logic limits product support AI chatbots to the subset of comparatively trivial problems that can be resolved by one or two simple questions and answers, which is disappointing because that’s the problem subset that can already be resolved by out-sourced human support (or a Google search). But at least the AI chatbot is more convenient and less apologetic. And lots of problems are simple. And your provider is saving money. If you think this is pessimistic, look at chatbot framework taglines, like “Automate the simple stuff so you can focus on the things only humans can do”. Umm…hmm. I love the idea of out-sourced call centre employees being freed up for really complicated problems.

eXvisory.ai — deep logic network [mobile device troubleshooting]

Deep logic networks

My startup eXvisory.ai adds deep logic to support AI chatbots. It uses AI we imaginatively call deep logic networks, which are constrained palettes of logic rules organised into network ‘shapes’ that naturally fit specific applications — for example fault diagnostic eliminator rules that model the ‘process of elimination’ (elementary … my dear IBM Watson), organised into fault containment hierarchies. Visual editors (see screenshot above) guide programmers or senior support engineers through a manual but straightforward process that converts existing knowledge bases into product support AI chatbots capable of matching the troubleshooting expertise of the best human troubleshooters, but without the combinatorial complexity and scalability problems that previously plagued logic programming.
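eXvisory.ai’s deep logic networks are far richer than anything that fits here, so the following is only a toy sketch of the underlying ‘process of elimination’ idea: eliminator rules rule out suspects in a fault containment hierarchy, and whatever survives every applicable rule remains under suspicion. The hierarchy, rules and observations below are all invented for illustration:

```python
# Toy sketch of 'process of elimination' fault diagnosis. Eliminator
# rules knock out leaf faults in a containment hierarchy; the faults
# that survive every rule remain under suspicion. All names invented.

FAULT_HIERARCHY = {
    "device": ["hardware", "software"],
    "hardware": ["battery", "antenna"],
    "software": ["settings", "app"],
}

# Each rule pairs a fault with a test that ELIMINATES it if the test passes.
ELIMINATORS = [
    ("battery", lambda obs: obs["powers_on"]),           # powers on => not battery
    ("antenna", lambda obs: obs["has_signal"]),          # has signal => not antenna
    ("settings", lambda obs: not obs["airplane_mode"]),  # mode off => not settings
]

def diagnose(observations: dict) -> list:
    """Return the leaf faults that survive every eliminator rule."""
    leaves = [f for kids in FAULT_HIERARCHY.values() for f in kids
              if f not in FAULT_HIERARCHY]
    eliminated = {fault for fault, test in ELIMINATORS if test(observations)}
    return [f for f in leaves if f not in eliminated]

print(diagnose({"powers_on": True, "has_signal": True,
                "airplane_mode": True}))  # ['settings', 'app']
```

The point of the constrained structure is that each new rule slots into a fixed place in the hierarchy, instead of threading through an ever-deeper tangle of IF…THEN branches — which is what keeps the approach scalable and maintainable as the logic deepens.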

You can see the depth of troubleshooting logic eXvisory.ai can automate by trying out our mobile device troubleshooter pilot AI chatbot, or checking out one of our sample eXvisory chat sessions.

Build your own deep logic chatbot

To be notified of more deep logic articles follow me on my Medium profile page. Or say hello at martin@exvisory.ai if you’d like access to our dev documentation, online tutorials, a web demo — or your own free dev instance to build your own deep logic chatbot. If you enjoyed this article please recommend it to others by clapping madly (below) and sharing or linking to it. You can also leave feedback below.

Thanks!
