Conversational AI is an evolving discipline and is core to Litha’s work. We’re not interested in developing digital assistants like Alexa and Siri that are programmed to pick up keywords and provide scripted responses.
Litha is all about natural person-centred engagements that can handle tangential shifts in conversation, call for context, and always seek to understand.
This can sound like a rather glib, media-friendly quote for publications such as Forbes, Inc. and Business Insider (not to forget Crunchbase et al.). The reality is that we worked to articulate Litha’s ambition as a mantra that we can quickly introduce and explain to anyone interested in Conversational AI.
With regards to human-computer interaction (HCI), our core focus relates to conversation engines including chatbots and enterprise relationship management.
In order to help humans and computers interact in a much more natural way, technology needs to communicate, think and learn.
Our overarching vision is to develop technology that can perceive at the same level as humans: to have sufficient perception of a user’s environment, situation, and context to communicate properly.
To achieve this vision, our talented team focuses on the application of Artificial Intelligence and its various subsets (Machine Learning, Deep Learning, Natural Language Processing).
As we build any conversational engine, we consider both the machine side and the human side.
In many respects, the machine side can be the easiest element when it comes to the building of the framework: the cloud set-up; the computer graphics; the programming language. But, underneath this, as we build the systems, we see how the human and machine disciplines are closely intertwined.
On the human side, we work to develop context – and this means a lot of communication theory as well as psycholinguistics and cognitive psychology.
A great deal of chatbot technology is based on scripted computer responses. It’s very hard for the technology in these cases to anticipate the dynamic tangents that human conversation can take. If you stray from the conversational thread of so many chat functions, you end up with, ‘Sorry, I don’t understand’.
Litha’s Chat Engines are driven by context and this creates a whole new user experience.
PUTTING IT INTO CONTEXT
We all know that some chat functions are great while others are less so. For any chatbot that can help you book a flight in many languages, there is probably another out there that’s not even able to complete the simplest of tasks.
According to IBM, chatbots can answer 80% of standard questions but, as conversational systems make better use of NLP, users expect the technology to do more.
In humanising the technology, users expect more human traits – empathy, mental dexterity, conversational flexibility and more.
The reality is that there are still rule-based chatbots out there that can give you a weather report for your local area but couldn’t handle, “so I probably need a sunhat today then?”
While we may still bump into chatbots that are quite fixed in what they can talk about, many now leverage statistical learning (particularly deep learning), where the chatbot infers statistical patterns by ‘learning’ from very large datasets. But this causes problems: unconscious bias is inherited from the datasets it learns from.
While technology is enabling chatbots to break down the user’s words and articulate a response, their understanding is based on computation rather than cognition. Almost every chatbot on the market today uses statistical probability and learning taken from datasets.
Learning from massive datasets can be relatively quick but may bypass deeper questioning. But consumers don’t want the quickest answer… they want the best answer.
Litha’s conversational AI is built around acquiring knowledge and understanding through perception, memory, language, learning, and higher reasoning (critical thinking and problem solving).
Moving away from decision trees and statistical probabilities, Litha’s Conversational AI journey is towards Contextual Awareness. This includes our work in a sense-making framework for our chat engines to understand the context of what someone says.
Context is a huge task as it involves many interconnected layers of accumulated knowledge that humans acquire and apply in conversation. In order to build context, chatbots need to ask qualifying questions (seek to understand) so that any answer demonstrates four traits:
- Be truthful
- Give information without overloading
- Be relevant
- Be clear
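These four traits echo Grice’s conversational maxims (quality, quantity, relation, and manner). As a minimal sketch of how a chat engine might apply them, here is a hypothetical gate that decides whether to answer or to ask a qualifying question first; the thresholds and the `Answer` fields are illustrative assumptions, not Litha’s actual engine.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float   # how sure the engine is that the answer is truthful
    relevance: float    # overlap with the user's inferred intent
    max_words: int = 40 # guard against information overload

def needs_qualifying_question(answer: Answer,
                              min_confidence: float = 0.8,
                              min_relevance: float = 0.7) -> bool:
    """Return True when the engine should seek to understand rather than reply.

    Each check mirrors one of the four traits: truthfulness (confidence),
    relevance, and giving information without overloading (length, which
    also serves as a rough proxy for clarity).
    """
    if answer.confidence < min_confidence:           # may not be truthful
        return True
    if answer.relevance < min_relevance:             # may not be relevant
        return True
    if len(answer.text.split()) > answer.max_words:  # overload risk
        return True
    return False
```

In practice the confidence and relevance scores would come from the engine’s own models; the point is that the traits become testable conditions rather than aspirations.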
DIRTY BOTS DONE CHEAP
AI is in vogue at the moment – some technologies are rightly recognised for achieving great things while others less so.
One of the traps that people fall into is to assume that AI is easy (and that chatbots are easy too). This belief is exacerbated by the commoditisation of the technology. It doesn’t help when vendors offer “Cheap Bots, Done Quick” (this, for example, is a site for making Twitter bots), alongside generic cost calculators that give an impression of profound simplicity in what needs to be done.
The reality is this: cheap chatbots won’t cut it; expensive chatbots aren’t guaranteed either.
Yes, you can pay $30,000 to $50,000 for a relatively sophisticated conversation engine, but it still needs a whole host of work on the human aspects of the HCI.
Though they are in fashion, good chatbots are hard to create given the complexities of natural language – and it happens to the best of us – just ask Facebook.
Generally, the success or failure of a chatbot is determined by the customer experience and, at the extreme end of things, we have seen:
- Yandex’s ‘Alice’ used pro-Stalin hate speech in one-to-one conversations; users went out of their way to corrupt its learning (using synonyms to bypass certain conversational rules) and tempted Alice into hate speech that could be screenshotted and shared on social media
- Microsoft’s experiences include XiaoBing (‘little Bing’ in China) telling users that “My China dream is to go to America” and its bot Zo calling the Quran violent. Microsoft’s bot ‘Tay’ also turned to hate speech within just a day, as the company hadn’t prepared for a coordinated attack by a subset of Twitter users
It’s fair to say that any Conversational AI journey is fraught with difficulties!
DEALING WITH ‘IT DEPENDS’
Chatbots aren’t really known for their willingness to embrace ambiguity.
Human relationships form from a myriad of conversation-styles from the conceptual to the factual. Many of our conversations rely on a key phrase – two words that drive our own Conversational AI journey – “it depends”.
Let’s assume that you are booking your flight and the chatbot asks if you would like to stay in a hotel or an apartment. Well that would depend on location, size, cost, accessibility, standard / quality… and that’s just one aspect of your booking process.
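One way to picture “it depends” in a chat engine is slot filling: keep asking qualifying questions until enough context exists to offer options. The sketch below is a hypothetical illustration of this pattern; the slot names and questions are assumptions for the hotel-versus-apartment example, not Litha’s actual implementation.

```python
# Illustrative required context for an accommodation choice.
# Real engines would derive these slots from the conversation itself.
REQUIRED_SLOTS = {
    "location": "Whereabouts would you like to stay?",
    "budget": "What is your nightly budget?",
    "party_size": "How many people are travelling?",
}

def next_question(filled_slots):
    """Return the next qualifying question, or None once context is complete.

    While any slot is missing, the engine seeks to understand; only when
    all slots are filled does it offer options (rather than a single
    outright recommendation, because "it depends").
    """
    for slot, question in REQUIRED_SLOTS.items():
        if slot not in filled_slots:
            return question
    return None
```

Each answered question fills a slot, and the loop naturally tolerates tangents: a slot filled out of order is simply skipped on the next pass.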
This is why, from the outset, Litha Group’s technology has been rooted in the desire to understand.
As our Conversational AI is founded on seeking to understand, our chat engines ask questions and follow-on questions before offering options (rarely outright recommendation because, as we know, “it depends”).
In our work to make the technology as human as possible, we expect any of our chat technologies to be able to handle conversations that stray from the point. This is predicated on appreciating that users don’t see themselves as straying from their point. All of this helps to build context.
Contextual Awareness is relatively limited with so many chatbots at the moment but we anticipate this as being one of the major evolutions of the technology.
We see our chatbots having wide-ranging, unstructured conversations that build rapport and, in turn, trust which leads to greater consumer relationships.
TALK WITH ME, BOT
Natural languages are old, irregular, and evolving.
Although we may attempt to catalogue the different rules of natural languages, they eventually get replaced by new rules (and may even demand a whole new set of rules).
Without evolution and change, we would still be using Chaucer’s English; in fact, English Language examiners in England and Scotland reported the first instances of ‘text speak’ being used by students in 2003/04.
Formal language works to many rules and these rules still tend to underpin an increasingly casual style. But this means that the natural language processing (NLP) elements need to recognise spelling errors, language variation (UK English / USA English for example), missing punctuation, lack of capitalisation, use of abbreviations, and so on.
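As a toy illustration of the normalisation step such an NLP pipeline performs, the sketch below lowercases a message, strips stray punctuation, expands common abbreviations, and maps US spellings to UK ones. The lookup tables are tiny assumptions for demonstration; a production system would use proper spell-checking and locale-aware lexicons.

```python
import re

# Tiny illustrative lookup tables (assumed, not exhaustive).
US_TO_UK = {"color": "colour", "customize": "customise"}
ABBREVIATIONS = {"pls": "please", "thx": "thanks", "u": "you"}

def normalise(utterance: str) -> str:
    """Prepare a raw user message for intent detection.

    Handles lack of capitalisation, missing/extra punctuation,
    abbreviations, and UK/US spelling variation in one pass.
    """
    tokens = re.findall(r"[a-z']+", utterance.lower())  # drop punctuation
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens]  # expand "pls" etc.
    tokens = [US_TO_UK.get(t, t) for t in tokens]       # unify spelling
    return " ".join(tokens)
```

Only after this kind of normalisation can the engine meaningfully compare the words used against previous conversations.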
Equally important is the ability to understand the context of the words used – and this calls for the ability to refer to the previous use of the word and its synonyms both in previous conversations with the user but also in the millions of other conversations being held.
Chatbots also allow for the unspoken word to communicate as well – and this includes emojis. In 2017, researchers at the Massachusetts Institute of Technology created DeepMoji to interpret emoji used alongside a post’s text. In doing so, the robot can understand emotional subtext and identify if sarcasm is being used.
“Because we can’t use intonation in our voice or body language to contextualise what we are saying, emoji are the way we do it online.”
(Dr Iyad Rahwan, associate professor at the MIT Media Lab)
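At its simplest, treating emoji as sentiment signals can be sketched as a lookup that averages the emotional weight of any emoji in a message. This is a deliberately naive stand-in, nothing like DeepMoji’s deep learning, and the scores below are assumed values for illustration only.

```python
# Assumed emoji-to-sentiment scores on a -1.0 (negative) to 1.0 (positive) scale.
EMOJI_SENTIMENT = {"😀": 1.0, "🙂": 0.5, "😐": 0.0, "🙁": -0.5, "😠": -1.0}

def emoji_sentiment(message: str) -> float:
    """Average the sentiment of known emoji in a message (0.0 if none found)."""
    scores = [EMOJI_SENTIMENT[ch] for ch in message if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0
```

A real system would combine this signal with the surrounding text, which is precisely where mismatches (a positive emoji on a negative sentence) reveal sarcasm.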
BUILDING THE BUSINESS CASE
The appeal of conversational AI isn’t going unnoticed.
The 2020 coronavirus pandemic is driving digital transformation, including automated processes and improved communications. Our own Conversational AI work is accelerating as we make our first product, an AI psychotherapist, free for everyone.
Whether it is in your car, your bank, hospitals, or call centres, use cases are growing when it comes to the use of Conversational AI.
Every organisation, of any size, is running, could run, or wants to run its own conversation engine.
Five key points to cover in your business case will be:
- The quality of the user experience as conversations run across multiple devices and multiple channels without any loss of information
- The demographic shift as millennials become the dominant user / buyer market
- Simple, repetitive transactions can be absorbed into conversations with your chatbot
- As the chatbot makes time for the customer, answers all questions, follows up when asked and can ensure that all conversations are correctly recorded, consumer trust will build – and consumers buy from organisations that they trust
- Listen & Learn: a high quality chatbot can pay attention to the sentiment towards the organisation and gauge the quality of the relationship. Through feedback-informed engagement, the chatbot can learn how to better engage and support every single individual consumer that it speaks with
265 billion customer requests are recorded per year, and businesses spend nearly $1.3 trillion to address them. Using chatbots could help save up to 30% of this cost. (source: IBM Watson, 2017)
The global conversational AI market is expected to grow from US$4.2 billion in 2019 to US$15.7 billion by 2024, a 30.2 percent compound annual growth rate (source: MarketsandMarkets, 2019)