
Conversational AI Gets Clever

[Image: human hand and robot hand about to touch]

At Google’s annual developer conference, Google I/O, much of the keynote was dedicated to advances in conversational AI.

The segment was split between new capabilities for Google Assistant, which is a conversational AI tool that you can use today, and LaMDA, Google’s research program that’s creating the conversational AI of the future.

Google is making its Assistant ever more accessible with quick-access commands that don't require a wake word, and 'Look and Talk', where the Assistant takes cues from your gaze to know when you're talking to it. Or her. Or him, depending on how anthropomorphic we wanna be and which voice you choose!

The Google Assistant announcements are important but incremental improvements to the tech and the experience. LaMDA, by contrast, is much more future-focused.

LaMDA stands for Language Model for Dialogue Applications. A language model is a tool that helps computers understand language. And dialog applications are what we build to help users, typically customers, interact with business IT systems to get stuff done using conversation. So LaMDA is super relevant and super interesting to the future of conversational AI.
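To make "language model" concrete, here's a deliberately tiny sketch in Python. It's my toy illustration, not how LaMDA works inside: a 'bigram' model that learns next-word probabilities from raw counts. GPT-3, BERT, and LaMDA do the same basic job with neural networks and vastly more data.

```python
from collections import Counter, defaultdict

# Toy "language model": learn how likely each word is to follow another,
# purely from counts in a tiny corpus. Real language models do this at
# vastly greater scale, but the core job -- assigning probabilities to
# word sequences -- is the same.
corpus = "i want to check my balance . i want to pay my bill .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the words seen following `word`."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("my"))  # {'balance': 0.5, 'bill': 0.5}
```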

The key advance announced was a new way to 'prompt' language models.

In typical conversational AI platforms today, we take a large background language model (like GPT-3 or BERT) and 'prompt' it by providing example phrases for each intent a customer might have when contacting a business. Then, when a customer calls or opens a chat session and says what they want help with, the platform uses the language model to figure out which intent best describes the customer's query, based on the prompts we gave it.
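Here's a rough sketch of that intent-matching step. I've used TF-IDF and cosine similarity to keep it self-contained; a real platform would match against embeddings from a large neural language model, and the intents and example phrases here are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Example phrases per intent -- in a real platform you supply these when
# configuring the bot, and a large language model does the matching.
intents = {
    "check_balance": ["what's my balance", "how much money do I have"],
    "pay_bill": ["I want to pay my bill", "make a payment"],
    "reset_password": ["I forgot my password", "reset my login"],
}

phrases, labels = [], []
for intent, examples in intents.items():
    phrases.extend(examples)
    labels.extend([intent] * len(examples))

vectorizer = TfidfVectorizer().fit(phrases)
phrase_vectors = vectorizer.transform(phrases)

def classify(utterance):
    """Return the intent whose example phrase best matches the utterance."""
    sims = cosine_similarity(vectorizer.transform([utterance]), phrase_vectors)
    return labels[sims.argmax()]

print(classify("how do I make a payment"))  # pay_bill
```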

The big innovation announced at I/O was the idea of ‘chain of thought’ prompting.

Here, the ‘prompts’ are much more complex. We’re asking the language model to solve a problem. The simple way to do it would be this:

[Figure: Standard Prompting]

The language model got the wrong answer in this case. But if we prompt it with a more complex example answer, one that breaks the problem down into the steps needed to solve it, suddenly the results are much more accurate:

[Figure: Chain of Thought Prompting]

Quite impressive, eh?
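To give you a feel for the difference, here's a sketch of the two prompt styles side by side, using the arithmetic example I mention below. These are illustrative prompt strings, not Google's exact demo text, and the "typical completion" comments reflect the behavior described in this post:

```python
# Two prompt styles, sketched as plain strings.

standard_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: The answer is 11.

Q: The cafeteria had 23 apples. They used 20 for lunch and bought 6 more.
How many apples do they have?
A:"""
# Typical completion: "The answer is 27." Wrong -- the model imitated the
# surface form of the example without actually doing the arithmetic.

chain_of_thought_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: The cafeteria had 23 apples. They used 20 for lunch and bought 6 more.
How many apples do they have?
A:"""
# Typical completion: "They used 20 of 23 apples, leaving 3. They bought
# 6 more, so 3 + 6 = 9. The answer is 9." Showing its working leads the
# model to the right answer.
```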

It just goes to show how much richness exists in the language models. They’re going beyond language and have learned mathematical and other reasoning concepts we express through language. And if we prompt them in the right way, they can use that knowledge.

Let’s not get carried away. This technology is still in its very early days. But the direction seems pretty clear to me.

Large language models encapsulate much of the richness of language, but also many of the simple concepts that we express through language. If we can find a way to 'prompt' a model so that it applies that knowledge to our task, the results can be quite impressive.

But — and it’s a big but —there are still so many ways in which this technology can and will fail.  Prompt it in too simplistic a way, and 23-20+6 = 27. Prompt it right, and you’ll get the correct answer. But what’s the right way to prompt it? What examples should you use? In which areas can conversational AI be successful? Where do we need humans? Where would a different channel, like web, or mobile, be more effective?

Advances in conversational AI will extend the variety of use cases where IVR, IVA, messaging, and chatbot applications can be effective. They will reduce the cost of creating applications. But for the foreseeable future, and I'm talking ten years minimum here, it's our partnership with AI that will bring out the best in it.

About the Author

Kerry Robinson
VP of Conversational AI
Waterfield Tech

An Oxford physicist with a Master’s in Artificial Intelligence, Kerry is a technologist, scientist, and lover of data with over 20 years of experience in conversational AI. He combines business, customer experience, and technical expertise to deliver IVR, voice, and chatbot strategy and keep Waterfield Tech buzzing.

Follow Kerry on LinkedIn

To receive insights like this in your inbox, sign up to receive Kerry’s weekly newsletter.