Personal AI Will Swamp Your Contact Center

June 5, 2024/ Under Conversational AI, Generative AI, IVR,

Personal AI is getting ready to swamp your contact center with more calls and chats than you could possibly handle. Here’s how, and why, and what you can do about it.

Read More

Is Agent Assist Dead?

May 24, 2024/ Under Conversational AI, Generative AI, IVR,

Agent Assist is dead. Long live knowledge-worker co-pilots.

Perhaps a bold claim but let me explain.

Read More

Comparing the Cost of Generative AI vs. IVR vs. Human Agents

When it comes to handling customer contact, you have choices: Humans, Legacy IVR / NLU, and Generative AI.

Which is right for your business, your customers, and your use case?

The most obvious and objective way to look at this is through the lens of the cost per contact. If we make a bunch of assumptions, do the math, and plot the equivalent cost per contact vs contact volume, we get this little beauty:

[Chart: equivalent cost per contact vs. annual contact volume – human agents, generative AI, and IVR/NLU]

Agents cost about the same whatever the volume. That’s the red line in the chart above. I’ve kept things simple, with the average handling time (AHT) at 5 minutes and human agents costing $1 per minute – a little above North American averages, but it keeps the numbers simple. I’m assuming that covers hiring, training, management, and infrastructure, with a realistic utilization level factored in.

For IVR and NLU, it’s more like 6c per minute. That’s what Google charges for their Dialogflow platform.

For generative AI powered voice bots, it seems the market consensus is around 12c per minute.

Dialogflow wins, right? 6c per minute vs a dollar is a no-brainer, surely?

Not so fast. The deployment and ongoing monitoring/optimization costs vary significantly between IVR/NLU and generative AI, if you do it right!

Now there are plenty of people that’ll offer to deploy your IVR (or Google Dialogflow app) for next to nothing. But that’s part of the reason there are so many bad experiences out there. Done right, a decent Dialogflow deployment is going to cost you around $250k. It’ll go up from there (but not down much) depending on the use case and complexity. Let’s peg the ongoing services at 20%… a de-facto standard for IT deployments that most procurement departments won’t blink at. So that’s $50k a year, once in production.

For generative AI, the dynamics are different. It’s more expensive per minute. And because it ‘generates’ responses, rather than designers and developers hand-coding everything, it’s faster and cheaper to deploy. The market is still figuring out what it really costs, but we’re looking at around $150k. Once deployed, you need to watch those voice bots very carefully, and you’ll want to constantly optimize the prompts, data, and knowledge sources to keep them out of trouble and delivering value. There’s literally no one I’m aware of with a voice bot in production for a year, so we’re guessing a bit here, but I’m going to peg ongoing costs at 100% of deployment costs. So that’s $150k per year, once in production.

For humans, that $1 per minute covers hiring, training, and management, the equivalent of ‘deployment’ and ‘optimization’, so nothing to add there.

We also need to consider how good IVR/NLU and generative AI powered voice bots are. In spite of what your deceptive containment reports say, your IVR is not going to handle every contact you send to it. Not even close. I’ve assumed 75%, which is still quite toppy.

For generative AI done right, we can expect much better success. Generative AI is much more robust – more like a junior agent than an IVR. I’ve gone for a 90% success rate.

The chart above models how this plays out over a 2-year period. The vertical axis is the equivalent cost per contact, after we’ve allowed for the setup, optimization, and usage costs. The horizontal axis considers the volume of contact handled each year.

For low volumes, humans win. But as volume goes up, the equivalent cost of generative AI and IVR/NLU goes down.

Beyond 40k contacts per year, a well-deployed IVR/NLU use case starts to get cheaper than humans.

The same happens for generative AI a bit later, at around 55k contacts.

But remember, this is – roughly – per use case. If you’ve got a million calls per year, you’ll need 4% of them handled by a single IVR use case, or 5.5% handled by a single generative AI use case, before it makes sense based solely on the cost-per-contact metric.

Further out to the right, the cost per contact of IVR and generative AI actually becomes much closer: about 90c per contact for generative AI, 70c for IVR/NLU. It’s not ‘twice the price’, as a casual comparison of Dialogflow and generative AI voice bot pricing might suggest.
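To make the arithmetic concrete, here’s a minimal sketch of this model in Python. The dollar figures are the assumptions stated above; the amortization scheme (deployment cost spread across the 2-year window, plus the annual ongoing cost) is my reading of the model, and this simplified version leaves out the success-rate fallback to human agents, which is presumably why its break-even points land a little below the 40k and 55k figures from the chart.

```python
# Sketch of the cost-per-contact model. Figures are the article's
# assumptions; the amortization scheme is an interpretation, and the
# success-rate fallback to humans is deliberately left out for brevity.

AHT_MIN = 5          # average handling time, minutes
HUMAN_RATE = 1.00    # $ per minute, fully loaded
YEARS = 2            # modeling window

PLATFORMS = {
    #          $/min  deploy   ongoing/yr
    "ivr":    (0.06, 250_000,  50_000),
    "genai":  (0.12, 150_000, 150_000),
}

def human_cost_per_contact() -> float:
    """Flat at any volume: 5 minutes at $1/minute."""
    return AHT_MIN * HUMAN_RATE

def cost_per_contact(platform: str, contacts_per_year: int) -> float:
    """Equivalent cost per contact: usage plus amortized fixed costs."""
    rate, deploy, ongoing = PLATFORMS[platform]
    fixed_per_year = deploy / YEARS + ongoing
    return AHT_MIN * rate + fixed_per_year / contacts_per_year

def break_even_volume(platform: str) -> float:
    """Annual contact volume where the bot matches the human cost."""
    rate, deploy, ongoing = PLATFORMS[platform]
    fixed_per_year = deploy / YEARS + ongoing
    return fixed_per_year / (human_cost_per_contact() - AHT_MIN * rate)

for p in PLATFORMS:
    print(p, round(break_even_volume(p)))  # roughly 37k and 51k
```

Adding the fallback (failed contacts re-handled by humans) raises both curves and shifts the break-even points rightward; the chart’s exact formula isn’t given, so treat these numbers as directional.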

What does this mean? Let’s boil it down to a few simple principles:

  1. IVR is more expensive up front, but cheaper in the long run than generative AI powered voice bots, as long as your IVR is done really well. Usually it’s not.
  2. Generative AI is cheaper up-front, and more expensive in the long run, but the difference isn’t as big as you’d think, because generative AI is cheaper to set up, and performs better.
  3. With generative AI, you need around 55k contacts per year to break even vs human agents.

This analysis is most relevant when you’re considering whether to build a new use-case with IVR/NLU, or generative AI. But if you’ve already got a bunch of decent IVRs, you’ll find it hard to justify moving to generative AI, because you’ve already incurred the big up-front cost. So don’t. Sweat your existing IVR apps and build new generative AI apps that focus on use-cases where IVRs struggle.

Kerry

PS: You are building with genAI right now, aren’t you? If not, what’s stopping you? Check out our latest blog on gen-AI blockers, or sign up for a complimentary Strategy Workshop to help you get started.

PPS: If you want a more regular dose of insights, subscribe to our weekly email that’ll make you think differently about your IVR, voice, and chatbots.

About the Author

Kerry Robinson
VP of AI Strategy
Waterfield Tech

An Oxford physicist with a Master’s in Artificial Intelligence, Kerry is a technologist, scientist, and lover of data with over 20 years of experience in conversational AI. He combines business, customer experience, and technical expertise to deliver IVR, voice, and chatbot strategy and keep Waterfield Tech buzzing.

Methods, Monoliths, and Modules – How Applied AI Solutions Will Evolve (and how things tend to happen in the wake of major disruptive innovations)…

April 11, 2024/ Under Cloud Migration,

I’ve spent the better part of the past year thinking about the application and commercialization of Artificial Intelligence (AI), especially within the customer experience and contact center domains. The pace of innovation and the rate at which capabilities have expanded has been absolutely mind-blowing – and it’s been fun and challenging to keep up. But, as radical and as rapid as the change has been, my take is that it will follow a very familiar pattern in how it actually gets incorporated into real business use cases at scale.

I’m heavily influenced on this by Clayton Christensen’s theories (notably his model of disruptive innovation and his modularity theory) and by Ben Thompson’s Stratechery (which in turn has a pretty strong Christensen bias, though Thompson argues convincingly that it doesn’t apply to consumer products). We are navigating the first of three evolutionary stages in AI solutions, which can be thought of as methods, monoliths, and modules. If you understand where we are in the process, you can make better choices about how to allocate resources to leverage the power of generative AI in each phase.

 

Methods – You are Here!

The advent of any groundbreaking technology typically ushers in a bespoke phase characterized by custom-built, custom-implemented solutions. In AI, this phase is dominated by what can collectively be called “methods”: solutions born from domain expertise and deliberate design patterns. This might be best illustrated by what it isn’t – today’s generative AI is not a product. That sounds funny coming from a guy whose title is Chief Product Officer and who is charged with bringing AI-enabled and AI-enabling products to market. But that is where we are today: despite some compelling marketing campaigns to the contrary, you can’t buy a generative AI product, drop it in, and expect it to work (today, anyway). Keep in mind, many firms will try to sell you exactly that. It just isn’t going to work without real effort, expertise, and iteration. But you can start building and be successful. You just need to focus on the methods: the patterns, and the people who understand them.

A powerful example of this custom-build deployment is Klarna’s AI rollout. Collaborating directly with OpenAI, Klarna embarked on an 18-month journey of designing, building, and refining their AI capabilities, a process that culminated in a successful implementation without the procurement of an off-the-shelf product. This approach, which prioritizes tailored solutions over generalized offerings, is particularly appealing to firms aiming to leverage AI as a disruptive innovation, often resulting in cost-effective, down-market strategies. It took time and effort, but they are already realizing the benefit. If your firm can make that kind of internal pivot, that’s the best way to succeed in this first phase.

If your firm doesn’t have the internal talent or organizational capacity to conceive of, create, and deploy (and then iterate, and iterate, and iterate), then you’ll need a trusted partner to bring the patterns, practices, and people to you. Convenient side note: that’s what we do at Waterfield Tech. But if you want to capitalize on these capabilities today, be prepared to do the work. And if you follow the space, I think you’ll see that the real success stories at this point in the game are bespoke and driven by firm-specific needs.

 

Monoliths – Coming soon!

The subsequent phase in the evolution of AI solutions is marked by the emergence of integrated platforms that offer tightly coupled, comprehensive solutions. This phase, which I refer to as “monoliths,” is currently (rightly) being pursued by Contact Center as a Service (CCaaS) platforms, enterprise software firms, and Customer Relationship Management (CRM) systems. These monolithic solutions, designed for ease of deployment within existing platforms, sacrifice flexibility to cater to a broad audience. However, they can effectively serve niche or vertical markets, provided these markets are large enough to justify the development of a dedicated solution.

We already see this happening in other domains. A great example is the enterprise office suite, e.g., Microsoft’s suite-wide integration of Copilot throughout Office, Teams, and Dynamics. It has both the pros (it’s already built in and easy to use – why open another window for ChatGPT when summaries and knowledge base info are in the window you’re working in?) and the cons (it solves some problems well and others not at all).

It’s worth noting that for most of the firms pursuing this course, it will be an effort to keep the change unleashed by generative AI from running wild as a truly disruptive innovation, and to corral it into a sustaining innovation. You can see this most clearly in the way these players attempt to preserve the business model and extract incremental value, as opposed to flipping the business model. Whether or not it works will depend on how well they execute on building the monolith, and on whether the incremental value is enough to prevent customers from switching.

 

Modules – Endgame

The final and most mature phase of AI solution evolution is characterized by modularity. This phase emerges when the industry has developed sufficient standards and API normalization to allow diverse solution components to be assembled through simple configuration rather than complex customization. The modular phase represents the likely endgame for AI solution evolution, where flexibility, efficiency, and customization converge to meet the diverse needs of users and industries. It seems like we are pretty far from this stage, but once the integration points mature, we should expect competition to drive down prices with good-enough results. And at the rate things are changing, perhaps we aren’t that far away after all.

What I suspect will be different about the modular phase in this disruptive cycle is that the modules will appear primarily as options within the big cloud computing platforms, i.e. AWS, Azure, and GCP will continue to be platforms but also serve as aggregators for various options for each piece of the stack for GAI solutions (but that is a whole different article).

The trajectory of AI application and commercialization, from bespoke methods through integrated monoliths to modular solutions, reflects a broader pattern of technological evolution observed across various industries. As we navigate these stages, the key to success lies in understanding the inherent trade-offs and opportunities each phase presents. By anticipating the shift towards modularity, firms can strategically position themselves to harness the full potential of AI, fostering innovation that aligns with evolving market demands and technological capabilities.

About the Author

Michael Fisher
Chief Product Officer
Waterfield Tech

Michael Fisher (a.k.a Fish) has built and led product, technology, and operations teams in organizations ranging from early-stage startups to publicly traded companies. Over the past twenty years, he has guided companies from inception to sale and through mergers, acquisitions, and complex integrations.

MythBusters: Debunking Four Common Myths Surrounding CX Cloud Adoption

November 30, 2023/ Under Cloud Migration,

In a landscape where digital transformation is paramount, cloud technology stands as a cornerstone for enhancing customer experience.

Yet, misconceptions persist, derailing well-intentioned strategies and hindering excellent results.

Here are four of the most common and destructive myths regarding CX cloud adoption.

 

Myth 1: Cloud Migration is Effortless

Contrary to the popular narrative, cloud migration is not a straightforward task. It requires careful planning and expert guidance. Migrating to the cloud involves more than a simple transfer of existing systems; it’s an opportunity to reevaluate and enhance your processes. Engaging with a Value Added Reseller (VAR) can yield significant benefits, offering expertise and broader service scope without the additional overhead of direct manufacturer dealings.

 

Myth 2: One Cloud Solution Fits All

The idea that one cloud solution can meet all business requirements is a fallacy. Cloud strategies should be tailored to specific business goals, mixing various technologies for optimal flexibility and effectiveness. Customized cloud solutions ensure that each tool is utilized for its strengths, aligning with the unique needs of the business.

 

Myth 3: Cloud is Always the Optimal Choice

Cloud technology, while advancing rapidly, is not a universal solution for every situation. Decisions to migrate should be based on specific business outcomes. Sometimes, a hybrid approach or a phased transition to the cloud is more suitable, especially when considering existing infrastructure and strategic business objectives.

 

Myth 4: Migrating ‘Like for Like’ is the Best Approach

Moving to the cloud should not be about replicating the existing infrastructure in a new environment. It’s a chance to reassess and optimize your business processes. A cloud migration strategy should start with a clear understanding of your business objectives, guiding a more strategic and effective migration process.

 

Dispelling these myths is more than an academic exercise; it’s a strategic imperative for any business eyeing digital transformation. Cloud adoption in customer experience is about making informed choices, leveraging the right partnerships, and optimizing technology to align with your unique business needs.

By understanding these misconceptions, businesses can approach cloud migration with a clarity that not only avoids common pitfalls but also maximizes the potential of their digital investments. As the cloud landscape continues to evolve, staying informed and adaptable will be key to harnessing its full power for an enhanced customer experience.

About the Author

Owen Robinson
VP of CX Modernization
Waterfield Tech

Owen is responsible for leading Waterfield’s world-class CX organization and go-to-market strategy. He has spent over 20 years crafting differentiated customer experiences at scale for some of the most innovative companies on the planet and has deep experience in leading brilliant people who evolve Cloud CX and WEM platforms.

Generative AI: What’s Stopping You?

It’s been a year since OpenAI released ChatGPT to the world and sent everyone into a frenzy. What is it? How does it work? How could it work for my business?

While the early adopters and industry disrupters raced to deploy early generative AI applications, many businesses are still watching and waiting on the sidelines for this next generation of AI to prove itself. But you can’t wait. You must get started today. So what’s holding you back? There are seven main objections I hear from clients, and I’m going to walk you through each of them and explain why I believe they’re wrong.

#1 It’s too early.

It’s too early. It’s too risky. It’s too dangerous to dive into this generative AI revolution. No, it’s not. These are just self-doubts fueled by fear, and maybe the worry that you don’t fully understand it all. Yes, it’s early, but it’s not too early. Your competitors are already embracing this technology.

Your team, your kids, even your future boss are likely leveraging it. Think of it like a fast-flowing river. Crossing to the other side may seem challenging, but the longer you wait, the tougher it becomes, and the faster the river flows. The same principle applies to AI. Technology adoption is accelerating, and the window to start won’t get any wider.

Consider this: ChatGPT launched in November 2022. Since then, we’ve witnessed the emergence of Anthropic’s Claude, Google’s Bard, and Meta’s Llama, all ChatGPT competitors. Not to be outdone, OpenAI themselves released GPT-4, an upgraded ChatGPT, and recently GPT-4 Turbo, which is even better, faster, and cheaper! Don’t procrastinate. Begin with small steps, and let data guide your strategy. If you do nothing else, just visit chat.openai.com and experience ChatGPT for yourself.

 

#2 It’s just a hype cycle.

It’s just another hype cycle, this generative AI stuff. Maybe it’s best to wait for the buzz to settle and then consider diving in. Well, not really. The hype is justified, this technology is a game-changer.

Initially, I was skeptical. Having been in the AI field since I was 10 and working in the industry for 25 years, I’ve seen many false dawns. But when I started using ChatGPT daily and building real solutions, my perspective changed. Trust me, this tech is a game changer.

Consider the internet’s trajectory. Yes, there was hype, a bubble, a crash, but from those emerged giants like Amazon, Google, Netflix, followed by Facebook, Uber, Airbnb. Now, could you even imagine life without Wi-Fi and instant access to the world’s information?

Your customers’ expectations are shaped by this hype too. They’ve experienced ChatGPT, Siri, Alexa, and now they wonder why your systems seem so clunky in comparison. Don’t you wonder that too?

 

#3 I can’t manage another migration.

Tired of migrations? I get it. But here’s the thing: getting into generative AI, especially with ChatGPT, isn’t another endless migration. Unlike the constant upheavals of the past decade with cloud, SaaS, workforce engagement management, IVR changes, and CCaaS transitions, generative AI is a core technology that seamlessly integrates.

No need to cringe at the thought. It plugs into your existing website with just a single line of code. It connects to your current chat infrastructure and fits smoothly into your agents’ desktops, sitting neatly on top of your existing knowledge base.

It can effortlessly position itself in front of your existing IVR or contact center. Think of it as an add-on, not a migration. So, say goodbye to migration fatigue. Embrace the simplicity of integrating and leveraging this transformative AI technology.

 

#4 There’s no proof that it will work.

Skeptical if ChatGPT and generative AI can deliver? It’s a valid concern, but let me clear the air – it absolutely works. However, it’s crucial to understand what generative AI, like ChatGPT, truly represents. It’s not just another tool; it’s a form of intelligence that closely mirrors human thinking.

With robust reasoning abilities, it can translate diverse languages and styles, even into computer code, showcasing its versatility. The most impressive part? It engages in convincing conversations, often passing the Turing test. Users can’t easily distinguish between interacting with a generative AI bot and a real person.

Yet, it lacks knowledge about your business, processes, customers, and objectives. Harness this powerful tool wisely.

 

#5 We need a use case.

Struggling to find the right use case for ChatGPT and generative AI? Let me guide you. Think of it in three key categories: Automation, Assistance, and Insight.

Automation involves routing customers efficiently, answering queries, and providing transactional self-service. Whether it’s tracking orders, filing claims, or checking claim status, generative AI streamlines these processes.

Moving on to Assistance, it’s about enhancing the performance of human customer service advisors. Provide suggested responses, speed up interactions, and enable chat with knowledge bases for quicker information retrieval.

Insight is the third category: mining calls and chats for customer interaction insights. Automatic summarization supports rapid analysis by quality assurance personnel, duty managers, and agents, while sentiment analysis identifies positive and challenging interactions, creating opportunities for coaching and improvement.

Remember, there are ample opportunities to incorporate generative AI in the contact center, business, and your daily workflow. Dive in, get started, and explore the potential of generative AI in your life and business.

 

#6 It’s too risky.

Concerned about security and compliance in the contact center with generative AI? Your concerns are valid. With evolving standards like GDPR, HIPAA, PCI DSS, and traditional data protection norms, precision is paramount.

Essential data security considerations remain constant—data location, access controls, retention policies, and encryption at rest and in transit. These principles are transferable from conventional IT systems to generative AI. Yet, the unscripted nature of generative AI demands additional scrutiny. Strategic use of scripted responses, especially for critical legal and compliance interactions, becomes crucial.

Generative AI introduces a unique challenge—hallucination, where it might generate fictional responses. Actively seek user feedback and implement automated filters for response checking to detect and rectify any deviations. It’s a realm that merges the familiar with the novel—don’t let security and compliance reservations hinder your plunge into the revolutionary landscape of generative AI.
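As a deliberately toy illustration of that last point, here’s what a minimal automated response filter might look like in Python. The blocked patterns and the fallback message are invented for this sketch; a real deployment would encode its own legal and compliance rules, and would likely add a second model as a grader rather than rely on regular expressions alone.

```python
import re

# Hypothetical guardrail: check a drafted bot reply against simple rules
# before it reaches the customer, and fall back to a human hand-off line.

BLOCKED_PATTERNS = [
    r"\bguarantee(d)?\b",         # no promises the business can't keep
    r"\b(diagnos|prescri)\w*\b",  # no medical advice
    r"\b\d{3}-\d{2}-\d{4}\b",     # SSN-shaped string leaking into a reply
]

FALLBACK = "Let me connect you with an agent who can help with that."

def filter_response(draft: str) -> str:
    """Return the drafted reply, or the safe fallback if it trips a rule."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, draft, flags=re.IGNORECASE):
            return FALLBACK
    return draft

print(filter_response("Your order shipped yesterday."))
print(filter_response("I guarantee you a full refund."))
```

The same shape supports the feedback loop described above: log every filtered reply, review it, and feed the reviews back into your prompts and knowledge sources.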

 

#7 I don’t understand generative AI.

Curious about generative AI but feel like you lack the knowledge? There’s a simple solution – immerse yourself in it. Visit chat.openai.com, sign up for ChatGPT, and use it daily. It’s free, and it’s a game-changer for working faster and smarter. Encourage your team to join in; it can revolutionize their workflow too.

When interacting with ChatGPT, don’t just ask questions—engage in a dialogue. Interact purposefully. To get the most out of it, be explicit about your expectations. Don’t let ChatGPT guess; guide it on how you want responses formatted and styled.
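Here’s a hypothetical example of what being explicit can look like, written as a Python template. The role, format, and constraints are invented for illustration; the point is simply to state them up front rather than let the model guess.

```python
# Hypothetical prompt template: spell out role, task, output format,
# and constraints instead of asking a bare question.

PROMPT = """You are a contact center QA analyst.

Task: summarize the call transcript below for a supervisor.

Format:
- One-sentence summary
- Customer sentiment: positive / neutral / negative
- Up to 3 follow-up actions, as bullet points

Constraints: base everything on the transcript; do not invent details.

Transcript:
{transcript}
"""

print(PROMPT.format(transcript="Caller asked to reset their password..."))
```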

 

If you don’t already have an AI project in production, don’t get left behind.

Want more insights? Sign up to receive our weekly applied AI newsletter, Teaching Robots to Talk.

About the Author

Kerry Robinson
VP of AI Strategy
Waterfield Tech


5 Reasons You Should Rethink How to Shop for WEM

August 17, 2023/ Under Carlton Perkins, Contact Center, Gabe Harris,

 

“Nah, we’re good. We contract directly with the manufacturer to get the best price and support.”

We hear this a lot – and hey, we get why anyone shopping for a new WEM solution might think this is true. But it isn’t. Navigating the intricacies of a workforce engagement ecosystem can be a daunting task for any business. But these workforce engagement shoppers probably don’t know that working through a Value Added Reseller (VAR) and implementation partner gets you several perks that manufacturers aren’t equipped to offer. Even better: you get those perks for less than what you’d pay the manufacturer.

 

Read More

Genesys FedRAMP Authorization – What It Is and Why You Should Care

August 14, 2023/ Under Carlton Perkins, Cloud Migration, Contact Center,

Genesys recently announced its attainment of Federal Risk and Authorization Management Program (FedRAMP) authorization for its Genesys Cloud CX platform. It’s a significant milestone, enabling U.S. government agencies to securely transition their contact center and communication platforms to the cloud.

Read More

migrate from twilio flex 1 to flex 2

Everything You Need to Know to Migrate from Twilio Flex UI 1 to Flex UI 2

June 27, 2023/ Under Customer Experience (CX), Partners,

If you’re using Twilio Flex as your customer experience (CX) platform, an important change is on the horizon. On May 16, 2023, Twilio announced they are dropping support for Flex UI 1 as of July 2024, requiring all Twilio Flex customers to migrate to the new Flex UI 2. Here’s what you need to know to plan your migration.

Read More

Let’s Never Do “Like for Like” Again

I am a proud Texas Aggie, as was my father before me and is my son. Once the blood takes on that maroon hue, there’s no returning to a sense of normalcy. One of the many things that distinguish Texas A&M is the rich panoply of traditions that infuse every aspect of daily life. Some of the traditions are quirky, like the mystical numerology of your class year. Some of the traditions are incredibly profound, like Muster or Silver Taps.

Read More
