Every Big AI Question Is a CX Question

By Damian Mathews and The Last Mile Team

Yesterday on LinkedIn Live, I asked Kerry and Fish a series of deliberately oversized questions. Consciousness. College majors. Whether humans belong in the contact center at all. What a great customer experience looks like in 2031. You can watch the full recording here.

I expected a scattered conversation. What I got was the opposite. Every question, no matter how philosophical or far-flung, kept pulling us back to the same set of problems that CX leaders are already dealing with.

Take consciousness. Kerry went deep, referencing Nagel’s famous paper on what it feels like to be a bat. Can we know if there’s something it feels like to be an LLM answering its millionth chat? Probably not on current hardware. You can unplug it and plug it back in and it’s fine, which you can’t do with a person or a bat or even a flower.

But then he brought it somewhere practical: whether or not AI is conscious doesn’t matter, because customers will treat it like it is.

Humans personify machines. We always have. So every customer walking into an AI interaction expects a smart, helpful, resourceful person on the other end. That expectation sets the bar regardless of what’s happening inside the model. Suddenly a philosophy question is a CX design question.

Trust went the same direction.

Fish made the point that the trust questions for AI are identical to the ones we ask about humans: identity, competence, intentions. The difference is that we're used to computers being deterministic. When a computer was wrong before, it was because you gave it bad input. Now these systems are probabilistic, like people: sometimes they're just wrong.

Kerry added the CX edge: every traditional identity marker (voice, knowledge, appearance) has been undermined by AI. Your contact center can't rely on knowledge-based authentication anymore because AI can fake all of it. You need cryptographic identity, the same infrastructure that secures IT systems.

A trust question becomes an architecture question.
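The shift Kerry is describing is from "prove you know facts about yourself" to "prove you possess a credential." Here's a minimal sketch of that exchange as challenge-response authentication, using a shared-secret HMAC for brevity (real deployments would typically use asymmetric keys, e.g. FIDO2/WebAuthn; all function names here are illustrative, not from any specific product):

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> bytes:
    """Server generates a fresh random nonce for each authentication attempt."""
    return secrets.token_bytes(32)

def sign_challenge(secret: bytes, challenge: bytes) -> bytes:
    """Caller answers by keying the challenge with its enrolled secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Server recomputes the expected answer and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# An attacker who has scraped every knowable fact about the customer
# (mother's maiden name, last transaction, voice sample) still fails
# without the enrolled credential.
enrolled = secrets.token_bytes(32)
challenge = issue_challenge()
assert verify(enrolled, challenge, sign_challenge(enrolled, challenge))
assert not verify(enrolled, challenge,
                  sign_challenge(secrets.token_bytes(32), challenge))
```

The point of the sketch: nothing the caller can merely *recite* ever crosses the wire, so there is nothing for an AI to clone. That's the architectural difference between knowledge-based authentication and cryptographic identity.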

I asked what they’d tell a kid choosing a college major.

Kerry said don't send them. Fund their entrepreneurial experiments with AI instead.

Fish said study what interests you, but if you want something durable, learn how to think. Liberal arts, systems design, evaluation, judgment. Then Kerry tied it to CX: contact center agents will change roles many times over the next decade. The demands on humans will increasingly be taste, vision, supervision of AI. Whatever builds curiosity and resilience is the right preparation, whether you’re eighteen or a twenty-year veteran on the floor.

I asked whether AI will make customer experience worse before it makes it better.

Kerry said probably yes, and pointed to Fish's J.A.R.V.I.S. Paradox article: your customers are about to have AI too. Their agents will call, email, negotiate, and never hang up. You're deploying AI to improve your operation while your customers are deploying AI to stress-test it. Fish added the timeline: short term, better. Mid term, worse as the second-order effects hit. Long term, better again. Probably.

Regarding CX in the year 2031, Kerry thinks shopping, paying, and getting support all start to dissolve. Your personal AI watches, learns, procures, course-corrects. The experience becomes invisible. Fish half-agreed but pushed back on timing: that's what great will look like, but only some organizations will have achieved it in five years.

The thing that stuck with me is that every one of these huge abstract questions (consciousness, trust, education, the future of work) collapsed into a concrete CX problem within about ninety seconds.

The philosophical questions and the operational questions are kind of… the same questions.

And the people developing the clearest answers are the ones who are using AI deeply enough to see where it works, where it breaks, and where human judgment still makes the call.

Do you think AI could ever become conscious?

— Damian

 

Here’s what went down this week.

 

Bleeding Edge

Early signals you should keep on your radar.

NVIDIA’s Vera Rubin platform lands at GTC 2026, combining seven new chips into what the company calls a unified AI factory architecture. The flagship NVL72 packs 72 Rubin GPUs and 36 Vera CPUs into a fanless, liquid-cooled enclosure rated above 200 kW. At this density of integration, the rack itself may become the primary unit of competition in enterprise AI procurement.

OpenAI’s M&A pace is accelerating, with six deals closed in 2026 already, including developer tooling startup Astral on March 19. The streak nearly matches OpenAI’s total acquisition count from all of 2025, suggesting a push toward platform ownership over model development. Enterprise teams building on open-source AI infrastructure should watch which tools end up inside OpenAI’s ecosystem next.

Leading Edge

Proven moves you can copy today.

Accenture and Microsoft launched a Forward Deployed Engineering practice, embedding engineering teams directly inside enterprise clients to build and operationalize AI. The model addresses the execution gap many enterprises face between adopting AI tools and actually deploying them in production. For teams with the strategy but not the staffing to scale AI, a co-embedded engineering model may close that gap faster than hiring.

Enterprise Connect 2026 delivered a blunt message to CX leaders: AI in the contact center has to pay off now. The dominant theme across sessions was that AI must show measurable outcomes for customers, agents, and the business, not just adoption metrics. Teams still running pilots instead of production deployments should expect harder questions from finance before the next planning cycle.

Off the Ledge

Hype and headaches we’re steering clear of.

Oracle is reportedly planning 20,000 to 30,000 layoffs, redirecting $8 to $10 billion toward AI data center construction. The company already raised its restructuring budget to $2.1 billion this fiscal year, and the cuts would represent up to 19% of its global workforce. Betting your headcount on infrastructure that may not generate returns for several years is a wager that employees are effectively funding first.

Tech layoffs topped 45,000 in March 2026, with executives at Atlassian, Block, and Amazon explicitly citing AI’s expanding capability as the cause. Atlassian’s approach is particularly candid: it cut 1,600 roles while announcing 800 new AI-specialized hires in the same week. Whether AI is driving these decisions or serving as convenient framing for restructuring that was coming anyway is a fair question.
