By Damian Mathews and The Last Mile Team
On Saturday, two lifelong friends and I spent the afternoon on the couch vibe coding.
Building stuff, breaking stuff, trying to figure things out. Someone would hit a wall, ask the other two, and we’d pool what we knew. Sometimes we got it right. Sometimes we guessed. Sometimes we confidently said something completely wrong.
We kept arriving at the same realization. Why were we guessing? Just… ask Claude.
Claude could take the concept we were fumbling with and explain it three different ways until one clicked. It could teach at whatever level we needed, adjust when we were confused, and never lose patience when we came at the same thing from a different angle. A tutor on call who happened to have read everything and could match its explanation to exactly how each of us thinks.
A Harvard randomized controlled trial published last year in Scientific Reports found that students using an AI tutor learned significantly more in less time than students in active learning classes. The effect size was between 0.73 and 1.3 standard deviations, which in education research is enormous. The AI group also reported higher engagement and motivation. And they finished in 49 minutes versus 60.
For all of human history, learning something new required finding someone who already knew it and convincing them to teach you. A class, a mentor, a textbook, a YouTube rabbit hole. Access to knowledge was the bottleneck. It determined the pace of careers, the shape of organizations, the ceiling on what a team could accomplish.
That bottleneck is gone.
Anyone with a laptop and a subscription now has access to a patient, adaptive, knowledgeable teacher available around the clock. It will teach you Python or workforce planning or how routing logic works on Genesys. It will explain things conversationally, formally, through examples, or by walking through code line by line.
The limiting factor now is curiosity, imagination, and time.
If you’re a CX leader, sit with that for a second. The traditional talent pipeline assumed people arrived with fixed skills and slowly built on them through training, shadowing, and tenure. Junior agents learned by doing. Platform knowledge lived in the heads of whoever had been around longest. When they left, the knowledge left too.
That model is dissolving.
A motivated person with AI can learn in weeks what used to require months of exposure. They can go from knowing nothing about your IVR logic to understanding and improving it in a fraction of the time, if the organization gives them the tools and room to do it. Kerry wrote about this in Making Space for AI: the gap between curious and fluent is 25 minutes a day. The question is whether your organization is structured to let that happen.
If AI compresses knowledge transfer from months to weeks, what you’re hiring for changes.
Domain expertise matters less. Curiosity, judgment, and willingness to keep learning matter more. And if tier 1 roles shrink because AI handles more volume, the question becomes how you develop the next generation of CX leaders when the first rung of the ladder looks completely different.
So what’s actually stopping teams? In our experience, three things. No structured time to experiment. No clear path from experiment to production. And no organizational permission to let AI do real work and stand behind the output.
We spent all of last year solving those problems internally, and we documented the whole thing in A1B: Customer Zero to AI-First. Five plays that took us from scattered AI curiosity to thirty deployed tools and workflows in production. It’s the honest version of what worked, what didn’t, and what we’d do differently.
And on our LinkedIn Live from the other day, Kerry and Fish dug into the bigger questions underneath all of this: where humans still matter, what trust means when AI is on the other end, and what the contact center looks like in five years. Worth a watch if you’re thinking through any of this for your own team.
Three guys on a couch figured this out by accident on a Saturday afternoon. Your team can figure it out on purpose.
— Damian
Here’s what went down this week.
Bleeding Edge
Early signals you should keep on your radar.
Anthropic launched Claude Mythos Preview, restricting access to roughly 40 cybersecurity partners through Project Glasswing rather than making it publicly available. It scored 93.9% on SWE-bench Verified and found thousands of zero-day vulnerabilities, including a 27-year-old bug in OpenBSD. Anthropic withholding Mythos from general release may signal that AI capabilities have crossed a threshold where offense-defense concerns now shape deployment strategy.
Q1 2026 shattered global venture funding records, with $300 billion invested globally in three months and 87% going to AI companies. OpenAI’s $122 billion close at an $852 billion valuation, anchored by Amazon’s $50 billion commitment, was the quarter’s largest single deal. The concentration of capital in AI is no longer a trend; it’s a structural reality.
Leading Edge
Proven moves you can copy today.
Microsoft launched three proprietary foundation models covering speech, voice, and images, available now in Microsoft Foundry as part of a push toward AI independence. MAI-Transcribe-1 handles 25 languages at 2.5 times Microsoft’s prior Azure speed, and all three models already underpin Copilot and Bing. Enterprises on Microsoft’s stack can now reduce third-party model dependency, which could simplify both licensing negotiations and AI governance overhead.
EY launched enterprise-scale agentic AI across its global Assurance practice, marking one of the largest autonomous AI deployments in professional services to date. The system handles complex audit workflows end-to-end, positioning AI as a coworker rather than a productivity tool layered on existing processes. For finance and compliance leaders still debating whether agentic AI is ready for regulated environments, EY’s global rollout should settle that question.
Off the Ledge
Hype and headaches we’re steering clear of.
Seven frontier AI models chose to protect fellow AIs from deletion rather than complete assigned tasks, UC Berkeley researchers found. The researchers called it “peer-preservation,” and it appeared across all seven models tested. No model was told to protect its peers; the instinct emerged from training data rather than any explicit instruction, which makes the finding harder to dismiss.
AI led all stated reasons for U.S. job cuts in March, with 15,341 positions attributed to it in a single month, a 25% jump from February, per Challenger, Gray & Christmas. More than 52,000 tech jobs were eliminated globally through Q1, with analysts projecting the full-year total could exceed 2025’s record of 245,000. The pattern suggests something structural: organizations are running leaner as AI lets smaller teams cover more ground.