By Damian Mathews and The Last Mile Team
Everyone has an AI strategy now.
Most of them look similar: a leadership mandate, a few pilots, a vendor evaluation, and a vague timeline. The slide deck is polished, and the org chart has a dotted line to someone with “AI” in their title.
And yet many of these organizations can’t point to a single AI initiative that’s running in production, improving outcomes, and getting better every week.
That gap is wide. And after a year of helping organizations try to close it, I’ve noticed that the ones that genuinely move tend to follow the same principles. Not the same tools or vendors.
The same instincts about how to work when the tools are this powerful and this new.
Fish wrote something this week that sharpened this for me. In None of Dario’s Roads Lead to Generalization, he makes a careful argument that AI is becoming superhuman at compression (absorbing vast information, synthesizing patterns, producing structured output across bounded tasks) while still struggling with genuine generalization (extending into novel, adaptive, reflexive situations where the rules themselves are changing).
The implication is that AI will dominate the operational layer of most work. The strategic, adaptive, human layer is where people still earn their keep.
That framing matters here, because most of these rules are about designing your operation so AI can do what it’s already great at, while your people focus on the work that still requires judgment, context, and the ability to navigate situations nobody’s seen before.
These apply whether you’re in CX, finance, legal, or engineering. The rules themselves are fairly general, but since this is a CX newsletter, I’ll talk about them in terms of our field.
1. Start with evidence, not ideas. The best AI use case in any operation is already visible in the work itself. In a contact center, it’s in your conversation data: what customers are asking, where processes break, which interactions take five times longer than they should. The evidence is there. Most teams just haven’t looked at it systematically.
2. Let the data pick the use case. Most teams start with a hypothesis and go looking for evidence to support it. Flip the order. Let the volume, friction, and failure modes point you to the target. You’ll pick a better one, and you’ll already have proof before you start building. In CX, this is basically the difference between automating what the VP thinks matters vs. automating what customers are truly struggling with.
3. Prototype before you plan. A working prototype teaches you more in a week than a requirements document teaches you in a quarter. Build something rough, put it in front of real people, and let their reactions shape the spec. This is how Kerry built a working AI agent on our last LinkedIn Live: conversation data first, prototype second, plan emerged from what worked.
4. Prove it before you scale it. A demo is not proof. Proof means it handled real scenarios, governance requirements were met, and results were measurable in a controlled environment. In CX, where one bad automated interaction can undo months of brand trust, this discipline is the difference between launch and liability.
5. Build on what you already have. You don’t need a new platform. Whatever you’re running, that’s where the AI should land. The ecosystem is moving toward open connectivity (MCP, open APIs, model-agnostic tooling) precisely because the future is making your current environment smarter, not replacing it with another suite.
6. Governance belongs at the start, not the end. Security, compliance, and auditability are structural decisions (not finishing touches). If your compliance team hasn’t reviewed the system before it goes live, you’re accumulating risk. In customer-facing AI the consequences show up immediately and publicly.
7. Own the method, even if you outsource the build. You can bring in partners to build and run. That’s a delivery preference. But the transformation logic, the prioritization framework, the operating model: those need to live inside your organization. Capability that leaves when the engagement ends was never really yours.
8. Shrink the loop. The cycle from “we spotted a problem” to “the fix is live” should be measured in days, not quarters. This is true in product development, in marketing, in engineering. In CX, the gap between issue and solution is still measured in months at most organizations… that lag is where performance dies.
9. Put AI on the compression. Put people on the judgment. This is Fish’s point, applied practically. AI is extraordinary at processing volume, spotting patterns, producing structured output, and handling bounded, repeatable tasks. Let it do that. Free your people for the work that requires reading a room, navigating ambiguity, and making calls in situations where the playbook doesn’t apply.
10. Production > applause. A pilot that gets a standing ovation in the quarterly review but never goes live is a hobby. The organizations getting the most value from AI are the ones that obsess over what happens after the build: integration, operation, iteration, and the handoff from project team to operating team. It seems few organizations have a repeatable way to get there. Building that path is what we’ve been focused on. More to come on this.
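The “let the data pick” idea in rule 2 can be sketched as a simple scoring pass over conversation topics. Everything here is hypothetical (the topic names, the volumes, and the scoring formula are invented for illustration, not taken from any real contact center); the point is the shape of the exercise: rank by observed volume and friction, then automate the top of the list instead of the VP’s favorite.

```python
# Hypothetical sketch of rule 2: let volume and friction in the
# conversation data rank automation candidates, rather than starting
# from a hypothesis. All topic names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    monthly_volume: int        # conversations per month
    avg_handle_minutes: float  # average handle time per conversation
    failure_rate: float        # share ending in escalation or repeat contact

    def friction_score(self) -> float:
        # Total agent-minutes spent on the topic, weighted up when the
        # process fails: high volume + long handling + frequent failure
        # marks the strongest automation target.
        return self.monthly_volume * self.avg_handle_minutes * (1 + self.failure_rate)

topics = [
    Topic("password reset", 12_000, 4.0, 0.05),
    Topic("billing dispute", 3_500, 18.0, 0.30),
    Topic("order status", 9_000, 3.0, 0.10),
]

# The data, not the org chart, picks the use case: billing disputes win
# here despite having the lowest raw volume.
for t in sorted(topics, key=Topic.friction_score, reverse=True):
    print(f"{t.name:16s} friction={t.friction_score():>10,.0f}")
```

Note what the weighting does: the lowest-volume topic comes out on top because handle time and failure rate dominate. That is exactly the kind of result a hypothesis-first approach tends to miss.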
On Wednesday (March 25th), Kerry and Fish are going live to dig into the bigger questions underneath all of this. I’ll be there too, asking your questions from the comment section.
Will AI make CX harder before it makes it better? If your kid is picking a college major right now, what do you tell them? What are the most counterintuitive outcomes of the AI revolution? Big picture first, contact center second. I hope to see you there.
What rules does your team run by?
— Damian
Here’s what went down this week.
Bleeding Edge
Early signals you should keep on your radar.
Jensen Huang projected $1 trillion in orders for Blackwell and Vera Rubin through 2027, doubling NVIDIA’s prior $500 billion estimate as the Vera Rubin NVL72 enters production. The NVL72 pairs 72 Rubin GPUs with 36 Vera CPUs in a rack-scale system; AWS, Google Cloud, Microsoft, and OCI are first in line to deploy in H2 2026. At this commitment level, hyperscaler AI infrastructure capacity and pricing for the next two years may well be decided by what was announced at GTC this week.
The AI data center buildout is driving a surge in demand for skilled trade workers, from electricians and cooling technicians to robotic maintenance specialists. Demand for robotic technicians rose 107% between 2022 and 2026, per a Randstad analysis of 50 million job postings; some roles now pay over $80,000 with no four-year degree required. The physical infrastructure layer of AI is reshaping regional labor markets in ways that may prove more durable than most of the product launches dominating the headlines.
Leading Edge
Proven moves you can copy today.
Zendesk agreed to acquire Forethought, a CX AI startup processing over a billion monthly interactions, in its biggest deal in two decades. Forethought’s self-improving agents will fold into Zendesk’s Resolution Platform, advancing the product roadmap by more than a year. For CX teams on Zendesk, agentic resolution is arriving faster than expected. For everyone else, this marks where the vendor bar appears to be moving.
Perplexity launched Computer for Enterprise, a multi-model AI agent with connectors for Snowflake, Salesforce, and HubSpot, aimed directly at Microsoft Copilot. The enterprise tier includes SOC 2 Type II, SAML SSO, and Slack integration. Over 100 enterprise customers reportedly demanded early access in a single weekend. An agent that queries a CRM and answers in Slack (without a data team in the loop) targets a bottleneck most enterprise teams are still working around.
Off the Ledge
Hype and headaches we’re steering clear of.
Encyclopaedia Britannica and Merriam-Webster sued OpenAI for “massive copyright infringement,” alleging nearly 100,000 articles were scraped without authorization. The complaint also alleges ChatGPT’s RAG workflow reproduces content in real time and that hallucinations falsely attributed to the publishers may violate Lanham Act trademark protections. When the dictionary and the encyclopedia are both plaintiffs, a licensing ruling that reshapes how AI companies use training data looks increasingly likely.
Fake Claude Code install pages are spreading infostealers targeting developers who think they’re setting up a popular AI coding assistant. Malwarebytes found the spoofed pages are part of a wider pattern… attackers are using AI tool credibility as bait, with AI-generated clone sites barely distinguishable from the real thing. Be careful out there!
See you next week!