By Damian Mathews & The Last Mile Team
This week, a startup that answers missed calls for HVAC and plumbing companies hit a $1 billion valuation. $125 million raised. On track to book $1 billion in jobs this year. 800+ customers.
The product is AI voice agents for home services businesses. They answer every inbound call within seconds, book jobs into the CRM, follow up on old estimates, and reroute leads based on technician capacity. They work at 3am. They don’t call in sick.
That’s interesting. But the reason I’m writing about it has almost nothing to do with the company.
It’s the approach.
These are not enterprise CX teams with 18-month deployment timelines. These are 30-person HVAC companies where the owner is in a truck and the receptionist is drowning in calls by mid-morning. They didn’t start with an AI strategy. They started with a problem: we’re missing calls and losing jobs. Then they found something that fixed it. Then they scaled it. No vendor evaluation committee. No 6-month pilot or slide deck.
Problem. Evidence. Solution. Scale.
That sequence should sound familiar.
In our ebook A1B: Customer Zero to AI-First, Kerry describes the same pattern we followed internally. Two of our five plays are especially applicable.
Play 1: Show Don’t Tell. Don’t plan it, build it. Try it on a real problem with real data and see if it works.
Play 3: Possible, Practical, Profitable. Prove that AI can do the job, prove it works in a realistic environment, then scale what’s proven. That’s the exact sequence these home services companies followed to go from missing calls to booking $1 billion in jobs with AI.
Meanwhile, most enterprise contact centers are still in the planning phase.
Think about why…
A 30-person HVAC company has one decision-maker, one problem, and zero patience for anything that doesn’t produce results in weeks. An enterprise CX org has multiple stakeholders, competing priorities, a vendor landscape that takes months to evaluate, and a governance process that can stall any initiative indefinitely.
The technology available to both is roughly the same. The speed at which they can move is completely different.
And that speed gap is the real story. The plumber didn’t have better AI. They had a shorter path from problem to production. A shorter OODA Loop. They knew what the problem was (missed calls), they had the evidence (lost revenue), they tested the solution (did it book the job?), and they scaled what worked. The whole cycle took weeks.
For CX leaders running larger, more complex operations, the question is obvious.
Can you build that kind of path? Can you go from customer evidence to a working prototype to governed production deployment without it taking a year and a half? Can you give your team the room to try something, prove it, and scale it with the same urgency a plumber has when the phone rings at 3am and nobody’s there to answer?
That’s what we’ve been building toward.
A repeatable system for doing exactly that, on enterprise contact center stacks, with the governance and security that enterprise requires.
A plumber’s AI is booking a billion dollars in jobs. What’s yours doing?
— Damian
Here’s what went down this week.
Bleeding Edge
Early signals you should keep on your radar.
DeepSeek open-sourced its V4 model family, dropping a 1.6-trillion-parameter flagship that benchmarks alongside GPT-5.5 and Claude Opus 4.7. V4-Pro costs $3.48 per million output tokens, roughly a tenth of what Anthropic and OpenAI charge for comparable performance. Frontier-level capability at commodity prices means AI continues down the path of becoming cheaper and better.
OpenAI and Microsoft dissolved their exclusivity arrangement, making GPT-5.5 available on AWS Bedrock within 24 hours of the announcement. Microsoft’s IP license transitions from exclusive to non-exclusive by 2032, and OpenAI can now serve all products on any cloud. Seven years of lock-in just ended… so enterprise teams with multi-cloud strategies may finally get the model portability they’ve been asking for.
Leading Edge
Proven moves you can copy today.
OpenAI shipped GPT-5.5, its most capable model yet, to ChatGPT Plus, Pro, Business, and Enterprise users on April 23. The model scores 88.7% on SWE-bench Verified, halves hallucination rates, and handles 12-million-token contexts for agentic coding and research workflows. Six weeks after GPT-5.4, the pace alone signals that the release cadence may matter as much as any single benchmark improvement.
Merck committed up to $1 billion to make Google Cloud its primary AI partner, deploying Gemini Enterprise across R&D, manufacturing, and commercial operations. Google engineers will embed with Merck’s teams to build agentic workflows spanning its 75,000-person global workforce. A pharma giant wiring AI into every function, not just the lab, is the kind of commitment that could pull an entire industry forward.
Off the Ledge
Hype and headaches we’re steering clear of.
Meta and Microsoft announced more than 20,000 job cuts in a single week, both citing AI-driven efficiencies. Meta will eliminate 8,000 roles starting May 20; Microsoft offered buyouts to 7% of U.S. staff. The firms spending the most on building AI are now cutting the most people because of it, and that irony likely has legs.
Google signed a classified AI deal with the Pentagon, granting Gemini access on defense networks after Anthropic refused similar terms. As a result, some 600 Google employees protested; the company also exited a $100 million drone swarm contest after an ethics review. Saying yes to classified work while saying no to autonomous drones is a line Google may find harder to hold than to draw.