Humans Around the Edge

By Damian Mathews & The Last Mile Team

Yesterday, Coinbase CEO Brian Armstrong sent an email to his entire company announcing he was cutting 14% of the workforce. He posted the full text publicly.

Most of it reads like a standard restructuring memo.

One line doesn’t.

“We’re fundamentally changing how we operate: rebuilding Coinbase as an intelligence, with humans around the edge aligning it.”

Rebuilding the company as an intelligence. Humans around the edge.

That’s a CEO describing AI as the operating core of a public company and the humans as the layer that keeps it pointed in the right direction.

The details back it up. Coinbase is flattening to five layers max below the CEO. Eliminating “pure managers” and replacing them with player-coaches who are also strong individual contributors. Creating “AI-native pods” built around people who can direct fleets of AI agents. Experimenting with one-person teams where a single employee combines engineering, design, and product management, supported by AI doing the execution.

One person. Directing agents. Doing the work of a team.

Armstrong also said something about pace that stuck with me: “Over the past year, I’ve watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code.” He’s describing what’s already happening inside his company. And he’s reorganizing the entire business around that reality.

Fish has been writing about this exact dynamic. In Why the Two-Slice Team Is Real, he argued that AI shrinks organizations, not just teams, but that coordination costs increase unless the infrastructure absorbs the speed. Armstrong is proving the thesis in public. Shrinking the org and rebuilding the coordination layer around AI at the same time.

None of this is going to be easy. Reorganizing a 4,000-person public company around a technology that changes every few months is a massive bet. The execution risk is real.

But companies that make this transition, even messily, are far more likely to survive the next decade than the ones that keep bolting AI onto the side of an org chart designed in 2015.

For CX leaders, this should hit close to home. Contact centers are arguably closer to Armstrong’s model than any other department. You already have AI handling a growing share of customer volume. You already have humans stepping in for complex, ambiguous, high-judgment interactions. The structure he’s describing is where contact centers have been heading for years.

The question is whether you’re designing that transition deliberately or letting it happen to you.

The AI tools exist. The willingness to cut costs exists. What most companies are missing is a system for redesigning how humans and AI work together so the operation gets better, not just cheaper. We wrote about how we did this ourselves in A1B: Customer Zero to AI-First, and the approach we’ve been building for clients is designed around exactly this problem. More on that soon.

Armstrong called it an inflection point. The companies redesigning around AI now, with real structures and real governance, will operate at a fundamentally different speed than the ones still debating where to start.

Is your organization being rebuilt around AI? Or is AI bolted onto the side?

— Damian

Here’s what went down this week.

Bleeding Edge

Early signals you should keep on your radar.

Anthropic signed a compute partnership with SpaceX, the latest in a run of capacity deals expanding its supply. Effective today, it doubled Claude Code’s 5-hour rate limits for Pro, Max, and Team, scrapped peak-hour throttling on Pro and Max, and lifted Opus API rate limits. Compute capacity may now decide who ships frontier features and who throttles users.

Big Tech is on pace to spend $700 billion on AI infrastructure this year, with no end to the buildout in sight. Google, Amazon, Microsoft, and Meta together plan roughly $725 billion of capex in 2026, up 77% from last year’s record. Hyperscaler commitments at this scale may shape compute pricing and capacity access for the next several years (or even the next decade).

Leading Edge

Proven moves you can copy today.

Microsoft Agent 365 hit general availability at $15 per user per month, extending Entra controls to Copilot Studio agents and even local agents on endpoint devices. Conditional Access is GA for delegated agents, with registry sync to AWS Bedrock and Google Cloud in preview. Shadow AI agents now have a sanctioned management plane that CIOs have been asking for.

Anthropic and OpenAI both launched private-equity-backed services arms within hours of each other, conceding that enterprise change management has become the bottleneck for revenue. Anthropic teamed with Blackstone, Hellman & Friedman, and Goldman Sachs on a $1.5 billion JV, while OpenAI closed a $10 billion vehicle anchored by TPG. Expect a wave of partner-led, vertical-specific AI deployments to follow.

Off the Ledge

Hype and headaches we’re steering clear of.

Five major publishers and Scott Turow sued Meta over its use of pirated books to train Llama. The class action alleges Meta pulled material from LibGen and Anna’s Archive with Zuckerberg’s personal sign-off, with Hachette, Macmillan, McGraw Hill, Elsevier, and Cengage among the plaintiffs. Anthropic settled a similar case for $1.5 billion last year, and the bill for training data may keep climbing.

South Africa pulled its draft national AI policy after journalists discovered the document had cited fictional academic sources. At least six of 67 references were fabricated, pointing either to journals that do not exist or to real journals where the cited papers were never published. The irony of an AI policy hallucinating its own footnotes should make every cabinet office (or maybe every office in general) double-check its briefs.