The Connected Ideas Project
Tech, Policy, and our Lives
Ep 60 - The Org Chart Dies Last

Microsoft says the Frontier Firm is born. The real frontier is whether we can govern what we’re about to delegate.

This is a special edition of The Connected Ideas Project, because while it’s Episode 60 of the podcast, it’s the 100th edition of this newsletter since launch! Thank you for being part of this community. If you’re finding value, please share with your friends and colleagues!


Every few years, a major technology company publishes a report that tells you more than it intends to.

Microsoft’s 2025 Work Trend Index — The Year the Frontier Firm Is Born — is one of those reports. On the surface, it’s a well-produced argument for why every company needs to reorganize around AI agents. Survey 31,000 workers across 31 countries, add LinkedIn labor data and Microsoft 365 telemetry, wrap it in a compelling narrative about hybrid human-agent teams, and you’ve got a document that every Fortune 500 CEO will have on their desk by Friday.

But read it a second time. Read it the way you’d read a national security assessment — not for the headlines, but for the assumptions underneath. And something more interesting emerges.

Microsoft isn’t describing a productivity tool. They’re describing a new theory of the firm. And they’re describing it almost entirely in the language of efficiency, without ever seriously grappling with the governance architecture such a firm would require.

That gap is where the real story lives. Every time we build autonomous capability faster than we build accountability, the system doesn’t fail immediately. It fails later. And it sends the bill.


The podcast audio was AI-generated using Google’s NotebookLM.


What the Report Actually Says

Give Microsoft credit: the diagnosis is sharp. Business demands are outpacing human capacity. Eighty percent of the global workforce says they lack enough time or energy to do their work. The knowledge worker, as currently configured, is maxed out.

Microsoft’s answer: intelligence on tap. AI agents that can reason, plan, and execute tasks autonomously — not chatbots, but digital colleagues joining teams with increasing independence. The report envisions three phases, from AI as assistant to AI as operator of entire business processes, and argues that companies embracing this trajectory are already pulling ahead.

The numbers tell the story. Eighty-two percent of leaders expect to deploy agents to expand workforce capacity in the next eighteen months. Forty-six percent are already using them to automate entire workflows. A third are considering headcount reductions. And here’s the number that stopped me: 78 percent are considering hiring for AI-specific roles that didn’t exist a year ago.

This is not incremental. Microsoft even coins a term — the “Work Chart” — to replace the org chart: a dynamic, outcome-driven model where teams form around goals, not functions, powered by agents that expand what each person can do.

The Movie Production Model — and Its Missing Script

One of the report’s most revealing analogies compares the Frontier Firm to movie production. Teams assemble for a project, agents fill specialized roles, the work gets done, and everyone disbands. It’s a compelling image. Lean, high-impact, fluid.

I’ve been thinking about that analogy. Because it captures something real about where organizational design is heading. But it also reveals what the report doesn’t address.

Movie productions work because of something the report never mentions: an extraordinarily mature governance infrastructure. There are unions, guilds, contracts, liability frameworks, insurance requirements, safety protocols, credential verification systems, and chain-of-command structures that have been refined over a century. The fluidity of production is enabled by the rigidity of the rules governing it.

What is the equivalent for human-agent teams?

When Microsoft describes a world where every employee becomes an “agent boss” — someone who builds, delegates to, and manages AI agents — they’re describing a massive delegation of judgment. And delegation of judgment, in any complex system, is a governance problem before it’s a productivity solution.

I keep thinking about this because it mirrors a challenge I’ve watched play out in a different domain entirely.



What Defense Planners Already Know

I remember a briefing at the Pentagon — one of many, but this one stuck. A program manager was presenting an autonomy roadmap for a logistics system. Slides were clean. The capability curve was steep. Savings projections were compelling. And then someone from the policy shop asked a single question: “Who signs for the decision when the system gets it wrong?” The room went quiet. Not because the question was unexpected. Because everyone knew the answer wasn’t in the slides.

That moment comes back to me reading Microsoft’s report. Because the concept they’re selling as a business revolution — human-machine teaming with autonomous systems — is something the Department of Defense has been grappling with for over a decade. Different vocabulary, same structural problem: How do you maintain meaningful human oversight when the systems you’re working with can operate faster and, increasingly, more capably than the humans directing them?

The defense community learned several things the hard way.

First, that the “human in the loop” is not a design feature — it’s a design requirement that must be engineered deliberately, or it erodes. Systems that are faster and more capable than their operators create irresistible pressure to defer. The human becomes a rubber stamp. In military contexts, this is called automation bias. In Microsoft’s Frontier Firm, it has no name yet. But the dynamic is identical.

Second, that trust calibration matters as much as capability. The report’s own data hints at this: 52 percent of workers see AI as a command-based tool, while 46 percent see it as a thought partner. That split isn’t a preference — it’s a reflection of how well people understand what they’re delegating. Miscalibrated trust — too much or too little — is how autonomous systems fail in operational environments. The military has spent billions learning this lesson. The report proposes that every employee learn it on the job.

Third, and most importantly: the governance architecture has to be designed before the capability scales, not after. The DoD doesn’t deploy autonomous systems and then figure out the rules of engagement. The rules come first. They’re imperfect, they evolve, but they exist before the system is operational.

Microsoft’s report proposes deploying autonomous agents across entire business functions and then building the governance afterward. They call this journey “Phase 1 to Phase 3.” A defense planner would call it an operational risk.
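What “rules before capability” looks like in practice is policy-as-code: the boundaries of agent autonomy declared before deployment, not inferred after an incident. Here is a minimal sketch of that idea — every name, action type, and threshold below is hypothetical, invented for illustration, not drawn from the report or from any real system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "approve_invoice" (illustrative action types)
    impact_usd: float  # estimated financial consequence of the action

# Declared BEFORE the agent is operational: which action kinds it may
# take alone, and the impact ceiling above which a human must sign off.
ALLOWED_ALONE = {"draft_report", "approve_invoice"}
HUMAN_AUTH_ABOVE_USD = 10_000.0

def requires_human(action: Action) -> bool:
    """Return True if a human must authorize this action before execution."""
    if action.kind not in ALLOWED_ALONE:
        return True  # action class never delegated to the agent alone
    return action.impact_usd > HUMAN_AUTH_ABOVE_USD

print(requires_human(Action("approve_invoice", 500.0)))     # in bounds → False
print(requires_human(Action("approve_invoice", 50_000.0)))  # escalate → True
print(requires_human(Action("terminate_contract", 0.0)))    # never alone → True
```

The point is not the code; it’s the sequencing. The `ALLOWED_ALONE` set and the authorization threshold exist as explicit, auditable artifacts before the agent takes its first action — the same order of operations the defense community insists on.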



The Responsible Innovation Gap

Here’s what fascinates me about the Frontier Firm concept. It’s a genuinely interesting framework for thinking about organizational transformation. The capacity gap is real. The potential for AI agents to expand what small teams can accomplish is real. I personally see this every day with the engineering teams I run. The shift from functional org charts to outcome-driven work charts is a prediction I think will prove directionally correct.

But the report treats governance as an afterthought — a problem to be solved after the productivity gains are captured. And this is a pattern I’ve seen before.

One of the themes we’ve explored in the Science of Responsible Innovation is that the time to design governance into a system is during the architecture phase — not during deployment, and certainly not after failure. Violet Teaming exists precisely because the traditional approach — build it, ship it, regulate it — doesn’t work when the systems in question are capable of autonomous action.



The Frontier Firm, as Microsoft describes it, would have AI agents running supply chains, managing customer relationships, executing financial analysis, and operating business processes end-to-end. Each of these involves decisions with consequences for real people — employees, customers, communities. The report mentions a “human-agent ratio” as a new business metric. But a ratio tells you headcount, not accountability. Who is responsible when an agent makes a consequential error in a process it was running autonomously? The agent boss? The agent’s developer? The company that deployed it? The platform provider?

These are not hypothetical concerns. They’re the same questions that biosecurity experts ask about autonomous laboratory systems, that defense ethicists ask about lethal autonomous weapons, and that financial regulators ask about algorithmic trading. The pattern is consistent: autonomous systems that operate faster than human oversight can track them create accountability vacuums.

There’s a concept the defense community uses that Microsoft’s Frontier Firm badly needs: rules of engagement. Before any autonomous system operates, there are explicit boundaries — what it can do, what requires human authorization, who owns the consequence of each class of action. Call it an Accountability Ledger for the Frontier Firm: a document, maintained alongside the Work Chart, that maps every agentic process to a human owner who answers for its outputs. Not the person who prompted the agent. The person who is responsible when the agent’s decision costs someone their job, their loan, their medical claim. The Work Chart tells you who does what. The Accountability Ledger tells you who answers for what. You need both, or you have neither.
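The Accountability Ledger is, at bottom, a simple invariant: no agentic process ships without exactly one named human who answers for its outputs. A sketch of that invariant as a deployment check — all process names and owners below are hypothetical placeholders, chosen only to make the gap visible:

```python
# Work Chart: agentic processes a hypothetical Frontier Firm has deployed.
work_chart = {
    "invoice-triage-agent":  {"team": "finance-ops"},
    "supply-forecast-agent": {"team": "logistics"},
    "claims-review-agent":   {"team": "customer-care"},
}

# Accountability Ledger: the human who answers for each process.
# "claims-review-agent" is deliberately missing to show the check fire.
accountability_ledger = {
    "invoice-triage-agent":  "j.rivera (VP Finance)",
    "supply-forecast-agent": "a.chen (Dir. Logistics)",
}

def unowned_processes(chart: dict, ledger: dict) -> list[str]:
    """Processes on the Work Chart with no accountable human owner."""
    return sorted(p for p in chart if p not in ledger)

gaps = unowned_processes(work_chart, accountability_ledger)
print(gaps)  # ['claims-review-agent'] — deployment should block here
```

A check like this is trivial to write and uncomfortable to pass, which is exactly the point: it forces the organization to name the owner of the downside before the agent runs, not after.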

If you can’t name the human who owns the downside, you don’t have automation.

You have abdication.

Microsoft mentions Daniel Susskind’s hypothesis that human work will persist because of three limits: efficiency, human preference, and moral judgment. That’s a reasonable framework. But notice the order. Efficiency is the domain AI masters first. Human preference erodes as people habituate. Moral judgment is the last holdout — and it’s the one the report spends the least time on.



The Real Frontier

I don’t think Microsoft is wrong about where organizations are headed. The data is too consistent, the economic pressure too strong, the capability curve too steep. Some version of the Frontier Firm is coming. The question is whether it arrives as a thoughtfully governed institution or as a productivity-optimized system that discovers its governance gaps through failure.

The report notes that 33 percent of leaders are considering headcount reductions. It notes that 81 percent of employees haven’t changed jobs in the past year and that the labor market is functionally frozen. It notes that AI literacy is now the most in-demand skill on LinkedIn, alongside conflict mitigation, adaptability, and innovative thinking.

Read those data points together. They describe a workforce being asked to adapt to a fundamental restructuring of their relationship to institutional output — while the labor market offers them no exit, the governance frameworks offer them no protection, and the timeline offers them no breathing room.

That’s not a productivity story. That’s a social contract story. And it deserves the same rigor we bring to governing autonomous systems in defense, in biosecurity, in any domain where the speed of the system can outpace the judgment of the humans nominally in control of it.

The most startling finding in the entire report might be the smallest: when asked why they turn to AI over a human colleague, the number-one reason employees cited was 24/7 availability. Not quality. Not speed. Not creativity.

Availability.

They chose the machine because the machine is always there.

That’s not convenience. That’s a new dependency.

There is a version of the Frontier Firm that works — one designed with governance, accountability, and human agency built in from the start. Where the human-agent ratio reflects not just efficiency but responsibility. Where “agent boss” means not just managing outputs but owning consequences. Where the Work Chart includes not just who does what, but who answers for what when the system does something no one intended.

That version requires the kind of cross-domain thinking that doesn’t live in any single corporate report. It requires people who understand autonomous systems governance AND organizational design AND labor economics AND the specific ways that speed and capability create accountability gaps.

The org chart is dying. Microsoft is right about that.

But the thing that replaces it will be defined not by the companies that move fastest, but by the ones that build the governance architecture to match the capability they’re deploying. The history of autonomous systems — in defense, in finance, in biosecurity — teaches this lesson with uncomfortable consistency: velocity without accountability doesn’t scale. It detonates.

We are not building a new kind of company. We are building a new kind of institution. And the institutions that last — the ones that earn trust, that survive their own power — have never been built on efficiency alone. They are built on the willingness to answer for what they do.

Intelligence on tap is a capability. Judgment-by-design is a choice.

The Frontier Firm will be defined by which one it optimizes for.

— Titus
