The Gaussian Squeeze
March 23, 2026
Most discourse around AI and labor falls into one of two camps. Either AI replaces everyone and we're headed for mass unemployment, or AI is just a tool and people will adapt like they always have. Both are lazy. Both are wrong. The interesting question is not whether AI replaces people, but which kind of work it structurally eliminates — and what that does to the shape of output in an economy.
I want to make a specific claim: AI is compressing the distribution of work output. It is raising the floor, barely moving the ceiling, and hollowing out the variance. And the downstream effects of this compression are almost entirely misunderstood.
The semi-knowledge layer
There's a category of work I'd call non-repetitive semi-knowledge. It's not deep expertise — the surgeon, the compiler engineer, the trade lawyer. And it's not mechanical repetition — the assembly line, the data entry clerk. It sits in between: work that requires some context, some judgment, some synthesis, but never enough of any one domain to constitute real depth.
Think of the generalist chief of staff who sits in every meeting, takes notes, and "keeps things moving." The operations person who builds dashboards from templates. The executive assistant who drafts emails, summarizes reports, and manages information flow. The junior strategy analyst who pulls comps and formats decks.
These roles exist because information is expensive to move between people, and humans have finite bandwidth. You need someone to sit at the junction points, absorb context, and route it. The semi-knowledge layer is, in essence, a human middleware stack.
Large language models are a near-perfect substitute for the information-processing component of this. They absorb context without fatigue, synthesize across domains without bandwidth constraints, and produce structured output on demand. To be clear: many of these roles also involve political judgment, relationship management, and organizational intuition that no model captures. The best chiefs of staff don't just route information — they read rooms, manage up, and know which battles to pick. But the majority of their time is spent on the synthesize-and-route function, and that's the part that's being automated. The replacement isn't hypothetical or five years out. It's happening now, in every company that's paying attention.
What this means for organizations
The implication is not that companies become one founder and a fleet of AI agents. You still need humans — probably somewhere between a handful and twenty for most startups doing non-trivial things. But you need them for a fundamentally different reason than before.
You need domain experts with decision-making authority. People who know a domain well enough to have taste — to look at AI output and catch when it's subtly, confidently wrong. And you need them to operate with real authority, not as advisors or reviewers, but as principals who own outcomes.
The organizational structure that emerges from this looks nothing like the hierarchies we've been building. It's flat by necessity, not ideology. One person with deep expertise manages AI agents for execution in their domain, and collaborates laterally with peers who do the same in different domains. No delegation chains. No middle management translating context between layers. No weekly alignment meetings that exist because the org chart is too tall for information to travel unimpeded.
The coordination cost that justified most organizational complexity is being absorbed by AI. What's left is the part that was always supposed to be the point: people with knowledge making decisions.
The distribution argument
Here's the part I find most interesting, and the part almost nobody is discussing.
Consider the aggregate distribution of work output across an economy — or even within a single company — as approximately Gaussian. There's a mean, there's a standard deviation, there's a spread of quality from terrible to exceptional.
AI is doing something structurally significant to this distribution.
The mean is shifting right. This is the obvious part. Your worst first draft is now competent. The most junior person's analysis has proper structure. Cold emails are grammatically correct and superficially personalized. The floor is rising everywhere, and it's rising fast.
But the standard deviation is compressing. The variance in output quality is shrinking. And this is the part that matters.
When you give everyone access to a tool that produces consistently above-average output, you don't get a world where everything is excellent. You get a world where everything is the same. The terrible stuff gets pulled up toward the mean. But the exceptional stuff — the work that sat three or four standard deviations out — doesn't get more exceptional. If anything, it gets harder to distinguish from the increasingly competent middle.
[Figure: distribution of work output quality, before and after AI]
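The figure's claim is easy to restate numerically. Below is a minimal sketch in Python of the before/after story; every number in it (the means, the sigmas, the "exceptional" bar) is an illustrative assumption, not a measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Pre-AI output quality: modest mean, wide spread.
before = rng.normal(loc=50, scale=15, size=n)
# Post-AI: floor rises (mean up), variance compresses, ceiling barely moves.
after = rng.normal(loc=65, scale=6, size=n)

# Define "exceptional" against the old distribution: three sigma above
# its mean, i.e. a fixed absolute bar of 95.
bar = 50 + 3 * 15

for name, x in [("before", before), ("after ", after)]:
    print(f"{name}: mean={x.mean():5.1f}  sd={x.std():5.1f}  "
          f"P(quality > {bar}) = {(x > bar).mean():.6f}")

# before: mean ~50, sd ~15, P ~ 0.00135  (the classic 3-sigma tail)
# after:  mean ~65, sd ~6,  P ~ 3e-7 analytically, a five-sigma event;
#         most simulation runs print 0.000000.
# The mean rose by a full old-distribution standard deviation, yet the
# mass above the old ceiling shrank: floor up, tail gone.
```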
I see this concretely in VC deal flow. Every deck is well-structured. Every market sizing follows the right framework. Every competitive analysis hits the expected categories. The quality floor has risen dramatically. And yet the signal-to-noise ratio feels worse, because the noise is now high-quality noise. Polished mediocrity is harder to filter than obvious mediocrity.
We are entering an era of radical convergence toward the mean at scale, and most people are celebrating it because the average got better. But the average was never where value was created.
Tails are all that matter
In any competitive system — a market, an organization, an economy — disproportionate value accrues to the tails of the distribution. The outlier product decision. The strategy that violates every framework. The insight that couldn't have been derived from existing data because it's about something that hasn't happened yet.
These are tail events. And AI is, by construction, a regression-to-the-mean machine.
A fair objection: token-level prediction doesn't mechanically prevent novel outputs. A sequence of individually probable tokens can compose into something genuinely surprising — language models do produce creative work, and dismissing that would be dishonest. But the relevant question isn't whether an LLM can produce a tail event in isolation. It's what happens when the same tool is used by millions of people to produce the same categories of output. The compositional space is vast, but the attractors within it are not. Temperature and clever prompting expand the distribution at the margins, but the default mode — the thing that happens when a hundred thousand people use the same model for the same kind of task — is convergence. Not because each output is mediocre, but because the aggregate is homogeneous.
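To see why the aggregate homogenizes even when each individual output is fine, here's a stylized sketch with quality collapsed to a single scalar. All parameters are assumptions chosen for illustration: in the human regime each author draws around their own idiosyncratic baseline, while in the shared-model regime everyone draws from the same narrow distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n_authors = 100_000

# Human regime: each author has an idiosyncratic baseline (styles and
# skill levels differ), plus per-piece noise on top.
author_baseline = rng.normal(50, 12, size=n_authors)
human_pool = author_baseline + rng.normal(0, 8, size=n_authors)

# Shared-model regime: everyone samples the same narrow distribution.
# Temperature and prompting widen it at the margins, not by much.
model_pool = rng.normal(65, 5, size=n_authors)

for name, pool in [("human pool", human_pool), ("model pool", model_pool)]:
    print(f"{name}: mean={pool.mean():.1f}  spread (sd)={pool.std():.1f}")

# human pool: mean ~50, sd ~14.4  (sqrt(12^2 + 8^2))
# model pool: mean ~65, sd ~5
# Every model-pool output is individually competent, but the population
# has roughly a third of the spread: higher mean, homogeneous aggregate.
```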
The things that actually change a company's trajectory — or a market, or a field — are precisely the things that sit outside what any model trained on historical data would predict. They come from someone seeing a pattern that isn't in the distribution, or deliberately breaking a pattern that is.
The case for junior people
This is where the standard narrative inverts.
The prevailing take is that AI replaces junior people first. They do the "simple" work. They're the most substitutable. Cut them, save burn, let AI handle the grunt work.
This is wrong, or at least far more wrong than people realize.
What AI replaces is average work, not junior work. Those overlap, but they're not the same thing. A senior person producing templated strategy decks is doing average work. A junior person asking a question so naive it reframes the entire problem is not.
The real argument for junior people in an AI-compressed world is an options-theoretic one. Juniors are cheap optionality on tail events.
They haven't been pattern-matched into conventional wisdom. They haven't internalized the frameworks that AI was trained on and can now reproduce infinitely. They don't know what's "supposed" to work, which means they haven't pruned their hypothesis space to match the consensus distribution. Their priors are messy, uncalibrated, and occasionally wrong in exactly the right way.
Most of the time, this produces noise. The dumb question that's actually just dumb. The unconventional take that's unconventional because it's wrong. But in a world where the middle of the distribution is automated — where competent, framework-adherent, slightly-above-average output is essentially free — the expected value of that optionality changes. When the baseline is commoditized, the variance premium goes up.
This is not a romantic argument about the wisdom of youth. It's a portfolio construction argument. If AI gives you infinite median-quality output at near-zero marginal cost, the scarce resource is no longer competence. It's deviation from competence in the right direction. And you can't generate that deviation by prompting a model that was trained to converge.
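The variance-premium claim can be made concrete with a toy portfolio calculation. This is a sketch under assumed numbers: output quality is one scalar, value accrues only above a tail threshold, and the "competent" producer stands in for what AI now provides at near-zero marginal cost.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# What AI gives you for free: reliably competent output.
competent = rng.normal(70, 5, size=n)
# The junior hire: worse on average, far noisier.
junior = rng.normal(55, 20, size=n)

bar = 90  # value only accrues above this line; the median is commoditized

def payoff(x):
    # Option-like payoff: zero below the bar, linear in the excess above it.
    return np.maximum(x - bar, 0.0)

for name, x in [("competent", competent), ("junior   ", junior)]:
    print(f"{name}: mean quality={x.mean():.1f}  "
          f"P(clear bar)={(x > bar).mean():.5f}  EV={payoff(x).mean():.4f}")

# competent: P(clear bar) ~ 0.00003, EV ~ 0.00004  (a four-sigma ask)
# junior:    P(clear bar) ~ 0.04,    EV ~ 0.32
# The junior loses on average quality and wins on expected value,
# because the payoff only pays in the tail.
```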
What to actually do
If you're building a company right now, the implications are fairly concrete.
Eliminate the coordination layer. The roles that existed to move information between people, maintain alignment, and summarize and synthesize across teams: that's AI work now. AI does the job better, faster, without context loss, and without needing a skip-level to stay engaged.
Hire domain experts with authority. Not advisors. Not consultants. People who own decisions in their area and have enough depth to exercise taste over AI output. Taste — the ability to distinguish subtly wrong from subtly right — is the skill that AI makes more valuable, not less. It's also the skill that's hardest to acquire without genuine depth.
Preserve optionality on the tails. Keep people in the room who haven't been optimized yet. Whose value isn't in reliable execution — AI handles that now — but in the chance that they see something no model, and no pattern-matched senior, would see. The hit rate will be low. The expected value, in a variance-starved world, will be high.
The companies that define the next era won't be the ones with the most sophisticated AI workflows. They'll be the ones that understood what AI actually does to distributions — and built for the tails while everyone else optimized for the mean.