There’s a story technologists like to tell about AI agents: software will do most cognitive work humans currently do. Transaction costs, the friction of coordinating economic activity, collapse toward zero. Markets become faster, smarter, more efficient. Everyone benefits.
It’s tidy, but it conveniently leaves out about half the economics.
The moment you treat agents as economic objects, the familiar frameworks stop fitting. Agents aren’t workers with lower wages. They aren’t software with better features. The market they’re creating doesn’t behave like any market we’ve built institutions around.
At the endpoint where transaction costs drop so far that the logic of firms and employment starts to dissolve, the “Coasean singularity” named by MIT and Harvard researchers, the supply and demand of Economics 101 and what people take for granted in labour economics become… strange.
Demand has two regions
Demand for AI agents splits into two structurally different regions:
Substitution demand: agents doing things humans currently do, such as drafting, summarizing, screening, and customer support. This has a natural price ceiling: no buyer pays more for an agent than the human alternative costs. The whole substitution market (or, to use today’s more fashionable word, displacement market) trends toward commodity pricing, and it is deflationary: as agents displace workers, human wages fall, and the ceiling agents are priced against falls with them.
Frontier demand: agents doing things humans couldn’t do at scale regardless of cost. Monitoring every transaction across a financial system in real time. Personalizing every customer interaction for millions of people simultaneously. Operating in hazardous mining environments. There is no human version to substitute for. These get priced against the value of the outcome, not the cost of an alternative: no ceiling, no floor.
This echoes MIT research identifying two scenarios for agent deployment: making decisions of similar quality at dramatically lower cost, or making higher-quality decisions than humans do. The first maps to substitution demand; the second to frontier demand.
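The two pricing regimes above can be sketched in a toy model. Everything here is my own illustration, not from the article or the MIT research: the `discount` and `capture_rate` parameters are arbitrary assumptions. The point is only structural: the substitution price tracks a falling human wage, while the frontier price tracks outcome value and never touches a wage at all.

```python
# Toy sketch (illustrative only): substitution pricing is ceilinged by the
# human wage, and that ceiling itself falls under displacement pressure;
# frontier pricing is a share of outcome value, with no wage anchor.

def substitution_price(human_wage: float, discount: float = 0.8) -> float:
    """An agent must undercut the human alternative; the wage is the ceiling."""
    return human_wage * discount

def frontier_price(outcome_value: float, capture_rate: float = 0.2) -> float:
    """No human alternative exists; price is a share of the outcome's value."""
    return outcome_value * capture_rate

wage = 100.0
for round_ in range(3):
    agent_price = substitution_price(wage)
    print(f"round {round_}: human wage {wage:.0f} -> agent price ceiling {agent_price:.0f}")
    wage *= 0.9  # displacement lowers the human wage, dragging the ceiling down
```

Both functions and their parameter values are hypothetical; the deflationary loop in the substitution market is the only feature the sketch is meant to capture.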
The boundary between the two regions is not fixed. As agents improve, tasks that once required human judgment slide into the substitution category. The frontier keeps moving. The people on the wrong side of that line don’t get much warning.
[Figure: the substitution/frontier boundary plotted against replaceability; a note on the chart reads “assumes agents keep improving”]
Supply has a gap where the middle used to be
In a normal labour market, supply is continuous: a spectrum running from cheap generalists to expensive specialists. Buyers choose along that spectrum based on need and budget.
AI agent supply doesn’t work this way.
At one end: the commodity tier. Once an agent is trained, copies deploy at near-zero marginal cost. Supply, at least in terms of quantity, is effectively unlimited.
At the other end: the differentiated tier. Agents with genuine domain depth, trained on proprietary data, fine-tuned on years of professional feedback. These are scarce, not because copying is expensive, but because the inputs that make them valuable are slow to accumulate. You can’t rush the training data that makes a legal agent reliable in contested litigation, and piling in garbage data is no substitute.
Between these two tiers, there is a chasm. The market does not reward the competent generalist: an agent that is almost as good as the best one is nearly worthless if buyers can identify the best one, for the same winner-take-most reason that Google dominates the search engine market, a dynamic this NBER paper confirms.
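The near-best-is-worthless dynamic can be made concrete with a toy winner-take-most allocation. This is my own sketch, not the NBER paper’s model: the softmax form and the `sharpness` parameter are arbitrary assumptions standing in for how finely buyers can rank agents.

```python
import math

# Toy winner-take-most allocation (illustrative only): when buyers can
# rank agents by quality, demand concentrates on the leader, and an agent
# at 95% of the leader's quality captures only a sliver of the market.

def market_shares(qualities: list[float], sharpness: float = 50.0) -> list[float]:
    """Softmax over quality: higher sharpness = buyers discriminate more finely."""
    weights = [math.exp(sharpness * q) for q in qualities]
    total = sum(weights)
    return [w / total for w in weights]

# The leader, a 95%-of-leader agent, and an 80%-of-leader agent.
shares = market_shares([1.00, 0.95, 0.80])
```

With these assumed parameters the leader takes over 90% of the market, the 95%-quality agent single digits, and the 80%-quality agent effectively nothing; the exact numbers depend entirely on the hypothetical `sharpness` value.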
Zero friction is real.
Agents are, for most practical purposes, location-less. An agent trained in Tashkent deploys in Copenhagen at no incremental cost. For cognitive tasks, there is no Geneva wage premium anymore. There is just a global market.
Labour economics has always depended on friction. Workers can’t instantly move between markets. That friction sustains wage differentials and creates floors. Agents collapse the friction on the cognitive side entirely. Wage arbitrage that took decades in manufacturing, moving production to cheaper labour, takes months or less for knowledge work.