What does your company actually own? The answer used to be fairly simple: physical capital, intellectual property, customer relationships, and people, or, more precisely, the right to direct your employees’ time. The people themselves walked out the door every evening. You owned their output, not them.
That last category is changing, in a way that sounds incremental and is not.
When a company deploys an AI agent to handle work that a person used to do, it has crossed a line that most executives have not thought carefully about. Yes, the company is running leaner and automating workflows. But at the same time it is replacing a recurring labour cost with an owned or licensed productive asset: something that generates output and can be copied, improved, and, in principle, sold.
Hence, the firm’s relationship to its own productive capacity has changed structurally.
Most companies treat the AI transition as a series of disconnected operational decisions. That is not wrong, but it is a huge missed opportunity. The firms that understand what is actually happening are making a different kind of decision: not “how many agents do we deploy” but “which cognitive capacities do we want to own, which do we license, and which do we let someone else commoditize for us”.
This is a portfolio construction question. And most firms are not yet treating it like one.
The firm used to be a coordinator of people. Now it is something else.
Ronald Coase asked in 1937 why firms exist at all. His answer: coordinating work through markets is expensive. Hiring, contracting, supervising, and aligning all cost time and money. The firm internalizes those costs because doing so is cheaper than transacting for every task in the open market.
That logic held for eighty years. The firm as an institution was, at its core, an employer.
Agents don’t change the fact that coordination has costs. But they change what the primary coordination challenge is. When cognitive tasks can be handled by software you own or license, the question is no longer “how do I attract and retain people who can do this work” but “where does the productive capacity live, and who controls it”.
Agents do not dissolve the firm; Coase’s logic still holds. What changes is what the firm coordinates: alongside the time and judgment of people, it now coordinates the deployment and improvement of productive assets it owns or licenses. The firm persists. What it is becomes less clear.
The firms that grasp this are already restructuring around it. Many AI-attributed layoffs are ambiguous: companies that over-hired during the pandemic can now cite AI as the reason for corrections they would have made regardless. The restructuring signal worth attending to is the asymmetry in what firms are hiring for instead.
Salesforce is the clearest example. Its CFO disclosed on an analyst call that AI tools allowed the firm to reassign customer service workers internally, saving $50 million in costs, while simultaneously announcing 2,000 new sales roles to sell AI capacity to others. Total headcount grew 9% year-over-year. The firm recomposed: fewer people handling cognitive tasks that agents now perform; more people selling the agents. Similarly, Citigroup indicated that automation and AI-enabled systems will allow it to run middle-office and operational functions with fewer employees while simultaneously investing in the AI infrastructure that will handle those functions.
The pattern extends beyond any single firm. A Federal Reserve Bank of New York survey of AI-using service firms found that actual AI-driven layoffs had nearly disappeared, falling from roughly 10% of firms in 2024 to just 1% in 2025. The dominant workforce adjustment was retraining, reported by a third of firms. The share of firms reducing planned hiring due to AI is expected to nearly double over the next six months. Firms are not firing their way to an AI workforce. They are quietly stopping the intake of the roles that agents are beginning to fill.
Where the compounding advantage is moving and why most firms haven’t noticed
Competitive advantage in knowledge-intensive work had a structural foundation: expertise was tacit. A good analyst or a seasoned lawyer was valuable not because of the facts they knew but because their judgment could not be copied. It lived in their head. You could only hire it, develop it, and hope to retain it.
That is changing. To be clear, this is a prediction, not a description of the present. But it is a prediction with a mechanism behind it, and mechanisms compound before the consequences become visible.
When skilled people work alongside agents, correcting outputs, validating decisions, flagging errors, they are transcribing their expertise into training data, usually without realizing it. Research suggests that the tacit judgment embedded in how experts interact with AI systems becomes legible, extractable, and ownable in a way it never was before. Once knowledge is legible, the competitive question changes. Agents can be replicated; what cannot be replicated is the data that made your agent better than a generic one: years of real decisions, real corrections, real domain feedback from actual operations. A competitor can build a comparable agent, but it cannot buy your history of building one. This only holds if the firm is actually capturing that history: systematically logging decisions, corrections, and domain feedback rather than routing everything through a hosted API where the data stays with the vendor. In reality, most firms are not.
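To make the mechanism concrete, here is a minimal sketch of what “capturing that history” could look like in practice: every agent output and the expert correction it receives is appended to storage the firm controls, so the record can later feed fine-tuning. All names here (`FeedbackRecord`, `FeedbackLog`) are illustrative assumptions, not a real product’s API.

```python
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class FeedbackRecord:
    task: str               # what the agent was asked to do
    agent_output: str       # what the agent produced
    human_correction: str   # what the expert changed it to (if anything)
    accepted: bool          # True if the output was used as-is
    timestamp: float


class FeedbackLog:
    """Append-only, firm-owned log of agent outputs and expert corrections."""

    def __init__(self, path: Path):
        self.path = path

    def record(self, rec: FeedbackRecord) -> None:
        # One JSON object per line; the file, not a vendor, holds the history.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    def training_pairs(self):
        """Yield (task, best-known output) pairs for later fine-tuning."""
        with self.path.open(encoding="utf-8") as f:
            for line in f:
                r = json.loads(line)
                best = r["agent_output"] if r["accepted"] else r["human_correction"]
                yield r["task"], best
```

The design choice worth noting is the ownership boundary, not the code: whether this log lives in your infrastructure or a vendor’s determines who owns the appreciation.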
The compounding logic is structurally sound. The NBER analysis of competitive dynamics in generative AI identifies exactly this feedback mechanism as the basis for durable advantage. More feedback improves the agent. A better agent widens the gap over competitors running generic models. The asset appreciates through use. This is the opposite of how human talent works for the firm: the accumulated value is lost the moment it walks out the door.
Human expertise: value appreciates in the person, and the firm rents it. The individual's expertise genuinely appreciates; the firm's problem is that it cannot own that appreciation. At departure, the accumulated value resets.

Agent assets: value appreciates in the asset, and the firm owns it. Each deployment generates domain feedback. Each correction improves the model. The accumulated knowledge does not walk out the door.
The compounding logic matters because agent markets are not continuous. As I argued in the first instalment of this series, there is no viable middle tier: commodity agents compete on price alone, and the differentiated tier is accessible only to whoever accumulated the relevant data first. The window to choose which tier you end up in is open now, while the proprietary datasets that will define differentiated agents are still being generated. Once a competitor has three years of labelled domain operations behind their model and you do not, the gap is structural. You cannot compress accumulation.
The decision most firms are avoiding
Executives are quite comfortable deploying agents. The uncomfortable decision is whether to treat deployment as a strategic asset allocation problem or as a series of operational patches.
Own means controlling the data. If you are systematically capturing the domain-specific decisions, feedback, and corrections that your people generate and using that to train or fine-tune agents, you are building a proprietary productive asset. That asset appreciates as you deploy it, because deployment generates more data, which improves the agent, which widens the gap from competitors. This is the data flywheel as the defining moat of the AI era: the feedback loop that makes the model better than anyone else’s.
License means depending on someone else’s moat. You are renting cognitive capacity that your competitors can rent equally. Your efficiency gains are real. Your competitive advantage from them is temporary, because the gap closes the moment your competitor licenses the same thing.
Cede means competing on price in a commodity market you did not choose to enter. If the task is something a generic agent can do adequately, and you have not built proprietary data around it, you are in that market regardless of whether you intended to be.
Most firms right now are making licensing decisions while believing they are making ownership decisions. The distinction (do we control the training data, or are we just a customer of someone who does?) is not yet part of most enterprise AI conversations. It will be, once the competitive consequences become visible. By then, the data advantages will already be compounding.
What this means for the firm
The firm is not disappearing.
But what the firm does, and what it owns, is shifting.
For most of the 20th century, the firm’s primary job was to attract, organize, and retain human productive capacity. HR, management theory, organizational design: the entire apparatus existed to answer the question of how to get people to do good work reliably.
Today, the firm’s primary job, as agents take over more cognitive tasks, is to decide which productive capacities to own as assets, which to license, and which to cede to the commodity market. That is closer to what a fund manager does than what an employer does.
The executives who are ahead of this are not asking “how do we manage the AI transition”. Acemoglu, Autor, and Johnson argue that the direction of AI deployment is a choice, that firms can shape whether agents augment workers or replace them. But a choice made by default, through a series of disconnected operational decisions, is still a choice.
The firms ahead of this are making it deliberately. They are asking a harder question: in five years, when the agents that perform our core functions exist and are owned by someone, who do we want that to be?
That question has an answer. Most firms just haven’t started thinking about it.