Res Politica Posthumana - Singularity & Scenarios (3/5)
January 2025
The Scenarios
The scenarios that come out of this lens really come down to a question investors don't always say out loud but are all underwriting one way or another: when intelligence gets cheap, what stays scarce, and who ends up owning the bottleneck?
That’s why Marx is oddly helpful here, if you treat him less as an ideological figure and more as a sharp analyst of how value moves through a system. You don’t need nineteenth-century philosophy in a term sheet. What you do need is a clean way to separate two things people constantly blur: how much output we can produce versus who actually captures the economics. Marx forces that distinction. He basically asks: Is AI expanding the pie, breaking the mechanism that used to distribute the pie, or just moving the toll booth?
In Marx’s baseline frame, value is tied to “socially necessary labor time.” Machines raise productivity, but they don’t create new value—they transmit what Marx calls “dead labor,” meaning past work embedded in capital. Living labor is what generates new value and surplus, which is why wages, labor intensity, and surplus extraction matter so much in his model. If you adopt that scoreboard, AI starts to look like a stress test for the system. It’s an extreme form of constant capital: the capital denominator rises, the labor numerator shrinks, and you drift toward a world where use-value explodes but the value base—defined as labor time—thins out. You get abundance in outputs, but fragility in the mechanism that converts production into broad purchasing power and stable aggregate profits.
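The arithmetic in that paragraph can be made concrete in Marx's standard notation (a sketch added here for illustration, not part of the original argument):

```latex
% Marx's rate of profit, in his usual symbols:
%   c = constant capital ("dead labor": machines, AI systems)
%   v = variable capital (wages paid to living labor)
%   s = surplus value (extracted from living labor, so bounded by v)
r = \frac{s}{c + v}
```

As AI substitutes for labor, c (the capital denominator) rises while v (the labor numerator) shrinks; since s is generated only by living labor, the aggregate rate of profit r is squeezed even as physical output expands. That is the formal version of "abundance in outputs, but fragility in the value base."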
Most investors don't use Marx's model; we focus on prices, margins, market power, and cash flow. Through that lens, the opportunities diverge around what remains genuinely scarce. AI makes some things cheap but enforces scarcity elsewhere, and the investable edge lies in controlling the workflow, permissions, and data rights that AI can't commoditize.
The first scenario to consider is basically the Marx baseline. Labor share compresses, demand becomes the binding constraint, and profits get harder to sustain in aggregate even as capability accelerates. The economy can do more, but fewer people are needed to do it, and the political system struggles to keep the demand side healthy without redistribution, public employment, or some new consumption engine. Capital chases returns into financial assets because the real economy can’t absorb the productive capacity cleanly. Volatility rises. Inequality widens. You still get extraordinary company outcomes, but the macro backdrop becomes more political, which changes discount rates, regulation risk, and the durability of business models that rely on smooth legitimacy.
The second scenario flips the emphasis from value creation to value capture—the “toll booth economy.” Here, the story isn’t that profits collapse; it’s that they concentrate. Core cognitive capability drifts toward commodity status, but the right to deploy it safely at scale doesn’t. Scarcity sits in distribution, workflow ownership, data rights, integration, regulatory permissioning, and trust. Profits become rents earned by whoever controls those choke points. That’s the world where labor share can fall while frontier margins rise, because the unit of advantage isn’t “who can do the work,” it’s “who owns the system the work has to pass through.”
That produces a pretty specific market structure. Winner-take-most dynamics emerge at layers where learning loops compound and switching costs harden. And you get a barbell: a small number of dominant platforms or infrastructure providers, plus a long tail of implementers and feature companies that get competed down. The diligence question becomes simple and unforgiving. It’s not “does this company use AI?” It’s “does it own the learning loop, the workflow surface area, and the trust boundary?” If not, margins are probably temporary.
The third scenario is a “hybrid labor renaissance,” which sounds counterintuitive but keeps popping up in real markets. In high-liability domains—healthcare, safety-critical systems, law, finance—society insists on human accountability even when automation is technically feasible. That creates a durable scarcity: not intelligence, but responsibility. In this world, AI amplifies rather than replaces scarce human judgment. Unit economics improve, but humans remain in the loop because institutions want a named accountable party. The winners aren’t just software vendors. They’re workflow owners who can bundle AI plus governance into something that enterprises and regulators will actually sign. The monetization line is basically: turning “we can” into “we’re allowed to.”
The fourth scenario is the inversion: synthetic labor becomes the effective source of production, and profit becomes primarily a function of owning and coordinating scalable intelligence rather than extracting surplus from human labor. This is the most destabilizing politically and, in some ways, the most attractive economically for incumbent capital, because it implies that surplus can be generated at near-zero marginal labor cost. The binding constraints shift. It’s no longer wages and working hours that matter most. It’s attention, trust, institutional absorption, and governance capacity. In markets, this looks like outsized value accruing to the owners of models, distribution, and governance rails—while human labor becomes economically peripheral across an ever-larger share of production. “Governed automation” ceases to be a feature. It becomes the permission layer for capturing surplus.
No matter the scenario, investors must find the new source of scarcity. What matters is how value is measured, allocated, and captured. In an AI-driven economy, scarcity shifts to coordination and control. The core investment question: where is the binding constraint, and who owns it? That's where returns concentrate.
So the scenario question isn’t really “does AI destroy labor” or “does AI create value.” It’s “which scoreboard ends up governing the economy, and which scarcities become enforceable?” If we hold fast to labor time as the metric, contradictions intensify, and redistribution politics dominate the macro. If we accept that value has partially escaped the labor frame, we get a capitalism defined by ownership and governance of scalable intelligence, with concentration at the top and legitimacy as the binding constraint. Either way, the investor playbook converges on something pretty practical: don’t just buy “AI.” Buy constraint ownership. Back businesses that control deployment, feedback, and trust—because that’s where scarcity hides once cognition stops being scarce.