Res Politica Posthumana - Singularity & Scenarios (2/5)
January 2025
The Framing
The “Singularity” debate tends to go off track the moment it forces a binary: intelligence is either human or machine, with nothing in between. That split feels intuitive, but it doesn’t survive a closer look.
Human intelligence was never a sealed, self-contained thing. It has always been shaped and amplified by the tools and systems around us: language, writing, numbers, diagrams, institutions, instruments, and now computers. So if “natural” means unassisted or unscaffolded, that version of intelligence is mostly a myth. In a literal sense, intelligence has always depended on artifice.
That reframing matters for investors because it changes the direction of the story. AI isn’t “entering” the economy from the outside. It’s tightening the scaffolding that already makes modern cognition possible. It narrows the gap between thinking and doing, and when that gap compresses, what used to be scarce can become cheap.
Markets have a consistent response to that kind of shift. When a constraint collapses, value doesn’t disappear. It migrates. It moves away from what has been commoditized and toward whatever controls deployment, improvement, and trust. So the knee-jerk move, hunting for “AI companies”, is often the shallow move. The deeper move is to look for industries where cognition is the binding bottleneck in the value chain and assume that bottleneck is about to be repriced.
None of this is about turning people into machines. It’s the opposite: it’s noticing that the human has always been technical. What’s new is not that tools assist us, but that they now sit closer to judgment. These systems don’t just help us lift and remember. They increasingly help us decide, explain, and communicate. That’s why the “product cycle” metaphor breaks down. This is closer to a new infrastructure layer being laid down: a cognition layer that runs underneath many industries at once.
Once you see it that way, flashy demos matter less than placement. Who sits inside the workflow? Who touches the decision? Who controls the channel through which cognition gets applied at scale?
That’s also where the real tension shows up. If cognition is partly autopilot at the neural level, and if institutions shape what choices are even available, then who is doing the deciding? Neuroscience dissolves free will into biology. Social theory dissolves the sovereign individual into incentives and systems. Put those together and you don’t get a metaphysical puzzle so much as a practical one: a crisis of decision.
A useful way to handle that crisis is structural. Human intelligence lives in the gap between two forms of automaticity: biological automaticity on one side, technical automaticity on the other. Thinking shows up in the space between them. And historically, what we call intelligence isn’t smooth optimization. It’s interruption: breaking a habit, pausing a default response, reframing the situation so something genuinely new can happen. Automatic systems generate order. Intelligence redirects that order.
In companies, that maps cleanly to where value will be created. The highest-leverage AI won’t just generate content faster. It will sit at the interruption points: where a process breaks, where exceptions pile up, where decisions must be justified, audited, defended, and owned. In other words, the money isn’t only in automation. It’s in governed automation: permissions, overrides, audit trails, and accountability, because the closer tools get to judgment, the more the market demands control.
If you strip away the science-fiction tone, the Singularity is less about machines becoming alien and more about a regime change in how progress propagates. It’s what happens when intelligence becomes a scalable infrastructure layer, built, distributed, and improved like software. Once the outputs of intelligence become inputs into more intelligence at industrial scale, feedback loops compress and improvements begin to compound.
That creates a mismatch. Institutions, rules, and job categories built for slower cycles start to strain. You can already see it in the safest, dullest corners of the economy: compliance, audit, insurance, legal operations, procurement, clinical administration. The pace there has been slow not because people are lazy, but because cognition and verification were scarce and expensive. As cognition gets cheaper, bottlenecks move to distribution, integration, data rights, regulatory approval, and, most of all, trust.
And the “trust” point is not a footnote. It’s one of the durable implications. As soon as models sit inside high-stakes decisions, the demand for observability, evaluation, provenance, and model-risk discipline stops being niche. It becomes part of the operating fabric. In the same way the internet created a durable security category, this shift creates a durable trust category: tools and rails that make cognition usable in regulated, high-liability environments.
We’re noticing this now because a long philosophical argument got turned into an engineering program, and that program finally hit scale. Turing’s move in 1950 was pragmatic: stop arguing about what intelligence “really is” and focus on observable performance. If you can’t define it cleanly, define a test that stands in for it. Intelligence becomes operational, something you build toward rather than debate endlessly.
Modern machine learning supplies the accelerant: clear targets, massive scale, and self-reinforcing learning loops. Each cycle produces artifacts (data, tools, scaffolding) that feed the next. At some point, scale stops looking incremental and starts producing qualitative shifts. That’s why the “sudden breakthrough” story is misleading. What looks abrupt is often long gestation crossing a threshold as constraints loosen.
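To make the compounding point concrete, here is a toy sketch in Python. The threshold and growth rate are invented purely for illustration; nothing here models a real system. The point is only the shape of the curve: a capability that improves by a small, steady factor each cycle looks flat for a long time, then appears to cross a threshold “suddenly.”

```python
# Toy illustration only: no real system is modeled here. A capability that
# compounds by a modest factor per cycle looks uneventful for dozens of
# cycles, then crosses whatever threshold we care about all at once.

THRESHOLD = 100.0        # arbitrary "good enough to matter" level (assumption)
GROWTH_PER_CYCLE = 1.10  # 10% compounding improvement per cycle (assumption)

capability = 1.0
for cycle in range(1, 101):
    capability *= GROWTH_PER_CYCLE
    if cycle % 10 == 0:
        print(f"cycle {cycle:3d}: capability = {capability:8.1f}")
    if capability >= THRESHOLD:
        print(f"threshold crossed at cycle {cycle}: steady compounding, not a sudden leap")
        break
```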
There is also a deeper fault line in what we’re building. Some systems are trained primarily to imitate human discourse and become extraordinarily fluent. Others are trained to act under uncertainty, inside feedback loops where behavior has consequences and surprise forces updating. Humans learn through both: culture is accumulated imitation layered on top of trial, error, and consequence. The open question is how far imitation alone can go, and how essential grounded feedback will be for robust intelligence. Either way, the market lesson is the same. Defensibility shifts toward privileged feedback: workflows where actions create data, outcomes are measurable, and the system improves as it’s used. The moat is not “having AI.” The moat is having the right to learn.
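The distinction between imitation and grounded feedback can be made concrete with a deliberately minimal sketch. Nothing below corresponds to any real model or library; the names and numbers are invented for illustration. The only point is where the learning signal comes from: a human-provided answer in the first case, a consequence observed in the world in the second.

```python
import random

def imitation_update(params, example):
    """Learn by matching a human-provided target: the signal is the demonstration."""
    prediction = params["w"] * example["x"]
    error = example["human_answer"] - prediction   # distance to the demo
    params["w"] += 0.1 * error * example["x"]      # move toward what the human said
    return params

def feedback_update(params, environment):
    """Learn by acting under uncertainty: the signal is the outcome of the action."""
    trial_w = params["w"] + random.gauss(0, 0.1)   # try a small variation
    action = trial_w * environment["x"]
    reward = environment["payoff"](action)         # the world scores the action
    if reward > params.get("best_reward", float("-inf")):
        params["w"], params["best_reward"] = trial_w, reward  # keep what worked
    return params

# Both learners end up near w = 2, but only the first needs a human to say so.
imitator = {"w": 0.0}
for _ in range(50):
    imitator = imitation_update(imitator, {"x": 1.0, "human_answer": 2.0})

actor = {"w": 0.0}
world = {"x": 1.0, "payoff": lambda a: -(a - 2.0) ** 2}  # rewards actions near 2
for _ in range(300):
    actor = feedback_update(actor, world)

print(f"imitator w = {imitator['w']:.2f}, actor w = {actor['w']:.2f}")
```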
And this brings the story full circle. Intelligence isn’t arriving from outside. It’s being reimplemented. Nature built cognition under severe constraints: slow neurons, fragile biology, systems optimized for survival rather than efficiency. Now many of those constraints are loosening. We’re recreating cognitive functions digitally, accelerating them, and coupling them back into human systems through tools and interfaces.
That’s why the value won’t accrue only to the most visible models. It will accrue to the companies that refactor work itself, compressing cycle times, reducing handoffs, reshaping roles, and changing unit economics by reorganizing how decisions are made and verified. That is what “AI-native” really means: not branding, but a new operating model.
So the Singularity, as people actually experience it, isn’t just smarter software. It’s the moment the technical scaffolding becomes powerful and intimate enough that our inherited picture of autonomy no longer holds by default. The crisis is perceptual before it’s existential, and that crisis is also the opening. Intelligence isn’t being replaced. It’s being reworked. And once intelligence becomes reworkable, the rules of change begin to shift too.