Res Politica Posthumana - Singularity & Scenarios (1/5)
January 2025
A couple of weeks ago, Elon Musk tossed out a line on X that lit up the usual circuit: “We have entered the Singularity.” Then he doubled down: “2026 is the year of the Singularity.” Was it a plug for Grok, or just Musk doing his future-gazing routine? Probably both.
But the point isn’t whether he’s right on the timestamp. Musk is useful here less as a prophet and more as a sensor. He’s picking up on something real: the pace has changed.
If you’ve tried to build anything recently, you’ve felt it. Work that used to take a big team and months of coordination can now be done by a small group in a few intense weeks. You blink, and a product ships. You blink again, and it’s everywhere, or at least it could be. Distribution is no longer the hard part. The only limit left is what the tools let you do.
That’s why “Singularity” keeps coming back. Not as sci-fi hype, but as a practical label for a threshold we’re approaching. The Singularity isn’t a date you circle on your calendar. It’s a threshold you cross: the moment intelligence stops being a scarce, embodied bottleneck and starts behaving like a scalable resource.
Once thinking becomes software, the dynamics change immediately. One mind becomes many. A breakthrough doesn’t stay local; it propagates. Capabilities don’t just improve, they replicate. And the compounding starts to feel less like “progress” and more like a system entering a different regime.
Seen this way, AI isn’t just another tech cycle. It sits in the small class of general-purpose technologies that reorganize everything above them by changing the foundations underneath. Electricity didn’t just power devices; it reshaped production and daily life. The microprocessor didn’t just speed up calculations; it turned information into a central economic input. AI belongs in that category because it changes something even deeper: how cognition itself is produced, scaled, and distributed.
This idea has been in the air for decades. John von Neumann hinted that progress might be accelerating toward a point where “the affairs of men” would be transformed so completely that old categories couldn’t hold. Vernor Vinge sharpened that into a mechanism: once we build minds smarter than our own, the rate of change stops being something we steer and becomes something that happens to us. Kurzweil later wrapped it in an optimistic story: not a single explosion, but a gradual crossing.
Strip away the mythology and you get a simple operational claim: technology is starting to change faster than our institutions can adapt.
This is where meta-acceleration matters. It’s not just that things are speeding up. It’s that the pace at which they speed up is increasing.
When that happens, institutions built for slower feedback loops break in predictable ways. The legal system starts to feel like batch processing in a world that’s gone real time. Education keeps preparing people for jobs whose half-life is shrinking. Credentials lag behind what people can actually do. Monetary and regulatory systems get asked to manage dynamics they were never designed to handle.
Labor markets, which are among the slowest systems we have, absorb the shock late and unevenly. The surface can look stable right up until it doesn’t.
Economically, this doesn’t produce an instant, clean utopia of abundance. It looks more like a collision. On one side you get deflation in anything that can be digitized: software, media, routine knowledge work, large parts of “analysis,” even pieces of design and strategy. As automation and cheap compute push marginal costs down, prices get pressured toward zero.
On the other side, the bottlenecks tighten in places that don’t scale frictionlessly: compute, chips, energy, grid capacity, physical supply chains, data rights, verification, and even legitimacy itself. The more cognition scales, the more these constraints become the real pressure points.
As this shift unfolds, wages and prices start to weaken as the primary allocation mechanism across more and more domains. Allocation drifts toward access: who has compute budgets, who has energy, who has rights to data, who controls identity and verification, who sets protocols, who has a privileged position in the networks that decide what counts.
From a distance, the outcome can resemble “socialism beyond capitalism.” But it doesn’t arrive through ideology or central planning. It emerges because the old coordination mechanisms stop working the way they used to when scalable cognition collides with physical limits and institutional lag.
And this is why ideology starts to explain less. Not because people stop believing things, but because belief isn’t at the steering wheel. The steering wheel is the techno-economic stack: physical constraints, informational constraints, fast feedback loops, and scaling dynamics that don’t particularly care what anyone prefers. Politics becomes the fight to regain agency inside a system that is accelerating faster than the mechanisms built to govern it. Governance becomes less about persuasion and more about protocols.
That also explains why the “sudden breakthrough” story doesn’t really hold. What we’re watching is closer to an eighty-year overnight success. The foundations were laid in the 1930s and 1940s as computation theory matured and early models of neural activity were formalized. In 1943, McCulloch and Pitts published what’s often treated as the first neural network paper. After that came waves: bursts of optimism, hard ceilings, AI winters, and long stretches where it looked like an academic side quest.
The inflection around 2022 wasn’t a magic new idea. It was convergence. Enough theory, enough data, enough compute, finally at scale. ChatGPT didn’t invent intelligence. It made it operational.
That same perspective helps explain why AI is spreading faster than the internet did. The internet had to be physically built: fiber in the ground, towers on hills, hardware in homes, years of coordination. AI rides on an infrastructure that largely already exists: data centers, broadband, smartphones, cloud platforms, software distribution. It spreads at software speed.
But anything that spreads frictionlessly in software still collides with physics. You can copy a model in seconds. You can’t instantly expand a power grid.
Some skepticism about AI also comes from a romantic picture of what “real intelligence” is supposed to look like. People say language models don’t truly understand, but they’re often comparing them to an idealized human, not to how cognition actually shows up in most of the economy.
In practice, a lot of work runs on patterns: playbooks, inherited categories, narrow problem spaces, bounded creativity. The people who consistently generate genuinely new concepts, transfer insight across distant domains, or synthesize at a very high level are rare even in elite circles. Against that baseline, a system that can match or beat the median human’s functional output reliably, cheaply, and at scale is already a civilization-scale resource. It doesn’t need to be philosophically “superhuman.” It just needs to outperform the middle of the distribution, where most economic activity lives.
The same logic applies to creativity. Most human creativity is recombination under constraint. Beethoven built on Mozart and Haydn. Van Gogh pushed forward traditions that were already in motion. Hip-hop makes the mechanism explicit: sample, flip, recombine. Foundational geniuses who redraw the whole board are rare.
So if AI captures even a small slice of high-end creative capacity, it won’t simply “help artists.” It will reshape culture, media, design, and a large chunk of knowledge work, because structured recombination at scale is exactly the kind of thing machines are getting better at.
All of this sets up the next shift, and it’s going to feel less like a fun chatbot novelty and more like a change in the economic operating system. We’re moving from “talking to AI” to AI running in the background.
The leap isn’t just better answers. It’s sustained, unattended productivity. You hand over an objective, the system goes to work, and it comes back with drafts, options, checks, citations, revisions, sometimes in parallel streams. Work that used to take hours in a linear sequence gets compressed into minutes because multiple agents can run at once. Humans get pulled upward from doing the work to supervising the work.
The mental model is simple: we shift from prompting to management. Less “what’s the perfect question?” and more “here’s the objective, here are the constraints, here’s what good looks like.” You’re sequencing tasks, choosing tradeoffs, auditing outputs, and verifying the process. Once that becomes normal, AI stops feeling like a tool you occasionally use. It starts feeling like ambient capability.
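The management pattern described above can be sketched in a few lines. This is purely illustrative: the `run_agent` and `audit` functions are invented stand-ins for whatever agent framework or review process you actually use; the point is the shape of the loop, objective in, parallel drafts out, human judgment at the end.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real agent call; each invocation
# works one sub-task and returns a draft.
def run_agent(task: str) -> str:
    return f"draft for: {task}"

# The supervisory step: does the output meet the brief?
# In practice this is where the human (or a checker model) sits.
def audit(draft: str) -> bool:
    return draft.startswith("draft for:")

objective = "market analysis"
constraints = ["cite sources", "under 2000 words"]

# Management, not prompting: decompose the objective into tasks...
tasks = [f"{objective} :: {c}" for c in constraints]

# ...fan the tasks out to agents running in parallel streams...
with ThreadPoolExecutor() as pool:
    drafts = list(pool.map(run_agent, tasks))

# ...then audit and accept, rather than doing the work yourself.
accepted = [d for d in drafts if audit(d)]
```

The human contribution here is entirely at the edges: defining the objective and constraints up front, and auditing at the end. Everything in the middle runs unattended, which is exactly the compression the paragraph above describes.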
And that’s where the political economy really turns. In the industrial era, the big fights were over land, capital, labor, and credit. In the coming era, the fights migrate toward access and governance: who gets compute, who gets energy, who controls data rights, who manages identity, who provides verification, who writes the protocols, and who decides what counts as legitimate as old gatekeepers wobble.
The institutions that mediate trust, from courts and regulators to credentialing bodies and money itself, are being asked to operate at a tempo they weren’t built for. Some will fail slowly through legitimacy decay. Others will fail quickly through crisis and improvisation. Either way, institutional change starts to look less like a tidy program of reform and more like adaptation under pressure.
So yes, if Musk wants to pin the Singularity to a year, he can. That’s mood-setting, marketing, personal mythology. But the deeper point isn’t calendar-based. It’s structural.
We’ve crossed into a world where intelligence is increasingly digital, replicable, and fast, and institutions built for the old tempo are starting to shear.
In this telling, the Singularity isn’t a thunderclap. It’s a threshold. And the real question that follows isn’t whether the machine is “really intelligent.” It’s whether our systems of law, education, money, labor, and legitimacy can survive scalable cognition without being rebuilt.
Res Politica Posthumana isn’t a destination. It’s a name for the negotiation we’re now in: a politics of access, allocation, and legitimacy in an age of scalable cognition, when cognition becomes infrastructure, when allocation drifts from wages and prices toward access and protocols, and when the map has to be redrawn not because the future is unknowable, but because the terrain is moving faster than we can keep it updated.