AI and the New Economic Order (2/3)

June 2025 

What If the AI Revolution Stalls?

There is, however, a contrarian possibility. What if the AI revolution underdelivers? While current advances are impressive, they remain fragile. Large language models hallucinate. Agentic systems struggle with multi-step reasoning. Contextual reliability is inconsistent. Scaling remains capital-intensive, and monetization pathways unclear.

In this view, AI resembles past overhyped technologies: glossy in promise, narrow in actual utility. The late-stage enthusiasm for crypto and the overinvestment in fibre-optic networks before the dotcom crash offer cautionary tales. If capital continues to flow faster than adoption or regulation allows, the next financial correction may not come from banks or housing but from overbuilt AI infrastructure: data centres, GPUs, and cloud capacity.

Social resistance is also likely to grow. Just as nuclear energy faced a political backlash in the 1970s, AI could provoke populist uprisings, digital boycotts, or coordinated regulatory pushback. Already, major jurisdictions, from the EU’s AI Act to China’s content controls, are beginning to assert sovereignty over algorithmic deployment. The global AI landscape may fracture, not unify, with divergent models for ethics, control, and access. Even if AI adoption stalls, its deployment and consequences may only be delayed, not avoided. Once the technology exists, it cannot be uninvented. The underlying shifts may be postponed, but not permanently reversed.

What if AI, despite its promise, falls prey to Solow’s paradox—visible everywhere except in the productivity statistics? 

In 1987, Robert Solow, a Nobel laureate in economics, famously quipped: “You can see the computer age everywhere but in the productivity statistics.” It was a paradox wrapped in irony: microchips were proliferating, yet GDP was barely stirring. The remark, now part of economic folklore, captured a troubling disconnect between visible technological progress and invisible economic payoff.

Four decades on, the paradox has returned with a vengeance. Artificial intelligence generates code, diagnoses disease, and composes essays. Cloud platforms and digital tools have become the nervous system of modern enterprise. And yet, productivity growth across advanced economies remains, at best, anaemic. The digital age is in full bloom—but its bounty remains elusive.

The original Solow Paradox haunted the 1970s and 1980s, a time when computers became ubiquitous but output per worker stalled. It wasn’t until the mid-1990s that the fog began to lift. Productivity surged in a few sectors—retail, wholesale, semiconductors, finance—thanks not to silicon alone, but to deregulation, competition, and bold organisational change.

Walmart’s embrace of electronic data interchange, barcoding in distribution, and online trading platforms reshaped industries. But these gains were the product of transformation, not mere digitisation. It was not IT itself that delivered returns; it was the rethinking of business models and supply chains.

Today, too many firms are digitising defensively. They deploy customer-relationship tools and compliance dashboards not to innovate, but to avoid falling behind. CRM systems abound, but most employees toggle between apps rather than transform workflows. In banking and retail, kiosks and apps serve consumers more efficiently but often do little for internal productivity. The result: technology adoption without organisational evolution.

Worse still, much of the digital economy’s value (convenience, time saved, reduced friction) slips through the cracks of conventional measurement. GDP tallies widgets and hours, not personalised services or algorithmic accuracy. In a world increasingly run on intangibles, official statistics often miss the point.

Artificial intelligence, the internet of things, and cloud computing dominate boardroom slides and venture pitch decks. Yet the productivity dividend remains elusive. In America, labour productivity growth has hovered around 0.5% in recent years—a far cry from the 2.5% surge during the late 1990s. Europe fares no better: between 2019 and 2023, output per worker rose by just 0.6%, and even that modest figure is flattered by a rebound in hours worked. Despite grand ambitions, only around 40% of large European firms report using AI tools, while penetration among small businesses remains under 12%. The infrastructure is patchy, the skills base thin, and the digital plumbing underdeveloped. At current trends, only 17% of European firms are expected to adopt AI by 2030. The result is a persistent productivity gap: the average European worker produces just 76% of what their American counterpart does.
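The stakes of these growth differentials compound over time. A back-of-the-envelope calculation, using the illustrative rates cited above (roughly 2.5% a year in the late-1990s surge versus roughly 0.5% recently), shows how quickly the paths diverge:

```python
# Illustrative compounding of labour-productivity growth rates.
# The two rates are taken from the figures cited in the text;
# the ten-year horizon is an arbitrary illustration.
def compound(rate: float, years: int) -> float:
    """Cumulative output per worker after `years`, starting from 1.0."""
    return (1 + rate) ** years

late_1990s_pace = compound(0.025, 10)  # ~2.5% a year: ≈1.28x after a decade
recent_pace = compound(0.005, 10)      # ~0.5% a year: ≈1.05x after a decade

print(f"Ten years at 2.5%: {late_1990s_pace:.3f}x")
print(f"Ten years at 0.5%: {recent_pace:.3f}x")
```

A decade at the 1990s pace lifts output per worker by more than a quarter; a decade at the recent pace lifts it by barely 5%. Small annual differences, left to compound, become large structural gaps.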

Japan, meanwhile, has long been mired in economic stagnation and demographic decline. With its working-age population shrinking below 60% of the total, its productivity has barely budged for decades. Initial estimates suggest AI could boost Japanese labour productivity by 0.5–0.6% over the medium term, with total factor productivity rising just 0.5% and GDP gaining less than 1% over the coming decade.

Autonomous driving makes headlines, but mass deployment lies years away. Online retail has grown, yet remains a fraction of overall sales. In energy, AI could lift margins by 30%, but deployment is patchy and underfunded. Even renewables, more productive by design, are still a minority share of global output.

Optimists, such as consultants at McKinsey, argue that with sustained effort, annual productivity growth could quadruple to 2% over the next decade. But there’s a catch: nearly two-thirds of that upside depends on more than technology. It requires new workflows, rewired supply chains, and reimagined incentive structures. Bolt-on AI will not suffice.

Technology does not self-actualise. It is inert without transformation. The gains of the 1990s flowed from strategic audacity as much as computational horsepower. The same principle applies today. It is not the algorithm, but the architecture of the organisation, that unlocks value.

To the longstanding explanations (secular stagnation, demographic drag, and innovation fatigue), a new hypothesis has been added. Economists Erik Brynjolfsson and Seth Benzell argue that in a world awash in code and compute, the bottleneck is no longer capital, but capability. They call it the “Genius Bottleneck.”

The idea is simple but unsettling: only a small cadre of firms and individuals possess the expertise to deploy AI at scale. This “genius,” defined not romantically but operationally, becomes the third factor of production, alongside capital and labour.

First, individual talent: a thin layer of machine-learning engineers, data scientists, and systems architects commands eye-watering salaries and increasingly dictates where innovation occurs.

Second, organisational capital: tech giants such as Amazon, Alphabet, and Microsoft have embedded AI into their core. They benefit from feedback loops, proprietary data, and flywheel effects. Others lack the digital plumbing to compete.

Third, intangible real estate: in the digital economy, the scarce assets are no longer land or labour, but proprietary datasets, standards, and platforms. These are hard to copy and even harder to displace.
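One way to read the argument is to treat “genius” as a complementary input in an otherwise standard production function: abundant capital and compute yield little without the capability to deploy them. The sketch below is a toy Cobb-Douglas illustration of that logic; the functional form and exponents are assumptions for exposition, not Brynjolfsson and Benzell’s actual model.

```python
# Toy illustration of the "Genius Bottleneck": output as a Cobb-Douglas
# function of capital K, labour L, and deployment capability G.
# Exponents are illustrative assumptions, not empirical estimates.
def output(K: float, L: float, G: float,
           alpha: float = 0.3, beta: float = 0.4, gamma: float = 0.3) -> float:
    return (K ** alpha) * (L ** beta) * (G ** gamma)

base = output(K=1, L=1, G=1)
capital_only = output(K=10, L=1, G=1)   # tenfold compute, same capability
with_genius = output(K=10, L=1, G=10)   # capability scales with capital

print(capital_only / base)  # ≈2x: capital alone hits diminishing returns
print(with_genius / base)   # ≈4x: gains when capability keeps pace
```

The point of the toy model is the complementarity: multiplying capital tenfold while capability stays fixed roughly doubles output, whereas scaling both roughly quadruples it. Firms that cannot grow the third factor capture only a fraction of the technology’s potential.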

The result is a growing divide. Those who control these assets see productivity gains. The rest remain stuck in a cycle of incrementalism, digitally busy but economically stagnant.

Sceptics contend that AI’s headline feats (composing music, beating humans at games, generating code) mask more modest realities. Much of its application to date has focused on low-productivity domains: ad targeting, chatbots, and trading algorithms. These may optimise margins, but they rarely expand the pie.

Others point to mismeasurement. AI improves product quality, reduces errors, and personalises services. But GDP, rooted in physical outputs, struggles to register these gains. There is value being created; it simply escapes the ledger.

Still others see a redistribution problem. AI may shift profits within sectors, say, from laggards to leaders, but without lifting total output. The economy gets more unequal, not more productive.

Yet the most widely accepted theory remains that of the lag. As with electricity or the motor, general-purpose technologies require time. Firms must integrate them into workflows, retrain staff, and develop complementary innovations. Until then, costs rise before benefits appear, a productivity J-curve.
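The J-curve logic can be made concrete with a toy model: adjustment costs are front-loaded and fade, while benefits compound slowly, so measured net gains start negative and turn positive only with time. Every parameter below is an illustrative assumption.

```python
import math

# Toy productivity J-curve: front-loaded adjustment costs that decay,
# benefits that build gradually. All parameters are illustrative assumptions.
def net_gain(t: float, cost0: float = 1.0, decay: float = 0.5,
             benefit_max: float = 2.0, ramp: float = 0.25) -> float:
    cost = cost0 * math.exp(-decay * t)                 # retraining, integration
    benefit = benefit_max * (1 - math.exp(-ramp * t))   # gains compound slowly
    return benefit - cost

# Net gains are negative at adoption and turn positive only later.
for year in range(0, 11, 2):
    print(f"year {year:2d}: net {net_gain(year):+.2f}")
```

The shape, not the numbers, is the message: in the early years the measured effect of adoption is negative, which is exactly the window current productivity statistics are capturing.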

On all counts, AI fits the mould of a general-purpose technology. It is pervasive, improving, and enabling. It affects logistics, law, design, and diagnostics. But, like electrification in the 1920s or computing in the 1980s, its payoff is deferred.

Investment accounting doesn’t help. Expenditure on model training or reskilling is often expensed rather than capitalised, depressing short-term earnings and obscuring long-term gains. Meanwhile, many of AI’s outputs (better decisions, faster iterations, more fluid interfaces) reside in the realm of the intangible.

Artificial intelligence is not a mirage. Its promise is real. But its rewards, like those of every foundational technology before it, will take time. The Solow Paradox, updated for the digital age, is not about the failure of machines; it is about the slowness of human systems to adapt.

The computer age, once invisible in productivity data, eventually delivered. So too will AI, but only if firms are willing to rewire how they work, not just what they use. The lesson is one of temporal humility: technology may evolve in months, but institutions evolve in decades. Bridging the two remains the defining economic challenge of our time.

One scenario that may emerge in the short run is a troubling one: artificial intelligence displacing jobs without delivering meaningful productivity gains. Once the preserve of science fiction and academic conjecture, this outcome is increasingly plausible—and already beginning to materialise. In its current incarnation, AI functions less as a revolutionary force and more as a cost-cutting instrument. Rather than reimagining business models or unlocking new sources of value, many firms are using it to automate routine tasks—customer service, legal review, content generation. The question is not just how much transformation is taking place, but how much is merely trimming: a quiet wave of workforce reduction masquerading as innovation.

The result is a troubling asymmetry. Employment is shrinking in some sectors, but the promised productivity surge has yet to materialise. This, in itself, is not unprecedented. General-purpose technologies, from electrification to computing, have always required long implementation periods before yielding measurable gains. But unlike previous waves of innovation, AI’s displacement effects are immediate. The disruption is fast. The dividends are slow.

To make matters worse, the benefits AI does deliver are often intangible, and therefore invisible to traditional metrics. Better search results, smoother interfaces, or personalised recommendations are scarcely captured in GDP. Firms may become more efficient at the margins, but the productivity statistics remain stubbornly flat. The risk is a structural mismatch between what technology delivers and what the economy registers.

There is also the question of who is equipped to use AI well. The so-called “genius bottleneck” (a scarcity of talent, infrastructure, and managerial vision) means that only a narrow cohort of firms can effectively deploy the technology. These organisations enjoy the necessary data quality, systems integration, and human capital. Most do not. Hence the prevalence of fragmented efforts: AI tools bolted onto legacy systems, deployed without strategy, and yielding little real improvement.

Labour-market data suggest the damage is already underway. Mid-skill, routine jobs, the traditional backbone of middle-class employment, are thinning out. Retraining opportunities remain fragmented. Institutional responses have been slow and uneven. Unless policymakers act swiftly, AI could exacerbate inequality and embed underemployment, especially in economies already struggling with stagnation.

Yet the story need not end in divergence. Like its predecessors, AI is a general-purpose technology. Once fully integrated into firm-level processes, once workflows are redesigned, incentives realigned, and new models of work emerge, the payoff could be substantial. The 1990s IT boom offers a precedent: productivity did eventually rise, albeit after years of trial and error.

New kinds of jobs will emerge too. Roles in AI safety, data stewardship, model interpretation, and prompt engineering already suggest that technology is as likely to reallocate work as to eliminate it. As with past industrial transitions, the issue is less whether work persists, and more where, and for whom, it does.

AI’s most significant impacts may lie further downstream. It may not be the automation of existing tasks that drives the next wave of productivity, but entirely new capabilities: faster drug discovery, dynamic supply chains, decentralised energy optimisation. These second-order effects, difficult to model, harder still to measure, may be where AI’s real promise lies.

In the short term, however, the risk is plain: a lopsided transition in which jobs are lost faster than output rises. The path to productivity is not automatic. It is paved with institutional adaptation, organisational change, and political will. The challenge is to close the gap between digital capacity and economic performance. The paradox is not that AI isn’t working, but that we aren’t ready.