The just-published Anthropic paper by Massenkoff and McCrory is a genuinely careful, well-intentioned and much-needed piece of work. The authors build a thoughtful exposure measure, draw on real usage data, apply a clean difference-in-differences framework and find limited employment effects to date.
Every step is methodologically defensible. The conclusion they reach is probably entirely accurate for the specific period studied. And yet, it is almost certainly misleading as a guide to the future.
This is not the paper’s fault. The problem is not its precision. It is its framing and, as a result, how it will be read. While the paper’s intention is undoubtedly not to lull anyone to sleep, many will eagerly cite it, and the commentaries around it, to paint a picture of an unchanged world.
They will use it to debunk AI’s impact, dismiss the hype, or advocate for avoiding substantial, long-term policy decisions. Papers that find “limited effects so far” get used to argue that the disruption is manageable, that there is time to adjust, and that present plans are adequate. They become the intellectual permission slip for doing nothing.
Hope, as the adage goes, should not be a strategy. This most necessary of human sentiments is, unfortunately, a poor guide beyond a point when navigating structural upheaval. At the same time, we must be incredibly mindful not to push fear. Used as rhetoric, fear can cause far more damage, paralyzing stakeholders and generating reactive rather than visionary policy.
We are not trying to instill fear; we are trying to rigorously examine how the rising conversation around AI’s impact will unfold. This fast-moving field has decisively breached the boundaries of mere technological development.
It is now, increasingly and irreversibly, about how these forces will dictate politics, shape policy and drive macroeconomic realities that will inevitably ripple through our portfolios and daily lives. To assess the true impact of the growing collection of economic papers and political opinions, let us take a different starting point in this article.
The five certainties of today
Before we attempt to model the future, we must ground ourselves in the absolute realities of the present. Here are five certainties, almost bordering on being truisms, that we can assume with 100% probability:
- Time will not respect analysts’ horizons. One day, we will wake up and it will be 2037; another day, it will be 2047. AI’s secular impact will not politely stop at the arbitrary three- or five-year time horizons selected by economic forecasters.
- Political tectonic plates will shift. Today’s politicians or political regimes will not be in power everywhere tomorrow. Many of those with radically different views on technology progression, labor protection, and capital taxation will inevitably rise to power at some point, changing the rules of the game mid-flight.
- Tsunami and aftershock risks rise following an earthquake. The secondary and tertiary effects of AI’s primary impact may prove far more consequential than the initial displacement. Capital concentration, geopolitical rivalry, wage compression across sectors not directly automated, and the political economies of disrupted communities are already forming. They will not wait for academic consensus, not that any will emerge there any more than it does in political circles.
- Machine cognition will escape its digital crib. Today’s exposure estimates are largely confined to cognitive, digital work. But a general-purpose cognitive force does not stay where it starts. As AI learns to interact with real-world machinery, physical domains assumed to be safe (engineering, skilled trades, construction, and logistics) will face their own version of the same reckoning. Many sectors not yet on the disruption list are the next phase, not the exemptions.
- The global impact will be unequal and rivalrous. Local thinking cannot contain global forces. Most observers, and nearly all policymakers, are thinking locally. But the rising, unequal global impact of AI will not only create entirely new geopolitical dynamics but also lead to differing, adversarial controls, where rival countries implement AI policies specifically designed to undermine others. In short, there will not be any meaningful, global collaborative efforts on AI.
The measurement trap and the irreversibility of efficiency
With these certainties established, we must address the most dangerous analytical blind spot of our time: the reliance on simple patterns and statistical data points to predict the future of a general-purpose cognitive force.
This is not another piece debunking those who remain stuck with historical studies to decide where AI capabilities may peak, or who base their conclusions on aphorisms like “the impact will be slower than the optimists predict, but faster than the pessimists fear.” We are moving on to discuss the decision-making of those with their eyes wide open.
Our collective faith in linear explanations and neat S-curves represents a dangerous comfort. These models do not represent reality as much as they represent the limits of the analysis we were capable of performing until today.
If such neat models have historically done so little to help real-life practitioners navigate financial markets, whatever their addictive readability for those who like to keep things simple, it is entirely impractical to assume they could model far more complex real-world situations, let alone help assess the impact of a force so difficult to define.
We are in what might reasonably be called the Super-Moore era, a term we coined in 2023. Certain AI parameters and capabilities have been doubling every few months, not years. Modest productivity changes today can become dramatic shifts very quickly as new models arrive, hardware improves, and adoption cascades through competitive networks.
AI does not sit still while we measure its current effects. New AI systems contribute to research, chip design, and their own further integration. This creates feedback loops that multiply progress in ways that additive measurement frameworks fundamentally cannot capture. We measure AI’s impact additively. AI’s capability grows multiplicatively. That gap matters more with every passing quarter.
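To make that gap concrete, here is a minimal Python sketch contrasting the two views. The per-quarter increment and the doubling period are illustrative assumptions, not measurements drawn from any paper.

```python
# Minimal sketch: an additive read of AI's impact versus multiplicative
# capability growth. All numbers are illustrative assumptions.

QUARTERS = 12                # a three-year observation window
ADDITIVE_STEP = 1.0          # "impact units" added per quarter in a linear read
DOUBLING_QUARTERS = 2        # assumed capability doubling every two quarters

additive = [1.0 + ADDITIVE_STEP * q for q in range(QUARTERS + 1)]
compounding = [2 ** (q / DOUBLING_QUARTERS) for q in range(QUARTERS + 1)]

for q in (4, 8, 12):
    print(f"quarter {q:2d}: additive view {additive[q]:5.1f}  "
          f"compounding view {compounding[q]:6.1f}")
```

Under these toy numbers the two views are barely distinguishable after a year and differ roughly fivefold after three. That is the sense in which the gap matters more with every passing quarter.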
By basing our conclusions solely on what is still not done by machines today, we might be underplaying not only the impact of what is already done but, more crucially, the impact of what could get done in double time, as we recently witnessed with coding capabilities transforming in one short year.
We tend to use current “real-world usage” as a proxy for “capability.” This is a profound analytical error. Current usage primarily represents human adoption, institutional friction, and natural hesitation; it does not represent technological potential. Human hesitation is merely a temporary social friction.
It is a barrier that evaporates the exact moment a competitor uses AI to ruthlessly underprice the market. If there is a genuine capability overhang, which we clearly observe both in the research we read and in our own experience with frontier model capabilities, it will dissipate suddenly due to specific, unpredictable competitive decisions, even without the need for compounding secondary innovations.
Policymakers generally require empirical evidence of harm to justify interventions, and economists require empirical evidence of displacement to acknowledge a trend. But accepting current, localized findings does not mean we must accept relatively static frameworks as a reliable predictor in a world of continuous, massive, and unexpected change. Inertia today does not prevent tipping points tomorrow.
Pipes, water and the illusion of gradualism
To understand why the data looks muted today, we must look at how businesses actually integrate technology. Enterprise-wide restructuring is not a software update; it takes years of agonizing work building data pipelines, structuring proprietary knowledge and finalizing API plumbing. Current economic metrics measure the construction of the pipes, not the eventual flow of the water.
Because we are in the plumbing phase, observers assume the transition will happen gradually, user by user. But once this infrastructure is built, whole departments will be automated overnight. A reservoir that is filling at an accelerating rate does not become a flood gradually. It floods when it overflows.
Furthermore, as AI workflows develop exponentially, they will begin to make the very “micro-decisions” that reports claim only humans can make. AI will optimize for its own further integration.
New “AI workers” can already aid in R&D or chip design, creating autogenous feedback loops that multiply progress. AI is not a static technology that we deploy; it is a dynamic technology that improves itself.
Plus, we too often adopt an all-or-nothing approach. We witness this regularly these days: to the optimists, the pessimists sound as if every service job will disappear; to the pessimists, the optimists sound as if nothing will change at all. In real life, the secondary effects of a sector sliding onto a secular low- or ex-growth trajectory are meaningful enough to be debated without ascribing illogical positions to people in the opposite camp.
When forecasting the nuances of the ongoing upheavals, most reports focus obsessively on “displacement”, or the complete loss of jobs. In doing so, they entirely ignore the silent collapse of the wage premium.
By focusing excessively on the comforting narrative that a large portion of jobs will remain only “slightly impacted,” reports underplay the profound economic and psychological factors impacting those who retain their work but lose their leverage.
Moreover, domestic employment models are entirely blind to rapid global substitution. We are underestimating the cascades in adoption created through network effects and competitive pressures because we base our spread analysis on the habits of early adopters.
One is tempted to point to the spread of mobile technology, but doing so would not only go against our own refusal to use historical episodes as a guide for this completely different world; it would also underestimate the speed and the impact.
The recursive and destabilizing powers of something that works on itself are known to anyone who has built spreadsheet formulas that feed each other. With AI, we are getting the first real-life example.
Phase transitions and the logic of work
To crystallize how this unfolds in reality, consider the micro-aggregation of efficiency through a practical example: a financial services firm where AI can currently handle 40% of an analyst’s tasks.
At 40% coverage, the analyst is still fully employed, highly valuable, and the economic data shows “no displacement.” AI is viewed as a helpful copilot. At 60% coverage, the analyst is still employed, though somewhat underutilized, and increasingly managed as a reviewer rather than a producer. The data still shows no job losses.
But at 75%, the underlying economics of employing that analyst change qualitatively. The remaining 25% of tasks may simply not justify a full salary, a benefits structure, office space, and management overhead. It is at this moment that the firm restructures. It does not do so gradually at 40% or 60%. It happens all at once. Phase transitions, as in materials science, occur without warning and happen suddenly.
This brings us to the O-ring models of economic theory. AI does not need to do 100% of a job to destroy its structural value; it only needs to successfully execute the “bottleneck” task.
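A stylized calculation, sketched below in Python, makes that discontinuity visible. The role value, the cost of employment and the resulting threshold are entirely assumed figures; the point is the shape of the outcome, not the numbers.

```python
# A toy version of the analyst example above. Every figure is an assumption
# chosen for illustration only.

FULL_ROLE_VALUE = 300_000     # assumed annual value of the analyst's complete task set
COST_OF_EMPLOYMENT = 90_000   # assumed salary, benefits, office space and overhead

def firm_decision(ai_coverage: float) -> str:
    """Outcome at a given share of the role's tasks handled by AI."""
    residual_value = FULL_ROLE_VALUE * (1.0 - ai_coverage)
    # The role survives untouched until the value of the remaining human tasks
    # no longer justifies the fixed cost of employing someone; then it goes at once.
    return "no displacement" if residual_value >= COST_OF_EMPLOYMENT else "restructure"

for coverage in (0.40, 0.60, 0.75):
    print(f"AI handles {coverage:.0%} of tasks -> {firm_decision(coverage)}")
```

The measured data shows nothing at 40% and nothing at 60%, and then the role disappears in one step. In O-ring terms, the same jump can arrive even earlier: the moment the AI reliably executes the bottleneck task, the residual value of the rest collapses regardless of the percentage covered.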
The fundamental error in our current forecasting is that we are calculating impact based on work as designed, not work as needed in the new world. The organizational logic of the modern enterprise was specifically designed around human cognitive capacity.
Humans are context-switching, single-threaded agents with highly limited working memory. Work was organized into discrete tasks, silos, and roles partly to accommodate these biological constraints.
AI agents do not share these constraints. A customer service AI agent does not “confer with customers to provide information” as one task, and then separately “document case outcomes” as another task, and then “escalate complex issues” as a third. It executes all of these simultaneously, seamlessly, and continuously across thousands of parallel interactions.
Therefore, the unit of displacement for agentic AI is not the task. It is the role. These structural hurdles are not permanent barriers; they are one-time frictions.
In general, we assume that corporate calculations will be based on neat Return on Investment (ROI) spreadsheets. The first-level analysis usually discusses the costs and productivity benefits of working with machines versus humans. More evolved analysis acknowledges the deployment benefits of machines, providing more revenue opportunities.
But the most real-life, action-generating calculations are highly unlikely to be spreadsheet-based, long-term IRRs built on assumptions that generate arguments rather than guidance. Ultimately, integration will be driven by raw survival instincts based on competitive dynamics.
As always, it will revert to Mr. Nash’s game theories more than expectations of any ROI: if your competitor deploys, you must deploy, or you perish.
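A toy two-firm payoff matrix, sketched below in Python, captures that logic. The payoffs are invented for illustration; only their ordering matters.

```python
# A stylized deployment game between two rival firms. The payoff numbers are
# assumptions; what matters is that deploying beats waiting against either rival move.

PAYOFFS = {  # (our move, rival's move) -> our payoff
    ("wait", "wait"): 3,      # both hold back, comfortable margins for both
    ("wait", "deploy"): 0,    # the rival underprices us and takes the market
    ("deploy", "wait"): 4,    # we take the market
    ("deploy", "deploy"): 1,  # margins compress, but we survive
}

def best_reply(rival_move: str) -> str:
    """Our payoff-maximizing move given what the rival does."""
    return max(("wait", "deploy"), key=lambda move: PAYOFFS[(move, rival_move)])

for rival in ("wait", "deploy"):
    print(f"if the rival chooses to {rival}, our best reply is to {best_reply(rival)}")
# Deploy dominates either way, so the equilibrium is deploy/deploy,
# whatever the long-term IRR spreadsheet says.
```

The restructuring decision, in other words, is made for the firm by its most aggressive competitor, not by its finance department.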
Five certainties for what comes next
We are not advocating against hope, nor are we suggesting the futility of all forecasts. But we must brace ourselves for unpredictability equal to, or worse than, that of the regularly occurring giant events we have already lived through, like ChatGPT in 2023, DeepSeek in 2024, or Claude Code last year.
This is not the medium to offer precise prescriptive policy suggestions or describe the social aspects. For the stakeholders of GenInnov, we list what is certain ahead and what it means for the factors driving our decisions:
- AI is not just a sector in the financial world. It will drive our economics and politics. As we detailed in previous writing, AI’s implications extend far beyond any investable theme or market vertical. Understanding the actual nature of the forces at play has become a prerequisite for consequential decision-making in almost every professional domain. The conclusions that matter do not come from simple charts or from analyses whose validity expires within months.
- The Silicon Shock is the first major secondary impact. It will not be the last. The wave of capital expenditure generating power demand, chip shortages, and supply-chain disruptions is the initial large-scale economic consequence of AI deployment at scale. It signals broader resource reallocations that will redefine industries that appear unaffected by AI’s disruptive capabilities today. AI is already reshaping our lives through far more channels than jobs alone.
- Technology is now a capital-intensive industry. The long period when technology companies required minimal physical capital is over. With the disappearance of the last major non-capex industry, the economic and financial world turns inherently more cyclical. The likes of us may want to remain steadfastly focused on the secular trends, but we must acknowledge that the cyclical phases driven by massive capital expenditures will be absolutely brutal.
- AI pessimism, and its policy consequences, will rise as assuredly as current capex levels will eventually correct. Today, in most countries, criticism of AI largely comes from the political opposition. This is temporary. When employment impacts become measurable and visible to constituencies that vote, the policy environment will change. The speed at which public sentiment shifts once a threshold is crossed has rarely been gradual. Investors and executives who assume the current regulatory environment is stable are banking on policy certainty that does not exist. The political and policy pendulums, when they swing, will swing hard.
- Secular forces will ultimately overwhelm cyclical noise. For the media, for financial markets in the near term, and in our own personal conversations, cyclical or technological events just around the corner will continue to dominate the headlines. But make no mistake: the relentless, irreversible secular forces of cognitive automation operating beneath the surface are materially more important. At least in our eyes.
Nilesh Jasani is the founder and CEO of GenInnov Pte Ltd Singapore. This article first appeared on www.geninnov.ai and is republished with permission. Read the original here. Read more at www.geninnov.ai/blog