The Real “Intelligence Crisis” is that Everyone is Guessing 

March 2026 | Marina Meyjes, Policy Analyst

The viral “2028 Global Intelligence Crisis” essay from Citrini Research imagines a scenario in which advanced AI capabilities displace broad swathes of white-collar work, benefiting some while unraveling the broader economy. Many jumped to refute the scenario. But its core premise – that AI will raise productivity so dramatically that it upends the entire economy – is worth unpacking, especially since economists have yet to find convincing macroeconomic evidence of AI-driven productivity gains.

Generative AI is being adopted across workplaces on a massive scale, yet productivity metrics – the measure of economic output per worker – haven’t followed suit. Although some economists claim that faint signals are starting to appear, far more evidence is needed before anyone can reliably say what is happening. We’ve seen this story before. In the 1980s, the personal computer revolutionized office life but left little trace in the ledgers. Robert Solow, the Nobel-winning economist, captured this “productivity paradox” in a now-famous quip: “You can see the computer age everywhere but in the productivity statistics.”

Economists offer a few explanations, some or all of which may be at play. First, adoption might still be too shallow. Many firms could still be in the “pilot phase,” testing and lightly using generative AI tools rather than deeply embedding them into workflows. In this scenario, the productivity gains from AI are simply too small, so far, to move the needle.

Second, AI usage could be clustered in pockets of the economy in a way that fails to show up in the macroeconomic picture. Adoption data show that while AI uptake is high in sectors such as tech and finance, it lags in industries such as retail. In this case, benefits accumulate in narrow bands of the economy, producing micro-level efficiencies without moving productivity at the macro scale.

Third, we may be in a productivity J-curve – a pattern in which general-purpose technologies initially depress measured productivity as organizations adjust, investing in things like reskilling and reorganizing workflows, before ultimately driving a take-off in output. Electrification followed a similar trajectory. Factories that simply swapped steam engines for electric motors saw little benefit. Rather, it took roughly two decades of redesigning layouts and workflows around the new technology before substantive productivity gains materialized.

Fourth, our measurement frameworks might be failing to register much of the value that AI creates. Improvements in quality, creativity, or decision-making, for example, may lie outside traditional productivity measures. In this scenario, the gains are real but unmeasurable with the tools we have. Economists have run into this problem before when trying to account for other intangible digital goods in macroeconomic statistics.

And finally, generative AI may simply never deliver the productivity gains many expect. Because the technology’s capabilities are highly uneven across tasks – and because translating capability into usable output requires substantial oversight and organizational change – some economists anticipate only modest productivity gains in both the near and long term.

The reaction to the Citrini Research report revealed how little clarity exists about how to interpret AI’s productivity trajectory. The confusion is compounded by headlines promising imminent, sweeping gains – claims that reflect investor signaling rather than substantiated economic evidence. AI may well show up in the macroeconomic data in the near future. Or, as with the personal computer, it might take over a decade for AI to move the productivity needle. But there isn’t yet enough data, or the right measurement frameworks, to make a confident prediction either way.

Rather than watch and wait, we need to take steps to reduce this uncertainty. Researchers are already exploring new measurement approaches to capture AI's productivity effects – building on methods proposed for other intangible, digital assets. One method is to adjust Total Factor Productivity (TFP) frameworks to capture quality improvements and time savings. TFP is a widely used measure of how efficiently an economy turns inputs into output: essentially, how much output grows beyond what can be explained by adding more workers or more capital. That design means it can overlook the contributions of technologies like AI, where much of the value lies in improving the quality or speed of output rather than increasing its volume. A technology-specific adjustment would account for this by measuring improvements in quality, not just volume, and by capturing productivity gains from time saved rather than from additional production.
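To make that concrete, here is a minimal sketch of what such an adjustment could look like. The Solow-residual formula and the 0.3 capital share are standard textbook conventions; the quality and time-savings terms are hypothetical illustrations of the adjustment idea, not an established statistical methodology.

```python
# A minimal sketch of an AI-adjusted TFP calculation.
# The Solow-residual formula and the 0.3 capital share are textbook
# conventions; the "quality" and "time saved" adjustment terms are
# hypothetical illustrations, not an official BLS/BEA methodology.

ALPHA = 0.3  # capital's share of income (a common benchmark value)


def tfp_growth(output_g: float, capital_g: float, labor_g: float) -> float:
    """Conventional TFP growth (the Solow residual): output growth minus
    the share-weighted growth of capital and labor inputs.
    All arguments are annual growth rates, e.g. 0.02 for 2%."""
    return output_g - ALPHA * capital_g - (1 - ALPHA) * labor_g


def adjusted_tfp_growth(output_g: float, capital_g: float, labor_g: float,
                        quality_g: float = 0.0,
                        hours_saved_share: float = 0.0) -> float:
    """Hypothetical AI-adjusted variant.

    quality_g: estimated growth in output *quality* that price deflators
        miss, treated here as additional effective output.
    hours_saved_share: share of measured labor hours freed up by AI,
        which shrinks the effective labor input.
    """
    effective_output_g = output_g + quality_g
    effective_labor_g = labor_g - hours_saved_share
    return effective_output_g - ALPHA * capital_g - (1 - ALPHA) * effective_labor_g


# Example: 2% output growth, 3% capital growth, 1% labor growth.
print(f"Conventional TFP growth: {tfp_growth(0.02, 0.03, 0.01):.2%}")
# Adding a 0.5% unmeasured quality gain and 1% of hours saved quadruples
# the measured residual in this toy example (0.40% -> 1.60%).
print(f"AI-adjusted TFP growth:  {adjusted_tfp_growth(0.02, 0.03, 0.01, 0.005, 0.01):.2%}")
```

In this toy example, counting quality gains and time saved quadruples the measured residual – exactly the kind of value the conventional framework leaves invisible.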

Other approaches and metrics can also provide clues to help rule out competing explanations for the paradox. For instance, more precise, firm-level measures of the depth of generative AI adoption would help determine whether usage indeed remains shallow across the economy, or whether intensive adoption is concentrated in narrow pockets that do not yet move aggregate productivity statistics. While this wouldn’t “solve” the paradox outright, it would provide a more robust picture of what’s happening.
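A toy sketch shows why distributional, firm-level data matters. All intensity figures below are invented for illustration: the point is that two economies with nearly identical average adoption can look very different once the distribution across firms is visible.

```python
# Two hypothetical economies with similar *average* AI adoption but very
# different concentration. All numbers are invented for illustration.

from statistics import mean

# Hypothetical "AI intensity" per firm (e.g., share of tasks using GenAI).
shallow_broad = [0.10, 0.12, 0.09, 0.11, 0.10, 0.08, 0.12, 0.10]  # everyone dabbles
deep_narrow = [0.75, 0.05, 0.02, 0.03, 0.01, 0.02, 0.01, 0.01]    # a few power users


def summarize(label: str, intensities: list[float]) -> None:
    top_share = max(intensities) / sum(intensities)  # crude concentration proxy
    print(f"{label}: mean intensity {mean(intensities):.2f}, "
          f"top firm accounts for {top_share:.0%} of total usage")


summarize("Shallow but broad", shallow_broad)  # mean 0.10, top firm ~15%
summarize("Deep but narrow", deep_narrow)      # mean 0.11, top firm ~83%
```

Aggregate adoption statistics alone cannot distinguish these two worlds; intensity distributions can.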

There is no silver bullet here. We need an all-of-the-above-and-then-some approach. As AI investment accelerates and expectations about its economic impact grow, the need for better ways to measure its effects on productivity will only become more pressing. Fortunately, some progress is underway. America’s AI Action Plan and recent legislative proposals recognize the importance of better data and standards around AI’s impact on economic performance and adoption. But these efforts have largely focused on tracking adoption within specific sectors and in relation to labor markets. Those dimensions matter, but policymakers should also update the macroeconomic infrastructure itself – equipping statistical agencies such as the Bureau of Labor Statistics (BLS) and the Bureau of Economic Analysis (BEA) with economy-wide, firm-level adoption-intensity data and productivity frameworks adjusted for AI-specific contributions.

Approaches like those outlined above point to where that work could begin. Without this data, policymakers will lack the tools to determine whether the productivity paradox reflects delayed diffusion, concentrated adoption, measurement gaps, or genuinely limited gains from the technology. Until then, debates about AI's economic future will continue to be steered more by hype and alarmism than by evidence.

Marina Meyjes is a policy analyst at SeedAI examining how AI is transforming the economy, including labor markets, productivity, and institutions. Her recent co-authored work, "The U.S. Needs a Generative AI Intensity Index," calls for the creation of a new economic index tying generative AI usage – measured in tokens – to economic data to better capture AI adoption intensity at the firm and sector level. Prior to SeedAI, Meyjes worked at the Atlantic Council's GeoTech Center at the intersection of emerging technologies and geopolitics. She holds an MPhil in Politics and International Studies from the University of Cambridge, where she focused on political economy and feminist technoscience, and a BA in History from UCLA.