I don't know about you, but my LinkedIn is full of posts like: AI WILL TAKE ALL WHITE-COLLAR JOBS, SOON YOUR JOB WILL BE REPLACED! Yet I also see headlines like: "Thousands of CEOs just admitted AI had no impact on employment or productivity." That story invoked a paradox from forty years ago: Robert Solow's famous 1987 observation that computers were everywhere except in the productivity statistics. Economists are treating this as history repeating itself. But as a historian, I think we can go much further back. Because we've been here before. Not just once. Every single time a transformative technology has arrived.

Let’s Have a Look at the Data

In February 2026, the National Bureau of Economic Research asked nearly 6,000 executives across the US, UK, Germany, and Australia a simple question: has AI actually moved the needle? The answer was almost embarrassingly consistent. Roughly 90% reported AI has had no measurable impact on employment or productivity over the past three years. In the US, that number hits 91%. And when researchers dug into how these companies were actually using AI, the picture became clearer: occasional text generation, some visual content, a bit of data processing. About an hour and a half per week on average. A quarter of executives weren’t using it at all.

Yet ask those same executives about the future and suddenly the optimism returns: they predict AI will boost productivity by 1.4% globally over the next three years, and by 2.3% in the US.

So CEOs expect some gains. But these numbers aren’t that spectacular. Where are the incredible returns on investment? Where are those big numbers the AI CEOs keep talking about?

Well, We’ve Been Here Before

Forty years ago, Solow identified what we now call the productivity paradox: computers were everywhere, but the gains were invisible in the statistics. In 1990, economist Paul David drew a parallel I find even more instructive.

In 1882, Edison opened his first central power station. Electricity worked. It was real, it was proven, and factory owners believed in it. So they adopted it. But here's what they actually did: they took out the steam engine and replaced it with a single large electric motor, which then drove the same central shaft, the same belt system, the same machines, in the same layout. Same mechanics. Different energy source.

The factory still worked as one large machine, just with an electric motor at its center instead of a steam engine.

And productivity? Flat. For four decades.

It wasn't until manufacturers started attaching smaller motors directly to individual machines that everything changed. Suddenly the factory didn't need to be built around a central shaft. You could arrange machines in whatever layout made sense. You could reorganize entire production flows. Workers' jobs changed. Management changed. The building itself changed. Only then, roughly forty years after Edison's first power station, did productivity start to climb.

The technology was the same. What changed was whether people redesigned their world around the technology’s actual logic — or simply used it to replicate what they already did, faster.

Sound familiar? Most companies today are doing exactly what those factory owners did. Buying an LLM to summarize emails, generate reports, speed up a single workflow. The technology is in the building. It’s just not in the walls yet.

The factories that eventually transformed didn't just upgrade their equipment; they redesigned their entire production logic around what electricity made possible. The AI equivalent means embedding it into workflows built natively around it, and training people to use it in ways that genuinely solve problems and give back time for meaningful work. Not AI as a faster typewriter. AI as a reason to rethink what the factory floor looks like entirely.

Most companies won't do this quickly. Redesigning around a new technology is expensive and disruptive, and the gains stay invisible for years before they materialize. That's exactly why the Installation Gap, the lag between a technology's arrival and its payoff, shows up every single time. The factory owners who kept the belt-and-shaft system weren't stupid; they were rational. But the ones who redesigned eventually made the ones who didn't irrelevant.

The Bubble I Live In

I spend roughly twelve hours a day building and learning with AI. My LinkedIn feed looks like everyone has already crossed over. But I’ve learned to recognize that for what it is: a bubble.

Everett Rogers mapped this pattern in 1962 and it has held across thousands of studies since. Technological adoption follows a predictable S-curve. The enthusiasts and early adopters who dominate my feed represent maybe 15% of any population. The other 85% move on human timescales, not Moore’s Law timescales. They adopt when technology is trusted, proven, and embedded in systems they already understand.
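If you want to see where that 15% comes from: Rogers defined his adopter categories by standard deviations around the mean adoption time, so a short sketch, assuming (as he did) a roughly normal distribution of adoption times, reproduces the familiar split. The category names and cutoffs are Rogers'; the code itself is just illustrative.

```python
# A minimal sketch of Rogers' adopter categories, assuming a roughly
# normal distribution of adoption times. Category boundaries sit at
# -2, -1, 0, and +1 standard deviations from the mean adoption time.
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

categories = [
    ("innovators",     float("-inf"), -2.0),
    ("early adopters", -2.0, -1.0),
    ("early majority", -1.0,  0.0),
    ("late majority",   0.0,  1.0),
    ("laggards",        1.0,  float("inf")),
]

for name, lo, hi in categories:
    share = norm_cdf(hi) - norm_cdf(lo)
    print(f"{name:15s} {share:6.1%}")

# innovators + early adopters come to roughly 16% of any population --
# about the share of a feed that looks like it has already crossed over
```

Everyone else, the remaining ~84%, is where the productivity statistics actually live.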

The faster technology changes, the harder it becomes for the rest of society to catch up. That gap isn't set by how smart the technology gets; it's limited by human comprehension and trust. We're already seeing the pressure build: Microsoft's CEO recently warned that AI must "do something useful" or the industry will lose the social permission to keep burning electricity on it. That's not a technical problem. That's a legitimacy problem. And legitimacy, historically, is always harder to engineer than the technology itself.

How the Story Actually Ends

Solow's paradox was eventually resolved. By the late 1990s, US labor productivity growth had roughly doubled, driven by ICT adoption. But here's the part that gets lost: productivity didn't surge because the technology got better. It surged because organizations finally redesigned themselves around what the technology made possible. Retailers rebuilt supply chains. Banks rebuilt back-office operations. The gains came from the ICT-using industries, not the ICT-producing ones. Two decades after the initial investment wave.

The academic literature on this is unambiguous: technology investment without organizational restructuring does not produce productivity gains. It may actually reduce them.

Productivity isn't gained because a CEO announces you'll soon have a magical PhD student at your fingertips. It's gained because people adopt and adapt technology over time, and only when it's trusted, safe, and proven to work for their use case.

History doesn’t tell us when AI’s productivity gains will arrive. But it tells us clearly what they’re waiting for.

The best thing you can do is stay in the loop.

Sources & Further Reading

  • Solow, R. (1987). “We’d Better Watch Out.” New York Times Book Review, July 12, 1987.

  • David, P. A. (1990). “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review, 80(2), 355–361.

  • National Bureau of Economic Research (2026). Survey of ~6,000 executives on AI, employment, and productivity.

  • Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). Free Press.

  • Brynjolfsson, E., Rock, D. & Syverson, C. (2021). “The Productivity J-Curve.” American Economic Journal: Macroeconomics, 13(1), 333–372.

  • Fortune (2026). “Thousands of CEOs just admitted AI had no impact on employment or productivity.” February 17, 2026.
