Whenever an impressive new technology comes along, people rush to imagine the havoc it could wreak on society – and they overreact.
In the sixteenth century, Parliament banned mechanical gig mills, and the hosiers’ guild ran William Lee out of Britain for inventing a better knitting frame.
In the early nineteenth century, Luddite stocking knitters burned down textile mills in Nottingham out of fear that automation would be the end of good textile jobs.
As stock trading went electronic in the late 1990s, people fretted about how many exchange jobs would be lost in the City.
And the arrival of the web prompted prolonged hand-wringing that the internet would create a “digital divide” between haves and have-nots.
This recurring fear of the innovation menace — the notion that a major technological advance will lead to mass unemployment or some other widespread hardship — has never once proved correct.
In the years after the Luddite revolts, Britain grew to be an economic superpower, thanks to the industrial revolution. High-paying jobs in the City only expanded in the new millennium. The digital divide never materialised.
So why are so many technology leaders, pundits, and even economists now trotting out these same failed arguments to support absurd forecasts that artificial intelligence (AI), robots, and other new forms of automation are not merely putting 15m jobs at risk in the UK, but even posing a greater danger to humanity than nuclear weapons?
Several persistent fallacies are at work here.
One is the arrogance of the present. Every time the innovation menace comes up, the people involved flatter themselves that they are, uniquely in human history, the first to recognise the danger innovation poses in this new context.
They dismiss past examples as irrelevant because, they claim, “this time is different”. Except that later, in hindsight, it becomes clear that it never is.
Then there is the human instinct to fear the new and unfamiliar. Our species evolved to pay more attention to risks than to rewards, so when confronted with a new technology, we can’t help imagining how it might hurt us.
Those who see menace in innovation tend to fixate on one specific harm, plausible or not, while failing to imagine all the ways in which modern societies and markets will adapt to the new technology and thrive.
“You can’t prove it will always turn out well!” a much-quoted MIT professor blurted at me recently when I pointed out this pattern, which contradicts his belief that AI will kill jobs. Perhaps not, I responded, but so far in the history of mass technology adoption it always has turned out well for humanity.
Part of the reason is that big technological changes take more time than most people expect. Indeed, a third fallacy behind the innovation menace is the false belief that a new technology will overturn everything before society can bring it under control.
Scary projections that self-driving cars and trucks will put taxi and lorry drivers out on the street in droves, for example, ignore the reality that those vehicles stay on the road for 15 years or more before they are replaced.
And even if taxi and delivery companies wanted to swap out their fleets for more expensive autonomous kit right away – never mind the economic insanity of that idea – manufacturers wouldn’t be able to ramp up production for years. Remember, the number of self-driving cars on the market today is zero.
And to the technologists who enthrall the Twitterverse with scaremongering that AI poses “vastly greater risks” than nuclear weapons do, my reply is: that insults our intelligence and trivialises the very real dangers that nuclear weapons still pose.
It implausibly assumes that AI will start working massively better than it does today before we learn how to control it. And that an AI can somehow develop motivations totally counter to its programming. And that those motivations will include killing or enslaving us all.
Why should supersmart beings that we created bear us a grudge? They do not compete with us for food or a place to live. Where is the logic in that?
Lastly, it assumes that an AI could acquire the means to pull off its dastardly plan despite the best efforts of its creators to stop it.
Dreaming up god-like powers paired with malevolent intent is fun if you are creating a Marvel movie or a story arc for Game of Thrones. But please, let’s not confuse it with reality.
At the moment, computers are smarter than us in a few very limited ways, such as mathematics and playing Go. They barely equal us in cat-video watching. Yes, they will become more capable in the coming decades. But we will have ample time in those decades to figure out how to put them to good use while avoiding existential catastrophes.