What are the chances of an AI apocalypse?
Professional “superforecasters” are more optimistic about the future than AI experts
In 1945, just before the test of the first nuclear bomb in the New Mexico desert, Enrico Fermi, one of the physicists who had helped build it, offered his fellow scientists a wager. Would the heat of the blast ignite a nuclear conflagration in the atmosphere? If so, would the firestorm destroy only New Mexico? Or would the entire world be consumed? (The test was not quite as reckless as Fermi’s mischievous bet suggests: Hans Bethe, another physicist, had calculated that such an inferno was almost certainly impossible.)
These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence (AI). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This article appeared in the Science & technology section of the print edition under the headline “Bringing down the curtain”