After an initial period of enthusiasm, attitudes toward generative AI (embodied as GPT) have soured. A flurry of polls revealed the shift in mood. One showed 70% of respondents had little or no trust that GPT can provide accurate information. Respondents see great dangers to society from misinformation that cannot be detected, and they fear that when GPT is put into search-engine interfaces, reliable fact checking will be impossible. Another poll showed 70% of respondents wanted some kind of regulation or a ban on commercial rollout to allow time to head off the dangers. The question for computing professionals is how to put these machines to good and safe use, augmenting humans without harming them.
A feature of the new mood is fear of the extinction of humanity. The 2022 Expert Survey on Progress in AI included a finding that "50% of AI researchers believe there is a 10% or greater chance that humans go extinct from their inability to control AI."a This claim was picked up by leading AI experts and became a widely spread media meme, stoking not only extinction fears but also fears of other possible catastrophes, such as machines becoming sentient. Melanie Mitchell, a prominent AI researcher at the Santa Fe Institute, did some fact checking on the survey website and discovered that only 162 of 4,271 respondents answered the question, so the 50% in the claim was only 81 respondents, hardly a solid basis for such an important claim about AI researchers.b
The concern about the danger of the extinction of the human race by sentient machines that see no value in human beings is an odd one.
Because of climate change, the rate of extinction of species is accelerating dramatically. But climate change is only possible because of machines. There is no way human or animal power could mine enough fossil fuels to cause climate change; only machines have that kind of power.
So, machines have already caused the extinction of a large portion of the species that were around a few decades ago. There's little reason to believe that humans will be OK as climate change continues to get more severe and entire ecosystems collapse.
Clearly, non-sentient machines that mine and burn fossil fuels see no value in human beings, and they are laying the groundwork for our extinction. So why would someone be concerned about sentient machines joining in the process of causing our extinction? Does it matter which kind of machine causes our demise? We've already shown that our society is OK with Exxon's machines causing our demise, and we've shown that our society is OK with YouTube carrying disinformation, such as PragerU's, that assists Exxon's destruction.
Since our society isn't even slowing the acceleration of our demise through non-sentient machines causing climate change, and shows little concern about death by viruses (COVID) and guns, why would it care about death by sentient machines?
[[The following comment was submitted by author Peter J. Denning. --CACM Administrator]]
Good point about how machines doing our bidding are accelerating climate change and the attendant risk of decimating some of humanity. Why the newer concern that sentient machines might decimate us? I speculate that the reason is control. Whereas we feel we have control over the machines that mine and burn carbon fuels, we worry that sentient machines could go beyond our control. I argued that generative AI machines are not likely to become sentient and go out of our control. So all we need to worry about is non-sentient AI machines. -- Peter Denning, Salinas, CA