BLOG@CACM

Are You a Doomer or a Boomer?

Looking ahead at the potential for technology to doom us, or push us to the next boom.

Posted by University of Wyoming lecturer Robin K. Hill

Some of us are doomers. Some of us are boosters. [See Marantz.] Some of us are Boomers, in demographics, tangent to that spectrum, with a jaded view of the AI revolution and the predictions that propel it. Alternative prognostications are offered below. This is not a generation gap or an attitude difference that just needs adjustment. One of us is wrong—either you, ardent Doomer/Booster, or I, blasé Boomer.

Maybe this! Or… maybe this.
Maybe this: Reliance on unbridled and unmonitored viral Internet items will delude the public and subvert community. Deepfakes will bring about the downfall of democracy.

Or maybe this: The citizenry will exercise judgment, as it has already learned to do with respect to text, drawings, and photos, as it comes to understand that audio and video can be faked. Concerned citizens will turn to reliable sources, having learned (or recalled) that written and illustrated material from conscientious professionals is a trustworthy source of news. We’ll pay a small charge for curated and edited publications that are broadly dispersed and closely monitored by those who follow current subjects, and read by those who would like to learn more. These publications could appear regularly, by consistent demand, with a mission to serve the public interest. This activity may be called “journalism.”

Maybe this: Consumer recommender systems will confer ever more benefits as they get smarter. Products, films, vacations, colleges, and financial instruments will all be matched flawlessly with those aspiring to secure the best choices based on personal parameters. The contingencies, special cases, and detailed exceptions that make up the world will be incorporated into the software. Control of our Things in the Internet of Things will improve our quality of life. And recommender systems will overcome bias through more sensitive algorithms.

Or maybe this: Handling all conditions will lead to code bloat akin to a bureaucracy, carrying the usual burdens of qualification, epicycle, and extra specification. Developers will announce updates to third-party customer services, where the updates will be dreaded and postponed, then resented, then neglected, software jobs having become as tedious as the factory jobs they replaced. Bizarre results will routinely follow customers’ entry of contradictory selections. Large firms that use automated resumé review (scuffling with automated resumé writing) will get exactly the employees that they deserve. Those who want to promote products and businesses for personal or public benefit will figure out, and share, techniques for thwarting or circumventing the systematic recommendations; they might call it Search Engine Opposition. Consumers will abandon the setting and resetting of their automated window shades, refrigerators, and cat litter-box cleaners.

Maybe this: Data centers will continue to proliferate and generate pollution as enterprises demand more and more data. As Yann LeCun says, “In the future, our entire information diet is going to be mediated by [AI] systems” [Perrigo]. Huge datasets will be as valuable as gold mines, offering rich sources of significant correlations and derivations that drive evidence-based social services as well as commercial enterprises.

Or maybe this: The monetary costs associated with datasets large enough to drive beneficial decision-making will increase rapidly in an attempt to compensate for the environmental destruction. Competition for data will lead to conflict, and the quality will degrade. Elaborate altruistic schemes (while consuming compute resources) will enable donation to needy data clients; still, disgruntlement with the pollution will lead to restrictions. Derivation of refreshed models will be limited to ten-year cycles, leading pranksters to goad the model, near its expiration date, for outdated ideas—Worst Wedding Gift Suggestion, etc. People will turn to generative AI for text at the level of chitchat, but will assign significant life decisions to experience, observation, and judgment.

Maybe this: ChatGPT will affect all our lives, giving us thoughtful exegeses on…anything. To maintain quality, as prompt engineers work harder and harder to ensure generation of the “right” answers, few-shot prompting will become many-shot prompting, emerging as a learned profession.

Or maybe this: Chatbots will do the working world a favor by exposing the writing tasks that were meaningless. Due to the lack of sustained use cases for text generation (conventional, bland, and subject to sudden falsehood) from publicly scraped text, demand will fizzle out as user input declines, leading to even worse results and, eventually, model collapse [Shumailov]. Volunteer groups will take up the task in order to boost quality, then fade away into other projects. A few weary Mechanical Turks will enter slightly corrupted material, sharing it on backchannels and snickering to each other. Doctors will draw up supplements to medical manuals, to explain common mistakes committed by diagnostic software. A young prompt engineer, flailing about in attempts to directly program a bot with facts, will succumb to frustration and generate the text response herself. This works so much better that she is promoted, surreptitiously subcontracts the work to human domain experts, and then starts her own company to provide such a service.

Maybe this: Tech leaders will continue to teeter along the line that separates their business models from the safety regulation that their AI companies call for. Beset by complaints about the toxic Internet, Congress will amplify its blame on social media and demand fixes, prodding such services to develop effective screening mechanisms driven by AI.

Or maybe this: The detailed regulations finally developed will be routinely subverted in spite of their best efforts. Generative model bias correction will backfire (already happened!—Google Gemini [Knight]). Successive attempts to filter out dangerous and offensive content will suffer spectacular failures wrought by teenage game clubs taking up the challenge to sneak something by the filters. Software engineers will explain that algorithms can detect neither bias nor threats nor suicidal ideation, as such [Hill2021]. Congressional aides who understand this will finally reach the age to run for office.

Maybe this: After a few sad accidents, developers will perfect the software for driver-assisted vehicles, which will efficiently take over transportation tasks, enhancing safety and convenience. Eventually the seat behind the steering wheel, then the steering wheel itself, will be obsolete, and the road, a symphony of finely tuned vehicles carrying out important business.

Or maybe this: “It’s like a movie,” people will say when they find driverless cars traveling remote roads, security systems failed but traffic laws respected, abandoned after joyrides or hackings. Smugglers will hijack them for unmanned transport. Gangs will target them for initiation rites and terrorists will drive them into airports. Eventually kits for vehicle takeover, similar to Ransomware as a Service, will appear on the black market, updated in sync with new models. Prospective gang members will no longer gain street cred from simple remote hijacking, instead working the controls to turn the vehicle upside down.

Maybe this: AI will affect every corner of our lives, mostly for ill. AI will increasingly threaten humanity; all that we hold dear will vaporize. Or… AI will take over every aspect of our lives, improving everything we do, ushering us into a glorious future.

Or maybe this: It will become apparent that rogue programs don’t spontaneously emerge, and when complex systems are programmed by bad actors, someone will remember that they are machines in time to terminate, unpower, or destroy them. The flood of AI augmentations to routine business processes will settle into the localized ponds where such enhancements make constructive improvements. When a hapless tech leader suggests that elections should be replaced by AI-generated winners, because now the systems are “so much smarter” than we are, the body politic will revolt and continue to conduct elections reflective of itself—messy, righteous, and demanding. Human nature, in its fickle, wicked, honorable, inspiring, expedient, irreverent, generous way, will move on from AI hype…learning to handle both beneficent and maleficent manifestations.

References

[Hill2021] Hill, R., 2021. Misnomer and Malgorithm. BLOG@CACM. March 27, 2021, https://cacm.acm.org/blogcacm/misnomer-and-malgorithm/

[Knight] Knight, W., 2024. Brace Yourself for More AI-Inspired Political Fights. WIRED Fast Forward Newsletter, February 29, 2024.

[Marantz] Marantz, A., 2024. O.K., Doomer. The New Yorker. March 18, 2024.

[Perrigo] Perrigo, B., 2024. Yann LeCun On How An Open Source Approach Could Shape AI. Time Magazine. February 7, 2024, https://time.com/6691705/time100-impact-awards-yann-lecun/

[Shumailov] Shumailov, I., et al., 2023. The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv:2305.17493.

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.
