A review of the literature on risks associated with the rapid growth of natural language processing finds that the technology's capabilities come with real costs, including perpetuating racism and causing significant environmental damage.
"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" was presented at the 2021 ACM Conference on Fairness, Accountability, and Transparency.
"The question we're asking is what are the possible dangers of this approach and the answers that we're giving involve surveying literature across a broad range of fields and pulling them together," says co-author Emily M. Bender, a University of Washington professor of linguistics.
The researchers say there are downsides to the ever-growing computing power put into natural language models, and they discuss how the increasing size of training datasets for language modeling exacerbates social and environmental problems.
The paper has generated widespread attention due in part to the fact that two of its co-authors, Margaret Mitchell (Shmargaret Shmitchell on the paper) and Timnit Gebru, say they were fired recently from Google.
From University of Washington