Artificial intelligence (AI) researchers gathered at last week's Neural Information Processing Systems (NIPS 2017) conference in Long Beach, CA, to discuss measures against AI's use for deceit and disinformation.
One workshop focused on tactics in which adversarial examples are used to fool AI systems into perceiving something that does not really exist. Workshop co-organizer Tim Hwang says the potential for such abuse of AI is growing, "especially if you think the inputs to do machine learning are getting lower and lower over time."
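The adversarial-example tactic described above can be illustrated with a toy sketch (the model, weights, and values here are hypothetical illustrations, not from the workshop): a small, carefully signed perturbation of the input flips a confident classification.

```python
# Minimal FGSM-style adversarial perturbation on a toy logistic model.
# All values below are illustrative assumptions, not from the article.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast-gradient-sign-method step: nudge x to increase the loss.
    For logistic loss, dL/dx_i = (p - y) * w_i, so each coordinate moves
    by eps in the sign direction of that gradient."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

# Toy model: weights chosen so x is confidently classified as class 1.
w, b = [2.0, -1.5], 0.1
x, y = [1.0, -1.0], 1.0

p_clean = predict(w, b, x)                 # confident class-1 prediction
x_adv = fgsm_perturb(w, b, x, y, eps=1.2)  # small signed perturbation
p_adv = predict(w, b, x_adv)               # confidence collapses
```

With these toy numbers the clean input is classified as class 1 with probability above 0.9, while the perturbed input drops below 0.5, i.e., the model's decision flips even though the input changed only slightly in each coordinate.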
Hwang is concerned that AI-powered disinformation could make it virtually impossible for large populations to distinguish reality from fiction, and he questions whether trust in online content will ultimately be possible only via technological authentication.
NIPS workshop co-organizer Bryce Goodman warns of "systems that are trained to exhibit features of human intelligence but are fundamentally different in terms of how they process information. We're trying to show what hacks are possible and make it public."
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA