There is little emphasis on the philosophical ramifications of artificial intelligence (AI) research and development at AI conferences and other scientific forums, with most researchers preferring to focus on technical achievement, writes Duke University professor Vincent Conitzer.
He says this tendency can be partly traced to AI scientists' push to have their work respected by peers.
Experts such as Nick Bostrom, director of Oxford University's Future of Humanity Institute, are bringing attention to philosophical issues in AI. Bostrom is concerned about an "intelligence explosion," in which humans build machines that exceed human intelligence, which in turn build something even more intelligent, leading to ever-escalating generations of smarter systems.
Another factor creating a disconnect between mainstream AI researchers and those worried about the future has been inaccurate predictions of how progress in the field would unfold, even in the short term.
Concerns about AI also are being raised outside the discipline, with the American Association for the Advancement of Science calling for 10% of the AI research budget to be devoted to examining the technology's societal effects.
Conitzer says it is in the AI community's interest to get involved in this debate, lest the discussion proceed without its expertise.
Currently absent is a way to engage with the more opaque long-term philosophical issues, but AI's capacity for ethical decision-making is one area in which immediate momentum appears possible.
From Prospect Magazine
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA