Microsoft is to invest around $1 billion in the OpenAI project, a group that has counted Elon Musk and Amazon among its backers. The partners are seeking to establish "shared principles on ethics and trust." The project considers two streams: cognitive science, which is linked to psychology and examines the similarities between artificial intelligence and human intelligence; and machine intelligence, which is less concerned with how similar machines are to humans and focuses instead on how systems behave in an intelligent way.
With the growth of smart technology comes an increasing reliance on humanity placing trust in algorithms that continue to evolve. Increasingly, people are asking whether an ethical framework is needed in response. It would appear so, with some machines now carrying out specific tasks more effectively than humans can. This leads to the questions 'what is ethical AI?' and 'who should develop and regulate those ethics?'
AI's ethical dilemmas
We're already seeing examples of what can go wrong when artificial intelligence is granted too much autonomy. Amazon had to pull an AI-operated recruiting tool after it was found to be biased against female applicants. A different form of bias was associated with a machine learning-based recidivism assessment tool that was biased against black defendants. The U.S. Department of Housing and Urban Development has recently sued Facebook over its advertising algorithms, which allow advertisers to discriminate based on characteristics such as gender and race. Citing ethical concerns, Google opted not to renew its artificial intelligence contract with the U.S. Department of Defense.
These examples illustrate how, even at this early stage, AI produces ethical dilemmas, and perhaps why some level of control is required.
Designing AI ethics
Ethics is an important design consideration as artificial intelligence technology progresses. This philosophical inquiry extends from how humanity wants AI to make decisions, and which types of decisions it should make. This is especially important where there is potential danger (as with many autonomous driving scenarios), and extends to a more dystopian future in which AI could replace human decision-making at work and at home. In between, one notable experiment detailed what might happen if an artificially intelligent chatbot became virulently racist, a study intended to highlight the challenges humanity might face if machines ever become superintelligent.
While there is agreement that AI needs an ethical framework, what should this framework contain? There appears to be little consensus on the definition of ethical and trustworthy AI. A starting point is the European Union document titled "Ethics Guidelines for Trustworthy AI". In this brief, the key criteria are for AI to be democratic, to contribute to an equitable society, to support human agency, to foster fundamental rights, and to ensure that human oversight remains in place.
These are important concerns for a liberal democracy. But how do these principles stack up against threats to human autonomy, as with AI that interacts with and seeks to influence behavior, as in the Facebook-Cambridge Analytica scandal? Even with Google search results, the output, which is controlled by an algorithm, can have a significant influence on the behavior of users.
Furthermore, should AI be used as a weapon? If robots become sophisticated enough (and it can be proven that they can 'reason'), should they be given rights akin to those of a human? The question of ethics runs very deep.
From Digital Journal