We live in a digital world where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems are increasingly autonomous, making decisions above their users' heads or on their behalf. As a consequence, ethical issues—privacy among them (for example, unauthorized disclosure and mining of personal data, or access to restricted resources)—are emerging as matters of utmost concern, since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres.
Europe is at the forefront of the regulation and reflections on these issues through its institutional bodies. Privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals. Regulation (EC) 45/2001 establishes the rules for data protection in the EU institutions and the creation of the European Data Protection Supervisor (EDPS) as independent supervisory authority to monitor and ensure people's right to privacy when EU institutions and bodies process their personal data. The European Group on Ethics in Science and New Technologies (EGE) is an independent advisory body of the President of the European Commission that advises on all aspects of Commission policies and legislation where ethical, societal, and fundamental rights dimensions intersect with the development of science and new technologies. In 2015, the EDPS appointed the Ethics Advisory Group (EAG) "to explore the relationships between human rights, technology, markets, and business models in the 21st century."
Autonomous systems. We broadly define autonomous systems as systems that have the ability to substitute for humans in supplying (contextual) information that the system may use to make decisions while continuously running. Depending on the nature, properties, and use of this information, an autonomous system may impact the moral rights of its users, be they single citizens, groups, or society as a whole. The widespread use of AI techniques in the implementation of these systems has exacerbated the problem, contributing to the creation of systems and technologies whose behavior is intrinsically opaque.1,2,14 In this article, we stick to the notion of autonomous technology rather than that of AI technology. Indeed, we are concerned with the autonomous decision-making capabilities of systems, even if those capabilities are a consequence of the availability of increasingly complex AI enabling technologies.
The harms of the digital society. Recent years have witnessed increasing concern about the impact of autonomous technologies on our societies. The economy, politics, and human beings' natural rights are endangered by the uncontrolled use of autonomous technology. Institutional as well as social and scientific entities and boards constantly feed the debate by advocating and proposing codes of ethics for developers and regulations from governmental bodies.1,2,3,12,13,14,15,21,22 Admittedly, this debate is mostly concentrated in Western countries, although with different regulatory outcomes. Indeed, ethical principles, notably privacy, may vary from country to country owing to specific culture and history16,17 and to the impact the development of autonomous technologies can have on a country's economy. However, at least in Western countries there is growing consensus that it is time to take action to address the harms of autonomous technologies15 and that such action eventually needs to be regulatory in nature and part of public policy.18,19 In this respect, Europe is certainly far ahead both in thinking and in regulation.
The General Data Protection Regulation (GDPR), the world's most advanced regulation on personal data protection, is Europe's most relevant achievement so far. By comparison, the state of California recently passed a digital privacy law that will go into effect in January 2020; although more limited in scope than the GDPR, it is considered one of the most comprehensive in the U.S.20 In a recent paper, "Constitutional Democracy and Technology in the Age of Artificial Intelligence,"19 Paul Nemitz, Principal Advisor of the European Commission, claims that "The EU GDPR is the first piece of legislation for AI" and provides a comprehensive account of the debate and of the process that accompanied the formulation and adoption of the GDPR. Nemitz points out that, just as happened with the GDPR for personal data protection, AI and autonomous technologies need to be regulated by law insofar as individual fundamental rights and the democracy of society are concerned.
This would lead to accepting AI-based autonomous technologies only "if by design, the principles of democracy, rule of law, and compliance with fundamental rights are incorporated in AI, thus from the outset of program development," Nemitz writes.
The quest for an ethical approach. For years, Europe has called for a more comprehensive approach that encompasses privacy and addresses ethical issues in the scope of the digital society. The EDPS, in its strategy for 2015–2019, set out the goal of developing an ethical dimension to data protection.4 To reach this goal, it established the EAG with the mandate to steer a reflection on the ethical implications of the digital world emerging from present technological trends. EDPS Opinion 4/2015, "Towards a new digital ethics,"3 identifies the fundamental right to privacy and the protection of personal data as core elements of the new digital ethics necessary to preserve human dignity, as stated in Article 1 of the EU Charter of Fundamental Rights. The Opinion also calls for a big data protection ecosystem involving developers, businesses, regulators, and individuals in order to provide 'future-oriented regulation,' 'accountable controllers,' 'privacy-conscious engineering,' and 'empowered individuals.'
In its 2018 report,6 the EAG has provided a broader set of reflections on the notion of digital ethics that address the "fundamental questions about what it means to make claims about ethics and human conduct in the digital age, when the baseline conditions of humanness are under the pressure of interconnectivity, algorithmic decision-making, machine-learning, digital surveillance, and the enormous collection of personal data." In March 2018, the EGE released a statement on "artificial intelligence, robotics, and 'autonomous' systems" in which it urges an overall rethinking of the values around which the digital society is to be structured.5 Computer scientists, besides other societal actors, are called to join this effort by contributing theories, methods, and tools to build trustable and societal-friendly systems. "Advances in AI, robotics and so-called 'autonomous' technologies have ushered in a range of increasingly urgent and complex moral questions," the EGE states. "Current efforts to find answers to the ethical, societal, and legal challenges that they pose and to orient them for the common good represent a patchwork of disparate initiatives. This underlines the need for a collective, wide-ranging, and inclusive process of reflection and dialogue, a dialogue that focuses on the values around which we want to organize society and on the role that technologies should play in it."
In its statement, the EGE goes further and proposes "a set of basic principles and democratic prerequisites, based on the fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights." The first one restates human dignity in the context of the digital society: "(a) Human dignity. The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by 'autonomous' technologies."
Supporting ethical concerns in autonomous systems. Europe is thus calling for a digital society in which the human being, with her fundamental rights, remains at the center. There is therefore a need to rethink the role of the various actors in the digital world by empowering the users of digital technology, both when they operate as citizens and as individuals. But what does it mean to empower citizens and individuals?
Human at the center. The stated principle of human dignity indicates that individuals need to be able to exercise some degree of control over their information and over the decisions that autonomous systems make on their behalf. This raises the issue of the scope of system autonomy. Indeed, the principle asks for autonomous systems that, in their behavior, respect humans' decisions and beliefs. A system's autonomy is thus bounded by the respect owed to the individuals it interacts with: the more individuals the system interacts with, the less autonomy it may be granted, owing to potential conflicts among those individuals' expectations. This is clearly understood in the scope of privacy, where different individuals may have different privacy concerns about their personal data, both in general and in specific contexts. Reflections on digital ethics can help us shape the scope of system autonomy.
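To make the bound on autonomy concrete, consider a minimal sketch of the privacy case just described. The three-level disclosure scale, the `Disclosure` enum, and the most-restrictive-wins resolution rule are all hypothetical illustrations, not a prescribed mechanism: when several users with conflicting preferences are affected by the same data, the system's room for autonomous action shrinks to whatever the strictest preference allows.

```python
from enum import IntEnum

class Disclosure(IntEnum):
    """Hypothetical preference levels, ordered most restrictive first."""
    NONE = 0        # never disclose this data
    ANONYMIZED = 1  # disclose only in anonymized form
    FULL = 2        # disclose freely

def allowed_disclosure(preferences):
    """Resolve conflicting per-user preferences over shared data:
    the system may act only within the most restrictive one."""
    if not preferences:
        return Disclosure.NONE  # no preference expressed: disclose nothing
    return min(preferences)

# Three users are covered by the same shared location data but hold
# different privacy preferences; the strictest one limits the system.
prefs = [Disclosure.FULL, Disclosure.ANONYMIZED, Disclosure.FULL]
assert allowed_disclosure(prefs) is Disclosure.ANONYMIZED
```

The "take the minimum" rule is only one possible resolution policy; the point is that whatever the policy, autonomy decreases as the number of individuals (and hence of potential conflicts) grows.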
Digital ethics. Luciano Floridi, a professor of philosophy and the ethics of information at Oxford and director of the Digital Ethics Lab of the Oxford Internet Institute, defines digital ethics7 as the branch of ethics that aims at formulating and supporting morally good solutions through the study of moral problems relating to personal data, AI algorithms, and corresponding practices and infrastructures. Simplifying, he further identifies two separate components of digital ethics, hard and soft ethics. Hard ethics is defined and enforced by legislation. However, legislation is necessary but insufficient, since it does not cover everything, nor should it. In the space that is left open by regulation, the actors of the digital world, for example, companies, citizens, and individuals, should exploit digital ethics in order to forge and characterize their identity and role in the digital world. This is the domain of soft ethics, which deals with what ought and ought not to be done over and above the existing regulation, without trying to bypass or change the hard ethics.
From the user perspective, soft ethics is where individual ethical values can be expressed; hard ethics characterizes the values, defined by the legislation, a digital system producer shall comply with. Soft ethics is therefore the context in which a user's control of autonomous technology shall and can be exercised.
A patchwork of approaches. Besides reflections and statements on ethics, Europe has put in place a number of initiatives that, on the one hand, form a patchwork, as the EGE notes, and, on the other, show that ethical concerns are at the core of interest for European society as a whole. A few examples follow:
From a regulatory standpoint, the GDPR entered into application throughout the EU in May 2018. Article 1 states: "This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data. This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data."
The GDPR aims to give individuals control over their personal data and to provide a unifying regulation within the EU for international business. It sets data protection rules for all companies operating in the EU, whether they are established in the EU or merely operating inside it. The regulation compels controllers of personal data to shape their organization and their processing systems so as to implement the data protection principles. As already mentioned, the GDPR is the most advanced regulation on personal data in operation in the world.
Through its organizations, the scientific community has contributed (at a policy level) to identifying problems and establishing criteria for developing algorithms and systems that embed machine-learning-fueled autonomous capabilities. In March 2018, the ACM Europe Council, the ACM Europe Policy Committee (EUACM), and Informatics Europe presented a white paper, "When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making."2 The report critically analyzes the implications of the increasing adoption of machine-learned automated decision making in modern autonomous systems. It concludes with a set of recommendations for policy-makers that concern the technical, ethical, legal, economic, societal, and educational dimensions of the digital society.
DECODE is a consortium of 14 European organizations (municipalities, companies, research institutions, and foundations) led by the municipality of Barcelona. The consortium is carrying out a project, funded by the European Commission through its Horizon 2020 research program,8 whose aim is to empower citizens to control their personal information on the Internet. It provides a distributed platform and tools that use blockchain technology with attribute-based cryptography to give people control over how their data is accessed and used. DECODE experiments through pilots deployed in Amsterdam and Barcelona, which focus on the Internet of Things, the collaborative economy, and open democracy. The DECODE project was selected in response to a call stating the following objective: "The goal is to provide SMEs, social enterprises, industries, researchers, communities and individuals with a new development platform, which is intrinsically protective of the digital sovereignty of European citizens."
Beyond privacy, and in particular regarding the potential conflict between user or social ethical principles and the decisions of autonomous systems, ethical issues have insistently emerged in the autonomous car domain. Indeed, there is no general consensus on which ethical principles (personal ethics settings versus a mandatory ethics setting) need to be embedded, and how, in the control software of autonomous vehicles.9,10 In 2016, the German Federal Ministry of Transport and Digital Infrastructure appointed an ethics committee that produced a recommendation report resulting in 20 ethics rules for automated and connected vehicular traffic.11 In particular, rules 4 and 6 mention the ethical principle of safeguarding the freedom of individuals to make responsible decisions and the need to balance it with the freedom and safety of others.
Challenges for computer scientists. Responsible computing as defined in the European perspective sets out a number of ambitious challenges for computer scientists. Empowering the user requires a complete rethinking of the role of the user in the digital society. The user is no longer a passive consumer of digital technologies and a data producer for them. Her dignity as a human being implies ownership of personal data and freedom of making responsible decisions. Autonomous technologies shall be designed and developed to respect it. This lifts the user to become an independent actor in the digital society able to properly interact with the autonomous technologies she uses every day and equipped with the appropriate digital means.
The separation of digital ethics into hard and soft ethics suggests that hard ethics is what the autonomous system shall comply with, while soft ethics is specific to each individual user. To obey the principle of human dignity, the system, during its interactions with each individual, shall not violate her soft ethics. The autonomous system architecture shall permit this interaction to happen in compliance with the user's moral prerogatives and capabilities. Users need to be able to verify the systems they use, possibly by imposing on them their own ethical requirements. The separation of concerns implied by this notion of digital ethics suggests an overall framework in which the autonomy of the system is delimited by hard ethics requirements, users are empowered with their own soft ethics, and the interactions between the system and each user are further constrained by the user's soft ethics requirements. Therefore, the decisions an autonomous system makes must comply not only with legislation but also with each user's moral preferences. (See the intersection between soft and hard ethics in the accompanying figure.)
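The layered framework just outlined can be sketched in code. The sketch below is purely illustrative, under stated assumptions: constraints are modeled as simple predicates over a proposed action, the GDPR-like hard rule and Alice's preference are invented examples, and the names (`EthicsLayer`, `AutonomousSystem`, `decide`) are hypothetical. The structural point is the two-stage check: hard ethics filters every action identically for everyone, and only actions that pass are then checked against the requesting user's own soft ethics.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A constraint inspects a proposed action (a plain dict here)
# and returns True when the action is acceptable.
Constraint = Callable[[dict], bool]

@dataclass
class EthicsLayer:
    constraints: List[Constraint] = field(default_factory=list)

    def permits(self, action: dict) -> bool:
        return all(c(action) for c in self.constraints)

@dataclass
class AutonomousSystem:
    hard_ethics: EthicsLayer                              # legislation: same for all users
    soft_ethics: Dict[str, EthicsLayer] = field(default_factory=dict)  # per-user preferences

    def decide(self, user: str, action: dict) -> bool:
        """Take an action only if it complies with the law (hard ethics)
        AND with the requesting user's own soft ethics."""
        if not self.hard_ethics.permits(action):
            return False  # illegal: rejected regardless of user preferences
        user_layer = self.soft_ethics.get(user, EthicsLayer())
        return user_layer.permits(action)

# Invented hard-ethics rule (GDPR-like): no transfer outside the EU without consent.
hard = EthicsLayer([lambda a: bool(a.get("consent")) or not a.get("transfer_outside_eu")])
system = AutonomousSystem(hard_ethics=hard)
# Alice's soft ethics: never share her location data, even when it would be legal.
system.soft_ethics["alice"] = EthicsLayer([lambda a: a.get("data") != "location"])

assert system.decide("alice", {"data": "location", "consent": True}) is False
assert system.decide("bob", {"data": "location", "consent": True}) is True
```

Note how the same legal action is rejected for Alice but allowed for Bob: the hard layer is shared, while the soft layer individualizes the system's behavior, mirroring the separation of concerns described above.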
In such a framework, it should also be possible to deal with liability issues in a fine-grained way by distributing responsibility between the system and the user(s) according to hard and soft ethics. The envisioned framework requires several steps. On the ethics side, provided that autonomous systems are developed in compliance with hard ethics, that is, with regulations, the crucial issue is to respect each individual's soft ethics. If verifying the compliance of autonomous systems with hard ethics already raises huge scientific interest and great worries (given the use of opaque AI techniques),1,2,14 defining the scope of soft ethics and characterizing each individual's is a daunting task. Indeed, neither a person nor a society applies moral categories separately; rather, everyday morality is in constant flux among norms, utilitarian assessment of consequences, and evaluation of virtues. Nevertheless, a digital society that fully realizes the principle of human dignity shall allow each individual to express her soft-ethics preferences. Further challenges concern the means to consistently combine user soft ethics with system hard ethics and to manage interactions of the system with users endorsing different ethics preferences. Autonomous systems shall embed hard ethics by design while remaining open to accommodating users' soft ethics. This could be achieved through system customization or by mediating the interactions between the system and the user; in either case it requires rethinking the system architecture.
Building systems that embody ethical principles by design may also permit acquiring a competitive advantage in the market, as predicted in the recent Gartner Top 10 Strategic Technology Trends for 2019.23
Computer scientists alone cannot solve the scientific and technical challenges we have ahead. A multi-disciplinary effort is needed that calls for philosophers, sociologists, law specialists, and computer scientists working together.
Acknowledgments. The author is indebted to the multi-disciplinary team of the [email protected] project (http://exosoul.disim.univaq.it) for enlightening debates and joint work on digital ethics for autonomous systems.
1. ACM U.S. Public Policy Council. Statement on algorithmic transparency and accountability, 2018; https://bit.ly/2j4IJEV.
2. Larus, J. et al. When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making, 2018; https://dl.acm.org/citation.cfm?id=3185595.
3. EDPS. Opinion 4/2015: Towards a new digital ethics—data, dignity and technology; https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf.
4. EDPS. Leading by example, The EDPS Strategy 2015–2019; https://bit.ly/2MpegjJ
5. European Group on Ethics in Science and New Technologies. Statement on artificial intelligence, robotics and 'autonomous' systems; https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
6. Burgess, J.P., Floridi, L., Pols, A. and van den Hoven, J. Towards a digital ethics—EDPS Ethics Advisory Group; https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf
8. The DECODE project; https://decodeproject.eu.
11. Ethics Commission Automated and Connected Driving. Appointed by the German Federal Minister of Transport and Digital Infrastructure, June 2017 Report; https://bit.ly/2xx18DZ
12. Cath, C. et al. editors. Governing artificial intelligence: ethical, legal, and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. Royal Society, Nov. 2018.
13. Declaration on Ethics and Data Protection in Artificial Intelligence at the 40th Intern. Conference of Data Protection and Privacy Commissioners, Oct. 2018; https://bit.ly/2Cz31AG.
14. AI Now Institute, New York University, 2017 Annual Report; https://ainowinstitute.org/AI_Now_2017_Report.pdf
15. AI Now Institute, New York University, 2018 Annual Report; https://ainowinstitute.org/AI_Now_2018_Report.pdf
17. Li, T. China's influence on digital privacy could be global; https://wapo.st/2TffDE0
18. Vardi, M. Are we having an ethical crisis in computing? Commun. ACM 62, 1 (Jan. 2019), 7; https://cacm.acm.org/magazines/2019/1/233511-are-we-having-an-ethical-crisis-in-computing/fulltext
19. Nemitz, P. Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical And Engineering Sciences. Royal Society, Nov. 2018.
20. Wakabayashi, D. California passes sweeping law to protect online privacy. New York Times (June 28, 2018); https://nyti.ms/2tGjAaf.
21. The European Commission's High-Level Expert Group on Artificial intelligence, Draft Ethics guidelines for trustworthy AI; https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56433
22. Artificial Intelligence: A European Perspective. European Commission Joint Research Centre, Dec. 2018; https://ec.europa.eu/jrc/en/artificial-intelligence-european-perspective
23. Gartner Top 10 Strategic Technology Trends for 2019; https://gtnr.it/2CJJYGp
Copyright held by author/owner. Publication rights licensed to ACM.
Request permission to publish from [email protected]
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.