Research and Advances
Artificial Intelligence and Machine Learning

The EU AI Act and the Wager on Trustworthy AI

As the impact of AI is difficult for any single group to assess, policymakers should prioritize societal and environmental well-being and seek advice from interdisciplinary groups focusing on ethical aspects, responsibility, and transparency in the development of algorithms.


Artificial intelligence (AI) systems are increasingly supplementing or taking over tasks previously performed by humans. On the one hand, this relates to low-risk tasks, such as recommending books, movies, or purchases based on previous buying behavior. But it also includes crucial decision making by highly autonomous systems. Many current systems are opaque in the sense that their internal principles of operation are unknown, leading to severe safety and regulation problems. Once trained, deep-learning systems perform well, but they remain subject to surprising vulnerabilities when confronted with adversarial images.9
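To make the nature of this vulnerability concrete, the following minimal sketch illustrates the fast gradient sign method (FGSM), one common way adversarial images are produced; it is purely illustrative and not tied to any system discussed in this article. It assumes a trained PyTorch classifier `model` and a correctly classified input batch `image` with labels `label`.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss on the true labels
    loss.backward()                              # gradient with respect to the pixels
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

Even a perturbation that is imperceptible to humans can flip the predicted class, which is one reason opacity and robustness are treated as regulatory concerns.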

The decisions may be explicated after the fact, but these systems carry the risk of wrong decisions that affect people's well-being: individuals may be discriminated against, disadvantaged, or seriously injured. Examples include suggestions on how to select a job applicant, choose the proper medical treatment for a patient, or navigate autonomous cars through heavy traffic. In such situations, several ethical, legal, and general societal challenges arise. At the forefront is the question of who is responsible for a decision made by an AI system: Do we leave the decision to the AI system, or does a human decide in partnership with an AI system? Are there reliable, trustworthy, and understandable explanations for the decisions in each case? Unfortunately, the inner workings of many AI systems remain hidden, even from experts. Given the critical role AI systems play in modern society, this seems in many cases unacceptable. But how can we make complex, self-learning systems explainable? And to what extent does this lack of explanation, or of broader transparency, affect a watchful and responsible introduction of AI systems with evidenced benefits?

Key Insights

  • Public trust, transparency, and interdisciplinary research are pivotal in the responsible deployment of AI systems.

  • The EU AI Act passed by the European Parliament will now be implemented in 27 member states of the EU. It is the first major law aimed at regulating AI across sectors, with a focus on risk management, transparency, ethical governance, and human oversight.

  • AI systems categorized as high-risk will be subject to stringent regulations to ensure they do not compromise human rights or safety.

A deeper look at the technical details of AI and at technical innovations on their way, such as autonomous systems, shows an obvious need for technical expertise on the practical and societal aspects of AI in the decision-making process. On the other hand, a purely technological perspective may result in regulations that cause more significant societal problems. This article highlights accurate and realistic technology descriptions that take into account the risk factors as required, for example, by the risk pyramid of the EU AI Act that entered into force in August 2024. To strike such a balance for the public interest, policymakers should prioritize societal and environmental well-being and seek advice from interdisciplinary groups, as the impact of AI and autonomous systems is very difficult for a single group to assess. This more holistic system view is complementary to previous statements focusing on ethical aspects, responsibility, and transparency in the development of algorithms,1 specifically on algorithmic systems involving AI and machine learning (ML).3,15,22

Many members of the public, particularly in Europe, exhibit skepticism toward AI and autonomous systems, which often translates into a lack of confidence or a cautious “wait-and-see” approach.23 For this technology to develop to its beneficial potential, we need a framework of rules within which all players can operate responsibly. For the future of AI systems, specifically in the public sphere, where people express their personal expectations and worries about the potential consequences of AI being used without proper oversight, certain aspects must be taken into account. The following points are crucial for guiding the formulation of policies and regulations related to AI and are essential for the research and development community:

Supporting research and development in AI and autonomous systems.  We recommend advanced research on the governance of deployed AI and automated systems, for example, in transportation. Special care must be taken at an early stage to contribute and adhere to transparent standards for hardware and software that provide the insight needed to carry out legally required independent safety certifications.

Creating and supporting sustainable solutions.  In light of the UN Sustainable Development Goals, we recommend advancing multidisciplinary research methodologies that integrate the social sciences and humanities alongside the engineering sciences. Social sciences, such as sociology and anthropology, can provide crucial insights into how people understand, interact with, and trust AI systems. This understanding is vital for designing technologies that are socially acceptable and beneficial and that promote sustainable development. Humanities disciplines, like philosophy, can offer valuable perspectives on ethics, fairness, and the potential impact of AI on human values. This combined approach can lead to developing sustainable and energy-efficient autonomous systems that align with societal well-being.

Prioritizing societal well-being and equal opportunities.  We recommend that legislative processes, especially in adapting existing laws and newly designing liability rules, take an interdisciplinary approach and consult scientific and technical expertise in trustworthy AI. This should ideally lead to equal opportunities and fairness in new business development around autonomous systems while preventing monopolies.

Promoting education on science, technology, social impact, and ethics.  To foster responsible and beneficial use of AI, we propose enhancing educational curricula in secondary schools, universities, and technical fields to include fundamental knowledge about AI ethics and its impact on society. Incorporating ethical and social-scientific aspects into computer science (CS) curricula, as exemplified by Stanford University’s approach, will encourage students to consider “embedded” ethical, legal, or social implications while solving problems. Similarly, in Europe, some institutions teach CS students to relate the ACM Code of Ethics and Professional Conduct1 to their tasks, fostering a sense of responsibility in their future AI-related endeavors.

The overall level of expertise at all levels of our society about how AI works and operates represents a critical success factor that will ultimately lead to confidence in and acceptance of beneficial uses of these technologies in our daily lives. Policymakers, developers, and adopting users of AI systems need to be literate about these technologies and find answers at the intersection of technology, society, and policymaking. Furthermore, we should weigh the risks of autonomous systems against their benefits to allay public fears.

The points mentioned here highlight the need for an interdisciplinary and holistic approach to the beneficial use of AI. They set the foundation for broader involvement of the public on the one hand and for the subsequent development of the EU AI Act on the other. The endeavors of a supranational governmental organization such as the EU, striving to establish consensus across 27 member states on the legal regulation of AI, are likely to capture the attention of a diverse international readership. This audience includes academics in the fields of AI ethics, explainable AI, and risk management, as well as professionals who may be called upon to provide technical expertise to lawmakers in other parts of the world.

Background: EU Policies on AI and Ethics Guidelines

Public trust in autonomous systems, considered one of the ‘lighthouse’ projects, is a crucial issue, well in line with the recent awareness of the governance of AI12,16,21 expressed in the joint agreement of the EU Commission and EU Council’s proposal for a new European AI Act,a as well as in the work of the High-Level Expert Group called in by the EU Commission in 2019.8 The High-Level Expert Group’s Ethics Guidelines echo several critical issues on human-centered and transparent approaches raised in several principled documents.13

The EU Commission takes a three-step approach: setting out the essential requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus-building for human-centric AI.b Among others, the ACM Europe Technology Policy Council (TPC)2 collaborates with the EU Commission as a stakeholder and representative of the European CS community, providing technical input on relevant initiatives. While the Commission looks broadly at an assessment of AI from a general point of view to preserve the values of the European member states, a more comprehensive judgment will result if the predictive assessments of all the actors, that is, owners, designers, developers, and researchers, are taken into account.1,3,5,22 This process led to the AI Act, first proposed by the European Commission in April 2021 and in force in its final version since August 2024, which this article discusses later.

Essentials for Achieving Trustworthy AI Systems

Implementers of AI and autonomous systems must be aware of what we, as responsible citizens, can accept and what is ethical, and laws and regulations must be put in place to safeguard against future tragedies. Trustworthy AI should, according to the European Commission’s High-Level Expert Group on AI, respect all applicable laws and regulations as well as a series of requirements specific to the particular sector. Specific assessment lists aim to help verify the application of each essential requirement. The following list of essentials is taken from the EU document “Building Trust in Human-Centric Artificial Intelligence,” which results from the work of the European High-Level Expert Group on ethics.c Additional perspectives are covered in a report by the Alan Turing Institute.18

Developing trust in autonomous systems in the public sphere.

Human agency and oversight.  The essentials described above, pointing toward explainable and trustworthy AI, may be suitable for convincing professionals who knowingly interact with AI systems.6,7 It is similarly essential to ensure trust in these systems among the public. However, it is important to note that explainability in AI, particularly in deep neural networks (DNNs), remains a significant scientific challenge. Some scientists argue that the inherent complexity and high-dimensional nature of these models make it difficult, if not impossible, to fully explain their outcomes. This skepticism raises critical questions about the feasibility of achieving truly transparent AI systems.

Therefore, ways to establish individual trust in AI must be sought. For the public, however, more than detailed explanations of individual outcomes will be required. In Knowles and Richards,14 the authors call for building a public regulatory ecosystem based on traceable documentation and auditable AI, with a slightly different emphasis than the one on individual transparency and information for all.

Robustness and verification.  Given the complexity, more work needs to be done by interdisciplinary teams that bring social sciences and humanities expertise together with computer scientists, software engineers, legal scholars, and political scientists to investigate what meaningful control and verification procedures for AI systems might look like in the future.

Safety, risk issues, and ethical decisions.  In the domain of autonomous vehicles, the state of the art in collision avoidance means cars are trained not only to respect traffic rules rigorously but also to ‘drive cautiously’, that is, to negotiate rather than enforce the right of way. Even for unavoidable dilemma situations, legislation is underway that respects the ethical dilemma, also known as the trolley problem, investigated in Awad et al.4 and Goodall.11

In the context of public expectations, it is important to understand that there is no universally “right” answer when it comes to making decisions in dilemma situations. Primarily, an algorithm should not be constrained to making predefined decisions. Nevertheless, ongoing discussions about this topic persist in society. Furthermore, the lack of acceptance for autonomous driving can be attributed to the fact that humans are allowed to make mistakes, whereas there seems to be zero tolerance for any mistakes made by AI.

Cybersecurity.  In the cybersecurity domain, beyond attacks through the Internet, there are also AI-specific attacks, such as adversarial attacks, which researchers from the Tencent Keen Security Lab have successfully demonstrated.9 AI systems such as autonomous vehicles must demonstrably be able to defend themselves and go into a safe mode in case of doubt.

Physical security.  There might be physical attacks, such as throwing a paint bag against the cameras to blind an autonomous system or using a laser pointer against the LiDAR. In cases like these, the error handling must be capable of bringing the system into a safe mode.

Data privacy.  People have the right to determine if they want to be “filmed” and whether they want their location, date, and time to be recorded and shared. To build trust, autonomous systems manufacturers must adhere to the data-protection principles in the GDPR to ensure that no privacy rights are being violated.

Trust and human factors.  Different levels of trust and comfort may arise through explanation, for example, if an autonomous car explains its maneuvers to its passengers and road users outside the vehicle.

Trust and legal systems.  The decisive question is who or what caused the error: The human at the wheel? A flawed system? A defective sensor? Complete digitization makes it possible to answer these questions. To do this, however, extensive data must be stored. Open legal questions that need to be clarified in this context include who owns these data, who has access to the data, and whether this is compatible with privacy protection.

Public administration.  The answers to the above must be found because they represent significant citizen concerns. In our capacity as members of the ACM Europe TPC, we contribute to the work by the EU Commission and EU Parliament to establish harmonized rules for the use of AI. Our comments from the perspective of autonomous systems can be found in Saucedo et al.20

AI Legislation in the EU: The AI Act

AI policy work is underway globally in most industrial countries. Partnering with PricewaterhouseCoopers, the Future of Life Institute offers a dashboard on its website24 with a wealth of information and references to documents. According to their analysis, the approach to governing AI varies greatly between soft- and hard-law efforts, depending largely on how the following areas of concern are rated and prioritized by policymakers:

  • Global governance and international cooperation

  • Maximizing beneficial AI research and development

  • Impact on the workforce

  • Accountability, transparency, and explainability

  • Surveillance, privacy, and civil liberties

  • Fairness, ethics, and human rights

  • Manipulation

  • Implications for health

  • National security

  • Artificial general intelligence and superintelligence

Looking at the major players, we see:

  • United States. The White House has published a ‘Blueprint for an AI Bill of Rights’, a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of AI. There is currently no federal AI regulation in the U.S., but some states have taken steps to regulate particular use cases and the use of AI in specific industries. For example, California has passed a law requiring companies to disclose the use of automated decision making in employment and housing. Overall, the strategy is business-oriented. After the appearance of ChatGPT, the U.S. Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology and the Law held several hearings with leading AI academics to evaluate the risks of generative AI. In October 2023, the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed by President Biden, arose from a desire to address both the potential benefits and risks of AI.

  • China. China has been actively investing in AI and has taken steps to regulate its use, including developing national AI standards and guidelines for ethical use. The country has also established a national AI development plan that sets out its goals and objectives for the industry. China has significantly restricted the use of generative AI. ChatGPT is blocked within the Chinese network, and access to domestic alternatives is granted solely through individual application requests.

  • Canada. Canada has established the Pan-Canadian Artificial Intelligence Strategy, which aims to promote the responsible development and use of AI. The strategy includes funding for research, development, and innovation in AI, as well as ethical guidelines for its use.

  • United Kingdom. The U.K. has established the AI Council, which aims to promote the responsible use of AI and advise the government on AI regulation. The council has published guidelines on ethical use. The approach so far aims to ensure consumers “have confidence in the proper functioning of the system.”

  • The G7. During its summit meeting on May 20, 2023, in Hiroshima, the G7 issued a statement about what it called the ‘Hiroshima AI Process’.

    “We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organizations to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of 2024. These discussions could include topics such as governance, safeguard of intellectual property rights including copy rights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies.”10 In October 2023, this was followed by the publication of AI guidelines for a ‘Hiroshima Process’ for advanced AI systems and a code of conduct for developer organizations.

Figure 1.  The EU Risk Pyramid.

In the EU, preparations for AI regulation began in April 2021, when the EU Commission presented the Artificial Intelligence Act, which sets out horizontal rules for the development, commodification, and use of AI-driven products, services, and systems within the territory of the EU. It should be noted that the EU AI legislation does not regulate AI technology per se, but rather the effect of AI products on the lives of EU citizens. There is no intention to intervene in the development of AI products, but there is a claim to help shape their use in the EU. The regulation provides core AI rules that apply to all industries.

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories, as depicted in Figure 1. It imposes requirements for market entrance and certification of high-risk AI systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to ML training, testing, and validation datasets. The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically, and technically robust while respecting democratic values and human rights, including privacy and the rule of law.

This is claimed to be the first law worldwide to regulate AI in all areas of life, except the military sector. The legislative process reached a milestone in December 2023, when the EU Commission, the EU Council, and Parliament managed to reach an agreement in the so-called “trilogue.” After subsequent approval by votes in Parliament and the Council, the regulation came into force in August 2024, shifting the attention to the member states, which must set up supervisory bodies; to the standardization bodies, which must develop harmonized standards for high-risk AI compliance; and to the new AI Office, which must develop guidelines.

Who is affected by the new regulation?  Companies that plan to provide or deploy AI systems in the EU (the “providers and deployers” in the wording of the Act) are the primary addressees bound by the provisions of the AI Act. The provisions apply regardless of where the systems were developed or are operated from, whenever the operation of the systems impacts EU citizens. It will take courage and creativity to legislate this convoluted, interdisciplinary issue, and it will require non-EU, namely U.S. and Chinese, companies to adhere to values-based EU standards before their AI products and services gain access to the European market of 450 million consumers. Consequently, the regulation has an extraterritorial effect.

Given the limited awareness outside the EU, companies are well advised to start early to learn what is in the EU AI Act and what is needed to meet the compliance criteria.

The essence of the EU AI Act.  The AI Act contains the following sections, called chapters (see Table 1).d,13 A collection of all publicly available documents and amendments since the initial proposal of the AI Act may be found in Zenner.25

Table 1. Contents of the EU AI Act.
Chapter I: General Provisions. Outlines the proposal’s scope and how it would affect the market once in place.
Chapter II: Prohibited AI Practices. Defines AI systems that violate fundamental rights and are categorized at an unacceptable level of risk.
Chapter III: High-Risk AI Systems. Covers the specific rules for classifying AI systems as high risk, and the connected requirements and obligations for providers, deployers, and other parties.
Chapter IV: Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models. Lists transparency obligations for systems that interact with humans, detect emotions, determine social categories based on biometric data, or generate or manipulate content (for example, ‘deep fakes’).
Chapter V: General-Purpose AI Models. Classification rules, obligations for providers of general-purpose AI models, and GPAI models with systemic risk.
Chapter VI: Measures in Support of Innovation. AI regulatory sandboxes; testing of high-risk AI systems in real-world conditions.
Chapter VII: Governance. Establishes the Act’s governance systems, including the AI Office and the AI Board, and the monitoring functions of the European Commission and national authorities.
Chapter VIII: EU Database for High-Risk AI Systems. EU database for high-risk AI systems listed in Annex III.
Chapter IX: Post-Market Monitoring, Information Sharing, Market Surveillance. Sharing information on serious incidents; supervision, investigation, enforcement, and monitoring in respect of providers of general-purpose AI models.
Chapter X: Codes of Conduct and Guidelines. Guidelines from the Commission on the implementation of this regulation.
Chapter XI: Delegation of Power and Committee Procedure. Exercise of the delegation and committee procedure.
Chapter XII: Confidentiality and Penalties. Administrative fines on Union institutions, agencies, and bodies; fines for providers of general-purpose AI models.
Chapter XIII: Final Provisions. Amendments to several articles in other legislation.

The risk pyramid of the AI Act.  The main guiding point of the AI Act is the risk pyramid with a core focus on high-risk applications. The risk levels, as depicted previously in Figure 1, are summarized below.

Unacceptable risk.  This category delineates which uses of AI systems carry an unacceptable level of risk to society and individuals and are thus prohibited under the law. These prohibited use cases include AI systems that entail social scoring, subliminal techniques, biometric identification in public spaces, and exploiting people’s vulnerabilities. In these uses, the AI Act describes when and how exceptions may be made, such as in emergencies related to law enforcement and national security.

High risk.  Requirements related to high-risk systems, such as compliance with risk-mitigation requirements like documentation, data safeguards, transparency, and human oversight, are at the crux of this proposed regulation. The list of high-risk AI systems that must deploy additional safeguards is lengthy and can be found in Art. 6, Annex III of the Act.

Explainability plays a crucial role in ensuring that AI systems are transparent and trustworthy, particularly in domains where the risk of harmful decisions is high, for example, in the medical domain, where a false negative may be as harmful as a false positive. The EU AI Act requires that AI systems provide information on their decision-making process so that individuals can understand the basis for the AI system’s outputs and so that the systems are not used to manipulate behavior. Additionally, the requirement for human oversight and control over high-risk AI systems is based on the principle that there must be a human in the loop for decisions that have significant consequences for individuals’ rights and safety.19 The EU AI Act aims to ensure that AI systems are developed and deployed responsibly and transparently, considering the potential impact on individuals’ rights and safety. Harmonized standards, currently under development, are likely to play an important role in demonstrating the compliance of high-risk AI systems.
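As an illustration only, and not a form of explanation the Act itself prescribes, the sketch below shows one of the simplest explainability techniques for a neural classifier: a gradient-based saliency map that scores how strongly each input feature influenced the predicted class. It assumes a trained PyTorch model `model` and an input batch `x`; real high-risk systems would need far richer documentation and evaluation.

```python
import torch

def saliency(model, x):
    """Score how much each input feature influences the predicted class."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)                             # class scores for the batch
    top_class = scores.argmax(dim=1)              # the model's own prediction
    # Backpropagate the score of the predicted class to the input features.
    scores.gather(1, top_class.unsqueeze(1)).sum().backward()
    return x.grad.abs()                           # larger values = more influential
```

Gradient saliency is easy to compute but known to be fragile, which is one reason explainability for deep models remains an open research problem rather than a solved compliance checkbox.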

Limited risk.  Limited-risk AI systems impose far fewer obligations on providers and users than their high-risk counterparts. AI systems of limited risk must follow certain transparency obligations, outlined in Chapter IV of the Act. Examples of systems that fall into this category include biometric categorization (establishing whether an individual’s biometric data belongs to a group with some predefined characteristic in order to take a specific action), emotion recognition, and deep-fake systems.

Minimal risk.  The proposal’s language describes minimal-risk AI systems as all other systems not covered by its safeguards and regulations. There are no requirements for systems in this category. Of course, businesses with multiple kinds of AI systems must ensure compliance for each one appropriately.
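Purely as an illustration of how an organization might triage its systems against this pyramid, the following sketch maps each tier to an abbreviated list of obligations. The tier names follow Figure 1, but the obligation lists are shortened paraphrases, not the Act’s legal tests, and any real classification requires legal analysis of the Annexes and exceptions.

```python
from enum import Enum, auto

class RiskTier(Enum):
    UNACCEPTABLE = auto()   # prohibited practices (Chapter II)
    HIGH = auto()           # Annex III use cases and regulated products (Chapter III)
    LIMITED = auto()        # transparency obligations (Chapter IV)
    MINIMAL = auto()        # no additional requirements

# Abbreviated, non-exhaustive paraphrase of obligations per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk-management system",
        "data governance and documentation",
        "transparency and human oversight",
        "conformity assessment and CE marking",
    ],
    RiskTier.LIMITED: ["disclose AI interaction", "label generated or manipulated content"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the abbreviated obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```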

Handling general-purpose AI with or without systemic risk.  As a result of the increased general capabilities of several new AI models during the spring of 2023, and the broad adoption of ChatGPT, there were intense public debates and a delay of the EU Parliament’s proposal for the AI Act. The Parliament’s proposal from June 2023 came to include rules that the earlier proposals did not, on “foundation models” (see the definition in Art. 3) and on responsibilities of providers of generative AI (see, for example, Zenner25). These proved to be among the most intensely negotiated aspects of the AI Act and solidified into a set of obligations for all providers of general-purpose AI (GPAI) models, with a second tier of additional obligations for GPAI models with systemic risk, defined (see Chapter V) as those “having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain” (Art. 3(65)).

In brief, all providers of GPAI models must:

  • Draw up technical documentation, including the training and testing process and evaluation results, to be made available, upon request, to the AI Office and national supervisory authorities.

  • Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI systems, so that the latter understand its capabilities and limitations and are enabled to comply.

  • Put in place a policy to comply with EU law on copyright.

  • Publish a sufficiently detailed summary about the content used for training the GPAI model.

  • Free and open-license GPAI models whose parameters, including weights, model architecture, and model usage, are publicly available, allowing for access, usage, modification, and distribution of the model, only have to comply with the last two obligations above. This exception does not apply to GPAI models with systemic risk.

GPAI models are presumed to carry “systemic risk” when the cumulative amount of computation used for their training is greater than 10²⁵ floating-point operations (FLOPs), or when they have been found, through evaluation or a Commission decision, to have the high-impact capabilities that warrant this classification. If their model meets this criterion, providers must notify the Commission within two weeks. The provider may present arguments that, despite meeting the criteria, their model does not present systemic risks.
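To give a feel for the magnitude of that threshold, the sketch below estimates training compute with the common rule of thumb of roughly 6 × parameters × training tokens for dense transformer models; the approximation and the example numbers are illustrative assumptions, not part of the Act, which only states the 10²⁵ FLOP criterion.

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute named in the Act

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's presumption threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
print(f"{estimated_training_flop(70e9, 15e12):.2e} FLOP")   # about 6.3e24
print(presumed_systemic_risk(70e9, 15e12))                  # False, below 1e25
```

Under this rough approximation, only the very largest training runs to date cross the line, which connects to the concern discussed next.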

We consider it quite a leap to assume that more compute in training a model necessarily equals risks of negative impact on public health, safety, public security, and so on. The Commission is also quite autonomously mandated to change how “systemic risk” is allocated and can amend the criteria listed in Annex XIII, which may be meaningful in terms of how AI evolves but also opens the door to legal unpredictability.

In addition to the obligations for GPAI above, providers of GPAI models with systemic risk must also:

  • Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.

  • Assess and mitigate possible systemic risks, including their sources.

  • Track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.

  • Ensure an adequate level of cybersecurity protection.

In response to the complexities of AI regulation, the EU has established an AI Office to facilitate coordination on cross-border cases. However, the resolution of intra-authority disputes remains the responsibility of the Commission.

Assessment and How to Cope with the EU AI Act

For anyone wishing to put an AI system into operation in the EU, the AI Act serves as a reminder for developers to always prioritize the well-being of individuals and society as a whole. They must first assess the risk and, depending on the risk class, comply with requirements relating to transparency and security. It is expected to be particularly challenging for high-risk applications to obtain approval for the EU market. There will be a grace period until the various obligations or bans become applicable. Nevertheless, developers should analyze the respective compliance requirements at an early stage to adapt the development process accordingly. The strategy includes the following key elements:

  • Informing and training employees about the regulations and their obligations under the law. These cannot be understood without addressing the EU’s rationale for this law and the expectations of EU citizens regarding trustworthy AI. Researchers and developers must understand that automated and algorithmic decision making should be based on the principles and values enshrined in the Charter of Fundamental Rights (such as human dignity, equality, justice and equity, non-discrimination, informed consent, private and family life, and data protection) and the principles and values of Union law (such as non-stigmatization, and individual and social responsibility). Support from an interdisciplinary working group should therefore be planned for.

  • During the design of the systems, attention should be paid to transparency,5 to the nature and quality of the training data, and to its documentation, with a view to later evaluation by external reviewers. This also includes the establishment of a risk-management system (see Table 2).

  • Continuous investment in research and development, especially in the rapidly evolving methods of AI explainability; see Balasubramanian6 and Barredo Arrieta et al.7 Once an AI system is explainable, it may positively contribute to trustworthiness and form a step toward acceptance and approval.

  • Collaborate with other companies, potentially supervisory authorities, and organizations in the industry to share information and best practices for compliance. This can help reduce costs and ensure that all parties are on the same page when it comes to compliance.

Acknowledgements

This work was undertaken while working for the ACM Europe Technology Policy Committee (TPC) on autonomous systems. We are grateful for the support of and discussions with Chris Hankin, chair of the TPC. Further information may be found on the TPC website2 and in prior publications.3,17

    References

    • 1. ACM Code of Ethics and Professional Conduct. ACM (2018); https://bit.ly/4eNL1Tv
    • 2. ACM Europe Technology Policy Committee; https://bit.ly/3XL8LAY
    • 3. ACM Principles for Algorithmic Transparency and Accountability, Association for Computing Machinery (2017); https://bit.ly/4eSHbJ8
    • 4. Awad, E. et al. The Moral Machine experiment. Nature 563 (2018), 59–64; https://go.nature.com/47S6g4w
    • 5. Baeza-Yates, R. et al. ACM Technology Policy Council Statement on Principles for Responsible Algorithmic Systems. ACM (2022); https://bit.ly/3TUTnRo
    • 6. Balasubramanian, V. Toward explainable deep learning. Commun. ACM 65, 11 (Nov. 2022), 68–69; https://bit.ly/3Y9SwPa
    • 7. Barredo Arrieta, A. et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (2020), 82–115; https://bit.ly/4gRKeCY
    • 8. Building trust in human-centric artificial intelligence. EU Commission (2019); https://bit.ly/3zTMzwr
    • 9. Doctorow, C. Small stickers on the ground trick Tesla autopilot into steering into opposing traffic lane. Boing Boing (Mar. 31, 2019); https://bit.ly/47VQbux
    • 10. G7 Meeting Hiroshima. (May 2023); https://bit.ly/3U8BM8E
    • 11. Goodall, N.J. Machine ethics and automated vehicles. In Road Vehicle Automation, G. Meyer and S. Beiker (Eds.). Springer (2014), 93–102; https://bit.ly/3TQn2Ll
    • 12. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition (2019); https://bit.ly/47RhbLR
    • 13. Jobin, A., Ienca, M., and Vayena, E. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (2019), 389–399.
    • 14. Knowles, B. and Richards, J. The sanction of authority: Promoting public trust in AI. In Proceedings of the 2021 ACM Conf. on Fairness, Accountability, and Transparency (FAccT '21) (March 2021), 262–271.
    • 15. Larsson, S. and Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 9, 2 (2020); https://bit.ly/3XQODO0
    • 16. Larsson, S. On the governance of artificial intelligence through ethics guidelines. Asian J. Law and Society 7, 3 (2020), 437–451.
    • 17. Larus, J. et al. When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making. ACM (2018); https://bit.ly/3BtDkDV
    • 18. Leslie, D. Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute (2019); https://bit.ly/4dH2AUE
    • 19. Middleton, S. et al. Trust, regulation, and human-in-the-loop AI within the European region. Commun. ACM 65, 6 (April 2022), 64–68.
    • 20. Saucedo, A. et al. ACM Europe TPC Comments on Proposed AI Regulations. ACM (2021); https://bit.ly/3XN060S
    • 21. Shneiderman, B. Responsible AI: Bridging from ethics to practice. Commun. ACM 64, 8 (Aug. 2021), 32–35.
    • 22. Villani, C. For a meaningful artificial intelligence. Comitè d’Ètica de La UPC (2018); https://bit.ly/3XRxIuC
    • 23. Wood, M. Self-driving cars might never be able to drive themselves, Marketplace (2021); https://bit.ly/4dw15s5
    • 24. Yelizarova, A. Global AI policy. Future of Life (Dec. 16, 2021); https://bit.ly/4dtV27u
    • 25. Zenner, K. The implementation and enforcement of the EU AI Act: The documents. Digitizing Europe (Jul. 28, 2024); https://bit.ly/3BFRth4
