East Asia and Oceania Region Special Section: Hot Topics

Operationalizing Responsible AI at Scale: CSIRO Data61’s Pattern-Oriented Responsible AI Engineering Approach


For the world to realize the benefits brought by AI, it is important to ensure artificial intelligence (AI) systems are responsibly developed and used throughout their entire life cycle, and trusted by the humans expected to rely on them.1 This goal has triggered a significant national effort to realize responsible AI (RAI) in Australia. CSIRO Data61 is the data and digital specialist arm of Australia’s national science agency. In 2019, CSIRO Data61 worked with the Australian government to conduct the AI Ethics Framework research. This work led to the release of eight AI ethics principles to ensure Australia’s adoption of AI is safe, secure, and reliable.a


It is challenging to turn high-level AI ethics principles into real-life practices.


This effort is shaped by Australia and New Zealand’s distinctive cultural values, such as the “fair go,” Indigenous community values, and wider Indo-Pacific regional thinking. To test the impact of the framework, CSIRO Data61 interviewed CSIRO scientists in 2020 to gain insight into how they implement the ethics principles in scientific projects. In 2021, adoption of the ethics principles was examined via case studies from some of Australia’s biggest businesses.b

Through our engagement with Australian industry and the wider Oceania region, we have identified five major challenges in operationalizing RAI at scale:

  • Challenge 1. A lack of connected, reusable solutions for operationalizing the principles. It is challenging to turn high-level AI ethics principles into real-life practices. Moreover, significant effort has gone into algorithm-level solutions that address only a subset of principles (such as fairness and privacy). To fill the gap, further guidance (such as guidebooks and checklists) has started to appear, but those efforts tend to be ad hoc sets of prompts.2
  • Challenge 2. Stakeholder interests in RAI risk management vary, demanding different but connected solutions. Organizations usually take a risk-based approach, but different stakeholders have different interests. For example, regulators or boards/executives may be more interested in harms and societal impacts, the associated preventive/corrective costs, and governance mechanisms, while developers care more about algorithmic risks and reliability/security techniques.
  • Challenge 3. Integrating RAI risk management into organizations’ existing governance frameworks. Organizations usually have centralized risk committees to manage traditional risks, such as financial, reputational, and legal risks, with some having dedicated cybersecurity and privacy risk management. These committees do not have the necessary scope to handle RAI risks, but creating dedicated AI risk committees for human values or fairness gives rise to risk silos. Some risks, such as reliability, might be best handled at the operational level.
  • Challenge 4. Communication barriers. It is often difficult for software engineers to explain the RAI risks of, or solutions for, their AI systems to management teams, and vice versa.
  • Challenge 5. Lack of talent and expertise. Organizations often lack RAI expertise, so their traditional risk experts end up managing RAI risks. Also, organizations often cannot afford to examine every AI project deeply, so their risk committees assess only high-risk projects and rely on project teams to self-assess.

CSIRO Data61 worked with the Australian government and industry to conduct research in Responsible AI Engineering.



RAI Pattern Catalogue and its Research Impact

CSIRO Data61 focuses on RAI engineering, which addresses end-to-end, system-level challenges. We built an RAI Pattern Cataloguec for different stakeholders in the AI industry.2,3 We analyzed and generalized successful case studies and best practices into reusable patterns (Challenge 1) and organized them into three interconnected categories (Challenge 2) for easier adoption and impact: governance patterns for establishing multilevel RAI governance, process patterns for setting up responsible development processes, and product patterns for building RAI-by-design (as illustrated in the accompanying figure).

Figure. Responsible AI Pattern Catalogue.

To describe each pattern, we created a template comprising summary, type, objective, target users, impacted stakeholders, relevant principles, context, problem, solution, consequences, related patterns, and known uses. Patterns are selected and tailored for different contexts, for example, by adding more quantitative assessment of residual risks and of the risk/cost increases they introduce in other parts of the system. To maintain the risk profile under consequences, we added pointers to measures, measurement methods, new risks, and residual risks (Challenge 3).
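To make the template concrete, here is a minimal sketch of a pattern rendered as a Python data structure. The field names follow the template above; the example entry and its values are illustrative assumptions, not items from the actual catalogue.

    from dataclasses import dataclass, field

    @dataclass
    class RAIPattern:
        # Fields mirror the pattern template described above.
        summary: str
        type: str                        # governance | process | product
        objective: str
        target_users: list[str]
        impacted_stakeholders: list[str]
        relevant_principles: list[str]
        context: str
        problem: str
        solution: str
        consequences: str                # carries pointers to the risk profile
        related_patterns: list[str] = field(default_factory=list)
        known_uses: list[str] = field(default_factory=list)

    # Hypothetical entry for illustration only.
    sandbox = RAIPattern(
        summary="Regulatory sandbox for high-risk AI applications",
        type="governance",
        objective="Trial AI systems under regulator supervision",
        target_users=["regulators"],
        impacted_stakeholders=["developers", "end users"],
        relevant_principles=["privacy", "fairness"],
        context="Industry-level oversight of facial recognition",
        problem="Unrestricted deployment of high-risk AI",
        solution="Specify ethical-quality requirements and use restrictions",
        consequences="Adds approval overhead; reduces societal risk",
        related_patterns=["software bill of materials",
                          "verifiable claim for AI system artifacts"],
    )
    print(sandbox.type, "->", sandbox.related_patterns)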


The RAI governance through APIs process pattern can be adopted to restrict the use of facial recognition APIs to approved users only.


Patterns are connected across levels and life-cycle stages through their related patterns (Challenge 4). For example, the regulatory sandbox is an industry-level governance pattern that may specify high ethical-quality requirements and use restrictions for facial recognition. To support the regulatory sandbox pattern, the software bill of materials pattern at the organization level and the verifiable claim for AI system artifacts pattern at the team level can be implemented to manage the ethical qualities of procured third-party facial-recognition technology. The RAI governance through APIs process pattern can be adopted to restrict the use of facial recognition APIs to approved users only. The bill of materials registry product pattern can be embedded as a system component to record supply-chain information.
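The sketch below shows, purely as an illustration, how the API-restriction step of this scenario might look in code; it is not CSIRO’s implementation, and the approved-user registry, endpoint, and function names are hypothetical.

    # Hypothetical gateway check realizing the "RAI governance through APIs"
    # pattern: only approved users may call the facial recognition API.
    APPROVED_USERS = {"airport-security-ops", "border-control-unit"}

    def call_facial_recognition_api(user_id: str, image_bytes: bytes) -> dict:
        """Forward the request only if the caller is on the approved registry."""
        if user_id not in APPROVED_USERS:
            raise PermissionError(
                f"User '{user_id}' is not approved to use facial recognition")
        # In a real system, the request would be forwarded to the model here.
        return {"status": "accepted", "user": user_id,
                "payload_bytes": len(image_bytes)}

    print(call_facial_recognition_api("airport-security-ops", b"\x00" * 16))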

In one task, we are assessing risks for CSIRO AI projects using our top-to-bottom, end-to-end RAI risk framework and recommending pattern-oriented mitigation strategies. The question bank for risk assessment is built along four dimensions: ethics principles, life-cycle stages, the stakeholders who ask the questions, and the stakeholders who answer them. To support automated self-assessment, we are building an RAI knowledge graph based on incidents, questions, and patterns, among other sources (Challenge 5).
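As a minimal sketch (the question text, tags, and helper name are hypothetical assumptions), the four dimensions can be modeled as tags on each question, so a self-assessment tool can select the relevant subset:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RiskQuestion:
        principle: str        # e.g., "privacy", "fairness"
        lifecycle_stage: str  # e.g., "design", "deployment"
        asked_by: str         # stakeholder posing the question
        answered_by: str      # stakeholder expected to answer
        text: str

    BANK = [
        RiskQuestion("privacy", "design", "risk committee", "developers",
                     "Is personal data minimized at collection time?"),
        RiskQuestion("fairness", "deployment", "regulators", "executives",
                     "How are disparate impacts monitored after release?"),
    ]

    def select(bank, **criteria):
        """Return questions whose tags match all the given dimensions."""
        return [q for q in bank
                if all(getattr(q, k) == v for k, v in criteria.items())]

    for q in select(BANK, principle="privacy", lifecycle_stage="design"):
        print(q.text)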

We offer a course on the RAI Pattern Catalogue in Australia’s nationwide AI graduate schools. Our industry partners have adopted some patterns in their governance policies and development guidelines—for example, for chatbot development.4

 

    1. Lu, Q. et al. Towards a roadmap on software engineering for responsible AI. In Proceedings of CAIN 2022.

    2. Lu, Q. et al. Responsible AI pattern catalogue: A multivocal literature review. 2022; arXiv:2209.04963.

    3. Lu, Q. et al. Responsible-AI-by-design: A pattern collection for designing responsible AI systems. IEEE Software (2023).

    4. Lu, Q. et al. Developing responsible chatbots for financial services: A pattern-oriented responsible AI engineering approach. IEEE Intelligent Systems (2023).
