Opinion

Human Intuition and Algorithmic Efficiency Must Be Balanced to Enhance Data Mesh Resilience

Improving data governance methodologies.


Organizations that manage extensive and complex data environments are increasingly adopting the data mesh paradigm across all sectors. Data mesh is an architectural and organizational governance approach that treats data as a product, promoting domain-specific ownership and self-serve infrastructure.11 It encourages domain teams to manage their own data, with standardized metadata, governance, and a services layer for accessibility, which reduces centralization bottlenecks and improves data scalability and usability across complex organizations. In the commercial sector, multinational technology companies value data mesh for its domain-oriented governance and decentralized structure. Often managing large, heterogeneous data, they benefit from enhanced scalability and domain-specific data management. In the public sector, government agencies adopt data mesh for resilient, flexible data-product delivery.10 Defense organizations such as the U.S. Army use data mesh to provide reliable data products for informed decision making.4 Healthcare institutions and research organizations use data mesh to safeguard sensitive information, enhancing data security and authorized database access. Particularly with private, sensitive, or classified data, challenges arise in building data governance for these complex structures.14
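
To make the “data as a product” idea concrete, the following minimal Python sketch shows how a domain team might declare a data product with standardized metadata and an explicit access policy. The class, field names, and example product are hypothetical illustrations, not part of any particular data mesh implementation.

    # Minimal sketch of a data-product declaration in a data mesh.
    # All names here (owner_domain, schema_uri, access_policy) are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class DataProduct:
        name: str                   # product identifier exposed to consumers
        owner_domain: str           # domain team accountable for the data
        schema_uri: str             # standardized metadata: where the schema lives
        classification: str         # e.g., "public", "sensitive", "classified"
        access_policy: list[str] = field(default_factory=list)  # roles allowed to read

    readiness_reports = DataProduct(
        name="unit-readiness-reports",
        owner_domain="logistics",
        schema_uri="registry://schemas/readiness/v3",
        classification="sensitive",
        access_policy=["logistics-analyst", "readiness-officer"],
    )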

Central to this challenge is the effective implementation of data governance in emerging paradigms like data mesh architecture. Data mesh revolutionizes how organizations manage their vast data landscapes. Unlike traditional monolithic approaches where data is centralized, data mesh architecture decentralizes data ownership, echoing efficiencies of scale achieved by microservice architecture. By distributing ownership, teams closest to the data can govern and leverage it effectively, making it more accessible, reliable, and usable. To manage such distributed and complex operations, autonomous decision models are needed to evaluate user access to the data mesh, verify the correctness of user activity, correct data safety and security issues, and provide passive quality control that searches for unauthorized data mesh access and quarantines any damage to the architecture.
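
The kind of autonomous decision model described above can be pictured as a small access-evaluation routine. The sketch below reuses the hypothetical DataProduct from the earlier example; audit_log and quarantine are stand-ins for whatever logging and isolation services a real mesh would provide.

    # Sketch of an autonomous access check; helpers are illustrative stand-ins.
    def audit_log(msg: str) -> None:
        print("[audit]", msg)

    def quarantine(user: str, product_name: str, reason: str) -> None:
        print(f"[quarantine] {user} locked out of {product_name}: {reason}")

    def evaluate_access(user_role: str, product, purpose: str) -> bool:
        """Grant or deny a request; quarantine clearly invalid ones."""
        if user_role not in product.access_policy:
            audit_log(f"denied: {user_role} lacks rights to {product.name}")
            return False
        if product.classification == "classified" and purpose != "approved-mission":
            quarantine(user_role, product.name, reason="invalid purpose for classified data")
            return False
        audit_log(f"granted: {user_role} -> {product.name} for {purpose}")
        return True

    # Example: a valid role requesting the hypothetical product declared earlier.
    evaluate_access("logistics-analyst", readiness_reports, "readiness-dashboard")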

An effective data governance framework can enhance system security and resilience (preventing, avoiding, and mitigating misuse), as well as recovery capacity.9 Automated decision models can handle much of the former, providing efficiency and robustness. However, human judgment remains necessary to address errors from AI managing large data systems.1 This column explores a “human-in-the-loop” approach for AI data mesh management, especially for the U.S. Army, known as Cyber Expert LLM Safety Assistant (CELSA). CELSA combines AI efficiency and expert judgment to aid data mesh resilience by promptly addressing well-understood threats automatically and escalating novel threats for expert handling. This work aligns with the strategic plan launched by the recent U.S. Presidential Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.13

Optimizing Data Mesh Governance

Understanding the potential for errors is critical: the automated decision model may incorrectly identify a benign data activity as a threat or an anomaly (Type I error), blocking users from otherwise legitimate access, or it may grant access to unapproved users or for invalid purposes (Type II error). Each error type imposes its own costs. A Type I error causes unnecessary action to be taken, wasting people’s time or resources, but the quantity of that waste is bounded by the action itself. A Type II error represents a failure to provide the promised service of the system; the harms of this failure, when realized, are likely to be high consequence, but it is uncertain when they will be realized, if they are realized at all.5
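
A toy example makes the two error types concrete; in practice the ground-truth label only becomes known after the fact, and the function here is purely illustrative.

    # Toy labeling of decision outcomes; "blocked" is the decision model's output.
    def classify_outcome(blocked: bool, actually_malicious: bool) -> str:
        if blocked and not actually_malicious:
            return "Type I (false alarm): legitimate activity blocked"
        if not blocked and actually_malicious:
            return "Type II (missed detection): invalid access granted"
        return "correct decision"

    print(classify_outcome(blocked=True, actually_malicious=False))   # Type I
    print(classify_outcome(blocked=False, actually_malicious=True))   # Type II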

Novel resilience measures must balance human intuition and automated efficiency in data governance, prompting questions about how to incorporate human-in-the-loop capacity in the data mesh architecture. To enhance system resilience against anomalies, we contend that human judgment is crucial for identifying errors after an initial AI model evaluation (for example, by an adaptive large language model (LLM) that is frequently tested and updated).

Data mesh governance necessitates transparent interaction between the proposed LLM and a human expert panel. This starts when a user requests access or suggests data mesh changes, indicating potential disruptions by malicious or invalid users to the database architecture or entries. Rapid evaluation of such events is crucial, preferably near instantaneous, as access delays reduce the operational viability of data mesh products. Human judgment might suit small user bases or tightly constrained data mesh access instances, but it is not scalable for large entities, such as the U.S. Army or multinational corporations. CELSA provides a balanced approach to address scalability challenges in large organizations, combining automation for routine tasks, which reduces workforce needs, with specialized human oversight for rare, complex issues. This strategy ensures efficient service provision while maintaining quality and safety. Effective data mesh governance requires the LLM to perform initial screening, judgment, and preliminary summarization, sorting requests into two categories, suspicious or safe (see the accompanying figure), with a preliminary report generated to facilitate handling of suspicious activity. For example, the LLM-based system might flag an entry as suspicious if it matches known malicious patterns or resembles social engineering attacks; it will then generate a report explaining the comparison with known malign activities. These include phishing, where entries trick users into revealing sensitive data, or masquerading, where data seems legitimate but performs unauthorized actions. Other instances involve inadvertent exposure of personally identifiable information (PII) and breaches of classification protocol (ISOO Notice 2017-02). The LLM-human committee’s initial stance should assume relative risk aversion.
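
A minimal sketch of this screening step, under stated assumptions, might look as follows; llm_score is a stand-in for a real LLM call, and the 0.3 cutoff is an arbitrary, deliberately risk-averse choice rather than a CELSA parameter.

    # Hypothetical screening step: sort a request into "suspicious" or "safe"
    # and attach a preliminary report for the human committee.
    KNOWN_PATTERNS = ["phishing", "masquerading", "pii", "classification breach"]

    def llm_score(request_text: str, patterns: list[str]) -> tuple[float, str]:
        # Stand-in for an LLM: the score rises with each known malign pattern mentioned.
        hits = [p for p in patterns if p in request_text.lower()]
        score = min(1.0, 0.4 * len(hits))
        rationale = f"matched known patterns: {hits}" if hits else "no known pattern matched"
        return score, rationale

    def screen_request(request_text: str) -> dict:
        score, rationale = llm_score(request_text, KNOWN_PATTERNS)
        verdict = "suspicious" if score >= 0.3 else "safe"   # risk-averse threshold
        return {"verdict": verdict, "score": score, "report": rationale}

    print(screen_request("reset your credentials at this external link (suspected phishing)"))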

Identifying and correcting errors requires balancing trade-offs between Type I and Type II errors, which can be seen as a zero-sum game. Reducing one error type often heightens the other, necessitating an optimal equilibrium. Minimizing Type I errors leans conservative, accepting fewer changes to lower false alarms but raising missed detections (Type II errors). Minimizing Type II errors leans liberal, accepting more changes to bolster detection at the cost of more false alarms (Type I errors). Trade-offs extend to resource allocation and operational impact. Reducing Type I errors often requires rigorous vetting, slowing system operations, increasing human involvement, and causing economic losses. Conversely, addressing Type II errors demands sophisticated algorithms and comprehensive datasets, which is also resource intensive. Operationally, Type I errors can lead to excess caution and delays, while Type II errors might cause severe operational, security, or reputational damage from undetected threats. Learning from these errors involves trade-offs too. Type I errors reveal system oversensitivity and help refine threat detection, while Type II errors unveil vulnerabilities and blind spots, improving threat recognition and response. Navigating these trade-offs, especially in high-stakes settings such as the U.S. Army’s data mesh with CELSA, requires assessing context, operational needs, and risk tolerance.
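
The trade-off can be made concrete with a toy sweep over decision thresholds that minimizes a weighted expected cost, with missed detections penalized more heavily than false alarms. The scores, labels, and cost ratio below are illustrative, not calibrated to any real deployment.

    # Toy threshold sweep: Type II errors cost ten times more than Type I errors.
    COST_TYPE_I, COST_TYPE_II = 1.0, 10.0

    # (suspicion score from the model, whether the request was actually malicious)
    history = [(0.9, True), (0.7, True), (0.6, False), (0.4, False),
               (0.35, True), (0.2, False), (0.1, False)]

    def expected_cost(threshold: float) -> float:
        type_i = sum(1 for s, bad in history if s >= threshold and not bad)   # false alarms
        type_ii = sum(1 for s, bad in history if s < threshold and bad)       # missed detections
        return COST_TYPE_I * type_i + COST_TYPE_II * type_ii

    best = min((t / 100 for t in range(101)), key=expected_cost)
    print("lowest-cost threshold:", best)

Because the Type II penalty dominates in this toy example, the sweep settles on a low threshold, tolerating extra false alarms to avoid missed detections; a different cost ratio would shift the equilibrium.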

Optimal system performance strikes the right balance to allow most valid users through the screening process. Conversely, an overly strict system focusing on detecting misuse might exclude valid users. A case in point is California’s CalFresh SNAP benefits registration system, which blocked valid users due to an excessively rigorous screening process, resulting in low benefits usage despite high need.6 For suspicious entries, further scrutiny is needed. Using rules and models, the LLM halts user access and provides a preliminary report explaining why access was stopped. Suspicious entries then undergo in-depth evaluation by an AI-human committee of experts (five assumed in the accompanying figure). Human committee members contribute judgment and domain knowledge to the decision-making process, recommending that the LLM’s judgment be upheld or revised, or requesting additional review for uncertain cases while maintaining user quarantine and lockout. The AI-assisted human committee aims to promptly evaluate the LLM’s initial report and make quick validity determinations. A risk-averse committee examines the factors listed in the accompanying table.
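
One simple way to aggregate the five members’ recommendations is a majority vote that defaults to further review, keeping the user quarantined when no majority emerges; the vote labels and tie-breaking rule below are illustrative assumptions.

    # Illustrative aggregation of committee recommendations; not CELSA policy.
    from collections import Counter

    def committee_decision(votes: list[str]) -> str:
        """Each vote is 'uphold', 'revise', or 'review'."""
        winner, count = Counter(votes).most_common(1)[0]
        if count > len(votes) // 2:
            return winner
        return "review"  # no majority: keep the user quarantined pending further analysis

    print(committee_decision(["uphold", "uphold", "revise", "uphold", "review"]))  # uphold
    print(committee_decision(["uphold", "revise", "review", "revise", "uphold"]))  # review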

Table. 
Factors for enhancing risk assessment: Expedited validation by AI-assisted human committee.
Threat identification: Characterization of the type of threat the LLM has detected, such as a social engineering attack, an injection attack, or an inadvertent exposure of sensitive information.
Evidentiary support: Empirical evaluation to establish the basis on which the LLM has classified the entry as a threat. This could involve examining the specific characteristics of the entry that matched known threat patterns.
Impact assessment: Analysis of the data, systems, or operations that might be compromised if the threat is not addressed in a given period of time.
Source identification: Empirical evaluation of the origin of the threat to provide an understanding of motives that can guide response mitigation.
Remediation option identification: Consideration of potential actions to neutralize the threat based on LLM training data.
Threat mitigation proficiency assessment: Analysis of the LLM’s interpreted ability to neutralize the threat.
Historical precedent analysis for current situation mitigation: Evaluation of similar past situations and how they were mitigated to inform the current situation.
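
These factors could be operationalized as a structured review record attached to each suspicious entry, as in the illustrative sketch below; the field names simply mirror the factors and are not prescribed by CELSA.

    # Illustrative review record mirroring the table's factors.
    from dataclasses import dataclass

    @dataclass
    class RiskReview:
        threat_type: str               # threat identification
        evidence: str                  # evidentiary support
        impact: str                    # impact assessment
        source: str                    # source identification
        remediation_options: list      # remediation option identification
        mitigation_confidence: float   # threat mitigation proficiency assessment
        precedents: list               # historical precedent analysis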

The human committee’s decision prompts immediate action against the identified threat. The system’s performance is closely monitored afterward to confirm effective threat mitigation and restoration of data mesh integrity. Results feed back into the system, refining the AI’s threat detection and decision models and boosting future assessment accuracy. Reinforcing CELSA’s resilience against errors via human feedback enhances its usefulness. Simultaneously, the loop helps the committee refine its judgment and increase trust in the AI. This process guides new preventive measures against future occurrences; the continuous feedback loop thus improves the model and eases the committee’s load over time.
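
One way the feedback loop might be realized is to log each committee outcome alongside the LLM’s verdict and periodically fold the disagreements back into model updates, as in the sketch below; the retrain_model call and the batch size of 50 are placeholders for whatever fine-tuning or rule-update pipeline an organization actually runs.

    # Sketch of the feedback loop: committee outcomes refine the screening model.
    feedback_log: list[dict] = []

    def retrain_model(examples: list[dict]) -> None:
        print(f"refreshing the screening model on {len(examples)} reviewed cases")

    def record_outcome(request_text: str, llm_verdict: str, committee_verdict: str) -> None:
        feedback_log.append({
            "text": request_text,
            "llm": llm_verdict,
            "committee": committee_verdict,
            "disagreement": llm_verdict != committee_verdict,
        })
        # Fold corrections back into the model once enough have accumulated.
        if sum(1 for e in feedback_log if e["disagreement"]) >= 50:
            retrain_model(feedback_log)
            feedback_log.clear()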

Figure.  Integration plan for CELSA into the workflow for incoming changes proposed to a hypothetical data mesh.

Regarding Type II errors, the CELSA method demands an equally robust approach to detect, isolate, and rectify potential false negatives. Continuous monitoring is pivotal to mitigate consequences by assessing data mesh activities’ intent and outcomes.2 Once a possible error is identified, immediate action is taken to address the threat. This could mean isolating suspicious data, revising security protocols, or escalating the issue based on its severity. A comprehensive investigation follows to uncover what contributed to the error and to enhance the AI models. Proposed changes undergo human committee scrutiny, adding expertise to the AI’s findings. Upon committee approval, the updated system is carefully monitored, and the integrated feedback further refines the model. In this way, continual improvement and self-correction enable the LLM to learn and enhance governance for future occurrences.
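
A brief sketch of the monitoring-and-escalation step for suspected Type II misses follows; the severity levels and the actions attached to them are illustrative assumptions rather than a prescribed policy.

    # Illustrative escalation for suspected Type II misses.
    def handle_suspected_miss(activity_id: str, severity: str) -> str:
        if severity == "high":
            action = "isolate affected data products and alert the committee immediately"
        elif severity == "medium":
            action = "revoke the session, tighten access rules, and queue for committee review"
        else:
            action = "log for the next scheduled committee review"
        print(f"[{activity_id}] {action}")
        return action

    handle_suspected_miss("evt-4821", "high")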

Conclusion

In the realm of decentralized, large-scale information systems, data governance presents multifaceted challenges without standardized methodologies. A primary concern is ensuring automated decision-making algorithms operate within their prescribed mission parameters and constraints while mitigating the potential pitfalls associated with normative bias and human limitations.3 Effective governance within this context is not simply a matter of applying an AI solution to a given problem. Rather, it necessitates a thoughtful, strategic approach to merge the capacities of advanced AI technologies with the discernment inherent in human intuition.12 This balance is vital to capitalizing on the strengths of both dimensions and minimizing potential weaknesses.

The implementation of an AI-human hybrid system such as CELSA marks an important evolution in data governance methodologies. The purpose of such a system is to expedite and enhance the validation process for proposed changes to a data mesh. It does this by augmenting conventional threat detection tools and implementing a proactive, continuous learning approach to identify and combat a diverse range of cyber threats. However, the introduction of such a system also places increased demand on the resources of large organizations, particularly regarding the continuous training and refinement of the AI component. Managing errors highlights the complexity of governing a data mesh with an AI-human system: incorporating human judgment must account for the impacts of each error type. This underscores the importance of tailored resource allocation in large organizations to maintain AI efficiency, minimize user downtime, and enhance security.

Incorporating AI into data governance within a data mesh architecture necessitates a profound understanding of the system’s intricacies and potential vulnerabilities.8 Large organizations must be prepared to invest the necessary resources into system refinement, AI training, and carefully considered integration of human oversight. As the technological landscape continues to evolve, the careful orchestration of these elements will be pivotal to ensuring the safe, efficient, and resilient operation of data mesh systems. Therefore, in the pursuit of efficiency and resilience in data governance, targeted human judgment is not just a valuable component, but an essential one.

    References

    • 1. Amershi, S. et al.  Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conf. on Human Factors in Computing Systems (2019).
    • 2. Ansari, M. et al.  The impact and limitations of artificial intelligence in cybersecurity: A literature review. Intern. J. of Advanced Research in Computer and Communication Engineering (2022).
    • 3. Chiou, E. and Lee, J.  Trusting automation: Designing for responsivity and resilience. Human Factors 65, 1 (2023).
    • 4. Demarest, C.  Q&A: Army’s Jennifer Swanson talks data mesh and digital fluency. C4isrnet.com (2023); https://bit.ly/43ozIwH.
    • 5. Galaitsi, S. et al.  The ethics of algorithm errors. In Resilience and Hybrid Threats. IOS Press (2019).
    • 6. Gorman, A. and Rowan, H.  Why Millions of Californians Eligible for Food Stamps Can’t Get Them. npr.org (2018); https://bit.ly/43jaeAF.
    • 7. ISOO Notice 2017-02. Clarification of Classification by Compilation; https://www.archives.gov/isoo/notices.
    • 8. Kuziemski, M. and Misuraca, G.  AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy 44, 6 (June 2020).
    • 9. Linkov, I. and Kott, A.  Fundamental concepts of cyber resilience: Introduction and overview. Cyber Resilience of Systems and Networks (2019).
    • 10. Machado, I., Costa, C., and Santos, M.  Data-driven information systems: The data mesh paradigm shift. In Proceedings of the 29th Intern. Conf. on Information Systems Development, Valencia, Spain (2021); https://bit.ly/49RsbJ4.
    • 11. Machado, I., Costa, C., and Santos, M.  Data mesh: Concepts and principles of a paradigm shift in data architectures. Procedia Computer Science 196 (2022).
    • 12. Rudin, C.  Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 5 (May 2019).
    • 13. The White House.  Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (2023); https://bit.ly/43iWXZd.
    • 14. Winter, J. and Davidson, E.  Big data governance of personal health information and challenges to contextual integrity. The Information Society 35, 1 (2019).
    • 15. Zuiderwijk, A., Chen, Y., and Salem, F.  Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly 38, 3 (Mar. 2021).
