Despite the many wondrous uses of artificial intelligence (AI), examples of "fragile" AI are increasingly common. Facial recognition systems make serious errors when presented with images of people of color. Road sign recognizers mistake bird-stained stop signs for speed limit signs. Bias in training datasets skews decisions unfairly. Large language models spew out fluent text that, on closer inspection, makes little sense. Adversaries successfully confuse AI systems by corrupting sensor data. Automatic missile defense systems have sometimes mistaken commercial aircraft for enemy warplanes. Military drones cannot reliably distinguish noncombatants in an operation.3
Such examples have made the question "When can you trust an AI?" an urgent one. It has become a major challenge for professionals designing AI systems who must respond to client concerns about safety and reliability. A frequent feature of these failures is that the machines were being used in a context different from the one they were designed or trained for. In this column, we will discuss this context problem and strategies to mitigate it.
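To make the training-versus-deployment mismatch concrete, here is a minimal sketch (ours for illustration, not drawn from the column or any cited system; the data, classifier, and numbers are synthetic assumptions). A simple classifier is fit in one "context" and then evaluated after the data distribution shifts, mimicking a system used outside the context it was trained for.

```python
# Minimal sketch of the "context problem": a classifier fit in one context
# degrades when the data distribution it sees at deployment has shifted.
import numpy as np

rng = np.random.default_rng(0)

def make_context(n, shift):
    """Two Gaussian classes in 2-D; `shift` moves the whole context."""
    x0 = rng.normal(loc=-1.0 + shift, scale=0.5, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=0.5, size=(n, 2))
    x = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return x, y

# "Design-time" context: the data the system was trained on.
x_train, y_train = make_context(500, shift=0.0)
centroids = np.array([x_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Nearest-centroid classifier fitted to the training context."""
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Same context: accuracy is high. Shifted context: the same model misfires.
for shift in (0.0, 1.5):
    x_test, y_test = make_context(500, shift=shift)
    acc = (predict(x_test) == y_test).mean()
    print(f"shift={shift:.1f}  accuracy={acc:.2f}")
```

In the training context the toy classifier is nearly perfect; once the context shifts, the same unchanged model falls to roughly chance accuracy. Nothing about the model is "broken"; it is simply operating outside the context it was built for.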
There are, in fact, several fields within CS actively working on just this problem. The oldest, centered around the CONTEXT (International and Interdisciplinary Conference on Modeling and Using Context) conference series, has been around since the mid-1990s; the proceedings of its biennial conferences are published in Springer's Lecture Notes in Artificial Intelligence. The series has spawned books and at least one journal (Modeling and Using Context). Research on HCI and human-computer cooperation is very much part of this community's focus. The context-aware applications and ubiquitous computing communities are also important centers of research on context.
[[The following comment/response was submitted 28 November 2022 by Peter J. Denning and John Arquilla. -- CACM Administrator]]
Just because many are looking into the context problem does not mean that progress is being made toward a solution. Indeed, the very fact that so much attention is being paid to context -- without evidence of any kind of breakthrough advance -- suggests that it is a serious obstacle. In our view, it is an obstacle unlikely to be surmounted. This, we might suggest, should encourage more attention to the human-machine pairing endeavor, where progress is more likely to be achieved.
Peter Denning & John Arquilla