Researchers at the Massachusetts Institute of Technology (MIT) and IBM Research have developed a method for comparing the reasoning of artificial intelligence (AI) software with that of human thinking, in order to better understand the AI's decision-making.
The Shared Interest technique compares saliency analyses of an AI decision against human-annotated datasets. It classifies the AI's reasoning into one of eight patterns, ranging from completely distracted (the AI makes incorrect predictions and attends to none of the evidence a human would use) to completely human-aligned (the AI makes correct predictions based on the same evidence a human would).
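The core idea can be illustrated with a short sketch. This is not the authors' implementation; the function names, thresholds, and the three-way labeling below are illustrative assumptions. It scores how much a model's salient region overlaps a human-annotated region, then combines those scores with prediction correctness to assign a rough alignment pattern, in the spirit of the extremes the article describes:

```python
import numpy as np

def alignment_metrics(saliency_mask, human_mask):
    """Overlap scores between a model's salient pixels and a
    human-annotated region (both boolean arrays of the same shape)."""
    saliency = np.asarray(saliency_mask, dtype=bool)
    human = np.asarray(human_mask, dtype=bool)
    intersection = np.logical_and(saliency, human).sum()
    # Fraction of the human annotation covered by the saliency map,
    # and fraction of the saliency map lying inside the annotation.
    coverage = intersection / max(human.sum(), 1)
    precision = intersection / max(saliency.sum(), 1)
    return coverage, precision

def classify(prediction_correct, coverage, precision, low=0.2, high=0.8):
    """Toy pattern labels inspired by the article's two extremes;
    the `low`/`high` thresholds are arbitrary illustrative choices."""
    if not prediction_correct and coverage < low:
        return "distracted"        # wrong answer, ignoring human evidence
    if prediction_correct and coverage > high and precision > high:
        return "human-aligned"     # right answer, same evidence as a human
    return "partially aligned"     # everything in between
```

For example, a model whose saliency map exactly matches the human annotation and predicts correctly would be labeled "human-aligned", while one that predicts incorrectly while attending to an unannotated region would be "distracted".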
Said MIT's Angie Boggust, "Providing human users with tools to interrogate and understand their machine-learning models is crucial to ensuring machine-learning models can be safely deployed in the real world."
From IEEE Spectrum
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA