Imagine you are going to space and must choose between Spaceship 1 and Spaceship 2. Spaceship 1 has never flown, but it comes with precise equations describing exactly how it operates. No one knows how Spaceship 2 flies, but it has undergone considerable testing and years of successful flights, including the one you are about to take. Cassie Kozyrkov, chief decision scientist at Google, posed this dilemma at the World Summit AI in 2018. We cannot offer a solution, because the question is philosophical, and it raises a deeper one: which inspires more trust, explanation or testing?
For a while, it appeared that one issue with artificial intelligence (AI) algorithms, particularly cutting-edge deep learning techniques, was that they were black boxes: it was impossible to pinpoint precisely why a model predicted a particular outcome in a specific circumstance. Because of this lack of interpretability, businesses and governments were hesitant to adopt AI in critical sectors such as healthcare, banking, and public administration. The concern was serious enough that in April 2021 the European Commission released its AI package, including a proposed AI Act, recommending new rules and initiatives to make Europe a relevant hub for trustworthy AI, for example by regulating the use of AI in high-risk sectors.