Machine-learning artificial intelligences (AIs) become more capable with experience, but the trade-off is a lack of understanding about the nature of their intelligence, writes Nello Cristianini, a professor of artificial intelligence at the University of Bristol in the U.K.
Cristianini says modern AI systems can imitate complex human behaviors that cannot be fully modeled, but they do so in a manner unlike anything people do. For example, automated customer-service agents adapt their behavior to signals gleaned from customers' actions, continually learning and tracking their preferences.
To contend with novel situations, the AIs must be able to generalize, using data from similar customers or products in a form of pattern recognition.
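This kind of generalization can be illustrated with a minimal sketch: recommend to a new customer whatever the most similar known customer preferred. The customer data, feature vectors, and product names below are invented for illustration, and a nearest-neighbour lookup stands in for the far richer models real systems use.

```python
import math

# Each known customer is a feature vector (e.g. purchase counts per
# category, all values invented) paired with a preferred product.
known_customers = [
    ([5, 0, 2], "hiking boots"),
    ([0, 4, 1], "headphones"),
    ([1, 1, 6], "cookbook"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(new_customer):
    """Generalize: return the preference of the most similar known customer."""
    _, product = min(
        ((distance(features, new_customer), product)
         for features, product in known_customers),
        key=lambda pair: pair[0],
    )
    return product

print(recommend([4, 1, 1]))  # nearest to the first customer: "hiking boots"
```

The system never needs to understand why the recommendation fits; similarity in the data alone drives the behavior.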
A key challenge in machine learning is selecting the right features to correctly recognize patterns, and engineers are using deep-learning techniques to learn those features from data instead of programming them into computers directly.
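The contrast between hand-programmed and learned feature weights can be sketched with a single trained unit; a deep network stacks many such units, but the principle is the same. The toy data and learning rate below are invented, and this single-layer model is only a stand-in for real deep learning.

```python
import math

# Toy task (invented): label a point 1 when its two inputs sum past 1.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.1, 0.4], 0), ([0.7, 0.9], 1)]

w = [0.0, 0.0]  # feature weights: learned, not hand-coded
b = 0.0
lr = 0.5        # learning rate, chosen arbitrarily for this sketch

for _ in range(1000):
    for x, y in data:
        # Linear score squashed to (0, 1) with the logistic function.
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        # Gradient step: which inputs matter, and by how much, is
        # adjusted from prediction errors rather than programmed in.
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(v, 2) for v in w])  # both inputs end up weighted positively
```

After training, the weights encode which input features drive the decision, even though no engineer ever specified them.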
The concurrent and ongoing application of these mechanisms on a massive scale induces highly adaptive behavior that appears intelligent, yet AI systems do not need the type of self-awareness that humans consider the mark of actual intelligence. One example is machine-translation systems, which use statistics to translate instead of following linguistic rules.
Cristianini says the result is computers that can translate accurately without shedding any light on how humans derive meaning from sentences.
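The statistics-over-rules idea can be sketched as picking each word's translation by how often the pairing occurred in parallel text, with no grammar rules at all. The tiny "corpus" of aligned word pairs below is invented for illustration and is far cruder than real statistical machine translation.

```python
from collections import Counter

# Invented aligned word pairs, as if extracted from parallel text.
aligned_pairs = [
    ("chat", "cat"), ("chat", "cat"), ("chat", "chat room"),
    ("noir", "black"), ("noir", "dark"), ("noir", "black"),
]

# Count how often each source word was paired with each target word.
counts = {}
for src, tgt in aligned_pairs:
    counts.setdefault(src, Counter())[tgt] += 1

def translate(words):
    """Translate word-by-word using the most frequent observed pairing."""
    return [counts[w].most_common(1)[0][0] for w in words]

print(translate(["chat", "noir"]))  # the most frequent pairings win
```

The program "translates" purely from counts; nothing in it represents nouns, adjectives, or meaning.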
From New Scientist
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA