For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled "tabby cat" or "tiger cat," for example, to "train" an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.
Such "supervised" training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.
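The cow-and-grass shortcut can be made concrete with a toy sketch. Here, a hypothetical perceptron is trained on images reduced to two made-up binary features, `has_cow` and `has_grass`; because grass perfectly co-occurs with the "cow" label in the training set, the model leans on the spurious cue and calls an empty grassy field a cow:

```python
# Toy illustration of shortcut learning (hypothetical features, not a real
# vision pipeline): each "image" is a pair (has_cow, has_grass).
# In the training data, grass always appears alongside cows.
train_X = [(1, 1), (1, 1), (0, 0), (0, 0)]  # cows on grass, non-cows indoors
train_y = [1, 1, 0, 0]                      # 1 = labeled "cow"

# Train a simple perceptron on the labeled examples.
w = [0.0, 0.0]
b = 0.0
for _ in range(10):  # a few passes is enough for this tiny set
    for x, y in zip(train_X, train_y):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(predict((1, 1)))  # cow on grass -> 1, as trained
print(predict((0, 1)))  # grass, no cow -> still 1: the shortcut misfires
```

Because the two features were perfectly correlated during training, the perceptron assigns them equal weight, so grass alone is enough to trigger the "cow" prediction on data that breaks the correlation.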
"We are raising a generation of algorithms that are like undergrads [who] didn't come to class the whole semester and then the night before the final, they're cramming," said Alexei Efros, a computer scientist at the University of California, Berkeley. "They don't really learn the material, but they do well on the test."
From Quanta Magazine