Last year, computer scientists at the University of Montreal in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark's source code wasn't published. The researchers had to recreate it from the published description. But they couldn't get their version to match the benchmark's claimed performance. "We tried for 2 months and we couldn't get anywhere close," says Nan Rosemary Ke, a Ph.D. student in the U of M lab.
The booming field of artificial intelligence is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols.