
Communications of the ACM

ACM News

Mr. Robot


Geoffrey E. Hinton is an Engineering Fellow at Google managing Brain Team Toronto, a new part of the Google Brain Team that does basic research on ways to improve neural network learning techniques.

Credit: Daniel Ehrenworth

For more than 30 years, Geoffrey Hinton hovered at the edges of artificial intelligence research, an outsider clinging to a simple proposition: that computers could think like humans do—using intuition rather than rules. The idea had taken root in Hinton as a teenager when a friend described how a hologram works: innumerable beams of light bouncing off an object are recorded, and then those many representations are scattered over a huge database. Hinton, who comes from a somewhat eccentric, generations-deep family of overachieving scientists, immediately understood that the human brain worked like that, too—information in our brains is spread across a vast network of cells, linked by an endless map of neurons, firing and connecting and transmitting along a billion paths. He wondered: could a computer behave the same way?

The answer, according to the academic mainstream, was a deafening no. Computers learned best by rules and logic, they said. And besides, Hinton's notion, called neural networks—which later became the groundwork for what we now call "deep learning" or "machine learning"—had already been disproven. In the late '50s, a Cornell scientist named Frank Rosenblatt had proposed the world's first neural network machine. It was called the Perceptron, and it had a simple objective—to recognize images. The goal was to show it a picture of an apple, and it would, at least in theory, spit out "apple." The Perceptron ran on an IBM mainframe, and it was ugly. A riot of criss-crossing silver wires, it looked like someone had glued the guts of a furnace filter to a fridge door. Still, the device sparked some serious sci-fi hyperbole. In 1958, the New York Times published a prediction that it would be the first device to think like the human brain: "[The Perceptron] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
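For readers curious how such a machine could "learn" at all: the idea behind the Perceptron was a weighted sum of inputs thresholded to a yes/no answer, with the weights nudged toward the correct answer on every mistake. The sketch below is a minimal modern rendering of that rule in Python—the toy data and function names are illustrative, not from the article or from Rosenblatt's hardware.

```python
# A minimal sketch of the perceptron learning rule: a weighted sum of
# inputs, thresholded to a binary output, with weights adjusted only
# when the prediction is wrong.

def predict(weights, bias, x):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Fit weights by nudging them toward the correct label on each mistake."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            if error:  # update only on mistakes
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# A linearly separable toy problem (logical OR) the perceptron can solve.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # [0, 1, 1, 1]
```

A single perceptron can only separate classes with a straight line, which is exactly the limitation—famously, its inability to compute XOR—that helped make the device a punchline.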

The Perceptron didn't end up walking or talking—it could barely tell left from right—and became a joke. In most academic circles, neural networks were written off as a fringe pursuit. Nevertheless, Hinton was undeterred. "The brain has got to work somehow and it sure as hell doesn't work by someone writing programs and sticking them in there," Hinton says. "We aren't programmed. We have common sense." The neural networks idea wasn't faulty, he believed; the main problem was power. Computers back then couldn't wade through the millions of images needed to make connections and find meaning. The sample size was just too small.

Hinton pursued a Ph.D. at the University of Edinburgh in 1972, with neural networks as his focus. On a weekly basis, his advisor would tell him he was wasting his time. Hinton pressed forward anyway. Neural networks did have some minor success—they later proved useful in detecting credit fraud—and after graduation, he was able to land a job at Carnegie Mellon University in Pittsburgh.

Hinton, a proud socialist, grew troubled by U.S. foreign policy under Reagan, especially interference in Central America. He and his wife, Ros, a molecular biologist and former professor at University College London, were planning to adopt a boy and a girl from South America, and they didn't much like the idea of raising them in a country engaged in a bloody Latin American conflict. Plus, most AI research in the U.S. was funded by the Department of Defense, which didn't sit well with Hinton either, and so he accepted an offer from the Canadian Institute for Advanced Research. CIFAR, which encourages collaboration around the kind of unorthodox scientific ideas that might not find backers elsewhere, offered Hinton academic freedom and a decent salary.

In 1987, he and Ros moved north and settled in the Annex. Hinton accepted a CIFAR-related position at the University of Toronto in computer science—although he'd never taken a computer science course—and started the Learning in Machines and Brains program at CIFAR. He set up a small office in the Sandford Fleming building at the St. George campus and quietly got to work. Over time, a handful of fellow deep learning believers gravitated to him. Ilya Sutskever—now a co-founder and director at OpenAI, Elon Musk's $1-billion AI non-profit—remembers being part of Hinton's lab in the early 2000s with the kind of nostalgic fondness usually reserved for summer camp. He describes 10 or so students researching during the "AI winter," when jobs and funding in AI research were scarce, and offers from industry scarcer. "We were outsiders, but we also felt like we had a rare insight, like we were special," says Sutskever.


From Toronto Life



