Games have long been a fertile testing ground for the artificial intelligence community, and not just because of their accessibility to the popular imagination. Games also enable researchers to simulate different models of human intelligence, and to quantify performance. No surprise, then, that the 2016 victory of DeepMind's AlphaGo algorithm—developed by 2019 ACM Prize in Computing recipient David Silver, who leads the company's Reinforcement Learning Research Group—over world Go champion Lee Sedol generated excitement both within and outside the computing community. As it turned out, that victory was only the beginning; subsequent iterations of the algorithm have been able to learn without any human data or prior knowledge except the rules of the game and, eventually, without even knowing the rules. Here, Silver talks about how the work evolved and what it means for the future of general-purpose AI.
You grew up playing games like chess and Scrabble. What drew you to Go?