The Machine Wins 4-1: what does this mean for the future of AI?
Giulio Prisco
2016-03-24 00:00:00

In 1962 Fritz Leiber wrote a delightful science fiction short story titled “The 64-Square Madhouse.” The story, republished in several Leiber anthologies including “Day Dark, Night Bright,” describes the first grandmaster-level chess tournament with an Artificial Intelligence (AI) – called simply “The Machine” – among the participants. Leiber mentions The Turk (pictured below), a fake chess-playing machine constructed in the late 18th century; but now The Machine is real and about to win.

“To tell the truth, dear, the Machine is simply too good for all of us,” says one of the characters. “If it were only a little faster (and these technological improvements always come) it would out-class us completely.”

“We are at that fleeting moment of balance when genius is almost good enough to equal mechanism.”

The Machine doesn’t win Leiber’s fictional tournament because it breaks during a critical game, but the writing is on the wall. Today, a few decades after Leiber’s story, human genius isn’t good enough to equal mechanism at chess. But Go is more complex than chess, and until very recently no machine had beaten a human Go champion; experts predicted it would be at least another ten years before a computer could beat one of the world’s elite group of Go professionals.

In October 2015, Google DeepMind’s AlphaGo program defeated a top Go player – European Go champion Fan Hui. The announcement was delayed until the publication of a research article titled “Mastering the game of Go with deep neural networks and tree search” in Nature.

Google announced that “AlphaGo’s next challenge will be to play the top Go player in the world over the last decade, Lee Sedol.” The match has just ended – I am watching the press conference as I write – and AlphaGo won 4-1. I tweeted: “Congratulations to Lee Sedol and Google’s DeepMind research team for a milestone in AI history. ‘We are at that fleeting moment of balance when genius is almost good enough to equal mechanism.’ (Fritz Leiber)”

I was happy to see Lee Sedol win one game, which might well be the last time a top human Go player beats a top Go AI. There could be a rematch, and it seems likely that AlphaGo will play other human champions, but the AI will keep learning Go, finding new strategies by playing against itself and improving much faster than any human can.
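
The self-play loop described above can be sketched in a few lines. This is only a toy illustration under invented assumptions – the “game” is simply “higher total wins”, standing in for Go, and the reinforcement rule is a crude stand-in for the gradient updates used in real self-play reinforcement learning – but it shows the shape of the idea: a policy plays a frozen copy of itself and strengthens the moves from the games it won.

```python
import random

def sample_move(weights):
    """Sample a move (a digit 0-9) in proportion to its weight."""
    return random.choices(range(10), weights=weights)[0]

def self_play_train(generations=300, moves_per_game=5):
    """Toy self-play loop (an illustration, not AlphaGo's method):
    each generation, the learner plays a frozen copy of itself and
    reinforces the moves it made in any game it won."""
    weights = [1.0] * 10              # one weight per possible move
    for _ in range(generations):
        frozen = list(weights)        # yesterday's self as the opponent
        mine = [sample_move(weights) for _ in range(moves_per_game)]
        theirs = [sample_move(frozen) for _ in range(moves_per_game)]
        if sum(mine) >= sum(theirs):  # higher total "wins" this toy game
            for m in mine:            # reinforce the winning game's moves
                weights[m] += 1.0
    return weights

random.seed(0)
trained = self_play_train()
# A uniform policy has mean move 4.5; self-play should push it upward,
# since winning samples are by construction biased toward higher moves.
mean = sum(m * w for m, w in enumerate(trained)) / sum(trained)
print(mean)
```

The same positive-feedback structure – the opponent is always the learner’s own latest self, so the bar rises as the player improves – is why self-play can outpace learning from human games alone.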

Of course, the march of The Machine isn’t limited to Go.

“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems,” said AI researcher and co-founder of DeepMind Demis Hassabis in January. “Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis. We’re excited to see what we can use this technology to tackle next!”

“The victory is notable because the technologies at the heart of AlphaGo are the future,” notes Wired. “They’re already changing Google and Facebook and Microsoft and Twitter, and they’re poised to reinvent everything from robotics to scientific research.”

In fact, the AlphaGo breakthrough is likely to accelerate the development of AI projects and bring more funding to the sector. AI is rapidly becoming adept at high-level tasks previously considered too complex for automation, and there’s a lot of money to be made. So much money, in fact, that AI could become an exclusive preserve of the big tech giants – but perhaps open-source, Linux-like development has a chance. If you want to give open-source AI a chance, you can sign the “Humans for Transparency in Artificial Intelligence” petition and join the mailing list.

The question remains whether AIs will become superintelligent and conscious. The two are separable to a large extent: AlphaGo could become superintelligent in the narrow domain of Go (far stronger than any human player, able to win easily against anyone) within only a few months, and similar projects could achieve narrow superintelligence in more and more domains, yet I don’t expect a conscious version of AlphaGo anytime soon. In fact, I suspect that machine consciousness could require radically new hardware substrates and system software.

But sooner or later AIs will achieve some kind of consciousness. I find it difficult to imagine a complex intelligence without a sense of self. I think any really intelligent entity would have, as a necessary byproduct of the same computational complexity that makes it able to think intelligently, all sorts of subjective experiences (sentience) including emotions and feelings.

Emotions and feelings are not useless fluff but integral parts of human cognition, and I suspect that some kind of sentience, emotions, and feelings, must be an integral part of any type of sufficiently complex cognition.

Of course, the subjective experiences, emotions and feelings of future conscious AIs may be totally different from ours.

Go and AI experts have started analyzing the match between Lee Sedol and AlphaGo. In particular, AlphaGo’s move 37 in the second game is attracting a lot of attention: it is considered beautiful, effective – and inhuman. “It’s not a human move. I’ve never seen a human play this move,” said Fan Hui. The move was questioned at first, then acclaimed by experts. Drawing on a knowledge base built from both games played by human experts and games played against itself, AlphaGo “knew” that a human would be very unlikely to play that particular move – and also that the move was ultimately effective.
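
The distinction at work in move 37 – a very low probability that a human would play the move, a high estimated chance of winning by playing it – can be caricatured in a few lines. The numbers and move labels below are invented for illustration; AlphaGo’s real internals combine such priors and values inside a Monte Carlo tree search rather than comparing them directly.

```python
# Hypothetical candidate moves, each with a "prior" (how likely a human
# expert is to play it) and a "value" (estimated chance of winning after
# playing it). All numbers are made up for illustration.
candidates = {
    "conventional approach": {"prior": 0.30, "value": 0.52},
    "solid connection":      {"prior": 0.25, "value": 0.51},
    "move 37":               {"prior": 0.0001, "value": 0.57},
}

# A policy that imitates humans picks the most "human" move...
human_like = max(candidates, key=lambda m: candidates[m]["prior"])
# ...while a value-driven search picks the move that wins most often,
# however unlikely a human is to consider it.
value_driven = max(candidates, key=lambda m: candidates[m]["value"])

print(human_like)    # -> conventional approach
print(value_driven)  # -> move 37
```

The gap between those two selections is, in this cartoon form, why a move can look alien to every professional watching and still be the best move on the board.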

The excellent but inhuman move 37 reminds me of the superintelligent AIs imagined by Nick Bostrom in “Superintelligence” (2014): way smarter than us, but with consciousness, values and goals radically different from ours. Bostrom’s superintelligent Paperclip Machine is totally focused, for its own inscrutable reasons, on filling the universe with paperclips, and may choose to convert the whole mass of the Earth (including people) into paperclips. You and I are not interesting or relevant to the Paperclip Machine’s incomprehensibly strange consciousness – just some useless matter that stands in the way.

I don’t presently fear The Paperclip Machine that much, because I suspect radical breakthroughs toward real thinking machines will happen much slower than hoped by AI enthusiasts, and could require radically new hardware substrates and system software.

But radical breakthroughs will be developed someday.

In the best-case scenario, we will pass our most cherished values – love and compassion for all sentient life – to our superintelligent mind children, survive as their human core, spread to the stars, and become masters of space and time.

But annihilation by The Paperclip Machine or one of its friends remains a possible worst-case scenario, so perhaps we should pursue AI breakthroughs as a slow-paced cosmic task, without the reckless urgency that comes from hopeless despair – the despair that comes from knowing that we are going to die and there’s nothing we can do (though perhaps superintelligent AIs will think of something smart).

I have argued that religions of hope in personal resurrection – either traditional religions based on the “supernatural” or modern, Cosmist religions based on science – might be our best protection from the reckless pursuit of superintelligence and other risky technologies.

Image: The Turk, a fake chess-playing machine constructed in the late 18th century, referenced by Fritz Leiber in “The 64-Square Madhouse.” – Image adapted from Wikimedia Commons.