I attended the Intelligence Squared artificial intelligence debate at the 92nd St. Y’s Seven Days of Genius Festival (March 9th) and felt like I had a seat at the edge of the world.
For better or worse, this is arguably the central debate of the 21st century. The motion was “Don’t Trust The Promise of AI.” Jaron Lanier and Andrew Keen took the position that we shouldn’t trust AI, while Martine Rothblatt and James Hughes argued we should.
There were transhumanists on both sides of the debate, which made things interesting. For those unfamiliar, “transhumanism is an international and intellectual movement aiming to transform the human condition by developing and creating widely available sophisticated technologies to greatly enhance human intellectual, physical, and psychological capacities.”
Lanier and Keen, the debaters arguing we should not trust AI, were primarily challenging “the culture” surrounding it. Their position was that we don’t know enough and that the technology is moving too fast for us to ensure control. They raised consequences such as job loss, and drew an analogy to the separation of church and state, suggesting that people who deify artificial intelligence, or want to merge with it, should not be the ones engineering it.
Rothblatt and Hughes addressed the subject by framing it from the perspective of the Enlightenment. In their words, AI is a condensation, a crystallization, and an extension of human intelligence. In other words, not trusting the promise of artificial intelligence is akin to not trusting the promise of human intelligence. AI is evolving through a human selection process, the same way we’ve engineered crops through agriculture or domesticated animals through breeding. They described how artificial intelligence could assist with diseases of the mind, serving as a sort of mental wheelchair for Alzheimer’s or dementia patients. They predicted we will grow to love AIs the way we love other people and pets. And they pointed out that refusing to build AI ourselves won’t stop authoritarian regimes from building it.
“But our decisions to be pessimistic about artificial intelligence will have no effect on the application by China or North Korea, or other authoritarian regimes. It is our own embrace in liberal democracy of these powerful tools, making our society as strong and as effective as possible that will determine its future. So, future AI will allow us to understand the complexity of the genome, unlock health and longevity for our children. It will not determine whether there’s universal access to health care. That is on us. Future AI will allow us to displace routine labor and make possible abundance and leisure for all. But it will not tax the rich. It will not determine if we create a safety net, and universal basic income so that we can all benefit from that universal abundance. That is on us. Future AI will allow us to make better collective decisions, to understand the consequences of our actions. But it will not determine whether we have a totalitarian government or a democratic one.”
– James Hughes
At Intelligence Squared debates the audience votes twice, before and after the debate. Before the debate 30% of the audience distrusted AI, 41% trusted it (29% were undecided). After the debate 59% of the audience distrusted AI; Lanier and Keen picked up 29 percentage points.
I was surprised by the outcome, not only because I believed Hughes and Rothblatt had the better arguments, but because the point swings in Intelligence Squared debates are usually not so large.
The motion “Don’t Trust The Promise of AI” might not have been the best frame for the debate. I remember the double negative causing confusion in the audience: voting FOR the motion meant distrusting AI. Some people may have voted against the motion at the start thinking they were voting against AI, then realized their mistake and voted FOR at the end. The winner of an Intelligence Squared debate is determined by who sways the most votes, so it’s also possible that some audience members gamed the system by deliberately switching sides between the two votes.
Lanier and Keen said a number of things I thought would defuse people’s fears about artificial intelligence. Lanier mocked Terminator scenarios and said no serious person could oppose the promise of pursuing information, sensing systems, and algorithms. Their biggest concerns were cultural and had nothing to do with AI as a totalitarian overlord; their fears had more to do with the transitions society will need to make, and how we will adjust to the disruption AI will inevitably cause along the way. Namely, they feared job loss and the changes AI will bring to our economy. They were arguing against the “promise” of AI, not AI itself.
Ultimately, these concerns do not overshadow the consequences of not building AI, which is the takeaway in my view. AI is a technology people will use to solve many of the world’s problems, in order to find new problems to solve. It won’t create a utopia, but it will be conducive to an “extropia” (a society that constantly improves). Deciding not to build AI would pose a greater existential risk. Passivity for the sake of precaution would ensure AI’s development elsewhere, possibly landing it in the hands of the last people you’d want to have it. We should move forward, not as reckless optimists incapable of foreseeing potential problems, but as optimists who want to create a better future.