The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Is the deal linked to Google’s buying spree of seven robotics companies in December alone, including Boston Dynamics, “a company holding contracts with the US military”? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or is this just busybody gossip that naturally fills an information vacuum? The deal could simply be about improving search-engine functionality.
All this uncertainty is driving an unnerving question: What exactly is DeepMind so worried about that it insisted on creating an ethics board? Is it a basic preventative measure, or is it a Hail-Mary pass to save “humanity from extinction”? Whatever the answer, we don’t want to feed the rumor mill here. But as professional ethicists, we can shed some light on the mysterious nature of ethics boards and what good they can do.
It’s fair to assume that the smart folks at DeepMind have thought deeply about AI and its implications. AI is a very powerful technology that is largely invisible to the average person. Right now, AI controls airplanes, stock markets, information searches, surveillance programs, and more. These are important applications that can’t help but have a tremendous impact on society and ethics, increasingly so as futurists predict AI will become ever more pervasive in our lives.
AI developers are thus under pressure to get it right. Just as we’d want to make sure you knew how to be a responsible gun owner before selling you one, DeepMind seems to have the same concern for commonsense responsibility as it sells potent AI technology and expertise. But because DeepMind is looking for ethical guidance from a review board, there are key cautionary issues to keep in mind as we follow its development.