Does predictability provide an overriding concept and perhaps a metric for evaluating when LAWS are acceptable or when they might be unacceptable under international humanitarian law? Arguably, if the behavior of an autonomous weapon is predictable, deploying it might be considered no different from, for example, launching a ballistic missile. This, of course, presumes that we can know how predictable the behavior of a specific autonomous weapon will be.
Here’s an interesting idea, taken from Aaron Wright and Primavera de Filippi’s article ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’. The article provides an excellent overview of blockchain technology and its potential impact on the law, and it closes with a striking historical reflection: the growth of blockchain technology may give rise to a new type of legal order, a lex cryptographia, much as the growth of international trading networks gave rise to a lex mercatoria and the growth of the internet gave rise to a lex informatica.
Deadly environmental pollution has become an existential risk that threatens the prospect for the long-term survival of our species and a great many others. Here we will focus on the nuclear waste aspect of the problem and ways to mitigate it before there is a critical tipping point in our global ecosystem.
As philosopher Nick Bostrom said in his 2002 paper titled “Existential Risks,” published in the Journal of Evolution and Technology, “Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges.”1
At the dawn of human history, the intelligence of those who came before us was probably scarcely inferior to that of the reader of these lines. Some paleontologists even believe that our ancestors’ reasoning capacities were superior to our own.
Phil Torres’ new book, The End: What Science and Religion Tell Us about the Apocalypse, is one of the most important books recently published. It offers a fascinating study of the many real threats to our existence, provides multiple insights into how we might avoid extinction, and is carefully and conscientiously crafted.
Some things in life cannot be offset by a mere net gain in intelligence.
The last few years have seen the widespread recognition that sophisticated AI is under development. Bill Gates, Stephen Hawking, and others warn of the rise of “superintelligent” machines: AIs that outthink the smartest humans in every domain, including common sense reasoning and social skills. Superintelligence could destroy us, they caution. In contrast, Ray Kurzweil, a Google director of engineering, depicts a technological utopia bringing about the end of disease, poverty and resource scarcity.
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. If we don’t introduce enough security measures, the smart machines will eat not only our jobs but us as well.
AI has come a long way since 2010. If you were to travel back in time six years and ask an artificial intelligence researcher about the future of AI, it’s likely he or she would have predicted that it would never reach its full potential as originally envisioned by its founders.
The trend is clearly visible: sensors and actuators, together with computation, memory, and communication capabilities, are making all the objects around us smarter and smarter. Too often, whether we call them robots or AIs, the trend is depicted in menacing tones, represented in the dystopian futures preferred by Hollywood movies, and shapes the gut reactions of policymakers eager to please the reactionary impulses of their electorates.
The field of Existential Risk Studies has, to date, focused largely on risk scenarios involving natural phenomena, anthropogenic phenomena, and a specific type of anthropogenic phenomenon that one could term “technogenic.” The first category includes asteroid/comet impacts, supervolcanoes, and pandemics. The second encompasses climate change and biodiversity loss. And the third deals with risks that arise from the misuse and abuse of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence.
A decade ago AI research wasn’t as hot as it is now. But right now, in 2016, AI is very much a profitable endeavor. Many now argue that AI carries risks of: (a) mass unemployment, (b) mass political destabilization (for instance, mass abuse of intelligent drones by terrorists), or even (c) a hard take-off of self-improving AI triggering a so-called “singularity”, which, put very briefly, we might describe as “a point beyond which we don’t have a clue what happens next”.
In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st-century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.
Michio Kaku (1947 – ) is the Henry Semat Professor of Theoretical Physics at the City College of New York of City University of New York. He is the co-founder of string field theory and a popularizer of science. He earned his PhD in physics from the University of California-Berkeley in 1972.
I want to elaborate briefly on an issue that I mentioned in a previous article for the IEET, in which I argue (among other things) that we may be systematically underestimating the overall probability of annihilation. The line of reasoning goes as follows:
In the 1980s, the movies The Terminator and RoboCop introduced the world to the concept of the killer robot. While those films and others represented the peak of science fiction for many in the 80s and 90s, in reality, the militarization of robots and the development of automated weapons systems have been going on for more than 15 years, according to researcher and activist Noel Sharkey. That buildup of weapons, he believes, poses a great danger to society.
Hans Moravec (1948 – ) is a faculty member at the Robotics Institute of Carnegie Mellon University and the chief scientist at Seegrid Corporation. He received his PhD in computer science from Stanford in 1980, and is known for his work on robotics, artificial intelligence, and writings on the impact of technology, as well as his many publications and predictions focusing on transhumanism.
Since the first species of Homo emerged in the grassy savanna of East Africa some 2 million years ago, humanity has been haunted by a small constellation of improbable existential risks from nature. We can call this our cosmic risk background. It includes threats posed by asteroid/comet impacts, supervolcanic eruptions, global pandemics, solar flares, black hole explosions or mergers, supernovae, galactic center outbursts, and gamma-ray bursts. While modern technology could potentially protect us against some of these risks — such as asteroids that could induce an “impact winter” — the background of existential dangers remains more or less unchanged up to the present.
IEET Affiliate Scholar Phil Torres has published a book on Existential Risks, titled The End: What Science and Religion Tell Us About the Apocalypse. The Foreword was written by IEET Fellow Russell Blackford.
At the end of the eighteenth century, tinkerers built the first music boxes: subtle little mechanisms capable of playing harmonies and melodies on their own. Some included bells, percussion, organs, and even violins, all coordinated by a rotating cylinder. The most ambitious examples were veritable Lilliputian orchestras, such as the Panharmonicon, invented in Vienna in 1805, or the Orchestrion, mass-produced in Dresden in 1851.
Numerous innovations have the potential to dramatically augment human cognition and capabilities. They could magnify the economy and give rise to other, even more powerful technologies. Our response to this is crucial.
Philosophy could be an important conceptual resource in shaping human-technology interactions for several reasons. First, philosophy concerns the topics of world, reality, self, society, aspirations, and meaning, all of which we hope to reconfigure and accentuate in our relations with technology. Improving human lives is, after all, one of the main purposes of technology. Second, philosophy relates to thinking, logic, reasoning, and being, which are the key capacities we would like our technological entities to have.
I think metaphors are important. They can help to organise the way we think about something, highlighting its unappreciated features, and allowing us to identify possibilities that were previously hidden from view. They can also be problematic, biasing our thought in unproductive ways, and obscuring things that should be in plain view. Good metaphors are key.
When someone is asked to name one thing they’d like to change about themselves, rarely do they answer, “I’d like to change my brain.” But changing the way your brain works is possible, according to author and neuroanatomist Dr. Jill Bolte Taylor, and ongoing research into the inner workings of the human brain will have a profound effect on today’s younger generation and many more generations to follow.
This post is the second in a short series looking at the arguments against the use of fully autonomous weapons systems (AWSs). As I noted at the start of the previous entry, there is a well-publicised campaign that seeks to pre-emptively ban the use of such systems on the grounds that they cross a fundamental moral line and fail to comply with the laws of war. I’m interested in this because it intersects with some of my own research on the ethics of robotic systems. And while I’m certainly not a fan of AWSs (I’m not a fan of any weapons systems), I’m not sure how strong the arguments of the campaigners really are.
Obviously, OpenAI is a super-impressive initiative. I mean — a BILLION freakin’ dollars, for open-source AI, wow!!
So now we have an organization with a pile of money available and a mandate to support open-source AI, and a medium-term goal of AGI … and they seem fairly open-minded and flexible/adaptive about how to pursue their mandate, from what I can tell…
Opinions expressed by Hawking, Gates, and Musk about the dangers of artificial intelligence rang loud and clear in 2015, and continue to echo into the new year. Since then, there have been plenty of predictions of humanity’s doom at the hands of autonomous machines. But leading thinkers in the AI space have also come out from behind the curtain to play devil’s advocate and make their opposing positions clear.