Typhoons are generally associated with mass destruction, but a Japanese engineer has developed a wind turbine that can harness the tremendous power of these storms and turn it into useful energy. If he’s right, a single typhoon could power Japan for 50 years.
At Singularity University we address the world’s greatest challenges through the application of exponential technologies, spreading knowledge through conferences, educating through our courses, and creating, accelerating, and funding startups.
Blockchains as the new platform for technological innovation invite the creative imagining of applications at both the level of technology use and in the rethinking of economic principles. Some recent developments include optimism about rising Bitcoin prices and the rewards-halving milestone, trepidation about scalability, block size, and the latest hacking scandal of the Ethereum DAO, and fast-paced single ledger adoption by financial institutions.
December 2015 saw the signing of a so-called “universal” agreement by 195 countries, one that may mark a turning point in how humans collectively conceive of their relationship to the Earth. Technoprogressives can welcome it on two counts. On the one hand, it should make it possible to better confront the immense challenges that the climate crises impose on us; on the other, far from any fundamentalist ecologism, it recognizes, in its Article 10, “the importance of fully realizing technology development and transfer in order to improve resilience …”.
Discovered in an ancient shipwreck near Crete in 1901, the freakishly advanced Antikythera Mechanism has been called the world’s first computer. A decades-long investigation into the 2,000-year-old device is shedding new light on it, including the revelation that it may have been used for more than just astronomy.
In 2014 and 2015, many public figures voiced their fears about the dangers of artificial intelligence (AI): Stephen Hawking, Elon Musk, Bill Gates… Since then, the subject has become very prominent in the media.
Does predictability provide an overriding concept and perhaps a metric for evaluating when LAWS are acceptable or when they might be unacceptable under international humanitarian law? Arguably, if the behavior of an autonomous weapon is predictable, deploying it might be considered no different from, for example, launching a ballistic missile. This, of course, presumes that we can know how predictable the behavior of a specific autonomous weapon will be.
Here’s an interesting idea. It’s taken from Aaron Wright and Primavera de Filippi’s article ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’. The article provides an excellent overview of blockchain technology and its potential impact on the law. It ends with an interesting historical reflection. It suggests that the growth of blockchain technology may give rise to a new type of legal order: a lex cryptographia. This is similar to how the growth in international trading networks gave rise to a lex mercatoria and how the growth in the internet gave rise to a lex informatica.
Deadly environmental pollution has become an existential risk that threatens the prospect for the long-term survival of our species and a great many others. Here we will focus on the nuclear waste aspect of the problem and ways to mitigate it before there is a critical tipping point in our global ecosystem.
As philosopher Nick Bostrom said in his 2002 paper titled “Existential Risks,” published in the Journal of Evolution and Technology, “Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges.”1
At the dawn of human history, the intelligence of those who came before us was probably scarcely inferior to that of the reader of these lines. Some paleontologists even believe that our ancestors’ reasoning abilities were superior to our own.
Phil Torres’ new book, The End: What Science and Religion Tell Us about the Apocalypse, is one of the most important books published recently. It offers a fascinating study of the many real threats to our existence, provides multiple insights into how we might avoid extinction, and is carefully and conscientiously crafted.
Some things in life cannot be offset by a mere net gain in intelligence.
The last few years have seen the widespread recognition that sophisticated AI is under development. Bill Gates, Stephen Hawking, and others warn of the rise of “superintelligent” machines: AIs that outthink the smartest humans in every domain, including common sense reasoning and social skills. Superintelligence could destroy us, they caution. In contrast, Ray Kurzweil, a Google director of engineering, depicts a technological utopia bringing about the end of disease, poverty and resource scarcity.
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.
AI has come a long way since 2010. If you were to travel back in time six years and ask an artificial intelligence researcher about the future of AI, it’s likely he or she would have predicted that it would never reach its full potential as originally envisioned by its founders.
The trend is clearly visible: sensors and actuators, together with computation, memory, and communication capabilities, are making all the objects around us smarter and smarter. Too often, whether we call them robots or AIs, the trend is depicted in menacing tones, represented in the dystopian futures favored by Hollywood movies, and shapes the gut reactions of policymakers eager to please the reactionary impulses of their electorates.
The field of Existential Risk Studies has, to date, focused largely on risk scenarios involving natural phenomena, anthropogenic phenomena, and a specific type of anthropogenic phenomenon that one could term “technogenic.” The first category includes asteroid/comet impacts, supervolcanoes, and pandemics. The second encompasses climate change and biodiversity loss. And the third deals with risks that arise from the misuse and abuse of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence.
A decade ago, AI research wasn’t as hot as it is now. But right now, in 2016, AI is very much a profitable endeavor. Many now argue that AI poses risks of: (a) mass unemployment; (b) mass political destabilization (for instance, mass abuse of intelligent drones by terrorists); or even (c) a hard take-off of self-improving AI triggering a so-called “singularity”, which, in very short, we might describe as “a point beyond which we don’t have a clue what happens next”.
In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.
Michio Kaku (1947 – ) is the Henry Semat Professor of Theoretical Physics at the City College of New York of City University of New York. He is the co-founder of string field theory and a popularizer of science. He earned his PhD in physics from the University of California-Berkeley in 1972.
I want to elaborate briefly on an issue that I mentioned in a previous article for the IEET, in which I argue (among other things) that we may be systematically underestimating the overall probability of annihilation. The line of reasoning goes as follows:
In the 1980s, the movies Terminator and Robocop introduced the world to the concept of the killer robot. While those films and others represented the peak of science fiction for many in the 80s and 90s, in reality, the militarization of robots and the development of automated weapons systems have been going on for more than 15 years, according to researcher and activist Noel Sharkey. That buildup of weapons, he believes, poses a great danger to society.
Hans Moravec (1948 – ) is a faculty member at the Robotics Institute of Carnegie Mellon University and the chief scientist at Seegrid Corporation. He received his PhD in computer science from Stanford in 1980, and is known for his work on robotics, artificial intelligence, and writings on the impact of technology, as well as his many publications and predictions focusing on transhumanism.
Since the first species of Homo emerged in the grassy savanna of East Africa some 2 million years ago, humanity has been haunted by a small constellation of improbable existential risks from nature. We can call this our cosmic risk background. It includes threats posed by asteroid/comet impacts, supervolcanic eruptions, global pandemics, solar flares, black hole explosions or mergers, supernovae, galactic center outbursts, and gamma-ray bursts. While modern technology could potentially protect us against some of these risks — such as asteroids that could induce an “impact winter” — the background of existential dangers remains more or less unchanged up to the present.
IEET Affiliate Scholar Phil Torres has published a book on Existential Risks, titled The End: What Science and Religion Tell Us About the Apocalypse. The Foreword was written by IEET Fellow Russell Blackford.
At the end of the eighteenth century, tinkerers built the first music boxes: subtle little mechanisms capable of playing harmonies and melodies all on their own. Some included bells, percussion, organs, and even violins, all coordinated by a rotating cylinder. The most ambitious examples were veritable Lilliputian orchestras, such as the Panharmonicon, invented in Vienna in 1805, or the Orchestrion, mass-produced in Dresden in 1851.
Numerous innovations have the potential to dramatically augment human cognition and capabilities. They could magnify the economy and give rise to other, even more powerful technologies. Our response to this is crucial.
Philosophy could be an important conceptual resource for shaping human-technology interactions, for several reasons. First, philosophy concerns the topics of world, reality, self, society, aspirations, and meaning, all of which we hope to reconfigure and accentuate in our relations with technology. Improving human lives is, after all, one of the main purposes of technology. Second, philosophy relates to thinking, logic, reasoning, and being, which are the key capacities we would like our technological entities to have.