Of course we are all still reeling from the attacks in Paris last week that killed 129 people, not so very far from where my wife and I lived for a couple of years as newlyweds during the 1990s. Our hearts go out to the brave folk of Liberté, Égalité et Fraternité in la Ville Lumière.
The last few weeks have been a whirlwind centred on the terrorist group known as the Islamic State. First, several attacks in Paris left 129 dead and countless others injured; then came a bomb threat in Germany and an ISIS threat to attack the rest of Europe and Washington, D.C. Fear grips the hearts of people around the world in an iron vice. And that is exactly what ISIS wants. Right now, they are winning.
The recent slaughters of hundreds of innocent civilians in Paris, in Ankara, in Beirut, and aboard the Russian Metrojet Flight 9268 illustrate beyond a shadow of a doubt that the threat from the barbaric sect known as ISIS, ISIL, Daesh, and the Islamic State cannot be contained within the Middle East. ISIS is an enemy of humanity, decency, and Western civilization. It will continue killing peaceful civilians of Western nations, both in their home countries and abroad, in gruesome ways. ISIS is a cancer upon humanity, and it will continue to metastasize and inflict damage until it is either eradicated or it completely kills its host. Like cancer, ISIS cannot coexist with a healthy humankind. This cancerous “Islamic State” should be eradicated using the resources of any willing parties.
Blockchain technology is at the heart of cryptocurrencies like Bitcoin. Most people have heard of Bitcoin and some are excited by the prospect it raises of a decentralised, stateless currency/payment system. But this is not the most interesting thing about Bitcoin. It is the blockchain technology itself that is the real breakthrough. It not only provides the foundation for a currency and payment system; it also provides the foundation for new ways of organising and managing basic social relationships. This includes legal relationships such as those involved in contractual exchange and proprietary ownership. The most prominent expression of this potential comes in the shape of Ethereum, an open source platform that allows developers to use blockchains for whatever purpose they see fit.
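The core idea behind blockchain technology can be illustrated with a toy sketch: each block cryptographically commits to its own contents and to the hash of the block before it, so altering any past record breaks the whole chain. A minimal Python illustration follows (all names are invented for this sketch; real blockchains like Bitcoin and Ethereum add consensus, digital signatures, and proof-of-work on top of this structure):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (excluding its stored hash) deterministically."""
    payload = {k: block[k] for k in ("index", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(index, data, prev_hash):
    """Create a block that commits to its own contents and its predecessor."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Valid only if every block's hash matches its contents and every
    block links to the actual hash of its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered after creation
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to predecessor is broken
    return True

genesis = make_block(0, "genesis", "0" * 64)
chain = [genesis, make_block(1, "Alice transfers an asset to Bob", genesis["hash"])]
assert chain_is_valid(chain)

chain[0]["data"] = "rewritten history"  # any edit invalidates the whole chain
assert not chain_is_valid(chain)
```

It is this tamper-evidence, rather than the currency itself, that makes the technology a plausible foundation for recording contracts and ownership without a central authority.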
In the second decade of the 21st century, crime has fully embraced the age of advanced technology. To address these futuristic crimes, we have to consider fighting fire with fire: using advanced technology to counterbalance the power criminals could attain in the technological age. That is why DARPA is stepping up its game in chip making.
Dan Barker, echoing an idea expressed by many atheists, describes theology as “a subject without an object.” Since there’s little reason for thinking a God exists – much less the God of the Bible – the entire field is ultimately vacuous, despite the grandiloquent rigmarole of, as Jerry Coyne puts it, Sophisticated Theologians™. Theology studies nothing. Its heart and soul is a phenomenon that almost certainly doesn’t exist.
Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
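The expected-value reasoning behind this claim can be shown with back-of-the-envelope arithmetic. Every number below is an illustrative assumption, not a figure from Bostrom; the point is only that a tiny probability multiplied by an astronomical stake can still dominate the calculation:

```python
# All figures are illustrative assumptions, not estimates from Bostrom.
p_catastrophe = 1e-4           # assumed probability of an existential catastrophe
lives_at_stake = 1e16          # assumed future lives lost if it occurs
lives_saved_conventional = 1e6  # assumed payoff of a conventional intervention

expected_lives_lost = p_catastrophe * lives_at_stake
# Even at a one-in-ten-thousand probability, the expected loss (1e12 lives)
# dwarfs the sure-thing benefit of the conventional intervention.
assert expected_lives_lost > lives_saved_conventional
print(f"{expected_lives_lost:.0e}")  # prints 1e+12
```

On this logic, even sceptics who assign existential risks a very low probability can be rationally committed to spending something on mitigating them.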
Last time, I attempted to grapple with R. Scott Bakker’s intriguing essay on what kinds of philosophy aliens might practice, and came away dizzied by questions.
Luckily, I had a book in my possession which seemed to offer me the answers, a book that had nothing to do with a modern preoccupation like the question of alien philosophers at all, but rather with a metaphysical problem that had been barred from philosophy, except among seminary students, since Darwin; namely, whether or not there is such a thing as moral truth if God doesn’t exist.
IEET co-founder Nick Bostrom, IEET Fellow Wendell Wallach and Affiliate Scholar Seth Baum are Principal Investigators on projects funded by Elon Musk and the Open Philanthropy Project and administered by the Future of Life Institute.
Hackers calling themselves “The Impact Team” recently stole the customer data of Ashley Madison, an online dating service for people who are married or in committed relationships. Ashley Madison employs a slogan that says it all: “Life is short. Have an affair.”
During July and August, customer data was released online by the hackers: the upshot is that it’s now possible to identify many individuals who held Ashley Madison accounts. This includes such intimate details as their sexual fetishes and proclivities.
The new television show Humans raises some important ethical questions for a not-too-distant future society where human-looking domestic robots are commonplace. The eight-part series, shown on AMC in the US and Channel 4 in the UK, is based on the Swedish series Äkta människor (“Real Humans”) and is set in modern-day London, with the only discernible difference being that a company is manufacturing and selling “synths” – multi-purpose robots designed to look like humans and work as direct replacements for them. The drama tackles a wide range of questions, from how synths would be treated to their impact on society, alongside the main story line of what happens if the artificially intelligent humanoids gain true self-awareness and consciousness.
I recently binge-watched my first TV series, Humans, which airs Sunday nights on AMC. Because I’m a science fiction writer myself, many people had been suggesting for weeks that I check it out. Finally I gave in, sat down on the couch, and watched the first six episodes over the course of two days. Not bad for a mother of two. And now I have to wait two whole days to see what happens next!
When I started to work on this map of AI safety solutions, I wanted to illustrate the excellent 2013 article “Responses to Catastrophic AGI Risk: A Survey” by Kaj Sotala and IEET Affiliate Scholar Roman V. Yampolskiy, which I strongly recommend. However, during the process I had a number of ideas to expand the classification of the proposed ways to create safe AI.
This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely:
1. before it starts self-improvement (which is unlikely but we still can envision several failure modes),
2. during its take off, when it uses different instruments to break out from its initial confinement, and
3. after its successful takeover of the world, when it starts to implement its goal system which could be unfriendly, or its friendliness may be flawed.
I’ve been stumping for some time about how Artificial Intelligence will provide the shortest path to curing aging forever. In fact, without it, I’m convinced we won’t solve aging in our lifetime. I’m glad to hear Peter Diamandis describe AI as the most important technology we’re developing this decade.
Peter goes on to say it’s a massive opportunity for humanity, not a threat, as well as the following:
One thing I promise when we do politics here: it won’t be stuff you are reading anywhere else.
Cranking back NSA spying…?
Topmost in the news recently: the shocking ability of the U.S. Congress to actually pass a compromise bill, one that dials back a few of the powers granted (since 9/11) to our Professional Protector Caste (PPC) by the Patriot Act.
On Friday, March 6, 2015, more than 3,000 people attended the ASU Emerge event. This is where Eric Kingsbury, a futurist, founder of KITEBA, and cofounder of the Confluence Project, launched “You Have Been Inventoried”. I helped with some of the content for the project, along with others from the Confluence Project.
Everywhere you look in the world you can see pessimism, gloom, doom and negativity. No matter where you live, it seems many are convinced that there’s just no hope. Many people have stopped trying to do anything, while they “wait for god” or “wait for the Singularity.” Or simply wait, period.
The negativity is everywhere.
So, here’s one of my rants, against that negativity.
This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable, and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as that of the Doomsday Argument, except the reasoning is far more robust.
Even if they aren’t flesh, “mindclones” deserve protection.
For much of the 20th century, capital punishment was carried out in most countries. During the preceding century many, like England, had daily public hangings. Today, even Russia, with a mountainous history of government-ordered executions, has a capital-punishment moratorium. Since 1996, it has not executed a criminal through the judicial system.
If we can learn to protect the lives of serial killers, child mutilators, and terrorists, surely we can learn to protect the lives of peace-loving model citizens known as mind clones and bemans—even if they initially seem odd or weird to us.
An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead either to a reductio of the doomsayer’s position, or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down? Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?
The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.
East Coast Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA
Email: director @ ieet.org phone:
West Coast Contact: Managing Director, Hank Pellissier
425 Moraga Avenue, Piedmont, CA 94611
Email: hank @ ieet.org