Institute for Ethics and Emerging Technologies



Technoprogressive? BioConservative? Huh?
Quick overview of biopolitical points of view






MULTIMEDIA: SciTech Topics

The Rejection Of Climate Science And Motivated Reasoning (25min)

Science, Politics & Climate Change

What we need is a Tom Lehrer-style Elements of Risk Song

Is The Ebola Crisis (in the US) As Severe As The Media is Making It Out To Be?

SETI Institute: Risky tales: Talking with Seth Shostak at Big Picture Science

When Do We Quarantine or Isolate for Ebola?

Global Catastrophic & Existential Risk - Sleepwalking into the Abyss

History of a Time to Come

DeepMind, MetaMed, Existential Risk, and the Intelligence Explosion

A Participatory Panopticon

The future is going to be wonderful (If we don’t get whacked by the existential risks)

10 Awesome Facts About Nanotechnology

Scary, Thought-provoking, Futurist Prank by Singularity 1 on 1

Existential Risk

How to break the Internet, destroy democracy and enslave the human race (or not)













SciTech Topics




The Making of an Anti-Theist Mom

by Valerie Tarico

What makes a Seattle mother spend her days trying to chip away at Bible belief rather than digging holes in the garden?

When my husband sent me the Pew Report news that the percentage of Americans who call themselves Christian has dropped from 78.4 to 70.6 over the last seven years, I responded jokingly with six words: You’re welcome. Molly Moon’s after dinner?

Not that I actually claim credit for the decline. As they say, it takes a village.

Full Story...



A look back at our origins

by David Brin

We are the first human civilization to remove our envisioned “golden age” from an imagined-nostalgic past and instead plant that better-than-the-present era (tentatively) in a potential future.

Full Story...



Today’s Robot Films Reflect Popular Fears Concerning Artificial Intelligence

by Maria Ramos

The civilized world has an ever-intensifying relationship to automated computer technology. It is involved in nearly everything we do, every day, from the time we wake to the time we go to sleep. Why, then, does so much of our entertainment reflect a deep-set fear of technology and its potential for failure?



Dunkin’ Donuts ditches titanium dioxide – but is it actually harmful?

by Andrew Maynard

In response to pressure from the advocacy group As You Sow, Dunkin’ Brands has announced that it will be removing allegedly “nano” titanium dioxide from Dunkin’ Donuts’ powdered sugar donuts. As You Sow claims there are safety concerns around the use of the material, while Dunkin’ Brands cites concerns over investor confidence. It’s a move that further confirms the food sector’s conservatism over adopting new technologies in the face of public uncertainty. But how justified is it based on what we know about the safety of nanoparticles?



Our Final Hour

by John G. Messerly

From Our Final Hour: A Scientist’s Warning by Martin Rees, Royal Society Professor at Cambridge and Britain’s Astronomer Royal. “Twenty-first century science may alter human beings themselves - not just how they live.” (9) Rees accepts the common wisdom that the next hundred years will see changes that dwarf those of the past thousand years, but he is skeptical about specific predictions.



A New Typology of Risks

by Phil Torres

In a previous article, I critiqued the two primary definitions of “existential risk” found in the literature, and then hinted at a new definition to replace them. Part of my critique centered on how the relevant group affected by an existential catastrophe is demarcated, e.g., as “our entire species,” “Earth-originating intelligent life,” or “either our current population or some future population of descendants that we value.” (I prefer the third because it solves the problems of “good” and “bad” extinction that the first two encounter.) I want to put aside the issue of demarcation in this article and focus exclusively on the nature of existential risks themselves (that is, independent of who exactly they impact).



Is novelty in nanomaterials overrated when it comes to risk?

by Andrew Maynard

Nanomaterial risks are often considered in terms of novel material behaviours. But, as Andrew D. Maynard explains, does this framing end up obscuring some risks, while overplaying others?



Problems with Defining an Existential Risk

by Phil Torres

What is an existential risk? The general concept has been around for decades, but the term was coined by Nick Bostrom in his seminal 2002 paper. Like so many empirical concepts – from organism to gene to law of nature, all of which are still debated by philosophically-minded scientists and scientifically-minded philosophers – the notion of an existential risk turns out to be more difficult to define than one might at first think.



#1 Editor’s Choice Award: Rule by Algorithm? Big Data and the Threat of Algocracy

by John Danaher

An increasing number of people are worried about the way in which our data is being mined by governments and corporations. One of these people is Evgeny Morozov. In an article that appeared in the MIT Technology Review back in October 2013, he argued that this trend poses a serious threat to democracy, one that should be resisted through political activism and “sabotage”. As it happens, I have written about similar threats to democracy myself in the past, so I was interested to see how Morozov defended his view.



World Economic Forum highlights risks of emerging technologies

by Andrew Maynard

The challenges of governing emerging technologies are highlighted by the World Economic Forum in the 2015 edition of its Global Risks Report. Focusing in particular on synthetic biology, gene drives and artificial intelligence, the report warns that these and other emerging technologies present hard-to-foresee risks, and that oversight mechanisms need to more effectively balance likely benefits and commercial demands with a deeper consideration of ethical questions and medium to long-term risks.



How Humanity Will Conquer Space Without Rockets

by George Dvorsky

Getting out of Earth’s gravity well is hard. Conventional rockets are expensive, wasteful, and as we’re frequently reminded, very dangerous. Thankfully, there are alternative ways of getting ourselves and all our stuff off this rock. Here’s how we’ll get from Earth to space in the future.



Bad luck and cancer – did the media get it wrong?

by Andrew Maynard

The chances are that, if you follow news articles about cancer, you’ll have come across headlines like “Most Cancers Caused By Bad Luck” (The Daily Beast) or “Two-thirds of cancers are due to ‘bad luck,’ study finds” (CBS News). The story – based on research out of Johns Hopkins University – has grabbed widespread media attention. But it’s also raised the ire of science communicators who think that the headlines and stories are, in the words of a couple of writers, “just bollocks”.



Self Absorption

by Joseph R. Carvalko

Looking back on my early experience as a young engineer, I am reminded how little my colleagues and I appreciated that what we did would change the world, for good and for bad. I am also reminded how Marcel Golay, one of my early mentors understood the duality of technology and how this feature plays large in its application for the right purpose.



#26: The Internet of Things, the industry and AI

by Kamil Muzyka

Communication is the basic principle of social interaction. We know that microbes use a method of communication called quorum sensing, cetaceans have their whale song, plants communicate via airborne chemicals, and fungi transfer signals through their root networks. Let us take a moment to think about how machines communicate with each other.



Defining “Benevolence” in the context of Safe AI

by Richard Loosemore

The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?



The Future As History

by Rick Searle

It is a risky business trying to predict the future. Although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster.



Don’t Diss Dystopias: Sci-fi’s warning tales are as important as its optimistic stories.

by Ramez Naam

This piece is part of Future Tense, a partnership of Slate, New America, and Arizona State University. On Thursday, Oct. 2, Future Tense will host an event in Washington, D.C., on science fiction and public policy, inspired by the new anthology Hieroglyph: Stories & Visions for a Better Future. For more information on the event, visit the New America website; for more on the Hieroglyph project, visit the website of ASU’s Project Hieroglyph.



10 Horrifying Technologies That Should Never Be Allowed To Exist

by George Dvorsky

As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.



Advanced Materials – What’s the big deal?

by Andrew Maynard

Materials and how we use them are inextricably linked to the development of human society. Yet, amazing as our historic achievements using stone, wood, metals and other substances seem, they are unbelievably crude compared to the full potential of what could be achieved with designer materials.



Don’t fear the robot car bomb

by Patrick Lin

Within the next few years, autonomous vehicles—a.k.a. robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could be professional getaway drivers, to name one possibility. Given the pace of developments on autonomous cars, this doesn’t seem implausible.



Is Singapore the next Silicon Valley?

by Ayesha Khanna

Tech giants like Google, Microsoft, Apple and Facebook are winning the war for talent and Silicon Valley office space, encouraging start-ups to go on a global hunt for a new heartland. In Asia, Singapore wants to be the answer. The government has established numerous schemes and initiatives to encourage entrepreneurs and venture capitalists to set up shop there.



Gaza Is a Transhumanist Issue!

by Benjamin Abbott

Transhumanists as a rule may prefer to contemplate implants and genetic engineering, but few if any violations of morphological freedom exceed being torn to pieces by shrapnel or dashed against concrete by an overpressure wave. In this piece I argue that the settler-colonial violence in occupied Palestine relates to core aspects of modernity and demands futurist attention both emotionally and intellectually.



Bostrom on Superintelligence (4): Malignant Failure Modes

by John Danaher

This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.



Bostrom on Superintelligence (1): The Orthogonality Thesis

by John Danaher

In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?



The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation

by Richard Loosemore

My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible - they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous.  On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design, and spontaneously self-modify to make itself safe.



How safe is the world’s darkest material?

by Andrew Maynard

Over the past few days, the interweb’s been awash with virtual “oohs” and “ahs” over Surrey Nanosystems’ carbon nanotube-based Vantablack coating. The material – which absorbs over 99.9% of light falling onto it and is claimed to be the world’s darkest material – is made up of a densely packed “forest” of vertically aligned carbon nanotubes. In fact, the name “vanta” stands for Vertically Aligned NanoTube Array.



Remaining Inaugural Members of NSABB Dismissed Last Night

by Kelly Hills

It’s not exactly been what one would call a banner month for the National Institutes of Health or the Centers for Disease Control and Prevention. In the last week and change, it’s been revealed that the CDC completely screwed up how it handles anthrax, possibly exposing some 86 people, and accidentally shipped out H9N2 influenza that had been contaminated with H5N1.



Nanoparticles in Dunkin’ Donuts? Do the math!

by Andrew Maynard

Over the past couple of years a number of articles have been posted claiming that we’re eating more food products containing nanoparticles than we know (remember this piece from a couple of weeks ago?).  One of the latest appeared on The Guardian website yesterday with the headline “Activists take aim at nanomaterials in Dunkin’ Donuts” (thanks to @HilarySutcliffe for the tip-off).  



Geoengineering as a Human Right

by Kris Notaro

Geoengineering has come under attack recently by everyone from conspiracy theorists and scientists to “greens.” There have been many kinds of proposals for geoengineering, and even a legally contested experiment that poured 200,000 pounds of iron sulfate into the North Pacific, which was supposed to increase plankton that would absorb carbon dioxide. The experiment did not work and pissed off a lot of scientists. China also recently stopped its “flattening of mountains.” This article, therefore, is not purely about techniques for combating global warming, but about the need for people to understand that geoengineering is a must – not only a must, but also a “human right.”



When Global Catastrophes Collide: The Climate Engineering Double Catastrophe

by Seth Baum

It could be difficult for human civilization to survive a global catastrophe like rapid climate change, nuclear war, or a pandemic disease outbreak. But imagine if two catastrophes strike at the same time. The damages could be even worse. Unfortunately, most research only looks at one catastrophe at a time, so we have little understanding of how they interact.


The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA 
Email: director @ ieet.org     phone: 860-297-2376