Institute for Ethics and Emerging Technologies




MULTIMEDIA: SciTech Topics

10 Scientists Killed by Their Own Experiments

Futurist Gray Scott on Artificial Intelligence

How Can We Safely Build Something Smarter Than Us?

On Existential Risk and Individual Contribution to the “Good”

The Rejection Of Climate Science And Motivated Reasoning (25min)

Science, Politics & Climate Change

What we need is a Tom Lehrer-style Elements of Risk Song

Is The Ebola Crisis (in the US) As Severe As The Media is Making It Out To Be?

SETI Institute: Risky tales: Talking with Seth Shostak at Big Picture Science

When Do We Quarantine or Isolate for Ebola?

Global Catastrophic & Existential Risk - Sleepwalking into the Abyss

History of a Time to Come

DeepMind, MetaMed, Existential Risk, and the Intelligence Explosion

"> A Participatory Panopticon

The future is going to be wonderful (If we don’t get whacked by the existential risks)




SciTech Topics




Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

by Steve Fuller

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’: potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
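To make the expected-value reasoning behind this claim concrete, here is a minimal sketch; the probability and loss figures are purely illustrative assumptions, not numbers from Bostrom or this article.

```python
# Minimal sketch of the expected-value argument for existential-risk mitigation.
# All numbers are illustrative assumptions, not figures from Bostrom or the article.

p_existential = 0.01       # assumed probability of an existential catastrophe this century
loss_existential = 10**16  # assumed magnitude of the loss (e.g., all future potential forgone)

p_mundane = 0.5            # assumed probability of a far more likely, recoverable disaster
loss_mundane = 10**8       # assumed magnitude of that loss

expected_existential = p_existential * loss_existential  # 1e14
expected_mundane = p_mundane * loss_mundane              # 5e7

# Even with a small probability, the expected loss from the existential risk
# dwarfs that of the much more likely recoverable disaster, which is the sense
# in which devoting significant resources to safeguards can be rational.
print(f"Expected loss, existential risk: {expected_existential:.2e}")
print(f"Expected loss, mundane disaster: {expected_mundane:.2e}")
```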

Full Story...



How SETI Will Understand Messages Broadcast by an Alien Intelligence

by George Dvorsky

Imagine the day when we finally receive a signal from an extraterrestrial intelligence, only to find that there’s a message embedded within. Given that we don’t speak the same language, how could we ever hope to make sense of it? We spoke to the experts to find out.

Communication with Extraterrestrial Intelligence, aka “CETI”, is the branch of SETI concerned with both the transmission and reception of messages between ourselves and an alien civilization. Scientists have been trying to detect signals from an extraterrestrial intelligence (ETI) since the 1960s, but haven’t found anything.

Full Story...



Smart Regulation For Smart Drugs

by Geoffrey Woo

“For the modern mad men and wolves of Wall Street, gone are the days of widespread day drinking and functional cocaine use. Instead, in this age of efficiency above all else, corporate climbers sometimes seek a simple brain boost, something to help them to get the job done without manic jitters or a nasty crash.

For that, they are turning to nootropics,” writes Jack Smith IV in the cover story of an April 2015 edition of the New York Observer.

Full Story...



Science as Radicalism (Part 3: scientists have been largely captured by dominant power structures)

by William Gillis

This restructuring of how we view science is geared not just at defending science from leftists’ charges of reactionism, but more broadly at clarifying how we might view that much looser bundle invoked by the word “science” as a political force. Because the array of things popularly associated with “science” is so wildly varying and hazy, most of the political claims surrounding science that don’t slice it away to near irrelevance or neutrality as a formulaic procedure have sought to identify underlying ideological commitments and then define “science” in terms of them.

Full Story...



Science as Radicalism (Part 2: digging for the roots - the radicalism of scientists)

by William Gillis

The fact of the matter is that the remarkably successful phenomenon the term “Science!” has wrapped itself around is not so much a methodology as an orientation. What was really going on, and what is still going on in science and has given it so many great insights, is the radicalism of scientists; that is to say, their vigilant pursuit of the roots (the ‘radix’). Radicals constantly push our perspectives into extreme or alien contexts until they break or become littered with unwieldy complications, and when that happens we are happy to shed the historical baggage entirely and start anew: not just to add caveats upon caveats to an existing model, but sometimes to prune them away or throw it all out entirely. Ours is the search for patterns and symmetries that might reflect more universal dynamics rather than merely good rules of thumb within a specific, limited context. As any radical knows, “good enough” is never actually enough.

Full Story...



The Supposed Dangers of Techno-Optimism

by John G. Messerly

In his recent article, “Why Techno-Optimism Is Dangerous,” the philosopher Nicholas Agar argues that we should not pursue radical human enhancement. (Professor Agar has made the same basic argument in three recent books: 1) The Sceptical Optimist: Why Technology Isn’t the Answer to Everything; 2) Truly Human Enhancement: A Philosophical Defense of Limits; and 3) Humanity’s End: Why We Should Reject Radical Enhancement.)

Full Story...



Space Junk and Its Impending Impact

by Maria Ramos

With the launch of Sputnik in 1957, humankind extended its presence from the Earth’s surface towards outer space. Since that time, thousands of other objects have been sent into Earth orbit, including weather satellites, communications equipment and military hardware. Wherever people go, they tend to leave their mark, mostly harmful, on the natural environment, and space is no exception. There are many pieces of space junk – the remains of discarded, malfunctioning or obsolete devices – that now whiz around the Earth and pose threats to current space projects.

Full Story...



Wallach Publishes in Prestigious NAS Journal

IEET Fellow Wendell Wallach recently co-published an article in the National Academy of Sciences’ journal Issues in Science and Technology with ASU law professor Gary E. Marchant. The piece, entitled “Coordinating Technology Governance,” explores the need for, and application of, a nimble, authoritative coordinating body, referred to as a Governance Coordination Committee, to fill an urgent gap in the assessment of the ethical, legal, social and economic consequences of emerging technologies.

Full Story...



Transhumanism – The Final Religion?

by Dirk Bruere

After several decades of relative obscurity, Transhumanism as a philosophical and technological movement has finally begun to break out of its strange intellectual ghetto and make small inroads into the wider public consciousness. This is partly because some high-profile people have either adopted it as their worldview or warned against its potential dangers. Indeed, the political scientist Francis Fukuyama named it “The world’s most dangerous idea” in a 2004 article in the US magazine Foreign Policy, and Transhumanism’s most outspoken publicist, Ray Kurzweil, was recently made director of engineering at Google, presumably to hasten Transhumanism’s goals.

Full Story...



Solar Cost Less than Half of What EIA Projected

by Ramez Naam

Skeptics of renewables sometimes cite data from the EIA (the US Department of Energy’s Energy Information Administration) or from the IEA (the OECD’s International Energy Agency). The IEA has a long history of underestimating solar and wind, which I think is starting to be understood.

Full Story...



Human Extinction Risks due to Artificial Intelligence Development - 55 ways we can be obliterated

by Alexey Turchin

This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely:

1. before it starts self-improvement (which is unlikely but we still can envision several failure modes),
2. during its take off, when it uses different instruments to break out from its initial confinement, and
3. after its successful takeover of the world, when it starts to implement its goal system, which could be unfriendly or whose friendliness may be flawed.

Full Story...



Is Effective Regulation of AI Possible? Eight Potential Regulatory Problems

by John Danaher

The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.

Full Story...



Robosapiens – merging with machines will improve humanity at an exponential rate

by Agbolade Omowole

One can’t help but be positive about the future. Even obstacles have a bright side. For example, humans will at some point be limited by space and time; we can’t expect to go far in space exploration without the development of strong artificial intelligence and robots.

Full Story...



Stoicism in the Post-Singularity Future

by Steven Umbrello

Futurists like Ray Kurzweil believe that advances in the field of artificial intelligence will reach a point in the near future that allows humans to transcend their biological form. This is what he calls the Singularity, and he describes it as follows:

Full Story...



How to Survive the End of the Universe

by Alexey Turchin

My plan below should be read with some irony, because it is almost irrelevant: we have only a very small chance of surviving the next 1,000 years. If we do survive, we will have numerous tasks to accomplish before my plan can become a reality.

Additionally, there’s the possibility that the “end of the universe” will arrive sooner, if our collider experiments lead to a vacuum phase transition, which begins at one point and spreads across the visible universe.

Full Story...



How many X-Risks for Humanity? This Roadmap has 100 Doomsday Scenarios

by Alexey Turchin

In 2008 I was working on a Russian-language book, “Structure of the Global Catastrophe”. I showed it for review to the geologist Aranovich, an old friend of my late mother’s husband.

We started to discuss Stevenson’s probe — a hypothetical vehicle that could reach the Earth’s core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron – at least 60,000 tons – theoretically feasible, but practically impossible.

Full Story...



Should indoor tanning be banned?

by Andrew Maynard

Just how dangerous is indoor tanning?

A couple of weeks ago, colleagues from the University of Michigan published an article with a rather stark recommendation:

Full Story...



What Would Happen If All Our Satellites Were Suddenly Destroyed?

by George Dvorsky

Since their inception 60 years ago, satellites have gone on to become an indispensable component of our modern high-tech civilization. But because they’re reliable and practically invisible, we take their existence for granted. Here’s what would happen if all our satellites suddenly just disappeared.

The idea that all the satellites — or at least a good portion of them — could be rendered inoperable is not as outlandish as it might seem at first. There are at least three plausible scenarios in which this could happen.

Full Story...



Who’s Winning the Surveillance Arms Race?

by Valkyrie Ice McGill

You know the names Manning, Snowden and Assange. At least, you do unless you’ve been living under a rock. I’m pretty sure you also know that “Big Brother” doesn’t like them much.

But what you might not know is that their very existence shows that “Big Brother” isn’t as large and in charge as you might think he is.

Full Story...



The Future of Personal Privacy - Review of “You Have Been Inventoried”

by Tery Spataro

On Friday, March 6, 2015, more than 3,000 people attended the ASU Emerge event. This is where the futurist Eric Kingsbury, founder of KITEBA and cofounder of the Confluence Project, launched “You Have Been Inventoried”. I helped with some of the content for the project, along with others from the Confluence Project.

Full Story...



How Freedom of Information Will Change the World

by Valkyrie Ice McGill

Everywhere you look in the world you can see pessimism, gloom, doom and negativity. No matter where you live, it seems many are convinced that there’s just no hope. Many people have stopped trying to do anything, while they “wait for god” or “wait for the Singularity.” Or simply wait, period.

The negativity is everywhere.

So, here’s one of my rants, against that negativity.

Full Story...



What, Me Worry? - I Don’t Share Most Concerns About Artificial Intelligence

by Lawrence Krauss

There has of late been a great deal of ink devoted to concerns about artificial intelligence, and a future world where machines can “think,” where the latter term ranges from simple autonomous decision-making to full-fledged self-awareness. I don’t share most of these concerns, and I am personally quite excited by the possibility of experiencing thinking machines, both for the opportunities they will provide for potentially improving the human condition and for the insights they will undoubtedly provide into the nature of consciousness.

Full Story...



We May be Systematically Underestimating the Probability of Annihilation

by Phil Torres

This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as that of the Doomsday Argument, except the reasoning is far more robust.
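A minimal sketch of the kind of adjustment this argument points toward is below; the figures, and the simplifying assumption that known and unknown sources of risk are independent, are illustrative assumptions of mine, not Torres’s.

```python
# Minimal sketch: allowing for "unknown unknowns" inflates a probability-of-annihilation estimate.
# Both numbers are illustrative assumptions, not figures from the article.

p_known = 0.05      # assumed combined probability of annihilation from known risks
p_monsters = 0.03   # assumed probability mass reserved for unknown unknowns ("monsters")

# Treating the two sources as independent, the adjusted estimate is strictly
# higher than the estimate based on known risks alone.
p_adjusted = 1 - (1 - p_known) * (1 - p_monsters)

print(f"Known risks only:            {p_known:.4f}")
print(f"With allowance for monsters: {p_adjusted:.4f}")  # 0.0785 > 0.05
```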

Full Story...



When Is A Minion Not A Minion? - Should We Create Aware Machines?

by Aubrey de Grey

If asked to rank humanity’s problems by severity, I would give the silver medal to the need to spend so much time doing things that give us no fulfillment—work, in a word. I consider that the ultimate goal of artificial intelligence is to hand off this burden, to robots that have enough common sense to perform those tasks with minimal supervision.

But some AI researchers have altogether loftier aspirations for future machines: they foresee computer functionality that vastly exceeds our own in every sphere of cognition. Such machines would not only do things that people prefer not to; they would also discover how to do things that no one can yet do. This process can, in principle, iterate—the more such machines can do, the more they can discover.

What’s not to like about that? Why do I NOT view it as a better research goal than machines with common sense (which I’ll call “minions”)?

Full Story...



The “Reputation Web” Will Generate Countless Opportunities

by Lincoln Cannon

Technological change is accelerating and transforming our world. Assuming trends persist, we will soon experience an evolutionary shift in the mechanisms of reputation, one of the fundamentals on which relationships are based. Cascading effects of the shift will revolutionize the way we relate to each other and to our machines, incentivizing unprecedented degrees of global cooperation.

In 2015, your smartphone probably has more computing power than the Apollo Guidance Computer did, and yet Moore’s Law continues unabated at its fiftieth anniversary. Machines are becoming faster and smaller and smarter.

Full Story...



The Making of an Anti-Theist Mom

by Valerie Tarico

What makes a Seattle mother spend her days trying to chip away at Bible belief rather than digging holes in the garden?

When my husband sent me the Pew Report news that the percentage of Americans who call themselves Christian has dropped from 78.4 to 70.6 over the last seven years, I responded jokingly with six words: You’re welcome. Molly Moon’s after dinner?

Not that I actually claim credit for the decline. As they say, it takes a village.

Full Story...



A look back at our origins

by David Brin

We are the first human civilization to remove our envisioned “golden age” from an imagined-nostalgic past and instead plant that better-than-the-present era (tentatively) in a potential future.

Full Story...



Today’s Robot Films Reflect Popular Fears Concerning Artificial Intelligence

by Maria Ramos

The civilized world has an ever-intensifying relationship with automated computer technology. It is involved in nearly everything we do, every day, from the time we wake to the time we go to sleep. Why, then, does so much of our entertainment reflect a deep-seated fear of technology and its potential for failure?



Dunkin’ Donuts ditches titanium dioxide – but is it actually harmful?

by Andrew Maynard

In response to pressure from the advocacy group As You Sow, Dunkin’ Brands has announced that it will be removing allegedly “nano” titanium dioxide from Dunkin’ Donuts’ powdered sugar donuts. As You Sow claims there are safety concerns around the use of the material, while Dunkin’ Brands cites concerns over investor confidence. It’s a move that further confirms the food sector’s conservatism over adopting new technologies in the face of public uncertainty. But how justified is it based on what we know about the safety of nanoparticles?



Our Final Hour

by John G. Messerly

From Our Final Hour: A Scientist’s Warning by Martin Rees, Royal Society Professor at Cambridge and Britain’s Astronomer Royal. “Twenty-first century science may alter human beings themselves - not just how they live.” (9) Rees accepts the common wisdom that the next hundred years will see changes that dwarf those of the past thousand years, but he is skeptical about specific predictions.

