Institute for Ethics and Emerging Technologies









Cyber Topics




AI Will Solve Aging - it is a Tool, Not a Threat

by David Kekich

Dear Future Centenarian, 

I’ve been stumping for some time about how Artificial Intelligence will provide the shortest path to curing aging forever. In fact, without it, I’m convinced we won’t solve aging in our lifetime. I’m glad to hear Peter Diamandis describe AI as the most important technology we’re developing this decade.

Peter goes on to say it’s a massive opportunity for humanity, not a threat, as well as the following:

Full Story...



The Balkanization of Things

by Marcelo Rinesi

The smarter your stuff, the less you legally own it. And it won’t be long before, besides resisting you, things begin to quietly resist each other.

Full Story...



A Truly Major Issue, helping decide the fate of democracy

by David Brin

One thing I promise when we do politics here: it won’t be stuff you are reading anywhere else.

Cranking back NSA spying…?

Topmost in the news recently: the surprising ability of the U.S. Congress to actually pass a compromise bill, one that dials back a few of the powers given (since 9/11) to our Professional Protector Caste (PPC) in the Patriot Act.

Full Story...



Who’s Winning the Surveillance Arms Race?

by Valkyrie Ice McGill

You know the names Manning, Snowden, and Assange; at least, you do unless you’ve been living under a rock. I’m pretty sure you also know that “Big Brother” doesn’t like them much.

But what you might not know is that their very existence shows that “Big Brother” isn’t as large and in charge as you might think he is.

Full Story...



The Future of Personal Privacy - Review of “You Have Been Inventoried”

by Tery Spataro

On Friday, March 6, 2015, more than 3,000 people attended the ASU Emerge event, where futurist Eric Kingsbury, founder of KITEBA and cofounder of the Confluence Project, launched “You Have Been Inventoried”. I helped with some of the content for the project, along with others from the Confluence Project.

Full Story...



How Freedom of Information Will Change the World

by Valkyrie Ice McGill

Everywhere you look in the world you can see pessimism, gloom, doom and negativity. No matter where you live, it seems many are convinced that there’s just no hope. Many people have stopped trying to do anything, while they “wait for god” or “wait for the Singularity.” Or simply wait, period.

The negativity is everywhere.

So, here’s one of my rants, against that negativity.

Full Story...



We May be Systematically Underestimating the Probability of Annihilation

by Phil Torres

This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable, and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as the Doomsday Argument, except the reasoning is far more robust.

Full Story...



Human Rights for Cyberconscious Beings

by Martine Rothblatt

Even if they aren’t flesh, “mindclones” deserve protection.

For much of the 20th century, capital punishment was carried out in most countries. During the preceding century, many countries, England among them, held frequent public hangings. Today even Russia, with a mountainous history of government-ordered executions, has a capital-punishment moratorium; it has not executed a criminal through the judicial system since 1996.

If we can learn to protect the lives of serial killers, child mutilators, and terrorists, surely we can learn to protect the lives of peace-loving model citizens known as mind clones and bemans—even if they initially seem odd or weird to us.

excerpt from Virtually Human: The Promise and Peril of Digital Immortality

Full Story...



Are AI-Doomsayers like Skeptical Theists? A Precis of the Argument

by John Danaher

Some of you may have noticed my recently-published paper on existential risk and artificial intelligence. The paper offers a somewhat critical perspective on the recent trend for AI-doomsaying among people like Elon Musk, Stephen Hawking and Bill Gates. Of course, it doesn’t focus on their opinions; rather, it focuses on the work of the philosopher Nick Bostrom, who has written the most impressive analysis to date of the potential risks posed by superintelligent machines.

Full Story...



The Epistemic Costs of Superintelligence: Bostrom’s Treacherous Turn and Sceptical Theism

by John Danaher

An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead either to a reductio of the doomsayers’ position, or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

Full Story...



Do Killer Robots Violate Human Rights?

by Patrick Lin

When machines are anthropomorphized, we risk applying a human standard that should not apply to mere tools.

Full Story...



Should some conversations be suppressed?

by David Wood

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down? Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

Full Story...



It’s Time to Destroy DRM

by Erick Vasconcelos

On January 20, the Electronic Frontier Foundation (EFF) announced the Apollo 1201 project, an effort to eradicate digital rights management (DRM) schemes from the world of Internet commerce. Led by well-known activist Cory Doctorow, the project aims to “accelerate the movement to repeal laws protecting DRM” and “kick-start a vibrant market in viable, legal alternatives to digital locks.” According to EFF, DRM technologies “threaten users’ security and privacy, distort markets, undermine innovation,” and don’t effectively protect so-called “intellectual property.”



#1 Editor’s Choice Award: Rule by Algorithm? Big Data and the Threat of Algocracy

by John Danaher

An increasing number of people are worried about the way in which our data is being mined by governments and corporations. One of these people is Evgeny Morozov. In an article that appeared in the MIT Technology Review back in October 2013, he argued that this trend poses a serious threat to democracy, one that should be resisted through political activism and “sabotage”. As it happens, I have written about similar threats to democracy myself in the past, so I was interested to see how Morozov defended his view.



Living in the Divided World of the Internet’s Future

by Rick Searle

Sony hacks, barbarians with Facebook pages, troll armies, ministries of “truth”: it wasn’t supposed to be like this. When the early pioneers of what we now call the Internet freed the network from the US military, they were hoping for a network of mutual trust and sharing, a network like the scientific communities in which they worked, where minds were brought into communion from every corner of the world. It didn’t take long for some of the witnesses to the global Internet’s birth to see in it the beginnings of a global civilization: the unification, at last, of all of humanity under one roof, brought together in dialogue by the miracle of a network that seemed to eliminate the parochialism of space and time.



Procedural Due Process and the Dangers of Predictive Analytics

by John Danaher

I am really looking forward to Frank Pasquale’s new book The Black Box Society: The Secret Algorithms that Control Money and Information. The book looks to examine and critique the ways in which big data is being used to analyse, predict and control our behaviour. Unfortunately, it is not out until January 2015. In the meantime, I’m trying to distract myself with some of Pasquale’s previously published material.



The Transhumanist Future of Sex (Crimes?)

by B. J. Murphy

On August 31 of this year, nearly 200 celebrities had their private images hacked and released for the entire world to see. These images ranged from normal day-to-day activities to their most private moments, from nudity to sex. The event hit both mainstream and social media, flooding the online sphere under the hashtags #Celebgate and #Fappening.



The Future As History

by Rick Searle

It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what’s even the point of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster.



Google’s Cold Betrayal of the Internet

by Harry J. Bentham

Google Inc.’s 2013 book The New Digital Age, authored by Google chairman Eric Schmidt and Google Ideas director Jared Cohen, was showered with praise by many, but attacked in a review by Julian Assange for the New York Times, where it is described as a “love song” from Google to the US state. Also addressed in Assange’s subsequent book When Google Met WikiLeaks, Google’s book makes an unconvincing effort to depict the internet as a double-edged sword, both empowering (p. 6) and threatening our lives (p. 7).



Don’t Diss Dystopias: Sci-fi’s warning tales are as important as its optimistic stories.

by Ramez Naam

This piece is part of Future Tense, a partnership of Slate, New America, and Arizona State University. On Thursday, Oct. 2, Future Tense will host an event in Washington, D.C., on science fiction and public policy, inspired by the new anthology Hieroglyph: Stories & Visions for a Better Future. For more information on the event, visit the New America website; for more on the Hieroglyph project, visit the website of ASU’s Project Hieroglyph.



Review: When Google Met WikiLeaks (2014) by Julian Assange

by Harry J. Bentham

Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.



How to avoid drowning in the Library of Babel

by Rick Searle

Between us and the future stands an almost impregnable wall that cannot be scaled. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is by using the same tools we use in understanding the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language.



Actually: You ARE the Customer, Not the Product

by Ramez Naam

Don’t believe the hype. You’re the customer, whether you pay directly or by seeing ads. Tell me if you’ve heard this one before: “On the internet, if you’re not paying for something, then you’re not the customer. You’re the product.”



Snowden, Sousveillance and Social T Cells

by David Brin

Wired has a long-form interview with Edward Snowden: The Most-Wanted Man in the World. A must-read… as far as it goes. Only keep hold of your ability to parse complexities and contradictions, because my reflex is always to point out aspects that were never raised. I refuse to choose one “side’s” purist reflex. So should you.



Emotion, Artificial Intelligence, and Ethics

by Kevin LaGrandeur

The growing body of work in the new field of “affective robotics” involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim of this is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions is important to this effort to create safer and more attractive AI because it is hoped that instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions that arise relative to these endeavors.

Full Story...



Cyberwarfare ethics, or how Facebook could accidentally make its engineers into targets

by Patrick Lin

Without clear rules for cyberwarfare, technology workers could find themselves fair game in enemy attacks and counterattacks. If they participate in military cyberoperations—intentionally or not—employees at Facebook, Google, Apple, Microsoft, Yahoo!, Sprint, AT&T, Vodafone, and many other companies may find themselves considered “civilians directly participating in hostilities” and therefore legitimate targets of war, according to the legal definitions of the Geneva Conventions and their Additional Protocols.



Don’t fear the robot car bomb

by Patrick Lin

Within the next few years, autonomous vehicles—alias robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe that robot cars would be “game changing” for law enforcement. The self-driving machines could be professional getaway drivers, to name one possibility. Given the pace of developments on autonomous cars, this doesn’t seem implausible.



Ways to make civilization robust

by David Brin

Our entire civilization’s resilience increasingly depends on a fragile network of cell phone towers, which are the first things to fail in any crisis, whether a hurricane or other natural disaster or deliberate sabotage (e.g., EMP or hackers).



Living Without ‘Her’

by Andy Miah

What makes love so important to us? Why is it so central to our lives? Why do we invest so much of ourselves into its discovery and feel so strongly that our happiness depends on it lasting?



The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation

by Richard Loosemore

My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible: they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design and spontaneously self-modify to make itself safe.


The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA 
Email: director @ ieet.org     phone: 860-297-2376