
MULTIMEDIA: Cyber Topics

Data Mining: Twitter, Facebook and Beyond

A Debate on the Right to be Forgotten

Building Digital Trust: A New Architecture for Engineering Privacy (10 min)

The future is going to be wonderful (If we don’t get whacked by the existential risks)

Digital Rights Management

Government & Surveillance

Give Edward Snowden Clemency!

How The Grinch Stole an NFL Gnome – The Future of Manufacturing and Distribution

TechDebate: Lethal Autonomous (“Killer”) Robots

Terasem’s Lifenaut Project: Be The Author of Your Own Story

Futurists discuss The Transhumanist Wager

Transhumanism - Antecedents & the Future

Ignite San Francisco 6: The Wired Brain

The Love Police: Megaphone the Drone

Online Dating




Cyber Topics




The Future As History

by Rick Searle

It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder: what is the point of all this prophecy that stretches out beyond the decades one expects to live? The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least, inspire Noah-like preparations for disaster.



Google’s Cold Betrayal of the Internet

by Harry J. Bentham

Google Inc.’s 2013 book The New Digital Age, authored by Google chairman Eric Schmidt and Google Ideas director Jared Cohen, was showered with praise by many but attacked by Julian Assange in a review for the New York Times, which described it as a “love song” from Google to the US state. Also addressed in Assange’s subsequent book When Google Met WikiLeaks, Google’s book makes an unconvincing effort to depict the internet as a double-edged sword, one that both empowers (p. 6) and threatens (p. 7) our lives.



Don’t Diss Dystopias: Sci-fi’s warning tales are as important as its optimistic stories.

by Ramez Naam

This piece is part of Future Tense, a partnership of Slate, New America, and Arizona State University. On Thursday, Oct. 2, Future Tense will host an event in Washington, D.C., on science fiction and public policy, inspired by the new anthology Hieroglyph: Stories & Visions for a Better Future. For more information on the event, visit the New America website; for more on the Hieroglyph project, visit the website of ASU’s Project Hieroglyph.



Review: When Google Met WikiLeaks (2014) by Julian Assange

by Harry J. Bentham

Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.



How to avoid drowning in the Library of Babel

by Rick Searle

Between us and the future stands an almost impregnable wall that cannot be scaled. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is to use the same tools we use in understanding the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language.



Actually: You ARE the Customer, Not the Product

by Ramez Naam

Don’t believe the hype. You’re the customer, whether you pay directly or by seeing ads. Tell me if you’ve heard this one before: “On the internet, if you’re not paying for something, then you’re not the customer. You’re the product.”



Snowden, Sousveillance and Social T Cells

by David Brin

Wired has a long-form interview with Edward Snowden: The Most-Wanted Man in the World. A must-read… as far as it goes. Only keep ahold of your ability to parse complexities and contradictions, because my reflex is always to point out aspects that were never raised. I refuse to choose one “side’s” purist reflex. So should you.



Emotion, Artificial Intelligence, and Ethics

by Kevin LaGrandeur

The growing body of work in the new field of “affective robotics” involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim of this is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions is important to this effort to create safer and more attractive AI because it is hoped that instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions that arise relative to these endeavors.




Cyberwarfare ethics, or how Facebook could accidentally make its engineers into targets

by Patrick Lin

Without clear rules for cyberwarfare, technology workers could find themselves fair game in enemy attacks and counterattacks. If they participate in military cyberoperations—intentionally or not—employees at Facebook, Google, Apple, Microsoft, Yahoo!, Sprint, AT&T, Vodafone, and many other companies may find themselves considered “civilians directly participating in hostilities” and therefore legitimate targets of war, according to the legal definitions of the Geneva Conventions and their Additional Protocols.



Don’t fear the robot car bomb

by Patrick Lin

Within the next few years, autonomous vehicles—alias robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe that robot cars would be “game changing” for law enforcement. The self-driving machines could be professional getaway drivers, to name one possibility. Given the pace of developments on autonomous cars, this doesn’t seem implausible.



Ways to make civilization robust

by David Brin

The resilience of our entire civilization increasingly depends on a fragile network of cell phone towers, which are the first things to fail in any crisis, whether a hurricane or other natural disaster… or else deliberate sabotage (e.g. EMP or hackers).



Living Without ‘Her’

by Andy Miah

What makes love so important to us? Why is it so central to our lives? Why do we invest so much of ourselves into its discovery and feel so strongly that our happiness depends on it lasting?



The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation

by Richard Loosemore

My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible: they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design, and spontaneously self-modify to make itself safe.



Why the Castles of Silicon Valley are Built out of Sand

by Rick Searle

If you get just old enough, one of the lessons living through history throws at you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps even more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them would take up more room than a single post allows, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.



IEET Audience Overwhelmingly Supportive of Turing Test

Two thirds of the 278 respondents to our poll on the Turing Test believe that “It is a strong indication of a mind like a human’s.”




The Individual and the Collective, Part Two

by Valkyrie Ice McGill

In the last post we observed the dynamics of the collective in terms of a small tribe, and indicated that at this size, things worked pretty well. That is not to say that error modes were not possible, but that when error modes arose, there were mechanisms in place to deal with those errors. Essentially, at this scale, the ability of individuals to veil their actions in a wall of secrecy did not exist. While it is certainly possible for the individual to lie, cheat, steal and deceive, such actions could only be carried out to a limited extent, and carried repercussions that were deleterious to that individual’s long-term well-being.



IEET Readers Divided on Robot Cars That Sacrifice Drivers’ Lives

Intrigued by IEET Fellow Patrick Lin’s essay “The Ethics of Autonomous Cars,” we asked “Should your robot car sacrifice your life if it will save more lives?” A third of the 196 of you who responded said no, a third said yes, and a third said it should be the driver’s option.




Is Net Neutrality Really a “Lose-Lose?” (Marc Andreessen says so)

by Jon Perry

Tyler Cowen points to this great Marc Andreessen interview in the Washington Post that features him saying the following about net neutrality: “So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks.”



Brave Citizenship beats a Scorched Earth Policy

by David Brin

Most of us in the West were raised with legends, myths and movies that taught Suspicion of Authority (SoA). Thanks to the great science fiction author George Orwell, we share a compelling metaphor, Big Brother, propelling our fears about a future that may be dominated by tyrants.



The Kingdom of Machines

by Rick Searle

For anyone thinking about the future relationship between nature, man, and machines, I’d like to make the case for the inclusion of an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov or Arthur C. Clarke. And many of us, though perhaps fewer, have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, Ursula Le Guin, or, nowadays, Paolo Bacigalupi. But certainly almost none of us have heard of Samuel Butler or, better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-Whon).



Kevin LaGrandeur’s Book on Androids Wins SFTS Honorable Mention

IEET Fellow Kevin LaGrandeur’s book Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves has been awarded an Honorable Mention by the Science Fiction and Technoculture Studies (SFTS) program.




Google Is Not Your Enemy. (But it’s not your friend either)

by Valkyrie Ice McGill

I am sure you have heard it constantly: “Google is (insert fear term here).” They want to take over the internet, they are building Skynet, they are invading our privacy, they are trying to become Big Brother, etc., etc., ad nauseam. Be it Glass, its recent acquisition of numerous robotics firms, or even its hiring of Ray Kurzweil, Google has recently been in the news a lot, usually as the big bad boogeyman of whatever news story you are reading.



Black Death for the Internet?

by Kathryn Cave

Will viruses be the digital era’s Black Death?



Equality, Fairness and the Threat of Algocracy: Should we embrace automated predictive data-mining?

by John Danaher

I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. Thus, for example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.



Majority of IEET Readers see AGI as Potential Threat to Humanity

We asked whether “artificial general intelligence with self-awareness” or “uploaded personalities or emulations of human brains” were more of a threat to human beings. Almost three times as many of you thought AGI was more of a threat than uploaded personalities, and overall 62% of the 245 respondents thought one or the other or both were a threat.




How the Web Will Implode

by Rick Searle

Jeff Stibel is either a genius when it comes to titles, or has one hell of an editor. The name of his recent book, Breakpoint: Why the web will implode, search will be obsolete, and everything you need to know about technology is in your brain, was about as intriguing a title as I had come across, at least since The Joys of X. In many ways the book delivers on the promise of its title, making an incredibly compelling argument for how we should be looking at the trend lines in technology, and it is chock-full of surprising and original observations.



Special Issue of JET: Hughes, Walker, Campa & Danaher on Tech Unemployment and BIG

The special issue of the Journal of Evolution and Technology has been published, with nine essays on technological unemployment and the basic income guarantee, six of them by IEETers.




Big Data and the Vices of Transparency

by John Danaher

Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers.



Real Identity on the Internet (My Variation)

by Kelly Hills

What is a digital trail? How can all your blog posts, photos, opinions, articles, and news affect your personal, professional and academic life? What is happening to the internet, and how is it affecting people in the real world? Kelly Hills tells us about her own personal story and how life online is a bit more complicated than you might expect.



The Dark Side of a World Without Boundaries

by Rick Searle

The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all talking about the darker potential of such technology.

