“Food Fight” or substantive debate?
Mike Treder   Aug 3, 2009   Ethical Technology  

In the Editor’s Blog of his online transhumanist magazine, h+, RU Sirius describes the recent and ongoing debate between technoprogressives and some radical libertarians as a “Political Food Fight.”

Over at The Speculist, Phil Bowermaster says:

One area where transhumanists consistently disappoint me is politics. We can talk about accelerating change and singularities and human enhancement and the possibilities are endless, but when the subject comes to politics, everyone seems to revert to one of a very small number of philosophical templates, most of them created in the 19th century or earlier. And for some reason those are inviolate.

Bruce Sterling at Beyond the Beyond jumps in and wonders, “Why aren’t these advanced conceptualists arguing about suffrage for Artificial Intelligences?”

Agreeing with that point, a commenter at Michael Anissimov’s Accelerating Future blog opines that “the arguments are taking place in a sandbox in the middle of the Sahara.”

The implication there, as in the other three items referenced above, is that a debate over contemporary terrestrial (and US-centric) politics is passé, that it basically misses the point of what emerging technologies are all about. Once artificial general intelligence is achieved, or molecular manufacturing is developed, or the Singularity arrives, then none of this will matter any more—or so the story goes.

I beg to differ.

For one thing, emerging technologies—whether AI or nanotech or genetic engineering—do not emerge into nor from a vacuum. They are developed within a context of political reality, amidst the daily tussle over regulation, funding, and proper usage. None of them will arise fully-grown and pristine, as Venus from the sea, but will be hammered out, molded, shaped, and modified through endless discussions in both corporate boardrooms and the halls of government. Our input is therefore essential if we hope to have any influence on how those technologies ultimately are deployed.

Some may speculate that a particular powerful technology (usually AI) will suddenly and immediately transform the world in such a way as to render moot all previous political considerations, but I would say that those ideas are exactly that: speculation. That scenario is no more certain than any other; clearly, it is worth exploring, but it would be a big mistake to make that the final word and forgo examination of other less millenarian but probably more likely outcomes.

Given that we live in a real world, not a science fiction world, where real governments and real companies make real decisions that affect real people—and knowing that we can’t say for sure when or if any spectacular new technology will turn everything upside down overnight—then it is up to us to stay engaged in current political debates and work out the best possible environments within which transformative technologies might emerge.

Mike Treder is a former Managing Director of the IEET.


Mike, I think you turned it into a food fight with your response to Michael Anissimov.  Certainly, the discourse should rage on…


Food fight! Socialismo o Muerte!

“it is up to us to stay engaged in current political debates and work out the best possible environments within which transformative technologies might emerge.”

this is exactly right.

so my political orientation is centered around promoting good investments in education, infrastructure and research in order to hasten the emergence of these technologies.

also reducing political/social/economic inequalities will enable maximum participation when transformative technologies emerge and provide a hedge against one section of society using them to exploit/oppress another.

Excellent piece, Mike.

I agree that our politics and social norms will powerfully affect what kind of singularity we’ll get.  But not the SURFACE politics of the useless, almost-meaningless so-called Left-vs-Right axis.  Nor will it be primarily a matter of allocation of taxed resources.  Except for investments in science and education and infrastructure, those are not where the main action will be.  They will not determine the difference between “good” and “bad” transcendence.  Between THE MATRIX and, say, FOUNDATION’S TRIUMPH.

No, what I figure to be the determining issue runs much deeper.  Shall we maintain momentum and fealty to the underlying concepts of the Western Enlightenment?  Concepts that run even deeper than democracy or the principle of equal rights, because they form the underlying, pragmatic basis for our entire renaissance.

These are, I believe, the pillars of our civilization—the reason that we have accomplished so much more than any other, and that we may be about to create Neo-Humanity.

1.  We acknowledge that individual human beings—and presumably the expected caste of neo-humans—are inherently flawed in their subjectively biased views of the world.  We are all delusional!  Even the very best of us.  Even (despite all their protestations to the contrary) all leaders.  And even (especially) those of you out there who believe that you have it all sussed.

Six thousand years of history show this to be the one towering fact of human nature.  Our combination of delusion and denial is the core predicament that stymied our creative, problem-solving abilities, delaying the great burst that we’re now part of.  And these dismal traits still erupt everywhere, in all of us.  Delusion and denial will arise, inevitably, in the new gods.

2.  We (and presumably the neo-humans) can be forced to notice, acknowledge, and sometimes even correct our favorite delusions, through one trick that lies at the heart of every Enlightenment innovation—the processes called Reciprocal Accountability (RA). 

In order to overcome denial and delusion, the Enlightenment nurtured competitive systems in markets, democracy, science and courts, through which back and forth criticism is encouraged to flow, detecting many errors and allowing many innovations to improve.  Competition isn’t everything! Cooperation and generosity and ideals are clearly important parts of the process. But ingrained reciprocality of criticism—inescapable by any leader—is the core innovation.

3.  These systems—including “checks and balances” exemplified in the U.S. Constitution—help to prevent the kind of sole-sourcing of power, not only by old-fashioned human tyrants, but also the kind of oppression that we all fear might happen, if the Singularity were to run away, controlled by one or a few mega-machine-minds. The nightmare scenarios portrayed in The Matrix, Terminator, or the Asimov universe.

How can we ever feel safe, in a near future filled with powerful AI?  The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other transparent and accountable. 

This outcome—almost never portrayed in fiction—would strike us as inherently more likely to be safe and successful.  After all, isn’t that today’s situation?  The vast majority of citizens do not understand arcane matters of science or policy or finance.  They watch the wrangling among alphas and are reassured to see their application of accountability upon each other… a reassurance that was betrayed by recent attempts to draw clouds of secrecy across deliberative processes.

Sure, it is profoundly imperfect, and fickle citizens can be swayed to apply their votes in unwise directions.  We sigh and shake our heads… as future AI Leaders will moan over organic-human sovereignty.  But, if they are truly wise, they’ll continue this compact.  Because the wisest among them will recognize that “I might be wrong” is still the greatest thing that any mind can say.

Alas, even those who want to keep our values strong, heading into the Singularity Age, seldom parse it down to this fundamental level.  They talk—for example—about giving AI “rights” in purely moral terms…  or perhaps to placate them and prevent them from rebelling and squashing us.

But the real reason to do this is far more pragmatic.  If the new AIs feel vested in a civilization that considers them “human,” then they may engage in our give-and-take process of shining light upon delusion.  Reciprocal accountability—extrapolated to a higher level—may thus maintain the core innovation of our civilization.  Its core and vital insight.

And thus, we may find that our new leaders—our godlike grandchildren—will still care about us… and keep trying to explain.

I do get what you and Dr J are saying- you have to engage with the realities of the human condition to get anywhere in the world - sure I understand that, but not everyone is really suited to being a social activist or playing what are really just ‘monkey games’.  I do want a Singularity myself, being too hot-headed and impatient for the conventional route. 

I’m no super-genius; rather, my strategy was to rely on direct conscious awareness (trying to bypass rational intelligence and represent the nature of mind directly in my consciousness by using what cognitive scientists call ‘ontology’).

Given the torrent of explosive insights coming out of IT/Cognitive Science, I’m going to go out on a limb and confidently predict a Singularity sometime in the next 25 years.

If the Singularity happens, well you and Dr J and the ieet folks have shown you are the supreme pragmatists - I’m sure you will quickly adjust to the new power structures.  Singularity won’t end politics, just radically change the nature of it.

We need post-human empathy, not intelligence.  And I think the way to do this is actually quite simple, but also rather subtle.  Here’s the principle:

“Take the universal mathematical principles governing the workings of minds in general.  Now *represent these principles directly in conscious experience*”

Then you would have a mind capable, not only of fully *understanding* all other minds, but far more importantly, an empathic mind, capable of directly *experiencing* what other minds experience.

That’s what AI folks should aim for.

Mike, please don’t interpret Peter Thiel’s remarks as representing those of most Libertarians.  Everyone I’ve spoken to is horrified by his remarks!

In my experience there are two types of libertarians.

1) Those like Peter Thiel who are fully aware that their ideas would lead to gross inequality, and to exploitation and oppression of the economically weak by the economically strong, with nothing to protect against it.

2) Those who are naive and utopian and actually believe that it would work, and that churches and charities would actually be adequate to protect against crushing poverty, illiteracy, disease, exploitation and environmental degradation.

Here’s a follow-up of Thiel defending himself against the disenfranchisement assertion:

