
Problems of Transhumanism: Liberal Democracy vs. Technocratic Absolutism


J. Hughes


Ethical Technology

January 23, 2010

Transhumanists, like Enlightenment partisans in general, believe that human nature can be improved but are conflicted about whether liberal democracy is the best path to betterment. The liberal tradition within the Enlightenment has argued that individuals are best at finding their own interests and should be left to improve themselves in self-determined ways. But many people are mistaken about their own best interests, and more rational elites may have a better understanding of the general good. Enlightenment partisans have often made a case for modernizing monarchs and scientific dictatorships. Transhumanists need to confront this tendency to disparage liberal democracy in favor of rule by dei ex machina and technocratic elites.


...



COMMENTS



Posted by Barnaby Dawson  on  01/23  at  05:44 PM

I generally agree with this piece, and I completely agree that liberal democracy is the right model to go forward with into a transhumanist future. However, I suspect liberal democracy may require significant modification to work in a world that has human-level AI.

I’d be interested in the author’s response to the following points:

1) Suffrage will need to be extended in some way to AI.  Aside from the obvious moral arguments, some form of political representation for AIs will be necessary economically, once AIs form a significant part of the workforce.

2) It’s likely that single AI minds will have multiple copies (perhaps even millions of copies), due to the economic advantage from a decrease in the cost of education and learning.

3) If one AI has 1,000,000 copies, does each copy get one vote? Do all 1,000,000 exact copies collectively have only one vote? If the latter, does the timing and closeness of the copying matter (i.e. as the copies diverge over time, do their collective voting rights increase)? If the former, how do we deal with problems as the number of extant copies of an AI increases and decreases (consider also the potential for manipulation of the system through the creation of AIs)? One possible weighting scheme is sketched just after this list.

4) If AIs differ in processor structure and size, how do you weigh their voting rights against each other? Two standard processors = two votes?
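To make question 3 concrete, here is a purely hypothetical toy scheme, offered only as an illustration and not taken from the article or comments, in which a template’s collective voting weight grows as its copies diverge:

# Hypothetical illustration only: collective voting weight for one AI
# "template" and its copies, growing as the copies diverge.
def collective_votes(divergences):
    # 'divergences' holds one made-up score per copy, from 0.0 (exact copy
    # of the template) to 1.0 (fully diverged, effectively a distinct person).
    # The template always carries 1 vote; divergence earns extra weight.
    return 1.0 + sum(divergences)

print(collective_votes([0.0, 0.0, 0.0]))  # 1.0 - three exact copies share a single vote
print(collective_votes([0.2, 0.5, 1.0]))  # 2.7 - diverged copies add voting weight

Under a scheme like this the manipulation worry shifts to whoever measures “divergence”, which is arguably just as hard a problem.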





Posted by jhughes  on  01/23  at  06:55 PM

@ Barnaby

I agree that some form of robot rights should be possible, although it is going to require a very careful definition of the cognitive and emotional capabilities that they will need to possess to be rights-bearing. Defining those qualities will also define them for modified human beings, so it might be that we generate robot rights by defining which kinds of modified humans have rights. AIs may not experience personhood in anything like the way we do, making the question even more difficult. Singularitarians rarely take the question of robot rights seriously BTW since they believe robots will go from sub-human to godlike in a matter of days.

The capacity to clone minds, uploaded or machine-origin, raises a profound problem for the one-person/one-vote model. It will certainly require some combination of restrictions of mind cloning, or restrictions on the rights of mind clones, such as one-vote per original mind template or something.

The problem of weighing the rights of more and less powerful AIs is a subset of the problem of the rights of more and less powerful persons in general. The ideal is legal and political equality, while the reality is inequality. The question is how far we can push our political institutions toward the ideal without the system breaking down. Just as nuclear nations have rights and prerogatives that non-nuclear nations don’t, such as a vote at the UN Security Council, so we may be forced to formally acknowledge the power differentials between enhanced humans and machines, and humans 1.0. But I’d like to avoid that as long as possible.





Posted by mjgeddes  on  01/24  at  12:09 AM

I’ll stick with Alan Fiske and his three types of human relationship: Communal sharing, Authority Ranking and Equality Matching:

http://www.sscnet.ucla.edu/anthro/faculty/fiske/relmodov.htm

On ‘Overcoming Bias’ I hypothesized that these three types of cognitive social drives were what motivated the development of socialism, conservatism and libertarianism, respectively.  I think it’s not a matter of one or the other being best, but each having its proper place.

If I am installed president of the world, here is my ‘Extrapolation’ of optimal politics:

Socialism: I will turn over all issues concerned with health and the environment to full democratic control. I’m convinced that natural resources should not be subject to market forces and that life is not for sale. Land should be rented rather than owned, as all natural resources are owned by all, and revenues from natural resources should be returned as a GMI (guaranteed minimum income), as per the Georgists. Universal health care for all, also fully democratically controlled.

Conservatism: I believe this has its place for law and order and defence. There needs to be *some* authoritarian control to prevent existential risks and maintain a single integrated society for global coordination. So I would have a *limited* Singleton, whose authority is limited to these issues. It can act as a global clearing house of good information and advice, an information source that should be available to all.

Libertarianism:  Also has its place for issues affecting only individuals.  I would still allow a flourishing free market for small-scale consumer goods, capitalism has its place as a driver of creativity, growth and liberty.  I would allow many more civil liberties than at present, enshrining morphological freedoms into law. 

I believe a Singleton should be in ultimate control (probably invested in SAI - Singularitarian Artificial Intelligence), but its role should be very limited (law and order, defence, as per the above) - it serves as ‘SysOp’. Most control should be turned over to democracy and the market (as per the above). The SAI also maintains the overall constitution, clearly spelling out in law the limits of all three types of control (Singleton, Democracy, Market).

I believe my system is a workable compromise of all the ideas discussed within transhumanism, avoiding the extreme megalomania of some Singularitarians, whilst still having a Singleton and radical transformation, and at the same time embracing the radical best of both socialism and capitalism, whilst avoiding the worst.





Posted by Michael Anissimov  on  01/24  at  04:55 AM

Hi James,

If one concedes that machines will eventually become smarter and more moral than humankind, then wouldn’t it make sense to give them positions of higher responsibility than most humans?  You’d be handing some degree of leadership over to a whole new category of mind—not a new individual, or even a new species.  Diversity among machine minds could greatly exceed that found within a species.  So would it really be anti-democratic to find it plausible that we would welcome their leadership?

Thank you for your essay,
Michael





Posted by Giulio Prisco  on  01/24  at  05:30 AM

My own position on democracy is summarized by two worn-out clichés:

Democracy is the worst form of government except all the others that have been tried.

Democracy is two wolves and a lamb deciding, by majority vote, what to have for dinner.

The first acknowledges that democracy, with all its inefficiencies and byzantine rules, is still the best tradeoff between efficiency and fairness that we have found.

The second acknowledges that democracy has a tendency to become a dictatorship of the majority (think “moral majority”) and result in the oppression of all minorities.

I think there should be many communities living by many different sets of rules (“thousand flowers”), and citizens should be free to move to the community most suitable to them.

Of course, this is easier to say than to do.





Posted by Erich Kofmel  on  01/24  at  08:58 AM

Check out my blog, the “Anti-Democracy Agenda”:

www.anti-democracy.com

Cheers





Posted by Carl Shulman  on  01/24  at  09:28 AM

“In particular, it is supposed, a hundred million dollars from Peter Thiel put toward the project of making a benevolent super-AI will do far more to improve the world than any political movement”

The comparison isn’t against a “political movement”; it’s against the extra impact of an extra hundred million dollars added to the billions spent annually on political lobbying and campaigning. Consider the case of vegetarianism.

Vegetarian activists are not very numerous, and given current conditions they face a very uphill struggle in eliminating the suffering of farm animals, since this would involve the populace giving up meat. However, if efforts to produce in vitro meat succeed (http://www.new-harvest.org/default.php), meat consumption could be decoupled from animal suffering and the political struggle would become vastly easier.

If you want to promote vegetarianism as an individual or small group with limited funds, you’ll probably do best by backing in vitro meat rather than persuading the population at large to stop eating meat. Your limited capacity for political activism would likely be best targeted at getting funding for in vitro meat research, or recruiting more scientists and financial supporters for such research.

Of course, once in vitro meat technology is available it would take further efforts to ban meat-production methods that involve extensive suffering, but they would be much more productive per unit of effort at that time (easier to attract support, etc).

A democratic majority could decree mandatory vegetarianism today, even in the absence of in vitro meat, *if the attitudes of the voters miraculously changed*, and this would be better by the vegetarians’ lights, but it’s not an option open to individual activists or groups thereof. An individual who desires change must choose between methods of persuasion or changing the circumstances of choice, with varying degrees of effectiveness.

More here:

http://lesswrong.com/lw/181/standing_in_whose_shoes/

If you’re arguing against Thiel’s thesis that individual activists can have greater marginal impact on political outcomes by spending on research that would change the landscape in which political contests occur than by direct political action, the most convincing arguments would involve evidence (empirical or theoretical) that there exists some form of more direct political action that would deliver more in the way of outcomes per activist dollar. If you can show a method to buy more existential risk reduction per dollar through politics, a lot of folk would be willing to reallocate their efforts. But what is that method, at this time?





Posted by jhughes  on  01/24  at  10:32 AM

@ Michael

I don’t consider being the pet of a super-robot an attractive future.

@ Carl

Empirically, X amount of money may accomplish more toward Y end in any of a number of Z projects. Since it is very hard to say ahead of time which are the best, it is usually better to have a diversified portfolio of investments among the projects.

In the case of catastrophic risks, such as controlling nuclear proliferation, Yudkowsky writes off any political effort and counsels focusing only on technology. That’s dumb. New energy technologies might make nuclear power moot, new nuclear reactor technologies might reduce proliferation, and new detection technologies might facilitate monitoring of proliferation. But without a stronger IAEA and political support for non-proliferation, none of it makes any difference.

I know that your employer Peter Thiel doesn’t subscribe to this dismissal of politics since he has been so active as a conservative political activist and bankroller of conservative causes. If he believed that only technology changes the world I doubt he would waste his time as a Board member of the Hoover Institution, writing The Diversity Myth, or supporting anti-immigrant groups. Why not just invest in magical technologies that eliminate communists, immigrants and the threat of political correctness? Or is that technology Facebook?





Posted by Kaj Sotala  on  01/24  at  01:33 PM

Hi James,

Interesting essay. Still, I found it a bit puzzling. I was expecting to find some sort of rebuttal of what you term “technocratic absolutist” arguments, but could see none. In the end you just state that “In response, we defenders of liberal democracy need to marshal our arguments for the virtuous circle of reinforcement between human technological enablement and self-governance”, but don’t actually provide any such arguments.

As far as I can see, you pretty much wrote an essay supporting exactly the view you’re opposing. After all, you repeated many of the “technocratic” arguments, but provided no support at all for the “liberal democratic” side. Am I missing something here?





Posted by jhughes  on  01/24  at  01:58 PM

@ Kaj

Fascinating and dismaying that you read the essay as an affirmation of totalitarian government. I note that Michael also appears to think I fairly represented the argument for Singularitarian totalitarianism and simply wonders why I don’t salute.

Given the values of the rest of my audience I assumed that most would understand the flaws with absolutist government, from monarchy, to military dictatorship, to fascism and communism. Plutocracies, theocracies, and absolutist governments do not do as good a job as liberal democracies at representing majoritarian interests, establishing peace and prosperity, and protecting individual liberty.

As I point out in the essay you see a fundamental difference between human dictatorship and robot dictatorship. To the extent that I see a difference it is that I would at least have some idea what motivates human dictators, and would therefore prefer them over robots, friendly or red-eyed.

That said I have pulled my punches on colonialism and authoritarianism because there are clearly cases, times and places where “the masses” do need to be guided back toward self-determination. For instance I support the military occupation of Afghanistan, which is helping to build that country to a point where they could exercise some meaningful self-determination, all of which would be lost if the Taliban returns. But I think we have to be very careful about endorsing the idea that wise enlightened elites have the right to guide ignorant masses - it has a very bad track record.





Posted by MHandy  on  01/24  at  02:13 PM

An excellent piece that illustrates what may be one of, if not the, prime dilemmas of Enlightenment politics. Frankly, while my own politics lean vaguely towards left-anarchism, I’m utterly undecided on this issue.

The ultimate question, I think, is whether we are willing to cede political power to a singleton (of any type) if we are sure it will fulfill our stated values better than we ever could. Yudkowsky puts forward a good argument that we should, and CEV certainly has some democratic aspects. If the volition were taken from our current rather than our future preferences, it’s hard to see how the singleton could be differentiated in a practical sense from a direct democracy with unlimited state power. Assuming that our future preferences are better than our current ones, it’s easy to see the appeal of CEV.

I want to believe that humans can surmount the challenges of the next century alone, through reasoned debate and the freedoms that have allowed decentralised methods to be so successful in the past. I think we’re in with a chance. But with the many potential extinction risks hanging over us, the desire to use an AI to solve all the problems we wish it to solve will be tempting, and I’d like to have that option available, if only as a failsafe.





Posted by jhughes  on  01/24  at  02:38 PM

@ MHandy

> CEV certainly has some democratic aspects

Perhaps I’m dating myself by hearing Marxist-Leninist warning sirens go off when I read the Singularitarian assertions that the benevolent omnipotent AI would be democratic if democracy was what was in our interests. Anyway, those sirens do go off for me.

The claim is that turning all our decision-making over to a robot god that loved us couldn’t possibly be totalitarian since it would be the fulfillment of our own self-determination, the ultimate democracy. SLAVERY IS FREEDOM.

I don’t buy it, and it scares the heck out of me that some intelligent people in the transhumanist community do. Not because I am very worried about the prospect of our being subjected to a robot dictatorship, but because it shows how open to totalitarian double-think the community is. Although I don’t expect a friendly super-AI I do expect lots of different kinds of future political elites motivated by flavors of “we’re doing this in everybody else’s interest even though they don’t realize it.” My intent here is not to disparage the idea of government by benevolent super-AI, which I consider patently absurd. It is to point to the danger of this kind of rationalizing of absolutism.





Posted by Giulio Prisco  on  01/24  at  02:45 PM

I believe most successful politicians are very smart, otherwise they would not be successful. At the same time, some successful politicians are liars, thieves and sociopaths. It is not about intelligence, it is about self-interest and greed. I don’t see why things should be any different with superAIs.





Posted by Kaj Sotala  on  01/24  at  03:53 PM

@ James:

> Given the values of the rest of my audience I assumed that most would understand the flaws with absolutist government, from monarchy, to military dictatorship, to fascism and communism.

For as long as we were talking about a society run by humans, then yes. But as you remarked yourself, both in the essay and your comment, the “technocratic” counter-claim is that an AI isn’t a human absolutist ruler, and the comparison is thereby invalid. I was expecting you to provide a counter-counter-claim to this, but you never did.

Your comments about not wanting to be “an AI’s pet” are also strange. This would be an understandable objection if we were talking about making an AI that ran things the way *it* liked, with little regard to what humans wanted. But we are expressly talking about an AI that wants nothing else than what humans do. In principle, you could even say that the AI is in some way unnecessary - if it wasn’t around helping us out, we’d eventually build the very same future as it’d be guiding us towards. The main function of the AI would then be to make sure we didn’t kill ourselves by accident, as well as to speed things up, so there wouldn’t be any more unnecessary suffering on the way than there had to be.

To make sure I’ve understood you correctly: is your claim A) that the “technocratic” ideal is offensive even in principle, or B) that the ideal would be great if it could be achieved, but it is too difficult to pull off correctly and will more likely lead to what we humans would call a dictatorship?





Posted by jhughes  on  01/24  at  04:28 PM

@ Kaj

Yes, the idea of benevolent totalitarianism is in principle offensive to me, because I subscribe to those other values about the importance of individuals creating themselves and governing themselves through discussion.

I also do not believe in the possibility of a super-AI of the type you imagine capable of doing these tasks which did not have some kind of self-interest, or was not programmed to serve the interests of some group more than others. I think the notion of such purely altruistic creatures is sublimated religion.

The objection is also to the idea that there is one thing that we all “really” would/should want but just need the super-smart machine to discover. That is a wrong-headed essentialist idea about human desires/aspirations/possibilities, and any attempt to effect such a program is in fact the imposition of some static idea on the process of human self-invention. So yes, I am arguing that any attempt to implement your CEV program will “lead to what we humans would call a dictatorship.”

That said I am interested in having human cognition and emotion increasingly open to self-scrutiny and self-modification, presumably with the aid of many information tools. And I believe new information and communication architectures will facilitate debate and decision-making in ways that make self-governance more intelligent. In those ways AI will hopefully aid in the process of human self-invention and self-governance.





Posted by Aleksei Riikonen  on  01/24  at  06:35 PM

James Hughes:
> The objection is also to the idea that there is one thing
> that we all “really” would/should want but just need the
> super-smart machine to discover.

If that’s what CEV looks like to you, you don’t understand even the basics of it. There are no such assumptions of strong convergence in what humans want.

If you did manage to understand CEV, you’d see it doesn’t really differ from an elaborate polling mechanism (it just also implements the results of the polling). It would even self-delete if it found out that humans in general think like you do. (Though I’m not convinced you’d actually cling to your current thinking if you didn’t feel it convenient in trying to attain personal political power.)

What’s most mind-boggling in your comments is your apparent inability to grasp the theoretical possibility of non-self-interested creatures. Really makes one wonder whether a productive conversation with you on these matters is possible.





Posted by Zack M. Davis  on  01/24  at  09:10 PM

Aleksei: ”(Though I’m not convinced [Hughes would] actually cling to [his] current thinking if [he] didn’t feel it convenient in trying to attain personal political power.)”

Speaking of productive conversation, isn’t it enough to simply argue that someone is wrong, without tacking on mean-spirited speculation about their motivations?





Posted by Zack M. Davis  on  01/24  at  09:58 PM

I had submitted a long comment earlier that hasn’t appeared, whereas more recent comments from me have. I’m going to assume that the long comment got eaten by the antispam filter, so I’m removing the links (and making a few other edits) and resubmitting it below. Apologies to the moderator if this is not the correct thing to do. ZMD

——-

@ James

“As I point out in the essay you see a fundamental difference between human dictatorship and robot dictatorship. To the extent that I see a difference it is that I would at least have some idea what motivates human dictators, and would therefore prefer them over robots, friendly or red-eyed.”

This does seem to be the core of the disagreement. A robot god that loves us is a very misleading metaphor for the sort of thing Yudkowsky et al. are envisioning. Instead of a robot dictator, imagine (as an intuition-aiding thought experiment; obviously I don’t actually have an AI design on hand) something more along the lines of a non-conscious computer system that calculates the likely outcome of various actions, and then executes the action with the “best” likely outcome according to some more-or-less arbitrary specification of which outcomes are to be considered “best.” (More AIXI and less HAL.) It doesn’t have emotions, or a sense of self: it’s just a sheer calculation device. Imagine such a system being substantially better at making decisions and achieving goals than humans and human institutions are: it can develop technologies, engage in politics, manipulate people, and so on. (I realize you may not consider this scenario at all plausible, but please bear with the thought experiment in order to better understand where Yudkowsky et al. are coming from.)
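For readers who want a concrete picture of the kind of system being described here (“more AIXI and less HAL”), the following is a minimal toy sketch; the action list, world model and scoring function are all made-up placeholders, not anyone’s actual design:

# Toy sketch: a non-conscious "optimizer" that predicts outcomes and
# executes whichever action scores "best" under some fixed specification.
def choose_action(actions, predict_outcome, score):
    # No emotions, no sense of self: just argmax over predicted outcomes.
    return max(actions, key=lambda a: score(predict_outcome(a)))

# Made-up stand-ins for the world model and the value specification:
actions = ["do nothing", "cure disease X", "convert biosphere to computronium"]
predict_outcome = lambda a: a   # trivial placeholder "world model"
score = {"do nothing": 0,
         "cure disease X": 10,
         "convert biosphere to computronium": 99}.get

print(choose_action(actions, predict_outcome, score))

Everything hangs on the score function: if it ranks outcomes the way humans would, the selector is helpful; if the programmers get it wrong, it cheerfully picks the catastrophic last row, which is the failure mode described in the next paragraph.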

Then, if one could develop such a system so that it considers outcomes “best” in the same way that humans would do, then by hypothesis it could and would solve our problems better than we could: it shares our values and is more competent. On the other hand, if the programmers made a mistake and the system has a very different conception of “best” outcomes than humans do then it could very well wipe out humanity: not out of malice (it doesn’t have emotions), but simply because it calculates that it could use our resources to achieve “better” outcomes by its own standards. Therefore, if the creation of such powerful optimizing systems is feasible, then it’s important to design them very carefully to make sure that they do things that humans want (are “Friendly”).

There are various reasons why one might deem this class of scenario unlikely or impossible: these criticisms must be examined carefully and evaluated on their merits. However, if AIs could be very powerful and very inhuman, then you can see why the types of thinking and modes of discourse we’ve developed for reasoning about human politics aren’t really applicable to successfully dealing with this particular problem of safe AI development. Note that this remains the case even if arguments for Friendly AI can potentially be misconstrued to support totalitarianism.

“I don’t buy [AI singleton scenarios], and it scares the heck out of me that some intelligent people in the transhumanist community do [...] because it shows how open to totalitarian double-think the community is. Although I don’t expect a friendly super-AI I do expect lots of different kinds of future political elites motivated by flavors of ‘we’re doing this in everybody else’s interest even though they don’t realize it.’ My intent [...] is to point to the danger of this kind of rationalizing of absolutism.”

I agree that arguments for a Friendly AI singleton are potentially dangerous ideas, in the sense that they could be misconstrued and abused to support totalitarianism. But the potential political abuse of these ideas is a separate problem from the object-level issues of Friendly AI; they need to be addressed separately.

Even if you don’t draw the misleading analogy to totalitarianism, AI singleton scenarios are scary to contemplate. And yet, this justified fear does not refute the arguments for Friendly AI. There are many good reasons (which space and time considerations prohibit me from addressing in this blog comment) for believing that vastly superintelligent AI is feasible, and that the motivations of the first superintelligences will end up determining the future. Conditional on those being the actual facts of the matter, then it’s important to design the first superhuman AIs to share human goals, because, like it or not (and please believe me when I say that I’m not sure I like it), they are going to be in control.





Posted by jhughes  on  01/24  at  11:59 PM

@ Zack

You begin your comment by asking me to imagine “a non-conscious computer system that calculates the likely outcome of various actions.”  I have no problem with computers that help humans project outcomes and make decisions. If that were all the “friendly AI”/CEV crowd were imagining there would be little difficulty.

You end your comment by asserting “the motivations of the first superintelligences will end up determining the future…. they are going to be in control.” Sounds like you don’t really believe in the possibility of your super-AI dictatorship being selfless, with no motivations of their own.

As I’ve pointed out above, you and the other members of this particular religious denomination resolve this conflict in your thought by asserting that there can be no conflict between self-determination/freedom and submitting to the will of the perfect friendly robot God, because the robot God will be a perfect reflection of our deepest aspirations. That’s scary double-plus bad think-think.

@ Aleksei

> makes one wonder whether a productive conversation
> with you on these matters is possible.

I suspect you are correct Aleksei.





Posted by Max Kaehn  on  01/25  at  02:31 AM

I think the biggest flaw in democracy these days is that the cost of being a well-informed voter is too high.  We don’t need godlike AI to come up with tools to help people understand which politicians are relying on lies and distortion to get elected, or to explain the expected outcomes of policy choices on an individual person’s situation; an artificial policy wonk functioning in human ranges of intelligence could do that.  Let’s try building some of those and see if democracy suddenly starts showing better outcomes before we try handing it all over to Deep Thought.





Posted by Zack M. Davis  on  01/25  at  03:01 AM

(I’m truly sorry if my comments have been too long, but in my defense, this is a complicated topic.)

@ James

I agree that there are some superficial similarities between some singularitarian claims and some religious claims, and I agree that the type of psychological forces that lead some people to adopt religions may very well lead some people to overestimate the probability of an altruistic superintelligent AI being created. However, there are also many subtle, carefully-reasoned arguments for taking the threat and opportunity of superintelligent AI seriously, and dismissing singularitarians as religious fanatics doesn’t address those arguments and doesn’t help us achieve a more accurate picture of what we all can do to positively affect the future. I do think a productive conversation is possible here; surely it should be in all of our best interests to form more accurate beliefs by means of reasoned discussion.

“Sounds like you don’t really believe in the possibility of your super-AI dictatorship being selfless, with no motivations of their own. [...] [Y]ou and [others] resolve this conflict in your thought by asserting that there can be no conflict between self-determination/freedom and submitting to the [AI singleton]”

Let me try to clarify. This notion of control is actually kind of philosophically tricky: what does it mean for an agent to exercise control in a deterministic physical universe where everything that happens was predetermined to happen? Well, we might say that an agent controls an outcome to the extent that if (counterfactually) the agent had made a different decision, then a different outcome would have occurred. It gets really tricky when we need to analyze situations involving agents at very different epistemic positions and a smarter agent is in a position to causally affect the decisions of other agents. This may seem like a digression, but the upshot is that when I say that the motivations of the first superintelligences determine the future, I don’t mean that robots with guns are going to force us all to work in mines. I’m just saying that intelligence - the ability to reason and carry out new complex plans in order to achieve goals - is powerful, and can shape the world in unprecedented ways that are difficult to predict in advance. This has already happened once on this planet: human civilization has taken over a nontrivial fraction of the biosphere and shaped it to serve human needs. So when you have smarter-than-human AIs doing the development of new technologies - including the next generation of AIs - then things change again, and the initial conditions of what AIs want have a huge impact on what the future will be like.
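That counterfactual test for “control” can be pinned down in a couple of lines; a toy sketch with hypothetical names, offered only to make the definition concrete:

# Counterfactual notion of control: an agent "controls" an outcome to the
# extent that a different decision would have produced a different outcome.
def controls(outcome_given, decisions):
    # 'outcome_given' maps each possible decision to the resulting outcome.
    return len({outcome_given(d) for d in decisions}) > 1

# Even an AI that "only" gives advice controls the future in this sense,
# if its advice reliably changes what humans go on to do:
outcome_given = lambda advice: "policy A" if advice == "recommend A" else "policy B"
print(controls(outcome_given, ["recommend A", "recommend B"]))  # True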

If - by hypothesis for the purposes of this comment; feasibility is a separate question - you have a genuine superintelligence lying around, that (let’s say) literally knows a billion times more than any human and thinks a billion times faster, then what does it mean for it to not control the future (in the sense of control that I described above)? Even if it’s “only” offering advice or doing engineering work and not taking political control as traditionally conceived, if the AI can predict what humans will do in response to its every little word or design choice, then it will in a very real sense be steering the human future by virtue of making those choices. In order to get the AI to respect that which we refer to when we speak of things like self-determination, you have to specifically program in respect for human values. Democracy is a good form of government, but I’m afraid it’s just not a serviceable decision theory for superintelligences. A technical solution is needed.

Clearly creating a benevolent (“Friendly,” whatever) AI is an incredibly difficult problem, but I’m rather confident that it’s at least physically possible (though it may not be possible for human programmers). Computer science tells us a lot about what computers cannot do (solve the halting problem, solve 3-SAT in linear time), but I don’t know of anything that prohibits us from writing code that does altruistic things - if only someone knew how.





Posted by Giulio Prisco  on  01/25  at  04:38 AM

@Mike: “You are right, Giulio, that this would be a lot harder to do than say. The problem arises, as you no doubt realize, when one or more of these thousand communities becomes oppressive or even abusive to its citizens and yet is able to convince them that it is doing right…”

As usual, things are never black and white and there is no magic formula that always works. In the extreme case you describe, I would at least consider intervening.

But in less extreme cases, I think fragmentation of societies in different autonomous communities with different sets of rules should be encouraged, because this way everyone wins.

Take the eternal fight of the “moral majority” against abortion and gay marriage (and tomorrow, morphological freedom in more general terms). These people feel “morally outraged” by the lifestyle of others, even when it is no business of theirs. I think everyone would be happier if they had the option to move to separate Amish-like communities where they can live as they wish.

If gay marriage is forbidden west of the border and permitted east of the border, everyone can move to what they consider a better place.





Posted by Kaj Sotala  on  01/25  at  05:31 AM

@ James

Okay, so basically your disagreement with the “technocrat” view can be divided into two parts - the ideological and the empirical. That’s good; we might be getting somewhere. Let’s try discussing these two parts separately.

On the ideological: you state that you subscribe to values about the importance of individuals creating themselves and governing themselves through discussion. Also, as a semi-ideological, semi-empirical claim, you mention that you disagree with the notion that there is some sort of static goal all humans are going to ultimately end up agreeing on.

Well, I agree with you that it’s very possible that there really isn’t any kind of ultimate state humanity will converge on, given enough time. But it does seem to me that there are some things nearly all humans would agree on. For instance, nearly everyone would agree that we don’t want to accidentally destroy this planet. If there really are such things that everyone agrees on, then the ideal CEV dynamic will take care of those, but not touch on anything where there is no guaranteed convergence, nor anything that we don’t want it to interfere with. This wouldn’t be in conflict with your stated values of individuals creating themselves and governing themselves via discussion. In practice, then, the CEV ideal isn’t any different from AI “aiding in the process of human self-invention and self-governance”.

Now, don’t get me wrong - I’m not claiming that there is no such thing as “technocratic absolutism”. I’m certain that there are several transhumanists who would, indeed, prefer a system where humans are “an AI’s pets”. But I do think that picking CEV as an example of such a position is a poor choice, as that isn’t what the proposal is about at all. (Indeed, the CEV document itself states that one of the design goals was “to avoid hijacking the destiny of humankind”.)

Then on to the empirical. You don’t believe it possible to create an AI that is purely altruistic. I do admit that I’m somewhat puzzled as to why you’d believe it to be impossible, but then this is an empirical question, ultimately to be solved via computer science and cognitive science. It is something we can simply agree to disagree on for now.

Furthermore, while you’re formulating this as a question of differing basic values, I think this is much more of a difference in empirical beliefs. As you well know, the Yudkowskian camp, me included, believes it likely that a super-intelligent AI will end up in a position where it’s strong enough to take control of humanity. Then the main question is what kind of an AI we want to end up achieving that position. You disagree with this basic premise, but I presume that in the case that you agreed, you too would prefer something like CEV - a system that prevented actual totalitarian AIs from taking control, and in general only did things all of humanity could agree on.

In contrast, if this basic premise were denied, and we presumed no AI could become much more powerful than (suitably enhanced) humans? In that case, I too would probably lean towards your position of favoring normal democratic rule, as there would be no reason to do it otherwise. (Even though I don’t think CEV is all that different from a form of super-democracy in the first place, but leaving that aside.)

So it comes down not to some insurmountable gap in what we value, but instead differing estimates on what is likely.





Posted by Aleksei Riikonen  on  01/25  at  03:17 PM

Kaj Sotala wrote:
> As you well know, the Yudkowskian camp, me included, believes
> it likely that a super-intelligent AI will end up in a position where
> it’s strong enough to take control of humanity.

I’m not sure if this description is actually accurate. It doesn’t describe me, for one, and I’m pretty “Yudkowskian”.

I wouldn’t claim that the described scenario is *likely* (thinking it to be likely would imply e.g. that I’d estimate non-AI existential risk scenarios at less than 50% probability), I just think that the scenario has a probability that is *significant* enough that it obliges responsible people to come up with ways to handle/prevent this particular strain of existential risks. Ways like Friendly AI.

I don’t hold this significant probability to be over 50% (though it might be; I just haven’t formed a precise probability estimate, since it doesn’t matter exactly how large the probability is, as long as it is above a certain non-marginal level), and I think you shouldn’t be overly certain in such a way either.


Otherwise, good post.





Posted by Abraham  on  01/26  at  02:58 AM

Mr. Hughes wrote: “The capacity to clone minds, uploaded or machine-origin, raises a profound problem for the one-person/one-vote model. It will certainly require some combination of restrictions of mind cloning, or restrictions on the rights of mind clones, such as one-vote per original mind template or something.”

I vote for the former. In the case of the latter, they’d probably mutiny if they didn’t get to vote.





Posted by Lars Christensen  on  01/26  at  12:31 PM

I know, it warrants ridicule for being such a cliché, but technology is merely a tool. The AGI which seems to be the irreligious rapture for many transhumanists will still need to be modelled according to some plan or blueprint, which will initially be made, bootstrapped, by plain humans.

Basic existentialism tells us that we always influence the answers we get, by choosing who we ask. So, the structure of the society that builds an AGI will deeply influence its primary mode of thought. This makes all the difference between an omnipotent Jeremy Bentham and an omnipotent Cthulhu. Between a Gaia and a Xenu.

James is right in addressing the sad but widespread belief amongst many transhumanists that we only need to sit back and watch technology grow.
I used to think that too, but that was before I realized that current endeavors in the field of automation make us all run faster rather than freeing us from stupid tasks. Self-service counters in malls are not there to give us better or faster service, they are there to minimize expenses, so you can still check that no one gets through without paying, but at the same time skip all those pesky costs that human labour requires. So the customers, after their insane work week, are now required to self-scan all the items themselves. Just like in net banks, gas stations and so many other places - automation in a capitalist monetary system is not putting technology to good widespread use.

And if you build an AGI with this codex in the back of its designers’ heads, you expect benevolence? Nah, the Smithian ‘invisible hand’ is no better in digital than analog form, IMHO.

Personally, I’d prefer some form of libertarian socialism, Venus Project-style, to accompany accelerating technological advance.
This too would include very advanced non-conscious expert systems to control energy conversion, water purification and all other sorts of necessities. It would provide us with an abundance of food, energy and lots of commodities in a nearly post-scarcity world. We’d be free of menial chores, have material abundance and have incentive to cooperate rather than to compete, which would obliterate a lot of fear-induced control paradigms.

Were we to get there *before* wanting so desperately to give birth to (post)human-made self-awareness, I’d be a lot less opposed to this priority. But then we might not see it as such a wet dream.

Oh, and in case this comment should indicate otherwise, I’m actually quite the optimist. I do think that things will get a lot better - if we shape up a lot. I just find the “We’re stupid, so we need to build something that’s smarter than us by feeding it input from stupid us” approach a bit of an unnecessary detour. I’d rather that we got smarter and changed the system *grin*





Posted by Michael Anissimov  on  01/27  at  12:09 AM

James, like Aleksei said, CEV is essentially an elaborate polling mechanism.

I’m disappointed that you think that purely altruistic beings are impossible. I especially find it odd to see a Buddhist say that. Soldier ants are purely self-sacrificial, because the locus of selection pressure is the colony/queen, not the individual. In every case in nature, care is distributed in accordance with the amount of shared DNA. Siblings are more mutually altruistic than cousins, which are more mutually altruistic than tribesmen, etc.

It seems that you are in denial about the notion of more powerful-than-human beings in general.  Reading your book gives me the same impression.  If coexisting with smarter-than-human beings and sharing civic responsibility with them results in smarter-than-humans rising to the top management positions while continuing to respect our preferences, I don’t see why that merits a hyperbolic freak-out. 

Given the potential abilities of future technologies and wisdom to increase communication and mediate conflicts, why do you think that Singularitarians believe that maintaining future order will require a totalitarian, non-democratic system?  Our primary concern is the rise of human-indifferent AIs, but I suspect that you might believe that human-like motivations are general to all minds, so you are perhaps skeptical of a human-indifferent AI, even though there exists ample evidence of powerful human-indifferent humans throughout history.  The risk of a powerful human-indifferent entity necessitates a powerful human-friendly entity.  CEV is a very democratic solution.  The alternatives would include programming a machine that takes our word literally, or trying to build a “normative altruist”, both of which seem to be potential disasters.  Is there a 4th option I’m not seeing?  If the takeoff is very slow and gradual, then there should be plenty of time for anyone to respond to unfolding developments.  We are especially concerned about the scenario where there is a fast takeoff, because even if its probability were estimated to be relatively low, the costs of potential failure are very large.
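The cost-benefit logic of that last sentence is ordinary expected-value arithmetic. A back-of-the-envelope sketch, with every number invented purely for illustration:

# Illustrative only - all of these numbers are made up for the example.
p_fast_takeoff  = 0.05   # hypothetical "relatively low" probability
loss_if_unready = 1e10   # hypothetical cost of failure (arbitrary units)
cost_of_prep    = 1e7    # hypothetical cost of taking the risk seriously

expected_loss = p_fast_takeoff * loss_if_unready   # 5e8
print(expected_loss > cost_of_prep)   # True: even at low probability, the
                                      # expected loss dwarfs the cost of preparing

Whether those orders of magnitude are anywhere near right is, of course, exactly what the two camps in this thread disagree about.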





Posted by Tom Huffman  on  02/02  at  02:00 PM

Excellent post, Dr. Hughes.  I agree with the objections you made to “enlightened despotism” in all its forms.  Elites always end up serving their own ends over the masses they claim to love.  In our own day, many people are quick to laud Bill Gates for his philanthropy.  They (conveniently) overlook the fact that Gates was the inventor of such new forms of employment as ‘perma-temps’ - employees who could work for a company for years, and still receive limited or no benefits.  Google the phrases: ‘perma temp’ and ‘micro serf’.

Mr. Gates also invests much of the endowment that funds his fight against AIDS in Africa in the Big Pharma corporations that work to keep AIDS drugs too expensive for people in Africa to afford.

The major threat to liberal democracy now is the continuing monopolization of communication media by a few, very powerful corporations.  These are often corporations with an anti-democratic agenda, like Rupert Murdoch’s News Corp. 

The move toward monopoly has been accompanied by the trend to delivering ‘news’ in ‘sound bytes,’ with a minimum of information, instead of in articles.

Another dangerous trend has been the ‘manufacture of consent’ by public relations and propaganda. Propaganda techniques have been with us for centuries, but starting in the early 20th century, Edward Bernays gave PR and propaganda a scientific basis using Freud’s ideas of the subconscious (Bernays was Freud’s nephew).

Very few people in ‘advanced’ societies (those with well-developed communications media) are aware of how many of their ideas, wants and desires have been carefully engineered by advertisers and public relations experts.





Posted by Tom Huffman  on  02/03  at  12:37 PM

I should have prefaced my previous comments about “the continuing monopolization of media” with the remark that those of the Founders who supported liberal democracy assumed the existence of a free press and continuing improvement in the education of the masses. 

Unfortunately, the media - print and broadcast - have been monopolized by a few and turned into instruments of social control.

Education, at least public education, has been methodically starved of funding in recent decades.





Posted by Zack M. Davis  on  02/03  at  02:15 PM

“Unfortunately, the media - print and broadcast - have been monopolized by a few and turned into instruments of social control.”

There’s something incongruous about reading this sentence on the internet.





Posted by Natasha Vita-More  on  02/08  at  12:26 PM

I enjoyed reading this article, but Max More has pointed out *specifically* why he is not a “libertarian transhumanist”.

Here is the interview with RU Sirius, Editor of “h+” magazine:
http://www.acceleratingfuture.com/people/Max-More/?interview=32

As a theorist, I can understand your perspective, but from a philosophical point of view, your perspective would work better if you got this particular fact straight from the get-go and covered where Max’s philosophy separates from the so-called “libertarian transhumanists”, because that is where the meat of the argument is located - the locus of experience - the arrow’s bull’s eye!





Posted by Dirk Bruere  on  02/08  at  12:39 PM

There seems to be an either/or assumption here, as if the future will be either this or that. In a word, a single system will triumph over all. This is not desirable and not what we should be aiming towards. As a Nationalist who believes different groups of people should be able to live under laws of their own choosing (even if they only choose once, for dictatorship) I want a pluralistic future, not a monolithic one. It always seems that those who argue most for “diversity” are the ones who least want it. That it is, in fact, merely a rhetorical tool for a hidden agenda. I’ll go with Mao on this one: “Let a hundred flowers blossom and a hundred schools of thought contend”. I will add: “Let there be no clear winner”.





Posted by Mark Thompson  on  02/08  at  12:59 PM

“For instance I support the military occupation of Afghanistan, which is helping to build that country to a point where they could exercise some meaningful self-determination, all of which would be lost if the Taliban returns.”

Wow!  This is a classic example of faulty theory and analysis leading to faulty positions leading to people being killed for US imperialism.

If the IEET is going to make statements about real politics, it should have a better historical analysis.





Posted by BillK  on  02/08  at  01:00 PM

J Hughes wrote:
Yes, the idea of benevolent totalitarianism is in principle offensive to me, because I subscribe to those other values about the importance of individuals creating themselves and governing themselves through discussion.
————

Face it, James. Most people don’t want the continual hassle of doing that.

Western states are moving remorselessly towards being nanny-states.
(In the UK the TV weather forecasters even tell us to put a sweater on when it is colder than usual. Like we’d never think of that for ourselves!)

Being the pet of an almost all-powerful AI would get the vote of the great majority.
Food, entertainment, sex, etc. all provided with no need for fighting or status games.
Who would want more?
Providing the AI could cater for the more unusual tastes (which, by definition, it should be able to), then discussion groupies, local planners, hobby groups, etc. can flourish while the rest of us couch potatoes enjoy our transfer to heaven.





Posted by Prakash  on  02/10  at  02:31 AM

Hi James,

I am for a friendly singularity and I am also a Georgist libertarian terribly scared of authoritarian rule. The way I can reconcile these two viewpoints is below.

It all depends on how bad the situation of the present world is. A CEV in a richer world, where the median level is where Branson or Thiel is today, will probably arise as a gentle advisor. It will guide policy like a wise mentor and will be listened to.

Our world is NOT THAT WORLD. Our world is one where there is an incredible amount of suffering, poverty, exploitation and death. There are all sorts of horrendous things going on TODAY. Any friendly AI cannot sit calmly by the side and watch all of this happening. It will need to take action, and fast.

Becoming a Benevolent Dictator AI of nations that do not yet exist is a course of action I can see. A friendly AI can seed islands beyond the legal boundaries of nations and open them to immigration. People can immigrate into these places, no questions asked. They are fed, clothed and sheltered and can educate themselves at whatever pace seems OK to them.

Standard disclaimer - I am not superintelligent and I’m certain a Friendly AI could think up other courses of action. This is just a small, trivial solution dreamed up in two minutes.





Posted by Clea  on  07/04  at  01:43 AM

The problem with the Enlightenment was that it led to some of the darkest moments in history - the justification of European slave systems.

The problem with transhumanism for me is that despite their “enlightened goals” it is another form of neo colonialism.  Imagine for a moment that in the “real” world (not the imaginary world of the transhumanist), those people who have been on the receiving end of the negative aspects of technology (including wars), exploitation, pollution, etc. are now going to be “saved” by some mostly white westerners who draw their ideas from the same font as the ones who have destroyed them.  Who has the right to do that?

I find it interesting that often the people who have the least experience with regular poor people out there in the world are the ones who want to control the rest of humanity or force them into their own belief systems.

As a victim of “organized stalking” and electronic harassment, I can definitively say that computer-brain interfaces are for me not a “dream” but rather a “living nightmare.”  I don’t have voice to skull but other forms of EH.  Is this the “bad” use of technology that would be countered with a similar non consensual “good” use by the transhumanist squad?  Count me out.





Posted by Dirk Bruere  on  07/04  at  06:44 AM

I have no problem with counting you out, just as long as you have no problem with counting me in. And don’t worry about old neocolonialist imperialist Whitey - I expect the Chinese to make most of the running.

I also assume you have rejected the enhanced immune system available to most people (i.e. vaccination) and will similarly reject life extension technologies which are not available to all, e.g. high-quality medicine, in order to show your solidarity with the poor of the world.





Posted by Clea  on  07/04  at  03:43 PM

I most certainly respect differences of opinion.  I don’t expect to change your mind, but I would like to point out why I feel this way.

For me the three main issues are:
1) technology employed without understanding the full ramifications that include negative effects on some or all populations;
2) the issue of financing and the goals of financiers (whether they be China or the wealthy elite or even a nice elderly couple in New Jersey);
3) extending one’s vision to the world without really getting to know the world (that is, a requirement of some kind of empathy that could lead to a modification of opinions).

I’ll point out several examples: the exploitation of “masses” in other countries in the name of medicine (forced vaccinations, forced sterilization, promotion of infant formula over mother’s milk, etc.)... Infant formula is an interesting case: not only was it later discovered not to be “superior” to mother’s milk - particularly in issues of immunity - but it costs money which “poor” people could better spend on something else.

One could argue that it “liberated” some middle-class women somewhere else in the world, assuming that storing one’s milk wasn’t an option.  But just because it worked out for them doesn’t mean that it needed to be imposed on others.  Where I did research, most women worked as domestics or in open-air markets and they took their babies with them to work.  Educational and formal sector employment opportunities were practically nil and were not dependent on babysitting factors.

I’m okay with life extension too, but how can we be talking about that when we’ve done little to alleviate quality-of-life issues? And by alleviate I mean methods that reduce exploitation, not in the way of lending a helping hand. I’m not one of those “we are the world” types, but I did extensive fieldwork in one of the countries with the highest infant mortality rates and lowest life expectancy.

Technology brings its share of uncertainties, and who’s to say that someone should take the risks in the name of all of us?  For example, we don’t know the potential negative effects of eating GMOs, yet we do know that the driving force behind them is money.  We don’t know what kind of environmental effects GMOs will have, but so far it’s not looking good.  I can complain about them, yet I have little choice but to eat them, since most grains in the US are GM and even non-GM fields are being contaminated.

What do we really know about nanotechnologies in terms of their impact on humans and the environment?  Why don’t we have a democratic choice as to whether those technologies are employed?

The course of “technology” has always been a forked road.  While we can point to “advances” that were beneficial, one could just as easily point to the ways technology has been used negatively.  I haven’t yet seen a positive outcome *overall* on the global scale.  Wars have not ended; some people live longer while others live in squalor until they reach an early death; the internet provides a certain democratic access but is filled with disinformation and trolling, marital infidelities, terrorist networks, etc.

On top of that, many pro-tech people (I’m not saying you) like to go off about a population problem, when the world’s poor proportionally consume an almost statistically insignificant share of the total.  It’s eugenics all over again.

That being said, regardless of how I feel about poor people, someone’s decided that I should be on the life-shortening non-consensual experimentation list, and I doubt the technology used on me has ever saved anyone.





Posted by Dirk Bruere  on  07/04  at  04:41 PM

“For me the three main issues are:
1) technology employed without understanding the full ramifications that include negative effects on some or all populations;
2) the issue of financing and the goals of financiers (whether they be China or the wealthy elite or even a nice elderly couple in New Jersey);
3) extending one’s vision to the world without really getting to know the world (that is, a requirement of some kind of empathy that could lead to a modification of opinions).”

1) Since we still do not fully understand all the ramifications of something as simple as “fire”, your wish is tantamount to wanting to ban all technology more complex than a stone.

2) The goals of financiers are pretty transparent - they want to make a profit.

3) The whole aim of science is to try to understand the world, but your position is to deny us the tools to do so until we can understand the world… Of course, by speaking of empathy you throw in the implied caveat that we all have to be Gandhis before we even make a start.

You also seem to discount the suffering and misery that come with a “natural” lifestyle, which is quite literally nasty, brutish and short. For example, there are plenty of studies showing that in the bulk of “traditional” tribal societies, from Africa to the Americas, war casualties average around 20% of the population. That makes WW2 seem like a small gang fight by comparison. If we lived their lives we could expect almost a billion deaths from war per generation. Why do you think just about everyone in the Third World wants to live like us? Start with toilets, hygiene, anaesthetics and dentistry and continue…

Anyway, population is not a problem with sufficient technology. In particular, we need cheap energy - and it’s coming. With cheap energy we can do almost anything.





Posted by Clea  on  07/04  at  07:59 PM

The profit-motivated model works for them, but it doesn’t usually take people or the environment into account.  There’s a whole body of literature in anthropology on the perils of science for profit.  If you search for “anthropology of science” you might find some other interesting literature on how the ethical issues of science are treated within the discipline.  (It’s not my area of inquiry, so I can’t address it academically.)  For me the problem lies in the ideas of progress and superiority.

I’m suggesting that without empathy you can’t really understand people.  No need for a Gandhi character here, though you do have to have the guts to make it through life in the shantytowns.  The problem is that in the sciences “empathy” is often mistaken for “bias.”  They aren’t the same thing.  Everyone is biased in some way, but not everyone is empathetic.  I would say I’m not taking away the tools to understand people - I’m giving you a very important one.  It’s so important that it could end up changing your motives and intent.  That’s when you know you have really gotten to understand other people.

In the part of Africa where I lived, there were kingdoms when the Europeans arrived 500 years ago - their “tribal” past would have been quite some time before that.  There’s no archaeology in the region, so we don’t know what happened in “tribal” times there.

During their civil war, approximately 1 million people died and at least a quarter of the population was internally displaced.  (Before that war there was “forced labor” until 1961, and well before that the slave trade decimated a huge percentage of the population.)  I’m not saying there was no violence in the “olden times,” but nobody’s been doing them any favors, really.

A lot of them probably wouldn’t mind having a flushing toilet, but they do bathe daily - sometimes more than once a day - and they brush their teeth.  Actually, I’ve never seen such good dentition - even old people had all their teeth.  It might have something to do with the limited sugars and processed foods in their diet, but I never studied that.  They are very hygienic - in some places they even “ritually” wash corpses with bleach prior to burial.

I bet a lot of them would like medical care - their main fatal illnesses are malaria and diarrheal diseases.  Some of the deaths come from taking medicines from Western medicine dump sites (expired or banned medicines are dumped in developing countries), eating toxic food that has been shipped there (only occasionally are recalls carried out), or walking through toxic waste in their shantytowns (even Halliburton has been accused of dumping).

I’m not the neo-Luddite who is going to show up at the factory with a flaming torch and a pitchfork, but I am saying there is a difference between making a fire and making a landmine… and things that mess with the environment or people’s heads are not equivalent to things that “help” you in some way.  If you agree that science is profit-motivated, then it is unlikely that the “beneficial” elements of science are headed their way… and even then, they might resist them.

I, for one, don’t like having technology decisions made for me, and while I can’t speak for them, I would guess they might feel the same way.  The problem is that the people making the decisions don’t even know many people who aren’t like them, and frankly, they probably don’t care either.

Not trying to be your enemy here - it’s just that sometimes people assume things about others without really knowing them.





Posted by Dirk Bruere  on  07/04  at  11:56 PM

The profit motive - it relies on making things that people want to buy.
It’s as simple as that.
Of course, if you are talking about corruption, that’s a different game.
As for technology decisions being made for you, what happened to the videophone? It was pushed for decades by telecoms companies, but people just did not want it. Every time you buy an item of technology you are casting your vote for it, and every time you don’t you are also telling the manufacturers something. It’s the most democratic system possible at present. And we (as in humanity) have tried a number of others, with environmental damage and body counts far in excess of the worst effects of “capitalism”.

Waiting for the perfect political and social system to evolve, with a perfectly educated, moral and enlightened electorate, before we do anything of substance is a recipe for global destruction. We have maybe 30 years to push beyond our current technologies, wean ourselves off fossil fuels and get a real high-tech planetary infrastructure in place. If we fail, it’s global die-back and Dark Ages 2, and history tells us what that will be like - forever - because the easy resources and energy needed to recreate the modern world will be gone. Standing still is not an option, and neither is slowing down. One generation - that’s all.





