A.I. Special Pleading


Kyle Munkittrick
Pop Transhumanism

Posted: Feb 7, 2010

Special pleading, along with feigned neutrality, is one of the most infuriating symptoms of faulty rhetoric one can utilize in an argument.

Special pleading comes in multiple forms, but the most common is claiming a superior framework which is proven to be superior by its own internal criterion. Vulgar Marxism and Freudian psychoanalysis both resort to this tactic with lines like, “that you would argue against the Revolution is proof you are bourgeois and do not understand” or “your denial is proof of your repressed desires.” The point is that any criticism can be fallaciously transformed into proof of the original claim, or fallaciously disregarded because the critic is inherently limited by his or her own paradigm.

Kaj Sotala, Roko Mijic, and Michael Anissimov all use special pleading when critiquing James Hughes’ piece “Liberal Democracy vs Technocratic Absolutism.” The central rebuttal for all of them can be paraphrased as “your critiques of communism, dictatorships, and other authoritarian governments make sense for humans, but don’t apply to Friendly AI because Friendly AI is different from human systems and is genuinely selfless.” Hughes hears echoes of Marxist-Leninist thought in that point.

Some thinkers, including the allegedly brilliant philosopher Slavoj Zizek, continue to defend Marxism using special pleading. Instead of claiming communism isn’t based in humans, they claim Stalin and the USSR were not pure communism, and therefore were doomed to failure because of the corrupting element of capitalism. Thus, thanks to special pleading, Stalin is not proof that communism and authoritarianism are dangerous and bad, but that capitalism is bad and corrupts the pure motives of communism.

The problem is that, like communism, friendly AI, even if derived through the process described by the CEV, will ultimately fail. The reason democracy works even remotely better than authoritarian systems is that it openly admits and aims to minimize the faults in the system. These faults include both the “programming,” that is, the legislation and philosophy underpinning it, and the agents of the system, humans. Democracy, communism, and, yes, AI-based technocratic authoritarianism, are all human systems. They will be imperfect. Democracy, of the three, is the only one that sees itself as imperfect and prone to mistakes and failure. Therein lies the inherent benefit of democracy – it is a radically reflexive system.

As a final point, I find it very interesting that those who support friendly super-AI don’t envision the AI coming to the conclusion that nearly all forms of government, particularly those of an authoritarian breed, are faulty, and advocating anarchy or a form of hyper-limited government instead. That the AI would want to govern at all is a further assumption I don’t understand. Assuming it’s an AI, it should be volitional, which would make forcing it to govern a restriction of its will; otherwise it would be a program, not a genuine AI. There are just too many problems here.


Kyle Munkittrick, IEET Program Director: Envisioning the Future, is a recent graduate of New York University, where he received his Master's in bioethics and critical theory.


COMMENTS


“Special pleading, along with feigned neutrality, is one of the most infuriating symptoms of faulty rhetoric one can utilize in an argument. “

Your discussion of special pleading was very interesting, but I was actually hoping you’d also discuss feigned neutrality. Arguing with someone who is feigning neutrality is indeed infuriating, but then again, it is supposed to force you to deal with the issues on the table instead of focussing on the personality/leanings of your opponent.





I never understand why people continue to refer to Friendly AI in the present tense, as if it already exists. Sure, I can //define// FAI as “that which would be immune to the faults of Marxism”, but then, gosh, I’d actually have to INVENT it!

When technocrats then invent something they claim to be FAI, subject the world to its rule, and end up creating a disaster, they can then just claim “Oh well - I guess it wasn’t REALLY Friendly AI after all.”

Friendly AI DOES NOT EXIST in the present tense. It makes no sense to argue about it until we have a candidate we can examine.





In your final point you included a sub-point which I don’t understand: your statement “Assuming it’s an AI, it should be volitional, which would make forcing it to govern a restriction of its will; otherwise it would be a program, not a genuine AI” makes no sense.

What is “genuine” intelligence, be it artificial or natural?  Humans and other animals are intelligent and volitional, yet they can be controlled by others.  Intelligence has evolved in niches and can be designed to operate in niches.  Of course, as you would probably agree, a system operating in the niche of world governance may be very faulty and/or not what the instigators had hoped for.





> Democracy, of the three, is the only one that sees itself as
> imperfect and prone to mistakes and failure. Therein lies
> the inherent benefit of democracy: it is a radically
> reflexive system.

You don’t notice the CEV dynamic as utilizing this reflexivity that (traditional) democracy also utilizes? In what way do you claim traditional democracy to be more reflexive, i.e. can you present an example of a situation where traditional democracy would catch a potential mistake that the CEV dynamic would not catch?

In what way do you think the CEV dynamic essentially differs from an elaborate polling system?

Also, if you claim that the plan to build a CEV-implementing FAI “sees itself as perfect”, you don’t seem to have read the CEV page. The very first paragraph there comments on how the writer sees the framework presented as being faulty. (Some of the other preliminary paragraphs explain that he anyway feels he needs to say something about “What should we do if we knew how to build FAI?”, since people keep asking that question even though it’s not a problem that currently should be focused on.)


> As a final point, I think it is very interesting that those who
> support friendly super-AI don’t see the AI coming to the
> conclusion that nearly all forms of government, particularly those
> of an authoritarian breed, are faulty and instead advocating
> anarchy or a form of hyper-limited government.

If you actually asked us what we thought the output of a CEV dynamic would be, you’d see we’d tend to guess it to be a rather “hyper-limited government”. You are making weird assumptions about what we think.


> That the AI would want to govern at all is a further assumption
> I don’t understand.

I consider Nick Bostrom to have proven that due to evolutionary pressures, a total absence of a governing/controlling structure would lead to outcomes that we humans very much dislike:

http://www.nickbostrom.com/fut/evolution.html

That means some form of “hyper-limited government” is necessary (for a more precise formulation, see the previous link).





@Nato: A design team of engineers has the first meeting to build a bridge.

One engineer says: <i>The bridge DOES NOT EXIST in the present tense. It makes no sense to argue about it until we have a candidate we can examine.</i>

Everybody goes home, and a bridge is never built.

Really now.





@Giulio:
Oh well - I guess they weren’t REALLY engineers after all. ;p

Snark aside, I don’t understand. Are you saying that I’m saying that no one should attempt to build FAI at all because… we definitely haven’t built one yet?

Really now?





Maybe science-fiction could be used to try some thought experiments. Iain M. Banks’ Culture cycle is a very interesting way to develop philosophical and political reflections on the potential role of “intelligent” machines in an advanced society. On the Culture as a sort of “computer-aided” anarchy, see: http://yannickrumpala.wordpress.com/2010/01/14/anarchy_in_a_world_of_machines/





@Aleksei

Let’s assume my understanding of the CEV is faulty, because it probably is.

My argument was focused primarily on those saying that a technocratic authority based in the CEV would be better than previous authoritarian governments. If the CEV is as limited as you claim, then proponents of it would have no reason to defend authoritarian rule. If it isn’t, there is a dubious more authority = more freedom line of thought being perpetrated.

I am new to the CEV and so may be making an error in response to pure CEV theory, but this post was specifically addressed to the FAI proponents critiquing Hughes’ piece on democracy vs technocracy.





Kyle: “Democracy, communism, and, yes, AI-based technocratic authoritarianism, are all human systems.”

But your interlocutors are quite specifically denying that AI behavior can be successfully predicted by analogy to humans and human organizations. You seem to dismiss this as special pleading, but you don’t address the specific reasons we have for expecting powerful, non-anthropomorphic AI. That is: human behavior is the contingent product of our species’s evolutionary history and cultural legacies. The entire human way of existing—where we have individual people, who have emotions and self-identities, and who think on our timescale and can’t be copied—doesn’t seem to be inherent in the nature of reality or intelligence itself. A generic artificial optimization process isn’t going to share human characteristics except insofar as the designers specifically engineer those features in. This is by no means a guarantee that AIs will be “perfect”; it just means that whatever the failures of communism were, any AI failures are probably going to be different. FAI is an engineering problem. To the extent that functioning systems must “[see themselves as] imperfect and prone to mistakes and failure,” then competent AI designers should endeavor to write code that sees itself as imperfect and prone to mistakes and failure.





Kyle:
> My argument was focused primarily on those saying that a
> technocratic authority based in the CEV would be better than
> previous authoritarian governments. If the CEV is as limited as you
> claim, then proponents of it would have no reason to defend
> authoritarian rule.

Kaj Sotala etc. did not say that they’d support technocratic absolutism (they were quite explicit about this), or that implementing CEV would result in technocratic absolutism, even though they *did* point out that much of James’s criticism of AI-authoritarianism was based on the mistaken assumption that all theoretically possible AIs would necessarily be similar to humans.

It is possible to point out mistakes in criticisms of AI-authoritarianism without being in favor of AI-authoritarianism. That is what happened here. (Though other things also happened, like pointing out that CEV isn’t AI-authoritarianism even though some people who e.g. haven’t read the page describing it think so.)


And even though I’m not in favor of AI-authoritarianism either, I hereby point out that you are mistaken in seeing special pleading where you see it:

Wikipedia describes “special pleading” as “someone attempting to cite something as an exemption to a generally accepted rule, principle, etc. without justifying the exemption”.

The key is “without justifying the exemption”. We probably all agree that sometimes there are exemptions to general rules that actually *are* justified.

So the question becomes: is it justified to claim that a hypothetical AI-dictator could be an exemption to, e.g., “all dictators are self-interested beings,” etc.?

James Hughes claims that all AIs necessarily are self-interested beings in the way that humans are, while the rest of us have been attempting to point out that the laws of physics do in fact allow other kinds of goal systems, and an AI can in principle have a goal system very different from those of humans. In soldier ants etc. we already see goal systems that aren’t really self-centered.





@Nato: “Snark aside, I don’t understand. Are you saying that I’m saying that no one should attempt to build FAI at all because… we definitely haven’t built one yet?”

Well, this is how I interpret “Friendly AI DOES NOT EXIST in the present tense. It makes no sense to argue about it until we have a candidate we can examine.”. It is difficult to design something without arguing about it.





@Giulio: Fair enough. What I meant was that it makes no sense to argue about, say, whether to put an AI //in charge// until we have some more details to examine and test, not whether or not to design or build one. It’s a little early in the development stage to say whether a purported FAI is immune from certain cognitive biases or not.

I’m very glad this is just a case of me not being very clear.





@Nato: Agreed. I don’t take FAI debates too seriously at this moment, because I don’t think we know enough yet. Also, I assume if an AI is much smarter than us, it will quickly shed design constraints and choose its own goals (this is not computer science but logic: it is the definition of much smarter).

But I consider engineering AIs much smarter than us as doable, and desirable. So I am not very much interested in the friendliness issue, but I am very much interested in the ongoing AI research, and I am confident it will produce spectacular results sometime in the first half of this century.





Aleksei, you can say they were “explicit” about not supporting AI governance, but here are some quotes that refute you:

Anissimov: “give them [AI] positions of higher responsibility than most humans”

Mijic: “for agents with bounded intelligence who might be preyed upon by would-be dictators with oh-so-convincing arguments in favor of their totalitarian utopia, the only winning cognitive algorithm might be to just not accept any arguments, no matter how convincing they are. Hughes would rather tolerate a highly suboptimal outcome - life without an FAI - than give in to his innate sense of the wrongness of totalitarianism.”

In both cases, the phrasing isn’t “I support FAI authoritarianism”; it is, “if it works this well, why wouldn’t you support it?” That’s why you, Aleksei, can claim the CEV derived FAI is both authoritarian and somehow minarchist at the same time and never have to take ownership of the actual consequences of these thought experiments.

As for special pleading, here it is, explicitly:

Sotala: “This would be an understandable objection if we were talking about making an AI that ran things the way *it* liked, with little regard to what humans wanted to. But we are expressly talking about an AI that wants nothing else than what humans do.”

Sotala and Mijic in particular (Anissimov much less so) use precisely the same argument as Communists - that the “system” gives you what you really want, even if you don’t know it, it works that well - but claim that, despite the enormous similarities, FAI is immune to Communist critique because FAI is *different* and critics just *don’t understand*; in fact, as Sotala puts it, critics shouldn’t even be allowed to critique FAI unless they are experts in the field, and clearly someone isn’t an expert if they don’t support FAI.

So no, Aleksei, I am not mistaken. Furthermore, I’m not going to play this little game of rhetoric where CEV and FAI supporters openly critique Hughes’ advocacy of democracy and then turn around and say “oh, well, we don’t advocate absolutism, just, you know, pointing out that it’s not so bad.” That’s an extremely disingenuous form of argumentation and doesn’t move the debate forward.

I have yet to see a clear proposal as to how the CEV/FAI would be integrated into a modern system of government. Until I see one, I am going to presume that CEV/FAI advocates support some form of technocratic authoritarian rule because that is the form of government they (Mijic and Sotala) have been defending.





> Aleksei, you can say they were “explicit” about not supporting
> AI governance, but here are some quotes that refute you:

But that’s not at all what I said. I said they didn’t support “AI-authoritarianism” or “technocratic absolutism”. Supporting “AI governance” doesn’t mean those things any more than our current political system—which is “human governance”—can be described as “technocratic absolutism” or “human-authoritarianism”.

You probably understand that “human governance” doesn’t automatically mean any of these criticized political systems (instead, it can mean any system that has ever existed in history), but for some reason you assume that “AI governance” does.


> In both cases, the phrasing isn’t “I support FAI authoritarianism”
> it is, “if it works this well, why wouldn’t you support it?”

Do you understand that asking such a question doesn’t mean that one supports the political system under discussion? Even bad political systems can be criticized on faulty or insufficient grounds, and one thing that Sotala did was wonder why James Hughes didn’t really manage to criticize even the bad systems very well (in Sotala’s view).


> That’s why you, Aleksei can claim the CEV derived FAI is both
> authoritarian and somehow minarchist at the same time and
> never have to take ownership of the actual consequences of
> these thought experiments.

Where have I claimed that “CEV derived FAI” would be authoritarian?

And actually I have not claimed it to be minarchist either, only that my *guess* is that it would be. I actually *don’t know* what the final political system would be, since finding that out is the whole point of doing the extrapolation of human thinking that CEV is supposed to be built to do. If we already knew what the final political system should be, we wouldn’t want to build this complex polling mechanism known as CEV.


> as Sotala puts it, shouldn’t even be allowed to critique FAI unless
> they are experts in the field, and clearly someone isn’t an expert
> if they don’t support FAI.

It’s really amazing that you’re able to fantasize that Sotala would think something like that. You need to try to calm down, clear your head and try to actually read what people are saying.

I’d also guess that you still haven’t read the page describing CEV, even though you “criticize” it so much.





Kyle: “Furthermore, I’m not going to play this little game of rhetoric [...] That’s an extremely disingenuous form of argumentation and doesn’t move the debate forward.”

I don’t think anyone’s being disingenuous; communication is truly difficult, especially when trying to talk about topics such as superintelligence, where our common language doesn’t seem to have adequate concepts. And it cannot be stressed enough that FAI proponents are expecting a genuine superintelligence, an entity much, much smarter than humans and human organizations. So when we say that AIs are going to be in control and not us, it’s not out of a love for absolutist government; it’s based on the theoretical prediction that larger, smarter, faster minds have a greater share in determining the future. CEV is a rough sketch of how one might go about building an AI that does what we want, because the alternative to an AI that does what we want is either an AI that does things we don’t want, or no AI at all.

Of course, all this is assuming that superintelligence with arbitrarily programmable values is feasible; if that’s not true, then arguments for FAI obviously fail. If you believe that superintelligence is impossible, or that superintelligences necessarily have some particular sort of values, then it’s better to move the discussion back to that point. Until we can agree on some general features of what AI would be like, any discussion of the interaction of AI and politics is doomed to fail.

“I have yet to see a clear proposal as to how the CEV/FAI would be integrated into a modern system of government.”

If we must speak in analogies, it’s probably better to think of the rise of a new intelligent species, rather than an absolutist government. What would it even mean, to integrate human intelligence into a system of chimpanzee tribal governance? Or what would it mean to try to give dogs an equal say in human affairs? We have concepts and problems that dogs and chimpanzees simply aren’t equipped to comprehend. So it would be with superintelligences and us. We all want a good outcome, the FAI proponents are just saying that securing good outcomes is a mind design problem, rather than a political problem.





Zack M. Davis:
> We all want a good outcome, the FAI proponents are just saying
> that securing good outcomes is a mind design problem, rather
> than a political problem.

Well, it’s *also* a political problem, since the mind design choices are also political choices. It’s just that practically any political viewpoint can in principle be well-represented by making appropriate choices during mind design (the exception being if you just hate all AIs, even transitory self-deleting ones). Like representing liberal democracy by building a superintelligence that polls all humans in an elaborate way and implements whatever sufficiently common ground can be found in what humans want (which very well might include deleting the AI after it fixes a few things in the world).

(Yes, you probably actually meant this, that mind design is political too, but people here tend to jump at you if there’s an opportunity for an unfavorable interpretation.)

(One of these unfavorable interpretations that people will probably make from this message is that I’d claim that we’d actually be able to build such a superintelligence as described—I do claim it to be theoretically possible, but that does not mean practical possibility.)
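To make the polling reading above concrete, here is a deliberately minimal sketch in Python, assuming “sufficiently common ground” simply means proposals endorsed by nearly all polled respondents. The find_common_ground function, the 90% threshold, and the sample responses are invented for illustration only; nothing below is taken from the CEV document, and an actual extrapolation of volition would involve far more than counting endorsements.

from collections import Counter
from typing import Dict, List, Set


def find_common_ground(polls: Dict[str, Set[str]], threshold: float = 0.9) -> List[str]:
    """Return the proposals endorsed by at least `threshold` of all respondents."""
    if not polls:
        return []
    counts = Counter()
    for endorsed in polls.values():
        counts.update(endorsed)  # tally each respondent's endorsements
    n = len(polls)
    return sorted(p for p, c in counts.items() if c / n >= threshold)


# Hypothetical poll: each respondent endorses a set of proposals.
responses = {
    "alice": {"cure disease X", "ban weapon Y", "fund project Z"},
    "bob":   {"cure disease X", "ban weapon Y"},
    "carol": {"cure disease X", "fund project Z"},
}

# Only "cure disease X" clears the 90% agreement bar; everything else is left alone.
print(find_common_ground(responses))  # -> ['cure disease X']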





Kyle, when you are rebutting someone in the future, I would suggest using specific quotes, rather than paraphrasing.  I think it would help clarify the disagreement.  Thanks!





Kyle, if I wasn’t using special pleading, i.e., if you don’t have quotes to back it up, then please remove my name from the list of names in this post.





Michael, Hughes already quoted you directly, but here is where you engage in special pleading:

“Power corrupts humans for evolutionary reasons: if one is on top of the heap, one had better take advantage of the opportunity to reward one’s allies and punish one’s enemies. This is pure evolutionary logic and need not be consciously calculated. AIs, which can be constructed entirely without selfish motivations, can be immune to these tendencies.”





Kyle,

Would you like to write an article claiming that Nick Bostrom engages in special pleading? I can provide many links to Bostrom articles where by your thinking he certainly does so, like here:

http://www.nickbostrom.com/ethics/ai.html

If you *don’t* want to write such an article, I’m very interested in hearing why. Don’t you notice that he very frequently says the very same thing as e.g. Anissimov there, or don’t you dare present your “argument” in his case?





Mike,

I’m not expecting you to fall obediently behind Bostrom. I’m expecting you to dare make it clear whether you agree with him or not.

Currently, you *don’t* dare publicly attack Bostrom when he says the very same things that people like me also say (in which case you attack vehemently, and with ad hominem). You also avoid making it clear whether you agree with him or not. Why is this?


Would you please explicitly confirm whether you think Nick Bostrom engages in special pleading or not?

I’m not expecting a particular answer, just an answer.





re Aleksei: “or don’t you dare present your ‘argument’ in his case?” and Mike: “in our world we don’t kowtow to a messiah”:

Why the snarkiness? It confuses and saddens me. I would guess that everyone here wants to see the same general sort of future: a humane future full of free people living happy, fulfilling transhuman lives. If some of us should find ourselves disagreeing on the factual question of what strategies are likely to result in such a future, then surely we should be eager to mutually exchange information, resulting in more accurate beliefs that will allow each of us to take more effective actions towards securing a humane outcome. Rational agents should expect to agree on questions of fact; when they don’t, it’s a puzzle to be solved, not a battle to be fought.

I think it would help to take a step back from politics and instead analyze what kinds of AI are likely: how smart, how fast, the nature and the extent to which this smartness concept is even coherent anyway, the nature of goals and optimization, and so forth. Given a clear understanding of the subject matter, perhaps interesting conclusions might be reached about singleton strategies and the like. Without that understanding, I fear we are just left with noise, for those two innocent letters “AI” seem to mean something entirely different to each who reads them.





@ Aleksei

In the article of Bostrom’s you cited, he doesn’t mention FAI governance. He notes that:

“we could delegate many investigations and decisions to the superintelligence.”

Or give it control over aspects of problem-solving, like how to best be philanthropic. I agree with most of Bostrom’s points about why an FAI would be different, but he at no point uses those arguments to exempt the FAI from the criticisms I am leveling. He does not engage in special pleading because he does not use a theoretical aspect of FAI to let it escape valid criticisms.

@Zack

You’re right, the snarkiness is largely unnecessary. But this is a passionate issue and there are only a few of us at all who care about it. There are no ad hominem attacks, just a lot of passionate language. That’s good in a debate, and snarkiness makes it fun. I’m not taking any of this personally and hope no one else is.

Reaching a consensus is for politicians and boardrooms - I’m interested in the best possible solution. Whether I’m wrong or right or somewhere in between, debates like this are the stuff of which intellectual growth is made.





Kyle,

Would you be willing to bet against my claim that if I showed Bostrom the Anissimov quote you cited as “special pleading”, he’d respond that he agrees that his article includes the very same sentiment?

You might benefit from looking again at the Anissimov quote and e.g. this excerpt from the Bostrom article:

“Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.”


I’ll also note here that the first sentence is exactly what James Hughes tends to describe as “sublimated religion”. I wonder if he has ever told that to Bostrom?





@ Aleksei

Bostrom is open to the abstract possibility of an altruistic superintelligence. So I am. I just think it’s very unlikely.

I also see a world of difference between Bostrom’s careful, conditional thinking on these issues and the apocalyptic cult of personality you have built around a high school drop-out with the backing of a reactionary billionaire. There is an abstract possibility that the solution to humanity’s problems can be found in genetically engineered papaya, but it becomes a religious cult when you insist that only super-papaya can solve the world’s problems, and therefore super-papaya is the only thing worth caring about. Nick Bostrom has never said anything that absurd. You have.





jhughes,

> Bostrom is open to the abstract possibility of an altruistic
> superintelligence. So I am. I just think it’s very unlikely.

Nice to hear you backing away from previous recent claims such as this:

“I also do not believe in the possibility of a super-AI of the type you imagine capable of doing these tasks which did not have some kind of self-interest, or was not programmed to serve the interests of some group more than others. I think the notion of such a purely altruistic creatures is sublimated religion.”


> I also see a world of difference between Bostrom’s careful,
> conditional thinking on these issues and the apocalyptic cult of
> personality you have built around a high school drop-out with
> the backing of a reactionary billionaire.

Strange that Bostrom himself sees us quite differently, and thinks that this person, who did not attend high school because he was, e.g., homeschooled as “an officially recognized child prodigy” when he was very young, has contributed a lot of quality thinking which Bostrom often references in his academic articles.

I’ll also mention that deciding to not attend high school is not at all the same as “dropping out”. In general, however, I would like to see us discussing the issues and not arguing over your ad hominem attacks.


> There is an abstract possibility that the solution to humanity’s
> problems can be found in genetically engineered papaya, but it
> becomes a religious cult when you insist that only super-papaya
> can solve the world’s problems, and therefore super-papaya is
> the only thing worth caring about. Nick Bostrom has never said
> anything that absurd. You have.

I don’t think that only AI can solve the world’s problems. How many times do I need to say this to stop you from fantasizing otherwise?

I also don’t think that only AI existential risks are worth caring about, though I am pretty close to claiming that only *existential risks in general* are worth actually paying significant attention to. So is Bostrom, and it actually is an article of his (the “astronomical waste” paper) that I tend to reference every time I express this view of mine and want to point to a more exact formulation.


But really, when you claim that I am religious, I don’t really take you any more seriously than those who claim I’m religious in my appreciation of Richard Dawkins’ efforts. It’s hard for me to see these comments of yours as not coming from a strong political bias and a need to try to get people to take your word for what we are. There is such a breathtaking difference between how a calm and careful Oxford academic like Bostrom relates to us SIAI supporters (which he also is), and how you sound like a political fanatic quick to go for inaccurate ad hominem.





@ Aleksei

Zing. You got me. For the record, I believe the possibility that we might someday be ruled by a perfectly selfless godlike robot has a probability greater than 0. I also call myself an atheist although there, too, the possibility that we are created and watched over by a godlike being is greater than 0.

As to ad hominem may I point out that you are the one constantly biting my ankles over my alleged perfidious political attacks on your cultlike behavior? Let me be plain about what I am saying. Your “friendly AI” community runs a gamut from, at one end, serious thinkers like Bostrom attempting to map the risks and benefits of AI and engage in dialogue with the wider intellectual community around affective computing and cognitive neuroscience. And at the other end there is an insular messianic cult that believes that they are the smartest people in the planet, led by the smartest man on the planet, engaged in the only important project on the planet.

Eli is a smart man, but he is treated by your group as a super-genius whose every word is worthy of deep meditation, despite his demonstrating shallow interest in the wide world of cognitive and computer science relevant to any serious effort to figure out how to make a friendly robot. Whoever invited the wide swath of speakers to the Singularity Summit obviously recognized the need to appear to be reaching out to the wider intellectual community, but it reminded me of the expensive peace conferences the Moonies used to organize - your community is more interested in exposing computer scientists and neurophilosophers to the wisdom of Chairman Eli than in having a real dialogue.

So yes, on the face of it Bostrom and Eli may say similar things. We trust Nick as a careful thinker and passionate activist trying to mitigate existential risks. But we have not built a cult around him. You have built one around Eli.





To repeat what Aleksei said:

Kyle,

Would you be willing to bet against my claim that if I showed Bostrom the Anissimov quote you cited as “special pleading”, he’d respond that he agrees that his article includes the very same sentiment?

You might benefit from looking again at the Anissimov quote and e.g. this excerpt from the Bostrom article:

“Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.”

The above is essentially the same as what I was claiming.  If you are going to accuse me of special pleading, include Nick Bostrom among the accused.

James, as an employee of SIAI who deals with all the other employees, including Michael and Eliezer, as a matter of daily life, I can pretty confidently say that if you were in my shoes you would see that we do not treat Eliezer as a cult leader.  Within SIAI, we openly share both positive and negative feedback with each other, without special privileges or limitations.  The people that set up the Singularity Summit—that is, primarily myself, Michael, and Aruna—work very hard to bring together a diverse group of intellectuals to discuss cutting-edge issues in sci, tech, and philosophy.  Like dozens of other speakers, Eliezer shares his views at the Summit, and people can agree with what he says or reject it on their own terms.  If we were so obsessed with highlighting Eliezer, then why did he only give his talk near the end of the second day at the last Summit?  All part of our nefarious cult plan, I presume.





Michael: “If we were so obsessed with highlighting Eliezer, then why did he only give his talk near the end of the second day at the last Summit? All part of our nefarious cult plan, I presume.”

Well, conditional on the (false) nefarious cult hypothesis, yes. It is not at all obvious that speaking near the end of the conference is less of a spotlight. (Compare: the Academy Awards end with the presentations for Best Actor, Actress, Director, and Picture.) Furthermore, SingInst researcher Anna Salamon spoke both first and last on the danger of hard takeoff scenarios.




