Creationism, Birtherism, Singularitarianism, and Other Fantasies
Mike Treder   Aug 7, 2009   Ethical Technology  

What do Orly Taitz, Ken Ham, Eliezer Yudkowsky, and Max More have in common?

All have espoused beliefs they claim are rooted in fact and have a rational justification, but actually are motivated by ideology or emotion.

Orly Taitz - Self-described as a lawyer, dentist, and real estate agent (!), she is the public face of the “birther” campaign in the United States, a laughable attempt to “prove” with hard evidence that Barack Obama was not a natural-born American and thus is not qualified to serve as President.



Ken Ham - A tireless promoter of Young Earth creationism, Ham co-founded the Creation Science Foundation in Australia in 1979, and now runs the wildly successful Creation “Museum” in Petersburg, Kentucky, USA.



Eliezer Yudkowsky - An autodidact who did not attend high school and who has no formal education in artificial intelligence, Yudkowsky has nonetheless achieved a substantial (some might say Messianic) following as co-founder and intellectual leader of the Singularity Institute. His commitment to the concept of “Friendly AI,” his desire to bring about the Singularity as soon as possible, and his libertarian leanings have made him popular among a certain segment of transhumanists.

Max More - The former Max O’Connor began the now-defunct Extropy Institute in 1990, championed technocratic libertarian ideals for many years, and now has fallen into the unfortunate role of opposing the hard science that underlies concerns about climate change.



So, we have birtherism, creationism, singularitarianism, and climate science denialism. In each case, arguments are marshaled that seem to resemble scientific or legal reasoning but that end up as speculative assertions intended to support fanciful, ideological, or faith-based positions. No doubt some who subscribe to each of those schools of thought would object to being lumped in with the others; they’d loudly proclaim that while the other beliefs may be misguided, theirs is not. I’ve placed them together deliberately, though, because I think they reveal a pattern: a dangerous, insidious compartmentalization of rationality.

Standing up in the court of public opinion armed with fancy-looking charts and with quotes from “authorities,” the poseur assumes the role of a sophisticated deliberator, but the outside image is only a shell. Under the surface, deeper non-rational impulses drive them. It’s true that well-ordered reason is sometimes in evidence, at least with some of these offenders, but the problem is that when facts and reason lead away from their pre-ordained conclusions, they readily jettison rationality in favor of orthodoxy.

Was Barack Obama born in Hawaii in 1961? Of course he was, but by raising a smokescreen of doubt maybe we can disrupt his agenda.

Was the Earth created in its present form by God six thousand years ago? Obviously not, but by jiggering the facts a little bit we can make it appear plausible to the ignorant and gain control over them.

Will a friendly machine soon reshape human society into Utopia? It seems highly doubtful, but since we’d like it to be so, let’s develop an argument of apparent certainty.

Is all the evidence of anthropogenic global warming pure bunk? No, I’m afraid it’s not, but there is a lot of oil money, and even some ideological satisfaction, in making the case that it is.

Of the four, creationism and global warming denialism are the most directly anti-science, although both try to argue their points from a “logical” perspective. Singularitarianism is openly anti-politics (and implicitly libertarian) as well as quietly fearful of technological progress. Singularitarians have more in common with relinquishers like Bill Joy than they may realize; their essential thesis is that science run amok is dangerous and can be made safe only if held under tight control by an elite group. That belief stems from fear more than reason.

I’m acquainted with Eliezer and Max, personally, and I know them both to be highly intelligent. Thus it seems a mystery to me that they can shift so rapidly from one mode to another, from pursuit of scientific reason to assertion of unsubstantiated opinion without differentiating between them. I freely admit to possessing and expressing political or other opinions, and I acknowledge openly that at least some of what I believe about right and wrong is based on intuition as opposed to deduction, but I try to make those divisions plain and to separate my thinking with reason from my reasoning with emotion. What seems harmful to me about the activities of Max and Eliezer, or even worse examples such as Taitz and Ham, is that they appear unable—or maybe unwilling—to tell the difference.

I have the same complaint about scientists and other pundits who assert that there is no inherent conflict between science and traditional religions like Islam, Christianity, or Hinduism: their arguments simply don’t meet the test of intellectual rigor. Sam Harris has done a much better job than I ever could of dismantling the accommodationists’ claims, so I won’t attempt to do so here; instead, I invite you to absorb his impeccable reasoning.

Finally, a request. If you support science and reason, if you appreciate all that technology and secular society have done to relieve suffering, promote freedom, and bring opportunity to many, then please stick with your rationality even when it takes you to uncomfortable places. Don’t allow fear to overcome logic. Accept the facts and the truth even if they require you to change your opinions. In the long run, both you and the world we live in will be much better for it.

Mike Treder is a former Managing Director of the IEET.



COMMENTS

Poor old Eli and Max. I personally may not like Eli much, but your comments seem a bit harsh, even by my standards. 😉  You are right that I do not think that the radicalism of the Singularitarians and Libertarians comes from ‘rational intelligence’, as they like to believe; rather, the radicalism comes from reflective consciousness and raw emotion, just like with the rest of us.

I’m sure that intelligence is not primary, but rather it is an ‘aspect’ of consciousness (a ‘narrative’ - see Robin Hanson’s theory of identity on ‘Overcoming Bias’), and even the ‘Bayesian Induction’ that the Singularitarian Community seems to worship is really just another narrative, rather than an absolute bedrock of rationality.  The basis of narrative is, of course, analogy formation rather than Bayes, which suggests to me that analogy formation is actually the bedrock of mind. (AI researchers take note!)  Curious that the Singularitarians who pride themselves so much on ‘rationalism’ seem to be driven by the most intense raw emotion and conscious experience.

There’s a good chance of a Singularity. I don’t think the Singularity meme per se is irrational, but rather the idea that it will end politics.  Of course it won’t.

Two comments:

1) This seems a bit unfair to Max and Eliezer. I have not heard Max “opposing the hard science that underlies concerns about climate change”, but only questioning the accuracy of current weather and global climate models. Which I can agree with.

I think there is enough evidence that climate change is real, but I also think it is overstated at times, often by scientists pursuing grants (we are all human beings who need to make a living after all). Let’s take climate change seriously, but let’s not become fundamentalist about it.

While I am not persuaded by Eliezer’s writings on FAI, I think putting him in the same category with Taitz and Ham is unfair. You know that.

2) If we had always meekly “accepted the facts and the truth” we would not have advanced much from the caves. Most of the technical and social advances which have permitted “to relieve suffering, promote freedom, and bring opportunity to many”, from electricity to gay rights, have been promoted by persons who refused to accept the “facts” and “truths” of their time and place. Let us support logic, science and reason, but without giving up our imagination.

I do not agree that Singularitarianism is similar to birtherism, Creationism and climate denialism. Nor do I equate Max More’s skepticism about climate modeling with the usual know-nothing climate deniers.

First, unlike these other belief systems, which attempt to deny the accumulating evidence, Singularitarianism is a set of predictions about the imminence and nature of machine intelligence. Those propositions cannot be empirically tested or disproved, unlike birtherism, climate change and creationism. (Or, as the S^ Bayesians would say, the probabilities of these other propositions can be reduced to negligible levels, unlike those of S^.)

Now, I agree that S^ propositions betray quite a few cognitive biases about the likely shape that a-life will take, where it will arise, and what the best ways are to address its risks. I’ve written and spoken about those biases:

http://ieet.org/index.php/IEET/more/2393/

http://ieet.org/index.php/IEET/more/2021/

http://ieet.org/index.php/IEET/more/ffr0809/

But I accord the prospect of machine intelligence, and its risks, far more credibility, however debatable, than grouping them with these other three belief systems as “fantasies” implies. I suspect most of the people involved with the IEET do as well. When last we polled our audience about the AGI threat, only 4% dismissed it as unlikely:

http://ieet.org/index.php/IEET/more/1759/

And more than 60% endorsed the FAI position (after SIAI mobilized their followers to vote in the poll):

http://www.acceleratingfuture.com/michael/blog/2007/06/ai-related-poll-at-ieet/

Second, as to Max More’s skepticism about climate change models, he has articulated a fairly narrow set of claims, all of which I believe are narrowly defensible. I don’t buy into Max’s skepticism because of the overwhelming consensus among scientists who know far more about climate modeling than Max or I. I also suspect that there are some ideological, psychological and cultural factors at work that make me accept the climate change consensus, and Max deny it. But since such factors are at work on both sides, we had best look at the overt arguments.

Denialists who insist that a cabal of greedy green socialist scientists, led by an all-powerful Al Gore, have defeated the oil and gas industry, their paid scientists and lobbyists and right-wing disinformation machine, to propagate a climate change “Big Lie” - that kind of denialist is, I agree, as delusional and anti-empirical as a birther or creationist. But someone who simply claims, as Max does, that climate models are crude and can come up with very different conclusions based on small tweaks, that is a very different kind of claim.

Fortunately those climate models are including more variables. Unfortunately, the better models, such as this one including cloud layer effects, tend to validate the more extreme climate change predictions:

http://news.bbc.co.uk/2/hi/science/nature/8165223.stm
Clouds in climate ‘vicious cycle’

Wow, Mike Treder is really starting to lose it. I wonder if the IEET will continue to tolerate such a tireless slanderer and political hothead as their Managing Director, or will they try to salvage some credibility?

Or is IEET’s target audience now only those who don’t know what Treder’s political opponents (as he perceives them) think, and therefore might actually believe his mudslingings?

If this is what IEET has become, how long will serious people like Nick Bostrom be comfortable with associating with this organization?

(To be clear, I hadn’t before even heard of Orly Taitz or Ken Ham, and have not looked at what exactly Max More says about climate change, but I have looked at Eliezer Yudkowsky’s views in detail, and I recommend others to do so as well before believing the claims of political hotheads like Mike Treder.)

I think the singularity is inevitable.

Eventually we will be able to create intelligence higher than ourselves, whether it’s by reverse engineering the human brain and improving its design or by constantly increasing computing power. And that new intelligence we create will be able to improve its own design just as we have improved our design, creating an explosion of intelligence, i.e. the singularity. There is just no avoiding it; the only question is WHEN it will happen. I believe Kurzweil’s estimate of 2045 is probably accurate, but barring human extinction it’s inevitable.

However, I do agree that Eliezer Yudkowsky does not always present it in a logical way. I much prefer the way Ray Kurzweil presents the argument.

In the same way, I also believe that a cure for aging will eventually be developed, but I would rather listen to Kurzweil on that than Aubrey De Grey.

Kurzweil rules !

“Unfortunately, the better models, such as this one including cloud layer effects, tend to validate the more extreme climate change predictions”


Unfortunately, it’s always the “predictions” of doom and gloom that get underlined by the dishonest journalists or arguers, and never the cautious conclusions of the scientists saying that nothing definitive or certain has been established about the future. (And of course they do this: the Apocalypse will always sell tons more newspapers than cautious scientific conclusions about just how uncertain things are.)

From your article:
“Daniel Lunt, a climate scientist from the University of Bristol, UK, said this was an «important finding», but that it would be a «quantum leap» to conclude that this single model’s predictions about the effects of cloud cover on the future climate would be correct.”

and

“Matt Collins, a researcher from the Hadley Centre, said that the findings gave him confidence in the ability of models to predict climate change.

It was impossible to extrapolate this one test and say exactly what would happen in the future climate, he told BBC News.”


Pretending the uncertainty isn’t there won’t make it go away, nor will irrationally and disrespectfully bashing those who point it out.

I’m saddened by this post. I have no opinion about Orly or Ken, but I have seen zero evidence over a number of years that either Mr. Yudkowsky or Dr. More are irrational in general, or with respect to the specific issues discussed. I don’t always agree with them, but the fact that someone does not agree with me does not establish that they are jettisoning rationality. Nor is the fact that the majority disagrees with them sufficient to establish that they are irrational; Copernicus and Darwin faced exactly this sort of criticism.

The post makes what seems to me an all too common mistake. Motivation may explain why someone holds an irrational belief, but people then infer that because someone has a motivation to hold a belief, the belief is irrational. I would be highly motivated to answer “4” to the question of what 2+2 equals if the prize were a million dollars; I’m pretty sure that my belief that 4 is the right answer is not thereby irrational. If Mr. Yudkowsky and Dr. More are motivated by bad toilet training (or whatever cause one wants to hypothesize), the question remains whether the reasons for their positions meet the usual standards of rationality. I see not a shred of evidence in the post that they fail to, other than that (a) they do not agree with Mike and the majority, and (b) they have motivations for their positions. Both are inadequate grounds for the conclusion that “they readily jettison rationality.”

One thing we must not lose sight of is that rational debate is hard, and often protracted. For example, it took almost ten years of scientific debate after the famous Eddington observation to establish that Einstein was right and Newton wrong. The debate over women’s suffrage took about a century in the West. It is easy to see who is right now, but at the time there were (at least initially) rational and well-meaning people on both sides. I would encourage Mike to engage with Yudkowsky and More’s reasons and leave questions of motivation and character aside.

I agree with you, Mark, on the general tone and assumptions made by Mike.  I know Max fairly well, and one reason I found him quite stimulating intellectually is that he challenged my own thinking.  I have been a long-time environmentalist (without the branding) since I was a child (my bio-relatives donated some of the land to New York City’s Central Park), and one thing I share with Max is a love of the environment.

Years ago when I produced and hosted a cable TV show in LA and Telluride, I had Max on as a guest (1992).  We discussed alternative energy, space exploration, human futures, etc.  In discussions I have with Max today, he is hell-bent on seriously addressing all sides of an issue, which is not anti-hard science.  It is pro-intelligent science.

Finally, I have to say that I dislike Mike’s continuous positioning of Max, which is becoming slanderous.  It would be like someone saying that Mike is a plagiarist, no matter how many times Mike says that he is not.  I doubt Mike is a plagiarist, but if someone keeps saying it, I might start believing it.

Being disrespectful does not further the deep discussions we ought to be having on environmental issues.  Look - none of us agree on everything, but if we give each other a little bit of leg room, we might learn something new, which, in turn, could lead to more effective problem-solving.

Like most of the other commenters have said, this post seems like over-the-top hyperbole.

Mike wrote:

“Will a friendly machine soon reshape human society into Utopia? It seems highly doubtful, but since we’d like it to be so, let’s develop an argument of apparent certainty.”

Is there a shred of evidence that Eliezer has ever suggested that a Utopian future is certain? How could Mike honestly reconcile his claim with writings like this book chapter in Nick Bostrom’s Global Catastrophic Risks volume?

Likewise, the claim that he currently harbors a “desire to bring about the Singularity as soon as possible”, independently of the benefits or harms thereof, is ludicrously counter to his emphasis on caution and careful preparation.

If one is going to group people with creationists and Birthers, it behooves one to present some evidence (links, citations, quotes: all notably lacking) in support of one’s accusations and not to make specific claims about their views that can be trivially falsified with a few minutes of research.

I don’t know enough about the details of Max’s views to speak to substance, but the lack of any supporting links for the accusation against him is also bad form.

“Is all the evidence of anthropogenic global warming pure bunk? No, I’m afraid it’s not, but there is a lot of oil money, and even some ideological satisfaction, in making the case that it is.”

I favor a different wording of the question. Change “all” to “a good amount of” and “pure bunk” to “mistaken.” Now, it’s “Is a good amount of the evidence of anthropogenic global warming mistaken?”
You can still say “no,” but at least the question is fairer.

Mike, I’m uncomfortable about a number of aspects of this post. I’d rather discuss them privately, and I’m sure we’ll have an opportunity. But without going further, there are important human aspects here.

By all means, criticise aspects of SIAI and Eli’s thinking.  But make your criticisms specific and supported.  There are diverse communities around many of these ideas and you’ll probably find others who share and support your concerns.  At least this has been my experience when I have offered criticisms.  This kind of broadside mudslinging is… just ugly and unhelpful.

“Will a friendly machine soon reshape human society into Utopia? It seems highly doubtful, but since we’d like it to be so, let’s develop an argument of apparent certainty.”

This statement is at odds with what most people in the community think.  To use it as a point of criticism shows a poor understanding of what you are attacking.

I remain personally reluctant to form an opinion on the merits of AGW, but what I should like to point out, as I already have a few times, is that AGW is an issue composed of several semi-independent questions, which should be answered individually, and not on the basis of a “Nicene creed”, namely:
- does GW exist?
- irrespective of whether it is anthropogenic or not, would gas emission reduction materially affect it (we might, e.g., have already entered a positive feedback loop of the kind suggested for the future by some climatologists)?
- irrespective of whether gas emission reduction would materially affect it, is it really a bad thing (in fact, only massive propaganda keeps it unpopular amongst the inhabitants of really cold regions…)?
- if it is, how bad? or rather, how much would we really like to reduce it? what costs would we be ready to pay for it? if planetary cooling is an unconditional goal, why not relinquish the use of fossil fuels altogether within six months, damn any other concern, and welcome back the paleolithic?

Because the measures aimed at fighting GW, even admitting that the latter exists and that they are not futile, have a *cost*. A cost which can be measured in reduced or delayed growth, in investments diverted from other targets, in a massive redirection of technoscientific research, and ultimately, for the utilitarians possibly amongst us, in human lives and human suffering.

Now, since such costs are as unpleasant as those created by a significant GW, they should of course be compared, to the best of our ability to assess both, with the costs of GW multiplied by the probability factors of the abovementioned hypotheses and parameters, which - as high as one may deem them to be - are by definition lower than 1.

Taking also into account the crucial issue of geoengineering as a necessary, albeit repressed and denied, part of the equation, we might well end up calculating that the breaking point in the tradeoff between keeping our planet “terraformed” and modifying our economic and ultimately biological requirements could be found at a level less obvious than expected.

Thus, I remain perplexed about AGW, and still assume that its “proponents” might be factually right.

But it is disquieting to see how even in transhumanist ranks all the points above are dismissed out of hand in favour of being, no matter what, on the “cool” side (pun intended).

And since I have no expertise nor training to express a meaningful opinion on the merits, and my claims to competence lie instead in the areas of politics, legislative mechanisms, mass psychology and cultural trends, I am mostly concerned with those aspects.

In his August 7 blog, Treder brands me as “anti-science” and smears me by placing me in the company of young Earth creationists and Obama “birthers”. Anyone who is familiar with my writing will know how ridiculous this is. If you are not, I only ask that you sample it before giving any credence to Treder’s representation. (See links below.)

Treder also implies that I say that “all the evidence of anthropogenic global warming [is] pure bunk”, which is not at all what I have said.

It’s simple, simple-minded, and simply wrong to call me a “denier” just because I have serious doubts about the climate beliefs held by Treder. I do not deny that global warming has taken place over the past century. I don’t even deny that some of this is the result of human activity. (I don’t affirm that either; I’m not sure either way at present. I would be surprised if humans had zero effect, but I don’t believe that current climate models reliably tell us how much of an effect we’re having, compared to natural cycles and effects.)

It’s simply childish to refer to me using a name I haven’t used (and which hasn’t been my legal name) for two decades. I can’t see any reason for this other than an attempt to be annoying.

Treder: Given your position with IEET now, you should seriously consider the effects of your communication style. Despite some differences in politics, I like a great deal of what IEET does. It would be a shame if support erodes because of your attacks. Comments from a growing number of people suggest that this danger is real.

To set the record straight on my ACTUAL (current) views on this cluster of complex issues, please see my August 8 blog entry: <http://strategicphilosophy.blogspot.com/2009/08/my-current-view-of-global-warming.html> and the one following it.

Well, Mike certainly has smacked a hornet’s nest here. I’ll leave my general response to the piece to my ongoing email conversation with Mike, but I do have to comment on a few of the arguments made here regarding climate.

First, the claim that we’ve been cooling for the past twelve years: Wrong.
http://earthobservatory.nasa.gov/IOTD/view.php?id=36699

NASA puts the warmest year on record as being 2005—1998 came close (and the World Meteorological Org, using different data sources, has it slightly warmer than 2005) due to an unusually strong El Niño. All of the top 10 warmest years have come since 1997, and 9 of the top 10 happened after 2000. 2008 was at the lower end of the top 10 (7th to 10th warmest) because of an unusually strong La Niña effect.

Second, the claim that we don’t know whether the increase in carbon concentrations comes from human activity: Also wrong.
http://www.realclimate.org/index.php/archives/2004/12/how-do-we-know-that-recent-cosub2sub-increases-are-due-to-human-activities-updated/

The ratio of Carbon-12 to Carbon-13 in the atmosphere starts to change just when carbon concentrations start to increase, according to ice core studies. CO2 from burning fossil fuels has a different balance of C-12 to C-13 than does the general atmosphere (due to a slight preference for C-12 in plants). The changing balance of C-12 to C-13, along with the build-up of carbon concentrations in general, matches what would be expected from a steadily-increasing amount of burned fossil fuels.
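To make the direction of that isotopic shift concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions (roughly pre-industrial CO2 levels and a typical fossil-carbon delta-13C), not measurements, and it ignores ocean and biosphere exchange entirely:

```python
# Back-of-the-envelope delta-13C mass balance (illustrative numbers, not data).
# Delta values mix approximately linearly by carbon mass when the deltas are small.

def mix_delta13c(mass_air, delta_air, mass_added, delta_added):
    """Approximate delta-13C of air after adding isotopically lighter fossil CO2."""
    return (mass_air * delta_air + mass_added * delta_added) / (mass_air + mass_added)

pre_industrial_co2 = 280.0   # ppm (assumed, roughly pre-industrial)
pre_industrial_d13c = -6.5   # per mil (assumed, roughly pre-industrial air)
fossil_d13c = -28.0          # per mil (assumed, typical plant-derived fossil carbon)
added_fossil_co2 = 100.0     # ppm of fossil-derived CO2, ignoring any uptake

print(mix_delta13c(pre_industrial_co2, pre_industrial_d13c,
                   added_fossil_co2, fossil_d13c))
# ~ -12 per mil. Real air has shifted less than this naive estimate, because the
# oceans and biosphere exchange carbon, but the downward direction is the fingerprint.
```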

As for the models not being very good, I’m not quite sure what you mean. No model is perfect, but today’s models do back-cast the last century’s climate quite well, and the observed results we’ve seen over the past fifteen years or so generally fall within the error bars of even the 1990-era models.

I’d suggest looking at the Copenhagen Synthesis Report (http://climatecongress.ku.dk/pdf/synthesisreport)  from March of this year to get the latest well-supported, well-analyzed data.

Jamais: You don’t comment on or dispute anything in Treder’s post. Should I take it that you find his attitude justified?

Instead, you comment on a couple of points: “First, the claim that we’ve been cooling for the past twelve years: Wrong.” Since I have not made such a claim, what is the relevance of posting this? (I did say that there appears to have been no warming for 10 to 12 years. The chart you link to shows a decline in the 5-year average, by the way.)

On your second point: “Second, the claim that we don’t know whether the increase in carbon concentrations comes from human activity: Wrong.” Again, I have not disputed that. So why post it? (I have doubts not about human activity causing an increase in carbon, but about the extent to which it has contributed or will contribute to warming.)

Your misguided comments here are, alas, all too common in this discussion. If everyone actually looked closely at what people are claiming (and I went to the trouble of making that completely explicit in the link from my post above), we might make more progress.

I liked your recent article in the Atlantic much better…

Max

One point that no one has yet commented on: Treder uses the term “technocratic” to describe views that I “championed” for “many years”. This is completely false, based on the accepted definition of that term. See <http://en.wikipedia.org/wiki/Technocracy_(bureaucratic)>

Treder is far closer to the definition of technocrat, since he favors a vastly larger role for government control of economy and society than I do.

But what’s especially odd about Treder applying the “technocrat” label is that I’m the one being skeptical about massively interventionist policies based on what a purported consensus of scientists say, while he is the one utterly, completely, and uncritically in favor. Pot calling kettle…

Max

I said at the very beginning that I had expressed my reaction to Mike in an ongoing private email exchange. I’m not sure what value it would have for me to repeat it here, unless it’s either important to reinforce the consensus that Mike’s post was bad, or to call out any deviation from that consensus.

“No warming” over the last 10-12 years is just as wrong as “cooling” over that time period, unless your goal is to cherry pick the anomalous 1998 record as your baseline. The data are very clear: the trend continues to be up over the long term. Yes, the five year running average shows a dip for the last entry—as I said above, 2008 had a strong La Niña, moderating temperatures (adjusting the moving average downward a tick). Nonetheless, it’s well within the fluctuation seen over the past half-century. For what it’s worth, 2009 at the half-way mark was on target to be fifth warmest yet (running neck-and-neck with 2004), and that was before the El Niño started in June.

As for how we know the carbon comes from human activity… you’re right, you didn’t say anything about that. But you’re not the only person talking about global warming in this comment thread. In any event, it’s important to reinforce the fact that AGW claims aren’t just phantoms cooked up in some model, but have real, measurable data behind them.

I will say this about Mike’s post: I don’t think that there’s a clear parallel between birthers (in general) and climate change denialists (in general). Frankly, I think there’s a much clearer connection between 9/11 truthers and AGW denialists. Both rely on assertions made by passionate non-specialists (whether about how well a building would hold up with an airliner with a full fuel tank crashing into it, or about human activity changing the climate) to support their disagreement with the “consensus science” (that the Twin Towers were, in fact, brought down by airliners, or that AGW is happening, likely with dire results), usually with the purpose of illustrating a much larger conspiratorial claim (that the Bush admin was behind 9/11, or that hype about global warming was a way to increase government control).

[Just to be clear: in the above paragraph, I did not call out Max or in any way claim that Max holds the beliefs that parallel 9/11 truthers.]

Here are certain undisputed facts about Yudkowsky that he has himself written about:
1. As a 15 year old, he read Vinge’s fiction about AIs FOOMing to superintelligence and god-like powers, in a matter of seconds.
2. He has become focused on the idea of FOOMing AIs, and in the past proposed self-improvement (FOOMing) as a strategy for achieving AI.
3. He realized that setting the initial conditions to the 1st FOOMing AI will mean the difference between extinction and utopia.
4. He is the leader of a project to create the 1st FOOMing AI.

Is everything I wrote above correct? Have I misrepresented anything?  If the above is correct, then this is what follows:

Eliezer Yudkowsky is the most important human being in the history of the universe. He will single-handedly avoid extinction and create a utopia to last to the end of time by properly coding the first self-improving AI.  And what of his followers? They are like the Apostles, or the Companions of the Prophet, but a billion times greater.

Am I the only one who senses a cult of personality? Not a cult in the sense of a religion which believes ridiculous things. Physics clearly permits AI.  A cult in the sense that Amway is a cult, or the way teenage girls who worship a rock star constitute a cult.

To psychoanalyze a researcher, and not his research, is a variation of the ad hominem fallacy and wrong on two fronts: it is rude, and it does not forward the debate by discussing the technical issues.

Fair enough. I don’t have room in these comments, but when I set up a blog, I’d like to offer constructive criticism on technical points that I think are in error. I suggest that Mike and others do the same.

Eliezer has many good and original ideas that I agree with. He’s correct that values are arbitrary and that human equivalence is an anthropomorphism. He’s correctly identified failure modes of other approaches to FAI. He’s correct that AI motivation should track volition rather than happiness, though I think in practice the two will closely correspond, and that the utility function supergoal should be flexible.  I think he’s wrong about FOOMing, underestimates the collective intelligence of human society and overestimates his AIs.  I’ll offer more constructive criticism when I have more time/space.

I think the cultiness of SIAI has slowed down progress. I’ve read posts where Yudkowsky calls researchers who work on evolutionary algorithms dumber than Bayesians; I’ve read posts where he talks about how dumb other AI researchers are. This is really childish. Eliezer should publish in peer-reviewed journals, and he should publish an academic book instead of constantly blogging or telling us about his secret skunkworks AI project.  Other researchers could build on it and knowledge would grow.

Jim,

There’s no doubt Eliezer has an extremely high opinion of himself, and wants very much to be admired. There is actually a psychological term for this, but for the sake of politeness, I will not use it here. 

I’m certainly not convinced at all about his core claim that ‘values are a product of the human brain’, however.  All that’s been established is that values don’t come from intelligence as commonly defined.  The believer in universal values can simply shift the means of validation of universal values elsewhere… to direct conscious experience for example.

Even the claim that Bayesian Induction is the whole basis of rationality is open to dispute.  Any consciously understandable justification of Bayes is merely a narrative; even the example given by Mark Walker (2+2=4) relies for its ultimate validation on consciousness, since only consciousness can actually assign meaning to the symbols.

Nick Bostrom’s simulation argument only needs to be extended to cover Bayesian reasoning to see that the human view of Bayesian reasoning is itself merely ‘another simulation’, and only direct consciousness (analogy formation) can be trusted. Three geniuses, three elements (Hanson, Yudkowsky and Bostrom).  The first ‘two elements’ are fully in play, but the third element (Nick) has not yet fully shown his hand 😉

Mike probably went a bit over the top, but the point I think he was making is that there are no clear-cut answers about foundational issues yet, whereas Eliezer presents as if he personally has found them all.

PS A piece of fun:  I ask everyone in the transhumanist community to be on the look-out for sightings and references to the number ‘27’: I just saw the number again playing a central role in a post on ‘Less Wrong’!

To me, it seems as if the aggressiveness in this post would never have happened if we didn’t have a debate on politics to begin with.  Mike has been blogging here for some time now and only got super-aggressive in the last week or two.

“Of the four, creationism and global warming denialism are the most directly anti-science, “

Are the small minority of scientists who deny anthropogenic global warming “anti-science”? How about the laymen who believe in this minority? Similarly, were the small minority of scientists who were the early believers in plate tectonics theory “anti-science”?
These questions are really just suggestions to drop the “anti-science” canard, and just call these people “mistaken.”

As someone striving hard to prevent existential disaster and bring about the best possible futures (i.e. “utopias”), I’ve had to defend against claims of arrogance. “So you think you are important enough to actually make a difference? You think that whether you apply yourself to these risks or sit around playing video games will determine the fate of the world?” I think it’s technically possible for all of us to make a difference, that I can help in some way, and that if it comes down to being arrogant or apathetic I’ll choose arrogance. It’s the same reason we choose to vote, though I’d say we each have a much better chance of determining the future than we do of determining the president with one more vote.

So yes, using those views of Eliezer, it would seem he is the most important person on earth. He thinks he’s going to make a difference, maybe even THE difference. But I think he would be more than happy to have people working with him, and I think he’s more than happy to be just a part of a group effort. I think he would even be very happy if someone else built the first AI and it was a FAI (though maybe not quite as happy as if he had built it himself, but still happy).

Eliezer IS more prideful than I am, though I’d also say he is basically smarter than I am. And while I aim to give up pride to be effective it can be argued that one can find worthwhile strength in it, which is something I think Eliezer believes. I do have some concern about his pride but the concern is currently small, even if his pride may not be. I’ve only yet read a single piece of his where he discusses other AI researchers, and I think his pride can hurt him there, even if he’s right on a given point.

And yes, he has a bit of a cult following. I thought this was strange a month ago, but reading more of his Overcoming Bias and LessWrong posts, I admit I’ve been pretty impressed myself. Though I appreciate his humanitarian leanings, I think it’s more a “cult of rationality” than a cult of personality.

Michael: To me, it seems as if the aggressiveness in this post would never have happened if we didn’t have a debate on politics to begin with. Mike has been blogging here for some time now and only got super-aggressive in the last week or two.

Politics, of course—the old, tired and boring dogfight between libertarians and leftwingers.

But I detect also an infection, I hope curable, with the virus of the current PC crusade against imagination. Which has nothing to do with left and right: the left has had wildly imaginative thinkers like Wells, Haldane, Bernal, and the beautiful “Power to Imagination” of the 60s.

Ahhh, the 60s… I was only a child but now I realize that it was the most beautiful decade I have seen, so full of energy and imagination. It ended with the first moonwalk—then a couple of years later the Moon was abandoned and the spirit of the 60s too. How to bring it back?

In case you missed this related commentary:

http://www.alternet.org/environment/141857/

Top 5 Ways the ‘Birthers’ Are Like the Global Warming Deniers

By Joseph Romm, Climate Progress. Posted August 8, 2009

5. Both groups are impervious to the evidence.

4. Both come from the same group of people.

3. Both groups get their disinformation from the same right-wing sources.

2. Both groups have an underlying motivation—their desire to obstruct progressive government action.

1. Both groups believe in a mammoth conspiracy theory.

I think these useful characterological observations make clear how Creationism does fit with the conservative conspiracy theory demographic and mindset of many Birthers and denialists, but also how different from this checklist are the S^ or climate modeling skeptics like Max. I’m quite sure the S^ don’t watch Fox News, or denounce the liberal mainstream media for spreading disinformation about robots.

“No model is perfect, but today’s models do back-cast the last century’s climate quite well, and the observed results we’ve seen over the past fifteen years or so generally fall within the error bars of even the 1990-era models.”

Of course they back-cast well; that’s the only thing they do well. It’s a big part of what the adjustment effort goes into when the models are created in the first place, and it’s also their most useless part.

As for forecasting “well”, 15 years is barely sufficient to draw conclusions from regarding century-level predictions, not to mention that the actual results don’t even come close to following the trend the IPCC’s latest models “predicted” in 2007.

Here’s an analysis of the 2001-2008 observed temperatures from 3 public institutional data sources (plus a degree of climate variability expressed as white noise and adjusted to resemble the variability observed during another period with relatively few volcanic eruptions, just like 2001-2008):
http://rankexploits.com/musings/2008/result-of-hypothesis-tests-very-low-confidence-2ccentury-correct/

The conclusion based on the average trend over all 3 datasets is that there’s less than 10% probability that the IPCC’s predicted trend of +2 degrees per century is being reflected in the actual results (i.e. what the IPCC would call a “very low confidence” prediction; if they were honest about their results, that is).

The conclusion of the same analysis based particularly on the HadCRUT3 data is that there’s less than 5% probability that what we’re seeing is a 2C/century trend. Such a small probability is pretty much equivalent to saying the IPCC hypothesis stands falsified.
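For readers unfamiliar with the kind of test being described, here is a minimal sketch of the generic procedure. It is not the linked analysis, and the anomaly values below are made-up placeholders: fit an ordinary least-squares trend to a short run of annual anomalies and check whether a 0.02 C/yr (2 C/century) reference slope falls inside the slope’s 95% confidence interval.

```python
# Generic version of the trend test described above (placeholder data, not the
# linked analysis): fit an OLS trend to annual anomalies and ask whether a
# 0.02 C/yr (2 C/century) reference slope lies inside the 95% confidence interval.
import numpy as np
from scipy import stats

years = np.arange(2001, 2009)
anomalies = np.array([0.41, 0.46, 0.47, 0.45, 0.48, 0.43, 0.41, 0.38])  # made-up values

fit = stats.linregress(years, anomalies)
dof = len(years) - 2
t_crit = stats.t.ppf(0.975, dof)                    # two-sided 95%
ci = (fit.slope - t_crit * fit.stderr, fit.slope + t_crit * fit.stderr)

reference = 0.02  # C per year
print(f"trend = {fit.slope:.4f} C/yr, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
print("reference inside CI:", ci[0] <= reference <= ci[1])
# Caveat: with eight points and autocorrelated, noisy data, a test like this has
# very little power either way, which is why short windows are a weak basis for
# accepting or rejecting century-scale projections.
```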

Frank said:

‘Though I appreciate his humanitarian leanings, I think it’s more a “cult of rationality” than a cult of personality.’

Frank, most people have no more desire to belong to a cult of rationality than any other type of cult.

I will stick to my instincts, emotions, narratives and analogies, thank you very much.  Eliezer and his followers can trust to Bayes and arid IQ.  I suggest that I develop my Narrative arts, and Eliezer can develop his Bayesian arts.  Hopefully we can stay out of each other’s way (for a couple more decades at least).

Godel Theorems are the strongest possible evidence that Bayes is not the foundation of rationality.  Godel showed that there are some true statements that no amount of non-sentient symbol shuffling can get to, but direct conscious reflection can.  This strongly suggests that Bayesian reasoning is not the foundation of rationality that Singularitarians think it is.

The model of ‘intelligence’ Singularitarians are using is far too limited. I suggest that real intelligence is multi-dimensional and not based on the arid rationalism that Singularitarians worship.

Come on. Godel’s theorem is a meta-statement about knowledge and truth; Bayes’ theorem is a formula for estimating probabilities. They play in different corners of the field, and are never close.

You may wish to read more about Godel’s theorem—it just does not mean what you say.

Giulio,

As I understand it, the SIAI claim is that Bayesian reasoning captures the whole of rationality.  If their claim is correct, then mathematical logic should be entirely expressible in terms of Bayesian reasoning as well (deduction is simply the case where the probabilities are set to 1, 100%).
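For reference, one way to make that parenthetical precise is plain textbook Bayes (nothing SIAI-specific): conditioning behaves like deduction in the limit where the likelihoods are 0 or 1.

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} ,
\qquad
\text{so if } P(E \mid \neg H) = 0 \text{ and } P(E \mid H)\,P(H) > 0 ,\ \text{then } P(H \mid E) = 1 .
```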

As I understand it, the Godel theorems say that in any consistent system of logic complex enough to capture basic arithmetic (addition and multiplication; for instance, Peano arithmetic), there exist true statements expressible in the language of that system that cannot be proved within that system.  They can only be ‘seen’ to be true using direct conscious reflection on the meaning of the symbols.

This seems to show that no non-sentient (unconscious) reasoning process can fully capture rationality, and therefore the conclusion is that Bayesian reasoning cannot fully capture rationality.

If any mathematician or AI researcher reading this can provide a refutation then by all means post it.

Mark,

As I understand it, the Godel theorems say that in any consistent system of logic complex enough to capture basic arithmetic (addition and multiplication; for instance, Peano arithmetic), there exist true statements expressible in the language of that system that cannot be proved within that system…

This is correct.

...They can only be ‘seen’ to be true using direct conscious reflection on the meaning of the symbols.

Godel never said this. This “consciousness-based interpretation” of Godel’s theorem is found only in third-rate New-Age-ish books. Godel’s theorem means just what it says: any sufficiently complex system contains statements that just “happen to be true” while their truth is not provable within the system. This has nothing to do with consciousness: no magic soul to breathe life into lifeless maths.

I really wish people would leave mathematics alone when discussing personal issues.

Giulio

See any account of Godel (Hofstadter’s ‘I Am a Strange Loop’, to give a specific example).  Or from the Wikipedia entry:

http://en.wikipedia.org/wiki/Gödel’s_incompleteness_theorems

We can formulate the Godel sentence of a logic system:

“The TRUE but unprovable statement referred to by the theorem is often referred to as “the Gödel sentence” for the theory. “

The sentence is unprovable within the system but TRUE.  How do we know it is true?  Through conscious reflection on the meaning of the symbols.
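For reference, the textbook construction behind “true but unprovable” (stated here only to pin down the technical claim, not to settle the consciousness question): the diagonal lemma yields a sentence G that provably asserts its own unprovability, and the argument that G is true, given the system’s consistency, can itself be carried out in a stronger formal system.

```latex
T \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\qquad\text{and}\qquad
T + \mathrm{Con}(T) \vdash G .
```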

I am not suggesting that there is anything mystical about this nor does it mean that anything uncomputable is happening (I agree consciousness is entirely physical and is a computation).

However, what it clearly shows is that no unconscious system of symbol shuffling (such as Bayesian reasoning, for example) can understand why the Godel sentences are true.

@Mark: Then you will agree that consciousness can be engineered in principle, as soon as its mechanisms are properly understood.

Without bothering Godel, I suppose you refer to those situations in which Bayesian reasoning alone is not able to identify an optimal decision (Buridan’s donkey).

But there is a simple mechanism: toss a coin. Note that the toss-a-coin function is noncomputable (a truly random sequence is noncomputable by definition), but this does not mean it cannot be implemented (hook the computation to a cosmic ray detector or a quantum generator of entropy).

Giulio,

Yes, I would agree consciousness and intelligence can indeed be engineered in principle, but I would suggest that the current Singularitarians are on the wrong path.

Yes, I was referring to situations where it appears that Bayesian reasoning is unable to make computable decisions, for example, deciding to believe the Godel sentences.

I agree that even uncomputable things can still be dealt with, but I don’t think the brain is dealing with them randomly; I think that there is some new, yet-to-be-discovered method of non-Bayesian reasoning.

So my point was that the Singularitarian obsession with Bayesian Reasoning reflects a very limited conception of intelligence.

I would call Bayesian reasoning ‘rationalistic intelligence’, whereas my preferred mode of operation is much more ‘emotional/conscious intelligence’.

Friends, I am posting this message on behalf of Martine Rothblatt, who you all know is the Advisor to IEET http://ieet.org/index.php/IEET/bio/rothblatt/  Martine emailed me today and asked me to post it for her, as she had a bit of difficulty adding a comment to this page.

“I’m sure that Mike regrets, at least in some part of his mind, his post’s wrongful attack on Max. It is really not fair or accurate; I agree with Natasha that it is close to a kind of slander. Max has done a great deal of philosophical trailblazing for all of us and he does not deserve to be lumped in with creationists or anti-obama nuts.

I don’t know the specifics of the global warming debate between the two of them, but I do know that Ray Kurzweil has been insistent (backed by his logical extrapolations from a mountain of empirical data), that whatever the current situation is, the problem can and will (absent contrary government policy) be solved by burgeoning solar cell technology. The differences among us transhumanists (thank you Max for that too!) are tiny compared to what we support. If we don’t stand together, we’ll hang separately.

Let us not be like the characters in a delightful book I just finished reading, the true story of this guy in Amherst, MA who led a multi-decade effort to save the world’s Yiddish books, which were rapidly disappearing into Dumpsters. He describes the relatively tiny Yiddish community as rife with internecine battles over arcane details that are immaterial compared to the existence or disappearance of Yiddish culture. In a sense, they doomed themselves, and the protagonist of the story has basically put the culture into ‘biostasis’ (sociostasis?) by saving all of its literature. Let’s be smart enough to remain united.”

I read the author as an atheist attacking the possibility of any higher power, even if it is AGI, using aggressive but weak false association. Eliezer has been attacked by an angry man with a pillow.
