How Certain Should We Be?


By Phil Torres
Ethical Technology

Posted: Jul 21, 2010

Given the complexity of the world today, plus the risks associated with current and emerging technologies, it behooves everyone on all sides of the biopolitical spectrum to be open to opposing points of view.

Indeed, it is precisely among men animated by the spirit of science that the doctrine of fallibilism will find supporters. - C. S. Peirce

In an IEET article published several weeks ago, I critiqued a Futurisms post in which Mark Gubrud argues that mind-uploading is doomed to fail, since it presupposes the existence of an immaterial soul (the ontological thesis of substance dualism) – and that, because mind-uploading fails, the entire transhumanist project does too [1].

A central theme of my critique was that one should make every effort to understand the positions that one is criticizing before one criticizes them, especially if one’s criticisms are couched in strong and decisive language. This led me to propose two “rules” for avoiding sciolism, or superficial and (especially in Gubrud’s case) pretentious scholarship [2].

Gubrud is guilty of violating both rules, I argued: his arguments are based on serious misunderstandings of important concepts, and on careless conflations of philosophical theories. Put differently, no one who knows the subject matter would make the arguments Gubrud made, even if he or she agreed with Gubrud’s conclusions [3].

Although my criticism was directed at Gubrud, the basic idea, I think, is universally applicable: none of us should be too sure of or obstinate about the views we hold, we should all strive to routinely question the tenability of our positions, and we should all be open to changing our opinions, no matter how dearly held, when it is rational to do so. (There are important questions here about whether, or to what extent, these precepts should apply to this position itself, to make the position self-consistent; but that’s for another discussion!)


At the core of this thesis is the notion of intellectual honesty, which might be explicated as follows: what one believes ought to be secondary to why one believes it. Or, in different words, the reasons for holding a belief ought to have priority over the content of that belief, whatever it may be. And, since content in part determines who one is as a person – i.e., which beliefs one accepts will determine whether one is a Christian, a socialist, a civil libertarian, and so on – one’s identity should also be secondary to issues of justifiability.

Thus, the intellectually honest (IH) individual is, for example, not primarily an atheist, or a Darwinian, or a transhumanist, or whatever. What matters to the IH individual is that his or her beliefs track, in an important sense, the best available evidence we have, using logic and reason to fit those beliefs together into a coherent system. As a result of such tracking, then, the IH individual must exhibit a high degree of mental flexibility, or a “willingness to reexamine assumptions as [he or she goes] along” (and, of course, to change those assumptions when necessary).

This last maxim comes, incidentally, from Nick Bostrom’s paper “Transhumanist Values.” Bostrom refers to this as “philosophical fallibilism,” listing it as a “derivative value” of transhumanism. If one takes this maxim seriously, though, it means that one must be willing to abandon one’s adherence to transhumanism itself, should sufficiently compelling reasons for doing so ever arise.

While this may have undesirable consequences for the transhumanist movement, as well as for the (ex-)supporter him- or herself (if transhumanism has become an integral part of who that individual is), the IH individual is compelled to accept or reject such beliefs independent of the consequences (even when those consequences are good, I should add). This is, indeed, precisely what it means to be an intellectually honest individual first and a transhumanist second – to put why over what and who.

Similarly, the IH individual who accepts the narrative of evolution rather than that of Genesis must be prepared to abandon (or modify) his or her evolutionism if the available evidence were to make it unreasonable not to do so. We know, for example, that the discovery of a “Precambrian rabbit” would constitute a major anomaly for the claim that evolution did in fact occur [4]. An evolutionist who cares more about truth than about being right (about the evolution of organisms) would be ready and willing (even if unhappy) to reconsider the idea of gradualistic evolution were such a fossil found.

Let me pause here to make two clarifications: first, I should emphasize that it is, at present, overwhelmingly reasonable to accept evolution – specifically, Darwinian evolution – as true. Darwinism is about as robust today as the theory of heliocentrism. Nonetheless, an important point to remember is that scientific theories, insofar as they depend on empirically-derived data for their justification, are all fallible in nature: future evidence, no matter how improbable from our present perspective, may indeed overturn what we now take to be secure.
This is a deep point about the limits of reason that David Hume made several centuries ago [5], and it leads directly to the above quote attributed to C. S. Peirce.

Second, we should distinguish between ideas that are purely descriptive in nature and those that have a normative (or “should”) component. Thus, while atheism may be construed as a theory of what reality really is like (namely, it lacks any supernatural beings), transhumanism makes additional claims about how the world – specifically, the world’s future – ought to be. This means that values enter the picture, and values are largely (or wholly?) independent of the way the world happens to be (another Humean point, which has recently been the subject of vociferous contention).

Nonetheless, transhumanism is crucially informed by the empirical facts [6], and insofar as this is true the tenability of transhumanism will depend, in part, on the way the world is. It may be the case, for example, that increasing our intelligence would only increase the likelihood of an existential disaster. If this happens to be the case – in my own view it is probably not [7] – then the transhumanist prescription to develop and use cognitive enhancement technologies, to “person-engineer,” appears less than judicious.

Finally, I would like to mention two reasons for thinking that IH, and the humility associated with it, is more important today than ever before.

1) Collective human knowledge is growing at (something like) an exponential rate; a salient manifestation of this fact is the rapid proliferation of academic specialties in both the sciences and humanities. Combine this with the fact that, despite an increase in the abundance of cognitive enhancers (from a good education to the World Wide Web), the human mind still remains relatively fixed in terms of its various capacities. (The human mind is certainly not expanding at anything close to the rate at which collective human knowledge is growing.)

From this it follows that individual ignorance – the gap between what any one person knows and what we have collectively cataloged – must be growing at a rate (approximately) proportional to that at which collective ignorance is shrinking; thus, even the most erudite scholar today knows only a small fraction of what we – the whole – have cataloged away in our collectively authored Book of Knowledge.
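(A minimal sketch of the point, assuming – purely for illustration – that collective knowledge K(t) grows exponentially while an individual’s knowledge k stays roughly fixed:)

\[
K(t) = K_0\, e^{rt}, \qquad k \approx \text{constant} \quad\Longrightarrow\quad \frac{k}{K(t)} = \frac{k}{K_0}\, e^{-rt} \;\longrightarrow\; 0,
\]

so, under that simplifying assumption, the share of the collective catalog that any single person commands decays toward zero at the same exponential rate r at which the catalog itself grows.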

This expansion of individual ignorance, I believe, should give us less confidence in any claim we might make about the world today – especially the sort of “big picture” claims that, for instance, the Future of Humanity Institute at Oxford, with its transhumanist leanings, aims to make. Many leading futurists are, no doubt, acutely aware of this point; nonetheless, I believe it warrants repeating, given its importance.

2) Another more or less obvious point is that the stakes today are high. Technology has already reached the absolute ceiling of its destructive potential: contemporary nuclear arsenals are, for example, sufficient to obliterate the biosphere and permanently eradicate Homo sapiens. Future technologies, from AI to nanotechnology, are expected to introduce even more risks of existential proportions.

Given the risks associated with present and emerging technologies, it (obviously) behooves us to be as prudent as possible in mapping out and implementing a program for the future – especially one that endorses the “proactionary” development of unprecedentedly powerful technologies. This means, therefore, letting reasons determine which positions we end up espousing, independent of what those positions are – be they bioconservative, technoprogressive, or whatever. (I should add that, in my view, the reasons for accepting a technoprogressive stance are far stronger than those opposing it, just as the reasons for accepting Darwinian evolution are far stronger than those for, say, creationism. But the possibility remains that robust arguments for bioconservative positions may be articulated.)


In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, if presented with sufficiently good reasons for doing so. Gubrud’s lesson is a lesson for everyone.




COMMENTS


Re: “In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, if presented with sufficiently good reasons for doing so.”

I am, of course, open to changing my views if presented with sufficiently good arguments. But I have never found the arguments of anti-transhumanists good, or persuasive.





> “In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, if presented with sufficiently good reasons for doing so. “

I’d go even further:

“In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, not by waiting to be presented with sufficiently good reasons to do so, but by actively looking for alternative viewpoints for doing so. “





“In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, if presented with sufficiently good reasons for doing so.”

I’m a bit baffled by how you managed to write a complicated-looking article around this very basic and simple point. Isn’t this something that everyone agrees on anyway?





“Technology has already reached the absolute ceiling of its destructive potential”

This is quite a weird statement. Are we able to destroy entire solar systems yet, for example? Or set into action a process that would expand to destroy (or reorganize) the entire universe theoretically reachable from where we are?

Also, even though we might be able to “obliterate the biosphere” with currently existing nuclear weapons, there are bacteria deep underground and so on. They could eventually migrate above ground after a nuclear holocaust and start an explosion of biodiversity all over again. Is our current technology sufficient to destroy even these refuges of the currently existing biosphere?





Really, Phillippe, another post regurgitating your empty use of the word “sciolism” as a smear against my name, and posturing that you are the Learned One in a position to dismiss my imperfectly but I think sufficiently clearly expressed critique of “uploading” and its proponents? In the continuing absence of any substantive response to my arguments, besides pedantic misunderstandings (or misrepresentations) of what I wrote, saying that I ought to know that this term means that and if not I ought to read this or that or else I am unqualified to write about things I’ve been considering since before your birth? And you quote my comment about Christopher Hitchens (responding to your citing the ego-inflated ex-socialist turned neocon apologist for wars of aggression as some kind of fount of wisdom), apparently oblivious to the (I thought not too subtle) suggestion of what I would say if I were to stoop to your level? Really.





In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, not by waiting to be presented with sufficiently good reasons to do so, but by actively looking for alternative viewpoints for doing so.

Actively looking? Let’s see. I am definitely not a racist, and I have strong feelings against racists. Do you mean that I should spend my time reading racist writers? That would strike me as a monumental waste of time: based on a few decades of experience, I know that their arguments are not likely to change my opinion.





I agree with you Giulio. I suppose I could’ve saved you time if only I had added the word “often” somewhere in my statement, like before the words “actively looking.”





@Riikonen

I’m a bit baffled by how you managed to write a complicated-looking article around this very basic and simple point. Isn’t this something that everyone agrees on anyway?

Thanks for the comment. First of all, I don’t think everyone agrees with the IH position. (Not sure, though, what exactly you meant by “everyone,” given the term’s scope ambiguity.) There are plenty of fundamentalists out there who aren’t interested in changing their views about science, religion, etc., no matter what the arguments for or against might be. And anti-intellectualism is, by most accounts, a hallmark of contemporary American culture.

More importantly, though, people are far more willing to avow the IH position than actually implement it in their lives; changing one’s beliefs when it’s reasonable to do so is one of the most psychologically difficult things to do. The point of the article, therefore, was not to articulate a novel thesis (although I did attempt to give a somewhat novel explication of that thesis), but rather to underline the importance of an already well-known, but not universally accepted, way of forming and holding one’s beliefs. As for the simplicity, not all points have to be complicated to be worth considering (I don’t think this is exactly what you were saying…). Atheists, for example, have written entire “complicated-looking” books in support of the simple proposition that God doesn’t exist; and a common response to the results of psychological studies is that these results are “obvious” or “predictable.” Yet the results may still be important. (See, e.g., http://psp.sagepub.com/content/27/4/497.abstract.)

In my view, underlining the importance of IH is never a bad thing. (What do you think?)

@Giulio:

I am, of course, open to changing my views if presented with sufficiently good arguments. But I have never found the arguments of anti-transhumanists good, or persuasive.

I take it that the IH argument is a safe one for people like atheists to make, because atheism seems to be an unavoidable conclusion to reach, given what we now know about the universe. The same applies to the technoprogressive position, although I don’t think transhumanists have a knock-down argument for their position the way that atheists do. Thus, while I’m open to bioconservative arguments against (e.g.) “designer babies,” I don’t find those arguments compelling at all. Appeals to the notion of life as a “gift,” as Sandel does, seem to me pretty vacuous.





Since you’ve had this epiphany and are so open now to the possibility that you might be wrong about something, and since you’ve now put forward two blog posts and a half-dozen comments on Futurisms calling me names and pretending that you can see right through me, before you write another, why don’t you try rereading what I wrote and see if you can’t actually wrap your head around the arguments and engage them on substantive terms?  You have not done that.  At most you’ve accused me of not knowing this or that argument or term, and of “conflating different theories” in academic philosophy when I was not even addressing the works of any academic philosophers but rather those of popular transhumanist writers like Moravec and Kurzweil, whom I explicitly argued use many invented terms, without any careful analytic distinctions, to mean the same thing, which is the same as the meaning of the word “soul” in common parlance.  You have not answered or even addressed this argument, all you’ve done is to posture as one in a position to dismiss it as ignorant.  Believe me, I’ve got plenty of names to call you, too.





@Mark: many readers, including me, have posted very substantive responses to your arguments, here and on the New Atlantis blog. As far as I can tell, your replies have not been very persuasive, or cogent.





@Giulio - I responded to your comments and those of others at length at Futurisms and elsewhere. I can’t claim to have answered every word of every comment, but I always tried to respond as directly as possible to the main and strongest points. Not once did I resort to posturing as a “scholar” and denigrating opponents as unlearned. Of course, every conversation has to end at some point, so I’m sorry if I didn’t answer everything to your satisfaction. I suspect that would have been impossible, but I did make a good-faith effort to engage ideas and reply substantively.





@Mark - I think you are right, answering everything to my satisfaction would have been impossible. In the words of Hugo de Garis, you are a Terran and I am a Cosmist. The difference is a difference of basic emotional stance, which cannot be changed by rational arguments. I also have answers to all your objections, which make perfect sense to me, but would fail to persuade you.

In the debate over at New Atlantis, I have pointed at some logical flaws in your arguments, but I have no doubts that you could find better arguments. Which, of course, would fail to persuade me.

As Cosmists, we _want_ to see humans transcending biology, merging with AIs, and spreading to the universe as uploads, and we will use science and technology as tools to achieve our goals. On the basis of the scientific knowledge available to me, I think this program is feasible. If you tell me that you _don’t like it_, I will respect this.




