Given the complexity of the world today, plus the risks associated with current and emerging technologies, it behooves everyone on all sides of the biopolitical spectrum to be open to opposing points of view.
Indeed, it is precisely among men animated by the spirit of science that the doctrine of fallibilism will find supporters. - C.S. Peirce
In an IEET article published several weeks ago, I critiqued a Futurisms post in which Mark Gubrud argues that mind-uploading is doomed to fail, since it presupposes the existence of an immaterial soul (the ontological thesis of substance dualism). Thus, because mind-uploading fails, the entire transhumanist project does too.
A central theme of my critique was that one should make every effort to understand the positions that one is criticizing before one criticizes them, especially if one’s criticisms are couched in strong and decisive language. This led me to propose two “rules” for avoiding sciolism, or superficial and (especially in Gubrud’s case) pretentious scholarship.
Gubrud is guilty of violating both rules, I argued: his arguments are based on serious misunderstandings of important concepts, and on careless conflations of philosophical theories. Put differently, no one who knows the subject matter would make the arguments Gubrud made, even if he or she agreed with Gubrud’s conclusions.
Although my criticism was directed at Gubrud, the basic idea, I think, is universally applicable: none of us should be too sure of or obstinate about the views we hold, we should all strive to routinely question the tenability of our positions, and we should all be open to changing our opinions, no matter how dearly held, when it is rational to do so. (There are important questions here about whether, or to what extent, these precepts should apply to this position itself, to make the position self-consistent; but that’s for another discussion!)
At the core of this thesis is the notion of intellectual honesty, which might be explicated as follows: what one believes ought to be secondary to why one believes it. Or, in different words, the reasons for holding a belief ought to have priority over the content of that belief, whatever it may be. And, since content in part determines who one is as a person – i.e., which beliefs one accepts will determine whether one is a Christian, a socialist, a civil libertarian, and so on – one’s identity should also be secondary to issues of justifiability.
Thus, the intellectually honest (IH) individual is, for example, not primarily an atheist, or a Darwinian, or a transhumanist, or whatever. What matters to the IH individual is that his or her beliefs track, in an important sense, the best available evidence, using logic and reason to fit those beliefs together into a coherent system. As a result of such tracking, then, the IH individual must exhibit a high degree of mental flexibility, or a “willingness to reexamine assumptions as [he or she goes] along” (and, of course, to change those assumptions when necessary).
This last maxim comes, incidentally, from Nick Bostrom’s paper “Transhumanist Values.” Bostrom refers to this as “philosophical fallibilism,” listing it as a “derivative value” of transhumanism. If one takes this maxim seriously, though, it means that one must be willing to abandon one’s adherence to transhumanism itself, if ever there arises sufficiently compelling reasons for doing so.
While this may have undesirable consequences for the transhumanist movement, as well as for the (ex-)supporter him- or herself (if transhumanism has become an integral part of who that individual is), the IH individual is compelled to accept or reject such beliefs independent of the consequences (even when they are good, I should add). This is, indeed, precisely what it means to be an intellectually honest individual first and a transhumanist second – to put why over what and who.
Similarly, the IH individual who accepts the narrative of evolution rather than that of Genesis must be prepared to abandon (or modify) his or her evolutionism if the available evidence were to make it unreasonable not to do so. We know, for example, that the discovery of a “Precambrian rabbit” would constitute a major anomaly for the claim that evolution did in fact occur. An evolutionist who cares more about truth than about being right (about the evolution of organisms) would be ready and willing (even if unhappy) to reconsider the idea of gradualistic evolution were such a fossil found.
Let me pause here to make two clarifications: first, I should emphasize that it is, at present, overwhelmingly reasonable to accept evolution – specifically, Darwinian evolution – as true. Darwinism is about as robust today as the theory of heliocentrism. Nonetheless, an important point to remember is that scientific theories, insofar as they depend on empirically-derived data for their justification, are all fallible in nature: future evidence, no matter how improbable from our present perspective, may indeed overturn what we now take to be secure.
This is a deep point about the limits of reason that David Hume made several centuries ago, and it leads directly to the above quote attributed to C.S. Peirce.
Second, we should distinguish between ideas that are purely descriptive in nature and those that have a normative (or “should”) component. Thus, while atheism may be construed as a theory of what reality really is like (namely, it lacks any supernatural beings), transhumanism makes additional claims about how the world – specifically, the world’s future – ought to be. This means that values enter the picture, and values are largely (or wholly?) independent of the way the world happens to be (another Humean point, which has recently been the subject of vociferous contention).
Nonetheless, transhumanism is crucially informed by the empirical facts, and insofar as this is true the tenability of transhumanism will depend, in part, on the way the world is. It may be the case, for example, that increasing our intelligence would only increase the likelihood of an existential disaster. If this happens to be the case – in my own view it is probably not – then the transhumanist prescription to develop and use cognitive enhancement technologies, to “person-engineer,” appears less than judicious.
Finally, I would like to mention two reasons for thinking that IH, and the humility associated with it, is more important today than ever before.
1) Collective human knowledge is growing at (something like) an exponential rate; a salient manifestation of this fact is the rapid proliferation of academic specialties in both the sciences and humanities. Combine this with the fact that, despite an increase in the abundance of cognitive enhancers (from a good education to the World Wide Web), the human mind still remains relatively fixed in terms of its various capacities. (The human mind is certainly not expanding at anything close to the rate of collective human knowledge.)
From this it follows that individual ignorance must be growing at a rate (approximately) proportional to that at which collective knowledge is expanding; thus, even the most erudite scholar today knows only a small fraction of what we – the whole – have cataloged away in our collectively authored Book of Knowledge.
This expansion of individual ignorance, I believe, should give us less confidence in any claim we might make about the world today – especially the sort of “big picture” claims that, for instance, the Future of Humanity Institute at Oxford, with its transhumanist leanings, aims to make. Many leading futurists are, no doubt, acutely aware of this point; nonetheless, I believe it warrants repeating, given its importance.
2) Another more or less obvious point is that the stakes today are high. Technology has already reached the absolute ceiling of its destructive potential: contemporary nuclear arsenals are, for example, sufficient to obliterate the biosphere and permanently eradicate Homo sapiens. Future technologies, from AI to nanotechnology, are expected to introduce even more risks of existential proportions.
Given the risks associated with present and emerging technologies, it (obviously) behooves us to be as prudent as possible in mapping out and implementing a program for the future – especially one that endorses the “proactionary” development of unprecedentedly powerful technologies. This means, therefore, letting reasons determine which positions we end up espousing, independent of what those positions are – be they bioconservative, technoprogressive, or whatever. (I should add that, in my view, the reasons for accepting a technoprogressive stance are far stronger than those opposing it, just as the reasons for accepting Darwinian evolution are far stronger than those for, say, creationism. But the possibility remains that robust arguments for bioconservative positions may be articulated.)
In conclusion, individuals on all sides of the biopolitical (or whatever) spectrum ought to be, I believe, open to changing their views, if presented with sufficiently good reasons for doing so. Gubrud’s lesson is a lesson for everyone.