Institute for Ethics and Emerging Technologies


Review of ‘Against Transhumanism’ by Richard Jones

By Giulio Prisco
Turing Church

Posted: Jan 25, 2016

Physicist Richard Jones, author of the (highly recommended) nanotechnology book “Soft Machines: Nanotechnology and Life” and editor of the Soft Machines blog, has written a short book provocatively titled “Against Transhumanism – The delusion of technological transcendence.” The book, an edited compilation of essays previously published on Soft Machines and IEEE Spectrum, is free to download.

It’s not that unusual for outspoken anti-transhumanists to show a crystal clear understanding of transhumanism. Francis Fukuyama denounced transhumanism as “the most dangerous idea in the world” in an influential 2004 article in Foreign Policy magazine.

“As ‘transhumanists’ see it, humans must wrest their biological destiny from evolution’s blind process of random variation and adaptation and move to the next stage as a species,” noted Fukuyama. That’s a clear and good definition of transhumanism, one of the best that I have seen.

Jones, like Fukuyama, understands transhumanism. Chapter 2 of the book, titled “The strange ideological roots of transhumanism,” outlines the Marxist and Christian roots of transhumanism in the works of Russian Cosmists (e.g. Tsiolkovsky, Fedorov) and British Marxists (e.g. Bernal, Haldane). See my review of the chapter and my related essays “The Russian Cosmists” and “John D. Bernal’s The World, the Flesh, and the Devil, a transhumanist classic.”

Jones considers transhumanism as essentially spiritual and religious, a view shared by Robert Geraci in “Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality” and (at times) Ray Kurzweil himself, who said that we can regard “the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking.”

Chapter 3, dedicated to nanotechnology, criticizes Drexler’s vision of self-replicating molecular nanotechnology (see my essay “The nanobots are coming back”). Jones doesn’t deny the in-principle feasibility of molecular nanotechnology (MNT) – he acknowledges that biology proves the feasibility of molecular nanotech – but underlines the huge challenges ahead. “Of course, none of these issues constitutes a definitive proof that the MNT route will not work,” he says. “But they certainly imply that the difficulties of implementing this program are going to be substantially greater than implied by [its proponents].”

“My own view is that radical nanotechnology will be developed, but not necessarily along the path proposed by Drexler,” said Jones in the Soft Machines book. “I accept the force of the argument that biology gives us a proof in principle that a radical nanotechnology, in which machines of molecular scale manipulate matter and energy with great precision, can exist. But this argument also shows that there may be more than one way of reaching the goal of radical nanotechnology.”

Jones’ thesis is that, while Drexlerian “mechanical” nanotech designs must work against nanoscale physics, biological evolution has found powerful ways to take advantage of the same nanoscale physics – a superior design approach. I think molecular nanotech will eventually combine mechanical nanotech, wet bio-inspired engineering, and other approaches that somebody will think of.

In Chapter 4, Jones presents arguments against mind uploading but acknowledges that mind uploading might eventually be possible. “[Mind] uploading being impossible in principle [is] a conclusion I suggest only very tentatively,” he says. “But there’s nothing tentative about my conclusion that if you are alive now, your mind will not be uploaded. What comforts does this leave for those fearing oblivion and the void, but reluctant to engage with the traditional consolations of religion and philosophy?”

I am in total agreement with Jones about mind uploading technology being very unlikely to be developed in useful time for those alive now. I consider both molecular nanotechnology and mind uploading as feasible in-principle and likely to be achieved by our grandchildren, or theirs, but I find Jones’ predictions more plausible than Kurzweil’s.

Jones mentions hope in cryonics and radical life extension as a mental strategy that transhumanists use to cope with the idea of death. I propose a Cosmist Third Way (described by Disinfo as “An Afterlife For Atheists” and by Motherboard as “The Religion of the Future“) as a better coping strategy.

I think we transhumanists should realize that molecular nanotech, superintelligent AI, mind uploading, interstellar travel and all that aren’t arriving anytime soon, and find coping strategies. My coping strategy, openly religious, is to think of future technologies able to resurrect the dead from the past with advanced science, space-time engineering and “time magic.” So I don’t fear death too much and I can enjoy the slow hike to the future.

“[The] tantalising possibility remains that we will truly learn to harness the unfamiliar quantum effects of the nanoscale to implement true quantum computing and information processing,” says Jones in Chapter 3 on nanotech, but he warns that the quantum aspects of nanoscale physics make molecular nanotech very challenging. Similarly, in Chapter 4 on mind uploading, Jones argues that the random quantum behavior of matter at the nanoscale poses significant conceptual problems for mind uploading. I disagree, but the discussion is interesting.

Chapter 5 echoes Dale Carrico’s views and can be summarized as Carrico minus insults. A difference between Jones and Carrico is that, while Carrico sticks to personal insults, Jones addresses the technical arguments proposed by transhumanist scientists in support of futuristic technologies like molecular nanotechnology and mind uploading. Another difference is that Jones acknowledges the good arguments of his opponents, and tries not to look like a fool.

OK, I will admit that I found Jones’ book via Carrico’s blog, of which I am an avid reader. I find Carrico’s blog interesting and often fun, especially when he insults me – I remember laughing for 10 minutes non-stop reading a particularly fun series of insults against me a few years ago. Carrico is an asshole, but one with a sense of humor, and he says intelligent things when he forgets to be an asshole. Note: Dale hasn’t been insulting me much recently; I guess we are getting old.

Jones’ translation of Carrico’s views into reasonable arguments and polite language is likely to be taken more seriously than the original. The book is called “Edition 1.0,” and I look forward to reading future editions.

Replies to especially interesting points in Jones’ book.

“[Contrary] to the technological determinism espoused by the transhumanists, technologies don’t develop themselves.”

Technological determinism is not as common among transhumanists as Jones thinks. We (that is, I and similarly inclined transhumanists) don’t make predictions, but plans. It isn’t a predetermined outcome fixed in stone, it’s a project. “Will” is not used in the sense of inevitability, but in the sense of intention: we want to do this, we are confident that we can do it, and we will do our fucking best to do it.

“I think the brain is a computer, by the way, but it’s a computer that’s so different to manmade ones, so plastic and mutable, so much immersed in and responsive to its environment, that comparisons with the computers we know about are bound to be misleading.”

I don’t know anyone who seriously thinks that the brain is similar to today’s computers, perhaps powered by Intel 20ium and Nvidia JupiterForce, and running Windows 30. That is a restrictive and unnecessary assumption. The brain is a computer in the sense that it is a physical system that follows physical laws. Once these laws are well understood and engineers are able to reproduce the key physical features of the neural substrate, there’s no reason mind uploading shouldn’t be feasible. If a conscious mind can run only on a substrate with certain specific properties, then we will have to engineer substrates with the same specific properties to upload minds. (More…)

“It seems to me that all the agonising about whether the idea of free will is compatible with a brain that operates through deterministic physics is completely misplaced, because the brain just doesn’t operate through deterministic physics… The molecular basis of biological computation means that it isn’t deterministic, it’s stochastic, it’s random. This randomness isn’t an accidental add-on, it’s intrinsic to the way molecular information processing works.”

This is the most interesting part of Jones’ book. That the brain doesn’t operate through deterministic physics is trivially true if fundamental quantum physics isn’t deterministic. However, perhaps quantum physics is not only present as random background noise but plays a strong fundamental role in how the brain’s wetware generates consciousness. If consciousness depends critically on subtle quantum aspects of  our neural circuitry, not present in silicon electronics, then we wouldn’t be able to upload a mind to a silicon computer. If so, we will have to develop alternative substrates that exhibit the key quantum properties found (actually not yet found) in carbon-based biology. Jones doesn’t think we need fundamentally new physics to understand the brain-mind system, but I’m not so sure.

“Radical ideas like mind uploading are not part of the scientific mainstream, but there is a danger that they can still end up distorting scientific priorities… I think computational neuroscience will lead to some fascinating new science, but you could certainly question the proportionality of the resource it will receive compared to, say, more experimental work to understand the causes of neurodegenerative diseases.”

Any opinion “distorts” scientific priorities. But the term “influence” is more appropriate than “distort” in this case, and the right of citizens (including transhumanists) to influence public policy decisions is called democracy. I don’t think transhumanist research should receive disproportionate public funding at the expense of more urgent priorities, but appropriate resources should continue to be allocated to highly speculative science and technology driven by curiosity and visionary imagination, because history shows that’s the way to get good things done. Scouts don’t cost too much and come back with useful findings.

“Carrico sees a eugenic streak in both mindsets [transhumanist and bioconservative], as well as an intolerance of diversity and an unwillingness to allow people to choose what they actually want. It’s this diversity that Carrico wants to keep hold of, as we talk, not of The Future, but of the many possible futures that could emerge from the proper way democracy should balance the different desires and wishes of many different people.”

Of course democracy should balance the different desires and wishes of many different people – including transhumanists. It is Carrico who is intolerant of diversity and unwilling to allow people to choose what they actually want. Of course Carrico is a rhetorician with no power to enforce conformity, and at times he says intelligent things in a fun way, but the fact remains that he doesn’t tolerate dissent and claims the right to tell people what they must think and what they must want. I enjoy reading Carrico (who used to be a transhumanist himself a few years ago), but his views are entirely motivated by hatred of libertarianism. Like many American liberals, Carrico sees only the fake libertarianism of guns and predatory capitalism and ignores the real libertarianism of self-ownership and personal rights (of which transhumanism is an expression), but Jones should know better.

“For Carrico, transhumanism distorts the way we think about technology, it contaminates the way we consider possible futures, and rather than being radical it is actually profoundly conservative in the way in which it buttresses existing power structures… [Transhumanism]/singularitarianism constitutes the state religion of Californian techno-neoliberalism, and like all state religions its purpose is to justify the power of the incumbents.”

See above about “distorts.” There’s something true here, but Carrico and Jones see only one side of the coin. Of course the incumbents are powerful and ready to take advantage of all trends, for example they are trying to turn the Internet into a tool for mass spamming, surveillance and mind control, and Bitcoin into a tool of the banks. But others are trying to find ways to use technology to give more power back to the people, and I think it’s more appropriate to consider transhumanism as their “state religion.” Here again, Carrico and Jones conflate very different aspects of libertarianism, bad ones and good ones, and throw the baby out with the bathwater.

Image from Transcendence, a recent transhumanist film featuring mind uploading and molecular nanotech.

Giulio Prisco is a writer, technology expert, futurist and transhumanist. A former manager in European science and technology centers, he writes and speaks on a wide range of topics, including science, information technology, emerging technologies, virtual worlds, space exploration and future studies. He serves as President of the Italian Transhumanist Association.


Just dropping some thoughts. I hope that is okay.

“I am in total agreement with Jones about mind uploading technology very unlikely to be developed in useful time for those alive now. ...”

Before someone even begins discussing this, they should get clear about terminology and purposes. When speaking about mind-uploading technology, it should always be obvious whenever we mention that term whether we mean copying or transferring the mind or just changing its substrate (I have seen that usage a few times), and what it is we mean by mind. I see three major reasons for mind-uploading: life extension, emotional benefits (e.g. feelings akin to successful reproduction, [maybe] providing solace to those left behind, etc.), and otherwise the scientific and practical utility resulting from having uploads around. For each of these benefits one should ask whether we can really expect to obtain them and how we would know they have been obtained, and at any given time, whether our conception of mind-uploads makes them the best choice for a given purpose. Lastly, the questions remain of feasibility, depending partly on what we take to be a successful mind-upload, and of whether there are more fitting terms given our conceptions. Depending on all that, it might turn out mind-uploads and mind-uploading are significantly different from what we think, and so would be their implications… impacting all arguments where these terms are of central importance.

If our purpose is life extension, we have to ask what combination of criteria would have to be fulfilled for it to be obtained. Is simple identity preservation sufficient or do we want continued perceptual existence? I think for most people to be alive entails having experiences or at least the possibility of experiencing something sometimes. A life that will never experience anything could hardly be said to have been extended. It could hardly be said to be a life at all. Experiencing things seems at least a necessary condition. From that perspective mind-uploading has a few simple issues.

Let’s start with a simple upload which is just like copying a file to another location, not really impacting the original much. Depending on the precision of the copying process it might be situated in the latter part of a progression starting with paintings, photographic pictures, statues, etc., and ending with an “abstractly conceivable” perfect clone. I think we should call this kind of procedure mind-copying, so I will go with that from now on, to differentiate it from other forms of uploading. If some mind-thing, not only according to its behavior but also the specific logical structures (as opposed to the original embodiments) bringing about such behavior, has been sufficiently well imprinted onto something or emerges within the development of something, I will call that a mind-clone. If it diverges significantly from what we deem ideal, I will call it a copy. This is relative to our personal views, of course. Whatever is emergent from such specific embodied structures and leads to those behaviors we call mental I will call a mind. So a mere copy is a deficient clone, but might still be sufficient to fulfill some of our criteria for obtaining a bit of the three benefits mentioned before, or others, and thus is possibly still a good investment.

Anyway. Do you think your perceptual existence would be prolonged by a copy or clone? For example, would you expect to swap sides or view things through that other object’s “eyes”? That depends on what it is we have actually copied! For example, if the mind is something in constant interaction with the embodied structures that bring it about but not equal to them, on a simple materialist interpretation, then just copying and reinstating the “logic” of such structures and their behavior at a given point in time should not be expected to shift a perceptual or existential “center” anywhere. It will instead create a new one at its target location, one that is disconnected from the original, belonging to the other object “over there”. It seems intuitive that this would not extend our lives according to the criterion of continued perceptual existence. This also fits in with our intuitions about numerical identity and uninterrupted continuity being relevant, which by the way help us generally to distinguish copies from originals or things that seem the same.

But what about just requiring identity preservation? If you count that as life extension would simple reproduction suffice as well or do we really need a stricter copy? Or a full-blown clone? How fine grained would it have to be?  Depending on your answers, mind-copying, as opposed to the ideal of mind-cloning, might become less salient. Cloning a mind perfectly seems highly unlikely. I will outline my reasoning further below. But before I get there, let me briefly wrap up the other two benefits of mind-uploading: emotional benefits and utility.

I am more interested in life extension and practical utility than emotional benefits, so I will just say briefly that I believe the main issues here are how much intellectual honesty and rigor we want to have compared to how “pragmatic” we want to be, and in the latter case, how much “rationalist judo” is necessary to feel “irrationally” good (i.e. the distinction between epistemic and instrumental rationality and what we prefer in this case). If trying to be objective and reasonable is not too important, we can try to just believe whatever seems to make us happy. The question of how much rationalization we would require probably depends on the specific case, that is primarily the person and their specific situation (temperament, requirements, overall life quality, etc.). I will not go into that. Of course all this ties in with the feasibility of what we would specifically like. I believe many confused arguments regarding “mind-uploading” stem partly from wishful thinking.

The scientific and practical (i.e. aside from feelings and life extension) utility of mind-uploads depends on what purposes they might fulfill, and in what instances they seem to be the best (e.g. most economical) choices at a given time. Given possible difficulties in obtaining “good copies” (a term general enough to allow different interpretations), let alone real clones, different ways of simulation, e.g. focusing on sets of appropriate behaviors, might be more practical. One could disregard the structure of the mind, that is the how-to part of it that brings about the behavioral outcomes, and just engineer backwards from those behaviors to the level of accuracy we want, thereby obtaining structures that are functionally equivalent “enough”. Something like that is often done already, for specific applications using small parts of our cognition. Understanding the nature of the minds to be copied might be done better by more conventional means, like neuroscience, especially since that is already actionable and much cheaper due to economic scaling, and bypasses all the difficulties I will mention about creating copies and clones. Of course, ultimately there might be some knowledge to be gained through creating copies and clones, only much less than one would think given what other approaches could yield more easily and cheaply.

For economics and politics the behavior is more important: e.g., similar to what the author of the recent (?) book “Sapiens” wrote, just manufacturing “functional minds” and bodies might be more practical than creating whole persons, from an economic and political/power standpoint. To someone owning a taxi service, the driver possesses tons of “functions” unnecessary to the job, some that might interfere with it or else are deemed contrary to employer interest. As to whether mind-uploading in its copying or cloning forms is the shortest or best way to create strong artificial intelligence, and thus reap its benefits… I do not know much about this. Maybe something like a Moravec transfer is better suited for that? Anyway, on to the feasibility of identity preservation via copying and cloning. About transfers I will write afterwards.

Assessments of the possibility of creating a “good” mind-copy, let alone a clone, of course depend on how strict you are. I think that bears repeating, else we risk speaking past each other by shortening our statements and using differing criteria, having terms switch meanings within our arguments, and otherwise inviting other “producers of nonsense”. Whether good substrates are feasible, and whether you can sculpt the kind of structures which seem to have mind-activity… in a way that captures your mind or mind state without that very structure interfering or distorting too much, seems dubious.

Basically there could be two kinds of objects used for mind-cloning or copying (or hybrids in between): those sporting mostly similar structures and substrates, and dissimilar ones, and I would expect both to have their own difficulties. I believe structures of the form that allows cognition of the kind which biological entities exhibit might have their behaviors emerge automatically. Having them run before or while copying might dilute the informational content we want to have imprinted there, and thus introduce unwanted variance. Mind, whatever it is, aside from being a murky catch-all term, seems very much an active thing, not static. Its nature also seems very dependent on its more stable embodiment, leading to its cyclical repetitions and behavioral reliability towards certain stimuli. Given the way such structures emerge naturally, e.g. the developmental stages of a human brain, a clean, inactive “brain” seems extremely hard to build even in principle. It’s just a very messy process depending on tons of input variables. How would one have the right kind of container or even mind (i.e. seeding our copy somehow within the developing container) emerge naturally from such a growth or building process? Let alone a copy or clone… IMHO most similar structures would necessarily develop a mind of their own during the building process. That would also open up a Pandora’s box of ethical issues, especially if you would still want to imprint a mind of your choice onto it. Also, to what extent that would still be possible given differently emerged structures is another question. Which leads us to dissimilar structures and substrates.

We know we can have computational processes that can be much more “controllable” than common biological instances. Those are very different from the ones our minds run on, both in structure and substrate. Which could be an issue… using digital computers as an example, it seems questionable that their alien nature would not quickly force a strong distortion on the mind pattern that we want to engrain or engrave, especially on its structure, making it at best only temporarily equivalent. This structural variation could lead to unpredicted and unwanted divergent evolution between the original and the copy or clone mind, ultimately leading to behavioral differences that prompt “de-identification”. We can see this from a more general standpoint as well, concerning “possibly cognitive” structures. Much research has been done on embodied and enactive cognition, which seems to be largely ignored by most transhumanists, but I will leave this just as a suggestion here. Instead I go from what seems common knowledge. For example, taking neuroscience into account, it seems that our consciousness, our way of being, and even our abstract thoughts, by virtue of being influenced by emotion and moods and other states or their physical correlates, very much depend on tons of wetware, chemicals, etc. This surely takes part in shaping the mind as a pattern. Taking the digital computer as an example, it IMHO fails to qualify for cloning and likely even for “good enough” copying. It seems the best you would achieve with this is some kind of behavioral simulation as mentioned above, not a mind-clone or copy. Thus, more generally, using something very different as a vehicle for our mind might not be possible without distorting it too heavily. It seems most synthetic materials fail to allow enough similarity in structure, it seems behavior simulations are not enough in the long term for identity preservation, and it seems also that any substrate similar enough potentially brings about its own activity and is hard to imprint cleanly. Have we ever seen any synthetic material similar enough to biological tissues? How would we know it allows for inner experience? How could we know? For example, just plugging more processing power into our brains and becoming a hybrid being… leaves open whether the subjectivity part comes from your biological part.

Anyway, mind-copying and its alternatives might violate the identity requirement for life extension, besides possibly being impractical. One could make the requirements less strict, but in that case many things will seem to be identical contrary to our intuitions, along the lines of the joke that a topologist cannot tell a cup of coffee and a doughnut apart. If we are too abstract, pseudo-simulations already abound in other people and things. Whether you would still want to do cloning or rather copying of any kind, e.g. for emotional benefits, depends of course, as indicated above, on the “rationalist judo” you require to obtain those benefits. Limited cloning or copying might also be valuable for specific tasks, etc.

Now, for the individual there is also the option of transferring the mind, as with a Moravec transfer, or changing the substrate gradually. But who knows how many structural changes would occur that are not captured by our mathematical models? Even if successful you might still be a singular object (which, if you care about perceptual existence, is not necessarily good), and additionally subject to strong identity distortions as argued above. The life-extension benefits could be dubious as well, given the durability of substrates currently available and extrapolating from that. Biology tends to be underestimated by our selective perception, which focuses on details of what does not work, specific “savantries” in computers, etc. Aren’t most synthetic materials that we use for computing short-lived compared to us as self-repairing organisms, in the sense of independently sustaining their activity (guaranteeing their “food”, that is, electricity)? Also, who knows whether such a transfer would even be feasible without effectively killing yourself, in the sense of perceptual existence? Would you stop experiencing things gradually? That depends on the actual solution to the hard problem of consciousness. Many scientists seem not to understand this problem (e.g. often those also sporting a seemingly self-justifying “Münchhausen empiricism” with Neoplatonic undertones and a subsequent naive reification of models and reduction of things to their descriptions, or even just to incomplete descriptions of their “behavior”).

The hard problem lies with explaining why and how we have subjective experiences, qualia, consciousness, or whatever else you call the mental, in the first place. Even if your third-person-centered, mathematical-modeling-based (and thus abstracting) tools can find correlates, and nothing else due to their nature, that does not necessarily mean there is no “true answer” to that question. In the case of a Moravec transfer this question might be a matter of life and death. So much for mind-uploading in its transfer forms… whatever ;)

I guess I wrote enough already and do not want to clog up all this space here. Though there is a bit left to be said about the notions of computation, the dubious usage of the notion of natural law, and, especially, the confused claim about the investigation of whether determinism and free will are compatible or not. For now I will end this with a short RIP for Marvin Minsky. I wish him luck with his cryonics.

It takes a long time to change the habit of anthropocentric ways of looking at our world, but at last we are seeing a groundswell of folk, including some who have commented above, who are breaking away from the weary old paradigms presented at this summit.

So, forever patient, I will yet again point out the objective realities:

Sadly, the article represents another serving of the same old anthropocentric nonsense.

One wonders when will those of this antiquated transhumanist cult finally wake up to the reality that whatever beneficial spin-off may arise from the autonomous evolution of technology within the medium of the shared imagination of our species, such developments are quite incidental to nature’s overall machinery.

Our quite natural anthropocentric mindsets leave us with vague woolly notions that the advancement of technology is something over which we humans have significant control.

But we are putting the cart before the horse! The reality being that it is a largely autonomous evolutionary process.

We seem to fail to notice that most of us are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps, and many other trappings of the increasingly cloudy net and the ever-growing insidious reticulation to peripheral devices.

We are already largely dependent upon it for our commerce and industry and there is no turning back. What we have fondly perceived to be a tool is well on its way to becoming an agent.

It may be humiliating to have to admit that our species, far from being potential “masters of the universe” or the “pinnacle of biology”, is simply a tiny cog in the humongous machinery of nature. But, viewed objectively, that is quite clearly the way it is.

And that our only claim to distinction is that our collective imagination is the medium in which technology has autonomously evolved in the course of the past 2.5 million years.

Furthermore, that, within decades, we can reasonably expect to become redundant to that process. That Netty, rather than this pathetic tribe of snout-less apes, will, in turn, rule.

The construction of a “brain” that will soon equal and then surpass that typical of our species has for long been a work in progress. Not as a result of any deliberate human “design” but rather as the result of an autonomous evolutionary process that can be seen to have run its exponential course since humankind acquired the ability to share imagination, an ability which we know as language.

Very real evidence indicates the rather imminent implementation of the next (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet.

It is effectively evolving, in the tradition of biology, by a process of self-assembly.

Consider this:

There are at present an estimated 3 billion Internet users. There are an estimated 13 billion neurons in the human brain. On this basis of approximation, the Internet is even now within one order of magnitude of the human brain, and its growth is exponential.

That is a simplification, of course. For example: Not all users have their own computer. So perhaps we could reduce that, say, tenfold. The number of switching units, transistors, if you wish, contained by all the computers connecting to the Internet and which are more analogous to individual neurons is many orders of magnitude greater than 3 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but can adopt multiple states.

Without even crunching the numbers, we see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in processing power.
And, of course, the degree of interconnection and cross-linking of networks within networks is also growing rapidly. The culmination of this exponential growth corresponds to the event that transhumanists inappropriately call “The Singularity” but is more properly regarded as a phase transition of the on-going “life” process.

An evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.
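
Crunching those numbers anyway, here is a minimal back-of-envelope sketch in Python. It uses only the rough estimates quoted above (3 billion users, 13 billion neurons), which are assumptions of the argument rather than measurements, and it says nothing about actual processing power:

import math

# Rough figures from the comment above (assumptions, not measurements):
internet_users = 3e9    # estimated Internet users
brain_neurons = 13e9    # the figure quoted above for neurons in the human brain

ratio = brain_neurons / internet_users
gap_in_orders_of_magnitude = math.log10(brain_neurons / internet_users)

print(f"Neurons per Internet user: about {ratio:.1f}")                        # ~4.3
print(f"Gap in orders of magnitude: about {gap_in_orders_of_magnitude:.2f}")  # ~0.64

# The refinements mentioned above pull in both directions: fewer than one
# computer per user shrinks the Internet-side count, counting transistors per
# computer enlarges it by many orders of magnitude, and multi-state neurons
# enlarge the brain-side count. Either way, the two raw counts sit within one
# order of magnitude of each other.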
