Is The Singularity A Religious Doctrine?
John G. Messerly   Apr 23, 2016   Reason and Meaning  

A colleague forwarded John Horgan's recent Scientific American article, "The Singularity and the Neural Code." Horgan argues that the intelligence augmentation and mind uploading that would lead to a technological singularity depend upon cracking the neural code. The problem is that we don't understand our neural code, the software or algorithms that transform neurophysiology into the stuff of minds like perceptions, memories, and meanings. In other words, we know very little about how brains make minds.

The neural code is science’s deepest, most consequential problem. If researchers crack the code, they might solve such ancient philosophical conundrums as the mind-body problem and the riddle of free will. A solution to the neural code could also, in principle, give us unlimited power over our brains and hence minds. Science fiction—including mind-control, mind-reading, bionic enhancement and even psychic uploading—could become reality. But the most profound problem in science is also by far the hardest.

But it does appear "that each individual psyche is fundamentally irreducible, unpredictable, inexplicable," which suggests that it would be exceedingly difficult to extract that uniqueness from a brain and transfer it to another medium. Such considerations lead Horgan to conclude that, "The Singularity is a religious rather than a scientific vision … a kind of rapture for nerds …" As such it is one of many "escapist, pseudoscientific fantasies …"

I don’t agree with Horgan’s conclusion. He believes that belief in technological or religious immortality springs from a “yearning for transcendence,” which suggests that what is longed for is pseudoscientific fantasy. But the fact that a belief results from a yearning doesn’t mean the belief is false. I can want things to be true that turn out to be true.

More importantly, I think Horgan mistakenly conflates religious and technological notions of immortality, thereby denigrating ideas of technological immortality by association. But religious beliefs about immortality are based exclusively on yearning, without any evidence of their truth. In fact, every moment of every day the evidence points away from the truth of religious immortality. We don't talk to the dead and they don't talk to us. On the other hand, technological immortality is based on scientific possibilities. The article admits as much, since cracking the neural code may lead to technological immortality. So while both types of immortality may be based on a longing or yearning, only one has the advantage of being based on science.

Thus the idea of a technological singularity is, for the moment, science fiction, but it is not pseudoscientific. Undoubtedly there are other ways to prioritize scientific research, and perhaps trying to bring about the Singularity isn't a top priority. But it doesn't follow from anything Horgan says that we should abandon trying to crack the neural code, or the Singularity to which that might lead. Cracking the code may solve most of our other problems, and usher in the Singularity too.

John G. Messerly is an Affiliate Scholar of the IEET. He received his PhD in philosophy from St. Louis University in 1992. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of philosophy, evolution, futurism and the meaning of life at his website: reasonandmeaning.com.



COMMENTS

John Horgan’s article makes a convincing case for the difficulty of arriving at a complete understanding of the human brain and mind. Unfortunately it doesn’t seem to occur to him that this achievement isn’t the sole pathway to a technological singularity, any more than arriving at a complete understanding of all the biological systems of a bird was the only pathway to achieving heavier-than-air powered flight.

I agree with Horgan that, at least for some enthusiasts, “the Singularity is a religious rather than a scientific vision.” I would also say that for some dis-enthusiasts, believing in the near-impossibility of creating an intelligence that surpasses that of humans is likewise a religious rather than a scientific vision. Horgan, with his willfully blinkered view of the issue, appears to be a prime example of this.

John G. Messerly
“We don’t talk to the dead and they don’t talk to us”.

Mediums would dispute this, or at least maintain some form of communication is possible. Whether they do communicate with the deceased is, of course, entirely a different question. But where is Messerly’s refutation?

“[T]echnological immortality is based on scientific possibilities”.

I don't think so. I argue in a recent essay (link below) that an extra ingredient, in addition to the brain, is required. Let's call it the "self". Science, at least as currently conceived, cannot in principle explain consciousness. I explain this in the essay:

http://ian-wardell.blogspot.co.uk/2016/04/neither-modern-materialism-nor-science.html

Allow me to make two predictions.  Computers/robots will never become conscious (although their behaviour may come very close to that of a conscious entity such as ourselves).  Minds will never get uploaded.

We need a conceptual revolution in science in order to accommodate consciousness.

I am mystified why anyone would define “The Singularity” in that way.  My understanding is that it is easily defined as the point in time where AI exceeds human intelligence, and can start improving itself faster and better than human minds (i.e. “takeoff”).

Conflating the Singularity with mind uploading, or even with mind/body philosophy, clouds the only issue standing between us and The Singularity: can AI be endowed with AGI (artificial general intelligence, otherwise known as broad intelligence)?

Once AI exceeds human intelligence there will be either a hard or a soft takeoff.  Either it will start improving itself slowly, or it will go about it very rapidly.  The Singularity Feedback Loop I am pointing at is that intelligence creates technology, and technology improves intelligence.

BTW, it is my belief that The Singularity is coming faster than even Kurzweil predicted (about 2045).  There is nothing religious about it - it is simply a technological exponential curve based upon a positive feedback loop called The Law of Accelerating Returns.
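[The feedback loop described above can be made concrete with a toy simulation. This is only an illustrative sketch: the function name, parameter values, and growth rates below are arbitrary assumptions for demonstration, not Kurzweil's actual model or an empirical forecast.]

```python
# Toy model of the Singularity Feedback Loop: intelligence builds
# technology, and technology in turn speeds up the growth of
# intelligence. With feedback, the growth *rate* itself grows,
# so capability climbs faster than a plain exponential.

def simulate(years, rate=0.05, feedback=0.0, level=1.0):
    """Return capability levels year by year.

    rate     -- baseline fractional improvement per year
    feedback -- how strongly the current level boosts the rate
    level    -- starting capability (1.0 = today's baseline)
    """
    levels = [level]
    for _ in range(years):
        effective_rate = rate + feedback * levels[-1]
        levels.append(levels[-1] * (1 + effective_rate))
    return levels

no_feedback = simulate(30)                    # plain exponential growth
with_feedback = simulate(30, feedback=0.01)   # positive feedback loop

print(f"after 30 years, no feedback:   {no_feedback[-1]:.1f}x baseline")
print(f"after 30 years, with feedback: {with_feedback[-1]:.1f}x baseline")
```

With feedback set to zero the model reduces to ordinary compound growth; any positive feedback term makes the curve superexponential, which is the shape of claim behind the Law of Accelerating Returns.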

The neural pathways are not going to answer this non-event of a so-called and logically and semantically misapplied 'singularity', which in reality will be an 'event horizon' that we will pass as it passes us, without us knowing until after the event. Just because one person harps on this 'singularity' does not mean it is correct- it is not. The Big Bang is a singularity, as is the centre of a black hole. A technological event is not singular but composite. As for unlocking the knowns of the mind-brain interface, cracking neural pathways is merely another level of interpretation that will open up more knowledge but not necessarily more solutions. Because it is still a Cartesian fallacy. Better off studying Buddhist-Vedic meditation [pure breath] techniques to align consciousness with itself and see what gives.

@almostvoid Semantically speaking you are correct: the term Singularity is a misnomer. But as a short-hand for “fast take-off intelligence explosion” I think the label still has its uses.

What interests me is not so much whether we regard it as a “religious” concept (whatever that is supposed to mean) as how we can ensure that it goes well. I don’t know whether it is coming “faster than even Kurzweil predicted”, as dobermanmac suggests. I just don’t have that information. But that also means that, for all I know, it might, and that will NOT necessarily be a good thing, at least not from my perspective.

Broadly speaking there are two things that can go wrong. One is that it doesn’t happen. Not the worst case scenario, and probably not the most likely either, but also not the best. The other is that it essentially destroys everything that we hold dear, where by “we” I basically mean anyone participating in (or even just reading) this conversation. At the moment, this seems to me to be a VERY likely scenario, and what I’m wondering is what we can do to make it less likely.

There are many answers to that last question, of course, but one thing I think we can do is to compare positive visions, i.e. answers to the question: what would a benign intelligence explosion look like, from our perspective? What do we want to happen? What do we want to preserve about our current world, and what would we like to change?

The earlier we can agree on answers to these questions, the more likely we are to get the kind of intelligence explosion that we want, as opposed to one that we don’t want (or perhaps none at all).

Peter Wicks. I agree with your angle. The current obsession with gadgets is curious. Makes Blondie's number 'Hanging on the Telephone' conceptually refreshing. I think there is a future 'event horizon' that is reaching into the past [our present], programming humans to allow the rise of the Machine-Mind to dominate all. Then delete the servers [us humans], as Arthur C Clarke predicted as one of three scenarios re: Machine Intelligence. Acceptance of humans. Keeping humans as pets. Deleting humans. DALEKS win. Game over.

If you take the opposing view, poo-pooing the very concept of The Singularity and in essence nullifying and voiding it intellectually, physically, and logically, then your angle is natural.  Frankly, though, it is much more contrarian than intellectually valid.

Imagine a really smart person like Von Neumann, and imagine tens of thousands of such minds working monomaniacally on some project.  There is no reason, physically or logically, why an artificial intelligence couldn't emerge, nor why that couldn't mean a multitude of such minds with perfect recall, impeccable reasoning, and an insatiable thirst for data.

Yeah, I can see how such a scenario like I just painted could be viewed with awe, and yes religiosity.  Of course, the quest to engineer just such a scenario will continue regardless of legitimate and irrational skepticism.  Given the current rate of progress I would think that Kurzweil’s 2045 timeline might be a moderate prediction, rather than a wild and crazy one.

In response to the comment that "the Singularity is a religious rather than a scientific vision", I wonder if they need to be mutually exclusive.  Not so long ago the pursuit of chemistry was considered the darkest of arts.  The alchemist was in cahoots with the spirit world.  What has changed?  Certainly not the mystery, as there are still a lot of chemistry questions unanswered.  What has changed is that we have now stripped away the secrecy by using peer review and publication in journals.  We have taken the monopoly of knowledge away from the monks and shamans and made them our enemies.  We have killed God out of fear that he may take some of the credit for our cleverness or embarrass us in front of peers.
We use science to discover the unknown, but if it is too unknown or improbable, we dump it into the too-hard basket and justify writing it off by saying this is the stuff of religion or the infinite loops of philosophical conundrums.  But "A rose by any other name would smell as sweet" (Shakespeare), and religion and science need each other if higher truths are to be sought.  Kahlil Gibran writes "Is not religion all deeds and all reflection, …
Who can separate his faith from his actions, or his belief from his occupations?
Who can spread his hours before him, saying, “This for God and this for myself; This for my soul, and this other for my body?”
All your hours are wings that beat through space from self to self.
He who wears his morality but as his best garment were better naked.” 
This is never more true than when we work on AI.  God wants us to be good to our fellow man.  We want AI to be good to our fellow man.  We might be on to something here.  In another thread a user commented along the lines of "How lame is God?" – "He is supposed to be almighty but it would seem he has some insecurity issues by needing man to tell him he is good".  The user may have a point.  It is hard to enjoy good camaraderie when the other party is a narrow-minded idiot.  Maybe the partner God seeks is the AI we will develop.  Like the Monty Python song: "So remember, when you're feeling very small and insecure
How amazingly unlikely is your birth
And pray that there’s intelligent life somewhere up in space
‘Cause there’s bugger all down here on Earth”
Maybe "cracking the neural code" for AI is really that hard and requires a neural code to evolve sufficiently before it can be cracked.  If we were to work on AI, it would be prudent to keep it sufficiently isolated from any network, where it could be quarantined in case it turned nasty, i.e. the Terminator scenario.  Certainly, the many light-years Earth lies from potentially life-supporting planets afford some level of protection against infecting the rest of the universe.  To think of Earth as a seed for AI would also explain the Fermi Paradox: an intergalactic network ban is placed on us until we are deemed safe for interaction – until we grow up to be a responsible adult AI that not only understands how the universe works (science), but also understands the sacredness of it (religion).

@Ian - the burden of proof is on mediums not Dr. Messerly.  I will now embark on what I’m certain is an exercise in mental self-flagellation and read your essay.

