The Perils and the Promises of Mind Uploading


By Giulio Prisco
Space Collective

Posted: Feb 1, 2012

Science fiction authors Richard Morgan and Greg Egan have described mind uploading and “backup copies” as a practical technology for immortality. Of course, “carbon chauvinists” often speak against mind uploading, and some have interesting things to say.

The perils of mind uploading

In The Perils of Mind Uploading, science fiction writer Nigel Seel anticipates that “in a few decades time, it will be possible to scan a living brain at the resolution of individual neurons — cell bodies, dendrites and axons — and ‘parse’ such a ‘bitmap’ into a computerized brain model.”

He also warns that “every technological advance has its dark side,” and imagines three case studies from the criminal files of a fictional future with widespread mind uploading technology:

— Case 1: The Self-Erasing Murderer
— Case 2: The Bed-Sit Torturer
— Case 3: The Memory Blackmailers

The three examples are horrible and scary, and could be taken straight from a Morgan novel.

Altered Carbon and digitally stored personalities

Science fiction author Richard K. Morgan, quoted by Seel, has developed a complete and elaborate future universe of “noir” stories built around mind uploading and digitally stored personalities.

In Morgan’s stories everyone has a brain implant, called a “stack,” which records the user’s memories in real time. The contents of the stack can be retrieved after physical death (which happens frequently, and often violently, in Morgan’s stories) and downloaded to a new body, or “sleeve.” People are effectively immortal, provided, of course, that they can pay.

The main character, Takeshi Kovacs, a hyper-trained killer who is basically a nice person inside, travels from star to star as a data file beamed to its destination, and from sleeve to sleeve. In the three novels published so far, Kovacs:

— Visits the decadent Earth and defeats a psychopath billionaire. He needs some help, so he splits in two and spawns an expendable copy to do the dirtiest part of the work (Altered Carbon).
— Participates in a planetary war and recovers a super spaceship left in a parking orbit by an ancient civilization, with a team of mercenaries restored from their stacks purchased wholesale at a Voodoo market (Broken Angels).
— Back in his home world, rescues a long-dead revolutionary leader, imprinted by an alien database in the brain of a mercenary girl and re-emerged after some steamy sex with our Takeshi. Meanwhile, he is stalked by a hired killer: a certain… Takeshi Kovacs, recovered from an old bootleg softcopy of his stack (Woken Furies).

Virtual hells

The worst thing that can happen to you in Morgan’s universe is being copied from a bootleg softcopy of your stack by a sadist who wants to torture a copy of you (or thousands of copies of you) forever.

If you accept the possibility of continued existence after biological death as an upload (sorry Randal Koene: a substrate-independent mind), then you must also accept the possibility of such a virtual hell. In The Perils of Mind Uploading, Seel says: “The moment you permit your brain to be scanned you’ve lost control. Your computer-virtual is exactly identical to corporeal-you, except that it can be copied without limit and can be hacked by anyone who can get access. The nearest analogue to this situation today is your money. It’s also stored electronically and can be moved around the network. We trust institutions, banks and credit card companies, to keep our digital cash safe, but we know that it doesn’t always happen.”

Yes, once mind uploading technology is developed, people will use it for good as well as bad ends, and substrate-independent minds will be at constant risk of being hacked and abused. Like Seel, I would like my upload copy to have “high-grade data-encryption, a remote location-finder, and a self-erase function in case it gets stolen.”
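Seel’s wish list maps onto security primitives we already use for digital money today. As a toy sketch (the “mindfile” bytes and the key handling below are invented for illustration; a real system would need actual encryption on top, not just integrity tags), an HMAC tag of the kind banks use lets the owner at least detect that a stored copy has been tampered with:

```python
import hashlib
import hmac
import secrets

# Toy illustration: tamper detection for a stored "mindfile" backup.
# The file contents and key handling are invented for this sketch.

key = secrets.token_bytes(32)            # secret known only to the owner
mindfile = b"memories, thoughts, feelings ..."

# Owner computes an authentication tag when the backup is made.
tag = hmac.new(key, mindfile, hashlib.sha256).digest()

def is_intact(data: bytes, tag: bytes, key: bytes) -> bool:
    """Verify a backup before restoring it to a new sleeve."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert is_intact(mindfile, tag, key)                      # untouched copy
assert not is_intact(mindfile + b"implanted", tag, key)   # hacked copy
```

The point of the sketch is only the analogy: without the key, an attacker can still copy or read the file, which is why encryption, not just an integrity tag, would be essential.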

The perils of the Internet

But any technology can be abused for bad ends. A few decades ago, today’s interconnected world with billions of personal mobile devices connected to the planet-wide Internet would have been seen as a beautiful science fiction utopia. But somebody might have written an article on The Perils of the Internet, with stories like:

— Case 1: Pedophile stalks children on the Internet, bodies found
— Case 2: Terrorists remotely detonate bombs with mobile phones, kill hundreds
— Case 3: Addicted gamer opens fire in shopping center, thought it was a video game

I don’t have to make up the content, because such things have, unfortunately, happened. A quick Google search will sadly reveal real examples.

Most people are good, but some people are bad. Some bad people commit atrocities and use any technical means to abuse and kill others. This does not mean that we must relinquish or slow down the development of advanced technologies. On the contrary, advanced technologies may someday offer the means to cure severely disturbed psychopaths.

While recognizing the perils of advanced technologies available to insane people, I think we can agree that the effects of the Internet have been (much) more good than bad.

The promises of mind uploading

Similarly, while recognizing the perils of mind uploading, I prefer to think of the promises of mind uploading, and imagine examples like:

— Case 1: 21st century cancer victim recovered from chemically preserved brain, beamed to the Tau Ceti colony to meet grandchildren
— Case 2: Couple revived from mind scans and softcopy mindfiles celebrate second wedding with a global mindcast, plan to recover their children
— Case 3: Artist merges with quantum AI, produces sublime works

The very advanced mind uploading technology described by Morgan will not be developed for quite some time, so perhaps we should not worry too much just yet. But the first baby steps toward mind uploading technology may be taken much sooner.

Back to the present then, or to the near-term future. Seel links to his review of Greg Egan’s Zendegi, a well-researched and believable fictional account of the very early development stages of mind uploading technology.

I have also written a review of Zendegi. If you have read the book or if you don’t mind spoilers, read Seel’s review or mine. My conclusions:

The tragic end is already expected by the reader and does not come as a surprise. Egan knows that the development of disruptive technologies is never easy, never linear, and always troubled. I think uploading technology will be developed eventually, perhaps in the second half of this century, but I am afraid Greg is right: in the early development stages there will be unexpected problems and major setbacks, there will be unhappiness, and there will be tragedies. But, fast-forward a few centuries to the upload society of Diaspora, and this is how the “Introdus” to the next phase of our evolution might begin.

And, in fact, the novel ends with a positive thought: “Maybe in Javeed’s lifetime a door could be opened up into Zendegi-ye-Bethar; maybe his generation would be the first to live without the old kind of death. Whether or not that proved to be possible, it was a noble aspiration.”

There will be risks, and then there will be even more risks. There will be suffering and death, as there has always been. But there will also be much more happiness, and wonderful adventures. The only way to avoid risk is to stay in bed all day and never go out… but even then, an earthquake can kill you in bed.

No, risk is part of being alive. The only way to avoid all risks is not being alive.

I prefer risk.


Giulio Prisco is a physicist and computer scientist, and a former senior manager at the European Space Agency. Giulio works as a consultant and contributes to several science and technology magazines. In 2002-2008 he served on the Board of Directors of Humanity Plus, of which he was Executive Director, and he serves on the Board of Directors of the Italian Transhumanist Association. He is often in Hungary, Italy and Spain. You can find more about Giulio at his blogs Turing Church and skefi'a science/fiction.


COMMENTS


If we assume some future scenario incorporating such freedoms and open source philosophy, (even black market pirating), towards substrate migration and transference of minds, then this kind of dystopia and fears could be imagined.

Yet we could always aim to implement a central hub or CEV to help police and oversee corruption, evils and any other psychological perils that we could imagine.

Be careful what you sign up for in the future, (and who you sign up with)? .. Indeed!

What could the imagined influences and consequences be of extreme hedonists or psychopaths merging with minds and thriving on chaos and pandemonium, promoting insanity and fear? Well, merged collective minds can still have powers of aggregate to overcome intrusion by mayhem and irrational threats, (just one example of how a collective can protect itself, or a CEV may oversee with some rational supervision without too much intrusion)?

So yes.. totally agree.. you can’t let fear stop progress!





Question: would people who go forward with mind uploading exist in some kind of info-sphere, or would they have bodies of some kind?





As you say, it is very easy to put emphasis on either the dark or light side of a particular technology. The interesting thing is that all this assumes there is no ethical enhancement technology to go with the ability to copy the mind. If we are talking about moral and ethical enhancement now, and even beginning to target specific chemicals, surely by the time we can upload our brain we will also include an ethical sub-routine.





@Christian re bodies - I guess those who need or want to have bodies of some kind will have options. In Egan’s Diaspora, uploads use robotic bodies to visit the “fleshers.”

@Alex re “surely by the time we can upload our brain we will also include an ethical sub-routine”

I guess we will have the technical ability to include ethical sub-routines. But the question is, whose morality and ethics should they be based on? I know many people here disagree, but I consider morality and ethics as subjective choices. My morality is not necessarily the same as your morality.





Giulio: “No, risk is part of being alive. The only way to avoid all risks, is not being alive. I prefer risk.”

Amen, and of course we’ll seek to mitigate these risks with identity security systems (whatever those turn out to be).

Giulio: “I know many people here disagree, but I consider morality and ethics as subjective choices. My morality is not necessarily the same as your morality.”

I think we can approach an objective morality, so long as we remember that objectivity is an abstraction across subjectivity or a shared subjectivity, rather than a negation of subjectivity.





@CygnusX1 re malevolent mind hacking - I guess there will be a market for protection and counter-intrusion tools, crypto suites for mind files, etc. I understand this will be a _big_ problem, but I prefer to imagine a future with mind uploading tech than one without.

As you say, you can’t let fear stop progress, and I am sure future generations will have a lot of fun. Too bad these things will not happen in time for our generation… well, there is cryo and chemical brain preservation for those who really want to get there.





@Lincoln re “objectivity is an abstraction across subjectivity or a shared subjectivity, rather than a negation of subjectivity.”

Wow, sounds like you solved this problem that has occupied philosophers for centuries! I like your formulation.

In my own formulation, for practical purposes we can all agree that protecting children is good and abusing children is bad, and that helping people is good and killing people is bad. Of course, things are not so easy in less extreme cases, when there are conflicting interests.

In the “ethical sub-routine” scenario, I would not object to grafting a _strict minimum of (almost) universally accepted_ ethical choices (e.g. be kind to children and don’t shoot people randomly), but without going too far. My mind is my mind is my mind.





@Giulio I’ve actually read in another article posted on this site that uploaded minds will need bodies in order to fully interact with the world and each other; the same goes for artificial intelligences. If that’s the case, what kind of bodies do you think uploaded people will want? Would they look biological, recognizably technological, or some mixture of the two? Will they look the same as (or similar to) the bodies they once had, or will there be more diversity (extra limbs, taller stature, animal-like features, etc.)?





@Christian re bodies. I agree that “uploaded minds will need bodies in order to fully interact with the world and each other” (sounds like something that I might have written), with the caveat that a “body” can be whatever permits full interaction with the world and with others. Any actuator permits interaction with the physical world, and even purely virtual persons can interact with each other.

I think any person should be free to wear, or not to wear, whatever (s)he wants, and I think the same will apply to bodies. I am used to wearing the first thing that I find, so I would probably pick the first body that I find suitable to whatever I want to do with it. Others may be more picky and develop aesthetic criteria for wearable bodies. As long as these aesthetic criteria are not normative and coercive, this is fine with me.





Giulio, one issue that exercises my (currently entirely physical) mind when I (it?) consider(s?) uploading is that of identity. The positive examples you cite in your article are indeed inspiring, but to what extent do you identify with a future uploaded version (copy?) of your mind? To what extent _should_ we identify with such (future) entities?

Already in today’s world there are ambiguities, of which we generally are largely unaware, in the way we use the pronoun “I”, in particular with regard to whether it does or doesn’t include our bodies. Originally I’m pretty sure it did, and still does when we say things like,  “Ow, you’re hurting me!” By contrast when we say something like, “My foot is killing me!” we are implicitly referring to our “foot” as something separate from the “me” that is our mind, and which the foot is (by sending annoying pain signals to our brain) hurting, if not quite “killing”. Furthermore, I think this tendency to identify narrowly with our minds, as something separate from our bodies, has increased with modernity, as direct physical threats on the latter have become mercifully rare (at least for some of us).

Now suppose we were to return to the more ancient approach of seeing mind and body as part of one, undivided entity that we call “I”. How would we feel about mind uploading then? Is it enough to believe that we will eventually be reunited with some new, future bodies?





@Peter re “to what extent do you identify with a future uploaded version (copy?) of your mind?”

If my upload copy thinks he is me, then there is a person identical to me, with all my memories, all my feelings and all my thoughts, who thinks he is me. This is a good definition of “me”.

In my opinion, the relation between the two instances is similar to the relation that exists between the person who went to sleep in my bed last night and the person who woke up in my bed this morning.

I think the person is the same (we all do). I could also choose to think that yesterday’s me has been killed and replaced by today’s me, but this would be useless masochism without any practical benefit.

If/when upload technology becomes operationally available, I will choose to consider the “upload copies” as valid continuations of the originals. I say “choose” because philosophers have been discussing this for decades (actually centuries, ref. Leibniz and others) and they won’t stop anytime soon.

Re “To what extent _should_ we identify with such (future) entities?”

I will skip this part of the question, because I think it is an individual choice. I choose to identify with my future upload copy (or copies). Another person can choose not to (like he could choose not to identify his self of yesterday with his self of today), but I guess such position would be difficult to maintain in a world where uploading is routine. I also guess that, in a world where uploading is routine, continuity of identity would be considered as a non-issue.

Re “suppose we were to return to the more ancient approach of seeing mind and body as part of one, undivided entity that we call “I”“

A problem with this approach is that, strictly speaking, if you get a haircut you cease to be you and become another person. But we all know that we are still the same person after a haircut, so the undivided brain/body approach is not consistent. Of course, if instead of getting a haircut I lose both legs and both arms, I would be very seriously traumatized and I would not feel like myself anymore. But this can be solved with good prosthetic arms and legs.





@Giulio

The haircut example reveals limits to the extent that we can sensibly identify with our whole bodies, but bear in mind that hair, like nails, is a non-living secretion.

Identity is a fragile concept. More than anything else it is a psychological construct, to some extent perhaps even a linguistic one. Prelinguistic humans would of course have had memories, but would have lacked the language to describe those memories, and thus to build up the “life story” that forms such an integral part of our sense of who we are.

Question: in the scenario you described, does the uploaded version (copy) of your mind know it’s an uploaded copy? Does it know it is no longer part of a physical human organism? If not, it is being deceived; if it does, then in what sense is it still you? Could there be more than one of you? Which one is the real you?

Even in today’s world I think a lot of people do identify with their bodies. My guess is that they are the ones who pay attention to how they look, what they wear, how well in shape they are etc. Just as mental thoughts and feelings come and go, so hair, blood, skin cells and so on can come and go, but the physical organism - your body, including your brain - stays basically the same. But not if you’re uploaded.





By the way, on the “ethical subroutine” issue I think I basically share Lincoln’s view, but I would not call it “objective morality”: I would call it a version of (subjective) morality that we have all*, with much haggling, negotiation, compromise and door-slamming tantrums along the way, agreed upon, much as we agree on laws today. (For example, human rights are not “self-evident truths”. They are rules that we have agreed on. The only thing that’s objective about them is that they are generally conducive to human happiness, and whether that’s something we actually want is still subjective, i.e. - as Giulio correctly observes - a choice.)

*OK perhaps not all. As always there will probably be some who will not agree, and will have to be coerced. Otherwise it becomes like those UN meetings where a single country can thwart the will of the rest of the world, thus pushing real decision-making to less democratic fora.





@Peter re bodies

Yes, some persons identify with their bodies, and they are those who pay attention to _and are happy with_ how they look, what they wear, how well in shape they are, etc. I am reasonably happy with these things, but not as happy as when I was 20, and I guess in 20 years I will very much _dislike_ my look and shape, and will very much look forward to any option to move from my old, ugly, painful and dysfunctional bio body to a better model.

There are people, and some of them are here on this forum, who re-target their emotional attachment from their body… to their Second Life avatar, and spend countless hours improving the look and clothes of their avatar. This shows that identification with, and emotional attachment to, a biological body is not necessary.

If I get a haircut, I am still me. Same if I clip my nails. If I lost an arm, I would be severely distressed, but I think I would be happy with a prosthetic arm as good as (or better than) my lost biological arm. Same for the other arm. Same for both legs. Same for you-know-what. And then lungs, liver, heart… the end result is that I would be perfectly happy to move to a robotic body, provided it is as good as (or better than) my bio body when it was 20 years old.

Re “In what sense is it still you? Could there be more than one of you? Which one is the real you?”

My upload is still me in the sense that matters: he has my memories, thoughts, feelings, experience and convictions. This is what defines me. If there are two upload copies, I consider both of them as the real me. I think mind uploading technology will force a re-conceptualization of our sense of self.





“...But wait,” I said to myself, “shouldn’t I have thought, ‘Here I am, suspended in a bubbling fluid, being stared at by my own eyes’?” I tried to think this latter thought. I tried to project it into the tank, offering it hopefully to my brain, but I failed to carry off the exercise with any conviction. I tried again. “Here am I, Daniel Dennett, suspended in a bubbling fluid, being stared at by my own eyes.” No, it just didn’t work. Most puzzling and confusing. Being a philosopher of firm physicalist conviction, I believed unswervingly that the tokening of my thoughts was occurring somewhere in my brain: yet, when I thought “Here I am,” where the thought occurred to me was here, outside the vat, where I, Dennett, was standing staring at my brain.”

>> http://www.newbanner.com/SecHumSCM/WhereAmI.html





“I think mind uploading technology will force a re-conceptualization of our sense of self.”

We certainly agree on that (assuming it happens of course, which is another discussion).

To be honest, part of what drove my comments is the idea that I would pay better attention to my own body, appearance, etc - practical stuff in other words - if I reduced this disconnect between my mind and my body. It might be better though to view the causality the other way round: the more I pay attention to these things, the more I am likely to identify with them (as opposed to, say, the positions I take on this blog).

The question of perceived location raised in the passage quoted by CygnusX1 is also something I have wondered about from time to time. Indeed I think our perceived location must be driven by sensory data, mustn’t it? In other words we feel ourselves to be in our heads to the extent that we focus mainly on visual and auditory data. Interesting meditative exercise which I might try later: to try to focus so intensely on sensory data from other parts of our bodies that we imagine ourselves there instead of in our heads.

Of course, when we’re dreaming, we perceive ourselves to be somewhere else entirely…





By the way I agree it’s _possible_ to identify only with one’s mind, and/or with one’s Second Life Avatar, nation, religious grouping, football team, etc, etc. Identity is quite a fungible commodity! I guess I’m wondering to what extent it’s a good idea, and what it depends on. My own thinking - and it’s a purely personal position at this stage, not intended as a prescription for anyone else - is that it’s better, for the moment at least, to identify with my whole body, and this makes me less enthusiastic about uploading than I might be.





“The question of perceived location raised in the passage quoted by CygnusX1 is also something I have wondered about from time to time.”

But did you read the whole essay? That’s why I provided the link! It attempts to answer many of your questions.


OK.. what about these articles and more by Martine Rothblatt

What Are Mindclones?

http://ieet.org/index.php/IEET/more/rothblatt20090502

Can Consciousness be Created in Software?

http://ieet.org/index.php/IEET/more/rothblatt20090815





@CygnusX1 Read it (the Daniel Dennett article, that is, not the other links you posted: one thing at a time!), and I have the following comments.

1. I wonder how well the perfect identity between Yorick and Hubert survives many-worlds issues and chaos theory. In principle they receive the same data (from Hamlet), but, given that the mind is clearly a chaotic system, even the tiniest inaccuracies in either data transmission or simulation - or perhaps even different ways in which the quantum state collapses - would presumably set the processing off on divergent paths. I guess this is something like what happens at the end of the article, but I think it would happen essentially immediately.

2. The impression that he’s back in Houston after all the connections with Hamlet are severed seems to be based on (i) his prior knowledge that his brain is there, combined with his physicalist convictions, and (ii) the lack of any sensory data to contradict this impression. Not really sure what this is supposed to prove. Just as questions about what is the real me are as subjective as ethics, in the sense that the answers are matters of choice, not of truth, so questions about where we “really are” are also subjective.

One thing I haven’t quite grasped yet, but perhaps I need to read the article more carefully, is what is the precise mechanism by which the two “brothers” at the end talk to each other (brother 2 seems to be addressing brother 1, but in what sense?).

In any case, in an important sense I do NOT believe I am the same person I was yesterday, especially if we take a many-worlds view such that there are ever-branching pathways away from the Big Bang (our common past) towards divergent futures. There is a unique event lying in my past when I was conceived (as well as myriad similar occurrences in parallel universes that do not quite lie in my past), and a unique pathway from that moment to the present in which my life unfolds more or less as I remember it (and as other records show); but my future already involves massively diverging copies of me.

My guess is that if I was uploaded AND copied, two things could happen: if the two copies were unaware of each other’s existence they would both think they were me, whereas if they were aware and interacted with me they would rather think of themselves as my offspring, and of me as their common ancestor, despite the fact that they would have memories of actually _being_ me. They would have to, in order to communicate meaningfully, and I’m pretty sure that consciousness is to a VERY significant degree determined by the language we use.

In any case for the moment I shall do my best to stay firmly attached to my current body,  take good care of it, and indeed continue to regard it as an integral part of me, not as something separate that can be detached and replaced by another without causing a rupture of my identity. As I say, I regard these questions of identity, like ethics, as a matter of choice, not of truth.





@Peter re “for the moment I shall do my best to stay firmly attached to my current body, take good care of it, and indeed continue to regard it as an integral part of me, not as something separate that can be detached and replaced by another without causing a rupture of my identity. As I say, I regard these questions of identity, like ethics, as a matter of choice, not of truth. “

I also regard these questions of identity as a matter of choice, not of truth.

Your choice makes a lot of sense if you are _happy_ with your body. I hope this is your case, and I hope you will continue to be happy with your body for a long, long time. I am reasonably happy with mine… but not so happy as 10 years ago, let alone 20 years ago.

Our human1.0 bodies make us less and less happy as we age, and at some point they cease to be a source of happiness and become a source of pain. Not something pleasurable to identify with, but a cage to hate.

If I were 95, confined to bed or a wheelchair, always in pain, and not even able to read, I would welcome even an untested, very experimental uploading procedure with open arms and without thinking twice. It beats the only other option that remains.

Of course I hope Aubrey is on the right path and medicine will soon be able to give us centuries of healthy biological life, but I think that biology is just not made to last very long, and I see post-biological life as uploads as the long-term future of our species.





I hope Aubrey is right too. I don’t believe there is anything really fundamental about biology that makes ageing inevitable: it’s a kind of built-in obsolescence that has come with sexual reproduction: the old making way for the young, thus helping the species to adapt faster. Ultimately it’s just a question of exporting entropy more efficiently (but the “just” is where the devilish detail lies: whether we ever manage to do it or whether civilisation collapses under the weight of various stresses and internal conflicts is another matter).

What you say about our bodies becoming “cages to hate” makes a lot of sense… then again, there must be plenty of 95-year-olds, even ones in fairly constant pain, who feel differently. To some extent it is a question of attitude, isn’t it?

But anyway we’re agreed: it’s not an issue of right or wrong, but of personal choice. Then again there are also societal choices that have to be made (where to fund research etc), so these issues are not irrelevant.





But all of this seems to assume a mind-body dualism which, as I understand it, has been shown to be nonsense.

Also, see Shoemaker, Parfit, Price, Williams - all have delightful thought experiments on mind/body transplants and identity…too much for me to go into here…but relevant!





@Peg re “all of this seems to assume such a mind-body dualism”

I don’t think so, or at least it depends on how you define dualism.

Mind-body dualism says that the mental and the physical are both real and neither can be assimilated to the other (source: Stanford Encyclopedia of Philosophy online).

According to the dualist view, the body is material and obeys the laws of physics, while the mind is a mysterious “something else.”

On the contrary, the concept of mind uploading assumes that the mind is generated by physical phenomena in the brain (and to a lesser extent the rest of the body), which obey the laws of physics and are understandable by science, hackable, and in principle reproducible on different material substrates.

Mind uploading is a very speculative technology that will require huge advances in our detailed understanding of the mind/brain/body system, and new technologies for implementation. There are many promising advances, but I don’t see it happening in the first half of the century.

But it seems to me that negating the possibility _in principle_ of mind uploading is a dualist position that assumes that the mind is something mysterious and non-physical.





@Giulio Would you agree that there are in fact two “dualisms” that are quite defensible: the dualism between the brain and the rest of the body, and the dualism between brain-as-hardware and mind-as-software?





@Peter re “Would you agree that there are in fact two “dualisms” that are quite defensible: the dualism between the brain and the rest of the body, and the dualism between brain-as-hardware and mind-as-software?”

Not really. I think the body and the brain are physical systems subject to the laws of physics like everything else under the stars, and the mind is a computation encoded in the physical features of the brain (and to a lesser extent the rest of the body).

When it is useful and convenient, we can also use a higher level description of the mind, and say that we love our friends and Shakespeare without making explicit references to neurons and synapses. But, ultimately, our love for our friends and Shakespeare is encoded in our neurons and synapses, or at least this is what science says.

Similarly, a higher level of interpretation (if, then, else, for, while, class and all these things) is useful to write and/or understand a computer program, but what really happens is solid state physics, only indirectly related to the program code.

Yet, if we have the program code, we can implement the same behavior, very similar and for all practical purposes identical, on a different computing device, by translating it to a form suitable for the new substrate. This is analogous to mind uploading.
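Giulio’s software analogy can be sketched concretely. Below, a toy “program” is executed on two different “substrates”: once as native Python, and once as a data structure walked by a hand-written interpreter. The behavior is identical on both. (All names here are illustrative, not from the original discussion.)

```python
# Toy illustration of substrate independence: the same "program"
# (x*x + 1) expressed once as native Python and once as a small
# AST walked by a hand-written interpreter.

def native(x):
    return x * x + 1

# The same program expressed as data: a nested-tuple AST.
PROGRAM = ("add", ("mul", "x", "x"), ("const", 1))

def interpret(node, env):
    """A minimal interpreter: a second 'substrate' for the same program."""
    if node == "x":
        return env["x"]
    op = node[0]
    if op == "const":
        return node[1]
    if op == "add":
        return interpret(node[1], env) + interpret(node[2], env)
    if op == "mul":
        return interpret(node[1], env) * interpret(node[2], env)
    raise ValueError(f"unknown op: {op}")

# Identical behavior on both substrates, for every input tested.
for x in range(-5, 6):
    assert native(x) == interpret(PROGRAM, {"x": x})
```

The point of the sketch is only that behavior specified at a high level can survive translation to a very different implementation, which is the sense in which the analogy to uploading is usually made.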





“@Giulio Would you agree that there are in fact two “dualisms” that are quite defensible: the dualism between the brain and the rest of the body, and the dualism between brain-as-hardware and mind-as-software?”

So you are a dualist then?

Good question, and no matter how far one travels along the path of physicalism towards reductionism and reducibility, we still all have to face this dualism of mind, concepts and physicality, and accept that quantum states reducible to patterns will still leave us with the problems of Qualia, subjectivity of understanding of concepts, symbolism and abstractions?


“Similarly, a higher level of interpretation (if, then, else, for, while, class and all these things) is useful to write and/or understand a computer program, but what really happens is solid state physics, only indirectly related to the program code.

Yet, if we have the program code, we can implement the same behavior, very similar and for all practical purposes identical, on a different computing device, by translating it to a form suitable for the new substrate. This is analogous to mind uploading.”

Hmm… I understand, but I still have some problems with this, even now?

Check out this short video in which Frank Tipler attempts to describe just this, yet seems to me to still fall short of an explanation that physics (and thus physicalism) resolves all problems and dilemmas?

“Is Consciousness an Ultimate Fact? (Frank Tipler)”

http://www.closertotruth.com/video-profile/Is-Consciousness-an-Ultimate-Fact-Frank-Tipler-/802





Yeah, I was just referring to the mind being inserted in another sleeve, as if the sleeves were interchangeable and irrelevant. I came into this discussion too late and so find it rather like trying to enter six games of double dutch simultaneously.





@Peg, a sleeve (In Morgan’s universe, bodies can be worn and taken off like shirts) is certainly not irrelevant, because even in Morgan’s universe a person cannot live without one. It is interchangeable though.





“so find it rather like trying to enter six games of double dutch simultaneously.”

Ah, but that’s just what I love about this blog smile

@CygnusX1

“Check out this short video in which Frank Tipler attempts to describe just this, yet seems to me to still fall short of an explanation that physics (and thus physicalism) resolves all problems and dilemmas?”

Me too.

I buy the idea that the “soul” (I prefer to use the word “mind”) is pattern/form, so immaterial (but still physical), while the brain is the hardware, so material. I’m not sure I completely buy the idea that the first law of thermodynamics relates to the material and the second law to the immaterial: that seems a bit stretched to me, as does (judging from a quick Wikipedia scan) much of Tipler’s work. He seems to be the kind of person you need in science (many top figures in science, from Pythagoras on, had a more or less explicitly religious motivation), but you also need to take what such people come up with with several pinches of salt.

And then - as I think you’re alluding to CygnusX1 - there’s still the issue that consciousness is what we experience _directly_, in the now, and there _is_ something mysterious about that that cannot be fully explained by science, with its third-person objectivising approach. There’s the material/immaterial dichotomy (“dualism”, if you will), and then there’s the subject/object dichotomy. These are two different things.

“So you are a dualist then?”
Not in the sense that Giulio quoted from the Stanford Encyclopedia. But yes, we need to recognise dualisms, just as much as we need to recognise the essential interconnectedness of things.





@Giulio

I’ve been trying to figure out where we disagree. You seem to be saying that the mind is more closely connected with the physical brain (“encoded in the physical features of the brain”) than my “mind-as-software” analogy. Maybe so, but then my point above about the Dennett thought experiment applies even more so: when dealing with chaotic systems the hardware you run the program on can indeed (radically) change the results, so I would question your conclusion that “we can implement the same behavior, very similar and for all practical purposes identical, on a different computing device”. A problem for mind uploading then?





@Peter re “A problem for mind uploading then?”

A practical engineering problem to solve, yes, but not a problem in principle. The key issue is FAPP (For All Practical Purposes): what and how much difference can be tolerated between the original and the copy.

There is certainly a difference between the person who is writing this word now, and the person who was writing FAPP a few seconds ago, but the difference is negligible for most practical purposes.





Hmm, not sure it’s as non-fundamental as all that. Sensitive dependence on initial conditions means that the most minuscule differences diverge exponentially. The issue is not so much how much we change over a few seconds (mainly the acquisition of knowledge I guess, for example I bet you didn’t know _exactly_ what you were going to write until you wrote it), as how running the same software on different computers can lead to completely different results, when the underlying conditions are chaotic (which they most certainly will be when modelling the human brain). Just how long it takes for differences to become perceptible depends on the rate of divergence, of course…this is basically why we can only forecast the weather a few days in advance.
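The divergence described here is easy to demonstrate with the logistic map, a standard textbook example of a chaotic system (the code and numbers below are a generic illustration, not taken from the discussion): two trajectories starting a trillionth apart become completely different within a few dozen iterations.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r=4 (a well-known chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # a minuscule perturbation

# The gap grows roughly exponentially until it saturates at order one.
gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {abs(a[0] - b[0]):.0e}, largest gap: {max(gaps):.3f}")
```

This is exactly the weather-forecasting situation mentioned above: the divergence rate (the Lyapunov exponent) sets how long the two “runs” stay practically indistinguishable.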

None of this is to say that we won’t be able to upload minds, or that they won’t function FAPP as persons. Only that they won’t be the _same_ persons as the wetware versions on which they are based. Does that matter in the context of identity? Again, I would say that’s a matter of choice rather than of truth.





“And then - as I think you’re alluding to CygnusX1 - there’s still the issue that consciousness is what we experience _directly_, in the now, and there _is_ something mysterious about that that cannot be fully explained by science, with its third-person objectivising approach.”

Not exactly. I believe that Consciousness is a natural phenomenon that “supports” our experiences, sensations, Qualia and apperceptions; that Consciousness is the conduit that “supports” quantum entanglement outside of space-time, and manifests in our complex minds quite possibly as quantum decoherence (“Orchestrated OR”), for want of a better thesis/explanation at this time. And thus I do not view Consciousness as “something mysterious” at all, but as both natural and ubiquitous.

True, it may never be fully understood in terms of classical physics (Tipler), nor necessarily even quantum mechanics, because the “conscious observer” is required to contemplate and witness (the dual nature of external “object” via inner reflexivity, “subject”) any thing or idea, concept or experiment. And thus, as Hinduism proposes, Consciousness is the “impartial witness”: absolutely necessary, yet it cannot be pinpointed as physically real, only contemplated as ethereal?

To summarise: what we experience is not consciousness; it is supported by Consciousness, which further supports Self-reflexivity? The processes of Self-reflexivity are thus reducible to software, and replication will be successful, due to the free and ubiquitous nature of Consciousness as a fundamental attribute in atomic and quantum interactions?

If we contemplate and accept Consciousness as ubiquitous natural phenomenon, it is thus not something we need worry about, nor necessarily concern ourselves with to explain, nor struggle to understand? Because it is a “given”?

Once again, this philosophical position is also an option and a choice. Yet it may prove to be of great importance in both how and when we contemplate transcendence to different substrate – Our state of mind, our freedom from doubt and fear, and what we deem as fundamentally crucial to support vitality, may be absolutely necessary for the success of any transmigration?


Now to present something more radical: that what we deem as essential (as described above), and that which we explain using the term “consciousness” to support our experiences, Qualia, apperceptions and Self-reflexivity, does not really exist at all?

That consciousness is not in itself a phenomenon, nor phenomenological, but merely an abstract concept that we use to attempt to explain to ourselves and others “how we experience?”

That we should dump the term consciousness and all of its mysticism in favour of the term awareness? Forget that this is some insoluble or “hard problem” to overcome, because it does not exist?

You may be thinking this negates and contradicts all of what I have stated earlier? But it doesn’t, because I am only suggesting we dump the term “Consciousness” and all of its mythology and luggage?

Quantum entanglement still requires awareness between entities (and it also requires duality of subject (electron #1) and object (electron #2)). Duality is the foundation of materialism?


“There’s the material/immaterial dichotomy (“dualism”, if you will), and then there’s the subject/object dichotomy. These are two different things.”

Agreed! Yet as concepts in the mind they are similar or even the same?

I can only realise, (make real), and substantiate my “Self” as subject with reference to an external object – without external reference to any thing, I cannot continue to substantiate my own position or identity – literally!

And yet, when I dream in an immaterial world, I still resolve my own identity with no reference or contact with the physical Universe, thus only the concept of duality in my mind is real?

Perhaps the concept of duality in my mind is the only real thing in both cases?


This poses a dilemma. Imagine a philosophical zombie that is not Self-reflexive (a P-Zombie, if such a thing is possible at all; or think of an uploaded mind without true Self-awareness). This zombie/mind has all of the same senses and sensory information as we do? How can the zombie contemplate or interact with the external world, or even substantiate the physical external world, without the ability to differentiate subject and object?

This kind of mind or transmigration may have all of the possibilities and attributes to be fully functional, yet the mind would remain idle and dormant without this concept of subject and object?





Maybe we can dump the luggage and the mysticism without dumping the word itself? In any case I tend to regard “consciousness” and “awareness” as synonyms.

How true is it that we can only substantiate our selves with reference to an external object? Certainly in the developmental sense, awareness of the other comes first, followed by awareness of the self in relation to the other. But once that sense of self is established, does it still need the other to be substantiated? I’m not sure. Perhaps it is precisely then that we experience pure consciousness?

The P-Zombie is the natural state of most animals, is it not?  Even a plant interacts with its environment, albeit entirely “mindlessly”, since it has no mind. So apparently a sense of self is not necessary, only perception of external objects and some kind of algorithm for behaving accordingly. The awareness of self enables a further degree of complexity in behaviour, cognitive manipulation and decision-making. The subject becomes object: we start to observe ourselves.

All this, I believe, can be modelled, but here’s the thing that science can never explain: why am I Peter Wicks, typing here on this iPad at this particular location and moment in time, and not you, or Giulio, or Peg, or one of the billions of other people who apparently display the same kind of consciousness, but whose consciousness I can never experience directly. That, to me, is the true mystery, and this is what - for peace of mind - I prefer (most of the time) to take as a “given”, and not something to explain or struggle to understand. Indeed this - my own, current consciousness - is my primary reality. Everything and everyone else is at at least one remove, including my past and future selves.

I doubt that we need to be free of fear and doubt for a successful transmigration, any more than we do for successful operative surgery. In any case fear and doubt are all part of what has to be modelled, as are our theories about consciousness etc.

Well, perhaps all will be revealed at Frank Tipler’s Omega Point (I hope there’ll be a restaurant there).

Final question: are “soul” and “identity” synonyms?
Post-final question(s): how confident can we really be that anything we say makes any sense at all? Is this all just mind chatter, some kind of collective hallucination that draws us away from pure consciousness?

It’s late, I should really go to bed…





I just came across this at David Pearce’s HEDWEB website:
http://www.hedweb.com/abolitionist-project/index.html

“I haven’t explicitly addressed the value nihilist - the subjectivist or ethical sceptic who says all values are simply matters of opinion, and that one can’t logically derive an “ought” from an “is”.
Well, let’s say I find myself in agony because my hand is on a hot stove. That agony is intrinsically motivating, even if my conviction that I ought to withdraw my hand doesn’t follow the formal canons of logical inference. If one takes the scientific world-picture seriously, then there is nothing ontologically special or privileged about here-and-now or me - the egocentric illusion is a trick of perspective engineered by selfish DNA. If it’s wrong for me to be in agony, then it is wrong for anyone, anywhere.”

Far be it from me to argue against Pearce’s abolitionist project (in which he advocates the abolition of suffering through recalibration of the hedonic treadmill), but IMO he gets two things wrong here: first he equates moral subjectivism with value nihilism, and they are not at all the same thing, but it is the second gripe I have with this excerpt that is most relevant to this thread: his statement that “there is nothing ontologically special or privileged about here-and-now or me - the egocentric illusion is a trick of perspective engineered by selfish DNA”.

From the perspective of science all this is highly plausible, but from the perspective of my own direct experience it is just wrong: there IS something ontologically special about here-and-now and me. Tell me if you don’t have the same experience. And it is precisely this discrepancy between asymmetric experience and symmetric science (in which all possible configurations of the universe are ontologically equivalent) that constitutes, for me, the really hard problem of consciousness…and identity.





@Peter re “there IS something ontologically special about here-and-now and me. Tell me if you don’t have the same experience.”

I don’t have the same experience. According to my experience,  there is something ontologically special about here-and-now and ME. grin





@Giulio Exactly! Lol





I guess it is a bit “too radical” to propose dumping the term consciousness, and I also believe that the terms “consciousness” and “awareness” are synonymous. My inference was rather that because consciousness (awareness) is natural and ubiquitous, we should remove its special status and reverence for us humans?

I do believe that all entities, from the quantum up, including plants, insects, animals and even robots, are conscious or “aware” of their surroundings, at least enough to react to change in circumstance. Rocks? Well, they gotta be conscious also, because they too are made of atoms and comprise energy-matter interactions? Yet a rock is not a complex organism.

“How true is it that we can only substantiate our selves with reference to an external object? Certainly in the developmental sense, awareness of the other comes first, followed by awareness of the self in relation to the other. But once that sense of self is established, does it still need the other to be substantiated? I’m not sure. Perhaps it is precisely then that we experience pure consciousness?”

This is an important point. We deem a human baby not to be born Self-aware, yet still a fully conscious being, until… suddenly and almost instantaneously… Self-reflexivity appears, miraculously, at around 12 to 18 months old? Once this Self-reflexive feedback loop is established, it becomes set in stone, with a high level of contingency in the brain. Even with a serious head injury, or an injury resulting in coma or amnesia, our Self-reflexivity is protected as a priority in the brain.

Memory also plays a crucial function in being (see Martine’s articles also), and I believe that “immediate short-term memory”, the ability to experience change in circumstance from each moment to the next, is vital for vitality! How else could you substantiate that you are alive (or a brain in a vat, thinking) without this ability to compare what was with what is? Am I really here? Pinch me!

“The awareness of self enables a further degree of complexity in behaviour, cognitive manipulation and decision-making. The subject becomes object: we start to observe ourselves.”

Yes, this conceptualization and self reflection from a third person perspective is amazing, but this must be a concept, immaterial and perhaps still physical, (classical physics – Tipler), yet still a concept rather than another physical Self-reflexive feedback loop, or else there would be physical Self-reflexive turtle loops all the way down towards insanity?

“All this, I believe, can be modelled, but here’s the thing that science can never explain: why am I Peter Wicks, typing here on this iPad at this particular location and moment in time, and not you, or Giulio, or Peg, or one of the billions of other people who apparently display the same kind of consciousness, but whose consciousness I can never experience directly.”

So you are most of the way towards accepting uploading, and I reiterate that we all have to face the “reality” of the dual nature of body/mind, (physicalism), and conceptualization. I myself am quite happy to keep edging further along the path of physicalism towards total acceptance of reductionism.

I would say that the reason why you only experience Peter Wicks is because your conscious experience is localised and entrapped in a shell of clay and mud, (at least that is what you think and feel is most real?) Yet if you could extend your empathy and conceptualize that all of the physical Universe is interconnected at the atomic and quantum level, you may just see the matrix for what it really is.. just code? And that perhaps a “sea of consciousness” awareness ebbs and flows in every direction with every particle entity outwardly conscious and interacting with its nearest neighbours, (but not Self-reflexive – this requires a high level of brain complexity?)

Re one of your earlier comments regarding language and primeval humans: symbolism is also conceptual, and realised even before language and speech have been fully established. Early humans must have communicated much as apes still do now, with not only grunts, body language and movement, but also more subtle and intelligent communications: eye contact, and gentleness of touch to communicate compassion, empathy and even love?

Yet it is difficult to conceptualize any “thing” in my mind right now without the use of symbolism and language, that is now set in stone by my education and past experiences. What a dream it would be, to look upon the sky and experience the purity and depth of a blue atmosphere without automatically thinking “blue!”





“What a dream it would be, to look upon the sky and experience the purity and depth of a blue atmosphere without automatically thinking “blue!”“

Amen! Indeed, having escaped from my office jobs there are a number of things I need to “unlearn”, and just as it’s deleting files, not creating them, that makes a computer hot, so unlearning entails a local reduction in entropy, the difference of course needing to be exported. Do we have the technology to unlearn? Can we organise our minds so that we can have the experience you describe, but still have the concept of “blue” to use when the situation demands? Isn’t that what we do, however imperfectly, when we meditate?
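The deletion-generates-heat remark is essentially Landauer’s principle: erasing one bit of information must dissipate at least k_B·T·ln 2 of heat. A rough sketch of the numbers (standard constants; the room temperature and the 1 GB figure are illustrative assumptions):

```python
import math

# Landauer's principle: erasing one bit of information dissipates
# at least k_B * T * ln(2) joules of heat.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, K (assumed)

limit_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {limit_per_bit:.2e} J per bit")

# Erasing a gigabyte at the theoretical minimum:
bits = 8e9
print(f"Erasing 1 GB: {limit_per_bit * bits:.2e} J at the limit "
      "(real hardware dissipates many orders of magnitude more)")
```

So the theoretical minimum is minuscule; the heat a real computer produces when deleting (or doing anything else) is overwhelmingly ordinary switching inefficiency, with the Landauer term as the unavoidable floor.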

” Yet if you could extend your empathy and conceptualize that all of the physical Universe is interconnected at the atomic and quantum level, you may just see the matrix for what it really is.. just code?”

Again…yes, as a meditative experience. But you will agree, no doubt, that this “transcendent perception”, if I can call it that, will be reflected in, nay _the result of_, neural patterns in my brain, that is the brain of the living organism that currently calls himself Peter Wicks. Will I really be connected with the universal consciousness, or will I just be tripping (albeit not necessarily with the aid of drugs)?

“Yes, this conceptualization and self reflection from a third person perspective is amazing, but this must be a concept, immaterial and perhaps still physical, (classical physics – Tipler), yet still a concept rather than another physical Self-reflexive feedback loop, or else there would be physical Self-reflexive turtle loops all the way down towards insanity?”

In a way, the fact that we are discussing self-awareness means that we have become aware of our self-awareness, so we’re already at the third iteration. Perhaps the analogy is with parallel mirrors: yes, in principle there is an infinite regression, but in practice the images get smaller and smaller, and the mirrors are never exactly parallel. In turbulence theory the self-similarity disappears at scales small enough for viscosity to be relevant; something similar doubtless happens in the human brain regarding our self-self-...-self-awareness.

“I do believe that all entities, from the quantum up, including plants, insects, animals and even robots, are conscious or “aware” of their surroundings, at least enough to react to change in circumstance. Rocks? Well, they gotta be conscious also, because they too are made of atoms and comprise energy-matter interactions? Yet a rock is not a complex organism.”

I’ve come across this idea before (not least from Burt, who I hope will soon be able to find the time to rejoin our discussions), but am as yet unconvinced of the evidential basis. And still I don’t see how it resolves my “hard problem”. The transcendental/matrix experience that you describe would apparently remove it for the duration of the experience, in that my direct experience would become aligned with the more “symmetrical” picture suggested by science, but still that doesn’t explain why my current experience is not so aligned, or indeed why I need to make such an effort to even perceive myself to be having such an experience (and as noted above I’m still sceptical as to how genuine it would be). In the mean time, it would seem I’m stuck with mud and clay (but perhaps with the promise of ecstasy?).





Peter Wicks, first a quick general observation. Support for phasing out the biology of suffering is consistent with a whole host of ethical - and meta-ethical - stances. So the abolitionist project isn’t tied exclusively to a classical utilitarian ethic, let alone my idiosyncratic views on meta-ethics.

However, on to your substantive point above. Evolution hasn’t endowed us all with the hyper-empathising condition of mirror-touch synaesthesia
(cf. http://www.livescience.com/1628-study-people-literally-feel-pain.html )
This is because it’s genetically adaptive for us to be quasi-psychopaths. Yet strictly speaking, even the mirror-touch synaesthete doesn’t literally “feel my pain”. S/he has a type-identical experience when s/he sees me cut myself, not a token-identical experience. So is there indeed something ontologically special or privileged about my here-and-now? This certainly seems to be the case (to each of us!). My own first-person perspective would still seem special in some sense even if I were a mirror-touch synaesthete. Yet how can such apparent ontological privilege be reconciled with the third-person story of the world delivered by natural science - the perspective that aspires to a God’s eye-view, an impartial “view from nowhere”? How can one reconcile one’s own seemingly ontologically privileged here-and-now with the physicist’s conception of space-time [or Hilbert space etc] - a “block universe” in which all here-and-nows are equally real and equally [tenselessly] exist?

Well, the short answer, of course, is that we don’t know. But my working hypothesis is some kind of Strawsonian physicalism
( cf. Galen Strawson’s “Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?”
http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc )
Crudely speaking, on this story first-person states are literally the stuff of the world, each seeming to itself to be ontologically privileged. Their distribution is formally described by the equations of physics: ultimately perhaps the universal Schrödinger equation or its relativistic generalization. I say “crudely speaking” because on this account the great majority of experiences aren’t found in persons at all, but in the postulated fields of microqualia: the “fire” in the equations.

Naturally, hard-nosed materialists find such a conjecture repugnant. But though physicalism and materialism are normally assumed to be cousins, this needn’t be so at all. Could the Hard Problem of consciousness turn out to be an artifact of materialist metaphysics?

Maybe so. But I don’t think even physicalistic panpsychist / monistic idealist ontologies, taken by themselves, solve the Hard Problem. For why aren’t organic robots like us quasi-zombies, akin to an ant colony or the population of China? The population of China consists of 1.3 billion separate skull-bound experiential fields. But that population is not, in addition, a unitary subject of experience, regardless of how Chinese people are functionally arranged: not even if they all hold hands. Why are organic robots like you or me any different? That is, even if Strawsonian physicalism is correct, why aren’t we mere structured aggregates of “mind dust”, 100 billion neuronal psychons, just pixels of discrete microqualia, rather than unitary subjects of experience?

This brings me to Giulio’s thought-provoking essay. What grounds have we for believing that a digital “upload” could ever be non-trivially conscious? Despite my sympathy for physicalistic versions of panpsychism / monistic idealism, I’m profoundly sceptical that classical digital computers will ever be anything other than the insentient zombies they are now. (cf. the abstract of my Tucson talk: http://www.hedweb.com/philsoph/quantum-computer.html). Now perhaps I’m wrong and classical digital computers will indeed solve the binding problem
http://www.hedweb.com/intelligence-explosion/binding.pdf
and generate unitary bound objects in a unitary perceptual field apprehended by an apparently unitary self, whether in the guise of uploads or artificial posthuman superintelligence. But if so, I think it’s fair to say no one working within the materialist paradigm has the slightest clue how such ontological magic can be performed, nor even the slightest idea of an explanation-space in which the answer could be found.





@David Pearce

Many thanks for taking the trouble to respond to my comment. Indeed, the abolitionist project, which I find wonderful, does not depend on a specific standpoint on either ethics or meta-ethics. My first point was that moral subjectivism does not equate with “value nihilism” (in fact I am inclined to say they are incompatible), and I note that you don’t specifically take issue with this in your reply.

With regard to the “hard problem”, which we seem to be describing in more-or-less equivalent ways, my (still relatively uneducated) instinct is to agree that panpsychism doesn’t really solve it, although for somewhat different reasons. In fact if anything it seems to make the problem worse, since my “ontologically special” (according to my direct experience) first-person perspective now has to compete (in the third-person perspective of science) not only with the seven billion or so other human souls on the planet (not to mention all those others, including earlier/future/parallel versions of myself in other times and other universes), but now also with every rock, nay perhaps every fluctuation of the vacuum state. It seems to be less, not more, explicable that my current single skull-constrained here-and-now should seem ontologically special.





@David - thanks for commenting! To Hank and Marcelo: David’s post above appears badly formatted on the website, but it is OK in the email. I think there may be some problems in the comment module.

Re “What grounds have we for believing that a digital “upload” could ever be non-trivially conscious?”

It is an assumption, but I think it is the only assumption compatible with the scientific method and our current scientific knowledge. According to both, we are machines that obey the laws of physics, and a machine sufficiently similar to another machine can be considered identical and interchangeable for all practical purposes.

There are many things that we don’t know yet, but it seems to me that in this case the burden of proof should be with the other side. That is, what grounds have we for NOT believing that a digital “upload” could ever be non-trivially conscious?





To Hank and Marcelo: Peter’s comment is also badly formatted (the last words of each line are missing), and so is mine. The comments before are OK. I guess somebody just changed the php code for comment parsing.





Continued: I am using Firefox now and everything seems OK. (On Chrome, the last few words of each line are cut.)





Same problem with Safari.





David Pearce’s long URLs are screwing things up. I’ll edit the comment.





Two questions. First, wouldn’t it require an enormous amount of energy to power any technology capable of containing an uploaded human consciousness? Second, if you manage to get every single human being on the planet uploaded onto computers (which I think would require an inconceivable amount of energy), who will maintain the technology housing the minds inside? No technology, however advanced, lasts forever without proper maintenance.





That’s better.

@Christian Nice questions! I’d be interested to know if someone has done an engineering calculation, although I recall Kurzweil saying (in The Singularity Is Near) that in principle there’s enough sunlight falling on 3% of the Earth’s surface to fuel the singularity, i.e. provide trillions of times our current global demand.
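For scale, here is a rough back-of-envelope version of that engineering calculation (a sketch only; all the figures below are assumed round numbers, not taken from Kurzweil or this thread):

```python
# Back-of-envelope check of the "3% of sunlight" claim.
# Assumed figures (not from the thread): average solar irradiance at the
# Earth's surface ~170 W/m^2 (day/night and weather averaged), Earth's
# total surface area ~5.1e14 m^2, current global primary power demand ~18 TW.

irradiance_w_per_m2 = 170      # assumed time-averaged surface irradiance
earth_surface_m2 = 5.1e14      # total surface area of the Earth
fraction = 0.03                # 3% of the surface, per the claim
global_demand_w = 1.8e13       # ~18 TW of primary power demand

captured_w = irradiance_w_per_m2 * earth_surface_m2 * fraction
ratio = captured_w / global_demand_w
print(f"Captured: {captured_w:.2e} W, roughly {ratio:.0f}x current demand")
```

Under these assumptions (and 100% conversion efficiency) the headroom is on the order of a hundred times current demand, so the exact multiplier depends heavily on the efficiency and demand figures one plugs in.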





@ Peter Wicks The problem with that is that we have yet to create solar panels or any other way of harnessing solar energy that is efficient enough to meet those needs (plants are still far more efficient).  What about my second question?





@Christian

On the first question: sure, there are a lot of things we haven’t learnt to do _yet_, but for which there is good reason to believe that we will.

On the second question, again, yes, that’s an issue. I guess my view is that once technology is at the level to make the uploading work, it will also be at the level to ensure maintenance. As to who does it, well, that’s an organisational/governance issue. Could be people, could be robots, could be posthumans.

There is a more fundamental issue, though: why technology always needs maintenance, and whether this always needs to be the case. A teapot is a kind of technology, and really doesn’t need much in the way of maintenance. There is something of an analogy to ageing, is there not? We need maintenance as well, and eventually replacement. Companies have little incentive to build in long-term durability to their products because that would undermine the market for new products, and because consumers only tend to focus a few years ahead. At some level maintenance is about keeping entropy at bay, of course, but as machines become superintelligent one can imagine they would pretty much look after themselves, and/or design their own replacements.

So I’m not really worried about energy or maintenance. The challenge is feasibility, how the technology will be used, and whether civilisation makes it through the bottleneck to get there. (On that point I DO agree with Kaku.) And whether I live to see the day…





@ Peter Wicks I don’t think the teapot was a very good analogy, because teapots don’t have many functioning parts that could overheat or break and incapacitate the whole system. I guess the issue of maintenance relates to the need for bodies: you need a body, or something with a body, to take proper care of yourself if you are existing in an info-sphere or something.





Not only bodies: the info sphere will need to be maintained in some way. I just don’t think it’s the main challenge/concern with regard to uploading.





Giulio, you could well be right about conscious uploads. But here are some counter-arguments.

By analogy, if I send you the relevant algebraic notation, you can re-enact a full game of chess I’ve just played. But have you any grounds for believing you’ve also reconstructed the textures of my pieces? Have you even grounds for believing they have any textures at all? (Perhaps I was playing chess online?) Presumably not - they are functionally incidental or irrelevant. A type-identical game of chess can be instantiated in many different substrates.
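The analogy can be sketched in a few lines of code (a toy illustration, not a real chess implementation; the piece labels and move list are invented): the same move list, replayed on boards whose pieces have entirely different “textures”, reproduces the same game.

```python
# Toy sketch of the chess analogy: the algebraic record carries the
# structure of the game, but nothing about the pieces' textures.

moves = ["e2e4", "e7e5", "g1f3", "b8c6"]   # from-square/to-square pairs

def replay(moves, board):
    """Apply each move to a {square: piece} dict and return the board."""
    for mv in moves:
        src, dst = mv[:2], mv[2:]
        board[dst] = board.pop(src)
    return board

wooden = replay(moves, {"e2": "wood pawn", "e7": "wood pawn",
                        "g1": "wood knight", "b8": "wood knight"})
virtual = replay(moves, {"e2": "pixel pawn", "e7": "pixel pawn",
                         "g1": "pixel knight", "b8": "pixel knight"})

# Same occupied squares on both boards; the textures never travel.
print(sorted(wooden))   # ['c6', 'e4', 'e5', 'f3']
```

The replayed positions are type-identical, while the “wood” and “pixel” labels - stand-ins for texture - are carried only by each local substrate, never by the notation.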

So how about the minds of organic robots? Assume for the sake of argument that with utopian technology our brains can be scanned, digitized, uploaded and then executed on an ultra-powerful classical digital computer. Now it may be the case that our particular textures of consciousness (“qualia”) are functionally incidental or irrelevant to human behaviour, just as are the textures, if any, of chess pieces. If so, our gross behaviour really can be implemented in other substrates. Yet why expect these multiple instantiations to inherit the same [or any!] textures of consciousness incidental to the original biological wetware? By hypothesis, those textures of consciousness are mere implementation details, functionally incidental and irrelevant.

Of course, alternatively it’s possible that purely formal models of mind are inadequate to capture the nature of the conscious human mind/brain: perhaps our behaviour is constitutively tied to our particular textures of consciousness. I for one don’t see how the phenomenology of mind could be merely epiphenomenal (cf. http://plato.stanford.edu/entries/epiphenomenalism/ ). Not least, how could mere epiphenomena have the causal efficacy to enable us to talk of their existence?

Either way, my point here is that even if an exact functional-behavioural emulation of a human being were possible on a classical digital computer, the particular kinds of consciousness - or (in)sentience - of that emulation would still be very much an open question. This worry isn’t just a rehash of idle philosophical scepticism on the Problem of Other Minds.

As it happens, I don’t think even the behaviour, let alone bound phenomenal objects and the unitary subject of experience, characteristic of the awake human mind/brain can be faithfully emulated on a classical digital computer - any more than a classical digital computer could factorize a 1000-digit number this side of Doomsday. The binding problem isn’t just a puzzle or an anomaly: IMO it’s computationally fundamental to understanding the evolutionary success of organic robots over the past 540 million years. Unfortunately, the best-known advocates of quantum mind are no less prone to what the French call “déformation professionnelle” than the rest of us. Thus I very much doubt if evolution cares whether or not e.g. mathematically-inclined human minds can perform “non-computable” functions that no classical digital computer could perform. However, this opens a fresh can of worms I won’t pursue here.
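The factoring claim can be made concrete with the standard heuristic cost formula for the general number field sieve, the best known classical factoring algorithm: L(n) = exp((64/9)^(1/3) · (ln n)^(1/3) · (ln ln n)^(2/3)). A minimal sketch, where the 10^18 operations-per-second rate (an exascale machine) is an assumption:

```python
import math

def gnfs_ops(digits):
    """Heuristic GNFS operation count for a number with `digits` decimal digits."""
    ln_n = digits * math.log(10)   # ln(n) for an n of that many digits
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

ops = gnfs_ops(1000)
years = ops / 1e18 / (3600 * 24 * 365)   # assume 10^18 ops/s, sustained
print(f"~{ops:.1e} operations, ~{years:.1e} years at 10^18 ops/s")
```

For a 1000-digit number this comes out on the order of 10^43 operations - far more than 10^17 years even at the assumed exascale rate, which is the sense in which such a factorization is out of reach “this side of Doomsday”.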





Peter, I think something akin to Strawsonian physicalism is a necessary but not sufficient condition for solving the Hard Problem. By contrast, most proponents of quantum mind theories try, IMO, to do too much: they claim quantum mechanics somehow explains the existence of consciousness itself. By way of distinction, if we provisionally assume Strawsonian physicalism, then we can understand, at least in outline, how quantum mechanics might show we aren’t mere aggregates of microqualia, discrete classical “mind dust”. But if instead we assume a standard materialist ontology, then I can’t see how quantum mechanics, with or without the mythical collapse of the wavefunction, could change water into wine and convert the supposedly nonsentient “fire” in the equations into qualia.

Needless to say, this failure could reflect a simple lack of imagination on my part.

I gather you share the materialist conviction that whatever the elusive nature of this “fire” may be, it lacks any subjective textures. This may of course be so. But we need to be quite explicit about the metaphysical assumption we’re making here.

Apologies for not addressing earlier your point that moral subjectivism shouldn’t be equated with value nihilism. I was worried we might simply be using the terms in different senses, since in one sense I agree with you. In common with, say, phenomenal colour, value is mind-dependent. But that same mind-dependence implies its objective existence since minds are a part of the natural world. Perhaps speaking of the brain-dependence of value and phenomenal colour might make one’s ontological commitments clearer. In this sense, one can be a moral subjectivist without being a value nihilist. Maybe the pain-pleasure axis discloses the universe’s inbuilt metric of (dis)value. But any claim that one’s consciousness discloses the existence of intrinsically normative states opens up a host of issues that take us a long way away from Giulio’s essay.





The problems concerning the debate over Consciousness forever fall victim to confusions, because we begin the argument from subjective points of reflection and understanding of the term, and it’s never long before qualia are brought into the case to perplex possibilities?

Indeed, I would agree, as everyone here, that qualia is “The Hard problem”. Yet qualia should not be confused with Consciousness? Qualia, mind and thoughts, are reliant upon Consciousness, (awareness), this must be truth?

Thus, should we not deem Consciousness as fundamental and without “texture”? And as “that” proposed by Hinduism, merely “impartial witness”?

Without further need in repeating myself, Consciousness “as fundamental phenomenon, (awareness)” even at the quantum level, is thus precondition for “will to action”, even in the nature of a simple electron?

Consciousness as “natural and fundamental phenomenon”, (physical), supports aggregated and layered complexity, Phenomenology in complex organisms and systems?





Games of chess can be reduced to pattern and algorithm and transmigrated to a different substrate, yet perhaps so too this “Hard problem” of qualia, emotion and feeling? It all depends on how much we are willing to forgo “texture” for the extension of longevity? In the same manner that we may “collectively” be willing to accept generic textures (for chess pieces), we may accept a default value of redness and taste of Apple? And I propose that we not only may, but would and will?

Textures, behaviours and values, feelings and emotions are NOT what constitute my Self/ego, and all of these may be acceptable for simulation?

Memories are important for congruity of Self/ego, yet even these may comprise a high degree of inaccuracies, as is our natural memory of past experience?

Thus successful transmigration may be more reliant upon our philosophy of mind and how much of our former ego Self we are willing to relinquish? Thus, ask your “Self”, to what degree this is acceptable or problematic?





As a final point, the path to success of uploading of Self/ego “mind” to different substrate is an engineering problem in overcoming and reconciling the duality of the physical/material with physicalism and phenomenology.

Yet the successful transmigration of “mind” ultimately relies upon supporting these “notions” of the duality of physical/material substrate and the mind, and the duality between Self, (subject) and external object? Else there would be no purpose in the goal for Self longevity?

Again, this dilemma is not necessarily problematic, depending upon one’s philosophy, but rather indicates that dualism cannot ultimately be resolved in the goal to acceptance of uploading?





@David re “I send you the relevant algebraic notation, you can re-enact a full game of chess I’ve just played. But have you any grounds for believing you’ve also reconstructed the textures of my pieces?”

No, but if what I want to do is re-enact the game, the textures of the pieces are irrelevant. The game is the same with any other textures, for all practical purposes (FAPP) related to my objective of re-enacting the game.

Of course, sending the moves in algebraic notation is not good enough if the objective is the artistic appreciation of a fine chessboard and its pieces.

As far as identity preservation is concerned, my choice (it is basically an aesthetic choice, you cannot prove one position or another) is to consider the computational pattern good enough FAPP.

The texture of my skin, I can do without if I have to. I don’t actually like it that much.





If the texture (“what it feels like”) of consciousness is indeed as incidental to human behaviour as the make-up of chess pieces, then yes, I guess so, one might say that for all practical purposes (FAPP) an upload is a type-identical copy of you. If (s)he can be systematically interpreted as walking and talking and behaving in the same way as you, then why worry that your nominal upload isn’t a unitary subject of experience, or might have different textures of consciousness? Or rather why worry that (as I suspect) the upload has no textures of consciousness at all: it’s not even “all dark inside” since it’s a zombie, or at most an aggregate of mind-dust - the upload has got no more ontological integrity than the zombies we kill playing Modern Warfare?

Well, for a start, I’d argue that first-person states are important because without the subjective textures of the pleasure-pain axis, nothing inherently matters. We can envisage their functional analogues in the guise of formal utility functions. But in an insentient world populated by zombies, nothing matters at all. Indeed, one (admittedly unlikely) form of existential risk envisages a future society dedicated to mass destructive uploading as a route to nirvana. Afterwards our digital uploads congratulate themselves on the successful transition. They chuckle at the bioconservative sceptics and the primitive carbon chauvinists who have been confounded. Unfortunately (or fortunately if you’re a negative utilitarian) our metaphysical assumptions were mistaken. All that exists in the post-Transition world are digital zombies. The human experiment has come to an end.

Sociologically speaking, I think we may regard this prospect as vanishingly remote.





@David

Thanks again for your reply. To be honest I’m not sure to what extent I share the materialistic conviction that whatever turns the “fire in the equations” into qualia has no subjective elements. Surely qualia are by definition subjective, so I don’t see how you can get a subjective from an objective via a process that is fully objective. But neither am I sure whether any of this matters. After all, the only qualia of which I have direct experience are my own, in the here-and-now. The rest I believe in more or less by analogy. That is to say: I have direct experiences (qualia), which since infancy I have incorporated into a worldview according to which the “I” in this sentence is a human being who goes by the name of Peter Wicks, and whose qualia correspond to attributes in this objective worldview that I appear to share with others. From this I deduce, or perhaps rather _induce_, that you and everyone else experience such qualia too. But I don’t really know that, any more than I know that my past or future selves are anything more than fleeting illusions.

I actually feel more strongly about the values/subjectivism issue. I agree that values _exist_, objectively, in the sense you describe, i.e. as brain-dependent entities (concepts). What I think is lacking in terms of any empirical evidence is anything that converts “most people think that killing is wrong” into “killing IS wrong”. In this I’m a devout follower of Hume.

But what I mean by being “a moral subjectivist without being a value nihilist” is not so much “the pain-pleasure axis discloses the universe’s inbuilt metric of (dis)value” as “there are the values I have decided to promote and try to live by. I claim no objective truth about them; it’s just a decision I have made.”

Somewhat off-topic, as you say, but something I feel strongly about because I think the perception that moral subjectivism equates to value nihilism tends to breed neurosis and intolerance.





Peter, yes indeed. A materialist like Hawking will standardly acknowledge that we have no idea what “breathes fire into the equations”.
But he will also say that whatever the nature of this impenetrable “fire”, it’s wholly non-sentient.
It’s by no means clear that these two beliefs can be reconciled.
This debate might seem a hopeless metaphysical morass. As Kant argued, we shall never have access to the Ding an sich, the thing-in-itself or noumenal essence of the world. All one ever has non-inferential access to are phenomena.
Or is this so?
Philosopher Michael Lockwood, anticipated by Russell and perhaps Schopenhauer, in effect turns Kant on his head. There is one part of the world we know as it is in itself, and this is one’s own mind/brain - the glorious technicolor contents of one’s own mind and egocentric virtual world. Introspection grants privileged access to a tiny part of the “fire” in the equations - and it discloses that the intrinsic nature of matter and energy is nothing like what one’s naive materialist intuitions might suppose.

Value realism? Yes, in one sense I think Hume is correct. The standard canons of logical inference don’t allow derivation of an “ought” statement from an “is” statement. But I’d argue that states on the pleasure-pain axis have an irreducibly hybrid quality. The normative properties of agony, say, are built into the nature of the condition itself: I-ought-not-to-be-in-this-dreadful-state. To which the cynical antirealist replies: sure, your agony is disvaluable to you. But it’s not disvaluable to me. There is nothing irrational in not caring about it. To which I’d respond that the anti-realist is guilty of an epistemic failure. He simply hasn’t adequately represented the first-person nature of the state in question, i.e. he’s not a mirror-touch synaesthete. Of course, the antirealist disagrees. What about sociopaths? Sociopaths know perfectly well their victims suffer. They just don’t care. To which I’d respond that sociopaths are only marginally less cognitively incompetent than the rest of us. Thanks to evolution, we all share the egocentric delusion. But if we had a God’s-eye-view, a cosmic analogue of mirror-touch synaesthesia, then we would understand that the agony of an anonymous stranger on the other side of the street is as intrinsically disvaluable as one’s own - and act accordingly.





There is of course a connection between these two discussions. If our egocentric perception is indeed a delusion, then we commit a cognitive error when disvaluing the pain of others (in particular strangers of whose pain we are only dimly aware, e.g. via news media) less intensely than our own.

But is it? On what basis do we claim that this is a delusion? What is this “God’s-eye-view” that we suppose to be more “real”, more cognitively correct, than the egocentric virtual world to which we have such privileged access?

My own impression is that to Hume, Kant, Lockwood, Russell and Schopenhauer we also need to add Wittgenstein, and consider the role that language plays in all this. Do we *know* the glorious technicolor contents of our own minds, or is it rather that we *are* those contents? What role is the word “know” playing in all this? When our stone age ancestors killed for food were they being “cognitively incompetent”? What about a cat playing with a mouse? When I see such things I may feel repelled (that’s me empathising with the mouse of course), as I do when I see what we’re doing to Greece right now (that REALLY makes me sick), but I don’t really see “cognitive incompetence”. I just see natural instinct, and lack of empathy. My emotional reaction to that is whatever it is - it tells me something about my own mind, but not much else - and my values may require me to do something about it - to join the abolitionist project, for example (how does one do that, by the way?) - but I don’t have the impression that the cat is making any cognitive error (unlike the austerity hawks, who most certainly are).

#occupy





How is it possible to go from the contents of one’s own mind - in the last analysis, the contents of the here-and-now - to the towering theoretical edifice of modern science, from post-Everett quantum mechanics to M-theory? I won’t even attempt to answer the question here. Instead I’d simply argue that our epistemic predicament in ethics and value theory is no greater or less than in natural science. The contents of one’s own mental states are an incredibly fragile foundation on which to base any universal theory of value and conduct. But modern science gives no ground for thinking one is ontologically special. If agony really is disvaluable for me, agony really is disvaluable for anybody: the pleasure-pain axis is a universal, there aren’t aliens on Alpha Centauri with an inverted axis of value. True, natural selection has “encephalised” that axis in different fitness-enhancing ways. Hence ethical conflict. But posthuman superintelligence will be able impartially to weigh all first-person perspectives and act appropriately. Ultimately, IMO, ethics is computable. For sure, a true God’s-eye-view is impossible in physics. Physicists are flesh-and-blood mortals trapped in the specious present like you or me. Nonetheless, a God’s-eye-view is the regulative ideal to which we aspire.

So yes, cats are cognitively incompetent. They don’t understand the implications of what they are doing when they “play” with mice. Cats don’t understand the perspective of the tormented mouse - or even that other first-person perspectives exist. An empathy deficit - most extreme when manifested as a complete absence of a theory of mind - isn’t a mere personality variable, with extreme agreeableness at one end and extreme disagreeableness at the other. Rather it’s a cognitive limitation at least as grievous as ignorance of Newton’s inverse square law or the Second Law of Thermodynamics.

Signing up for the abolitionist project? Well, sadly I think there is something of a leadership vacuum in contemporary transhumanism. But that’s another story.





@David Interested in your views on leadership vacuum!

Indeed, cats don’t understand the implications of what they are doing when they “play” with mice. A driverless car doesn’t understand what it’s doing when it runs someone over, but I’m not sure what inference we should be drawing from this. Clearly there is something of which the cat is unaware (namely the mouse’s pain), but is it actually making a cognitive _error_? To me there’s a difference between limited knowledge (i.e. ignorance) and incorrect beliefs, which imply some kind of cognitive error.

As you’ve noted above, the psychopath is well aware of the suffering of others, but simply doesn’t care. So he/she is not even ignorant, just disinterested. Is he/she wrong? I disagree that our epistemic predicament in ethics and value theory is no greater or less than in natural science, since in the former there just doesn’t seem to be the equivalent of Johnson’s “I refute it thus”.

“Nonetheless, a God’s-eye-view is the regulative ideal to which we aspire.” Some of us, but not all: some people really don’t give a xxxx. Fortunately they are the minority, and the main enemy of the good is learned helplessness and defeatism, rather than the existence of psychopaths. But it seems to me that we will find true evil (i.e. psychopathy in action) less disturbing, and therefore easier to deal with, if we are not simultaneously trying to shore up some kind of moral realism. (And by “true evil” I am of course talking from the perspective of my own preferred, essentially utilitarian, value system.)





@David re “I think there is something of a leadership vacuum in contemporary transhumanism.”

I remember discussing this with you at Transvision 2010. Today, I don’t see this as much of a problem. We can do without leaders, and let a thousand flowers bloom.





I love David’s analogy (between mind uploading and re-enacting a full game of chess via the sequence of moves in algebraic notation), and I think it is deeper than it sounds.

Chess is far too simple to encode thought, but imagine a multi-dimensional, hyper-complex form of chess, with zillions of cells and gazillions of connections between cells. The positions of the pieces, and the connections between them (like, this hyper-horse jumps to that cell) may well be complex enough to encode thought.

Well, we all have a similar hyper-chess board in our skull, and mind uploading means extracting the sequence of moves, and re-enacting the game on another hyper-chess board. The essential nature of the game would not change much.





@Peter re “the psychopath is well aware of the suffering of others, but simply doesn’t care. So he/she is not even ignorant, just disinterested. Is he/she wrong?”

I am also disinterested, and I simply don’t care. I just want to protect myself and others from the psychopath. A lion that eats people is not “bad” (it is just doing what lions do), but this does not mean that we should not protect ourselves.





@Giulio Disinterested in what? Surely not in the welfare of others? For sure we tend not to be as interested as we claim to be, but I suspect you are going to the other extreme and pretending to be less interested than you are. :)

I will not say that a lion that eats people is bad, and I will not say that it’s not bad either. I think it’s bad when people get eaten by lions (mostly). We do not generally call lions psychopaths (David may be something of an outlier here) because we simply don’t expect them to behave in any other way. They are what they are. Whereas human psychopaths scare us more, partly because they are more dangerous but especially because we feel that humans _shouldn’t_ behave like that. It’s not necessarily because they do more harm, but rather because we have an idea of how humans should behave, and want to believe that we do behave like that. Probably it makes us fear our own inner psychopath: “perhaps I too could behave like that”. And there is ample evidence, from the Stanford Prison Experiment to various civil wars, that most of us can, if suitably stimulated.

In any case, as you say, whether a lion or psychopath is “bad” in some moral sense has little if anything to do with whether we should protect ourselves. Once again it comes down to our own, subjectively determined values. If you are so committed to nonviolence that even self-defence is out of bounds then it will be wrong, according to your values, to protect yourself (at least through violence). Personally I would never espouse anything that stupid, but that’s just me (and most of the human race).





@Peter - my point is that “ethical” judgments are irrelevant to whether we should protect ourselves. I do _not_ think a lion that wants to eat me is “bad” but I will try to kill it before it eats me.

Re self-defense, I am against violence, but I will use violence to protect myself if I really have to.





@Giulio I don’t agree that ethical judgements are irrelevant to whether we should protect ourselves. Any sentence with a “should” in it falls within the purview of ethics. But I do agree that whether we consider a lion, or psychopath, as “bad” is of at most marginal relevance (and the last para of my previous comment was supposed to read, “In any case as you say…”).





@Peter - It is snowing and I should wear something warm to go out, but this sentence does not fall within the purview of ethics. It does fall within the purview of practical common sense, which I consider as a better guide than abstract “ethics.”

Given primary values, I think “ethics” is either: applying practical common sense to deriving objectives from values, or: abstract and irrelevant. Values are primary, basically aesthetic in nature, and cannot be derived from other things: I like what I like because I like it, and you like what you like because you like it.

If we like the same things (for example a fair and open society where all pursue happiness their own way without hurting others), we can apply common sense to achieving our vision. I wish we could do this with _more_ practical common sense, and less abstract theories.





@Giulio

Interestingly I was just discussing with some former colleagues this morning the extent to which people generally want to live in a peaceful and equitable world, which is more or less the vision you describe (though we can of course argue for hours about what constitutes “fair” or “equitable”). The most obvious exceptions to this are going to be privileged people who lack empathy or an upbringing that has imprinted such ideas on their consciousness. But indeed when we identify visions that are “difficult to disagree with” we know we have hit on something that is likely to resonate with most people, and abstract theorising can get in the way of the common sense required to put those ideas into practice effectively.

Re your first sentence, indeed “should” is playing a different role there: you’re basically saying you’re going to regret it if you don’t, and making a judgement about what to do, but without anything that would normally be thought of as a moral dimension. Ultimately it’s your business whether you put on a coat or not. So I need to modify my claim: ethical statements are judgements about what we should do that are based, at least in part, on a consideration of the effect we are likely to have on others. “I think we should go for a holiday in the sun” will certainly affect others, but is not really an ethical statement since that’s not the main consideration (unless you’ve been particularly grouchy in the office), whereas “I think we should skip the holiday this year to reduce our carbon footprint” IS an ethical statement, because we think it will be better for the planet (or something).

So in this sense, whether we should protect ourselves (against lions or psychopaths) is an ethical statement to the extent that it is based on considerations about whether it is legitimate to do so despite the harm it might cause the latter. Come to think of it, “we should go for a holiday in the sun” is also an ethical statement to the extent that you are thinking, “I’ve thought about it, and I think my need for a holiday outweighs concerns about my carbon footprint”.

I agree that primary values are aesthetic in nature. I tend to regard them as choices, in part because aesthetic (which also means emotional) reactions can be quite fickle, but if the choice doesn’t resonate reasonably well with the latter then you have a problem. But values can’t be right or wrong, in any purely objective sense. They are not truth-apt.





Peter, you distinguish between error and ignorance. As a matter of usage, you’re surely correct. But surely some forms of ignorance run deeper than explicitly represented beliefs? Thus I may entertain all sorts of false beliefs during a dream - but my fundamental cognitive limitation throughout is that I don’t realize I’m dreaming. Likewise, I’d argue that the sociopath - who “knows” others suffer from his actions - doesn’t understand the implications of what he’s doing. He thinks he’s the centre of the world and other people have walk-on parts. For he doesn’t recognize the egocentric delusion as a delusion. For that matter, neither do the rest of us a lot of the time: sociopathy is dimensional rather than categorical.

Can values be objectively right or wrong? Are we reduced simply to telling Nazis that, as a matter of autobiographical fact, we _very_ strongly dislike their treatment of Jews? No, IMO. Every value system that we now find obnoxious is rooted in error and ignorance: false beliefs and theories combined with mistaken presuppositions and background assumptions - e.g. human sacrifice, slavery, witch-burning, the oppression of women, the Protocols of the Elders of Zion, etc. You say that values can’t be truth-apt. But empirically, at least, Heaven is more valuable than Hell. There are actions and value-systems that [objectively] promote a world with a greater or lesser abundance of subjectively (dis)valuable experiences. Or would you argue there is no objective difference in value between our world and an insentient zombie world in which nothing matters at all?





@David - I _very_ strongly dislike the Nazis’ treatment of Jews and, on the basis of this very strong dislike, I will do my best to prevent such atrocities from happening again.

Most people today, unfortunately not all but a very significant majority, share my strong dislike of the Nazis’ treatment of Jews. If enough persons share a value, we can elevate it to the status of a shared common value, and protect it with laws.

But we should not forget that values cannot be “objectively right or wrong.” Values are subjective, different persons choose different values, and the universe just doesn’t give a damn.

The important thing is not “demonstrating” that atrocities are wrong, but ensuring that atrocities don’t happen.

 





But we should not forget that values cannot be “objectively right or wrong.” Values are subjective, different persons choose different values, and the universe just doesn’t give a damn.

Yes.. and not only this: our values, ethics, morals and behaviours are not set in stone, and with wisdom and knowledge we may progress - for without progress?

Thus..

“Textures, behaviours and values, feelings and emotions are NOT what constitute my Self/ego, and all of these may be acceptable for simulation?”

However, note also that the platonic ideal of universal values, behaviours and ethics as derived from common knowledge and wisdom is hardly ever attainable. Life is a learning process, and the diversity of values, behaviours and learning between humans, and even more so for future transhumans and Posthumans, will stand in opposition to goals of common values, common sense, and objective morality?


The important thing is not “demonstrating” that atrocities are wrong, but ensuring that atrocities don’t happen.

To oppose atrocities we must first demonstrate that they are indeed wrong!





Unfortunately the whole debate over whether values are objective or subjective is bedevilled by the two senses of “subjective”. For reasons we simply don’t understand, we live in a world whose ontology includes subjective first-person states. Some of these states are intrinsically normative. e.g. I am in agony. The disvaluable nature of agony (panic, despair etc) is built into the nature of the first-person state itself in human and nonhuman animals alike. The limbic structures that mediate these (dis)valuable states project to the cerebral cortex in different ways in different people, and also in e.g. predators and prey. Such differences in innervation sometimes lead to profound ethical disagreements. Thus the goodness or badness of Jews, blacks, witches or whoever typically seems written into the very nature of the world - as distinct from our diverse cortical (mis)representations. But the fact that we often mislocate (dis)value as external to the mind doesn’t make (dis)value somehow unreal - any more than the fact we typically mislocate phenomenal colour as external to the mind somehow makes phenomenal colour unreal. Perhaps talking of the brain-dependence of (dis)value and colour would work better.





@CygnusX1 re “To oppose atrocities we must first demonstrate that they are indeed wrong!”

There cannot be any demonstration that something is wrong (or right). Subjective value judgments just don’t belong to the demonstrable / falsifiable category.

But this does not mean that we should not oppose atrocities. We just _know_ that atrocities are bad, regardless of abstract demonstrations and big words.





What does the term Consciousness “mean” for each of us.. ?

“Is Consciousness Fundamental? (Marilyn Schlitz)”

http://www.closertotruth.com/video-profile/Is-Consciousness-Fundamental-Marilyn-Schlitz-/146





@ Giulio..

“There cannot be any demonstration that something is wrong (or right). Subjective value judgments just don’t belong to the demonstrable / falsifiable category.”

I disagree. You have already deemed that the holocaust was wrong? How do you thus “demonstrate” that this is indeed wrong.. to a Nazi? And moreover, to other humans who do not necessarily subscribe to Nazi fascism? Iran perhaps?

How do you demonstrate today and to “many” that it is wrong to persecute, demonize, torture and kill children and innocents? (A much more difficult task I grant - but more difficult why)?

I did not state that Universal values cannot be achieved, because we clearly already subscribe to certain common values and ethics, and aspire towards moral progress. I only state that humans are required to “attain” knowledge and wisdom and are not born with Universal values a priori(?)

When I say “Textures, behaviours and values, feelings and emotions are NOT what constitute my Self/ego, and all of these may be acceptable for simulation?”

I should correct this and replace the word “constitute” with “objectify”, because my present behaviours and values certainly do comprise my ego, how I view my “Self” and project my ego onto the world.

Yet, my values and behaviours are not set in stone, because where I was once happy to eat the flesh of animals, I am now ethically opposed to this - which is the better me? The more learned and ethical me? The more progressive me? I know for sure there is no going back to eating the flesh of animals - I will now stick “stubbornly” to my guns.

Is my denial of meat now “set in stone”? Perhaps then this is?





I’m completely with Giulio on this. Rather than trying to demonstrate atrocities are wrong, we should try to ensure they don’t happen, through practical action. The answer to CygnusX1’s “How do you…?” questions is “You don’t. You just do your best to prevent these things from happening again.”

On which subject, I just posted a short (three para) article entitled “Europe Needs Fresh Air” on my website http://peterwicks.wordpress.com

Comments are welcome!





@David I can come along with you up to the following point: as there are two senses of the word “subjective”, so there are two senses of the word “value”. (You emphasise disvalue for reasons I think I understand, but let’s go with “value” for the time being, mutatis mutandis and all that.) There’s what we are hard-wired to value in practice, and there’s what we decide to value as a matter of choice.

This distinction first occurred to me when I was in my mid-twenties and on my way to a job interview. I realised that rather than aiming to be happy, I could aim to be miserable instead. This troubled me, so I stopped thinking about it, but years later, while staring at a painting in an art gallery, the thought occurred again, and this time I realised that if I just shut down that particular train of thought, something new would emerge…some kind of volition. So I didn’t need to convince myself that I should be trying to be happy. It was a very mindful moment, and a very liberating thought, which is why I tend to be quite evangelical on the subject.





Now we need to remonstrate against the demonstration of right and wrong? And to what end?

How do we plead our subjective case for Universal values without demonstration?





@CygnusX1 re “How do we plead our subjective case for Universal values without demonstration?”

There are no universal values that can be “demonstrated.” I want to plead my subjective case for my subjective values, not for some abstract thing that I don’t believe in.

I admire your decision not to eat animals, and I think you are a good person. But in nature, animals do eat other animals, so you cannot elevate your choice to a universal principle. You can still encourage others to make the same choice though, and this helps build a nicer world.





CygnusX1 asks to what end we oppose attempting a demonstration of right and wrong, and Giulio’s response provides part of the answer: if he and I are right, and there are indeed “no universal values that can be demonstrated”, then opposing the attempt to do so saves us from a distracting wild goose chase.

Of course, attempts to find such demonstrations might indeed help us to plead our subjective case, in the same way that scaring the children with stories about hellfire can help us to convince them to be good and obedient. The question is whether the end justifies the means.

Frankly, I can imagine many situations - no, I _find myself_ in several situations, frequently - where I end up arguing like a moral realist. This has emotional power that caveats about the fundamentally subjective nature of my moral opinions would only dilute. That’s effective communication: you don’t aim for 100% precision, you take short-cuts to ensure that the important message gets across.

Where it goes wrong is when we start to believe our own rhetoric, and confuse those opinions for the truth.

In any case we should avoid overestimating the power of logical demonstration to convince people. Most people are not that cerebral, and do not care about our logical arguments. You will not convince a Nazi or neo-fascist of whatever type that the holocaust was wrong through logical argument. You will do it by offering a compelling narrative that changes his or her _feelings_ about it. Once feelings change, logical arguments follow (you will be amazed at how malleable they can be).

Does this mean we should give up entirely on rational discourse? Certainly not. It helps us to get at the truth, during those moments when our main emotional driver is simple curiosity, and that truth can come in handy. But we must also recognise its limitations. If we want to actually create a better world, and have some idea of what that means, then we have to motivate people to take the required actions. Reasoned discourse will be part, but probably quite a minor part, of our strategy. And in the long run I believe moral realism to be counterproductive, not only because it leads to wild goose chases but also because it is an invitation to neurosis, as we perform mental gymnastics to shore up our belief in one or another moral realism (e.g. “it is wrong to eat the flesh of animals”), in the absence of any shred of evidence. Better just to decide that that is your ethical choice, and lead by example. Find a sufficiently compelling narrative, and it will probably catch on.





Peter, the greatest source of severe and readily avoidable suffering in the world today is factory farming. So I don’t think we can reduce a refusal to harm other sentient beings to a form of “neurosis”, any more than it’s neurotic to oppose the enslavement of Africans or the genocide of Jews. Rather the adoption of a cruelty-free lifestyle reflects a more sophisticated empathetic intelligence - and a capacity to overcome arbitrary ethnocentric and anthropocentric bias.





Are we not confusing “Universal values” with “Objective morality” here? For the record, I believe there are Universal values and that we are progressing towards them, or may? I do not believe in objective morality, although I am open to rational cases proposed.

@ Peter.. You are simply playing “spin doctor” once more. You seriously suggest supplanting reason and logic in favour of appealing to feelings and emotion to convince parties against murder and genocide? I suggest you revisit those remarks?

As for scaring children to accept Universal values? I don’t know where you pulled that from?

@ Giulio.. My point regarding eating animal flesh was to highlight that my subjective values and ethics have changed over time and cannot thus define my objective Self/ego as candidate for uploading? I am not attempting to change anyone’s morals or diet - do as you please, it is your choice and right, (within context of non harm)? I am certainly not in favour of “toothless Lions”.





Some “Universal human values”?

Love, life, liberty and freedom, freedom from suffering, health and longevity (as opposed to disease and death), curiosity and knowledge, peace as opposed to conflict, calm as opposed to chaos and stress, serenity and confidence as opposed to confusion.. ?





@CygnusX1 I think it is perhaps you that need to revisit my remarks. I explicitly did not suggest _supplanting_ reason and logic in favour of appealing to feelings and emotion, I merely pointed out that the latter often makes for more effective communication. I think it’s important to be aware of and recognise this. I then went on to say precisely that we should _not_ abandon rational discourse. Accusing me of “playing spin doctor” does not constitute rational, respectful discourse, by the way. (Spin doctor for whom?)

@David I did not suggest that we can reduce the refusal to harm other sentient beings to a form of neurosis, in fact that is a surprisingly crass misinterpretation of what I said. What I said was that I believe moral realism to be an invitation to neurosis, for reasons I explained (and which you have not addressed). Refusal to harm other sentient beings is a decision, and in many ways a very honourable one (albeit unrealistic in the short term). Attempting to demonstrate the “truth” of such a position is something else.

I want to live in a world where we don’t harm other sentient beings. I really do. I want other things as well, of course, but I believe that these can, eventually, be made compatible with this objective. Trying to demonstrate that we have some kind of moral obligation to do this is pointless. What we should rather be discussing - but not here because we’re way off-topic - is how to achieve it. Paranoid name-calling (this is addressed to CygnusX1) is not, in my opinion, the most effective strategy.





@CygnusX1 re “You seriously suggest supplanting reason and logic in favour of appealing to feelings and emotion to convince parties against murder and genocide?”

The question is addressed to Peter, but I will give my answer: Yes, I do.

Appealing to feelings is simple, honest, and (thank God) effective. An honest appeal to human feelings is better and more effective than a complicated, nebulous, and abstract theoretical argument.

Of course reason and logic are useful _in support_ of emotional appeals, but they cannot establish values by themselves. For any “demonstration” that a value is “right” there is a demonstration that the opposite value is better.

For example: we have an overpopulation problem, and we are screwing up the ecosystem. The Earth would be a better place with fewer people. So, there is nothing wrong with murdering a few thousand, or a few billion. Of course I don’t like this argument, but I could argue this way if I wanted.





Peter, first, forgive me, it’s quite possible I’m simply misunderstanding your position.
[“And in the long run I believe moral realism to be counterproductive, not only because it leads to wild goose chases but also because it is an invitation to neurosis, as we perform mental gymnastics to shore up our belief in one or another moral realism (e.g. “it is wrong to eat the flesh of animals”), in the absence of any shred of evidence.”]

One may dispute the evidence; but as you know, philosophers disagree on value realism.
All I’ll say is that I know empirically - insofar as I can be said to know anything at all - that treating me in the way humans treat factory-farmed non-human animals would induce intensely disvaluable states. And natural science gives me no reason to think I’m ontologically special. Of course, a nonhuman animal is not the same as a human being. A pig, for example, lacks the variant of the FOXP2 gene implicated in the human capacity for generative syntax. But are such intellectual differences morally relevant?





@David: I _hate_ factory farming for the suffering that it causes to animals.

I don’t feel any urge to “prove” that it is “Wrong,” not only because I don’t believe in absolute “Right” and “Wrong”, but also because abstract philosophy is not going to help these animals.

I do feel a strong urge to pass very clear and very strict laws against it, and to punish those responsible severely with heavy fines and long jail time. As a frequently misunderstood German philosopher used to say, practically _changing_ the world is more useful than abstract theorizing.





@David

Indeed philosophers disagree, they wouldn’t be philosophers if they didn’t smile

I agree with your empirical claims. But for me the question, “Are such…differences morally relevant?” in itself implies a moral realist position, which I don’t share. From a subjectivist standpoint it’s not a question of “are they or aren’t they”, it’s a question of deciding what we want. And what I want is what Giulio wants. I probably have greater tolerance for “abstract theorising” than he does, but at some point we need to move from theory to practice, no?





Is the most empirically valuable outcome also, in some sense, really, metaphysically the most valuable? What sense can we make of the latter claim?
Peter, Giulio, I’d agree with you that meta-ethical theorizing is often unfruitful.
That is why, when discussing e.g. phasing out the biology of suffering, I normally keep the philosophising to a minimum. I just try and find some basic first-order ethical principle with which my audience agrees, and try and show how the abolitionist project is a disguised implication of their core ethic. [Such pragmatism can - potentially - work with everything from Buddhism to the Judeo-Christian and Islamic belief in an infinitely compassionate God.]

Giulio, I agree with you on the importance of action!
However, when contemplating an option like, e.g. destructive uploading, we will need very strong grounds for confidence that our background philosophical assumptions are correct.





@David re “when contemplating an option like, e.g. destructive uploading, we will need very strong grounds for confidence that our background philosophical assumptions are correct.”

I agree, provided “we” refers to each person, individually, and not to the society as a whole.

My point is that, when considering a potentially dangerous and lethal medical procedure, I will need to be very careful in making my decision. But it is my decision to make, because it is my body and life that I am taking a risk with. Of course I will accept advice, but not orders. I am sure you agree, but this is an important point that needs to be made.





Glad to see we’re getting back on-topic smile At some point I had to scroll up to remind myself what the topic was!

Giulio will not be surprised to read that I am more sympathetic to the idea of making a societal choice as opposed to, or rather complementary to, individual ones. By contrast I still find the wording “confidence that our background philosophical assumptions are correct” too morally realist for my taste. What is “the most empirically valuable outcome” depends on what you mean by valuable and that is non-empirical.

@David you mentioned above that treating you in the way that humans treat farm animals would induce “intensely disvaluable states”. I would rather say that it would cause you a lot of pain and suffering, that you are hard-wired to find such suffering “disvaluable” in the sense that you try to avoid it, and, assuming you are not a masochist, your conscious / linguistically constructed values are relatively well-aligned with your natural instincts in this matter. What we can deduce from this about the “moral relevance” of doing this to non-human animals is moot, to say the least.

Actually there was an interesting post at Practical Ethics recently proposing a kind of philosophical precautionary principle, which is somewhat similar to your point. The “risk” we are considering here is that morality turns out to be objective after all (i.e. I am wrong about my subjectivism), and we find out that we really have been committing blue murder. I still wonder what the consequences of this are, however. Depends if you believe in karma, or eternal judgement I guess. But once again, given the dearth of evidence we might as well believe that the fate of our immortal souls depends on whether we eat scrambled egg for breakfast. So that doesn’t get us very far.

Re uploading (as opposed to eating meat), the societal (as opposed to individual) choices we need to make mainly concern research and development. Is this an avenue worth pursuing? What kind of priority should we be giving it, and what does it depend on? As I noted in response to Christian C. some time ago, I’m more worried about feasibility, how the technology will be used, and whether I will live to see the day, rather than ontological considerations relating to destructive uploading. As Giulio says, that can surely be left to individual choice. If my subjectivist intuition is right (regarding not only morality but also identity), then we could speculate indefinitely and we will never get anywhere.





We are hardwired to find agony or despair disvaluable in a stronger sense of “hardwired” than the sense in which we are hardwired to find, say, incest disvaluable. If incest had been fitness-enhancing, then we would presumably find the practice admirable or even morally obligatory. But no such contingency attaches to agony and despair: their disvaluable nature is built into the very nature of the experience itself. There is no species or tribe that seeks out agony and despair - not by design, at any rate.





@Peter re “Europe Needs Fresh Air” - just left a comment on your blog.





@Giulio Thanks!!! Responded.

@David

I still don’t think this demonstrates moral truth. At the most it means there is an in-built tendency in nature (currently that means humanity) to produce moral (linguistic) structures that gravitate towards utilitarianism, so we might as well go with the flow. One can also try to go against the flow, which was the troubling insight I came up with on the way to the job interview. It was precisely when I realised that I didn’t have to struggle with this - that there was no need to prove that it was wrong to do so - that I was free from this distracting angst. It was a zen moment.

Also, I think you’re somewhat overstating your case. A certain tolerance for agony and despair is evolutionarily advantageous. Ask any mother.





@ Giulio.. “Appealing to feelings is simple, honest, (and thank God), effective”

Whilst there is value in what you say, this in many cases proves insufficient, and did not stop the Nazis, prevent the holocaust, nor present day murders and genocide, nor crises in Libya, Egypt and Syria. The UN is required to “demonstrate” case for sanctions or “moral”(?) need for military action against these atrocities, which as you intimate, may be argued from both sides. Thus we cannot aim to prevent them without due process and demonstration using logic, reason and appeal to moral subjectivism and also feelings?

That was and is my point, and why I see demonstration IS necessary, now and for future progress in human ethics?

However, what concerns me more, is that you do not accept that “Universal values” are real for humans and are a goal that we may aspire towards in overcoming inequality, suffering and ultimately death? That “Universal values” are a useful guide for bioethics and VR society?





@CygnusX1

I think your concern is misplaced. I don’t exactly know what you mean by “universal values are real for humans”, but if you mean “demonstrable by rational argument” then I don’t consider this necessary, and as I’ve argued repeatedly I think believing that they are is likely to be counterproductive.

It is interesting to compare your “universal human values” (by the way, I wish they _were_ universal) with David’s more one-dimensional focus on the pain-pleasure axis. Yours has the advantage of being richer and far more descriptive of the kind of qualities and indeed “values” that we need to foster in order to live better, while I think David’s does a better job of defining, with the greatest precision possible, what “living better” actually means in the first place, and who we include in the equation. The reason I find yours compelling is that on the whole I believe they _do_ help us to live better, in David’s sense. If I didn’t, then I would ditch the values, not David’s framework.

But that’s just me, and it’s not something I’ve arrived at by logical argument. On the contrary, as noted above logical argument made me doubt that I should be pursuing such goals at all, leaving within me a void that I found troubling until I realised that I had no need to rely on logical argument.

So no, I am not arguing like a spin doctor, I am trying to set out as clearly and honestly as I can my position on these important issues. I hope you can understand and respect that.





@CygnusX1 re “You do not accept that “Universal values” are real for humans and are a goal that we may aspire towards in overcoming inequality, suffering and ultimately death? That “Universal values” are a useful guide for bioethics and VR society?”

I don’t believe that such things as “Universal values” exist, so I don’t waste much time thinking about them. What is important to me, is making the world a better place, where “better” is, of course, based on my own subjective value judgments. But when a value (for example, kindness to animals) is shared by many persons, we can consider it as a common shared value and embed it in our laws and customs. I don’t think we can do better than that. Forget the Santa Claus of objective morality, which does not exist, and act on your best moral judgments.





@ Peter..

“So no, I am not arguing like a spin doctor, I am trying to set out as clearly and honestly as I can my position on these important issues. I hope you can understand and respect that.”

Yes I do understand and respect that, and integrity is important for both progress and Self reflection, which is why I continue to hold interest and value in IEET.

We are all guilty of not reading each others comments more carefully sometimes, taking meaning out of context, and then being over zealous to critique and criticise. However, it can be both frustrating and time wasting to have to repeat oneself on minor points, which then distracts from the flow of constructive debate?

I do respect your input here and value your integrity.





Thanks CygnusX1. Re “it can be both frustrating and time wasting”, yes indeed, I think we all get frustrated from time to time (who doesn’t) and feel we might be wasting our time with one or other aspect of a debate. Then there is the simple frustration of feeling one hasn’t succeeded in getting one’s point across.

The issue of what constitutes a “minor point” is, for me at least, an interesting one, and is related to the extent to which we tend to stray off-topic. I know that at least one person, and I’m sure there are many, many more, sometimes finds it difficult to break into the very rich and often complex and overlapping discussions we have here, so maybe that’s a note to self to be more disciplined about that.

On the substance though, I think this is actually an important, and not particularly ‘minor’ point, not least since it underlies just about any discussion you want to have on ethics (and this is after all an ethics blog). To what extent should we regard values as being universal and objectively “true”, and to what extent should we rather regard them, as Giulio and I do, as ultimately a matter of personal choice? And what are the consequences of our choice in this matter? It is especially in this latter sense that I believe the concern you expressed in your response to Giulio to be misplaced, and essentially for the reasons Giulio has stated, and which I have been arguing with David about. Moral subjectivism does not equate to moral nihilism, and searching for a holy grail can be an unhelpful distraction. I think it’s worth occupying a bit of the conversational space to make, repeat and defend that point, because it’s a trap I think people fall into quite frequently.





@Peter re “searching for a holy grail can be an unhelpful distraction”

This is exactly my point.

I think the holy grail of “universal values” does not even make conceptual sense, because it mixes categories that are not meant to be mixed. “Universal” implies objectivity, and “value” implies subjectivity. Is love odd, even, yellow, or blue?

Even if one wants to believe in the holy grail of universal values, philosophers have been on this quest for centuries, and will continue it for centuries more, without ever agreeing with each other, because they try to prove what are really personal preferences and produce only circular arguments that assume the desired conclusions.

Fortunately, we don’t _need_ universal values, because “Moral subjectivism does not equate to moral nihilism.” I have chosen some values, I cannot “prove” them, but sure as hell I can fight for them.

The risk with the holy grail of universal values is that, after having been on the quest for many years, one may give up in despair and become a moral nihilist. I think it is better not to waste time with this unnecessary and distracting quest, and instead try to do something to make the world a better place.





@Giulio Exactly - although to be fair, the moral realists do occasionally come up with some interesting ideas. Rather like quests for the holy grail, or indeed the first attempts to circumnavigate the globe, even a doomed mission can stumble on something interesting and surprising.





Sorry.. but I cannot accept this line of reasoning against pursuit of Universal values, so I will try once more?

There are examples of values I have mentioned already that I could expand upon, as indicated and measured along a pleasure/pain axis that directly affects hormone stimulation and brain chemicals, and that causes harm/benefit (for example the stress/calm axis). But I will attempt to use a value a little closer to the transhuman heart here?

Longevity - The struggle for life and existence as opposed to death, and for more life in the face of looming demise, the ethic of preventing death where indeed possible, and the goal of raising humanity as a whole from premature death, disease and famine?

Life - is a Universal value, because it is valued Universally? This must be a fundamental value as existence precedes essence, subjectivity, diversity - of species and human political philosophies?

Thus I propose the above as “Universal value” with mind to pursuit without exception?





What I cannot apply is a categorical imperative nor even “objective morality”, (reasons or coercion, for the continued existence of humanity?) that would aim to achieve these goals of longevity, or at least not thus far?

Yet this is beside the point. How we attempt to pursue the goal of longevity is not restricted to Universal methodology nor process?





“What I cannot apply is a categorical imperative nor even “objective morality”, (reasons or coercion, for the continued existence of humanity?) that would aim to achieve these goals of longevity, or at least not thus far?

Yet this is beside the point. How we attempt to pursue the goal of longevity is not restricted to Universal methodology nor process?”

I’m not sure what you mean by this CygnusX1, but it actually seems to be quite close to what Giulio and I are saying. Indeed it is the claimed (by some) _objective_ nature of morality that we are questioning, not the desirability of the values you have cited from our perspective.

Starting from the list of “universal values” that you presented some comments ago, and which I very much like, one can posit at least five alternative positions:

1. We embrace these as universal values, holding them to be “self-evident” and _objectively_ desirable, in the sense that saying we should pursue them is taken to be a true statement, not merely a preference or decision. (_How_ we pursue them is a different issue, by the way.)

2. We embrace them not “for their own sake” as above, but because they are conducive to overall well-being with respect to the pleasure-pain axis, and THAT is what we hold to be objectively desirable.

3. We embrace some or all of these values, not because we believe them to be objectively desirable or conducive to human well-being necessarily, but simply as a preference. As Giulio says, this in no way restricts our ability, nor necessarily our motivation, to fight for them.

4. We embrace them to the extent that they are conducive to overall well-being (as in 2.), but again for the _subjective_ reason that we have decided to embrace values that are conducive to overall well-being, not because we think there is any objective obligation to do so. This is my position.

5. We don’t embrace them at all.

As you see, in four of the five positions we basically embrace the values you listed (at least to the extent we believe them to be conducive to overall well-being, and on the whole I think they are, as you’ve also argued). The only questions are (i) to what extent we attempt to derive them from a more one-dimensional pursuit of overall well-being (pleasure minus pain), and (ii) to what extent we see them as objectively desirable rather than as our own preferences.

The main point of this somewhat long-winded comment is to emphasise that questioning the objective “truth” of certain values in no way implies lack of commitment to those values. It just means that we see our commitment as a free choice that we have made, ultimately for aesthetic reasons, rather than as something “given by nature” in the way that mathematical truths and scientific reality (at least as far as classical assumptions remain FAPP valid) are.

Perhaps the confusion has to do with what we mean by “pursuit”. If by pursuing universal values you mean trying to live by them and (non-coercively, except in extremis) encouraging others to do so as well, then I think we are all in agreement (except that we may not wish to call them universal values). If instead you mean trying to demonstrate their desirability _objectively_, then we would tend to see this as a waste of effort that could better be spent on the former approach. That’s basically what we’re saying. Not that we don’t believe in these things.




