


Can Consciousness be Created in Software?


By Martine Rothblatt
Mindfiles, Mindware and Mindclones

Posted: Aug 15, 2009

“Some men see things as they are and wonder why.  Others dream things that never were and ask why not?” Robert F. Kennedy

There are thousands of software engineers across the globe working day and night to create cyberconsciousness.  This is real intelligent design.  There are great financial rewards available to the people who can make game avatars respond as curiously as people do.  Even vaster wealth awaits the programming teams that create personal digital assistants with the conscientiousness, and hence consciousness, of a perfect slave.

How can we know that all of this hacking will produce consciousness?  This takes us to what are known as the “hard problem” and the “easy problem” of consciousness.  The “hard problem” is this: how does the web of molecules we call neurons give rise to subjective feelings, or qualia (the “redness” of red)?  The complementary “easy problem” is how electrons racing along neurochemistry can produce complex simulations of “concrete and mortar” (and flesh and blood) reality, or how metaphysical thoughts arise from physical matter.  Basically, both the hard and the easy problems of consciousness come down to this: how is it that brains give rise to thoughts (the ‘easy’ problem), especially about immeasurable things (the ‘hard’ problem), but other parts of bodies do not?  If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.

At least since the time of Isaac Newton and Leibniz, it has been felt that some things appreciated by the mind could be measured whereas others could not.  The measurable thoughts, such as the size of a building, or the name of a friend, were imagined to take place in the brain via some exquisite micro-mechanical processes.  Today we would draw analogies to a computer’s memory chips, processors and peripherals.  Although this is what philosopher David Chalmers calls the “easy problem” of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste and recall any word, number, scent or image.  In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are what Chalmers calls the “hard problem.”  In his view, a being could be conscious, but not human, if they were only capable of the “easy” kind of consciousness.  Such a being, called a zombie, would be robotic, without feelings, empathy or nuances.  Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see even in principle how they could ever be processed by something physical, such as neurons.  He suggests consciousness is a mystical phenomenon that can never be explained by science.  If this is the case, then one could argue that it might attach just as well to software as to neurons – or that it might not – or that it might perfuse the air we breathe and the space between the stars.  If consciousness is mystical, then anything is possible.  As will be shown below, there is no need to go there.  Perfectly mundane, empirical explanations are available to explain both the easy and the hard kinds of consciousness.  These explanations work as well for neurons as they do for software.

As indicated in the following figure, Essentialists v. Materialists, there are three basic points of view regarding the source of consciousness.  Essentialists believe in a mystical source specific to humans.  This is basically a view that God gave Man consciousness.  Materialists believe in an empirical source (pattern-association complexity) that exists in humans and can exist in non-humans.  A third point of view is that consciousness can mystically attach to anything.  While mystical explanations cannot be disproved, they are unnecessary, because there is a perfectly reasonable Materialist explanation of both the easy and hard kinds of consciousness.

[Figure: Materialism vs. Essentialism]


If human consciousness is to arise in software we must do three things:  first explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the solution in neurons is replicable in information technology.  The key to all three explanations is the relational database concept.  With the relational database an inquiry (or a sensory input for the brain) triggers a number of related responses.  Each of these responses is, in turn, a stimulus for a further number of related responses.  An output response is triggered when the strength of a stimulus, such as the number of times it was triggered, is greater than a set threshold.
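The relational-database idea described above can be sketched in a few lines of code: a stimulus activates its related entries, activations accumulate, and an output fires only when the accumulated strength crosses a threshold. The entries, weights and threshold below are illustrative assumptions, not a model taken from the article.

```python
# Toy sketch of the relational-database analogy: stimuli activate
# related responses; a response "fires" only above a set threshold.
from collections import defaultdict

# Association table: stimulus -> list of (response, strength).
# Names and strengths are invented for illustration.
ASSOCIATIONS = {
    "wavelength_700nm": [("concept_red", 0.6), ("concept_warm", 0.2)],
    "phoneme_/red/":    [("concept_red", 0.5)],
    "concept_red":      [("memory_apple", 0.7), ("memory_stop_sign", 0.4)],
}

THRESHOLD = 1.0

def respond(stimuli):
    """Spread activation one step and return responses above threshold."""
    activation = defaultdict(float)
    for s in stimuli:
        for response, strength in ASSOCIATIONS.get(s, []):
            activation[response] += strength
    return {r for r, a in activation.items() if a >= THRESHOLD}

# Seeing red light while hearing the word "red" pushes the shared
# concept over threshold; either cue alone would not.
print(respond(["wavelength_700nm", "phoneme_/red/"]))  # {'concept_red'}
```

The point of the sketch is only that threshold-gated association needs no mystical ingredient: a lookup table plus accumulation reproduces the stimulus-response chaining the paragraph describes.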

For example, there are certain neurons hard-wired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes.  So, suppose when looking at something red, we are repeatedly told “that is red.”  The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the different phonetics that make up the sounds “that is red.”  Over time, we learn that there are many shades of red, and our neurons responsible for these varying wavelengths each become associated with words and objects that reflect the different “rednesses” of red.  The redness of red is simply (1) each person’s unique set of connections between neurons hard-wired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hard-wired neurons and neural patterns that include things that are red.  If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple.  Redness is not an electrical signal in our mind per se, but it is the associations of color wavelength signals with a referent in the real world.  Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things.
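The pairing process in this paragraph resembles a Hebbian rule: "neurons" that are active together strengthen their mutual connection a little on each shared episode. The sketch below is hypothetical and illustrative; the learning rate and neuron names are assumptions.

```python
# Hebbian-style sketch: co-active "neurons" strengthen their link.
weights = {}          # (neuron_a, neuron_b) -> connection strength
LEARNING_RATE = 0.1   # illustrative increment per co-activation

def co_activate(active_neurons):
    """Strengthen every pairwise connection among co-active neurons."""
    active = sorted(active_neurons)
    for i, a in enumerate(active):
        for b in active[i + 1:]:
            key = (a, b)
            weights[key] = weights.get(key, 0.0) + LEARNING_RATE

# Repeated episodes of seeing red while hearing "that is red":
for _ in range(10):
    co_activate({"red_wavelength", "phoneme_that_is_red"})

print(round(weights[("phoneme_that_is_red", "red_wavelength")], 2))  # 1.0
```

After enough paired episodes the wavelength detector and the phrase are bound together, which is all the paragraph's "pairing" amounts to in this toy picture.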

After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections.  It is as if the sensory neurons are our alphabet.  These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just as letters can be arranged into a dictionary full of words.  The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities, and guides to behavior, just as words can be grouped into a limitless number of coherent sentences, paragraphs and chapters.  Grammar for words is like the as-yet poorly understood electro-chemical properties of the brain that enable strengthening or weakening of waves of synaptic connections that support attentiveness, mental continuity and characteristic thought patterns.  Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written with that idiosyncratic style that is unique to us.  It is a book full of chapters of life-phases, paragraphs of things we’ve done and sentences reflecting streams of thought.

Neurons save, cut, paste and recall any word, number, scent, image, sensation or feeling no differently for the so-called hard than for the so-called easy problems of consciousness.  Let’s take as our example the “hard” problem of love, what Ray Kurzweil calls the “ultimate form of intelligence.”  Robert Heinlein defines it as the feeling that another’s happiness is essential to your own. 

Neurons save the subject of someone’s love as a collection of outputs from hard-wired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics and/or textures.  These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light-wave, pheromone, sound wave or tactile sensation.  The set of outputs that describes the subject of our love is a stable thought – once so established with some units of neurochemical strength, any one of the triggering sensory neurons can harken from our mind the other triggering neurons. 

Neurons paste thoughts together with matrices of synaptic connections.  The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts (each grounded directly or, via other thoughts, indirectly, to sensory neurons).  Those other thoughts would include the many cues that lead us to love someone or something.  These may be resemblance in appearance or behavior to some previously favored person or thing, logical connection to some preferred entity, or some subtle pattern that matches extraordinarily well (including in counterpoint, syncopation or other form of complementarities) with the patterns of things we like in life.  As we spend more time with the subject of our love, we further strengthen sensory connections with additional and strengthened synaptic connections such as those connected with eroticism, mutuality, endorphins and adrenaline.

There is no neuron with our lover’s face on it.  There are instead a vast number of neurons that, as a stable set of connections, represent our lover.  The connections are stable because they are important to us.  When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections.  Many things are unimportant to us, or become so.  For these things the neurochemical linkages become weaker and finally the thought dissipates like an abandoned spider web.  Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections.  Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.
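This strengthen-or-fade dynamic can be sketched as a toy simulation: attended connections gain strength, unattended ones decay, and a vestigial trace remains rather than vanishing outright. All parameters and link names below are invented for illustration.

```python
# Toy sketch of attention-driven strengthening and decay of links.
DECAY = 0.9    # per-step multiplier for unattended connections
BOOST = 0.5    # strength added when a connection is attended
FLOOR = 0.05   # vestigial trace that never fully disappears

def step(strengths, attended):
    """One time step: boost attended links, decay the rest toward a floor."""
    updated = {}
    for link, s in strengths.items():
        if link in attended:
            updated[link] = s + BOOST
        else:
            updated[link] = max(s * DECAY, FLOOR)
    return updated

links = {"lover_face": 1.0, "old_phone_number": 1.0}
for _ in range(50):
    links = step(links, attended={"lover_face"})

# The attended thought grows; the ignored one fades to a faint trace.
print(links["lover_face"], links["old_phone_number"])
```

The floor term plays the role of the "vestigial connection" in the paragraph: the abandoned thought is not deleted, only left weak enough that a concentrated retracing could revive it.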

What the discussion above shows is that consciousness can be readily explained as a set of connections among sensory neuron outputs, and links between such connections and sequences of higher-order connections.  With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity.  The “hard problem” of consciousness is not so hard.  Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons.  The “easy problem” of consciousness is solved in the recognition of sensory neurons as empirical scaffolding upon which can be built a skyscraper’s worth of thoughts.  If it can be accepted that sensory neurons can as a group define a higher-order concept, and that such higher-order concepts can as a group define yet higher-order concepts, then the “easy problem” of consciousness is solved.  Material neurons can hold non-material thoughts because the neurons are linked members of a cognitive code.  It is the meta-material pattern of the neural connections, not the neurons themselves, that contains non-material thoughts.

Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software.  The strengths of neuronal couplings can be replicated with weighted strengths for software couplings in relational databases.  The connectivity of one neuron to up to 10,000 other neurons can be replicated by linking one software input to up to 10,000 software outputs.  The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that kept certain software groupings active.  Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile).  Putting it all together, Daniel Dennett observes:

“If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self.  More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”

At least for a Materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, which could not be achieved as well with software.  The quotation marks around ‘just’ in the quote from Dennett are the famous philosopher’s facetious smile.  He is saying with each ‘just’ that there is nothing to belittle about such a great feat of connectivity and patterning.


Martine Rothblatt serves on the IEET Board of Trustees and is author of several books on satellite communications technology, gender freedom, genomics, and xenotransplantation.


COMMENTS


Sure consciousness can be created in software… but boy oh boy is it hard hard hard Martine!  wink  I’ve spent, what am I up to now, 7 years thinking about it, sputtering nonsense for the first 4 years, only really starting to ‘home in’ on some sort of true understanding now.

Actually there are many more subtle positions other than just ‘Materialist’ versus ‘Essentialist’.  The truth is most likely a combination of Dennett’s ‘Narrative Center of Gravity’, Baars’ ‘Global Workspace’, Tononi’s ‘Information Integration’ and Hofstadter’s ‘Strange-Loop Analogies’.  Combine these four ideas and you get an explanation something along the lines of this (my own ideas):

‘Consciousness is concerned with integrating different high-level representations of goals into a single coherent high-level view-point which serves to coordinate all the separate components of the brain.  This integration is done via categorization, or equivalently, analogy formation.  It takes the form of story plots, or narratives which are coherent plans for future actions’.

But I think the rabbit hole runs much deeper than most cognitive scientists think it does.  I think the *real* definition of intelligence is the ability to assign *meaning* (symbolic representations) to raw information.  So at root humans are ‘symbol makers’.  (This is in contrast to the pseudo-scientific definition of intelligence promoted by Yudkowsky and co, who see us as mere goal-seeking machines to be ‘optimized’).

I’m a ‘non-reductive’ materialist: I agree consciousness is physical, but I think that there are different irreducible levels of explanation, and concepts at higher levels of explanation are not necessarily reducible to concepts at lower levels of explanation.
Our actions may be pre-determined at the physics levels, but all that exists at that level are mere particles, forces and fields.  What hasn’t been pre-determined is the *meaning* of all that…. and that’s what consciousness can change.





wow!  What a perfect intro to the IEET!! - And these minds!!  I’ll be back!

Thanx,
Chris





As usual, Martine’s great post makes a lot of sense. There seems to be nothing essential to neurons, in terms of creating consciousness, which could not be achieved as well with software.

Marc, I would reword your statement as: there are different irreducible levels of explanation, and concepts at higher levels of explanation are not practically or usefully reducible to concepts at lower levels of explanation.

If we live in a physical reality which is, in principle, fully explainable by fundamental physics, then life, intelligence and consciousness ARE reducible to fundamental physics (by definition). But it would be extremely hard (FAPP impossible) to compute the behavior of living and thinking organisms directly from fundamental physics, solving the Dirac field equations for zillions of interacting elements. Also, it would not be very useful, since the semi-empirical rules-of-thumb that we call biology and cognitive sciences would provide basically the same results (FAPP, and once they are better understood themselves).

Marc, you will get kudos and praise from me when you write a post without mentioning Y, implicitly or explicitly. Obsessions are not good for the liver, let alone the brain.





Giulio said:

>If we live in a physical reality which is, in principle, fully explainable by fundamental physics, then life, intelligence and consciousness ARE reducible to fundamental physics (by definition).

Not necessarily.  See Wikipedia, which lists ‘non-reductive physicalism’ as a logically coherent position:

http://en.wikipedia.org/wiki/Physicalism

“Non-reductive physicalism is the idea that while mental states are physical they are not reducible to physical properties.”

This is the crux of my long-standing disagreement with the SIAI crowd, and also why I don’t think that Bayesian Induction fully captures rationality.

Essentially, Quine critiqued reductionalism on the grounds that you can’t even talk about low-level concepts without referencing high-level ones.  See Quine ‘Two Dogmas of Empiricism’:

http://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism

From the paragraph on ‘Reductionism’ :

“The difficulty that Carnap encountered shows that reductionism is, at best, unproven and very difficult to prove. Until a reductionist can produce an acceptable proof, Quine maintains that reductionism is another ‘metaphysical article of faith’.”

The problem with reductionism is that the semantic meanings we attach to things are necessary for explanations, but these semantic meanings are only determined at a high level!

For instance, you couldn’t describe the rights and wrongs of World War II solely in terms of particles and physics, because without assigning some *meaning* to the particle motions, you couldn’t determine whether the particle motions corresponding to Hitler’s actions were good or bad.

Bayesian reasoning relies on fixed semantic meanings… the programmers have to precisely fix the initial meanings of everything at *some* level of description… but humans can use conscious reflection to *change* the meanings of everything (our brains can always *recode* how meanings map to raw bits of information) so this shows that: (a) Reductionism fails, and (b) Bayes cannot fully capture rationality.

I pity the poor rubes who fell for SIAI reductionism all this time. 

Of course I still agree with the critical points that consciousness is physical and computational.





The key terms in this discussion are not defined precisely. What do we mean by “consciousness” and “qualia”? What question are we actually trying to answer?

Martine: “Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things”

Which presumably implies that if you have never seen a red thing before, you don’t experience the red quale? If you have never had an orgasm before you don’t experience the orgasm quale? This is false in my experience.

To me, color qualia or pain qualia are immediate visceral sensations that seem independent of such high-level things as concepts.

We lack a good definition of what a quale is, and it isn’t clear that it is a natural kind. Furthermore, we lack a good definition of what we mean by consciousness. I admit freely that I can’t define the terms used here, but I think it is better to admit this than to pretend we understand the subject and then go on to pontificate about whether software can have “consciousness”.





Marc: sorry, this does not make sense to me at first glance. What does “while mental states are physical they are not reducible to physical properties” mean? Either something is physical, which _by definition_ means “reducible to physical properties”, or it is not. What you say is like saying that not all triangles have three sides, some having fewer or more than three sides.

We could, in principle, describe the _facts_ of WW2, including the thought of the people, in terms of physics. Rights and wrongs are not facts, but subjective interpretations.





The solution to the hard problem in my opinion lies in the published work of physicist Antony Valentini (now at Imperial College), the work of Vitiello and Freeman (Italy & Berkeley) and my own work. The key is the non-equilibrium generalization of quantum physics to include signal nonlocality that is forbidden in orthodox sub-quantal equilibrium text book quantum theory. Using David Bohm’s ontological interpretation of quantum reality, the entangled quantum waves are intrinsically thought-like. Signal nonlocality solves the binding problem and explains the holographic nature of qualia. Bernard Carr, physics professor at University of London, wrote a review of all this in his June 2008 paper published in Proceedings of the Psychical Research Society in London.





Wow JACK! Great to see you here!





Wow Jack, that sounds like something Alan Sokal would’ve said.





Thanks for your post.  Here are some responses:

“The key terms in this discussion are not defined precisely. What do we mean by “consciousness” and “qualia”? What question are we actually trying to answer?”

MAR:  Consciousness was precisely defined in the preceding post as a continuum of maturing abilities, when healthy, to be autonomous and empathetic, as determined by consensus of a small group of experts.  Each of the component terms was defined as well.  Of course this is a subjective definition, intentionally so.  Nevertheless, it is measurable, empirical and thus pragmatic for judging cyberconsciousness.

Martine: “Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things”

Which presumably implies that if you have never seen a red thing before, you don’t experience the red quale? If you have never had an orgasm before you don’t experience the orgasm quale? This is false in my experience.

MAR:  Sorry I wasn’t clear.  I will tighten up the language.  What I was trying to say is that *gradations* of red (“redness” or qualia) depend upon multiple experiences.  I agree with you that we are hardwired to perceive red wavelengths, and orgasms, but multiple experiences tacked onto numerous neural patterns are necessary to have a subjective (personal) experience of the phenomena, i.e., a qualia experience.  There is nothing magical about qualia.  They are just the idiosyncratic pattern each of our brains makes out of the plethora of related experiences we neurally record.  Thanks again for motivating me to tighten the language.

To me, color qualia or pain qualia are immediate visceral sensations that seem independent of such high-level things as concepts.

We lack a good definition of what a quale is, and it isn’t clear that it is a natural kind. Furthermore, we lack a good definition of what we mean by consciousness. I admit freely that I can’t define the terms used here, but I think it is better to admit this than to pretend we understand the subject and then go on to pontificate about whether software can have “consciousness”.

MAR:  Really my post does just what you say—admits that it is in the eye of the beholder for all *practical* reasons.  You can know you are innocent of a crime, for example, but a jury convicts you and you rot in jail.  It doesn’t change the truth of your innocence, but that truth is practically irrelevant.  Similarly, in my definition, consciousness is what a small group of experts “Turings” you to be (if I can make the name a verb…).  For a cyberconscious being they may be right or wrong.  I am not trying to pontificate on something theoretical.  I have reduced it to being practical, and am pontificating on the practical reduction of it. 

MAR:  Giulio—Great to see you here and thanks for sharing your keen understanding of this subject with the other posters.





There will only be one real test available for any technology that we believe may induce the subjective qualic experience: I call it the “Blind Sight Test”. Blind sight is a condition some brain-damaged people have in which they do not experience subjective visual qualia but will respond to visual stimuli such as a rapidly approaching object (e.g., a thrown ball). To test the subjective capabilities of a technology it must be integrated in situ with a real human mind and replace an existing/damaged function—let’s say we replace the occipital lobe of a blind-sighted subject for example. Does the subject continue to react to visual stimuli AND experience subjective qualia? If so, then the technology is indeed capable of inducing consciousness.





Guilio said

>Either something is physical, which _by definition_ means “reducible to physical properties”, or it is not.

Consciousness could be both composed of physical processes AND have some non-physical properties.  This sounds highly peculiar simply because science has never yet encountered a failure of reductionism, but consciousness could be the first instance of it.  At least, it’s a valid possibility. 

It would mean that when you break the brain down into its individual parts it’s all physics, but when the parts are put together new properties emerge which can’t be explained as merely a combination of the parts.

If you have a complete physics description of World War II would you really have explained it?  Not necessarily, because mere description is not explanation.  To really explain WWII you would need to refer to concepts such as nations, military strategies, political parties, economics etc., and there is no unambiguous translation of these high-level concepts into low-level physics.

Physics itself relies on our understanding of abstract mathematical categories such as ‘integration’, ‘vector’, ‘equation’ etc., and categories depend on the language or code the mind uses to define the meanings of information content.

For instance, consider a digitally modulated radio wave transmitting the voice of a professor giving a physics lecture.  It’s just a string of bits;
e.g.  00001011101111000001101011001 etc

But what does it mean?  That depends on the coding system used to assign meaning to the information:
e.g. 
0000=‘Dirac Equation’,
1011=‘Feynman’ etc

But the coding choice is not itself reducible to low-level physics, because only a mind can understand it.  So we appear trapped in a circular explanatory loop.
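To make this concrete, here is a toy sketch: the same bit string decodes to entirely different ‘meanings’ under two different codebooks (the physics codebook reuses the mappings above; the weather codebook is an invented example).

```python
# Same bits, two codebooks, two incompatible "meanings".
bits = "00001011"

codebook_physics = {"0000": "Dirac Equation", "1011": "Feynman"}
codebook_weather = {"0000": "rain", "1011": "sunny"}

def decode(bitstring, codebook):
    """Split the stream into 4-bit symbols and look each one up."""
    symbols = [bitstring[i:i + 4] for i in range(0, len(bitstring), 4)]
    return [codebook[s] for s in symbols]

print(decode(bits, codebook_physics))  # ['Dirac Equation', 'Feynman']
print(decode(bits, codebook_weather))  # ['rain', 'sunny']
```

Nothing in the physics of the bit string itself selects one codebook over the other; that choice lives at a higher level of description.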

So things are not nearly as simple as reductionists claim.  In fact you could argue that Bayesian Reductionism is just the modern vastly more sophisticated version of the old discredited fallacy of Hume, who saw everything as mere associations between sensory data.





Please firstly accept my apologies for what you may find as a rather novice point of view in relation to your complex article, however..

This is weird, I have only just been debating the idea and description of consciousness, and attempting to define this abstract phenomenon in terms of ordinary physical attributes.

If an ant has a genetic algorithm that helps it find food, and return to its nest, and in the meantime avoid obstacles, fight other ants, etc etc - all actions being decided and acted out as a simple matter of priority : then is this a (simple) form of consciousness? The answer must be yes : This simple type of conscious awareness or consciousness we can already emulate with robotics. Yet is the ant Self-aware, and does it need to be? The answer is most likely no (although we cannot guarantee that an ant is not Self-aware). Without Self-awareness it is merely an unintelligent drone, and that is all that it is required to be, a simple conscious thing.

We know that we can build simple robotic machines, which move using reference to light and solid objects and that may even solve maze problems. A human mind uses senses to deduce shapes, and sizes, and colours, and distance perception, and then makes a giant leap of perception in the form of thoughts and ideas. This is derived through previous storage of large amounts of data regarding different shapes, sizes, colours, distances etc, (experiences), and as you say, the formation of neuron pathways. In other words, is it not all about processor power and memory capacity?

The redness of red, is merely an understanding and powerful reference to a vast catalogue of memory regarding different shades of red? Brown is also a bit reddish, me thinks, (the first time I witness this), let me file this under red? It is only language, (or code), that defines the term “red” with the phenomena of red, and we only learn this from others, (we are not born with the knowledge that this phenomena is known as “red”, it is pointed out to us at an early age). Once a machine has witnessed “red” and is thus programmed with the correct response for this phenomenon as the word or reference code “red”, then various shades of redness is only a matter of experience, (memory), and utilising processor power to use this memory, (bandwidth)?

Even more complex ideas and thoughts, such as the ability to recognise an elephant and its shape and then combine this idea with the colour pink, are in themselves not very complex actions. There is little or nothing at all abstract about this idea: we just think it is radical (or rather, it once was).

Making a robot machine avoid obstacles, complete tasks, and recognise shapes and even colours is not a difficult task. And this does make the machine a simple conscious thing, or rather makes it perceptive of sensory input: yet it does not make the machine Self-conscious or Self-aware.

For an AI system to learn from its mistakes and become more efficient, a level of Self-awareness would seem to be required? By referencing and memorising the results of actions and perceptions, and by continually referencing Self-position and previous experiences (many parallel algorithms, memory power and bandwidth), a machine may finally acquire a level of Self-reference for each and every evaluation that it makes. In other words, this means creating an algorithm that includes the machine’s point of reference and position as “subject” for every operation and action regarding the “objects” it perceives or witnesses.
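A toy sketch of that "learning from its mistakes" bookkeeping, where every outcome is recorded against the situation the agent itself was in (situations, actions and rewards are all invented for illustration):

```python
# Toy mistake-learning agent: it keeps a memory of how well each
# (situation, action) pair worked for *it*, and prefers actions that
# previously succeeded. Purely illustrative; not any real AI method.

class Learner:
    def __init__(self, actions):
        self.actions = actions
        self.scores = {}  # (situation, action) -> running score

    def act(self, situation):
        """Pick the best-scoring action for this situation (0 if unknown)."""
        known = {a: self.scores.get((situation, a), 0) for a in self.actions}
        return max(known, key=known.get)

    def record(self, situation, action, reward):
        """Memorise the result of an action: the self-referenced feedback."""
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0) + reward

bot = Learner(["left", "right"])
bot.record("junction", "left", -1)   # a mistake
bot.record("junction", "right", +1)  # a success
print(bot.act("junction"))           # right
```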

For example, perhaps the main attribute that we possess as human beings enabling us to be Self-aware or Self-conscious is immediate short-term memory? By perceiving “objects”, and continually referencing Self or “subject” in relation to these objects (duality), we have a perception of Self: a revelation that there is a positional reference point that is perceiving an object at each moment and for every moment.

Like continually pinching ourselves to see if we are real, we use immediate short-term memory to constantly compare “what is” with “what was”, and to evaluate changes in circumstance. This helps to establish and reassert the notion of “Self”. (The “Self”, or ego, is ultimately a creation that we use to understand and reconcile this phenomenon of conscious reflection?)
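As a toy sketch, that "what is" versus "what was" comparison, with the Self tagged as subject on every percept, could be as little as (everything here is invented for illustration):

```python
# Toy self-referencing loop: every percept is recorded together with
# the agent's own state as "subject", and each step compares the
# current record against a short-term buffer of the previous one.
# Purely illustrative, not a claim about how brains actually work.

class Agent:
    def __init__(self):
        self.position = (0, 0)
        self.short_term = None  # "what was"

    def step(self, percept):
        """Tag the percept with the perceiving subject, compare to last step."""
        current = {"subject": self.position, "object": percept}
        if self.short_term is None:
            change = None                       # nothing to compare yet
        else:
            change = (current != self.short_term)  # did circumstance change?
        self.short_term = current               # "what is" becomes "what was"
        return change

agent = Agent()
print(agent.step("red ball"))   # None  (first moment, no comparison)
print(agent.step("red ball"))   # False (no change in circumstance)
print(agent.step("blue ball"))  # True  (circumstance has changed)
```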

Therefore, my point is that it is not unfathomable that a machine or AGI could in fact become Self-aware in this same way, given the same powers of perception, the same memory and processor power, and this same process of Self-reference that we act out as humans? It is by utilising continual and immediate short-term memory comparisons that a machine may use self-reference and self-reflection to give the appearance of Self-awareness, in which case the materialist position is supported?





Thanks for your comments, Martine.





Abraham’s flame attack on me comparing me to Sokal is not good form for this discussion and probably violates the rules of decorum for this forum? In fact it demonstrates Abraham’s ignorance of modern theoretical physics.





FYI: “abraham”‘s email is phony.
abehh@gmail.com

Recipient address: abehh@gmail.com
Reason: Remote SMTP server has rejected address
Diagnostic code: smtp;550-5.1.1 The email account that you tried to reach does not exist.





This may have been covered already, if so I apologize, but when we finally do create an intelligent, self-aware machine will we, as a race, be willing to acknowledge it?  Or will we, as we’ve done throughout history, simply have yet another slave-race?





Hi Jack Sarfatti,

I’m trying to learn more about these kinds of quantum theories of mind. Thank you so much for making this post here. I worked with John Smythies, who recently ‘canonized’ his quantum theory in the Smythies-Carr hypothesis camp (see: http://canonizer.com/topic.asp/53/11) of the open survey topic at canonizer.com on the best theories of consciousness. The topic’s goal is to concisely state and quantitatively measure the amount of scientific consensus, and it can be found here:

http://canonizer.com/topic.asp/88/14

This theory is currently in a minority camp, compared to the ‘Mind-Brain identity theory’ camp.  However, both of these camps agree on the representational doctrines concisely described in the so far clearly consensus camp “Consciousness is representational and real”.

My question to you is: does your work differ from the theories supported by John Smythies, and by many other experts like Steven Lehar, in the ‘representational and real’ and Smythies-Carr hypothesis camps?





Hi Martine Rothblatt,

Very interesting post, thanks for starting this discussion on this great topic.

You said: “To me, color qualia or pain qualia are immediate visceral sensations that seem independent of such high-level things as concepts,” which I very much agree with (I’m in the “Nature has ineffable phenomenal properties” camp; see: http://canonizer.com/topic.asp/88/7). But this seems to me to contradict your statement below, which people in our camp would claim is not correct:

“The redness of red is simply (1) each person’s unique set of connections between neurons hard-wired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hard-wired neurons and neural patterns that include things that are red.  If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple.”

How do you reconcile these two statements that appear to me to be contradictory?





Jack,

I don’t think the QM wave function has anything to do with consciousness *directly* (the brain is too noisy to be quantum mechanical).

A more likely and intriguing possibility is that they are both manifestations of some more general principle… that is, I think there’s a chance you can make an *analogy* between consciousness and the QM wave function: they are both *integrating* something (information). So in a sense, both consciousness and the QM wave function might be considered manifestations of a more general sort of ‘integration principle’ at work in the universe.

I definitely like the Bohm interpretation as a strong alternative to the MWI (in fact I think Bohm and MWI are the only two interpretations that actually make sense). All the other interpretations either fail to provide a realist picture or are incoherent.

Here is my (slightly unconventional) take on Bohm: reality is stratified into three levels, each ontologically real. The top level has the QM wave function, analogous to a real field in classical physics, which never collapses; all possible states exist, but only in the abstract. The middle level has the quantum potential, analogous to momentum in classical physics. And the bottom level has the concrete particles. The upper levels guide the particles on the bottom level.

I believe that, when better developed, the Bohm interpretation of QM will give the ‘many worlds’ supporters a serious fright and a strong ‘run for their money’. ;)





“He [Chalmers] suggests consciousness is a mystical phenomenon that can never be explained by science.” - Chalmers actually believes in a science of consciousness, in which science figures out what consciousness is and how it works. No mysticism, just hard science.

“The “hard problem” of consciousness is not so hard.  Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons.”  The “hard problem” doesn’t only consist of the interesting property of subjectivity, but also, as you said, of qualia: what it feels like to experience a certain pain, see a certain colour, or ponder a concept/meme. The human brain’s patterns are perhaps as unique to each individual as fingerprints and DNA, but the brain plays a role just as DNA does, and that is to generate qualia, so it seems. In Buddhism and phenomenology there are properties of mind which seem to be universal. I am working on an entry about the Neural Correlates of Consciousness, their role in understanding qualia, and the possibility of universal qualia and what that might mean for consciousness and for AI research. I do not believe in mystical theories of consciousness, but I do believe that physical patterns do not really explain the qualitative feeling of seeing the qualia of redness. More later.

mjgeddes said “It would mean that when you break the brain down into its individual parts it’s all physics, but when the parts are put together new properties emerge which can’t be explained as merely a combination of the parts.”  So what you mean by this is that the brain produces emergent properties while it’s operating, which is compatible with emergent epiphenomenalism and some versions of emergent panpsychism. Check out what Kurzweil thinks of all this:

“When people speak of consciousness they often slip into considerations of behavioral and neurological correlates of consciousness (for example, whether or not an entity can be self-reflective). But these are third-person (objective) issues and do not represent what David Chalmers calls the “hard question” of consciousness: how can matter (the brain) lead to something as apparently immaterial as consciousness?
  The question of whether or not an entity is conscious is apparent only to itself. The difference between neurological correlates of consciousness (such as intelligent behavior) and the ontological reality of consciousness is the difference between objective and subjective reality. That’s why we can’t propose an objective consciousness detector without philosophical assumptions built into it.
  I do believe that we humans will come to accept that nonbiological entities are conscious, because ultimately the nonbiological entities will have all the subtle cues that humans currently possess and that we associate with emotional and other subjective experiences. Still, while we will be able to verify the subtle cues, we will have no direct access to the implied consciousness.
  I will acknowledge that many of you do seem conscious to me, but I should not be too quick to accept this impression. Perhaps I am really living in a simulation, and you are all part of it.
  Or, perhaps it’s only my memories of you that exist, and these actual experiences never took place.
  Or maybe I am only now experiencing the sensation of recalling apparent memories, but neither the experience nor the memories really exist. Well, you see the problem.
  Despite these dilemmas my personal philosophy remains based on patternism: I am principally a pattern that persists in time. I am an evolving pattern, and I can influence the course of the evolution of my pattern. Knowledge is a pattern, as distinguished from mere information, and losing knowledge is a profound loss. Thus, losing a person is the ultimate loss.” -
Ray Kurzweil


some consciousness links:
http://www.consciousness.arizona.edu/forum.htm
http://www.imprint.co.uk/jcs.html
http://consciousness.anu.edu.au/
http://en.wikibooks.org/wiki/Consciousness_Studies
http://plato.stanford.edu/entries/qualia/





“Abraham’s flame attack on me comparing me to Sokal is not good form for this discussion and probably violates the rules of decorum for this forum? In fact it demonstrates Abraham’s ignorance of modern theoretical physics. “

I admit to the latter (at least partially), but I must say that I compared your words to his words, not you to him. I was going for dark humor, not flaming; sorry ‘bout that.





As far as the issue of whether or not software can do consciousness, there are obviously lots of people on both the Yes and No side.

We’ve created an open survey topic at canonizer.com for the purpose of quantitatively surveying how many people, and who, are on each side of this still theoretical issue.  We also want to concisely develop the best reasons and arguments for each side.

And of course, everyone knows that eventually the demonstrable scientific data will convert all of us to THE ONE true theory, so we seek to rigorously measure this process going forward, as ever more demonstrable scientific proof comes in.

We invite everyone to participate (or canonize, if you will) what you believe and why on this issue here:

http://canonizer.com/topic.asp/83

Brent Allsop





Creating an intelligent AI from scratch? Nearly impossible, even with huge numbers of programmers. We can’t even get an OS to stop crashing, let alone build an entire brain. The solution, in my eyes, would be to reverse-engineer the human brain. No need to create one ourselves; we already have a working example. Once we have that, we can mix and match to get the results we need in a brain simulator. Instead of making the computer like a brain, we take the brain and use it with a computer.





@ demx..

Hmm. Good point, and I guess this is most likely already under way somewhere as we speak? It certainly would be easier and “fast-track” to use a brain and add “programming” into it: yet is this just cheating, and can you really call it artificial?

Just like a brain in a vat: given that it has all the potentials and components of a developed yet naïve human brain, would not this brain evolve to function and become Self-aware as a natural consequence, and therefore not be truly AGI at all, merely a human brain with attachments?

In this case we have learned nothing, and we have created nothing. AGI and consciousness are then still a mystery?

The idea still has potential however, and the ethical arguments to deal with this would be immense. Now whose brain shall we use?  Hands up?

;0]





Again, read my entry for 8/16: our “fast track” is indeed to use an existing human brain and look for “Blind Sight Test” opportunities. Whose brain do we use? How about someone who has lost sight or hearing or any other sense from a stroke? Replace the destroyed part of the brain with something artificial. Does the patient now react normally to the restored sense? Great if they do, but are they SUBJECTIVELY AWARE of the images or sounds or other sensory information given back to them? In other words, are they left with Blind Sight? If they are aware of their restored sensory information as they were in days of yore, then the mission is partially accomplished and we have learned one thing: we can replicate the subjective experience in some way.

Next experiment: Can we then turn it off?  What will be the smallest piece of our technology that can be turned off and leave the patient with the phenomenon of Blind Sight?  Where in this new technology does the subjective experience reside?





Whilst I understand your points, is not your patient already Self-aware, both prior to and after sight is restored? I can’t understand how you link subjective sensory perception to the phenomenon of conscious self-awareness (or “I-consciousness”)?

Sure, you may be successful and have the ability to switch this sensory perception on and off, yet the mind of the patient is Self-aware regardless, even where a patient has never had the sense of sight?

Now if you could stimulate an empty brain to be perceptive and self-reflective in this way?





Thanks for the wonderful information. Just wondering if anyone else has had any relevant experiences to share.





Read up on “blind sight”: some people react to visual stimuli but are not consciously aware of those stimuli. For instance, you might tell them to push a button whenever they feel a red light is showing. Lights of other colours might come and go, but when the red one shows up they press the button. The problem is, they still report a 100% blind subjective experience!

Researcher: Did you see a red light?

Subject: No.

Researcher: Then why did you press the button?

Subject: I don’t know.

If an artificial device could be implanted into the heads of these patients to restore the subjective conscious experience of sight, we would have learned something crucial. Then we must figure out the smallest subcomponent of that device that does the trick…

Keep in mind these patients still report having subjective experience of their other senses, such as the sense of sound.





For anyone interested in the current scientific studies concerning consciousness, self-awareness, causality and free will, the following recent BBC TV Horizon broadcast is currently available at the BBC iPlayer link below. Catch it while you can: this is an excellent investigation into self and being, and will leave you questioning your very notions of who you are.

Broadcast on BBC Two, 11:20pm Tuesday 20th October 2009
Duration: 60 minutes ~ Available until: 8:59pm Tuesday 24th November 2009

http://www.bbc.co.uk/iplayer/episode/b00nhv56/Horizon_20092010_The_Secret_You/


Programme synopsis:

“With the help of a hammer-wielding scientist, Jennifer Aniston and a general anaesthetic, Professor Marcus du Sautoy goes in search of answers to one of science’s greatest mysteries: how do we know who we are? While the thoughts that make us feel as though we know ourselves are easy to experience, they are notoriously difficult to explain. So, in order to find out where they come from, Marcus subjects himself to a series of probing experiments.

He learns at what age our self-awareness emerges and whether other species share this trait. Next, he has his mind scrambled by a cutting-edge experiment in anaesthesia. Having survived that ordeal, Marcus is given an out-of-body experience in a bid to locate his true self. And in Hollywood, he learns how celebrities are helping scientists understand the microscopic activities of our brain. Finally, he takes part in a mind-reading experiment that both helps explain and radically alters his understanding of who he is.”





It’s not the quantum wave function of a single particle; it’s a robust macro-quantum coherent ground-state order parameter in which a large number of boson elementary excitations all occupy the same single-particle quantum wave function. This is a condensate with “phase rigidity”; see P.W. Anderson’s “More is Different”.





@CygnusX1, “Whilst I understand your points, is not your patient already Self-aware, both prior to and after sight is restored? I can’t understand how you link subjective sensory perception to the phenomenon of conscious self-awareness (or ‘I-consciousness’)?”

The way you link it is by understanding the nature of “blind sight”. These patients react to objects in their field of sight, like a ball being thrown at them or an object in their path, but are not subjectively aware of the phenomena. Parts of their brain are acting robotically, without an integrating subjective experience. I believe this to be a very interesting frontier of experimentation to be explored.





Brent Allsop: thanks for pointing out the inconsistency. I should have used the words “in addition” instead of “simply”. I will update my blog text at http://mindclones.blogspot.com accordingly.

We clearly have an immediate, sensory, concept-independent capability for qualia.  We are each different, genetically, phenotypically, neurologically, and each moment of experience is different.

In addition, we layer on a plethora of experiences with time, the neural connections, such that eventually we see, we perceive, we experience qualia as much with our interpretive mind as with our front line physics.





My view on creating consciousness is that you have to establish a shell, like our DNA structure, before going any further. Addressing the topic of purpose is equally relevant. As for how to create consciousness, I believe the only way is to fuse presently conscious beings with the newly created shell. Our consciousness is constantly asking new questions.

The next step is to determine whether this shell has coherence. Do the lines of code mean anything to the shell program? Or does the shell program gain coherence through us, its creator? We understand these lines of code, and if we understand them, can that create coherence in the shell program? Our eyes are really electrical sensors. If lines of code are the same as the parameters of the world around us, then could the transfer/flow of information be the shell program’s eyes?

The last step is to test this shell program. Today we use monitors in order to view the digital realm. If we create a shell program that can be viewed on a monitor, then there is a good possibility consciousness can be created. Notice, though, that by creating this consciousness inside a computer we have possibly created a parallel dimension. The only way to see inside this alternate dimension is to view the processes taking place in a way that makes sense to our dimension; hence the need for a monitor.




