The Importance of Qualia to Transhumanism and Science pt1
Kris Notaro   Mar 25, 2013   Ethical Technology  

This is a starting point for investigating the ongoing mission of computer scientists to create AI that is self-aware and conscious. Is qualia simply materialist/physicalist information? And what is going on with all the biological experiments done on people and animals?

Qualia is defined as follows: 1. A quality, as bitterness, regarded as an independent object. 2. A sense-datum or feeling having a distinctive quality. 3. Feelings and experiences vary widely. For example, I run my fingers over sandpaper, smell a skunk, feel a sharp pain in my finger, seem to see bright purple, become extremely angry. In each of these cases, I am the subject of a mental state with a very distinctive subjective character. There is something it is like for me to undergo each state, some phenomenology that it has. [1][2]

What it feels like to experience experience, to feel a feeling and be aware of it, does not necessarily mean the presence of a dualistic mind. It may well be an emergent property of the universe, created by “slowish” Darwinian evolution over millions of years. As many emergentist philosophers admit (after much debate), the very emergence of self-awareness and of feeling 'what it is like' does bring up a type of dualism – but a materialist/physicalist dualism of the emergence of mind.

Consciousness may very well be an emergent property, for complex systems do indeed create unpredictable properties that reductionism cannot fully explain. Does this mean that we and many animals are simply emergent complex systems with a new property of the universe called consciousness?

The bird and the plane are both complex systems that bring about the property of “flying” – the bird through evolution, the plane through human invention in what is, in the scheme of things, the last few years. So if qualia is an emergent property of the brain, then it would seem that there may be a chance for qualia to emerge from a complex system like a supercomputer.

Or can it?

Scientists, philosophers, and transhumanists have closely followed brain implants, deep brain stimulation, computer algorithms, attempts to go beyond the Turing Test, and neural replacement and stimulation. There has also been difficult research on neural stem cells, but to the transhumanist that is not all that impressive, even though it is massively worthwhile and significantly important to many disabled people.

So far no form of computer code, whether running on a neural network or on a very powerful supercomputer, has created consciousness as we see it in ourselves – not even consciousness like that of a bee. What, then, is qualia, and why are we striving so badly to create it? Why is conscious experience so hard to replicate unless you physically copy the existing brains of animals and humans?

Or is qualia simply materialist/physicalist information, as portrayed, for example, in these two (to me, anyway) cruel experiments:

Monkey controls robotic arm with brain computer interface

Rat Brains Linked To Create "Biological Computer"

This is part one of a long series of articles in which I will focus on science, philosophy, and transhumanism, and the quest to understand consciousness and qualia.

I will look in more detail at qualia, technology, transhumanism, computer algorithms, and logic in part 2.

References:

[1] http://dictionary.reference.com/wordoftheday/archive/2012/07/20.html

[2] http://plato.stanford.edu/entries/qualia/

Kris Notaro, a former IEET intern, served as the IEET's Managing Director from 2012 through 2015. He is currently an IEET Program Director. He earned his BS in Philosophy from Charter Oak State College in Connecticut. He is currently the Bertrand Russell Society’s Vice-President for Website Technology. He has worked with the Bertrand Russell A/V Project at Central Connecticut State University, producing multimedia materials related to philosophy and ethics for classroom use. His major passions are in the technological advances in the areas of neuroscience, consciousness, brain, and mind.



COMMENTS

Is “what it feels like” peculiar to you? Many philosophers reject the very concept of qualia – does anyone have an opinion? Please read this as well: http://rationallyspeaking.blogspot.com/2013/03/i-just-dont-get-it-one-more-on-mary-and.html

@Kris:

I tend to look at qualia as a red flag that there is something very different about our own experience of consciousness specifically and animal consciousness more generally when compared to what is going on in even our most advanced machines.

I recently hosted a bloggers’ discussion that dealt with the question of consciousness and AI. Though it’s not quite up to Hollywood standards, it might be of interest to you and readers of this post:

http://youtu.be/fXJP2OSh2sQ

The long and the short of it was that consciousness is an evolved property of animals, and that we are nowhere near creating it in machines because consciousness is a product of the architecture of the animal brain, not the “software”. It is not a “magical” property but something we could replicate in a machine IF we could replicate the way brains work, but much of current AI comes at the problem from the vantage point that the neural architecture doesn’t count and only the software matters. This view is starting to change as we turn to neuroscience as a model for AI and even do things like those presented in your somewhat disturbing videos, where we combine the brains of animals with components of machines.

Interesting video – have not watched all of it as yet, however..

Rick – Concerning briefly the problem you posed regarding AI consciousness limitations and drawing inference from language and communication..

“When Barack Obama enters a room does his nose go with him?”

This can really still easily be resolved by artificial intelligence and algorithm and is simply a matter of applied inference from knowledge and intelligence(?) Intelligence and “naturalness” here defined purely from the effectiveness and efficiency of the algorithm?

For example.. (a rough code sketch follows this list)

1. “?” – incites examination of the sentence and triggers a response (Human symbolism equates this with contemplation)
2. “When” followed by “Does” – requires conjunction resolution
3. “Nose” – an appendage of the Human face used to sense odour and smell
4. “His” – male possessive pronoun
5. Use of “His” and “Nose” applied in a sentence following a name label implies ownership
6. “Enters” – verb/action following a name label implies movement by the named subject
7. Implied ownership combined with the movement verb equates to “Nose” “Does” “Enter”
8. Answer = yes
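
A minimal sketch of this kind of rule-chaining in Python (every rule, word list and function name below is invented purely for illustration – a toy, not a real language system):

# Toy version of the rule chain above; all rules and word lists are
# invented placeholders, not a real NLP system.
ATTACHED_PARTS = {"nose", "ear", "arm"}   # rule 3: appendages of a body
POSSESSIVES = {"his", "her"}              # rule 4: possessive pronouns

def part_goes_with_owner(sentence):
    is_question = sentence.strip().endswith("?")   # rule 1
    words = sentence.lower().strip("?").split()
    # Rules 4/5: a possessive pronoun directly before a body part
    # implies ownership.
    owns_part = any(a in POSSESSIVES and b in ATTACHED_PARTS
                    for a, b in zip(words, words[1:]))
    moves = "enters" in words                      # rule 6: the owner moves
    # Rules 7/8: an owned, attached part moves with its owner.
    return "yes" if is_question and owns_part and moves else "cannot infer"

print(part_goes_with_owner(
    "When Barack Obama enters a room does his nose go with him?"))  # -> yes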

This is only a rudimentary analysis of language and symbolism. Yet every “thing” that we Humans contemplate with our phenomenological consciousness refers to an internal knowledge base of language and symbolism? Even the contemplation of the label “thing” has realisation in language, as we effectively “see” the word in our mind’s eye – a purely mechanistic function of brain and mind?

Therefore machines can do this too, and should be able to at least as well as ourselves? The task is not to create ever more complex algorithms to deconstruct language, symbolism and inference, but to eventually simplify these algorithms to provide greater accuracy and a lower rate of error from first inference?

An example of this error is where even Humans misunderstand each other in their diverse construction of grammar and sentences when communicating, resulting in a request to repeat or reconstruct the sentence to help overcome further misconception.

This may leave us with the contemplation that the machine can cleverly understand language and symbolism as we Humans but still has no “real” consciousness that can effectively reflect upon what a “Nose” is or what it is like to have one? Well in the latter indeed not, at least not as yet until we give it a nose or a simulated organ to take possession and witness from, but it can relate the language and symbolism attached to the term “Nose” and relate this to various physical images in a knowledge bank that reference Nose – it now has a text/symbolic and pictorial reference for a Nose which can be applied to all further noses?

And is this not what we Humans “naturally” do anyhow? When asked about my nose I do not necessarily “touch” or “sniff” to provide conscious experience of noses; when I see an image of a face I reference the image and its nose with the symbolic text/language label without any sensation from my own nose?

If I was an uploaded mind or brain in a vat without a nose, I would thus have no sensation, but through “memory” still be able to rationalise and qualify the meaning of nose, its value and worth, (to others), its function, and possible shapes and distinctions (aesthetics)? Would this rationalisation, contemplation without sensation, be acceptable for me as an uploaded mind, or as simulated existence as algorithm in machine substrate? First guess is no, as this would defeat the motive for quality of longevity? Philosophically, I may be able to deal with this and “persist” with a lack of sensation, as what is also of value is memory and intelligence – exactly what we hope to inspire in machines?

To summarise

Wisdom and intelligence can reside and persist without need for noses and sensations?
A machine can contemplate noses at least as “rationally” as we Humans do?
What then is the difference in perceived “rational contemplation of consciousness reflection” between mind and machine?

@ CygnusX1:

Hopefully I got that right ;>)

I think if the issue was as easy as you present it we’d already have AI that was clearly conscious.

The problem goes something like this: back in the days of the initial pioneers of AI – Turing et al. – it was thought that you could create a human-type intelligence if you could just figure out the right software. No serious figure in AI today holds that view. They’re starting to realize that not only does the human mind not work in this way, but trying to get to human-type consciousness using the kinds of linear code you provide doesn’t work in machines either. The initial figures in AI, and many until quite recently, thought it unnecessary to confront the differences between brains and computers: the fact that with the brain the hardware and the software are the same thing, that the brain is massively parallel in the way it processes information, that it does not merely record incoming information but creates and shapes it, that cognition is built atop of and is influenced by emotional states, that the system makes use of both electrical and chemical signaling, etc. etc.

The initial pioneers in AI didn’t think we needed to understand this stuff to build conscious machines, but it seems that we do. Even our fastest computers are merely brute calculators, good at one thing. Once we have a better idea of how the brain works we may be able to replicate something like our own thought patterns in machines, but it appears unlikely we will do so before then.

@ Rick..

“I think if the issue was as easy as you present it we’d already have AI that was clearly conscious.”

And really, this is my position yes, only I differentiate as to what we term and understand here as consciousness? I say machines and robots and quad-copters are “aware” and “conscious” of their environments and thus aim to deconstruct the speciality of the term consciousness applied to lesser systems, neural nets and the very foundation of nature and cosmos itself?

In short, “awareness” between particle entities, (if they really exist at all?), and their energy matter transformations has resulted in the emergence of complex biological life, molecules and proteins, and cells, them-selves “aware” – and from these, the complex organ and neural network of the brain comprised of interconnected neurons, themselves “aware” which has resulted in the emergence of “phenomenological consciousness” which we now disassociate from this fundamental awareness so abundant and ubiquitous in nature?

“All” is merely resultant from layers of “increasing” complexity? – so too the emergence of “intelligence”, temporal reconciliation of locus/position for person/Self differentiating between past, present and speculating for future, through the grace of emergent complexity of neurons associated with “memory”, (physicalism), and our empirical experience.. etc etc

I am over-simplifying here yes, and this was my intent, also obviously as this is merely a philosophical position. I was specifically applying your conundrum regarding grammar with the aim of reducing it to a purely logical problem that IS merely a solution for algorithm and thus software-grounded – the reason that Watson perhaps cannot solve or even see the question to resolve is purely a matter of programming? Hence as I indicated, no emotional sensation or Qualia consciousness is required to deal with matters of grammar and language and symbolism, and this is precisely how we Humans naturally deal with grammar and language anyhow?

This must be true, as the success in language assimilation is reaping great results now and for future mind/machine communication and understanding, with emphasis on effecting intelligence in machines?

Yet like your guests on the video have also indicated, machine consciousness may be “real” or may yet evolve and still take a “different” form or rather be different from our “experiential” understanding of consciousness and what it specifically means to us Humans, or even more specifically to “me”, because I still cannot guarantee that even you or the “other” is really experiencing consciousness anyhow, (although as you have highlighted yourself, we can still make a logical inference and thus so too apply this readily to machine “intelligence”)?

“They’re starting to realize that not only does the human mind not work in this way, trying to get to human type consciousness using the kinds of linear code you provide doesn’t work in machines either.”

I totally agree, and this is why I deem “Qualia” as the hard problem to overcome and associated with physicalism, and NOT what we declare as this speciality for the term consciousness, which is another illusory construct like “Self” and yet more misconception, (if you simply replace the term consciousness with “awareness” we immediately see this, and we cannot even describe consciousness without using the term “awareness” – this is no coincidence)?

Your video commentator Toby made an important point regarding this, where he indicated that a machine may not view its own consciousness, (awareness), as special as it would understand the reasons, motives and mechanisms/processes – or more specifically, WE Humans would not view a machine consciousness as special because we understand its mechanisms – we only apply this veneration presently for ourselves, because our own processes are still a mystery to us? (which is where the commentator John makes a common error and thus justifies the speciality of consciousness to only those entities that are biological and “alive”?)

So what is consciousness? It is a specific term applied to the communication and feedback mechanisms of the “entire system of Human mind” and its operation, that results in an emergent phenomenon and illusion of “centre of intellect” which then reconciles it-Self through explanation and description of “formal/phenomenological consciousness” and this “awareness” that we experience and witness as/through “Self” reflexivity, (negative feedback), or “Self awareness”?

Like I have said here at IEET many times, “phenomenological consciousness” is most likely emergent and arises from the complexity of biological brain and mind. Yet all of this complexity, (like all interacting systems in nature and cosmos), arises from energy and matter transformations that possess or reside in a construct reliant upon “awareness”? (this can also be transposed and translated once again meta-physically and spiritually using the special term Consciousness – if you so wish – I do not dispute the use of the term, merely the veneration for it).

And this is why I have also indicated previously that machine and animal “consciousness” should not be the measure of personhood, (as we can never ever clarify/prove an entity as possessing consciousness – either philosophically or empirically), but rather intelligence should be the measure? And this measure of intelligence can indeed be applied to software algorithm especially where this has the ability to evolve or is empowered to improve its own efficiencies through negative feedback mechanisms?
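
As a crude illustration in Python of what “improving its own efficiencies through negative feedback” could mean in software (the error measure and all numbers are invented placeholders):

# Toy negative-feedback loop: keep a parameter change only if it
# reduces a measured error; otherwise reverse and shrink the step.
def measured_error(gain):
    return (gain - 0.7) ** 2   # stand-in for any performance measurement

gain, step = 0.0, 0.1
for _ in range(100):
    if measured_error(gain + step) < measured_error(gain):
        gain += step           # the change helped: keep it
    else:
        step *= -0.5           # overshot: feed the error back, correct course
print(gain)                    # prints a value at or very near 0.7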

For the practical purposes of pursuing longevity and mind/machine interface, reverse engineering the brain, preserving all physical attributes and qualia for possible uploading scenarios, I still say that the “brain in a vat” is the way to go?

Either that, or as suggested elsewhere, uploaded software minds may still experience sensation through direct interface with living mortals – so called surrogates if you will?

 

@CygnusX1:

“the reason that Watson perhaps cannot solve or even see the question to resolve, is purely a matter of programming?”

I think this is part of the conundrum for those working on AI. You would need a near-infinite number of programs to deal with all the varied contingencies an AI based on current computers would have to deal with.

This is why almost all of the “smart” programs out there have humans on the back end. A program like Google Translate is an example of your “language assimilation” software. It essentially compiles millions of translations already done by HUMANS. And computers often can’t do things a two-year-old finds a snap, like distinguishing a table from a chair.
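
A toy Python sketch of translation-as-matching (the phrase and word tables are invented stand-ins; the real systems are built from millions of human-aligned sentences):

# Look up human-made phrase translations first, then fall back to
# word-for-word substitution. No understanding is involved anywhere.
PHRASE_TABLE = {
    "out of sight": "fuera de la vista",   # human-supplied pairs
    "out of mind": "fuera de la mente",
}
WORD_TABLE = {"dog": "perro", "table": "mesa", "chair": "silla"}

def translate(text):
    text = text.lower()
    # Prefer the longest human-translated phrase that matches.
    for phrase in sorted(PHRASE_TABLE, key=len, reverse=True):
        text = text.replace(phrase, PHRASE_TABLE[phrase])
    return " ".join(WORD_TABLE.get(word, word) for word in text.split())

print(translate("Out of sight, out of mind"))
# -> "fuera de la vista, fuera de la mente"

Everything here trades on matches a human already made; nothing in the program knows what a table or a chair is.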

Even R. Kurzweil admits that computers are not yet able to understand words, as opposed to playing a matching game – albeit a really fast matching game:

http://www.youtube.com/watch?v=YABUffpQY9w (@2m50s)

“I still say that the “brain in a vat” is the way to go?”

I think this view, that the brain can somehow be hived off from the body, is mistaken. The brain evolved as a map of the body and its surroundings – this, I think, is where consciousness lies, and it is perhaps the reason for another problem AI researchers have discovered: the difficulty of establishing anything like a conscious/aware machine in a “box” without the capacity to interact with both its internal condition and its surroundings.

For an interesting discussion of the brain as a map of the body see the neuroscientist Antonio Damasio here:

http://www.ted.com/talks/antonio_damasio_the_quest_to_understand_consciousness.html

We may be able to achieve something like human intelligence in machines by reverse engineering the brain, or AI may evolve in a totally different direction – say quantum computing – where consciousness is much different from our own and which perhaps still needs human intelligence as a complement:

http://utopiaordystopia.com/2012/11/28/our-physicists-fetish/

We might also merely have a continuation of the trends we have today, where computers get better and better at doing things we consider ours alone and yet continue to have no “soul”:

http://utopiaordystopia.com/2012/07/13/turing-and-the-chinese-room-2/

“This is why almost all of the “smart” programs out there have human on the back end.”

“Even R. Kurzweil admits that computers are not yet able to understand words as opposed to play a matching game- albeit a really fast matching game:”

Yet how do you differentiate this from the way that a child categorizes and uses symbolism and thus learns language? A child cannot hope to learn any language without tuition from the Human back catalogue of rules for symbols, grammar and sentence construction? And learning multiple languages is merely associating increasing labels to the original catalogue of stored symbolism information in memory?

If I present you with a word or term you have never heard of before it is meaningless to you, as meaningless to you as it is to the machine, there is little difference? It only becomes meaningful as you then associate it with other words, terms and images in your catalogue of memory experience? A machine may not truly understand this, but it may assimilate the exact technique for Human learning, (Qualia aside that is)?

A machine or Watson itself may have better predictions and success in the Chinese room scenario, at least enough success to data crunch enough random solutions as to present possible rational outcomes for organisation of the Chinese symbols - and what more is creativity than this? It is exactly how the Human would deal with the problem, and ultimately with never really understanding fully the language itself?

Again, what really is the difference? We all need our Rosetta stone?

The difference for Humans is that we can associate empirical experience using Qualia to support notions of things and entities such as say dogs, parrots and flowers. And this is the supporting phenomena that gives credence and worth to the empty words and terms that we use - thus the word dog or flower is equally meaningless to both us and the machine without sensation and experience and tactile knowledge?

Yet as I indicated, I could still assimilate and associate the word, terms and give them greater value, worth and understanding with use of images? Imagine that a child has never witnessed a chair or table, but can still associate and recognise these items from referencing a photograph only - as a machine can. Both are again oblivious to the nature of chairs and tables or even what to do with them, so Kurzweil’s claim that machines do not understand words, whilst relevant, is rather moot?

It depends on what we want from our machines anyhow? And I agree also that machines may do tasks and work much more efficiently than Humans in some cases, and these machines should remain machines as they will be tantamount to slaves?

Concerning the “brain in a vat” scenario – this would be assuming that brain and bodily chemicals can be supported, which is of great importance to sensations, qualia, feelings and emotions. Removing all senses from the brain is a rather drastic situation, and there is really no reason why this should necessarily be the case, at least for vision and hearing.

A lack of tactile sensation may be a problem, and in stories usually sends the participant into insanity and pandemonium, at least in most sci-fi horror. Yet we may rationalize that it may be possible for a brain containing a mind and memories to adapt to an isolated environment? And it still may be the best way to preserve Self and consciousness to overcome demise when the body fails?


Thanks for the links

‘And learning multiple languages is merely associating increasing labels to the original catalogue of stored symbolism information in memory?’

I’m glad you put a question mark after this (as is your peculiar habit with many of your statements), since it’s extremely problematic. Languages are embedded in the cultures in which they develop. If I say ‘we’ve a mountain to climb’, this expression directly translated may have similar meanings in many languages, but in a culture where mountains are considered the abodes of gods the hearer may assign very different connotations than would someone from a culture that sees the mountain as a challenge to be conquered.

There is at least one language in which directions are given in terms of points of the compass and the words ‘left’ and ‘right’ are never used. To show the impact of culture, the speakers of that language (wish I could remember its name) have been found to have a greatly heightened sense of where north, south, etc. are anywhere they find themselves.

Fortunately for speakers of European languages very few if any have to learn that language. One can imagine the difficulty of trying to develop a good sense of direction in order to speak a language.

And this is just one example out of virtually infinite number that might be given. Hell, even between Europeans there are difficulties in exactly getting into the minds of each other when using each others language. In Spanish there is a word ‘duende’ often used for children having connotations of being healthy, energetic, full of stamina, bright, mentally healthy. It cannot be exactly translated into English - you have to try in a roundabout way or else settle for an inexact translation. And even between British, American and other varieties of English there are sometimes misunderstandings, if not outright incomprehension.

And we all know about the shortcomings of Google Translate and the like which have stubbornly resisted the efforts of programmers for years. I’m reminded of the story, probably apocryphal, of the computer that was instructed to translate the expression ‘out of sight, out of mind’ into Russian and back into English, resulting in ‘invisible idiot’.

@ Taiwanlight

“It cannot be exactly translated into English - you have to try in a roundabout way or else settle for an inexact translation. And even between British, American and other varieties of English there are sometimes misunderstandings, if not outright incomprehension.”

Indeed, questionable translations, misconception and misunderstanding are rife between Humans, and no less difficult for machine intelligence to overcome. However, you have totally misconstrued my meaning in your reply, which also kind of serves as a good example?

My point was that when an “individual” mind learns an additional language it merely allocates additional and new labels, (words - meaningless in themselves), to the symbolism already known by itself?

Yet your point is also valid, because it highlights also the subjectivism that no Human can overcome regardless of shared language, which is also no different from the difficulties we encounter attempting to build a machine that can intelligently draw conclusion and form responses for discourse?

Indeed the word “mountain” for yourself inspires different experience and contemplation than it does for me and my experiences, and more so for tribes and Humans that reside in or worship mountains.

There are many words that inspire multiple meanings, Dukkha is another. Yet this then leaves us with the problem of how we can translate these so called multiple meanings to others? By using even more words, terms and labels perhaps? How do I know that I fully understand the term Dukkha, or that I can explain it to you? or that a Buddhist monk can explain it to me?

You may now see that words in themselves are meaningless, and no less meaningless to machines. The advantage we Humans have over machines is “experience” of the world to support our meaningless words and terms?

A Google translation algorithm is locked away in a box not unlike Mary, so what do we expect from machines, miracles?


“I’m reminded of the story, probably apocryphal, of the computer that was instructed to translate the expression ‘out of sight, out of mind’ into Russian and back into English, resulting in ‘invisible idiot’.”

Ha!

I would say this is the “perfect” translation for the phrase!

We can easily confuse machines, chatbots and even other Humans with colloquialism and meaning - this is no big deal. The failure is with ourselves if we cannot communicate or make ourselves understood by others or machines?

“I’m sorry. My responses are limited. You must ask the right questions” (iRobot)

@Taiwanlight:

A beautiful exploration of your point that language is embedded in culture along with a description of the language in which direction is based on the cardinal points is found here.

http://longnow.org/seminars/02010/oct/26/how-language-shapes-thought/

I believe “individuality” doesn’t spring from increased complexity of consciousness, but from evolution.  In other words, a feeling of “manifest destiny” or “divine existence” is simply an evolutionary trick (track) that was passed on because it made our ancestors fight harder when confronted with hardship and despair.  “Thru miles and miles of trial, you are the files of your forefather’s fruit.”

In other words, we will inevitably program our AGIs with the same “individuality” that evolution programmed us with.  It isn’t an “emergent property” per se, it is a refinement that is evolutionarily beneficial.

http://singularityhub.com/2012/03/10/robot-begs-to-be-allowed-to-live-dont-miss-the-impressive-“kara”-video-demo-from-quantic-dream/

@Rick

“I think this view of thinking the brain can somehow be hived off from the body is mistaken. The brain evolved and is a map of the body and its surroundings -this I think is where consciousness lies and is perhaps the reason for another problem AI researchers have discovered the difficultly of establishing anything like a conscious/aware machine in a “box” without the capacity to interact with both its internal condition and its surroundings.”

While you make a great point, as it turns out it seems that the brain is somewhat malleable, making new connections, utilizing neural plasticity. In some cases of stroke and brain damage people were able to utilize other parts of their brain to take over what was lost. This suggests that the brain is not just a map of the body (or ideas); it has this very interesting feature where certain areas we once thought “controlled” such-and-such a region of the body and/or thinking can indeed make the proper connections to do a different task. As people age, this ability diminishes a bit.

However, the very concept of a brain region that is supposed to do this and that, all of a sudden taking over for a damaged region, is amazing. The science behind it is not finished, but it suggests to me that neural plasticity brings up all kinds of questions about the role of neurons and brain regions. How much of the thinking part of the brain, including speech, is simply a representation of social constructs and other useless crap?

If someone has an artificial heart, or other organs (which we will see in the near future), that is NOT connected to the brain, what happens to the corresponding brain region? It does not necessarily die, and as we discover exactly which regions do what, people who have “cyborg” body parts with no need for a connection to the spine or brain could then be assigned tasks that utilize the region that is left over… (?)

http://www.youtube.com/watch?v=8_tWbSdzbHU
http://www.ncbi.nlm.nih.gov/pubmed/21172692
http://www.nytimes.com/2007/05/29/health/29book.html?_r=0 (old but interesting)
http://www.ncbi.nlm.nih.gov/pubmed/12744950

@Kris:

I think you’re absolutely right that the brain’s plasticity is one of its most remarkable features, and right to point out that people who experience brain damage are sometimes able to take advantage of this plasticity and rewire the brain to recover lost functions.

“If someone does not have a heart, or other organs (which we will see in the near future) that is NOT connected to the brain, what happens to that brain region? “

I think we can see an example of what might happen in the case of “phantom limbs”, where the body senses a limb that no longer exists. In this case the brain’s body map remains intact, though not what is mapped.

It is an interesting question as to how the brain would integrate “plug-ins” that are not part of its body map. In the case of amputees, their artificial limbs are often wired up to the nerves of their missing limb. It would be fascinating to know how this feels, and whether the brain eventually rewires itself to make this feel natural or whether it rarely does.

Rick: Thanks for the link. Bang on target, you were.

Thus, “learning new languages can change the way you think.”

So few people realize this and I think it’s unfortunate that there is now so little study of other languages in the Anglosphere. Even studying the way Americans use English as opposed to even Canadians can give insight into the intriguing and sometimes alarming way our American cousins see the world.

Cygnus: “My point was that when an “individual” mind learns an additional language it merely allocates additional and new labels, (words - meaningless in themselves), to the symbolism already known by itself?”

My point was that it is sometimes extremely difficult, if not impossible, to allocate labels to the symbolism already known. Your position seems to me to be a kind of naive nominalism where words stand for the thing they represent. But this is a gross simplification of the way language works, because the symbolism of each person, let alone each culture, is different, and there is rarely, if ever, an exact equivalent of a word in one language vis-a-vis another. Even simple words like ‘a’, ‘the’, ‘I’, among a thousand others, can have different, even opposing, uses in different languages – or even for different speakers of the same language. Nor is it just a case of symbolism; a person’s mind in using language subconsciously sorts through concepts, feelings, instincts, memories, attitudes and other suchlike.

“There are many words that inspire multiple meanings, Dukkha is another. Yet this then leaves us with the problem of how we can translate these so called multiple meanings to others? By using even more words, terms and labels perhaps? How do I know that I fully understand the term Dukkha, or that I can explain it to you? or that a Buddhist monk can explain it to me?”

Dukkha; a perfect example. When I first started reading about Buddhism many years ago, it was usually translated as ‘suffering’. Lately, it is translated as dissatisfaction. Yes, we can use more words to try to get a better grip on it but our human concepts are slippery and vague at best. Even between Buddhists, I suspect there is debate over the term.

Ayer, Wittgenstein, Russell and other philosophers tried for many years to get a grip on language; some of them hoped to develop a ‘logically perfect language’, but that has turned out to be a particularly fruitless chimera. The problem of induction is a very real one for the question of how we communicate.

This is partly why the whole idea that we can emulate a human mind if we can just get enough number-crunching power is so problematic. It may well be we are hunting yet another unicorn. Every era has tried to develop an idea of what the mind is. I heard the ancient Romans thought it was like some system of tiny catapults; the Victorians, almost inevitably, a telegraph system. We talk of the ‘wiring’ of the brain (even explained with lots of Latin words, I think the Romans would be in a fog of confusion over that one, and it would be off to the gladiator pit with the explainer) and have a tendency to think, “well of course, it’s probably some kind of biological computer”.

Yet, that assumption is by no means obvious, any more than is the assumption that languages are simply different codes substituting for some kind of common symbolism, if I read you right. Yes, it’s possible that eventually if we can get a computer that can calculate fast enough consciousness will emerge. However, it is also possible that the same superfast computer will never generate a mind and that we are barking up the wrong conceptual tree. Penrose and the other proponents of the so-called ‘Chinese Box’ argument may have a point.

Even the idea of the Blue Brain Project to try to emulate the architecture of the brain may be misconceived. It is even possible (though I think we should still try; we’ve surprised ourselves before) that the human mind simply cannot understand itself. Since the time of Turing, AI researchers have thought that with just a bit more computing power they could solve the problem, and so far at least, nothing.

Remember, in the 18th and 19th centuries determinism ruled the roost to such an extent that by the end of the 19th some physicists thought the end of physics was close at hand. Then along came a certain German Jew, smart enough to get out before Uncle Adolf could get his murderous paws on him and wham! The paradigm was shattered utterly and a whole new universe (and possibly universes) lay before us, literally quite bewilderingly different. To such an extent that the smartest minds of the human race are still wrestling with its concepts. This could happen again and if Thomas Kuhn and his philosophy of paradigm shifts has any validity, probably will.

I beseech you, in the bowels of Multivac/Deep Thought, think it possible that you may be mistaken and that the answer to life, the universe and everything is not simply 42.

Whoops, Chinese Room. Mixing up philosophical arguments and Kung Fu movies now!

@ Taiwanlight

“Even the idea of the Blue Brain Project to try to emulate the architecture of the brain may be misconceived. It is even possible (though I think we should still try. We’ve surprised ourselves before) that the human mind simply cannot understand itself. Since the time of Turing AI researchers have thought with just a bit more computing power they could solve the problem and so far at least, nothing.”

Agreed, I am not convinced that this project will be successful, although we Humans should attempt it anyhow. What is Markram expecting, the Ghost in the machine, (mind not consciousness!), to miraculously appear? I think not; rather it will remain empty yet perhaps responsive? Hence the need and requirement for software and programming? Non-biological that is?

So we are ALL in agreement here on the importance of Qualia then? Is this not the real “Hard problem”?

Have to still disagree regarding symbolism; although there are many “feelings” we cannot quite describe, the English language comes closest and is most nuanced?

Think of some thing that you do not ascribe a label to and let me know?

“Think of some thing that you do not ascribe a label to and let me know?’

Easy enough. There are an infinite number of things I do not ascribe a label to.

To give one fairly famous example, English has only one or two words for snow where Inuit and other languages have many more (though, as we will see, there is some debate about this; it does not affect my point).

From Wikipedia: “Eskimo words for snow”

‘The “Eskimo words for snow” claim is a widespread, though disputed, idea that Eskimos have an unusually large number of words for snow. In fact, the Eskimo–Aleut languages have about the same number of distinct word roots referring to snow as English does, but the structure of these languages tends to allow more variety as to how those roots can be modified in forming a single word.[1][2] A good deal of the ongoing debate thus depends on how one defines “word”, and perhaps even “word root”. Since at least the 1980s[citation needed] certain academics have advanced the idea of the “Great Eskimo Vocabulary Hoax”, suggesting that the fact that number of word roots for snow is similar in Eskimoan languages and English proves that there exists no difference in the breadth of their respective vocabularies to define snow. Other specialists in the matter of Eskimoan languages and their knowledge of snow and especially sea ice, refute this notion and defend Boas’s original fieldwork amongst the Inuit of Baffin island.[3]
Languages in the Inuit and Yupik language groups add suffixes to words to express the same concepts expressed in English and many other languages by means of compound words, phrases, and even entire sentences. One can create a practically unlimited number of new words in the Eskimoan languages on any topic, not just snow, and these same concepts can be expressed in other languages using combinations of words. In general and especially in this case, it is not necessarily meaningful to compare the number of words between languages that create words in different ways due to different grammatical structures.[1][4][5]
Opponents of the “Hoax” theory have stated that Boas, who lived among and learnt the language of the Baffin islanders, did in fact take account of the polysynthetic nature of Inuit language and included “only words representing meaningful distinctions” in his account.[6] Further they state that the non-polysynthetic language of the Sami people actually includes around 180 snow and ice related words and as many as 1000 different words for reindeer.’

So there are many kinds of ice and snow I do not label. If I have to describe them I struggle with adjectives and qualifications: ‘uh, a kind of loose-packed powdery sort of snow’, etc., where an Inuit might make do with one word, so in this case at least English is not the more nuanced language. The Inuit, as the philosophers might put it, does one kind of speech act, I another – he quite easily and I struggling – so there is much more to language than mere labeling. And even when we label, it’s not like simply sticking a piece of paper on something. We often use the same word for many things, and different words for the same thing (or very similar things). Sometimes we use a word with a different tone, e.g. ‘You? You have a car?’, implying it’s amazing or irritating that the person has a car. This is very different from, say, ‘Oh? You have a car do you?’, simply indicating mild surprise.
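
The structural point in the quoted passage can be made concrete with a toy Python sketch (the hyphenated suffixes are invented placeholders, not real Eskimoan or Sami morphemes):

# One polysynthetic "word" built from a root plus modifying suffixes,
# versus the same concepts spread across an English-style phrase.
def polysynthetic(root, *suffixes):
    return root + "".join(suffixes)            # a single long word

def analytic(noun, *modifiers):
    return " ".join(list(modifiers) + [noun])  # several separate words

print(polysynthetic("snow", "-loosepacked", "-powdery"))  # one "word"
print(analytic("snow", "loose-packed", "powdery"))        # a three-word phrase

Counting “words” across the two schemes is then comparing apples and oranges, which is exactly the caution in the quoted passage.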

There are also many words in languages that cannot be described as labels; pronouns, linking words, adjectives, adverbs and many more. And words that are used as labels are often used in several other different ways (and that’s leaving out coinages).

So if you’ll pardon the pun, I think you’re skating on thin ice.

 

I am curious if we can distinguish between “actual” qualia and “simulated” or “virtual” qualia. Sort of like a program jamming the Turing Test, where it mimics a human but isn’t conscious like one. I mean, by its very definition, virtual and actual are the same in appearance. Sort of like this:

https://www.facebook.com/photo.php?v=844600288884999&set=vb.218304001514634&type=2&theater;

