Is AI a Myth?
Rick Searle   Nov 30, 2014   Utopia or Dystopia  

A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI - back in the early 1980’s - John Searle. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the guy who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to start at least thinking about - even if there’s not much to actually do regarding that latter question for a few decades at least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just average those dates we’re out to around 2083 before human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about - a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out, becoming part of a much larger argument, one including many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as currently common false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything - including our metaphors. AI researchers stuck on the brain metaphor might fail to develop other good metaphors that help them understand what they are doing - “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th century ideas about “electronic brains”, and the public is at risk of anthropomorphizing their machines. Such anthropomorphizing might have ugly consequences - a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier’s critique of AI is actually deeper than Jordan’s because he sees both technological and political risks from misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in a similar “AI winter” to the one that occurred in the 1980’s. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of private capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks - which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far more scary. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies - Google, Facebook, Amazon - are essentially just algorithms. Some of the same people who have an economic interest in us seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. The result is technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad” - a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say with anything but an echo of lost religious myths that we will be entering the “next stage” of human or “cosmic evolution”.

Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t allow ourselves to fall into this trap by insisting that our machines continue to serve the broader human interest for which they were made will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts should they ever emerge from anything other than our nightmares and our dreams.

Rick Searle
Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history for Delaware Valley College and works for the PA Distance Learning Project.


I should provide a link to Luciano Floridi’s rejoinder to John Searle:

@ Rick..

Some very important points. I took a peek at the Jaron Lanier piece earlier yet stopped short of reading the whole thing - I will revisit later.

We all presume that the “New age Capitalist” will be the owners of these emerging robots, gradually replacing the Human workforce, (depending on innovation), and that these “owners” will not necessarily be the industrialists nor corporations utilizing these robotics technologies?

This for sure is a danger, and so too caution of government sponsored funding that aids Oligarchy either directly or indirectly to support such ownership, control and patents, (IBM loves patents and collects them like stamps?)

As with the historical “positives” of innovation arising from competitive markets and past/contemporary technology - this ownership may be unavoidable unless there is a review of current laws on patents and timescales for holding them, (ownership of intellectual property is also something Lanier has voiced pro opinions about).

There is certainly a real danger of indirect public consensus, ignorance and sanction resulting from this blind-sight over ownership and yet more control for Oligarchy. And perhaps it is a very good idea to ask ourselves “who owns this technology” every time we read of a new innovation?

However, there is a flipside to this argument over what defines Artificial Intelligence, and I want to focus on this?

Yes, I am a sceptic also in ascribing General intelligence and understanding to machines - but I do not necessarily see the term “Artificial Intelligence” as mis-applied.

“Artificial Intelligence” by definition applies to the whole spectrum of machine learning capabilities up to “Artificial General Intelligence”? This includes all sorts of self correcting algorithms employed in industry and high-tech weapons, including missile and new pilot-less strike aircraft.

Technology and services like Watson that access the entire information of the internet as oracle, effectively and most efficiently establishing this “memory” information access as the necessary component to support the mechanism of “intelligence” - should also not be underestimated?

Now the crux, (and philosophical debate which proposes the right questions for science to investigate), is.. What is Intelligence?

Reverse engineering the brain and neurons to build artificial General intelligence may well be a wild goose chase in the same vein as attempting to scientifically discover “Consciousness” - which may never reap results because “only a Conscious entity/mind can “witness” what it deems as the importance of both its own consciousness and intelligence” - BOTH! of which are effectively a delusion?

In other words - by reduction - just as with delusions regarding the speciality of Consciousness, the speciality of Human intelligence is merely (bio)mechanism and no different in nature?

So again with the question - What is intelligence? And thus what qualifies as intelligence?

“A.I” applied as contemporary and colloquial expression for all manner of applications, goods and services is not necessarily counter-productive -
counter-intuitive in some cases maybe yes, yet applied to Watson, pilot-less aircraft and other, I think not - the measure as compared with Human General intelligence is obviously “creativity” or moreover, application of solutions and deductions at speed?

Intelligence requires.. ?

1. Memory
2. Application of solutions/ideas no matter how ridiculous, the faster the processing the better.
3. Evaluation of positive/best results from actions, (comprehensively and effectively negative feedback)
4. Memory

Did I say memory already?
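The checklist above can be sketched as a toy generate-evaluate-remember loop. This is purely illustrative - the function name `feedback_loop` and the toy "environment" are invented for the example, not drawn from the article:

```python
import random

def feedback_loop(evaluate, candidates, rounds=500):
    """Toy intelligence loop: (1) memory, (2) fast proposal of candidate
    solutions however ridiculous, (3) evaluation via feedback, (4) memory."""
    random.seed(0)                            # deterministic for illustration
    memory = {}                               # requirements 1 and 4: memory
    best = None
    for _ in range(rounds):
        c = random.choice(candidates)         # 2: propose a candidate
        if c not in memory:
            memory[c] = evaluate(c)           # 3: feedback on the result
        if best is None or memory[c] > memory[best]:
            best = c
    return best

# Usage: the "environment" rewards guesses close to a hidden target of 7.
best = feedback_loop(lambda x: -abs(x - 7), list(range(20)))
```

The loop is dumb at every step - random proposals, a bare score - yet the combination of memory and feedback still homes in on the target, which is roughly the commenter's point.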




“We all presume that the “New age Capitalist” will be the owners of these emerging robots, gradually replacing the Human workforce, (depending on innovation), and that these “owners” will not necessarily be the industrialists nor corporations utilizing these robotics technologies?”

It’s an interesting question. I suppose we should look at the power groups that have gained from the current round of innovations and extrapolate from that. Tech companies, yes, but also financial firms and security services. But actually any industry will “benefit” from not having workers to pay and provide benefits for. It’s a tragedy-of-the-commons kind of benefit, because once a large enough number of them go this route there won’t be anyone left to buy their stuff.

“Now the crux, (and philosophical debate which proposes the right questions for science to investigate), is.. What is Intelligence?”

I think anything like human intelligence in a machine is a LONG way off, and frankly I don’t even think we’re at the stage where we can properly define it. The scary thing is that it probably doesn’t matter. Eventually they’ll have very clever but nonetheless consciously empty algorithms doing everything from running our scientific experiments to writing our novels and composing our music. That is not to say that it is inevitable, the “next stage of evolution” or any such thing. It’s a choice. We can decide what exactly the boundaries of such a world are and whose benefit it will most serve.

Very good well reasoned piece. Enjoyed it immensely.

What is Intelligence?

“When asked their opinions about “human-level artificial intelligence” — aka “artificial general intelligence” (AGI) — many experts understandably reply that these terms haven’t yet been precisely defined, and it’s hard to talk about something that hasn’t been defined. In this post, I want to briefly outline an imprecise but useful “working definition” for intelligence we tend to use at MIRI. In a future post I will write about some useful working definitions for artificial general intelligence.”



It’s strange, someone brought up the same issue over at my blog, and since I’m responding to this on my lunch break I’ll give the same answer to you both.
I don’t think consciousness is necessary for behavior to be intelligent, meaning “an agent’s ability to achieve goals in a wide range of environments”.

We’re experiencing this with our machines, which aren’t conscious in any sense, but can often outperform us in some tasks that require intelligence, e.g. chess. But we should have known that consciousness wasn’t necessary for intelligent behavior all along. Neither an immune system nor a bee colony is conscious (rather than aware), but they certainly show intelligent behavior.

I guess that leads to the question of what I mean by consciousness? To me consciousness is the “what am I doing?” - a situational awareness we share with many animals (and perhaps are now beginning to share with some machines). Self-consciousness is a “higher” version of this: it is the “what are you doing?”, my own explanation of my situation and behavior as if I were “outside” of myself.

In my view, we are only just beginning to crack the nut of machine consciousness and are nowhere near obtaining self-consciousness, which would require semantic understanding of language.

I think we could still have super-intelligence without consciousness, but there would be gaps in such an intelligence’s understanding between its “internalized” world and the real world that would make it much less threatening than some think. The real problem with a super-intelligence that wasn’t conscious would be its vulnerability to being hacked by human beings who knew very well what they were doing.

@ Rick.. maybe not so strange nor coincidence that someone else raised the question?

“But we should have known that consciousness wasn’t necessary for intelligent behavior all along. Neither an immune system or a bee colony is conscious (rather than aware), but they certainly show intelligent behavior. “

Indeed, and my point was to differentiate between consciousness and intelligence, and highlight the speciality and uniqueness we Humans perhaps unnecessarily attribute to both?

The “Chinese room” dilemma proposes that without Self-reflexivity, (first/third person), a machine has no sense of “meaning” and cannot “question” why or how concerning its own actions - However, this should not distract us from an AI’s capability or efficiency for solution solving and deduction?

Your article raises important points for reflection, but your point that AI success is supported by the hard work and historical knowledge of Human minds is cast in the negative, whereas this is in fact a positive?

So it is with Humans also. Isolate any Human from birth and they will not even be able to produce fire, (although hopefully they will be intelligent enough to work it out - eventually)?

This is the teleology of “world spirit” that Hegel proposed - intelligence “stands on the shoulders” of historical knowledge, it is no detriment to AI that it utilises and relies heavily on this collective knowledge/information, as we Humans also rely on our evolving technology?

Detective Del Spooner : “Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?”
Sonny : [With genuine interest] “Can you?”

I would say that all complex and intelligent systems rely on negative feedback, from a Bee successfully landing on a flower, to hive minds and behaviour, to the thermostat controlling room temperature on your wall - these phenomena comprise the ability for self-correction, be they alive or inert, and this is a key component of intelligence?
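The thermostat case can be sketched in a few lines - a hypothetical illustration (names invented here) of negative feedback, where each correction opposes the measured error:

```python
def thermostat_step(temp, setpoint, gain=0.3):
    """One tick of negative feedback: the correction opposes the error,
    pushing the room temperature back toward the setpoint."""
    error = temp - setpoint          # positive when the room is too warm
    return temp - gain * error       # cool when too warm, heat when too cold

temp = 25.0
for _ in range(30):
    temp = thermostat_step(temp, setpoint=20.0)
# after repeated corrections, temp has settled very close to 20.0
```

Because the sign of the correction is always opposite to the sign of the error, the error shrinks on every tick; that sign flip is the whole trick of negative feedback, whether in a wall thermostat or a homing bee.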

Also, immune systems comprise white cells that are not only conscious/aware - they also possess short and long term memory, much like neurons?

ps. That link to Luciano Floridi’s reflection is excellent!



“Your article raises important points for reflection, but your point that AI success is supported by the hard work and historical knowledge of Human minds is cast in the negative, whereas this is in fact a positive?

So it is with Humans also. Isolate any Human from birth and they will not even be able to produce fire, (although hopefully they will be intelligent enough to work it out - eventually)?

This is the teleology of “world spirit” that Hegel proposed - intelligence “stands on the shoulders” of historical knowledge, it is no detriment to AI that it utilises and relies heavily on this collective knowledge/information, as we Humans also rely on our evolving technology?”

I am not sure Hegel would appreciate the comparison, but yes, I do find problems with this model. First off, it means humans don’t get paid. We’re moving into a system where people perform what used to be called work, but all the value is captured by someone else who is able to aggregate that work. I suppose we could solve this with a universal basic income, but I see little chance of it when even food for the poor is labeled “handouts”. At some point we’re going to reach a place where the majority of people who have performed the work for these things are dead and gone - an unexpected version of silicon immortality indeed.

The other problem I have with it is that it kind of breaks the essence of what communication is all about. Any kind of art, and writing especially, has this “message in a bottle” quality. A writer is actually trying to say SOMETHING even if he’s never fully understood. An algorithm, though, isn’t trying to say anything - it’s like a very sophisticated form of ouija board where the person on the other side of it thinks they are being spoken to. I suppose the programmer is actually trying to say something, or better yet the company that hired the programmer, but then we’re turning all artists into programmers - and what happens when the algorithms program themselves?

@ Rick.. That’s not my point

The Hegel analogy is a bit of a stretch, yet close enough to make the point that historical knowledge supports perceived increasing and evolving intelligence in Humans, which is arguable? And that AI is just as suited to this use and application of “world knowledge/information” - In this teleology of world history the evolution of knowledge/information is that which supports Human continued existence, not the majority of individuals?

Sure enough your points regarding Human worth and future work are important, and this is something Lanier is also concerned about. Universal Basic Income can help solve these issues, and it will - I can see no other option other than increasing world poverty/inequality driving societies to conflict and civilization to war?

The quote from “I Robot” above was to highlight Human misperception as to their own individual abilities as compared with unique genius and the possibilities of AI - it was not a sanction for AI to supplant Human artistic endeavours. In fact I would propose that with aims to end poverty and UBI Humans will have much more leisure time to pursue artistry and education/knowledge?

Humans are innovating AI as necessary tools, (arguable), to support growing world population and quality of life for all? - Oligarchy understand ownership as supporting their positions of power, and as you hint, see technology as just more means to the same ends?

Regardless of whether we argue Watson or complex systems as inherently intelligent, it is still Humans that derive meaning from use of these tools, and this is something that will never change/alter?

Some, even Kurzweil, are eager to downplay the function and abilities of Watson as not truly intelligent AI, and this is the argument - what defines our notions of intelligence? Yet the possibility of having Watson as personal assistant for each of us on our smartphones, (costs and bandwidth permitting), would be a remarkable achievement, and a progressive move towards Human/machine interfacing/learning/education/symbiosis?

What is intelligence?
Not a “thing” in and of itself? Nor is it inseparable from (World) knowledge?



We are talking past each other, I think, but at least we’re talking - unless you’re a bot. ;>)

This is probably our main difference:

“Humans are innovating AI as necessary tools, (arguable), to support growing world population and quality of life for all? - Oligarchy understand ownership as supporting their positions of power, and as you hint, see technology as just more means to the same ends?”

AI, as it exists today, is largely being built to support traditional centers of power and wealth and as a human “replacement”. It’s not the ugliest of human occupations that are being automated, but high-value middle class occupations. Kevin Warwick thinks by the end of this century non-“upgraded” humans will end up in a “zoo” or dead. He is not upset about it:

I can imagine a world where AI was used to serve humanity, where dumb robots did jobs below the dignity of human beings, and we were helped by intelligent AI to support a just society like in the novels of Iain Banks. Alas, this is not the world we are creating and just allowing the technology to unfold will not take us there.

@ Rick.. Yes it does feel like we are talking past each other.

Again, your points are important, your scepticism I share. But where we disagree is with this exploitation of Human intellect, which is a real concern yes, but my point is that collective and historical knowledge is something we Humans are all privy to, and also take for granted? So AI relies heavily on this also - so what? How else?

Quid pro quo vs exploitation ..

Yes, intellects are being deployed and more likely than not may in future be unemployed - yet there are more ways than jobs and wages to reward the hard efforts of Humans?

We could say that every scientist that has a job today owes a debt of more than gratitude to Einstein and others, (and no doubt Oligarchy made some tidy indirect profit someplace from these persons also)?

Who are the real Looters?
A capitalist is not an industrialist nor an engineer?
A Banker produces no material thing of worth, but earns easy profit from little or no physical work?
What did the Romans ever do for us?
Render unto Caesar that which has his face onnit, and only this?

Some AI does not qualify - agreed! But don’t ditch the baby at such an early age!

What is intelligence?

If Humans were as intelligent as they make out, then they would re-evaluate a broken/antiquated socioeconomic system, prevent the exploitation of the “few”, save the planet, and invest in the future of security, prosperity and welfare of all.. And use AI to help us get there?




I hope I am being clear that I am not suggesting that we be like the inhabitants of Samuel Butler’s Erewhon and smash our machines. AI even as currently conceived will have many good uses including mining the human past and perhaps even expanding our intellectual and artistic horizons. What I object to is the misunderstanding that these machines themselves are intelligent rather than being just our own version of the Mechanical Turk with a human being hidden behind them. They are not our “children” but a mask over other human beings.

We have a real tendency to occlude our understanding of the reality of the world, from the ugly Dickensian conditions from which our goods and food come to our use of violence. The way we think about AI now is another such occlusion, which we need to avoid or escape.

Amazon’s Mechanical Turk workers protest: ‘I am a human being, not an algorithm’

“Users of Amazon’s Mechanical Turk crowdworking marketplace have launched a Christmas letter writing campaign to the company’s founder and CEO, asking him to stop selling them as cheap labour and to give them tools to represent themselves to employers and the world at large.

Mechanical Turk launched in 2005 as a way for companies to farm out digital tasks that computers find difficult but humans breeze through, such as transcribing, writing and tagging images.

“I am a human being, not an algorithm, and yet [employers] seem to think I am there just to serve their bidding,” writes Milland in her letter to Bezos. Amazon does not set minimum rates for work, which can pay less than $2 an hour, and takes a 10% commission from every transaction. Employers can even refuse to pay for work altogether, with no legal consequences.”

Read the rest here..

Dats Capital!



Amazon’s Mechanical Turk is like many of the AI systems we use- though there the source of the intelligence is explicit.

An addendum: they may have found a reason why AI methods like deep learning perform like the human brain - both may be linked to the same physics:

If that’s proven correct it would run counter to Jordan’s argument that what AI does and the human brain does are not really connected.
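For intuition about what “renormalization” means here, one toy analogue (my own illustrative sketch, not the method from the linked work) is coarse-graining: repeatedly block-averaging a grid throws away fine detail while preserving large-scale structure, much as pooling layers do in a deep network:

```python
def coarse_grain(grid, block=2):
    """Replace each block x block patch of a 2D grid with its mean,
    discarding fine detail while keeping large-scale structure - one
    coarse-graining step in the spirit of renormalization."""
    h, w = len(grid) // block, len(grid[0]) // block
    return [[sum(grid[i * block + di][j * block + dj]
                 for di in range(block) for dj in range(block)) / block ** 2
             for j in range(w)]
            for i in range(h)]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 "image"
small = coarse_grain(img)                                # 2x2 summary
```

Each application halves the resolution but keeps the broad gradient of the original, which is roughly the sense in which both physics and deep learning are said to extract relevant large-scale features from noisy detail.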

And yet another person thinks “killer” AI is not the real problem we should be worried about:

@ Rick..

Thanks for the link, interesting article. I don’t fully understand the principles, and don’t expect to. However, this “renormalization” seems to make much sense in biological brains. Our visual attention is usually drawn to movement first, before we appear to make sense of Cats, (or Tigers), in bushes and drill down to recognise these via their visual characteristics?

For animals and things we don’t recognise from experience, we can also “reflect” on our own thoughts as we analyse and process what we are seeing, (at least I do, but I might be labelled weird?)

Also, have you ever had an eye test with those mosaic pictures constructed from pastel coloured spots - progressively it takes more time to discern images on more difficult cards - one can almost “feel” oneself “normalizing” the image and making sense of it in the brain/mind? (not that these thoughts are “feelings” of course, and I am still adamant these thoughts are totally impartial and comprise no feelings of qualia)?

Even if machine algorithms do not function exactly as in biological entities, similarities help us to understand our own brains/minds. I would also expect the drive for greater efficiencies for algorithms and neural nets far less complex in nature than Human brains, and intelligence engineered that is truly “Artificial” as defined by nature?

I still hold “faith” that a truly democratic Internet has the potential for a global neural net and for massive parallel processing across multiple super-computers and specialist AI data mining. With the memory and storage of all Human knowledge text/sound/visual effectively accessed via nodes, much like memory in neurons and biological brains.

I hold the position that all memory stored in my neurons is inert and that this information is quite useless accessed independently, and that my neurons, whilst independently aware/conscious are totally impartial - even down to the neurons which cause chemical processes acting out and that presume to make me “feel” emotions and mood triggered by circumstances and past experiences?

Of course it is my “emergent”, (Phenomenological), mind which attempts to give meaning to these processes and effects of chemicals and hormones released?

Thus I can envisage machine intelligence using information, data mining in a totally impartial way similar to my “independent” neurons, and without any real sense of meaning, (the “complexity” of a “system” defines its level of intelligence)?

I am guessing that we may be in agreement that “intelligent” systems”/processes may still exist in objective reality and without any real “Human” meaning or value applied to such intelligence? And that these may still prove to be both useful and efficient?

Aside from the present scaremongering regarding paper clips and spam Terminators there are sinister applications for AI, especially by govt agencies - imagine intelligent bots scouting the internet and vigilantly blocking or deleting information and knowledge 24/7 and acting as independent agents?

It is historically much easier to destroy than create/recreate?



“I am guessing that we may be in agreement that “intelligent” systems”/processes may still exist in objective reality and without any real “Human” meaning or value applied to such intelligence? And that these may still prove to be both useful and efficient?”

Yes, I believe there are forms of intelligence other than our own, some of which we cannot even imagine.

“Aside from the present scaremongering regarding paper clips and spam Terminators there are sinister applications for AI, especially by govt agencies - imagine intelligent bots scouting the internet and vigilantly blocking or deleting information and knowledge 24/7 and acting as independent agents?”

That’s one dark scenario- here’s another: I am sure you’ve heard about how the Chinese communist party has an army of bloggers whose job it is basically to find, challenge and provide official narratives for stories circulating in China. It may be that with an army of algorithms the state wouldn’t need censors at all - it would just flood the system with alternative pro-state news until you’d have to dig forever to find alternative versions of events. Wait a second…. aren’t we already there? 

I can add something to this dialogue that might change things a bit.
In the fastest way I can:

TURING: “Can machines think?”
COLIN:  “Yes”

REST OF TECH WORLD WRONG TRACK FOR 60+ years: “Can computers think?”
COLIN: “Never”.

We are not answering Turing’s question.

If you build the right machine it will be conscious and think like us. A whole ecology of intellect, like a natural ecology, is possible. I am building artificial brain tissue. It will do AGI for real because the same physics is doing the same thing. No computing whatever. I build AGI the same way the Wright bros built a plane.

I know that computers will never do AGI and never had any hope of doing it. They are only a threat in the same way any automation is a threat. Now even Hawking is weighing in on the hysteria. None of these people have any clue about the reality of actually doing it. Are we going to get another AI winter because of this garbage? Enough already. Lanier is right. But he also gets the real solution wrong like everyone else. It won’t happen with computers and it’s not the risk/reward profile everyone’s thinking about.

<very grumpy>

@ Colin..

I take it I have the correct Colin?

The modern phlogiston: why ‘thinking machines’ don’t need computers

You appear to imply that a non-material form of AGI cannot exist, and yet non-material AI in computational systems does exist? I don’t quite understand this difference, as the goal must be to create self-learning intelligence processes regardless of materialism or the substrate they inhabit?

Sure enough, I agree that simulated systems will always be replications and representations of the material world, but that is the whole point isn’t it - to build “intelligence” that need not necessarily impose upon the real world at all? - the expression of software solutions to physical actions is indeed via machines and robots, (i.e. the aerobatics of quadcopters, etc.)?

Do you agree that “Intelligence” is non-material and “emergent” from complex systems which incorporate feedback and memory to help make comparisons of “what was” to “what is” and then form solutions/reactions, no matter how impartial and non-thinking these may actually be? (physical system loop delays can also be deemed as a form of simple simulated memory - for example using integral/reset time in a control system can affect the efficiency and response of a system - whilst this is not evidence of intelligence itself, the whole system could be deemed as “intelligent” if it then applies its own tweaks and adjustments)?
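The control-loop point about feedback, memory and comparing "what was" to "what is" can be made concrete with a minimal proportional-integral (PI) loop sketch; the gains, time step and toy process below are my own illustrative assumptions, not any real controller:

```python
# Toy PI controller: the integral term is a simple form of memory,
# accumulating past error so the loop corrects persistent offset.
def make_pi_controller(kp, ki, dt):
    state = {"integral": 0.0}
    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt          # "memory" of past error
        return kp * error + ki * state["integral"]
    return step

# Drive a simple first-order process toward a setpoint of 1.0.
controller = make_pi_controller(kp=0.8, ki=0.5, dt=0.1)
value = 0.0
for _ in range(200):
    # Process dynamics: responds to control effort, with some self-decay.
    value += 0.1 * (controller(1.0, value) - 0.2 * value)
print(round(value, 3))  # settles near the setpoint of 1.0
```

The integral term is exactly the "simple simulated memory" mentioned above: without it the loop would settle with a permanent offset, and tuning it changes the efficiency and response of the whole system.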

Do atoms in metals and other materials express memory of form and perhaps a rudimentary expression of “intelligence” by changing their structure under duress/temperature change? These phenomena are obviously constrained by physics/mathematics - and mathematics perhaps underlies all complex system intelligence?

How do neurons connect to each other? Blue Brain Project opens new insights


Hi @CygnusX1,

Perhaps I can address your observations as follows:

You stand in front of three identical human forms. H human. R replicant. E emulant. You hold a baseball bat. Next to you is your very young daughter and police. E and R are robots. H and R are conscious. H and R are conscious because they have identical material physics in their craniums. H is organic. R is inorganic. E’s cranium is filled with computer-material physics. Not brain physics. I say this as an expert in it. I am literally building and studying the inorganic version of the physics. Artificial brain tissue is not a computer.

If you both think E can be equivalent to H and R then you are making an assumption of some principle (say X) that does not exist and never did. It is a logical error of the ‘map/territory’ kind unsupported by any experimental work (more on this below).

Now I hold that H & R can be _scientists_ and E will never become a scientist. Because E is not conscious it’ll never perform as well as H & R, because consciousness is, literally, scientific observation (in context of course). Primary sensory consciousness is an input. You cannot compute inputs. Obviously. There’s not enough information in the sensory measurement to construct what consciousness provides because, as an input, it adds to it in the process of being facilitated by the sensory feeds.

Scientific behaviour is unique because it is directed at radical novelty in the natural world and only H & R access the (unknown) natural world itself. No matter how sophisticated, E accesses a model of the natural world grounded in the consciousness of its programmers and driven instead only by sensory feeds, not consciousness. That is how you scientifically tell E from the other two. Scientific behaviour itself.

E has the experiential life of a dreamless sleep because there’s no consciousness physics there and it doesn’t have a self or even know it’s in the universe with H, R, you, daughter and police. That is the result of principle X being false. To think otherwise is to assume principle X is true without proof (more below).
============================== ASIDE
I wrote an entire book on this.
Hales CG. 2014. “The Revolutions of Scientific Structure”
I’ll send pre-print to anyone that wants it.

Continuing our ditty:..... you systematically beat H, R and E to a pulp with the baseball bat.

What happens? You get charged with 2 counts of murder (H, R), one count of property damage (E) and 3 counts of child abuse. If H or R or you run amok it’s criminal law. If E runs amok it’s an industrial accident. The entire issue rests upon the consciousness of the three. No matter how authentically E might react to its bashing, to the point of damaging your daughter, E feels nothing. E is not conscious and we know it.

You can test for consciousness in a 3rd person scientific way without any human involvement (chapter 12) by asking each to be a scientist on something proved first to be unknown and in which none of them have any prior experience. H & R will win every time in this. Proof: they deliver a ‘law of nature’. E will fail. In the formal test E will fail to escape the test rig and run out of energy. You are not proving knowledge. You are proving an ability to be trained by the natural world.

Back story. In the late forties/early fifties computers arose. For the first time in history a brand new community of workers that had never done actual science before split off from real science where they remain today, a fossil monument to a colossal instance of presupposition born in discipline isolation. The 300 years of empirical method (replication, not emulation) was invisible to a new community of investigators. Experimental theoretical science (emulation, computational exploration of models) is not empirical science (replication). We learned about fire physics by building fire. Not in a combustion simulator. We learned about flight physics by flying. Not in a flight simulator. In AI ..... obvious isn’t it?

None of the AI workers realise this and none of the rest of science realise the mistake, and there’s no training of scientists that addresses it. When computers turned up no review of science itself took place. Indeed I predict that this whole issue will result in a major science self-governance overhaul and a new understanding of science itself. That’s what the mistake will tell us and do to us. Not out-of-control-AI catastrophe.

The entire risk/reward profiling and all the hysteria about AI-armageddon has completely missed the reality of the technology kind and its impact.

Look at the way AI is being discussed. In the 1st sentence the word ‘computer’ is used! Why? This has got to stop. For the first time in history a computed model of the natural world is being mistaken for the natural world. If principle X was true then a computed model of fire would burst into flames. No, you say? Says who? Principle X? What principle X?

If anyone objects to my claims here, then I say OK. Prove it. How do you prove it? Replication of physics must be contrasted with a computer model. Guess what is the one thing that has never been done in the history of neuroscience/AI? Replication (please don’t cite neuromorphic chips at me – they do not replicate. They emulate in hardware.) So I don’t claim anyone to be wrong. I claim them not to know. Formally, nobody knows, because the replication needed to determine the truth of principle X has not been done. I.e. we have not done for intelligence what the Wright bros did for flight. Unique in the history of science. We may be forgiven for not doing this earlier because of fabrication limitations. Now, in 2014, we have no such limitation. Yet we continue to do what amounts to the greatest/longest failed test of a hypothesis in the history of science: that principle X is true. Way longer than the Higgs Boson. Way longer than relativity. And we keep assuming it’s true with no evidence, no historical precedent and by generationally trained-in presupposition.

I am trying to get people to think again. There is a real schism operating here. It doesn’t mean computers aren’t useful. It just means that human-level AGI won’t arise by computers, but by another different kind of machine.

So when I see ignorance operating at the peak of a very public (Hawking) discourse and potentially damaging to AGI investment…it makes me very frustrated and angry. It’s time we started dealing with reality and questioned _ourselves_, not panic about poorly thought out futures.
The Human Brain Project will be a neuromorphic E and never an R. It will underperform in mysterious ways that are not mysterious if you look at it from the perspective I have given. Its underperformance, and indeed the failure of AGI for 65 years, is predictable. Expected. Normal. It’s because we haven’t started building real AGI yet. We are sitting in a flight simulator waiting endlessly to fly, building ever bigger simulators when it doesn’t work.

The physics in question is electromagnetism (originates at the neuron membrane). That’s all there is to a brain. Replicate the same EM field _fully_, inorganically, and you get all the signalling (action potential and ‘ephaptic’ coupling) and consciousness automatically as emergent phenomena. The mechanisms (even a 1st/3rd person reference frame shift) are here:

Hales, C.G. (2014). The origins of the brain’s endogenous electromagnetic field and its relationship to provision of consciousness. Journal of Integrative Neuroscience 13, 313-361.

Too many words. I know. Thanks for the opportunity to articulate this. I know most of the readers here won’t want to hear it. 


Colin Hales


Based on your comments and after reading the link to your article that CygnusX1 provided, I think you are correct: if we want to achieve human-like AGI we’ll have to mimic the biological architecture of cognition.

What I wonder though, and I am not saying I think we should go down this route, only that I wonder, why we need to recreate an inorganic example when we already have a working form of general intelligence from biology? Why not use actual brain matter from living creatures- animals one would hope? What’s the reason we have to create a whole new substrate? AGI is different from flight in that recreating birds rather than an inorganic form of flight would not have gotten us very far. But isn’t intelligence different from this? What we’re trying to do in inventing AGI is something we have done with animals for thousands of years- harness a natural capacity- in this case intelligence- to do what we define as meaningful work.

@ Rick Searle

You’ve touched upon a key point. It’s the idea of ‘essential physics’. Physics that, if you don’t include it in the replication, causes a loss of function. Perhaps total. Perhaps partial.

Yes. Artificial flight replicates the essential physics (flight-surface/air interaction) of flying. It doesn’t replicate the bird. Compute the flight physics and you do not get flight.

In artificial fire we retain the essential physics of combustion, not the forest fire we saw. Compute the combustion physics and you do not get combustion.

In artificial hearts we retain the essential physics of the heart (pumping action), not the cellular basis of the pump. Compute the pump (Navier stokes) physics and you do not get a pump.

In artificial kidneys we retain the essential physics of filtration, not the cellular basis of the kidney. Compute the filtration physics and you do not get a filter.

In artificial stomachs we retained the essential physics (chemistry) of digestion, not the cellular basis of it. Compute the digestion physics/chemistry and you do not get digestion.

And so forth. Science is a blizzard of these examples and in every case we retained the essential physics to replicate some kind of actual function. In every case we abstracted away the original cellular/biological substrate in favour of an inorganic replication of the essential physics.

My question is “Why should intelligence be any different?” See my angle? My question has 300 years of precedent. Assuming “Isn’t intelligence different from this?” (perfect way to characterise the issue!) is a presupposition; an unproved conjecture without precedent. For 60+ years this assumption, in effect, is that there’s no essential physics of the brain. That no abstraction, however extreme, will alter function. Assumed without a shred of a scientific principle. Unique in the history of science and goes against 300 years of practice.

So exactly where does the “Isn’t intelligence different from this?” question get so much superiority over “Why should intelligence be any different?”. So much superiority that for 60+ years it results in the 100% confinement to emulation and nil of the alternative (replication).

So let’s test the conjecture. If “Isn’t intelligence different from this?” is true, then what you’d do is (1) replicate what you think might be the essential tissue physics and (2) contrast it with a computed model in some revealing context. Like everywhere else in science. Right?

Yet that very test we have not yet done. Ever. So the hypothesis “Isn’t intelligence different from this?” is not just unproved. It’s completely untested. Ever. Such a conjecture is not proved by presupposing it true for 60+ years and failing non-stop. Which is exactly what is going on.

Don’t you find this situation weird? I do. It’s so obviously broken and logically compromised that I find I almost have to pinch myself that I’m even in conversations like this.

NOTE: Neuromorphic chips do not replicate the physics. They emulate using hardware. Totally different. Even if the voltages on the chip are the same (this does not replicate the EM fields – physics - responsible for the voltages), there is no replication. This is another misdirection operating in the area. This is a complex set of diffuse errors. I have isolated 11 of them. I have a paper in review about these as we speak.

I hope I get people to see what is going on. This area of science is seriously messed up.

Colin Hales

@ Colin..

Thanks for your comprehensive reply, your position is much clearer now.

“Too many words. I know. Thanks for the opportunity to articulate this. I know most of the readers here won’t want to hear it.”

Not at all – I think many, including myself, want to hear constructive debate against assumptions made about applied technology. I understand your points regarding H, R and E - however, your arguments for “materialism” do not necessarily explain the phenomenon of “emergentism”, nor what we deem/define as consciousness, and more specifically the expression/definition of “intelligence”? (I understand your argument does not necessarily concern these but rather the construction of “real” and valid intelligent machines, but the answer to the question “What is intelligence?” is still not clear)?

My aim is also to specifically deconstruct assumptions of principle X, of both consciousness and intelligence, (and also ultimately by reduction to electro-magnetism) – levels of “complexity” in systems are the key to the mysteries of both “presumed” consciousness and intelligence, and yet I cannot deny, the “proof is already in the pudding, (brains)”, that both these phenomena X appear real and valid and as “emergent” from the complexity of my material brain.

We agree on an important point I think.. that Human, AGI and H,R notions of “meaning” and applied “intelligence” are reliant upon consciousness – yet here is another swerve ball – which precedes the other? Does Human level intelligence rely upon consciousness or is it vice versa? And perhaps what we presume to be consciousness, (phenomenological), is rather complex “system” intelligence and feedback systems, (self-reflexivity)? – Moreover, perhaps the phenomenological “Self” and identity has evolved as executive function through some necessity, rather than by pure chance?

Regardless of this AGI argument, does AI exist?

I say it does, for example the complexity of the bees/hive and “natural” behaviours of individuals results in an emergent intelligent/efficient system that employs feedback for survival/success and future sustainability – for this we usually presume principle X to be Darwinian theory, not necessarily systems theory, for no other reason than that bees are biology?

How does a bird actually fly? How is it the bird’s nature to fly? With wings firstly obviously, but when a bird flies, its brain and body continually compensate, (negative feedback), for eddies and wind currents, and as subconsciously – a complex system process “evolved” in the brains of birds? Sure, the Wright bros., as opposed to Da Vinci, created a contraption that represented nothing like a bird and flew for a few yards initially, and has brought forth a history of Human Artificial(?) flying machines and technology – birds are not and will never be supersonic jets. Yet modern sleek and “stunted” heavy fighter jets cannot glide, as they are designed more like missiles with small wings and require an onboard computer to continually correct their flight independent of the pilot – is this an example of computer AI or merely application of engineering?

Your analogy of fire is also somewhat misleading, because your argument is again primarily concerning “materialism”. A simulation of fire in a computer game obviously cannot be the real thing as VR is detached from reality and the “means” to create fire with the necessary components – Heat, Oxygen and Fuel. Yet we are still debating about “intelligence” inherent in systems – and how to prove whether an AI system can be defined as intelligent although it be totally impartial, non-thinking and detached from “materialism”? I believe it can, maybe not to the level of AGI and as detached, perhaps never will be – yet this is also a good thing isn’t it? That we do not in our ignorance hope or accidentally create consciousness “locked in” to a machine – as this opens a whole other level concerning ethics and responsibilities?

Although the topical focus presently is specifically on AGI and this contemporary mythology as threat rather than benefactor, I feel there is still room to contemplate the differences between AI and AGI, how these differ, the difficulties of “re-creating” Human level phenomenology in machines, (assuming this doesn’t miraculously appear – yes scepticism!), and most importantly, the usefulness of “AI” and how these systems are presently providing real results and their future potential – Rick’s article is specifically titled “Is AI a myth” – I say no, it is not a myth?

Let me also qualify my last statement further..

“These phenomena are obviously constrained by physics/mathematics - and mathematics perhaps underlies all complex system intelligence?”

Many “Mathematicians” would be adamant that mathematics underlies all physical laws and grounding for scientific predictions, experimentation – and yet, in the same context as arguments against “simulation” hypotheses – mathematics is merely the way Human minds understand and reconcile the actual “physical” world and laws of our Universe and thus essentially does not underlie the laws of physics as we assume, but rather and perhaps it is vice versa? (or at least this further heresy is up for yet more arguments)?

@ Rick..

“What I wonder though, and I am not saying I think we should go down this route, only that I wonder, why we need to recreate an inorganic example when we already have a working form of general intelligence from biology? Why not use actual brain matter from living creatures- animals one would hope?”

Don’t get me started about “Brains in vats”, been there already, I still propose this as the shortest path to longevity and VR retirement? Connect me lame brain to the internet/VR environment “before” I expire – if you please, (or not, I leave it up to you?) However, not “Call of duty”, more green fields, blue sky, peace and tranquillity, daily chess games and philosophical discussions?… *sighs*



Perhaps the reason we confuse the emulation of thought with thought itself comes from philosophical dualism via Descartes. That and the fact that we have been able to do some things that mimic the results of human intelligence even with the flawed model- stuff like play chess, and drive cars.

Do the artificial brains you are working on change their structure as they interact with the environment? That to me seems like a major difference between what the brain does and other attempts to replicate the physics of living things. The action and change of muscles is important for movement in living organisms but isn’t essential to replicate for mechanical versions- things like cars and planes don’t have to be like horses and birds whose muscles develop through use. The brain is different though: how it develops through use is precisely one of the aspects we need to replicate.
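The kind of use-dependent structural change in question can be sketched with a bare Hebbian learning rule, where the connection weights are the structure and they change with use (the rule is standard; the sizes, rate and input pattern here are arbitrary illustrative assumptions):

```python
# Toy Hebbian rule: "neurons that fire together wire together".
# Weights on inputs that are repeatedly active while the neuron fires
# grow; unused connections stay as they are.
def hebbian_update(weights, pre, post, rate=0.1):
    return [w + rate * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]
pattern = [1.0, 0.0, 1.0]            # a repeatedly experienced input
for _ in range(20):
    # Output = weighted sum of inputs plus a small constant drive.
    post = sum(w * x for w, x in zip(weights, pattern)) + 0.5
    weights = hebbian_update(weights, pattern, post)

# Connections exercised by the pattern strengthen; the unused one does not.
print([round(w, 2) for w in weights])
```

After training, the first and third weights have grown while the second (whose input was always zero) remains at 0.0: the "wiring" itself has been reshaped by experience, which is the property cars and planes never needed from horses and birds.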

If you send me a pre-print copy of your book I will review it here. You can send a link or request for address to .(JavaScript must be enabled to view this email address).



I think I’d rather just croak.

@ Rick..

What? and miss out on the garden of Eden? ;0)

ps - is chess really a measure of intelligence or of memory, (for Humans and machines alike)? We’d have to ask some chess players and grandmasters?

The game on my smartphone is a killer, yet does it display intelligence? - I think not. Sometimes I feel like gee.. there must be some smart tiny mini master inside - but it’s all just memory of moves of games long past?


Oh, Eden: we were bored enough with it the first time to risk eviction.

No, I don’t think chess programs are intelligent, just a sign of how, for some tasks at least, what we’ve traditionally defined as intelligence doesn’t matter. We are getting extremely good at automating things we once thought could only be arrived at through intelligence. I think Colin is right that the only way to get something like our form of intelligence might be to duplicate the way the brain functions, though I am not sure this is the way we will go, if we can get most of the same effects (or even better) by following the same course we’ve been on since Turing, or whether we can even replicate the brain’s electro-chemical processing in a “non-living” substrate.
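The point that a chess program needs nothing like understanding can be seen in how a minimal engine works: brute-force minimax search plus a hand-coded scoring rule. Here is a sketch over a toy two-ply game tree rather than real chess (the tree and its leaf scores are purely illustrative):

```python
# Minimal minimax: exhaustive lookahead over a game tree.
# Nothing here "understands" the game; it just searches and scores.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a hand-coded evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny game tree: three of our moves, each with two opponent replies.
tree = [[3, 5], [2, 9], [1, 7]]
print(minimax(tree, maximizing=True))   # best outcome we can guarantee
```

For the tree above the search returns 3: the opponent answers each of our moves with their best reply (3, 2, 1 respectively), and we pick the line that guarantees the most. Real engines add pruning, deeper search and better evaluations, but the mechanism is the same mechanical search.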

Again with the questions..

What is intelligence?
What is consciousness?

Descartes: “I think therefore I am”?
Spock: “Nothing unreal exists”?

At least I am not alone in suffering these delusions?

Eden is for the aged, (tired, weary, exhausted) I would insist on clothing?

@CygnusX1 @Rick

Will forward .PDF for review. I can get the publisher to send hardcopy if you give me an address… or maybe just send it to IEET? Let’s do both. Xmas present.

In relation to definitions. I advocate ‘point_and_gruntism’. If you can’t point and grunt at it then it’s an opinion, not science. The act of pointing and grunting is the definition.

“What is intelligence?” Point at a scientist.
“What is consciousness?” Point at what the scientists doing the science of consciousness know very well they are studying. They need define it no more than they needed a definition of fire before they knew what they were studying.

The only useful observation I can make is that unless you are conscious (an observer) you will not be able to be a scientist and therefore you won’t be able to be as intelligent as a human scientist. This can be tested.

In relation to whatever the word ‘material’ is supposed to mean: point at a brain. To an accuracy of 1 part in somewhere between 10^15 and 10^20 it’s entirely an EM field system impressed on space. So material is (what is described by scientists made of it, using it to observe, as) EM fields and space. EM fields = Atoms. Molecules. Cells. Tissue. Creatures. Rocks. Computers. The ‘substantiveness’ of the fields is an illusion created by force mediation within the fields. Mass is a field system too. But much deeper (Higgs Boson deeper).

It’s really easy to claim you are on a direct route to consciousness and intelligence when you get to choose from a list of one thing: EM fields. I’m a simple guy. I like short lists.

In relation to the comments of flying/flight. It’s an example of retention of essential physics. Not an example related directly to AI. Take the essential physics of flight out of it and there will be no flight. The real question is why “Take the essential physics out of a brain and there will be no intellect” has been assumed false for 60+ years. When 300 years of evidence says otherwise.

In relation to the consciousness/intellect connection I just use evolution. If we didn’t need consciousness (an ability to observe, as opposed to just measure) then we wouldn’t have this massive organ between our ears. The likelihood is that the environmental circumstances would be constant and devoid of novelty. No need to observe. Experienceless automation good enough.  Creatures able to encounter the actual natural world of radical moment-to-moment novelty survived better. Consciousness (imperfectly) connects us most directly to the novelty. So creatures with it survived better. The rest is history.

I can’t really add much more to this.

I note a theme I come across regularly: uploading. This I find so remote a possibility as to make the idea a joke. That said I have worked out a possible route using implantable versions of the chip I plan. Lots of surgery over many years. Plus the wearing, full time, of a really bad wig/hat that has to be installed at birth. The only thing that I can say is that being uploaded into any form of computer (as we currently understand it, quantum or otherwise, no matter how powerful) is indistinguishable from death. Computer-based ‘uploading’ is, at best, a storage facility for data. A waiting room for when the real hardware gets made.

I guess I’ll leave this there. I’ll get the book sent out.

Colin Hales

The Genius of Swarms

““Ants aren’t smart,” Gordon says. “Ant colonies are.” A colony can solve problems unthinkable for individual ants, such as finding the shortest path to the best food source, allocating workers to different tasks, or defending a territory from neighbors. As individuals, ants might be tiny dummies, but as colonies they respond quickly and effectively to their environment. They do it with something called swarm intelligence.

Where this intelligence comes from raises a fundamental question in nature: How do the simple actions of individuals add up to the complex behavior of a group? How do hundreds of honeybees make a critical decision about their hive if many of them disagree? What enables a school of herring to coordinate its movements so precisely it can change direction in a flash, like a single, silvery organism? The collective abilities of such animals—none of which grasps the big picture, but each of which contributes to the group’s success—seem miraculous even to the biologists who know them best. Yet during the past few decades, researchers have come up with intriguing insights…”



Oh sure, I totally agree, something like an ant colony shows a very real form of intelligence, so does something like automobile traffic, where the general intelligence of the drivers is irrelevant and what’s important is that they just follow a few simple rules.
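How a few simple local rules add up to collective problem-solving can be sketched with a bare-bones ant-style pheromone model choosing between a short and a long path; no individual "ant" here knows anything about path length, yet the colony-level pattern finds the short route (all parameters, path lengths and the deposit/evaporation rates are illustrative assumptions):

```python
import random

# Toy stigmergy: each ant picks a path in proportion to its pheromone.
# Shorter trips deposit pheromone faster, so a positive feedback loop
# concentrates traffic on the short path without any ant "knowing" it.
random.seed(1)
pheromone = {"short": 1.0, "long": 1.0}
length = {"short": 1.0, "long": 2.0}

for _ in range(500):
    total = pheromone["short"] + pheromone["long"]
    path = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[path] += 1.0 / length[path]   # deposit, scaled by trip time
    for p in pheromone:                     # evaporation forgets old trails
        pheromone[p] *= 0.99

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(round(share, 2))  # most pheromone ends up on the short path
```

The same structure shows up in the traffic example: follow-the-gradient plus a little forgetting is enough for a group-level "decision", which is why the general intelligence of the individuals is beside the point.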

Perhaps what makes human intelligence so distinct (you’d agree that we’re different in kind from other vertebrates and eusocial insects like ants?) is that we somehow exhibit eusocial intelligence while also having the individual/pack intelligence shown in higher mammals - such is the view of E.O. Wilson. I, for one, wonder if what makes us distinct is not just language but something language allows us to do: overlay sensory reality with an imaginary world that we share with others of our kind, and that is somehow as important as, or even more important than, the world we share with other animals that sense it much as we do.

Couple of quick comments. I did a previous post on this but for some reason it didn’t come through.

In relation to ‘intelligence’.

1) Of ants. Colony smart, ant not so much. Fine. In my framework for dealing with this, this idea is a side issue. Ants are not scientists. They don’t deliver ‘laws of nature’ as abstractions to other ants. If you set the benchmark at this level, then you get to ignore a lot of argumentative noise. Ants are not confused about what intelligence is. I say that something intelligent has to be capable of that kind of confusion.

2) “Isn’t intelligence different?” remarked above.

I could run through 100 historical instances of the kind “natural function F has essential physics X”: a kidney (F) has the essential physics of filtration (X). Stop doing X and F stops. Compute a model of X and it is not an instance of F. It is an F simulator.
And then we get to brains.
101) Natural function BRAIN has essential physics X. Stop doing X and BRAIN (intelligence) stops. Compute a model of X and it is not an instance of a BRAIN. It is a BRAIN simulator.

You asked the question “isn’t intelligence different?” If there were an argument or a principle to that effect, you could cite it. The fact is, there is no such argument.

Instead I ask: “Why is intelligence any different to 1… 100?” Wouldn’t the logically justified first position be to assume that our ignorance of X is important, and that identifying X is part of the job of figuring out how to make artificial intelligence? Instead, what have we done? We have assumed “Isn’t intelligence different?” is true and then spent 60 years testing that hypothesis, failing, and never even starting the real test: pick physics X and see if intelligence is critically dependent on it. That is, replicate the physics. Like filtration for the kidney, digestion for the stomach ...etc…etc…. 100 x etc….

This situation is unique in the history of science. And it only happened when computers were invented.

So I have 300 years of precedent for “Isn’t intelligence the same as everywhere else in science?”, while “Isn’t intelligence different?” has no justification whatever.

Don’t you think this rather odd? (Rhetorical question!)

I guess I’ll leave it there.

@ Colin..

Just seen your posts. Yes your first post seemed to be delayed.

“In relation to whatever the word ‘material’ is supposed to mean in this: brain. To an accuracy of 1 part in somewhere between 10^15 and 10^20 it’s entirely an EM field system impressed on space. So material is (what is described by scientists made of it, using it to observe, as) EM fields and space. EM fields = Atoms. Molecules. Cells. Tissue. Creatures. Rocks. Computers. The ‘substantiveness’ of the fields is an illusion created by force mediation within the fields. Mass is a field system too. But much deeper (Higgs Boson deeper).”

Agreed. However, just when you thought you were reconciling the mind/body problem, you are now faced with the dualism of energy/matter? I am perfectly happy to accept all of these phenomena as real, simply because I cannot overcome or reduce them further. I fully subscribe to the reduction of all “material forms” to EM energy fields, as well as the brain.

For sure “we” Humans/minds are all encompassed within the ocean of consciousness that permeates the entire Universe/cosmos (this is not a spiritual observation, BTW) - the only barriers appear to be the material/molecular/atomic elements that separate our physical forms from the exterior world, sunlight, or background radiation. The problem is indeed one of semantics. Humans apply speciality and species prejudice to terms like “consciousness”, a term which cannot be defined or even discussed without using the term “awareness” - thus, for me, these are the same. Yes, yes.. the qualification and quality of Human-level consciousness may be deemed somewhat superior to that of the atom (phenomenology and emergentism), yet the reduction is by the same measure?

So too my point/position regarding intelligence and complex adaptive systems - Principle X?

You say that intelligence cannot be deemed as detached from the mechanism (brain/machine)? Yet this is still “process” (non-spiritual), which is not naturally emergent from inert mechanisms/machines - these characteristics and “behaviourism” still need to be input/programmed by the intelligent engineer/scientist?

Don’t have much time presently, so here are a couple of quick links to wiki pages. In short, I subscribe to Dennett’s conclusions, and not only does this apply to consciousness; maybe it can be readily applied to this phenomenon and “emergence” of “intelligence”, which is no more, and no less, than the evolution of complex adaptive systems/parallel processing in the brain?

Both may be so intertwined that it is impossible to separate or distinguish Consciousness from intelligence (as it is the complex system and brain neural net that has evolved and permitted Self-reflexivity, as well as this intelligence, to reflect on these questions)? Another heresy, I know, and not many would agree that these are the same, as it is not practicable for science to do so?

The Self-reflexive “executive function” (brain hierarchy), to coin an enlightening phrase from David Eagleman, is like the CEO who is mostly oblivious to the hard work, actions, and processes of the company and its employees (the subconscious), and yet is still in a position to influence and direct strategy/outcomes, or adamantly “veto” actions/proposals (a limited expression of Free Will)?

Dan Dennett - The Cartesian theatre

Destroying the zombic hunch

“Demystifying consciousness is Dennett’s forte, and probably the main reason for his status as the devil – after all, people like their mysteries. Some even think that if consciousness is explained they will be diminished as people, being turned into “mere things.” They don’t like the idea that there is no one at home inside their head; no audience watching the magic show of conscious experiences from the safety of the Cartesian theatre. Dennett contrasts theorists to whom it is obvious that a theory which leaves out the subject cannot explain consciousness, with those to whom it is obvious that the subject has to vanish. The first type must be wrong, he says; “A good theory of consciousness should make a conscious mind look like an abandoned factory.”

Complex adaptive system

David Eagleman


“Ants are not confused about what intelligence is. I say that something intelligent has to be capable of that kind of confusion”


Ps. How bad does this wig have to be exactly?

