Nick Bostrom’s “Superintelligence” (Part II)
piero scaruffi   Oct 14, 2014  

Bostrom writes that the reason A.I. scientists have failed so badly in predicting the future of their own field is that the technical difficulties have been greater than they expected. I don't think so. I think those scientists had a good understanding of what they were trying to build. The reason why "the expected arrival date [of Artificial Intelligence] has been receding at a rate of one year per year" (Nick Bostrom's estimate) is that we keep changing the definition. There never was a proper definition of what we mean by "Artificial Intelligence", and there still isn't one.

Bostrom notes that the original A.I. scientists were not concerned with safety or ethical concerns: of course, the machines that they had in mind were chess players and theorem provers. That's what Artificial Intelligence originally meant. Being poor philosophers and poor historians, they did not realize that they belonged to the centuries-old history of automation, leading to greater and greater automata.

And they couldn't foresee that within a few decades all these automata would become millions of times faster, billions of times cheaper, and would be massively interconnected. The real progress has not been in A.I. but in miniaturization. Miniaturization has made it possible to use thousands of tiny cheap processors and to connect them massively. The resulting "intelligence" is still rather poor, but its consequences are much more intimidating.

The statistical method that became popular in Artificial Intelligence during the 2000s is simply an admission that previous methods were not wrong but were too difficult to apply to problems in general. This new method, like its predecessors, can potentially be applied to every kind of problem... until scientists admit that it cannot. The knowledge-based method proved inadequate for recognizing things and was eventually abandoned (there was nothing wrong with it at the theoretical level).

The traditional neural networks proved inadequate for just about everything because of their high computational costs. In both cases dozens of scientists had to tweak the method to make it work in a narrow and very specific problem domain. When generalized, the statistical methods in vogue in the 2010s turn out to be old-fashioned mathematics such as statistical classification and optimization algorithms. These might indeed be more universal than previous methods but, alas, hopelessly resource-intensive to compute. It would be ironic if Thomas Bayes' theorem (published posthumously in 1763) turned out to be the most important breakthrough in Artificial Intelligence.
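To make concrete what "old-fashioned statistical classification" looks like in practice, here is a minimal naive Bayes classifier, a direct application of Bayes' theorem. This is only an illustrative sketch: the spam-filtering task, the training sentences, and the function names are invented for the example, not taken from any system discussed in the book.

```python
from collections import defaultdict
from math import log

# Toy spam filter: Bayes' theorem as statistical classification.
# P(class | words) is proportional to P(class) * product of P(word | class),
# under the "naive" assumption that words are independent given the class.
train = [
    ("spam", "buy cheap pills now"),
    ("spam", "cheap pills cheap offer"),
    ("ham",  "meeting agenda for monday"),
    ("ham",  "monday lunch meeting"),
]

class_counts = defaultdict(int)                       # counts for P(class)
word_counts = defaultdict(lambda: defaultdict(int))   # counts for P(word | class)
vocab = set()
for label, text in train:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    scores = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        # log-prior plus log-likelihoods, with add-one (Laplace) smoothing
        score = log(class_counts[label] / len(train))
        for w in text.split():
            score += log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("cheap pills"))  # → spam
```

Note that the classifier "knows" nothing about spam; it only counts words, which is exactly the point of the passage above: the mathematics is centuries old, and the intelligence, such as it is, lies entirely in the statistics of the data.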

Unfortunately, it is easy to find real-world problems in which the repeated application of that theorem leads to a combinatorial explosion of the space of potential solutions that is computationally intractable. We are now waiting for the equivalent of John Hopfield's "annealing" algorithm, which in 1982 made neural networks easier to implement. That will keep this Bayesian kind of Artificial Intelligence going for a little longer, but I am skeptical that it will lead to a general A.I.

The most successful algorithms used in the 2010s to perform machine translation require virtually no linguistic knowledge. The very programmer who creates and improves the system may have no knowledge of the two languages being translated into each other: it is only a statistical game. Translations between two languages for which millions of translated texts exist are beginning to be decent enough, while translations between rarely translated languages (such as Italian and Chinese) are still pitiful. I doubt that this is how human interpreters translate one language into another, and I doubt that this approach will ever be able to match human-made translations, let alone surpass them (I assume that the Singularity is also supposed to be better at translating from any language to any language).
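The "statistical game" can be caricatured in a few lines of code: learn word correspondences from a parallel corpus purely by counting co-occurrences, with no grammar and no dictionary. The sentence pairs and function names below are invented for illustration; real statistical systems (e.g., IBM-style alignment models) estimate probabilities with EM over millions of sentence pairs, but the spirit is the same.

```python
from collections import defaultdict

# Toy English-Italian "parallel corpus" (invented data).
pairs = [
    ("the cat sleeps", "il gatto dorme"),
    ("the dog sleeps", "il cane dorme"),
    ("the cat eats",   "il gatto mangia"),
    ("the dog eats",   "il cane mangia"),
]

# Count how often each source word co-occurs with each target word.
cooc = defaultdict(lambda: defaultdict(int))  # source -> target -> count
tgt_count = defaultdict(int)                  # target word frequency
for en, it in pairs:
    for t in it.split():
        tgt_count[t] += 1
    for s in en.split():
        for t in it.split():
            cooc[s][t] += 1

def translate_word(s):
    # Score each candidate by how exclusively it co-occurs with s:
    # pure statistics, no knowledge of either language.
    return max(cooc[s], key=lambda t: cooc[s][t] / tgt_count[t])

print(translate_word("cat"))  # → gatto
```

The program has no idea that "gatto" means cat; it merely notices that the two words keep showing up in the same sentence pairs, which is why such systems need enormous amounts of translated text and fail for rarely translated language pairs.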

Bostrom quotes Donald Knuth's famous remark that A.I. seems better at emulating "thinking" than at emulating the things we do without thinking. As Bostrom points out in Note 60 of Chapter 1, even that is optimistic, but there is a larger truth in that statement: it is relatively easy to write an algorithm when we can tell how we do things. The really hard problem is that we don't know how we do the vast majority of the things that we do; otherwise philosophers and psychologists would not have a job. A conversation is the typical example. We do it effortlessly. We shape strategies, we construct sentences, we understand the other party's strategy and sentences, we get passionate, we get angry, we try different strategies, we throw in jokes and we quote others. Anybody can do this without any training or education. Check what kind of conversation can be carried out by the most powerful computer ever built.

It turns out that most of the things that we do by "thinking" (such as proving theorems and playing chess) can be emulated with a simple algorithm (especially if the environment around us has been shaped by society to be highly structured and to allow only for a very small set of moves). The things that we do without thinking are still a mystery. We can't even explain how children learn in the first place. Artificial Intelligence scientists have a poor philosophical understanding of what humans do and a poor neurophysiological understanding of how humans do it.

In a nutshell, Bostrom first tries to estimate how far we are from achieving super-intelligence. Even though he sounds cautious, I still think he is wildly optimistic. When he talks about whole brain emulation, he seems to reduce the problem to figuring out the connectionist structure of the human brain, ignoring that the brain uses more than 50 kinds of neurotransmitters to operate that network. I don't think that is a negligible detail. He seems confident in "biotechnical enhancements", but ours is a species that can't even figure out how to defend its brain from a tiny virus like Ebola.

We are more likely to produce a race of idiots than a race of geniuses if we use today's science for "biological cognitive enhancements", as he calls them. Like everybody else, he can't quite define what a super-intelligence is (how will I know it when I see one? Again, a clock is already a machine that does something that no human being can do), but he does some sophisticated analysis of how it might "take off". Except that the conclusion is a kind of "I don't know". He speculates that most likely there will be only one super-intelligence (and that, if there are multiple ones, it won't be good news), but his speculations are based on the assumption that his knowledge and his logic are good enough to understand how a super-intelligence will behave, which sounds like a contradiction in terms.

He even employs historical precedents (from human civilization) and Malthusian theories to analyze the super-intelligence: isn't the super-intelligence supposed to be a different kind of intelligence that we cannot understand? The way Bostrom treats it, super-intelligence sounds like nothing more than a faster car or a stronger weapon, something that we can know how to handle if we think hard enough. He even closes the book with interesting discussions about morality: how do we create a moral machine? Philosophers will love these chapters. His methods for addressing the problem (capability control and motivation selection) have many predecessors in ethical philosophy (i.e., philosophers offering advice on how society can create better citizens) and, of course, at the end of the day the discussion shifts to "who is the supreme moral authority to decide which moral values are to be taught?"

Moral values have changed over the centuries. It used to be perfectly normal to marry and have sex with a woman under 18: now you go to jail and are listed as a sex offender for the rest of your life. Most societies punished homosexuality and most religions consider it a horrible sin, but an increasing number of states are recognizing same-sex marriage. Given the horrible track record of the human species, why would it be bad if the super-intelligence simply wiped out the human race from the face of the Earth? Philosophers have a job precisely because they spend their careers discussing topics like these. Regardless of the answers to these centuries-old questions, the fundamental contradiction is that Bostrom treats the super-intelligence as something whose behavior is human-like.

Hence, we are just talking about yet another human-made technology and what effects it may have on human society. Every technology ever invented by humans has had unwanted consequences, and it is certainly a good idea to prevent them instead of having to fix them later. But what exactly will be different with this "super-intelligence" is not explained. Exactly like nobody really knows what will happen when the Messiah, Jesus, or the Mahdi comes.

My version of the facts is here.

piero scaruffi is an author, cultural historian and blogger who has written extensively about a wealth of topics, ranging from cognitive science to music.


A philosophical penny's worth..

To recognise “Super-intelligence”, one must first define “intelligence”: what is it - not a phenomenon but something phenomenological? Does nature comprise no “inherent and natural” intelligence or learning capabilities? Well, yes, it does: we (and other specialised life forms) are the evidence of such universally evolved “intelligence”, are we not? Thus this appears at first inspection to be a contradiction?

Yet a star comprising chemical and atomic processes, whether deemed efficient or inefficient, cannot claim to comprise “intelligence”; yet it does obey some consistent laws of physics that may be analogous to some unshakeable mathematical algorithm (whence those laws come is a conundrum for another discussion?)

Complexity provides for the “sum” of what appears to be “intelligent” processes/actions within systems, and not merely as applied to the power of computation in an artificial substrate, but also to biological neurons in brains. Through reductionism, down to single neurons and beyond to the biochemistry of singular atoms, we may then deduce that all actions/reactions are “non-intelligent” by nature and as such form natural processes (much like the stars above)?

For sure, when we play chess, we are utilising our “formal consciousness/reflexivity” to process, experience and enjoy the game - and these formal self-reflections are slow and feeble compared to the sub-conscious brain processes doing all the hard work, to the point that every chess move we “think” we are making has already been decided by our fast-reacting “sub-conscious” brain processes (free will vs. determinism). This is easy to reflect upon and test when you play the game: you may find, as I do, that it is very difficult to overcome first impulses to moves and strategies; the more difficult this is, the more strongly thoughts of the initial moves present themselves. This is similar to what we may describe as “gut feelings”, which may perhaps be explained as purely the “tenacity” and persistence of subconscious processes and neurons firing?

Yet there is another important “component” and process which must support and comprise any practicable and useful definition and description of “intelligence”: memory.

Without memory (neurons/biological processes), there is no referential data/experience, no position from which either your formal consciousness (the executive witness) or the impartial, underlying subconscious processes can function and react efficiently through cause and effect from any initial conditions?

This leaves us again with the conundrum of explaining complex algorithms and mathematics alone as comprising a true definition of “intelligence”? A system comprising algorithms can be created/programmed to display (through its complexity) actions analogous to our own, actions which we “think” and feel are intelligent (e.g., IBM Watson); although the program does not possess memory, we may introduce or apply data storage for such.

Bottom line is.. without “physical memory” there is no empirical data from which to draw any comparisons of “this and that”, of “past and present”, and thus even of “Self and other”, to make any decisions, predetermined/programmed or not (at least in human brains)? Thus an AGI cannot wholly be defined/designed from the complexity of its algorithms/programming alone?

However, the “complexity of systems” and the inherent delays introduced/programmed, combined with negative feedback, can also emulate memory through space-time delay between changes of state at the level of the atomic/very small - in the same fashion as “entanglement” of particles can be mistaken for some preternatural communication/intelligent action between entities?

There can be no rationalisation of “Self” without memory, even with possession of a Self-reflexivity/negative-feedback mechanism (Emptiness?) Yet still, an AGI need not have any notion of Self to be intelligent and deemed worthy?

(Ps. “Droidfish”, available on Google apps, is an excellent free chess program, from which many other similar Android chess apps are designed. It requires the customary scrutiny of app permissions, yet simply disconnect your phone/device data connection and you will find, as with any other app, that you will not be pestered with continual advertisements. It has many features; I have it set at 50% strength and have not yet beaten it - it is very testing. If you find that you are “learning” from your continual defeats, then this may perhaps be a description of applied “intelligence” supported by “memory”?)

Why didn’t “Book review: Nick Bostrom’s “Superintelligence”” say “Part 1”?

