Historians, scientists and poets alike have written that the human being strives for the infinite. In older times this meant striving to become one with the god who created and rules the world. As atheism gained ground, Schopenhauer recast the concept as the “will to live,” and Nietzsche, who declared that God is dead, as the “will to power”; the search for the “infinite” became a mathematical and scientific program instead of a mystical one. Russell, Hilbert and others launched a logical program that aimed, in essence, at making it possible to prove and discover everything that can be proved and discovered.
Wow. Thank you, thank you very much. What a fine bit of work this is. You gave us the history, succinctly presented the arguments pro and con and introduced a whole new level to the discussion. A really fine piece of work.
I like Mr. Kurzweil’s enthusiasm, but I have always questioned the lack of a “body”. We are much more than our brain.
And the timing issue as well. It is NOT about computing power - that will increase without question. And I think we can accelerate evolution - but we are a LONG way from being able to do that.
I have a background in computers, ethics and law, so it hit me just right. You also touched on the societal impact really well.
“I feel that, in fact, the Singularity argument is mostly a philosophical (not scientific) argument.”
Yes, and I will not repeat my take on the point, which is illustrated at http://www.divenire.org/articolo_versione.asp?id=1
underlining inter alia that matters of technical feasibility, let alone difficulty, are essentially immaterial to the issue at hand.
As for robots able to read novels, anybody who made the effort to read the Fifty Shades trilogy knows for sure that we already have today robots able to… write them, and certainly without any need to understand what they are doing.
Posted by antistenes on 02/13 at 11:01 AM
“Focusing only on mental activities when comparing humans and machines is a categorical mistake.”
What you mean by “categorical mistake” is in fact called a category mistake (a term coined by Gilbert Ryle).
Posted by CygnusX1 on 02/13 at 11:25 AM
I would beg to differ..
“A brain kept in a jar is not a human being: it is a gruesome tool for classrooms of anatomy.”
A brain kept “alive” in a jar is precisely a Human being.
If you are describing bodily and brain “chemical” stimuli, then a Human is indeed more than a computational machine: it is an irrational, emotional, highly deterministic bio-logical mechanism, with very little free will of its own?
Now you can begin to rationalize the difference between a Human and a logical machine.
If a Human is really that smart, then it should be able to describe all of these irrational feelings and emotions and transcribe them to memory banks for a logical machine to process, and even though it’s true the machine may never truly understand them, it may still use them to great advantage and “assimilate” the Human condition..
And it’s worth noting also that Humans don’t really understand their reasoning and emotional irrationality either; we just learn to live with it?
Posted by RichardPrins on 02/14 at 02:38 PM
“Murphy’s law (that translates into the doubling of processing power every 18 months) is nothing special”
Which is Moore’s law, though it is interesting seeing Murphy’s law (what can go wrong, will go wrong) pop up in this context.
Posted by SHaGGGz on 02/16 at 09:42 AM
“They self-drive only in highly favorable conditions on well marked roads with well marked lanes.”
You must not have heard of the numerous off-road autonomous vehicle competitions, then.
“Humans, to me, are biological organisms who (and not “that”) write novels, compose music, make films, play soccer, ride the Tour de France, discover scientific theories, hike on mountains and recommend restaurants. Which of these activities are becoming obsolete because machines are doing them better?”
This line of reasoning seems to be at odds with your contention that humans are decreasing in intelligence as a result of the jobs that have been automated.
“In fact, there has been virtually no progress in building a machine that will cross that street.”
You keep coming back to this streetlight example, which basically serves as a “the perfect is the enemy of the good” type of argument; in other words, since machines cannot presently do these tasks that integrate a whole swath of intelligent behaviors across multiple domains, there have been no gains in intelligence. But the many different intelligent behaviors we integrate into a seeming unity are really just separate modules, which machines get better at day by day, e.g. cooking, running, natural language processing, etc. Once they conquer a particular domain, it mysteriously is no longer considered intelligent, a comical parallel to the “god of the gaps” line of reasoning.
“Talking about the intelligence of a machine is like talking about the leaves of a person”
Here you are restricting the term “intelligence” to solely mean “human-level sapience,” which is fine if you do, but it severely limits the usefulness of what you are saying, as this goes against a mountain of precedent wherein we have referred to various animal behaviors as exhibiting varying degrees of intelligence, demonstrating that intelligence is not a binary but a continuous spectrum.
“For every technological innovation there was a moment when it spread “exponentially”, whether it was church clocks or windmills, reading glasses or steam engines; and their “quality” improved exponentially for a while, until the industry matured or a new technology took over.”
You list many disjointed examples of S-curves, but this has little relevance to Kurzweil’s thesis (extracted from converging data from a multitude of reputable sources) of a wider, underlying property of information technology’s exponential growth stretching across multiple paradigms, and really, all of life going back to its dawn on Earth.
“What has truly changed is that today we have extremely powerful computers squeezed into a palm-size smartphone at a fraction of the cost. That’s miniaturization. Equating miniaturization to intelligence is like equating an improved wallet to wealth.”
Yes, if you focus on just the advance of Moore’s law, without regard to improvements to software’s problem-solving capability (a trajectory that has also kept pace with a rate analogous to Moore’s law, as Kurzweil noted in his end-of-decade report a couple years ago), you have something of a case. This misses the point, of course.
“Hence, technically speaking, there has been no evolution of technology. This is yet another case in which we are applying an attribute invented for one category of things to a different category: the category of living beings evolve, the category of machines does something else, which we call “evolve” by recycling a word that actually has a different meaning.”
Here you are again excessively restricting the meanings of words. Categories are far more fuzzy and less discrete than you seem to think, and it’s common practice to use the same word to describe phenomena that are highly analogous but distinct. Nobody literally means that the technologies humans designed are biological carbon-based life forms competing for chemical food sources to reproduce and end up with differential allele distributions over time. No, we mean that technologies share characteristics that are analogous to this process, namely largely inherited changes in characteristics over successive generations, usually in response to a (human-sourced) selection pressure. Kevin Kelly lists lots of example diagrams of technological evolution in his highly-recommended What Technology Wants.
Posted by b. on 02/28 at 12:28 PM
Thanks for posting, Piero. (Sometimes I wonder if any of the authors actually read these comments!)
@SHaGGGz, you certainly make some good points. The whole crossing-the-street thing really stood out to me because, at least where I live, cars only yield to pedestrians who are already out in the street. It seems a robot that looks like a runaway baby carriage would have pretty good street-crossing abilities.
Perhaps we could consider bird and mammalian intelligence as a category, rather than human intelligence specifically. Obviously semantics are important, and I think there is no single definition that does the concept justice.
“You list many disjointed examples of S-curves, but this has little relevance to Kurzweil’s thesis (extracted from converging data from a multitude of reputable sources) of a wider, underlying property of information technology’s exponential growth stretching across multiple paradigms, and really, all of life going back to its dawn on Earth.”
I interpret Piero’s point regarding exponential growth as the same one I have made in the past: that exponential growth has always led to some kind of collapse. While you may argue that there is contiguous progress from windmills to mechanics to electronics, the abstraction of those things as “information technologies” that can be objectively compared using a meaningful measure is problematic.
It is interesting that Piero focuses on evolution. True, there is no evolution of hardware without human guidance, but there is something like evolution happening in software. It is clear that machines have evolved designs (using genetic algorithms, GAs) for things such as antennas and electronic circuits. We as humans don’t even understand how some of these work, because they exploit aspects of the simulation/material that we have de-emphasized because they are too complex to easily control/predict/tune (i.e. nonlinear properties, etc.).
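For the curious, the kind of GA-driven design loop described above can be sketched in a few lines. This is only a toy: the “OneMax” objective (count the 1-bits) is a hypothetical stand-in for the real fitness function, which in antenna or circuit evolution would score a simulated design; all parameter values here are arbitrary.

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60,
           mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, bit-flip mutation. Returns the best genome found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]                  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy stand-in for an antenna/circuit objective: maximize the 1-bits.
best = evolve(fitness=sum)
print(sum(best))
```

Note that the human is still present exactly where b. says: we wrote the fitness function, which is the “utility imposed and validated by a human need.”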
I would still say this is not evolution, because the utility (purpose) of the system is still imposed and validated by a human need.
Still, if we are thinking about machine creativity in relation to search, then this search problem is just another big-data number-crunching problem that machines are obviously better than us at. It’s just a matter of the machine searching in a region of functional value where humans would not have thought to look (the machine only looks there because it doesn’t know better and looks everywhere, but it can still find things we have ignored). I’m personally not terribly convinced by creativity as simply search; it strikes me as too easy…
It’s obvious to me that software will become more and more apparently creative, particularly in design and engineering domains. But, just like intelligence, it all depends on how we define creativity.
I think the main point of the article is that we tend to think through our technologies, and that perhaps we should be more critical of how we define things and not so easily fall into the trap of: because this is how we think now, that must be how it is. Things change, even what we mean by change.
I guess the lesson I have come away with is that we all can participate in the construction of our future, and the most intelligent way to do that is through discourse and the exploration of varied points of view and semantics. (At least evolution would tell us something like this.) This requires us to take on an alien point of view, and also be flexible and critical of our own viewpoints.
Posted by SHaGGGz on 02/28 at 08:51 PM
@b.: “exponential growth has always led to some kind of collapse.”
Yes, if we are dealing with something like a “closed system,” where the growth consumes the static resources of a static environment. That is the sort of error that led to the widespread doom-mongering of the Malthus, Limits to Growth and Peak Oil crowds. Not to say it’s a foregone conclusion that we will win the race against the peak-oil clock, but improvements continue apace so far.
“While you may argue that there is contiguous progress from windmills to mechanics to electronics, the abstraction of those things as “information technologies” that can be objectively compared using a meaningful measure is problematic.”
I’m less sanguine about his charting of the far vaguer notion of “paradigm shifts” over historical time, though this strikes me as an interesting aside that’s not central to the logic of his argument.
“I would still say this is not evolution, because the utility (purpose) of the system is still imposed and validated by a human need.”
There’s a whole subset of evolution called “artificial selection.”
“I’m personally not terribly convinced by creativity as simply search”
It seems to be a search of a space that is heavily pruned by sophisticated algorithms we don’t yet fully understand (the lack of such pruning in machine creativity accounts for the far larger search space machines must navigate).
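The “heavily pruned search” picture can be illustrated with a beam search, where a scoring heuristic discards most of the space at every step. Everything here is invented for illustration: the `alternation` score is a crude stand-in for whatever filters human creativity actually applies.

```python
def beam_search(score, alphabet, length, beam_width=3):
    """Grow candidate sequences one symbol at a time, keeping only the
    beam_width best-scoring partial candidates; the pruning heuristic
    stands in for the poorly understood filters of human creativity."""
    beam = [()]
    for _ in range(length):
        candidates = [b + (s,) for b in beam for s in alphabet]
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

# Toy objective: reward alternating symbols (a crude "novelty" score).
def alternation(seq):
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

best = beam_search(alternation, alphabet="ab", length=6)
print("".join(best))  # prints a fully alternating 6-symbol string
```

The point is only that pruning is doing all the interesting work: with `beam_width` large enough this degenerates into brute-force enumeration, and with a smarter `score` it starts to look like taste.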
Speaking of machine creativity, musical composition is yet another domain whose Turing-test threshold has recently been passed, continuing the ever-narrowing definition of “intelligence” into which AI denialists pigeonhole humanity’s mystical essence.
Posted by b. on 03/01 at 01:03 PM
“I’m less sanguine about his charting of the far vaguer notion of “paradigm shifts” over historical time, though this strikes me as an interesting aside that’s not central to the logic of his argument.”
Indeed, I have not read the book all this paradigm-shift stuff is based on, so I don’t have too much to say, but my point is really that it’s hard to meaningfully measure continuity between such disparate technologies whose comparison is not obvious. Maybe there is some role here for Tononi’s “phi” measure: though intended to measure consciousness, it’s supposed to be applicable to anything.
“There’s a whole subset of evolution called “artificial selection.””
Of course, but that is just moving up one level of description, not separating the system from human utility. We would still have to specify the parameters of the algorithms that determine the selection; perhaps that could be another GA, which would then require its own selection criteria. Abstraction is not disconnection.
My lab is composed mostly of musicians, and indeed these systems are working very well.
I don’t know about “humanity’s mystical essence”. I think there is a lot of intelligence in animals, and even that most of what may be considered the magic that makes us human is also shared with animals. We would not study intelligence in animals if that were not true. To some degree this shows a broadening of the definition of intelligence.
I absolutely agree that our conceptions of intelligence (and consciousness) are shifting, evolving ideas that are not so solid objectively. Frankly, I think the only really objective and useful measure of intelligence is performance on a particular task. Different types of tasks require different types of intelligence. I’m not sure intelligence without a particular task context makes any sense at all; without one, we are back to these “mystical essence[s]”.