The Pinocchio Threshold: A possibly better indication of AGI than the Turing Test
Rick Searle   Feb 24, 2014   Utopia or Dystopia  

My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further from his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, the understanding between all of them and us opaque and not fully communicable even when we want it to be, and where often we do not.

What then are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers, and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked with onerous, instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor, as Pinocchio is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines, while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners, spontaneity which might sometimes be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps those looking for emergent artificial intelligence, or even the shortest path to it, should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort: intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built on the exchange of information, such as immune systems. More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how to best use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book - from the carpenter who first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master who wants to kill him for ruining his play, to the fox and cat who would murder him for his pieces of gold, to the sinister figure who lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves - treats Pinocchio as the opposite of what Martin Buber called a “Thou”, and instead as a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool” - actually a table leg. Are those in pursuit of AGI out to make better table legs - better tools - or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking the question since antiquity, and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery. Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense, persons?

Bottom Image:
http://lightpaintingphotography.com/wp-content/uploads/2011/09/Neshama.jpg

Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history for Delaware Valley College and works for the PA Distance Learning Project.



COMMENTS

@ Rick

I know you prefer literature, but have you ever watched Battlestar Galactica? I highly recommend this remake. It ran for four seasons, although the final season was split in two by the US writers’ strike, and the two halves are really one season together.

You may find this as enlightening as I did myself, and for various reasons from the superficial to universally profound.

Pinocchio to Frankenstein - we Humans Anthropomorphize almost as subconscious yen through the eyes of our creations?

Q: What is Love?


I’ve written a few comments/clues as to my reflections on Battlestar previously here at IEET. You can find them if you are interested/curious - but nothing beats watching it for yourself.

A thoughtful critique of the Turing Test and indeed our rather anthropocentric methods of gauging intelligence.

I’m somewhat concerned, though, that the Pinocchio Test might be difficult to actually falsifiably assess in practice. Genetic algorithms and software like Google Search already rewrite themselves to better fulfill tasks, thus “turning off” (and on) software that was programmed into them, based on new data.

Determining whether a program finds its programming/tasks onerous seems like it would run into the Problem of Other Minds.

Are you asserting that the Pinocchio Threshold could be used as a concrete test for AGI, or that it is merely a useful theoretical concept?

@CygnusX1:

Loved the 80’s show when I was a kid - still have a Cylon buried somewhere.

I’ve been wanting to binge on the remake for a while now, as I understand it they were trying to critique the war on terror, especially the Iraq War, no? The humans were the insurgents.

“Pinocchio to Frankenstein - we Humans Anthropomorphize almost as subconscious yen through the eyes of our creations?”

Yes, I am sure. Yet while writing the Pinocchio piece the thought occurred to me that we might need to back up from that if we hope to create or detect intelligence in machines. The human brain is peculiar as minds go, the product of a unique evolutionary history: this mixture and tensions between individual and society that gives us a sense of self and all that comes with it, but if you look to the very big - say a city- and the very small say an ant colony you see systems doing very intelligent things without having the kind of consciousness we have. Perhaps that’s the most common form of intelligence in the universe, that our type is an anomaly, and machine intelligence will likely evolve in a similar way, one we will find difficult to make mirror our own, though we will likely get it to mirror most of its effects. Need to kick this idea around a bit.

What is Love? I’m inclined to ask the poets, but I’ve always thought Hannah Arendt was onto something when she said that love is synonymous with the belief and sentiment “I want you to be”, which is not at all the same, indeed is the opposite of the sentiment “I want to have you or I want to rule you”. Love of all sorts springs from this marking off of the specialness of another human being and is distorted into something else whenever we attempt to turn that special person into a mere part of ourselves or a possession.

Well, if you are already a fan from the 80’s then you will love it!

“I’ve been wanting to binge on the remake for a while now, as I understand it they were trying to critique the war on terror, especially the Iraq War, no? The humans were the insurgents.”

This is an analogy that I have not heard before. I don’t think Ronald D Moore was making any reference to Middle East conflicts, and certainly the Cylons are not likened to Earthly political groups, although theism and ideology/philosophy are examined extensively throughout as a point of justification for war, (this was also expanded in the prequel and short-lived series “Caprica” which lays the foundations for religious revolution - I don’t want to say anything that may spoil it for you, so I will leave it there). His message was certainly delivered as a warning of future emerging technology, and misgivings of mechanical slavery, although I do not believe him to be a luddite.

The effects and filming have a certain documentary style, which may draw comparisons with contemporary conflicts, but this is merely to add realism, (and very convincing it is too, after overcoming any early dismissive aesthetic prejudice).

BSG covers a “lot of ground” examining “Human” hypocrisy and limitations, and continually keeps churning the inner tensions of characters both Human and Cylon alike. The message throughout, “all of this has happened before and will happen again”, continually reminds us of the irony of mortal limitations and small-mindedness. BSG is not merely a space-opera with idle chit-chat and character building.

The reason why I mentioned BSG here? Pinocchio, (more so than Frankenstein), would seem to be an apt representation and reflection of Cylon existential angst and naivety, the mistakes they make, their irrationality, and relationship with their creators, (parents) - all will be revealed - I say no more.

“..this mixture and tensions between individual and society that gives us a sense of self and all that comes with it, but if you look to the very big - say a city- and the very small say an ant colony you see systems doing very intelligent things without having the kind of consciousness we have.”

This is then perhaps why we should not place such high esteem and veneration with the “Hard problem” at all? Consciousness - if you are including qualia and feelings/sensations - is important and a very difficult mechanism to analyze/reproduce, yet note that sensations are still reliant upon “senses” and the physical/material - and we Humans already have a plethora of artificial mechanisms for use towards assimilating physical senses, (not yet sensations).

However, if you equate consciousness to Self-reflexivity alone, (as I myself frequently promote), and rationalize this Self-reflexivity as no more than mechanism for “awareness of awareness”, in Hindu philosophy Consciousness, (the genuine/authentic Self), is proposed as “purely” impartial witness and without any emotional content, (ie; Brahman/Ishvara and etc), then, as you say, consciousness may be a speciality for us, yet does not implicate or by association imply connection with “Universal” intelligence or its emergence at all? So in fact, consciousness may not be so important for machines and future AI anyhow, (again are we merely subconsciously anthropomorphizing, and venerating the importance of our own consciousness as crucial)?

“Perhaps that’s the most common form of intelligence in the universe, that our type is an anomaly, and machine intelligence will likely evolve in a similar way, one we will find difficult to make mirror our own, though we will likely get it to mirror most of its effects.”

Perhaps so - In fact I could quite happily live with this proposal, my personal philosophy has no need to promote Humans to any special place in the Universe/Cosmos.. Hmm.. although every evolved species is very much special? .. Q: What is Love? And moreover, is it useful? Is there a theology/teleology? (don’t be alarmed, these questions are rhetorical and for personal reflection).

Intelligence may indeed be system orientated - period? What is intelligence, not mechanism, nor merely clumsy expression of “trial and error” either? Is the Sun, are the Stars intelligent systems that process matter/materials through a carefully(?) and physically governed life/death cycle? It would seem that intelligence is much more than merely information/data handling - is mathematics really intelligence by Universal design?

Questions.. questions..

You may also reflect on all of this and then ask yourself.. what do Cylons really want anyhow? Have Humans cursed them with machine intelligence.. or Self-reflexivity? What does a Cylon know about Love?

“What is Love? I’m inclined to ask the poets, but I’ve always thought Hannah Arendt was onto something when she said that love is synonymous with the belief and sentiment “I want you to be”, which is not at all the same, indeed is the opposite of the sentiment “I want to have you or I want to rule you”. Love of all sorts springs from this marking off of the specialness of another human being and is distorted into something else whenever we attempt to turn that special person into a mere part of ourselves or a possession.”

“I want you to be… me?”

How close can you draw your (ideal) soul mate to your bosom? I am not talking narcissism but “oneness”. Yet customarily Love is described affectionately as mania/madness! This Self-less and non-rational state of altruism and Self-sacrifice, (it is not the love you receive but the love you give that is your “personal” expression and manifestation of the ideal?) Women are by nature and nurture mothers and for the majority genetically and maternally loving - this I cannot fully hope to understand as Women are from Venus.

Love is a very special Human abstract and rationale. Warning: to attempt to view this abstract and rationalize from a distance necessitates becoming somewhat detached, dispassionate.. unfeeling.. to fully experience water, you have to get wet and keep swimming?

Certainly there is no greater roller coaster ride and dizziness experienced than when one is “in Love”. But “what is it?” “Whence from?” (don’t say simian ancestors.. for I’ve heard it all before!)

Separation and unrelenting promotion of individualism is a prison of our own making, the veil of ignorance (avidya)?

There is a balance to embrace, an inner tension to be examined further?

This will of necessity be short.

Here’s a link to an article on the BSG Iraq War connection:

http://www.slate.com/articles/arts/culturebox/2006/10/battlestar_iraqtica.html

Of course, for any book, movie, or TV show to have a long-lasting impact it needs to grapple with universal and fundamental issues, and as you describe it BSG does this very well.
I certainly will check it out when time permits and will share my thoughts.

“This is then perhaps why we should not place such high esteem and veneration with the “Hard problem” at all”

I think even the way we’ve defined the Hard Problem is almost solely from a human perspective. Ours is not the only form of consciousness - I wouldn’t trade it for any other - but it might not be the “best”, either.

Love = “I want you to be… me?”

No, I do not think so. How then could we explain that most of us would die to save our loved ones?

“Whence from?” (don’t say simian ancestors.. for I’ve heard it all before!)

I have no real affection for explanations from evolutionary psychology or even neuroscience: from the former we can get plausible arguments about something’s origins, and from the latter underlying mechanisms, but neither gives us any real insight into the human experience, which is what art, literature, film, etc. do.

I personally think love is an emergent property of a plural existence. It is one of our options- as is hate. That we have love as an option at all is nothing short of a miracle.

@ Rick..

From the article..

“It often seems as if the whole motive of the creative talent behind BSG is to make you feel uncomfortable about being an American during the occupation of Iraq”

I like Slate magazine, however this aged article is just one blogger’s topical viewpoint, and his aims are coloured to conflate his sentiments towards conflicts with the Middle East and possibly even reflect/reconcile his own conscience? Is he seriously suggesting that the Cylons’ invasion and authority is a portrayal of and analogy for the US presence in the Middle East? The episode and scenario described could just as easily be likened to occupied France during WW2.

BSG is much more than simply this, although if you wish to view any sci-fi from the perspective of contemporary politics, then this can be easy enough.

I guess it’s almost impossible not to liken fictional war to similar situations and real world events, and this does give the story a parallel sense of realism also, (yet I would say no more of an aesthetic than the documentary style filming)?

And yes, the reason I keep focusing on Love is precisely because this features prominently in the storyline, towards the enlightenment of both sides - as you will find.

Yet my point was also to provoke thinking regarding how we project our own Human needs onto our creations, children and metallic Pinocchio alike? If I remember the story, Geppetto the toymaker creates the wooden boy as he and his wife are old or barren?

Is it therefore right and proper to project these feelings of need and want onto other sentient life and expect the same from them in return? Because that will be the extension of attempting to program consciousness, free will and sense of morality - that we declare our love and expect the same in return?

When we are children our parents are like Gods to us, we trust without question. Any betrayal of such innocence and trust is a heinous crime beyond any justification. This moral sense of responsibility we should extend to all of our creations - this is what pissed off the Cylons: motivation was not fear of retribution but vengeance, justified primarily through peripheral ideology and Self delusion?

Again it is well worth watching BSG from start to finish, as you will see.

Here is the creators’ take on BSG and the war in Iraq, which seems to fall just about in the middle of your view and my initial perception:

http://www.youtube.com/watch?v=7XZiAN1PC5o

I look forward to watching the series and am sort of glad I waited until after we had ended our occupation of Iraq. I would have been drawing analogies all over the place, which might have prevented me from seeing the series’ more long-lasting message.

“When we are children our parents are like Gods to us, we trust without question. Any betrayal of such innocence and trust is a heinous crime beyond any justification. This moral sense of responsibility we should extend to all of our creations-”

I couldn’t agree more, thus I think there will come a time (we are nowhere near there yet) when we will face a real moral dilemma - are we creating these machines as tools or as children? Are we mature enough, as a species, to be parents? Right now I would say we are not.

Rick

Thanks for those excellent YouTube links - Spoiled for Spoilers!

OK.. maybe? “Half and Half”? Yet perhaps, the analogy for 9/11 has greater emotional metaphor? (This makes sense)

youtube.com/watch?v=D4REUvuT0u0


I liked this one especially

youtube.com/watch?rl=yes&v=2X6--N_sI0c

Thanks for the great links! I’ll let you know when I get a chance to binge on BSG.

Not enough time, not enough time, if only I were immortal. ;>)

I had an interesting conversation with Gregory Maus over at my blog regarding this post and thought I should paste it here to see if anyone would like to add their thoughts:

Rick Searle:

Hello Gregory,

I intended the piece more as a thought experiment than anything else and was mostly hoping to spur thought and input from persons like yourself.

It seems to me, as an outsider, that the Turing Test or the efforts to engineer something like intelligence are missing something. Life on earth has been running a 3.5-billion-year experiment in different forms of intelligence, while we humans have been running an experiment in electromechanical computing that is maybe a century old. There is a belief out there that once we reach a certain stage of complexity and connectivity the result will be an intelligence like our own, which, to me at least, is an assumption that flies in the face of Copernican mediocrity.

As I understand it, computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection- with programmers having predetermined what fitness entails. But perhaps the ecosystem of programs, computers and networks has become diverse enough that we could find movement toward intelligence that has not been predetermined by us. I can imagine biologists getting together with computer scientists to come up with a kind of “search for terrestrial silicon based intelligence” that looks for the spontaneous development of intelligence in computers by establishing templates based on what is done by biological life, where intelligence is far broader than the human sort. This would be a little like exobiologists, who now realize that we shouldn’t predetermine our search for extraterrestrial life based on our own most common experience of how life manifests itself on earth.
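To make the “artificial selection” point concrete, here is a minimal sketch of the kind of evolutionary programming I mean, in Python; the target string and fitness function are hypothetical stand-ins of my own, not anyone’s actual system. The thing to notice is that the programmer fixes what “fitness” means before the evolution ever starts.

```python
import random

# Minimal genetic-algorithm sketch. TARGET and fitness() are hypothetical,
# programmer-chosen stand-ins: what counts as "fit" is fixed in advance,
# which is the sense in which this is artificial rather than natural selection.

TARGET = "a real boy"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # Count characters already matching the predetermined target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    # Randomly replace characters: the source of variation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size: int = 200, generations: int = 1000) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            print(f"Reached target in generation {gen}")
            break
        # Selection: keep the fittest half, refill with mutated copies.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```

Nothing in this loop can discover a goal of its own - the destination is written into the script before the first generation is bred, which is why any spontaneous movement toward intelligence we had not programmed for would have to come from somewhere outside this kind of loop.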

Of course, such a “search for terrestrial silicon based intelligence” might easily morph into a form of farce - a kind of ghost hunting: most mutations in machines that seemed to give them some characteristics of intelligence would be mere glitches and harmful mutations, just as in evolution, and we’d be at risk of seeing things that weren’t there because we wanted to see them. But perhaps some types of mutations and spontaneous developments, once identified, would give us pathways to engineering intelligence in machines that we had been unaware of before.

I would be interested in your thoughts.


Gregory Maus:

I completely agree with you that the Turing Test as a measure of intelligence only tests for a very human sort of intelligence, but we don’t really challenge it because we clearly haven’t identified an alternative example of intelligence that we might consider, and thus use our own as a (questionable) unquestioned default.
In regards to “computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection- with programmers having predetermined what fitness entails.” I would contend that our own intelligence and learning is always predetermined by something, whether it be biological structures (plus input) or software (plus input). Who or what makes the determination (gauging fitness, setting incentives, etc.) seems to me irrelevant for whether something is actually intelligent.

One question (and this may be what you were getting at) is the degree to which flexibility of thinking, i.e. the ability to apply thought processes to a wider variety of fields than, say, a calculator, is required for one to be defined as intelligent. If that is a sticking point, then it would indeed seem that all the programs to my knowledge currently developed through evolutionary algorithms may be too narrowly capable to be considered truly intelligent.

Your “search for terrestrial silicon based intelligence” proposal is an intriguing one. It might have to deal with the issue of defining intelligence, which as far as I know is still a matter of contention in the field, though they could work with several different definitions simultaneously and define programs based on the different criteria that they do and don’t meet (“Passes Test X, but Doesn’t Pass Test Y”, etc.).

And I agree that if not carefully managed, it could become farcical and lacking in rigor, due to confirmation bias and the ELIZA Effect.
If rigor were maintained, however, the project could be sufficiently interesting that it might attract some serious press, and thus perhaps high-profile support/involvement.

Rick Searle:

“In regards to “computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection- with programmers having predetermined what fitness entails.” I would contend that our own intelligence and learning is always predetermined by something, whether it be biological structures (plus input) or software (plus input). Who or what makes the determination (gauging fitness, setting incentives, etc.) seems to me irrelevant for whether something is actually intelligent.”

I guess what I was hoping to get at is that there are likely multiple paths (most of which we are not aware of) to multiple types (again most of which we are not aware of) of intelligence. Things are susceptible to engineering only if you know the destination beforehand, and there may be quicker paths to a goal that remain hidden because we are not aware there are multiple ways of arriving at a similar though not identical place. Thus it might be profitable for computer scientists trying to create intelligent machines to look beyond human cognitive science and neuroscience, something they could only do by looking at biology more broadly defined.

If the ecosystem of machines we have created is robust enough, we should be seeing manifestations of intelligence outside that which we are programming for already - though if we do not have enough models of what intelligence can look like, we may not recognize it. But all this, of course, is just speculation on my part.

Gregory Maus:

I would agree with your assertion about multiple paths to multiple types of intelligence. My apologies for not picking up on your meaning earlier.

How precisely are you defining the term “robust” in the ecosystem of machines?

Rick Searle:

I think you are looking for greater precision in my language and my apologies if I am not obliging.

What I was thinking of when I said “robust” was something like diversity in the ecological sense, but also depth and ubiquity. The world is awash in “species” of hardware and software and this diversity just keeps increasing. Depth and ubiquity are increasing too as everything is connected and turned into something subject to computation. There might be many things going on in this “ecosystem” that we aren’t even aware of, even though we built and engineered everything that went into it.

Gregory Maus:

My undergraduate background was analytic philosophy, so by force of habit I always seek precision in language, perhaps excessively at times. My apologies if so.

I agree with you regarding the robustness (as you define it) of the current hardware/software ecosystem. There are so many systems in play, interacting with each other in complex ways and constantly evolving (in one sense or another) that it is impossible for any one person to understand the full scope of it.

It’s always fascinating to me how systems of all sorts (technical, institutional, cultural, etc.) mutate and develop beyond the intent and comprehension of the original architect(s).

