The Semi-Orthogonality Thesis - examining Nick Bostrom’s ideas on intelligent purpose
Lincoln Cannon   May 22, 2015   Lincoln.Metacannon  

In his Orthogonality Thesis, Nick Bostrom proposes that “intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.”

However, there’s a problem hinted at by the combination of “orthogonality” and “more or less”. Nick acknowledges that intelligent purpose does have some constraints. And arguably those constraints are quite strong, which would mean the Orthogonality Thesis is rather weak.

But the weakness may not be fatal. We can formulate a Semi-Orthogonality Thesis that better accounts for Nick’s own observations and reasoning without overstating their ramifications, which remain momentous.

As illustrated in the chart below, the semi-possibility space of intelligent purpose is congruent with the impossibility space of intelligent purpose: plotting the complexity of goals against the complexity of intelligence, the region of goals simple enough to be possible for a given intelligence mirrors the region of goals too complex for it.

By definition, a simple intelligence can have only simple goals, final or otherwise. Reasons an intelligence might be simple include limited resources or poor optimization of available resources for sensing, storing, processing, or effecting. Whatever the case may be, a simple intelligence cannot have a goal that exceeds its anatomical capacity.

As the complexity of intelligence increases, the semi-possibility space of final goals expands, but the impossibility space always remains congruent. A complex intelligence can have simple goals or complex goals. A simple goal could be (but isn’t necessarily) well within the capacity of a complex intelligence and could match the goal of a simple intelligence. A complex goal would make use of more of the capacity of a complex intelligence and would demonstrate a corresponding impossibility for simple intelligence.

The semi-possibility space is qualified with “semi” because complexity is not the only constraint on the possibility space of intelligent purpose. While we can cleanly conclude that any given final goal is impossible for any simpler intelligence, we cannot cleanly conclude that any given final goal is possible for any intelligence of the same or greater complexity. Arguably, the laws of physics and logic make some things meaningfully impossible.
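For readers who like to see the shape of the argument spelled out, here is a minimal sketch in Python. It is only an illustration of the reasoning above, not anything from Nick’s work or from the chart itself: the idea of assigning a single numeric “complexity” score to an intelligence or a goal, and the separate flag for other constraints, are simplifying assumptions made for the example.

def classify_goal(intelligence_complexity, goal_complexity, violates_other_constraints=False):
    """Toy classification of a final goal for a given intelligence.

    A goal more complex than the intelligence is impossible by definition.
    A goal within the intelligence's capacity is only semi-possible, because
    anatomy, physics, or logic may still rule it out.
    """
    if goal_complexity > intelligence_complexity:
        return "impossible"      # exceeds the intelligence's capacity
    if violates_other_constraints:
        return "impossible"      # e.g. anatomically or physically incompatible
    return "semi-possible"       # possible in principle, but not guaranteed

# A simple intelligence cannot hold a complex goal, while a complex
# intelligence can hold simple or complex goals, subject to other constraints.
print(classify_goal(1, 5))                                    # impossible
print(classify_goal(5, 1))                                    # semi-possible
print(classify_goal(5, 5, violates_other_constraints=True))   # impossible

The point of the sketch is only that the first check is clean (any simpler intelligence is ruled out) while the second is not (same or greater complexity does not guarantee possibility), which is exactly why the space is “semi-possible” rather than possible.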

It may be that an intelligence is structured in such a way that it is anatomically incompatible with some simpler goals. For example, human intelligence is generally more complex than butterfly intelligence but human intelligence may be anatomically incapable of some goals for which butterfly intelligence is optimized. Even dynamically over time, some final goals may remain meaningfully impossible for some intelligences of the same or greater complexity. For example, although we might meaningfully imagine a superintelligent posthuman with capacity for the goal of transforming itself into a butterfly, we might have trouble meaningfully imagining a normal dog with capacity for the goal of transforming itself into a butterfly.

Relatedly, it may also be that an intelligence is structured in such a way that it is incompatible with sustaining some of its goals. For example, as Nick observes, “an intelligent self-modifying mind with an urgent desire to be stupid might not remain intelligent for long.” Likewise, although less directly, an intelligent environment-modifying mind with an urgent desire to annihilate its environment might not remain intelligent for long.

Despite real constraints on intelligent purpose, the Semi-Orthogonality Thesis remains momentous. Our human anatomy may bias us to suppose intelligence in general to be far more constrained than it actually is. We are tempted to anthropocentrism. Projecting ourselves carelessly, too many passively suppose the final goals of extraterrestrial or artificial or posthuman superintelligence will prove to be compatible with our own. And such carelessness could contribute to suffering and destruction, as warned by celebrity technologists and scientists like Elon Musk, Bill Gates, and Stephen Hawking.

Humans represent only a small part of the possibility space of intelligent purpose. We have a hard time imagining the intelligent purpose of our own evolutionary ancestors. Perhaps we no longer even have the anatomical capacity to do so, let alone imagine the purposes of non-human intelligence. And of course we can be perfectly confident that we’re incapable of fully imagining superintelligent purpose. This should give us pause. While we have reason for hope, we face real risks.

Lincoln Cannon is a technologist and philosopher, and leading advocate of technological evolution and postsecular religion. He is a founder, board member, and former president of the Mormon Transhumanist Association. He is a founder and advisor of the Christian Transhumanist Association. And he formulated the New God Argument, a logical argument for faith in God that is popular among religious Transhumanists.



COMMENTS

Aren’t intelligences scalable?  In other words, different levels of AI can do blocks of code, which then can be joined into a more complex system.

Hi Dobermanmac. Sure. Intelligent components can be joined into more intelligent systems, at least so long as we’re talking about intelligence in the goal optimization sense.

http://www.breitbart.com/big-government/2015/05/25/elon-musk-vs-the-google-terminators-the-perils-of-artificial-intelligence/

“The question of A.I. reproduction is what concerns Musk the most, according to his public statements on the matter. He once described his nightmare scenario as “rapid recursive self-improvement in a non-algorithmic way.”...

Musk, like other heralds of A.I. doom, warns that a super-intelligent machine could easily kill us with kindness, interpreting even the most benevolent mission statement to unleash catastrophe. He once gave the example of an A.I. tasked with eliminating email spam that “determines the best way of getting rid of spam is getting rid of humans.””

First, I would like to apologize for citing breitbart.com, but it was convenient for me in making this point. Second, rapid recursive self-improvement (in this instance) suggests sub-goals, or intelligent components joined to form more intelligent systems. Third, an unintended consequence of a mission (goal) could be catastrophic. Finally, the point I am trying to make is that, rather than one singular goal (mission), a better and richer understanding would involve preliminary or sub-goals that are joined into a broader goal or mission.

For instance, consider the metaphor of a human bureaucracy, where the organization has a broad goal and operatives are charged with specific tasks (goals) in relation to that broad goal. Many times, those subordinates subvert the larger goal in pursuit of the more limited mission they are tasked with.

What I am trying to say is that the pursuit of an overarching goal can be inadvertently subverted by subsystems of a more complex superintelligent system. This does not contradict anything in the above article; it is just an observation on my part. To summarize, a more complex intelligence may necessarily be formed from scalable intelligent components, which may have sub-goals that are counterproductive to the final goal. A chain is only as strong as its weakest link.

Agreed, Dobermanmac.

I loved this article. There are quite a few terms I had to google but still a fascinating piece. However, there is one thing I’d disagree about:
“although we might meaningfully imagine a superintelligent posthuman with capacity for the goal of transforming itself into a butterfly, we might have trouble meaningfully imagining a normal dog with capacity for the goal of transforming itself into a butterfly.” I doubt one can imagine a human with capacity for the goal of transforming himself into a butterfly, simply because we humans (well, the majority of us) have the ability to distinguish between reality and fantasy. Surely our creativity is limitless, but our reason stops it from going overboard. With animals it is otherwise. For example, the female gorilla Koko imagined herself to be a bird and actually strongly believed she was. The conclusion is that animals can imagine, though their imagination may be limited and they may fail to distinguish between fantasy and reality (data from BBC). That’s why I’d be more likely to believe a dog, rather than a human (unless he’s from a psych ward), would imagine himself being a butterfly.

That’s a thought-provoking observation, Clair. Thank you.

