The Semi-Orthogonality Thesis - examining Nick Bostrom’s ideas on intelligent purpose
Lincoln Cannon
2015-05-22

As illustrated in the chart below, the semi-possibility space of intelligent purpose is congruent with the impossibility space of intelligent purpose.
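To make the chart's structure concrete, here is a minimal Python sketch. It assumes, purely for illustration, that the complexity of intelligences and goals can be modeled on a simple integer scale (the thesis itself requires no such measure). Goals more complex than a given intelligence fall in its impossibility space; all other goals fall in its semi-possibility space. The two regions mirror each other across the diagonal.

```python
# Illustrative sketch only: modeling complexity as an integer scale is an
# assumption made for this example, not part of the thesis itself.

LEVELS = 8  # number of complexity levels to draw (arbitrary choice)

def classify(goal_complexity: int, intelligence_complexity: int) -> str:
    """Return 'X' (impossible) if the goal's complexity exceeds the
    intelligence's capacity, else 'S' (semi-possible)."""
    return "X" if goal_complexity > intelligence_complexity else "S"

# Rows: intelligence complexity (increasing downward).
# Columns: final-goal complexity (increasing rightward).
print(f"goal complexity 1..{LEVELS} ->")
for intelligence in range(1, LEVELS + 1):
    row = " ".join(classify(goal, intelligence) for goal in range(1, LEVELS + 1))
    print(f"intelligence {intelligence}: {row}")
```

Each added level of intelligence complexity converts one more column from X to S in its row, which is the expansion of the semi-possibility space described below.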

By definition, a simple intelligence can have only simple goals, final or otherwise. An intelligence might be simple because of limited resources, or because its available resources are poorly optimized for sensing, storing, processing, or effecting. Whatever the cause, a simple intelligence cannot have a goal that exceeds its anatomical capacity.

As the complexity of intelligence increases, the semi-possibility space of final goals expands, but the impossibility space always remains congruent. A complex intelligence can have simple goals or complex goals. A simple goal could be (but isn't necessarily) well within the capacity of a complex intelligence and could match the goal of a simple intelligence. A complex goal would use more of the capacity of a complex intelligence and would be correspondingly impossible for any simpler intelligence.

The semi-possibility space is qualified with "semi" because complexity is not the only constraint on the possibility space of intelligent purpose. While we can cleanly conclude that any given final goal is impossible for any simpler intelligence, we cannot cleanly conclude that any given final goal is possible for any intelligence of the same or greater complexity. Arguably, the laws of physics and logic make some things meaningfully impossible.
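That asymmetry can be restated as a three-valued judgment: comparing complexities can establish impossibility outright, but it can never establish more than semi-possibility. Here is a minimal sketch of the asymmetry, again assuming an integer complexity measure introduced only for illustration:

```python
from enum import Enum

class Feasibility(Enum):
    IMPOSSIBLE = "impossible"        # goal exceeds anatomical capacity
    SEMI_POSSIBLE = "semi-possible"  # complexity permits it, but physics,
                                     # logic, or anatomy may still forbid it

def feasibility(goal_complexity: int, intelligence_complexity: int) -> Feasibility:
    """Complexity comparison alone can prove impossibility, never possibility."""
    if goal_complexity > intelligence_complexity:
        return Feasibility.IMPOSSIBLE
    return Feasibility.SEMI_POSSIBLE
```

Deliberately, there is no POSSIBLE value: nothing in the complexity comparison alone licenses that conclusion.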

It may be that an intelligence is structured in such a way that it is anatomically incompatible with some simpler goals. For example, human intelligence is generally more complex than butterfly intelligence, yet human intelligence may be anatomically incapable of some goals for which butterfly intelligence is optimized. Even allowing for change over time, some final goals may remain meaningfully impossible for some intelligences of the same or greater complexity. For example, although we might meaningfully imagine a superintelligent posthuman with capacity for the goal of transforming itself into a butterfly, we might have trouble meaningfully imagining a normal dog with capacity for the goal of transforming itself into a butterfly.

Relatedly, it may also be that an intelligence is structured in such a way that it is incompatible with sustaining some of its goals. For example, as Nick observes, "an intelligent self-modifying mind with an urgent desire to be stupid might not remain intelligent for long." Likewise, although less directly, an intelligent environment-modifying mind with an urgent desire to annihilate its environment might not remain intelligent for long.

Despite real constraints on intelligent purpose, the Semi-Orthogonality Thesis remains momentous. Our human anatomy may bias us to suppose intelligence in general to be far more constrained than it actually is. We are tempted toward anthropocentrism. Projecting ourselves carelessly, too many of us passively suppose that the final goals of extraterrestrial, artificial, or posthuman superintelligence will prove compatible with our own. And such carelessness could contribute to suffering and destruction, as celebrity technologists and scientists like Elon Musk, Bill Gates, and Stephen Hawking have warned.

Humans represent only a small part of the possibility space of intelligent purpose. We have a hard time imagining the intelligent purpose of our own evolutionary ancestors. Perhaps we no longer even have the anatomical capacity to do so, let alone imagine the purposes of non-human intelligence. And of course we can be perfectly confident that we're incapable of fully imagining superintelligent purpose. This should give us pause. While we have reason for hope, we face real risks.