The Space of Mind Designs and the Human Mental Model

The following essay is an excerpt from Chapter 2 of my recently published book, Artificial Superintelligence

2.1 Introduction

In 1984, Aaron Sloman published “The Structure of the Space of Possible Minds,” in which he described the task of providing an interdisciplinary description of that structure. He observed that “behaving systems” clearly comprise more than one sort of mind and suggested that virtual machines may be a good theoretical tool for analyzing mind designs.

Sloman indicated that there are many discontinuities within the space of minds, meaning it is not a continuum or a dichotomy between things with minds and without minds (Sloman 1984). Sloman wanted to see two levels of exploration: descriptive, surveying things different minds can do, and exploratory, looking at how different virtual machines and their properties may explain results of the descriptive study (Sloman 1984). Instead of trying to divide the universe into minds and nonminds, he hoped to see examination of similarities and differences between systems. In this chapter, I make another step toward this important goal.

What is a mind? No universal definition exists. Solipsism notwithstanding, humans are said to have a mind. Higher-order animals are believed to have one as well, and maybe lower-level animals and plants, or even all life-forms. I believe that an artificially intelligent agent, such as a robot or a program running on a computer, will constitute a mind. Based on analysis of those examples, I can conclude that a mind is an instantiated intelligence with a knowledge base about its environment, and although intelligence itself is not an easy term to define, the work of Shane Legg provides a definition satisfactory for my purposes (Legg and Hutter 2007). In addition, some hold a point of view known as panpsychism, attributing mind-like properties to all matter. Without debating this possibility, I limit my analysis to those minds that can actively interact with their environment and other minds. Consequently, I do not devote any time to understanding what a rock is thinking.

If we accept materialism, we also have to accept that accurate software simulations of animal and human minds are possible. Those are known as uploads (Hanson 1994), and they belong to a class comprising computer programs no different from that to which designed or artificially evolved intelligent software agents would belong. Consequently, we can treat the space of all minds as the space of programs with the specific property of exhibiting intelligence if properly embodied. All programs could be represented as strings of binary numbers, implying that each mind can be represented by a unique number.
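The correspondence between programs and integers can be made concrete in a few lines of Python (a sketch; the example program text is arbitrary, and the base-256 encoding is just one of many workable conventions):

```python
# Any program is a sequence of bytes; interpreting those bytes as a
# base-256 numeral assigns the program a unique non-negative integer.
def program_to_int(source: str) -> int:
    return int.from_bytes(source.encode("utf-8"), byteorder="big")

def int_to_program(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, byteorder="big").decode("utf-8")

code = "print('hello')"
n = program_to_int(code)
assert int_to_program(n) == code  # the mapping is invertible
```

Because the mapping is invertible, enumerating integers enumerates all candidate programs, which is the premise of the enumeration argument developed later in the chapter.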

Interestingly, Nick Bostrom, via some thought experiments, speculates that perhaps it is possible to instantiate a fractional number of mind, such as 0.3 mind, as opposed to only whole minds (Bostrom 2006). The embodiment requirement is necessary, because a string is not a mind, but could be easily satisfied by assuming that a universal Turing machine (UTM) is available to run any program we are contemplating for inclusion in the space of mind designs. An embodiment does not need to be physical as a mind could be embodied in a virtual environment represented by an avatar (Yampolskiy and Gavrilova 2012; Yampolskiy, Klare, and Jain 2012) and react to a simulated sensory environment like a “brain-in-a-vat” or a “boxed” artificial intelligence (AI) (Yampolskiy 2012b).

2.2 Infinitude of Minds

Two minds identical in terms of the initial design are typically considered to be different if they possess different information. For example, it is generally accepted that identical twins have distinct minds despite sharing exactly the same blueprint for their construction. What makes them different is the individual experiences and knowledge each has accumulated since inception. This implies that a mind cannot be cloned exactly: immediately after instantiation, the copies would begin accumulating different experiences and would diverge, just as twins do.

If we accept that knowledge of a single unique fact distinguishes one mind from another, we can prove that the space of minds is infinite. Suppose we have a mind M, and it has a favorite number N. A new mind could be created by copying M and replacing its favorite number with a new favorite number N + 1. This process could be repeated infinitely, giving us an infinite set of unique minds. Given that a string of binary numbers represents an integer, we can deduce that the set of mind designs is an infinite and countable set because it is an infinite subset of integers. It is not the same as a set of integers because not all integers encode a mind.
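The favorite-number argument can be illustrated with a toy model (the `Mind` class and its fields are illustrative inventions, not a real mind encoding; only the counting argument matters):

```python
from dataclasses import dataclass, replace

# Toy model of the argument: a "mind" is a design plus one fact in its
# knowledge base; changing that fact yields a distinct mind.
@dataclass(frozen=True)
class Mind:
    design: str
    favorite_number: int

def successor(m: Mind) -> Mind:
    """Copy the mind, replacing its favorite number N with N + 1."""
    return replace(m, favorite_number=m.favorite_number + 1)

m = Mind(design="M", favorite_number=42)
minds = {m}
for _ in range(1000):
    m = successor(m)
    minds.add(m)
assert len(minds) == 1001  # every step produced a new, unique mind
```

Since `successor` can be applied without bound and never revisits a previous value, the set of distinct minds it generates has no finite upper limit, which is the claimed infinitude.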

Alternatively, instead of relying on an infinitude of knowledge bases to prove the infinitude of minds, we can rely on the infinitude of designs or embodiments. The infinitude of designs can be proven via inclusion of a time delay after every computational step. First, the mind would have a delay of 1 nanosecond, then a delay of 2 nanoseconds, and so on to infinity. This would result in an infinite set of different mind designs.

Some will be very slow, others superfast, even if the underlying problem-solving abilities are comparable. In the same environment, faster minds would dominate slower minds proportionately to the difference in their speed. A similar proof with respect to the different embodiments could be presented by relying on an ever-increasing number of sensors or manipulators under control of a particular mind design.
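A minimal sketch of the delay argument, assuming minds are modeled as step functions (the `delay_ns` attribute and `make_delayed_mind` factory are illustrative constructions, not a standard formalism):

```python
import time

# Sketch: infinitely many behaviorally equivalent but structurally
# distinct designs, obtained by inserting a delay after each step.
def make_delayed_mind(step, delay_ns: int):
    def delayed_step(state):
        result = step(state)
        time.sleep(delay_ns / 1e9)    # the only difference between designs
        return result
    delayed_step.delay_ns = delay_ns  # record the distinguishing parameter
    return delayed_step

def base_step(x):
    return x + 1

designs = [make_delayed_mind(base_step, d) for d in range(5)]

# All designs compute the same function...
assert all(d(10) == 11 for d in designs)
# ...yet each is a distinct design, indexed by its delay.
assert len({d.delay_ns for d in designs}) == 5
```

Extending `range(5)` to all natural numbers gives the infinite family of designs the argument requires: identical problem-solving ability, arbitrarily different speeds.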

Also, the same mind design in the same embodiment and with the same knowledge base may in fact effectively correspond to a number of different minds, depending on the operating conditions. For example, the same person will act differently if under the influence of an intoxicating substance, severe stress, pain, or sleep or food deprivation, or when experiencing a temporary psychological disorder. Such factors effectively change certain mind design attributes, temporarily producing a different mind.

2.3 Size, Complexity, and Properties of Minds

Given that minds are countable, they could be arranged in an ordered list, for example, in order of numerical value of the representing string. This means that some mind will have the interesting property of being the smallest. If we accept that a UTM is a type of mind and denote by (m, n) the class of UTMs with m states and n symbols, the following UTMs have been discovered: (9, 3), (4, 6), (5, 5), and (2, 18). The (4, 6)-UTM uses only 22 instructions, and no less-complex standard machine has been found (“Universal Turing Machine” 2011).

Alternatively, we may ask about the largest mind. Given that we have already shown that the set of minds is infinite, such an entity does not exist. However, if we take into account our embodiment requirement, the largest mind may in fact correspond to the design at the physical limits of computation (Lloyd 2000).

Another interesting property of minds is that they can all be generated by a simple deterministic algorithm, a variant of a Levin search (Levin 1973): Start with an integer (e.g., 42) and check whether the number encodes a mind; if not, discard the number; otherwise, add it to the set of mind designs and proceed to examine the next integer. Every mind will eventually appear on our list after a finite number of steps. However, checking whether something is in fact a mind is not a trivial procedure. Rice's theorem (Rice 1953) tells us that no algorithm can decide a nontrivial semantic property of arbitrary programs. One way to work around this limitation is to impose an arbitrary time limit on the mind-or-not-mind determination function, effectively sidestepping the underlying halting problem.
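The generate-and-test loop with a time budget can be sketched as follows. Since the real mind-recognition problem is undecidable, `toy_checker` below is a deliberately trivial stand-in (it "recognizes" multiples of 7 as minds); any real test would have to be plugged in at that point and would still be fallible:

```python
def bounded_check(candidate: int, is_mind_step, max_steps: int = 1000) -> bool:
    """Run a step-by-step mind-recognition procedure on `candidate` for at
    most max_steps steps. The procedure is modeled as a generator yielding
    None while undecided and True/False once it reaches a verdict.
    Candidates whose check exhausts the budget are treated as non-minds,
    which is how the time limit sidesteps the halting problem."""
    stepper = is_mind_step(candidate)
    verdict = None
    for steps, verdict in enumerate(stepper):
        if verdict is not None or steps >= max_steps:
            break
    return verdict is True

def enumerate_minds(limit: int, is_mind_step):
    """Levin-search-style sweep: test integers 0..limit-1 in order."""
    return [n for n in range(limit) if bounded_check(n, is_mind_step)]

# Hypothetical stand-in checker: multiples of 7 "encode minds",
# and the check decides instantly.
def toy_checker(n):
    yield (n % 7 == 0)

assert enumerate_minds(30, toy_checker) == [0, 7, 14, 21, 28]
```

The cost of the budget is one-sided error: a genuine mind whose recognition takes more than `max_steps` steps is silently skipped, so the enumeration is sound but not complete.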

Analyzing our mind-design generation algorithm, we may raise the question of a complexity measure for mind designs, not in terms of the abilities of the mind, but in terms of the complexity of the design's representation. Our algorithm outputs minds in order of increasing numerical value, but this order is not representative of the design complexity of the respective minds. Some minds may be represented by highly compressible numbers with a short description, such as 10^13, whereas others may consist of 10,000 completely random digits, for example, 735834895565117216037753562914 ... (Yampolskiy 2013b). I suggest that the Kolmogorov complexity (KC) (Kolmogorov 1965) measure can be applied to the strings representing mind designs.

Consequently, some minds will be rated as "elegant" (having a compressed representation much shorter than the original string), while others will be "efficient" (already close to their shortest possible representation, i.e., essentially incompressible). Elegant minds might be easier to discover than efficient ones, but unfortunately, KC is not computable in general.
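Although KC itself is uncomputable, a general-purpose compressor gives a computable upper bound on it (up to a constant), which is enough to illustrate the elegant/efficient distinction. A sketch using Python's `zlib` (the particular strings are arbitrary examples):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed string: a computable upper bound
    (up to a constant) on the Kolmogorov complexity of s."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

random.seed(0)
# An "elegant" string: 1 followed by 10,000 zeros, i.e. 10**10000.
elegant = "1" + "0" * 10_000
# An "efficient" string: completely random digits of comparable length.
random_digits = "".join(random.choice("0123456789") for _ in range(10_001))

# The regular string compresses dramatically; the random one barely does.
assert compressed_size(elegant) < compressed_size(random_digits)
```

The same asymmetry applies to mind designs: an elegant design admits a short description, while an efficient one is essentially its own shortest description, so a compressor can only certify elegance, never efficiency.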

In the context of complexity analysis of mind designs, we can ask a few interesting philosophical questions. For example, could two minds be added together (Sotala and Valpola 2012)? In other words, is it possible to combine two uploads or two artificially intelligent programs into a single, unified mind design? Could this process be reversed?

Could a single mind be separated into multiple nonidentical entities, each in itself a mind? In addition, could one mind design be changed into another via a gradual process without destroying it? For example, could a computer virus (or even a real virus loaded with the DNA of another person) be a sufficient cause to alter a mind into a predictable type of other mind? Could specific properties be introduced into a mind given this virus-based approach? For example, could friendliness (Yudkowsky 2001) be added post-factum to an existing mind design?

Each mind design corresponds to an integer and so is finite, but because the number of minds is infinite, for any given mind there exist minds with a vastly greater number of states. Consequently, because a human mind has only a finite number of possible states, there are minds that can never be fully understood by a human mind: their designs have a far greater number of states, and, by the pigeonhole principle, no mapping of those states into a human mind's states can keep them all distinct.
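The pigeonhole step can be verified by brute force for small state counts (the numbers 5 and 3 are arbitrary stand-ins for "big mind" and "small mind"):

```python
from itertools import product

# Brute-force illustration of the pigeonhole argument: every map from a
# larger state set into a smaller one must collapse at least two states,
# so the smaller system cannot faithfully represent the larger mind.
def exists_injection(n_from: int, n_to: int) -> bool:
    """Try every map from n_from states into n_to states and report
    whether any of them is injective (collision-free)."""
    return any(len(set(f)) == n_from
               for f in product(range(n_to), repeat=n_from))

assert not exists_injection(5, 3)  # no faithful 3-state model of 5 states
assert exists_injection(3, 5)      # the reverse direction is easy
```

The exhaustive search checks all 3^5 = 243 maps and finds no injection, matching the pigeonhole prediction that an injection exists exactly when the target has at least as many states as the source.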

This excerpt is from Chapter 2 of my recently published book, Artificial Superintelligence; full references are available there.

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach.
