Consider your smartphone for a moment. It provides you with access to a cornucopia of information. Some of it is general, stored on publicly accessible internet sites, and capable of being called up to resolve any pub debate one might be having (how many U.S. presidents have been assassinated? or how many times have Brazil won the World Cup?). Some of it is more personal, and includes a comprehensive databank of all emails and text message conversations you have had, your calendar appointments, the number of steps you have taken on any given day, books read, films watched, calories consumed and so forth.
Now consider a question: is this information part of your mind? Does it form part of an extended mind loop, one that interfaces with and augments the mental processors inside your skull? According to some philosophers it does. They believe in something called the extended mind hypothesis, which goes against the neuro-physicalist wisdom and holds that the mind is not necessarily to be identified with the brain. On the contrary, they suggest that humans are natural-born cyborgs, constantly expanding their minds into their external environments.
This is an intriguing hypothesis, and one that has been much-debated in the philosophy of mind. But does it have any ethical implications? If your mind extends into the external environment, wouldn’t we be obliged to treat anything that forms part of your extended mind loop in accordance with the ethical principles that are usually thought to apply to the treatment of any part of one’s non-extended mind? In other words, shouldn’t we adopt a parity-stance when it comes to the treatment of the internal and external mind?
Maybe. One philosopher who has taken up the parity-stance in recent years is Neil Levy. He has used it explicitly in relation to debates about neuroenhancement, i.e. the use of drugs and other forms of biotechnology to enhance and tweak the elements of neural anatomy. In this post, I want to take a look at Levy’s argument and at a recent response to it. I’ll proceed in three stages. I’ll start with a description of the extended mind hypothesis. I’ll then look at Levy’s “parity” argument. Following this, I’ll consider some obvious criticisms of the parity argument.
If that sounds tolerable, let’s proceed.
1. A Quick Outline of the Extended Mind Hypothesis
The extended mind hypothesis (EMH) was first introduced to the philosophical world by David Chalmers and Andy Clark in 1998. Their claim was simple enough. The most prevalent version of mind-body physicalism in the latter half of the 20th century was functionalism. According to functionalism, whether or not something counted as a mental state (e.g. a belief, desire, intention, memory, experience and so forth) depended not so much on the stuff it was made of but on its place within a functional system. In other words, the mind was like a mechanism, with particular mental states playing different causal and functional roles within the mechanism, all leading to the creation of this phenomenon we call the “mind”.
Because functionalism placed such an emphasis on causal roles in the production of mental phenomena, it led philosophers of mind to propose a natural corollary. It led them to claim that mental phenomena were multiply realisable. That is to say, mental phenomena could supervene upon many different physical systems. There was nothing uniquely special about neurons and other brain cells in this respect. In theory, a mind could be instantiated in other systems, for example an artificial neural network or digital computer. All that mattered was whether that system had all the relevant component parts playing the appropriate functional roles. Functionalists are still physicalists. They still think the mind requires some physical system. They just don’t think that the brain is the only eligible physical system.
Chalmers and Clark’s extended mind thesis was another natural corollary of functionalism and multiple realisability. It added that if the mind is multiply realisable it can surely be jointly (or, rather, conjointly) realisable. In other words, the brain and other physical systems could combine to form a mind. Chalmers and Clark provided a striking illustration of their hypothesis. Imagine there is a man named Otto, who suffers from some memory impairment. At all times, Otto carries with him a notebook. This notebook contains all the information Otto needs to remember on any given day. Suppose one day he wants to go to an exhibition at the Museum of Modern Art in New York but he can’t remember the address. Fortunately, he can simply look up the address in his notebook. This he duly does and attends the exhibition. Now compare Otto to Inga. She also wants to go to the exhibition, but has no memory problems and is able to recall the location using the traditional, brain-based recollection system.
Chalmers and Clark argue that there is nothing fundamentally different about Otto and Inga. They both remember the location. It just so happens that Otto uses an extended mind loop for recollection, whereas Inga uses an internal one. In this sense, Otto’s notebook forms part of his mind.
To be clear, Chalmers and Clark do not think that everything in the physical environment will form part of an extended mind. Certain conditions must be met. These include:
Accessibility: The external prop must be constantly and easily accessible to the individual.
Endorsement: The contents of the external prop (the notebook in Otto’s case) must be automatically endorsed by the individual and must have been consciously endorsed in the past.
These conditions are (allegedly) sufficient for something to form part of the mind, but they may not be necessary. What kinds of external prop meet these conditions? Otto’s notebook obviously fits the bill, but that’s an archaic example. I would argue that most smartphones and artificial assistants now meet these criteria. And, as I said, some of the contents of those props can include information from the (publicly accessible) web. Of course, this raises interesting questions about whether the information on the web forms part of my mind, as well as your mind and everyone else’s. I think it does, if we follow Chalmers and Clark’s conditions. I frequently consciously endorse information on the web (e.g. the location of my hotel when travelling) and then access and automatically endorse it at a later point.
Strangely, Levy tries to resist this claim (at least in his own case). He says that something like the information on Wikipedia does not meet these conditions because access is relatively slow and effortful, and he does not always trust it (Levy 2011). Of course, I can’t speak directly to his own experience, but it does seem like an oddly out-dated claim. A lot of information on the web is no longer effortful and slow to access (no more so than accessing a notebook) and is automatically endorsed. I readily access information via my smartphone in virtually any location and at any time. Furthermore, the trust issue seems no greater in the case of certain types of web-based information than it would in the case of the information stored in Otto’s notebook.
But this is a digression. The important point for now is to grasp the essence of the extended mind hypothesis. Remember, the claim is that the mind isn’t all inside the skull. External props can form part of an extended mind loop, provided that the contents of those props meet certain conditions. The simplest example of this is how external props can form part of our memory system. But it doesn’t end there. External props can form part of other mental systems too, such as our motivational systems.
2. Ethical Parity and the Neuroenhancement Debate
There are many criticisms of the EMH in the literature. I will discuss some a little later on. For now, I want to take it as a given, and see what difference it makes to the neuroenhancement debate. That debate is all about the use of pharmacological and biotechnological devices to alter and enhance the neural anatomy. Examples might include the use of methylphenidate to enhance memory and attention, propranolol to suppress painful memories, and deep brain stimulation to regulate mood. Some people are positively disposed to such enhancements, others are negatively disposed. Each side has a set of standard arguments at their disposal. The positively disposed appeal to the removal of barriers to self-optimisation and the associated positive societal effects. The negatively disposed worry about things like upsetting the natural order, inauthenticity and the possible negative societal effects.
Can the EMH be used to break the deadlock between the two sides? Levy thinks it might help. Specifically, he thinks it might help in a way that supports the pro-enhancement side. As he sees it, part of the opposition to the use of neural enhancement is based on the notion that there is some principled distinction between enhancements that are “inside the head” and enhancements that are on the outside. Thus, enhancing my memory consolidation with the use of methylphenidate is deemed to be very different from enhancing my memory through the use of my smartphone.
But the EMH calls this principled distinction into question. If the EMH is right, then both the internal and external realms form part of our mind. And if we have no objection to enhancing the latter, we should (by implication) have no objection to enhancing the former. If we accept that I can enhance my extended mind loop by buying a newer, smarter, smartphone, then why shouldn’t we accept that I can do the same with a smartpill? As Levy puts it:
Much of the heat and the hype surrounding neuroscientific technologies stems from the perception that they offer (or threaten) opportunities genuinely unprecedented in human experience. But if the mind is not confined within the skull…[then] intervening in the mind is ubiquitous. It becomes difficult to defend the idea that there is a difference in principle between interventions which work by altering a person’s environment and those that work directly on her brain, insofar as the effect on cognition is the same; the mere fact that an intervention targets the brain directly no longer seems relevant.
Levy is working here with an ethical parity principle (EPP), one that claims there is, ethically speaking, no important principled difference between internal and external interventions in the mind. This is a strong parity claim, premised on our acceptance of the EMH. The principle can be formulated in the following way:
Strong EPP: Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.
This can then be used to support Levy’s argument in favour of neural enhancement:
(1) Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.
(2) We have no “in principle” ethical objection to alterations to external props for thinking.
(3) Therefore, we ought to have no “in-principle” ethical objection to alterations of neural mechanisms for thinking.
Just to be clear, this argument is not claiming that all interventions into the external and internal mind are fine and dandy. It is, rather, claiming that they should be subjected to equivalent forms of ethical scrutiny. If it would be wrong to remove the part of Inga’s brain that allows her to remember the location of the Museum of Modern Art, then it would be wrong to remove Otto’s notebook. And vice versa.
3. Is the EPP credible?
Now we come to the crux of the issue. Is the EPP, as outlined, any good? Should we agree with Levy that there is no “in principle” difference between internal and external interventions into the mind? Some are skeptical. They argue that the strong form of the EPP is unsustainable, but that a weak form may be allowed to stand in its stead (something Levy himself falls back on). DeMarco and Ford are two such critics. I want to close by outlining their critique.
To understand this critique, let’s dwell on the example of memory. The EPP claims that there is no in-principle distinction between alterations to external memory props (like Otto’s notebook, or my email history) and alterations to internal memory systems (presumably regions or networks of the brain). While this sounds initially plausible on the EMH, there are at least three important differences between internal and external memory. Each of these differences has some ethical salience:
Dynamic integration: Internal memory is a dynamic, not a static, phenomenon. The information stored in Otto’s notebook or my smartphone is static. Once inputted, the information remains the same, unless it is deliberately altered. Internal memory is not like this. As is now well-known, the brain does not store information like, say, a hard disk stores information. Memories are dynamic. They are changed by the act of remembering. What’s more, memories integrate with other aspects of our cognitive frameworks. They affect how we perceive and how we desire. Perhaps external props can do something similar, but their effects are more attenuated. Internal memory is more closely coupled to these other phenomena. Consequently, tinkering with internal memory could have a much more widespread effect than tinkering with external memory. To the extent that those effects are ethically significant, we have found a reason to reject the strong EPP.
Fungibility: External memory props may be more easily replaceable (more fungible) than internal memory. If I destroy your smartphone, you can always get another one. And although you may have lost some of your externally stored memories (maybe some pictures and messages) you will still be able to form new ones. If, on the other hand, I destroy your hippocampus (part of the brain network needed to form long-term memories), I can permanently impair your capacity to acquire new long-term memories. This isn’t a hypothetical example either. This has really happened to some people. The most famous case is that of patient HM, who had part of his hippocampus removed during surgery for epilepsy in the 1950s and was never again able to form new long-term declarative memories. Again, this difference in fungibility seems ethically significant.
Consciousness: Another obvious difference between internal and external memory is the degree to which they are implicated in conscious experience. Consciousness is usually deemed to be an ethically salient property. Entities that are capable of conscious experience are capable of suffering and hence capable of being morally wronged. What’s more, the nature and quality of one’s conscious experiences is often thought to be central to living a good life. Although the information stored in an external prop may, eventually, feature in one’s conscious experiences, it does not shape the very content of those conscious experiences in the way that something which is internally stored may do. As I noted above, internal memory can get deeply integrated into our mental models of the world, affecting how we perceive and act in that world. So if we alter an internal memory system, it could have a much more significant impact on the quality of our conscious experience.
These aren’t perfect reasons for rejecting the strong EPP. One could dream up examples of external props that seem to blur the alleged distinctions. For example, some external props may get deeply integrated into our mental models of the world and change how we perceive it. And perhaps the difference in fungibility is merely temporary: with advanced technology, even brain parts may be as readily replaceable as smartphones. Nevertheless, taken together, I think these three distinctions provide some reason for doubting the strong version of the EPP, and there are, in any event, more distinctions to be made (see work by Evan Selinger, for example).
The question is: where do we go from there? Levy introduced the EPP in an effort to make people more comfortable about the prospect of neural enhancement. He reasoned that if people had no problem with external enhancement, then if you could convince them that there was no in principle difference between external and internal enhancement, you could, in turn, convince them of the acceptability of the latter. In other words, he tried to draw us from our embrace of external enhancement to an embrace of internal enhancement, via the EPP. Given this, one might be inclined to think that any claim to the effect that there are important differences between the internal and the external would lead us to be more wary of internal forms of enhancement. But this is not my view. I think the differences may draw us in the opposite direction. I think they show that there are greater moral risks associated with tinkering with the internal realm, but, at the same time, there are greater benefits too. For instance, if consciousness is such an important ethical property — so deeply implicated in what it takes to live a good life — then surely that is the very thing we should be trying to enhance?
DeMarco and Ford make a slightly more general point. They think that the differences between the internal and external worlds should lead us to abandon the strong version of the EPP, but retain, in its stead, a weaker version. Levy himself drafted a weaker version of the principle, one that was not so reliant on the EMH. DeMarco and Ford try to modify this weaker version in light of a series of criticisms. I won’t rehearse their arguments here. Instead, I’ll simply skip to the end and to their attempt at a weak EPP:
Weak EPP (DeMarco and Ford): Alterations of external props are ethically on a par with functionally similar alterations of the brain, to the precise extent to which reasons for finding the functional alterations of the brain morally acceptable or unacceptable equally apply to reasons related to the functional alteration of external mental props.
As they see it, this weaker version of the EPP has a modest, but important effect on debates about enhancement and other mental interventions. It forces us to focus on the ethical permissibility of tinkering with different mental functions (like memory, desire, belief, mood and so on). If it is permissible to tinker with a given mental function, then its permissibility should not depend on whether the tinkering is internal or external. It should depend on the moral reasons for the alteration. But this is only the beginning of the ethical inquiry. If the functional alteration is permissible, then other more complex issues will need to be addressed. How effective are current technologies for alteration? What are their side effects? Do they have social implications? And so on.
To sum up, the EMH holds that the mind is not all in the head. External props can form part of a functional mind loop. Neil Levy argues that this functional equivalence between the internal and the external has some ethical implications. In particular, he thinks it can affect the debate about the moral propriety of neural enhancement. In brief, his argument is that since we typically have no problem with the alteration and enhancement of external mental props, so too should we have no problem with functionally equivalent alterations to and enhancements of internal mental systems.
Others disagree, arguing that there are important ethical differences between internal and external mental systems. DeMarco and Ford argue that these differences should lead us to a revised, weaker version of Levy’s parity principle. I have argued that these differences may indirectly play into Levy’s hands. This is on the grounds that they may suggest that internal alterations have greater ethical priority. To be sure, this is an incomplete argument. But it is one worth developing and one which I hope to develop in the future.