In Favor of the Functional Separation of Uploaded Minds and Simultaneous Mind Clones
Ben Hyink
2012-09-11

The saying “What you don’t know can’t hurt you” sounds condescending, even patronizing, in most cases. Surely, not knowing of a threat does not make you immune to it, any more than a young child covering his eyes makes himself invisible to playmates in a game of hide-and-go-seek. However, there are cat-and-mouse games in which one may be exposed to much greater risks as a knower of secrets than as a civil, oblivious person of good will. The heaviest price of knowledge may be the burden of having to make use of it, determining one’s own actions with an understanding of the effects they will have on others. One may find oneself with power and responsibility in a negative sense.

What started me on this line of thinking was a section of the book Where Good Ideas Come From: The Natural History of Innovation [1], by Steven Johnson. In it, he describes independent events that occurred in the U.S. before the September 11th attacks that could have alerted the FBI in time to intercept the terrorists before they struck. He blamed the intelligence failure on what he considered an outmoded practice: hierarchically filtering intelligence information before it is flagged as important. Instead, he suggested radical openness and non-hierarchical access to intelligence information, to help agencies recognize patterns better and faster than they do at present. He expressed disappointment that, after the September 11th investigation and hearings, no such changes were made in how the agencies process information.

I don’t consider myself qualified to analyze the best practices of intelligence communities, but from the outside I have a sense that the more coherent and comprehensive one’s picture of highly classified information becomes, the less freedom one has in important respects. I don’t envy people in high offices who do access that information and must make decisions, in the interest of their constituents and their nations, that may never be widely known. The military-industrial-intelligence complex has grown tremendously since WWII, and some legacy elements and practices would be practically impossible for any particular leader or small group to change.

The international community is semi-anarchic, with rogue states that do not share our qualms about using the “dark side” to accomplish their ends, so to some extent people at the top must negotiate paths forward not just for the greater good but also for the lesser evil, acknowledging that unavoidable evils are inherent in the work. Some of that work includes secret development of emerging technologies in ways that will transform the future of human culture and the world economy. The innovative independent reporter Gina Rydland of Future Extreme Media [2] has made it the agenda of her new organization, “The Virtual Center for the Study of Progressing Technologies” [3], to assist laypersons in analyzing anticipated developments and in proposing ways to promote technology introductions that will result in desirable outcomes for humanity. This article was written with her center in mind.

As a former undergrad focused on neuroscience, I often find my thoughts drifting toward mind uploading and the future of intelligent, sentient life. I think that the emergence of mind uploading, and the intelligence augmentation it enables, will release unprecedented potential to improve the condition of life for humanity, especially once advanced physics can seamlessly interface biological brains with functional copies in supercomputers. However, ubiquitous radical intelligence augmentation, like any widely distributed technology that super-empowers individuals, has the potential to threaten established centers of power, particularly governments. The ultimate agenda of a nation is its own preservation, ideally to ensure a free and civil society for the governed, and governments will seek to prevent disruptions that could threaten their very existence. The natural protectors of existing power relations against such threats are the intelligence communities.

While I do not doubt that most of the people in these communities sincerely desire to preserve and advance certain interests for the common good, isolated abuses have been recorded, such as the LSD experiments in hippie communities. It is entirely possible that, in the future, individuals within communities that are ahead of their time in different ways, presenting alternatives to mainstream lifestyles, will be prone to falling afoul of the preservers of status quo realities (which I acknowledge are in some ways better than historical alternatives). In fiction, people who learn too much secret information often come to unfortunate ends, as may the people they affect through their lives. Moreover, once information has been learned it seems virtually impossible to “unlearn” it (amnesia drugs are a first step toward accomplishing that, though early intervention would be crucial), so one is stuck in whatever new personal reality has been realized, perhaps a dystopian one. The question I arrived at was: “How can we protect people’s liberty and state security in the era of mind uploads, when information is radically free and insight is easier than ever to obtain?” I suggest calling it the “intelligence hazard problem.”


In an earlier article, “Uploading for Life Extension Will Be Valid” [4], I argued that as uploaded minds come to operate more like software, it should become possible to swap core aspects of the mind, such as episodic and semantic memory and cognitive abilities. I said that continual swapping of such functions would erode the discrete self to the point that individual minds blurred and, by implication, became hard to differentiate from one another. On further reflection, I think there is a risk in that approach for most people: the TMSI risk (“too much sensitive information”). TMSI is a condition incurred when specific information unusually and undesirably constrains one’s options and imposes a burdensome weight of responsibility on one’s decisions. Some people are attracted to that kind of insight. To them I say, “More power to you, and please use it wisely.”

I suspect that most people would rather avoid such information, and for them avoidance probably is the wisest choice. Whatever they might do, I suggest that people in such a hypothetical circumstance go with their strengths and consider the welfare of others as well as their own good. The danger of a completely uninhibited mind commune, as I see it, is that the blind may lead the blind into a TMSI situation with unfortunate outcomes. With discrete upload minds that selectively share content, both the risk and the speed of disastrously harmful information leaks would be reduced, and utility would be maximized by preserving the safety and freedom of those who are neither well-suited to bear knowledge of state secrets nor well-positioned to advance their interests if they did possess such knowledge.

The same utility-maximization (and protection) strategy pursued by groups could be pursued by individuals who upload themselves. Obviously, uploading will enable routine storage of “back-up copies” that could be instantiated if some sort of accident or virus destroyed the active functional simulation program. However, suppose, for instance, that the upload was investigating something dangerous, like a drug cartel, and the criminals murdered him. If the back-up copy contained the same TMSI content as the destroyed version, instantiating it would be to no avail, because it would still be targeted. The same would hold true if an upload diverged into a team of individuals but kept no filters between the clone minds, so that they acted more like one individual than like closely related individuals with common values thriving in their own respective life paths.
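
To make that contrast concrete, here is a minimal toy sketch in Python that treats TMSI as a taint spreading through a group of mind clones. Everything in it (the class and function names, the sync rule, the four-clone group) is an invented assumption of mine for illustration, not a proposal for how uploads would actually work: under unfiltered sharing the clones form one big target, while selective filters contain the taint to the clone that acquired it.

```python
"""Toy model: TMSI ("too much sensitive information") as a taint that
spreads through a group of mind clones. All names and numbers are
illustrative assumptions, not a real design."""

from dataclasses import dataclass


@dataclass
class Clone:
    name: str
    compromised: bool = False  # True once this clone holds TMSI and is a target


def sync(clones: list[Clone], filtered: bool) -> None:
    """One round of memory sharing among clones.

    Unfiltered sharing acts like a single distributed mind: if any clone
    holds TMSI, every clone ends up holding it. Filtered sharing passes
    only vetted, non-sensitive content, so the taint stays contained.
    """
    if not filtered and any(c.compromised for c in clones):
        for c in clones:
            c.compromised = True


def safe_clones(clones: list[Clone]) -> list[str]:
    """Clones that remain untargeted after an adversary strikes everyone
    who holds the sensitive information."""
    return [c.name for c in clones if not c.compromised]


if __name__ == "__main__":
    for filtered in (False, True):
        clones = [Clone(f"clone-{i}") for i in range(4)]
        clones[0].compromised = True  # one clone stumbles onto TMSI
        sync(clones, filtered=filtered)
        mode = "filtered" if filtered else "unfiltered"
        print(f"{mode:>10} sharing -> safe clones: {safe_clones(clones)}")
```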

In the latter approach, which I would endorse, if one individual chose to accept sensitive information as part of a leadership position, or accidentally came upon TMSI that constrained her life options, the others would remain unaffected and continue sharing what they have to offer the world, potentially even living longer in dire circumstances (e.g., entanglements with a drug cartel). I will leave aside the topic of melding a copy of one’s simulated body with a copy of a loved one’s simulated body to make an “upload child,” though I suppose that may happen as well. The availability of processing resources, probably purchased at monetary expense, would limit the number of mind clones an individual could make. In relation to the intelligence hazard problem, however, it would seem safer to have a few to at most a dozen separate mind clones who acted responsibly than an unlimited number of roving clones exploring the whole possibility space, who might cause enough trouble to jeopardize the safety of all the related mind clones, since the clones might be considered too similar in nature to be trusted. In speculative prisoner’s dilemma situations, I can imagine that some types of behavior could be considered a sacrifice for the good of others. A component of the psychological pain of imprisonment or loss of life is the knowledge that whatever one had to offer the world will either not be shared freely or will cease to exist.


How much more likely would a person be to sacrifice herself if she knew that people much like herself, who had similar capabilities and would make similar contributions, would remain active in the world? That knowledge could take some of the sting out of different forms of self-sacrifice for individuals who found themselves entrapped by TMSI with dangerous groups. This would hold especially true for people who believe they have much to offer the world, such as geniuses and bright people (attributes that require wise formative time investments beyond what nootropics will offer), ethical crusaders, artists and creative types, and people with leadership abilities. It is easier to be stoical about such things when, in a clear sense, all hope is not lost for oneself in relation to the world.
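
The intuition can be put in crude expected-utility terms. The sketch below is toy arithmetic under assumptions I have invented purely for illustration (the utility weights and the “similarity” discount have no empirical basis): once enough sufficiently similar clones survive, the net value an individual assigns to self-sacrifice can flip from strongly negative to positive.

```python
"""Toy expected-utility comparison: how surviving mind clones might shift
the calculus of self-sacrifice. All weights are invented for illustration."""


def utility_of_sacrifice(n_clones: int, similarity: float) -> float:
    """Net value an individual places on sacrificing herself.

    personal_loss: assumed disutility of her own death or imprisonment.
    legacy_per_clone: assumed value of her contributions continuing in
        the world, credited per surviving clone and discounted by how
        similar (hence how substitutable) each clone is to her.
    """
    personal_loss = -100.0
    legacy_per_clone = 60.0 * similarity
    return personal_loss + n_clones * legacy_per_clone


if __name__ == "__main__":
    for n in (0, 1, 3, 12):
        print(f"{n:2d} surviving clones -> net utility {utility_of_sacrifice(n, 0.9):7.1f}")
```

Under these particular weights the sign flips at two surviving clones; the numbers are arbitrary, but the direction of the effect is the point.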

We are entering a time in which the world is more united than ever before, and because of this dense interconnectivity, what happens in one nation usually affects many others. There is growing consensus on representative democracy as the form of government, yet the international community seems likely to remain semi-anarchic due to a variety of factors, and some major threats now come from non-state actors. The unprecedented super-empowerment of individuals that uploading promises will alter societal dynamics in fundamental ways, and we should begin to assess the possible dynamics and prepare for them now. My hunch, based on the preceding analysis, is that in most cases it is in the interest of mind uploaders and of society for uploads to retain individual identity boundaries with selective sharing, and for responsible uploads to split into a small number of mind clones with discrete boundaries who live independent lives, maximizing utility and avoiding TMSI traps, though they may act cooperatively for their mutual benefit.

Addendum: After a couple of days reflecting on this issue, I have realized that some threats, like the one I have described, are relatively small, whereas other threats, like motor vehicle accidents or their equivalents for an uploaded mind, loom much larger. Still, special considerations in the problem I identified make two arrangements undesirable: an unlimited number of simultaneously instantiated mind clones, and a single operational clone (or clones receiving constant, unfiltered updates from each other). I know that I have been disappointed with some aspects of my life, and considering the possibility of minds like mine on alternative trajectories in a multiverse offers a speculative consolation, but any such minds are absent from this trajectory of the possible multiverse. I propose that the main issue that uploading, done as I have suggested, could solve be called “the multiverse problem”; the “intelligence hazard problem” (which could involve any group) would then be just a special case of threat, one that argues against an unlimited number of uploads bumbling into disaster for their mind clones, and against unfiltered, continual communication between the mind clones. I sincerely hope this domain of analysis, and arguments similar to mine, will be considered by those who first realize and commercialize mind uploading.

Knowing that minds very much like yours exist in the world, and might even take up your work if you died, might bias people against defecting in hypothetical “prisoner’s dilemma” threat situations, enduring a harder or even shorter life out of a desire to protect, in a sense, themselves. Different people have different priorities, and that probably won’t change, but with a many-worlds approach to accessible reality, all would not be lost for oneself in situations where it would be today; which isn’t to say that most people wouldn’t still strive to survive as best they could.


Notes

[1] Johnson, Steven. Where Good Ideas Come From: The Natural History of Innovation. New York: Riverhead Books, 2010. Print. http://www.amazon.com/Where-Good-Ideas-Come-From/dp/1594487715

[2] Rydland, Gina. Future Extreme Media: For a Sustainable Future! (n.d.). Web. 4 Sept. 2012. http://www.futureextrememedia.com

[3] Rydland, Gina. The Virtual Center for the Study of Progressing Technologies. (n.d.). Web. 4 Sept. 2012. http://www.irpt.info/vcspt/index.html

[4] Hyink, Ben. Uploading for Life Extension Will Be Valid. 30 Mar. 2010. Web. 4 Sept. 2012. http://ieet.org/index.php/IEET/more/3856