Why it matters that you realize you’re in a computer simulation
Eliott Edge
2015-11-14

What mathematician John Conway’s simple cellular-automaton rules produced were emergent complexities so sophisticated that they seemed to resemble the behaviors of life itself. He named his demonstration The Game of Life, and it helped lay the foundation for the Simulation Argument, its counterpart the Simulation Hypothesis, and Digital Mechanics. These fields have gone on to create a massive, decades-long discourse in science, philosophy, and popular culture around the idea that it actually makes logical, mathematical sense that our universe is indeed a computer simulation. To crib a summary from Morpheus, “The Matrix is everywhere.” But amongst the murmurs on various forums and Reddit threads pertaining to the subject, it isn’t uncommon to find a word or two devoted to caution: we, the complex intelligent lifeforms who are supposedly “inside” this simulated universe, would do well to play dumb about being at all conscious of our circumstance.
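It helps to see just how little machinery produces that emergent complexity. Below is a minimal sketch of Conway’s rules in Python; the `step` function and the set-of-live-cells representation are my own illustrative choices, not part of any canonical implementation. Each generation, a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three.

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life one generation.

    `live_cells` is a set of (x, y) tuples marking the live cells.
    """
    # Tally how many live neighbors each cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The "blinker," the simplest oscillator: a row of three cells
# flips between horizontal and vertical every generation.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker) == {(1, 0), (1, 1), (1, 2)})  # -> True
print(step(step(blinker)) == blinker)             # -> True, period two
```

From rules this small come gliders, oscillators, and even patterns that emulate full Turing machines, which is precisely why the Game of Life became a touchstone for the idea that simple computation can underwrite lifelike complexity.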

The colloquial warning says we must not betray the knowledge that we have become aware of being mere bits in the bit kingdom. A tipping-point population of players realizing that they are actually in something like a video game would have dire and catastrophic results: deletion, reformatting, or some kind of biblical flushing of our entire universe (or maybe just our species). Leave the Matrix alone! In fact, please pretend it isn’t even there.

The basic idea is that the intelligent lifeforms that have evolved inside a simulation are somehow made non-viable, or undesirable as samples, once they become aware of the simulation that they live in. Their own awareness of their plight (their environment) somehow excludes them from being valuable experimental samples. Samples that are aware of the truth of their simulated environment can, or will, compromise themselves, the simulation, or both.

So to avoid this possibly cataclysmic fate, some put forward a kind of survival strategy of “We better not know,” and, if we do know, “We better play dumb.” It’s a position that comes with several interesting problems. The first should be obvious enough: having just read the last few paragraphs, you are now irrevocably in the know regarding the theory, whether you actually believe the universe to be a simulation or not. Reading this very article is potentially putting reality itself, or maybe just the continuation of our species, at extreme risk. That is supposedly how flimsy the cosmos is in the face of the grandest secret of its truest nature: the universe can be unraveled by the simple transmission and comprehension of just a few sentences describing its features. Only a handful of axioms that explain the environment are apparently enough to destroy us all. Something about this theory feels unlikely, because it means that if you have a deep enough textbook on the nature of reality, the very act of reading it is enough to unmake reality. That sounds a lot like a literary device out of an H.P. Lovecraft short story: imagine an obscure occult science text so dangerous that to utter its very table of contents is enough to return the whole cosmos to total chaos.

Another issue to consider lies in the conceivably deeper purposes for simulating a life-sustaining, life-evolving universe. Setting aside the problem of anthropomorphizing the motives of our hypothetical simulation-designers, let’s nonetheless indulge and imagine ourselves in their position.

If your simulation includes evolving conscious entities that are allowed to develop an intellect (learning), and they have a recursive method to expand and explore that intellect (science), then it is likely that over time and after enough observations those entities will inevitably bump into the “writing on the wall”, as it were.

In the case of our own universe, physicist Tom Campbell of NASA has argued that the constant speed of light, the observer effect, and the Big Bang—all matter, energy, and physical laws arriving simultaneously out of nowhere—are tells of just such a situation. Brian Whitworth has published several papers on how the physics we experience could be easily explained with computable analogs. Martin Rees’s book Just Six Numbers could be read as cataloguing a whole set of tells. Max Tegmark summarizes the position in the PBS documentary The Great Math Mystery:

“If I were a character in a computer game that was so advanced that I were actually conscious, and I started exploring my video game world it would actually feel to me like it was made of real solid objects made of physical stuff.  Yet if I started studying, as the curious physicist that I am, the properties of this stuff, the equations by which things move and the equations that gives the stuff its properties, I would discover eventually that all these properties were mathematical.  The mathematical properties that the programmer had actually put into the software that describes everything.”

Via Tegmark’s thinking we can assume that if the physics or nature of any given universe lends itself to being described through mathematics, or exhibits mathematical constants, then that universe can be surmised to be analogous to, or a derivative of, a computer simulation, even by the entities within it. In other words, if you can compute it, it’s likely the result of a computer itself.

In the case of our hypothetical evolving lifeforms, their science, if it is robust enough, should show that their universe is indeed logically the result of a computer simulation.  Otherwise, what is the value of all their science?

We could call this the Simulated Intelligence Hypothesis: if you grow an evolving intelligence in a simulated environment, it should, given enough time, be able to deduce, infer, or observe that its environment is indeed the result of a computed simulation. If this is true, then it should lead to an interesting circumstance: an evolving intelligence within a simulated environment cannot be kept from the fact that its environment is a simulation, given enough time and a robust enough science. This we could call The Sims Situation: you cannot evolve an intelligent sample inside a simulation whilst keeping that simulation hidden indefinitely. Eventually their science will reveal their circumstance, unless of course there is some kind of outside intervention, the very kind of intervention that we should supposedly play dumb in an effort to avoid provoking. Nevertheless, let’s return to imagining the evolution of our simulated lifeforms.

If we have a simulated universe that provides a platform for intelligent lifeforms to evolve, we could break these lifeforms into at least three categories:

  1. Simple: they can make decisions and engage meaningfully with their environment.

  2. Complex: they record history as well as develop sciences, cultures, artifacts, and arts.

  3. Savvy: they are conscious of the fact that they are in a simulated universe.


Once an intelligence moves from a Complex orientation to a Savvy orientation, it has crossed an ontological Rubicon that divides these two distinct viewpoints. We could call this divide the Edge Threshold. If we put any real weight into the computing effort of running this universe simulation in which intelligent lifeforms evolve, then we might in fact hope that it grows something slick enough to figure out what’s really going on. Not just for the sake of amusement either, but for an insight into our own motives and nature as simulation-designers. We would actually want a Savvy intelligence inside our simulated universe. The reason why is very simple: if we can only ever observe intelligent lifeforms that are restricted from knowing that they are in a simulation, then our sample pool, and thus our knowledge base, will always be limited to intelligences that are out of the loop. Complex-level lifeforms (like human beings just prior to the computing revolution) would still be complex and interesting, but they would by definition always already be operating from an ontological ignorance of the true nature of their environment. They would be complex indeed, but far from savvy.

On the other hand, Savvy lifeforms would be extremely likely to produce fascinating forms of expression, technology, novelty, social organization, and so on. They would also likely begin to create their own life-producing simulated universes themselves. They may even attempt to signal their outside simulation-designing hosts somehow. Therefore I, as part of the original hypothetical simulation-running team, would be extremely hesitant to interfere with, if not downright protective of, that Savvy sample’s survival and evolution, assuming I were to interfere at all. What could possibly give me more insight into what I, the original simulation creator and maintainer, have done than this Savvy sim living in my ever-growing mock universe? Would I really throw out the sim that realized they were in The Sims? Indeed, evolving a sim that realizes they are in The Sims might feel like I’m actually getting my computational weight’s worth, especially if I was putting in all this effort to power and evolve a simulated universe in the first place. If our simulated universe is inadvertently an intelligence test for the evolving lifeforms inside it, then I’d hope we grow a winner. A sample so intelligent that it can actually see the code at the edge of matter is likely a sample we’d benefit from studying. It’s not too far removed from teaching great apes to sign.

All of this presents another interesting circumstance for evolving intelligent lifeforms in a simulated universe in the first place. If they, the sims inside, are given enough time to develop their intellects and sciences, then bumping into the truth that they are products of a simulated environment seems nothing less than an inevitability.

In other words, when evolving intelligent lifeforms in a simulated environment, either the outside simulation-designers always eventually intervene or the evolving sims inside always eventually figure it all out, provided that they, or their sciences, don’t collapse beforehand.

To recap: First, if the Simulation Hypothesis is true, then the Simulated Intelligence Hypothesis is likely also true—an evolving intelligence in a simulation will eventually become aware that they are in a simulation, barring extraordinary intervention.

Second, if the Simulated Intelligence Hypothesis is true, then it should lend credence to The Sims Situation: an evolving intelligence in a simulated environment cannot be denied the knowledge that it is in a simulated environment forever. In other words, you cannot evolve an intelligence in a simulated environment and also hide the fact that the environment is a simulation.

Third, this leads us finally to The Savvy Inevitability—if the Simulated Intelligence Hypothesis and the Sims Situation are true, then crossing the Edge Threshold (the ontological divide between Complex and Savvy intelligences) should be assumed as inevitable, given enough time to evolve any given intelligence sample.

Ergo, if all of the above is correct, the hypothetical simulation-designers likely anticipate the eventual emergence of intelligent lifeforms that can accurately sense what their environment truly is. The simulator(s) may even relish the moment of paradigm shift for their sims in the same manner that parents celebrate their adolescent children going off to build their own lives and families.



Figure 1 — Evolving intelligent sims




Outside of the assumption that a Savvy sample is valuable in the way just outlined, there are other problems with the previously mentioned “playing dumb” suggestion. The notion that we should (or even could) occlude our “outside” observers, the simulator(s), or ourselves from whatever knowledge we may have about our environment is not only probably impossible, it is also metaphysically unreasonable. “We better not know,” even if it is the correct recourse, is impossible to maintain. Ethically, this notion is odious: it is not only ultimately anti-science, anti-intellect, and indeed anti-evolution, but it goes on to assume punishment for evolutionary developments that are partly outside the evolving intellect’s hands. We can’t be held responsible for natural discoveries, just as we can’t help but see the sun. They are the very fingerprints of the gods, so to speak. We can only truly be held responsible for what we do with natural discoveries; we cannot be held responsible for the fact that we are able to make them at all. Arguably, nearly all conscious life is defined by its ability to sense its environment. Discovering that the environment is a computer simulation, if that is the case, is a natural consequence of the environment itself.

In summary, if you are evolving intelligent life in a simulated environment, you must expect its simulated nature to be eventually discovered by its inhabitants as a logical consequence of your intelligent lifeforms’ evolution.

For these reasons, any anxiety regarding our own awareness of evolving within a simulated universe should probably be dismissed outright, because if we are, it would be impossible to hide it from our own, or any other, evolving intelligence indefinitely. If something waits to be discovered and the universe itself provides a platform to develop evolving intelligent lifeforms, then its eventual discovery is inevitable. As George Berkeley repeated, “To be is to be perceived.”

It may very well be a crucial point of existence to discover evidence that this is a simulated universe, if that is the case. The evolution of such a Savvy intelligence is likely far more interesting to the simulator(s), given the Savvy intelligence’s likelihood of developing insightful technologies and forms of expression that benefit not just the Savvy sample in question but the simulator(s) as well. Picture a possibly endless number of simulated universes, each nested within another, where in all directions you find simulated universe within simulated universe, like a fractal evolving upon itself. Once an intelligence begins to build computers and simulate universes itself, as we already do in laboratories around the world today, perhaps it is inevitable that the questions and ideas we are wrestling with in this field become commonplace throughout the universe, if not the multiverse. Perhaps all this informs the simulator(s) as well. Perhaps the road that we choose in building our own simulated universes enlightens the creators of the simulated universe that we occupy. Maybe the path we choose informs the path they chose, or didn’t choose.

There is a term in professional wrestling called kayfabe. Similar to the suspension of disbelief, it means that wrestlers always stay in character in order to make the overall melodramatic narrative feel exciting and palpable to the fans, even though the fans, too, are aware that the entire exchange is scripted beforehand. We all know the blood flowing in a horror film is only dyed corn syrup; nonetheless, the false reality is maintained in order to enjoy the spectacle. We are reaching a point in simulism and digital physics where it may be time to drop the kayfabe and peer more deeply into the question: what does it actually mean to be Savvy?

If we are in a computer-simulated universe, we must embrace this new horizon of learning, as we have with heliocentrism, DNA, and evolution before it. We obviously mustn’t fear it or pretend that it isn’t there. If a rock is truly code, then that is the universe’s responsibility, not ours. If the universe is a computed simulation, then universal Savviness is likewise inevitable for all evolving intelligences fortunate enough to survive its slings and arrows. If this is a Matrix, then we should probably see what happens when we begin to think and act like it is. Perhaps accepting this is the beginning of kindergarten, and the teacher is only moments away from entering the room. Perhaps Morpheus is waiting to give us our phone call.

Special thanks to Nikki Wyrd and Dr. Timothy Brigham for their edits and to Tom Campbell for pointing out this rabbit hole.