Preserving the Self for Later Emulation: What Brain Features Do We Need?

By John M. Smart, Ever Smarter World

Posted: Nov 29, 2012

Let me propose to you four interesting statements about the future…

1. As I argue in this video, chemical brain preservation is a technology that may soon be validated to inexpensively preserve the key features of our memories and identity at our biological death.
2. If either chemical or cryogenic brain preservation can be validated to reliably store retrievable and useful individual mental information, these medical procedures should be made available in all societies as an option at biological death.
3. If computational neuroscience, microscopy, scanning, and robotics technologies continue to improve at their historical rates, preserved memories and identity may be affordably reanimated by being “uploaded” into computer simulations, beginning well before the end of this century.
4. In all societies where a significant minority (let’s say 100,000 people) have done brain preservation at biological death, significant positive social change will result in those societies today, regardless of how much information is eventually recovered from preserved brains.

These are all extraordinary claims, each requiring strong evidence. Many questions must be answered before we can believe any of them. Yet I provisionally believe all four of these statements, and that is why I co-founded the Brain Preservation Foundation in 2010 with the neuroscientist Ken Hayworth. The BPF is a 501(c)3 nonprofit, chartered to put the emerging science of brain preservation under the microscope. Check us out, and join our newsletter if you'd like to stay updated on our efforts.

As one of the themes of this blog I’ll try to explain why I’m optimistic about these technologies, and to enlist your help in pushing forward their validation or falsification as fast as feasible. If validated, I’ll be pitching to you for help in making the brain preservation option accessible and affordable around the world, as fast as feasible. To these ends, thank you for any frank and constructive feedback you can leave in the comments.

In this post, I’d like to try to provisionally answer a question relevant to the first three statements above:

To preserve the self for later emulation in a computer simulation, what brain features do we need?

We can distinguish three distinct information processing layers in the brain:[1]

1. Electrical Activity (“Sensation, Thought, and Consciousness”)
These brain features are stored from milliseconds to seconds, in electrical circuits.
2. Short-term Chemical Activity (Short- and Intermediate-term Learning – “Synapse I”)
These brain features are stored from seconds to a few days in our neural synapses (synaptome), by temporary molecular changes made to preexisting neural signaling proteins and synapses.
3. Long-term Molecular Changes (Long-term Learning – “Nucleus and Synapse II”)
These are stored from years to a lifetime in our neuron’s connectome, nucleus (epigenome) and synaptome, by permanent molecular changes to neural DNA, the synthesis of new neural proteins and receptors in existing synapses, and the creation of new synapses.

At present, it is a reasonable assumption that only the third layer, where long-term durable molecular changes occur, must be preserved for later memory and identity reanimation. The following overview of each of these layers should help explain this assumption.

1. Electrical Activity (“Sensation, Thought, and Consciousness”)

Our electrical brain includes short-distance ionic diffusion in and between neurons and their supporting cells (i.e., calcium wave communication in astrocytes), action potentials (how neurons send signals from their dendrites to their synapses), synaptic potentials (how signals cross the gaps between neurons), circuits (loops and networks) and synchrony (neurons that fire in unison, though they are widely separated). Electrical features operate at very fast timescales, from milliseconds to a few seconds, and are variable (not exact), volatile, and easily disrupted.

Neural Synchrony – Our Leading Model of Higher Perception and Consciousness. Image: Senkowski, 2008

These features certainly feel very important to us. They include our sensations (sensory memory) and current thoughts (commonly called “short-term” memory by neuroscientists). Recurrent loops, special electrical circuits that cycle back on themselves, hold our current thoughts (when you rehearse some information to avoid forgetting it, you are literally keeping it “in the loop”). Neural synchrony creates our conscious perceptions, and when it happens in the self-modeling areas of our brain, it gives us self-aware consciousness.

Yet electrical features are also fleeting. When you sleep, or are knocked unconscious, or are given an anesthetic, your consciousness disappears, only to be “rebooted” later, from more stable parts of your brain. Our memories aren’t even recalled with precision but are rather recreated, as volatile electrical processes, from these molecular long-term stores, in ways easily influenced by our mental state and cognitive priming (what else is on our mind). That’s why eyewitness testimony is so variable and unreliable.

The electrical features of our self are thus like the “foam” on the top of the wave of our long-term memories and personality. They make us unique for a moment, as they hold only our most immediate thinking processes.[2] Amazingly, people who undergo special surgeries that stop their heart, and some who drown in very cold water, can have no detectable EEG (electrical patterns) for more than thirty minutes, and their brains successfully reboot after rewarming them. Essentially, these individuals are recovering from clinical brain death. Not only do they not have consciousness during this period, they have no unconscious thoughts. Yet because their deeper layers aren’t too disrupted, they can restart their electrical activities.

An excellent book about neural spikes, loops, and synchrony is Rhythms of the Brain, György Buzsáki, 2006. It explains the emergent properties and integrative functions of these "highest order" electrical features of our brain. My late mentor at UCSD, Francis Crick, and his Caltech collaborator, Christof Koch, call this topic the search for the Neural Correlates of Consciousness. It's a great phrase. Consciousness is not a mystery we'll never solve; according to a number of neuroscientists it is a physical process of neural synchrony in particular regions of your brain. These brief, rhythmic synchronizations share information between groups of neurons in distant regions of the brain by tightening up ("binding") their interdependent sequences of action potentials. The synchronizations are controlled by the inhibitory neurons in our brain, which use the GABA neurotransmitter. Disrupt gamma synch, as with anesthesia, and you take away consciousness. Give a drug like zolpidem, which activates GABA neurons and increases gamma synch, to patients who are in persistent vegetative state, and you wake 60% of them up from their comas, to varying degrees.

Wikipedia doesn't yet have a good explanation of the gamma synchrony model of consciousness, but it will in a few more years. Laura Colgin at Kavli has found two reliable gamma synch mechanisms in rat hippocampus. She speculates that slow gamma makes stored memories available to current consciousness, and fast gamma integrates sensations to create conscious perceptions. Though neuroscientists don't yet all agree on the details, many have found neural correlates of sensations, thoughts, emotions, and consciousness in the electrical features of our brains. These features, in conjunction with the short-term chemical changes we will describe next, represent the moment-by-moment updates to our long-term memory, self, and intelligence.

2. Short-term Chemical Activity (Short- and Intermediate-term Learning – “Synapse I”)

Short-term chemical activity is the next layer down. It involves all our short- and intermediate term learning and memory, everything beyond our sensations, current thoughts, and consciousness, but not including our long-term memories. We can call this layer “Synapse I.”

As your electrical experiences and thoughts race around the various circuits in your head, you make a number of short-term learning changes in your neural networks to capture, for the moment, what you've learned. These involve changes to preexisting proteins in your preexisting synapses (communication junctions), changes that last for minutes (short-term) to days (intermediate-term). These are changes in both the mechanics of neurotransmitter release and short-term facilitation (strengthening) or depression (weakening) of synaptic effectiveness. Synapses are modified by the precise timing and frequency of electrical signals (action potentials) received by the postsynaptic neuron, a process called spike-timing dependent plasticity.

There are short-term changes in signaling molecules (neurotransmitters, cAMP, Ca++, CamKII, PKA, MAPK), and in membrane receptors (NMDA). Phosphorylation states (chemical tags) are altered on some of these molecules, and a temporary equilibrium between kinases (enzymes that add phosphates to key molecules) and phosphatases (enzymes that take them away) is established in the synapse. [Note: On Oct 15, 2012, Ye et al. showed in Aplysia how precise spatiotemporal signaling in the synapse involving PKA holds short-term memories in synaptic electrochemical networks, and the interaction of PKA and MAPK holds intermediate-term memories in these networks, in a process called synaptic facilitation.] If the short- or intermediate-term learning or memory is to become long-term, communication with the cell nucleus must now occur, and new membrane proteins and synapses are then built, involving new or altered circuits in the connectome. If not, the new memory dies out.[3]
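To make the spike-timing idea concrete, here is a toy sketch of the standard exponential-window STDP rule in Python. The amplitudes and time constants are invented placeholders for illustration, not measured biological values.

```python
import math

# Toy spike-timing dependent plasticity (STDP): a synapse strengthens
# when the presynaptic spike precedes the postsynaptic spike, and
# weakens when the order is reversed. All constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012      # assumed potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # assumed decay time constants (ms)

def stdp_weight_change(t_pre, t_post):
    """Fractional weight change for one pre/post spike pair."""
    dt = t_post - t_pre            # positive: pre fired before post
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)    # facilitation
    return -A_MINUS * math.exp(dt / TAU_MINUS)      # depression

w = 0.5
for t_pre, t_post in [(0, 5), (100, 104), (200, 195)]:  # spike times (ms)
    w += w * stdp_weight_change(t_pre, t_post)
    print(f"pre={t_pre} post={t_post} -> w={w:.4f}")
```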

Every night, when we sleep, our short- and intermediate-term brain writes important parts of its experiences to our long-term memory, building durable new synaptic connections, where this learning can now stay with us for years to life, in a process called memory consolidation. This process moves a subset of our recent learning and memories, apparently the most relevant parts, from temporary spatiotemporal signaling states to permanent new synaptic structures, anchored to the cytoskeleton of each neuron. We can think of these new proteins, synapses, and circuits established in neural synapses and nuclei as very roughly like DNA: they are long-term stable structures, encoded in a partly digital form, that will endure all the flux and variability of the biochemistry within each neuron over a lifetime. It is these unique synaptic and epigenetic networks that we must preserve, scan, and upload in creating neural emulations, as we will discuss. Long-term memory formation happens best when we are in slow wave (deep and dreamless) sleep, which we get in cycles during the night (and especially well if our sleeping room is dark and quiet) and also during a good nap (a great way to "lock in" what you've learned, after a demanding learning period that will naturally make you sleepy).

Neural dendrites, cell body, action potential, and synapses. Image: Gallant's Biology.

All our neurons work in circuits, and strengthen or weaken their connections based on chemical and electrical activity, in a process called Hebbian learning. Just like your muscles, which come in two sets that oppose each other around every joint, neural circuits are both excitatory and inhibitory at many decision points in the network. Perhaps the most important decision points are the cell bodies of each neuron, where the nucleus is. The electrochemical current from all the dendrites ("roots") of each neuron flows toward its cell body, and action potentials (current waves) flow from the cell body to its synapses ("branches"), along the axon ("trunk") of each neuron. Glutamate is the main neurotransmitter we use to send excitatory current from a synapse to the dendrite of the next neuron in a circuit (the postsynaptic neuron). Glutamatergic synapses are thus called "positive" in sign, and they promote electrical activity throughout the brain. GABA is the main neurotransmitter we use to let inhibitory current leak out of a postsynaptic dendrite. GABAergic synapses are thus called "negative" in sign, and they depress circuits throughout the brain.

Each neuron sums the net result of the positive and negative inputs it receives from its dendrites, over milliseconds to seconds. If the current exceeds that neuron's threshold, it sends an action potential (depolarizing electrochemical signal) to all its synapses. As the brain learns, our synapses enlarge or shrink, giving them greater or lesser excitatory or inhibitory effect, and we may grow new synapses or lose existing ones. With few exceptions, each neuron also uses just one type of neurotransmitter (e.g., glutamate or GABA), or the same small set of neurotransmitters, at all its synapses.

The architecture of memory, thought, emotion, and consciousness may thus be reducible to a surprisingly simple set of algorithms, connections, weights, signaling molecules and electrical features in each neuron, working together in a massively parallel way to create computational networks that are far more complex than the individual parts.
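As a toy illustration of that claim, here is a minimal Python sketch of the summation-and-threshold picture just described, with positive weights standing in for glutamatergic synapses, negative weights for GABAergic ones, and a crude Hebbian update. The threshold and learning rate are arbitrary assumptions.

```python
# Minimal sketch of the summation-and-threshold picture above: a neuron
# sums excitatory (positive, "glutamatergic") and inhibitory (negative,
# "GABAergic") synaptic inputs and fires if the total crosses its
# threshold. Weights then shift Hebbianly: synapses that were active
# when the neuron fired are strengthened. All numbers are illustrative.

THRESHOLD = 1.0
LEARNING_RATE = 0.05

def step(weights, inputs):
    """One integrate-and-fire step with a Hebbian weight update."""
    total = sum(w * x for w, x in zip(weights, inputs))
    fired = total >= THRESHOLD
    if fired:
        # Hebbian learning: co-active synapses grow (excitatory more
        # positive, inhibitory more negative), preserving each sign.
        weights = [w + LEARNING_RATE * w * x for w, x in zip(weights, inputs)]
    return fired, weights

weights = [0.6, 0.5, -0.4]   # two excitatory synapses, one inhibitory
for inputs in ([1, 1, 0], [1, 1, 1], [1, 1, 0]):
    fired, weights = step(weights, inputs)
    print(inputs, "->", "spike" if fired else "silent", weights)
```

Note how the inhibitory input in the second trial silences a neuron that would otherwise fire; that push-pull between excitation and inhibition is the "opposing muscles" analogy above in miniature.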

Hippocampus and frontal lobes. Image: NIH

In higher animals, the neurons in our hippocampi (two C-shaped organs, one in each hemisphere of our brain), and the connections they make to the rest of our cerebral cortex (especially to our frontal cortex), store all kinds of episodic (experiential) and declarative (fact-based) information, all from our last few days of life. At the same time, neurons in our cerebellum (a more primitive "little brain" at the base of our skull) store procedural learning and memory (how to move our bodies in space). Experiments with rats and primates tell us that each hippocampus makes perhaps tens of thousands of new neurons every day, from neural stem cells. Other than for repair after certain kinds of injury, no other part of the adult brain is able to use stem cells in detectable numbers, as far as we know. The rest of our brain is postmitotic (unable to use cell division to maintain its structure), as neuroscientists learned in an elegant experiment in 2006. Our neurons must be maintained by our immune and repair systems, and as they die via natural aging, or kill themselves in apoptosis, memories start to die.

Hippocampal dendritic spines. Image: Fiala & Harris, 2000.

Our hippocampal neurons have the very tough job of temporarily holding, in their uniquely dense synapses, and via their connections to the rest of the cortex, much of the new information we have learned over the last day or two, during our entire adult life. Here is a picture of a computer reconstruction of a small section of ten columns of synapse-rich “spiny dendrites”, from the CA1 (input) region of the hippocampus. CA1 contains areas like place cells, imprinted genetically with detailed maps of 3D space. Like the digestive cells lining our gut, and the skin cells at our fingertips, certain hippocampal neurons appear to get worn out on a regular basis by this demanding short-term memory holding function, and so some neuroscientists think new ones must regularly grow and mature to replace them.

People whose hippocampi are both surgically removed, like the memory disorder patient Henry Molaison, who had this done at the age of 27, can't update their long-term episodic and declarative memories. H.M.'s long-term memory was mostly "frozen" at 27. He could occasionally add bits of new information to long-term memories of the same type he'd built before the surgery, and he could learn new procedural (spatial and muscle) memories in his cerebellum, but he had no cerebral knowledge that he'd added these memories. H.M.'s amazing life suggests that if the brain preservation process damaged the hippocampus, but not the rest of our brain, we'd come back without our most recent experiences (retrograde amnesia), but our older memories and personality would still be intact. Ted Berger at USC managed to build a simple version of an artificial electronic hippocampus for mice in 2005, so there's good reason to believe that this part of our brain, though important, isn't irreplaceable. As long as you could install an artificial hippocampus in the computer emulation constructed from your scanned brain, you'd be back in business as a learning organism, with only some of your more recent memories and learning erased. This all helps us understand that what cognitive scientist Daniel Dennett would call our center of narrative gravity, our most unique self, is our long-term memory.

The fact that only special areas of our hippocampus can add new cells during life exposes a harsh reality about our biological brains. We are all born with a very large but fixed long-term memory capacity, and this capacity gets increasingly used up, pruned, and potentiated the older we get. Anyone over 40, like myself, knows they are considerably less flexible at learning new things than they were at 20. It's far easier for older people to add more twigs to branches of knowledge we've previously built in our "tree of experience" than to form new branches. We can do it, but it gets progressively tougher and slower the older we get.

This means, if we want to be lifelong learners in a world of accelerating technological and job change, it is critical to get an early education that is as categorically complete (global, cosmopolitan, and scientific), moral (socially good, positive sum) and evidence-based as possible. Our children need the best mental scaffolds they can get early on, or they’ll spend the rest of their lives trying to prune away harmful and untrue thoughts and beliefs acquired in their youth. Psychologists have long known that it is much easier to add increasing specificity to a neural network than it is to unlearn (depress) any branch, once it’s built. We need to be careful about what we allow into our memory palaces.

That said, children also benefit greatly from freedom, early on in life, to study what they themselves desire to learn, and to have a good degree of control over learning outcomes and style. This freedom, and appropriate rewards for effort of any kind, induce them to build intricate mental specializations in areas they are personally passionate about. For those who want to know how to implement a 50/50 balance of broad, state-mandated learning in future-critical STEM fields, analytical thinking, and civics (the "hilt of the sword"), and a personalized program of student-directed specialized learning, creativity, and play in the other half of the time (cutting into and mastering whatever they can convince their teachers is worth studying, or the "blade of the sword"), I strongly recommend The Finland Phenomenon, 2010. This film, and to a lesser extent Tony Wagner's book Creating Innovators, 2012, demonstrate key elements of the future of learning for enlightened societies, in my opinion. It may take 20 years for the evidence to be incontrovertible and for this model to be implemented in the US, but you can give it to your child now, if you find it appealing.

Cybertwin – Virtual Assistants With Simple Models of Our Interests Will Be Useful for Many of Us by the Early 2020s.

It is also liberating to realize that while our biological brains are less able to learn fundamentally new things as they age, all the digital technologies we use, technologies which will bring our emulations back at an affordable price later this century, will continue to get exponentially more powerful every year. Most of us don't realize this, but everyone who uses a social network, email, or any other technology to capture things they say, see, and write about is also creating a digital simulation of themselves. By 2020 we'll all be talking to and with our best search engines in complex sentences (the conversational interface), and shortly thereafter, we'll all be able to use simple software agents, cybertwins, which will have crude maps of our interests and personality, so they can serve us better. Computational linguists know that if you capture what a person says for just two years, we are so repetitive about what we care about that when we are having a senior moment, a cybertwin could whisper into our ear the word that natural language processing algorithms predict we want, and it would be right most of the time. That's how repetitive we are, and how good web search will be by 2020. As I wrote in 2005, people who don't run cybertwins will be much less productive, so cybertwins will be very popular, even though they'll bring lots of new social problems in their first generation.
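As a toy illustration of that repetitiveness claim, here is a minimal bigram next-word predictor in Python, the crudest possible version of the prediction a cybertwin might whisper. A real system would use far richer language models; the tiny corpus below is invented.

```python
from collections import Counter, defaultdict

# Toy illustration of the repetitiveness claim: a bigram model trained
# on a person's past utterances predicts their most likely next word.
# A real "cybertwin" would use far richer models; this is only a
# sketch of the underlying statistical idea, on an invented corpus.

corpus = (
    "i really need more coffee . i really need a nap . "
    "i really need more funding for brain preservation ."
)

following = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Whisper the statistically most likely next word, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("really"))  # -> "need"
print(predict_next("need"))    # -> "more" (seen twice, vs "a" once)
```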

These simulations won’t be turned off by our loved ones when we die. Our children will use them to interact with a simulation of us, and to keep the best of our thoughts, experiences and personalities accessible to them. Teaching our children and ourselves to be digital natives and digital activists is thus a very important way for us to build an ever more capable cybertwin, even as our biological self naturally slows down and simplifies (prunes away branches of knowledge and memories we once had ready access to) with advancing age.

Now we arrive at our truest self, the part we care most about preserving and sharing with our loved ones and society. It is this self that I expect will later merge with the cybertwin that many of us will leave behind, as strange as that might sound.

Experience-based learning. Image: Graham Paterson, Children’s Hospital Boston

3. Long-term Molecular Changes (Long-term Learning – “Nucleus and Synapse II”)

The production of long-term memory, personality, and identity requires all the short-term synaptic changes above, plus permanent molecular changes in the neuron’s Nucleus (DNA and its histones, or wrapping proteins), and the permanent creation of new cellular proteins, synapses, and circuits (Synapse II). Here’s a brief summary of our understanding of the process[4]:

Nucleus (“Genome, Transcriptome, and Epigenome”)
1. Retrograde transport and signaling from the synapse to the nucleus
2. Activation of nuclear transcription factors and induction of gene expression
3. Chromatin alteration and epigenetic changes in gene expression (gene-protein networks)
Synapse II (“Connectome and Synaptome”)
4. Synaptic capture of new gene products, local protein synthesis, and seeding of new synaptic sites
5. Permanent synaptic changes, activation of preexisting silent synapses, formation of new synapses.

We used several “-ome” words above. Let us briefly consider each. They are very roughly ordered below in terms of their likely contribution to our unique self, from least to most important:

The Genome. These are inherited genes and gene regulatory networks that control instinctual behaviors. Our genome includes the unique alleles we received from our parents. It is easy to preserve, as it is the same in all cells. With one tissue sample we can create a clone later, either physically, or far more likely, in a computer simulation. But this clone has only our inherited uniqueness. We’ll need contributions from the next four “omes” to add our life memories and learning to the emulation.

The Transcriptome. This is the set of gene transcripts a cell makes, which in turn determines the set of proteins made by that cell. While proteomics (another "ome" word) is in its infancy, scientists estimate each of our cells has the DNA to express ~20,000 basic protein types. Each type can be further modified after creation by adding or removing chemical tags like phosphate, methyl, ubiquitin, and other small molecules, so that more than a million protein subtypes may exist in a typical human body. Fortunately, each of our ~220 cell types only uses around 5,000 of these 20,000, and perhaps fewer than 2,000 of the 5,000 are unique to each cell type. Neurons and glia, the cell types we are most interested in, may use just a few hundred protein types to store our higher learning and memory in the nucleus and synapses. The other proteins are there to keep all of our cells alive, which is a critical precondition to being able to store long-term memories in a special subset of neural structures. All this suggests the proteomics of memory and identity, and of later memory and identity reconstruction from scanned brains, are not impossibly complex, but rather highly challenging, fascinating, and eventually solvable problems.

The Epigenome. These are learning-based changes in gene-protein networks that happen in the nucleus of each neuron, mostly during the life of the organism. The Dutch famine of 1944 and the Överkalix study in Sweden tell us that some epigenetic changes can be inherited in humans, so we all should seek good nutrition and avoid toxin exposure, as we may pass some of that to our children in the form of compromised and undermethylated epigenomes. But there is a lot more to the epigenome story still to be uncovered, as this 2011 article on epigenetic regulation in learning and memory in Drosophila makes clear. Our epigenome is a gene-regulatory layer that involves chemical changes, mostly methylation, to DNA and to the histone proteins that wrap and expose DNA in the cell nucleus. These changes determine how DNA, RNA, and protein are expressed in the nucleus, and they may affect how the cell body integrates incoming electrical signals and manages its synapses.

The Connectome. This is a map of our neural cell types, and how they connect. Our connectomes and much of our dendrite structure are very similar in all of us. This shared developmental structure makes it easy for us to communicate as collectives, for ideas or "memes" to jump from brain to brain. Yet with 100 billion neurons making an average of 1,000 connections to other neurons, and most of these not being developmentally controlled, we've got the ability to make 100 trillion connections, the large majority of which will be unique to each individual.

The Synaptome. These are key features of the ~1,000 synapses that each neuron makes to others. They are the particular long-term molecular features that determine the strength and type of each synapse, its signaling states and electrical properties, as we’ve described them above. The synaptome is the weight and type of the 100 trillion connections described above, and this information may be the most important “recording” of our unique self. Fortunately, because memories are stored in a highly redundant, distributed, and associative manner in our synaptic connections, our synaptome is to some degree fault tolerant to cell death. Both artificial and biological neural networks experience graceful degradation (partial recall, incremental death) of higher memories as individual neurons die. We also know the molecular code of long term memory is fault tolerant to the noise, deformations, and chaos of wet biology. The feedback loops between the electrical and gene-protein network subsystems interact somehow to stabilize long term memories in a special subset of durable molecular changes, in spite of all the other biochemistry furiously going on to keep the cell alive.
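That graceful-degradation claim is easy to demonstrate in an artificial associative memory. The Python sketch below stores three patterns in a Hopfield-style network, deletes 40% of the "synapses" at random, and still recovers a memory from a corrupted cue. It is a toy model of distributed, redundant storage, not a claim about the biological mechanism.

```python
import numpy as np

# Graceful degradation in a toy Hopfield-style associative memory:
# memories are stored in a distributed, redundant weight matrix, so
# recall survives the random loss of many individual "synapses".

rng = np.random.default_rng(0)
N = 200                                      # neurons
patterns = rng.choice([-1, 1], size=(3, N))  # three stored memories

# Hebbian storage: each synapse sums correlations across all patterns.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)                       # no self-connections

def recall(cue, W, steps=10):
    """Iterate the network from a noisy cue toward a stored attractor."""
    state = cue.copy().astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Damage: zero out 40% of synapses at random (cell/synapse death).
mask = rng.random(W.shape) < 0.4
W_damaged = np.where(mask, 0.0, W)

# Cue with a corrupted version of the first memory (15% of bits flipped).
cue = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
cue[flip] *= -1

recovered = recall(cue, W_damaged)
overlap = (recovered == patterns[0]).mean()
print(f"recall overlap with original memory: {overlap:.0%}")
```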

Single-celled animal. Image: Anthony Horth

I am sure the distinguished futurist and technologist Ray Kurzweil will have a lot more to say about these topics in his next book, How to Create a Mind, which comes out next month. You can preorder a copy here. To understand how these subsystems interact in a living organism, let's start with as simple a model organism as we can find: single-celled animals, organisms that don't even have nervous systems as we know them. Wetware, Dennis Bray, 2009, is a great tour of these animals. Single-celled eukaryotes like Stentor, Paramecium, and Amoeba do complex information processing, and hold short-term memories in their chemical networks. In 2008, we learned that Amoeba remember and anticipate cold shocks, for example. These networks include the cell's genome, epigenome, cellular proteins, cytoskeleton, receptors, and cell membrane. They are true computational networks, with both neural-network-like and Boolean logic properties. Genes and proteins integrate signals from other genes and proteins, and selectively switch and transmit signals, just like neurons do. The genes in each cell, via RNA, determine which proteins are made, when and where. Most protein changes are part of the short-term computation being done in a cell, but a special few will lead to lasting changes in the epigenome and in the cytoskeleton and receptors in and on the surface of the cell. These long-term changes are the ones we care most about, as they store the cell's unique memory and identity.

Until computational neuroscience[5] can predictively model how the gene-protein networks in a Paramecium allow these animals to evaluate options, assign priorities, regulate their moment-by-moment computational attention, continually vary strategies for chasing prey and avoiding toxins, and chemically store their representations, habituations, and memories in an intracellular environment, all without a proper nervous system, the field will be missing its Rosetta Stone. Electrical waves exist in these single-celled animals, but with the exception of mitochondrial energy production, they are of the most primitive, diffusion-based kind. All the considerable intelligence in these animals is coursing, moment by moment, through their gene-protein networks.

In multicellular organisms with neurons, the cytoskeleton and receptors have specialized into the synaptome: the pre- and post-synaptic molecular modification of our synapses, including phosphorylation of switching proteins like calmodulin kinase II. While there are over 50 known neuromodulators and 14 neurotransmitters in our brain, only six neurotransmitters have been regularly implicated in long-term learning and memory in our synaptome. It is these and their partner molecules in the synapse and nucleus that are probably most important to understand and model to crack the long-term memory code.

C. elegans connectome.

Fortunately, even with our very partial molecular and functional maps today, we have still managed to work out some basics of neural network interaction in very small neural ensembles, like the stomatogastric nervous system (~30 neurons) in lobsters. We've even created early maps of very small whole-animal neural systems, like the nematode worm C. elegans, with its 302 neurons and ~6,000 synapses. We mapped the C. elegans connectome in 1986, but we still know just pieces of its synaptome and transcriptome, and even less about its epigenome. Fabio Piano et al. give us an overview of the state of C. elegans gene-protein network knowledge in 2006. Note their subtitle is "A Beginning." Jeff Kaufman has recently summarized the very early status today of whole brain emulation in nematodes. David Dalrymple in Ed Boyden's lab at MIT is working on C. elegans simulation, and he is optimistic about new tools in neural state recording, optogenetics, and viral tagging for characterizing each neuron's function. As Derya Unutmaz reports in a blog post that sounds like science fiction, Sharad Ramanathan et al. at Harvard can now take control of C. elegans locomotion by firing precisely targeted lasers at individual neurons in an optogenetically modified worm's brain, controlling its chemotactic behavior and convincing it that food is nearby.

A small international collaboration exists to emulate the C. elegans nervous system, called OpenWorm. There’s even a Whole (Human) Brain Emulation Roadmap, started in 2007 by Anders Sandberg and Nick Bostrom at Oxford, and a few other visionary folks in biology, computer science, and philosophy. These important projects are quite early and extremely underfunded at present. The biggest problem today is getting more funded people working on them.

To emulate how C. elegans, Drosophila, Aplysia, Danio, Mus, and other neural networks actually work, and to begin to extract even crude and partial memories from the scanned brains of any of these and other model organisms, we'll need a better understanding of behavioral plasticity, and the way the synapse, the nucleus, and neuromodulators bias the pattern generators in neural circuits into a particular set of behavioral patterns. This may require not only better neural circuit maps, but better maps of several still partly-hidden intracellular systems involved in long-term memory formation: gene regulatory networks, the transcriptome, and the epigenome[6]. There are gene-protein networks controlling human neural development, neural evolution, and our long-term learning and memory. A special few of these regulatory networks, their proteins, and the epigenomic changes these networks store during a lifetime of human learning may be as important as the synapse, if not more so, in determining how our brain encodes and stores useful information about the world.

A great textbook on gene regulatory networks is The Regulatory Genome: Gene Regulatory Networks in Development and Evolution, Eric Davidson, 2006. It will amaze you how much Davidson's group has learned about these networks, primarily by studying the evolutionary development of one simple organism, the sea urchin, over several decades. Last month, Isabelle Peter and others in Davidson's group at Caltech published the first highly predictive model of how these networks control all the steps in sea urchin embryo development over the first 30 hours of its life. Fifty genes are involved, and their regulatory interactions can be fully described in Boolean logic. Now they want to model all of development, and some of the networks controlling its variational processes. Consider the magnitude of their achievement: Davidson et al. have reduced an incredibly complex biochemical process down to a far simpler algorithm. This is what must happen in long-term memory, if we are to use scanned brains to abstract the key subsets of molecular structures that reliably encode it in our neurons.
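Boolean gene-network models of this kind are simple to express in code. The sketch below simulates an invented three-gene circuit in which each gene's next state is a Boolean function of its regulators' current states; it illustrates the modeling style, not the published sea urchin network.

```python
# Sketch of a Boolean gene-regulatory-network model in the spirit of
# the Davidson lab's work: each gene's next state is a Boolean function
# of its regulators' current states. This three-gene circuit is
# invented for illustration and is not the published sea urchin model.

RULES = {
    "geneA": lambda s: s["signal"],                    # turned on by an external signal
    "geneB": lambda s: s["geneA"] and not s["geneC"],  # A activates B, C represses it
    "geneC": lambda s: s["geneB"],                     # B activates C (negative feedback)
}

def step(state):
    """Synchronously update every gene from the current state."""
    new = dict(state)
    for gene, rule in RULES.items():
        new[gene] = rule(state)
    return new

state = {"signal": True, "geneA": False, "geneB": False, "geneC": False}
for t in range(6):
    print(t, {g: int(v) for g, v in state.items() if g != "signal"})
    state = step(state)
```

Running it shows the negative feedback loop producing a repeating on/off pattern in geneB and geneC, the Boolean analogue of the oscillatory sub-circuits found in real developmental networks.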

Protein Microarrays – An Exciting New Tool.

Neural proteomics and the transcriptome are entering an exciting new phase as we use DNA and RNA microarrays, and now protein microarrays to catalog neural transcriptomes and compare them to other types of human cells, and to other primate and mammal neurons. In August, Genevieve Konopka and colleagues published an exciting paper comparing human, chimpanzee, and rhesus monkey neural transcriptomes. We’re finding genes and proteins unique to particular areas in human brains, especially our frontal lobes. We’re building our first maps of the critical differences in the gene and protein regulatory networks that allowed us to wake up, make tools, and walk out of Africa less than two million years ago.

Epigenome (methylated DNA and modified histones).

We recently learned that what was long called “Junk” DNA, the 98% of each cell’s non-exonic DNA (DNA that doesn’t code directly for proteins), participates at various levels in gene regulatory networks, and through epigenomics these networks can change to some degree over the life of the cell. We’re learning now to map gene-protein interactions in these networks, including epigenomic changes, using tools like Chromatin ImmunoPrecipitation and sequencing (ChIP-seq). Unfortunately, this work is also seriously underfunded. We’ve known about the importance of the epigenome for over a decade. Epigenomic changes can be inherited (watch what you do with your body, as your kids will inherit a record of some of your bad or good life habits in their epigenome!), and thus record unique learning in each cell over its lifetime, in ways we are still uncovering.

The NIH started a Roadmap Epigenomics Project for mapping the human epigenome in 2008, but the funding is a pittance, roughly $40 million a year. There is also a global collaborative research database, ENCODE, for sharing what is presently known about all the functional elements in the human genome. We give it roughly $20M/year, barely life support. There are also various Human Proteome Projects under way, but no one seems to be funding any of these seriously, either. None of the politicians or key philanthropists who could make the Human Proteome and Epigenome into national research priorities have proposed any big initiatives, as far as I know. Even our science documentaries don't adequately convey the promise of these fields. The scientific community is tooling along as best it can, in spite of the fact that the public still hasn't grasped how much better medicine would be in ten years if we were spending a whole lot more money on this right now.

Recall by contrast the Human Genome Project, which began with fanfare in 1990 and was completed in rough draft in 2000, for $3 billion, a price gladly paid by the U.S. and four other motivated nations. The Human Genome Project was, to put it in proper perspective, our planet's Moon Shot of the 1990s, our species' latest great leap into "inner space." As those who've read my Race to Inner Space post know, I think understanding the machinery of life and intelligence, and nanotechnology in general, is a destination far, far more valuable to us than outer and human-scale (as opposed to cell- and molecule-scale) space. We need an international Human Proteome and Epigenome Project race. With good funding and leadership, we might nail our first good maps of the neural gene-protein interaction layer in a decade. With business as usual, it will likely take much longer.

As we learn the languages of gene regulatory networks, the transcriptome, and the epigenome in coming years, we should learn how to influence these networks in many powerful ways. Do you think the trillion-dollar global pharmaceutical industry is big now? Wait for the therapeutics that may start to arrive in the late 2020s, as we begin to learn how to intervene in these networks. I think it is only when we have good maps of these gene-protein networks that we can finally expect medical advances like better learning and memory formation, elimination of a vast range of diseases including cancer and Alzheimer's, immune system boosting, aging reduction (epigenomic repair), and perhaps even the uncovering of genetically latent skills like tissue regeneration and hibernation. We are not talking about gene modification (inserting new genes in the germline, or in an adult), but rather about improving dysfunctional gene network regulation, and learning how to assay and minimize important parts of the network dysregulation that goes wrong in each of us as we get older and get various diseases.

Ken Hayworth

There's a nice analogy here, pointed out by my Brain Preservation Foundation co-founder, Ken Hayworth. The Human Genome Project gave the world affordable gene sequencing in the mid-2000s, and ten years later, we are beginning to see the major fruits: the uncovering of the previously hidden worlds of gene regulation networks, the transcriptome, and the epigenome. Likewise, the Human Connectome Project and the still-unfunded Human Proteome and Epigenome Projects could get us affordable neural circuit tracing and functional gene regulatory network modeling in the late 2010s. Just as the Human Genome Project showed us we had a lot fewer genes than we thought (~21,000 rather than 100,000), the Human Epigenome Project may tell us that our gene regulatory networks are simpler than we currently think, and that of the ~5,000 proteins in a typical cell, there are just a handful that matter to our long-term self. With luck, the remaining hidden layers of the neural transcriptome and epigenome will be functionally understood in the late 2020s. In that exciting time, our ability to understand memory and learning, to read memories from the scanned brains of model organisms, and to build biologically inspired computer models will all be greatly enhanced.

So to answer our original question, we need to find out if both chemical preservation and cryopreservation will preserve the connectome, the synaptome, and any long-term memory-related changes in the epigenome in a living brain.

Our Brain Preservation Technology Prize, which focuses on the connectome and many but not all features of the synaptome, is an important start down this road. As we understand better what molecular features in the synaptome and epigenome need to be preserved to capture and later retrieve memories, we'll also need to find out if either chemical or cryopreservation, or ideally both, will reliably preserve those structures at the end of our biological lives, and whether it will be possible for future scanning algorithms to repair any damage done by the preservation process. It is too early to answer such questions today, but it is encouraging to remember that long-term memory is a very redundant, resilient, and distributed system. Extensive neural destruction can occur in brains via Alzheimer's, stroke, and other diseases before our memories are substantially erased and cognitive reserve is no longer available.

Sixty years of histology practice tells us that good perfusion of special chemical fixatives such as formaldehyde and glutaraldehyde at death will immediately preserve everything we can see by electron microscopy in neurons. A great book on how this works is John Kiernan's Histological and Histochemical Methods: Theory and Practice, 4th Ed., 2008. Kiernan has been publishing since 1964, and is a leader in the theory and practice of chemical fixation. There are even a few published fixation methods for whole mouse brains. Here's a 2005 paper by Kenneth Eichenbaum demonstrating a whole brain fixation technique that claims "complete preservation of cellular ultrastructure", "artifact-free brain fixation", and "no signs of cellular necrosis" in an entire mouse brain. Presumably these methods also protect DNA methylation and histone modification in the epigenome, the phosphorylation of dendritic proteins like CamKII, the anchoring of AMPA receptors in the synapse, and any other elements of long-term memory formation. Presumably these molecules are protected for years today just by aldehyde fixation, if kept at low temperature (4 °C). Companies like Biomatrica have even developed ways to store human and bacterial DNA and RNA at room temperature for years. Long-term storage of whole brain connectomes, synaptomes, and epigenomes at room temperature, an ideal outcome for simplicity and affordability, may work today via additional chemical fixation steps like osmium tetroxide, a process that crosslinks fats and cell membranes, and plastination, a process that draws all the water out of a preserved brain and replaces it with resin.

But all this remains to be proven. If you know of experts who have done work in this area who would be willing to help BPF write position papers on these topics, and who can envision research projects that will answer these questions more definitively, please let me know, in the comments or by email at johnsmart{at}gmail{dot}com. Thanks.


1. There is a much older layer of unique learning in each of us that is also important: the intelligent behaviors that gene networks have recorded in each of us over evolutionary time, as instinctual programs, and the unique assortment and variants of genes we each received at birth. Such networks determine our inherited neural programs, instincts, and behaviors, which are executed mostly unthinkingly and robustly, and during which other forms of learning, like short-term learning, often do not even occur. To preserve this layer we just need a DNA sample of the preserved person, and that particular uniqueness can be incorporated in any future emulation, assuming future computers are up to the task.

2. Some scientists working on brain emulation, like BPF Advisor Randal Koene, suspect that measuring and modeling the brain’s electrical processes, a topic called Computational Neurophysiology, will give us powerful new insights into artificial intelligence. There are new tools emerging for in situ functional recording of electrical features of the neuron. These may be critical to establish the “reference class” of normal electrical responses, for each type of neuron and neural architecture, the class of electrical representations of information. But if the model I’ve presented here is correct, we won’t need to record any electrical features of individual brains in order to successfully reanimate them later. We’ll see.

3. In Aplysia (sea slug), the sensory neuron neurotransmitter serotonin (5-HT) binds to postsynaptic receptors and activates adenylyl cyclase (AC) in the cell to make the second messenger cAMP, causing a short-term facilitation (STF) in the strength of the sensory-to-motor neuron connection. More of the excitatory neurotransmitter glutamate is released by the neuron to its follower motor cells, and Aplysia pulls away harder from its shock. The neuron is also sensitized: K+ channels are depressed, more Ca++ enters the presynaptic terminal, and the action potential spike broadens. Kinases and phosphatases (phosphate-adding and -removing enzymes) including cAMP-dependent PK, PKA, PKC, and CamKII control the duration and strength of these changes. In facilitation, the spike broadens temporarily, as both pre- and post-synaptic Ca++ and CamKII make molecular changes that temporarily strengthen the electrical signal across the synapse. In short-term depression (STD), the same mechanism temporarily weakens the signal. If water is gently shot at Aplysia's gills ten times in a row, it temporarily learns not to withdraw them, via synaptic depression of motor circuits. This short-term memory lasts for ten minutes, and involves a short-term reduction in the number of glutamate vesicles that are docked at presynaptic release sites in sensory neurons (undocked vesicles can't be immediately used). Repeat this training four times and the slug will turn this into an intermediate-term memory, making chemical and electrical changes in the synapse that now last for three weeks. Again, all this involves changes only to preexisting proteins and synaptic connections in neurons.
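A toy vesicle-depletion model captures the habituation arithmetic in this footnote: each stimulus temporarily undocks a fraction of the glutamate vesicles, so the withdrawal response fades over repeated trials. The depletion fraction below is an invented illustrative parameter, not a measured value.

```python
# Toy model of the Aplysia habituation described above: each gentle
# stimulus temporarily reduces the pool of docked glutamate vesicles,
# so the withdrawal response weakens over repeated trials. The
# depletion fraction is an invented illustrative parameter.

docked_fraction = 1.0      # fraction of vesicles docked and releasable
DEPLETION = 0.25           # assumed fraction undocked per stimulus

for trial in range(1, 11):                  # ten water jets at the gills
    response = docked_fraction              # withdrawal strength ~ docked pool
    docked_fraction *= (1 - DEPLETION)      # short-term synaptic depression
    print(f"trial {trial:2d}: withdrawal strength = {response:.2f}")
```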

4. In rat and human hippocampus, the primary excitatory neurotransmitter is glutamate. This causes Ca++ influx through NMDA receptors at postsynaptic membranes, and activation of CamKII, PKC, and MAPK. Permanent synaptic changes (Early LTP) include increased insertion of AMPA receptors in the membrane, and phosphorylation of proteins to change the properties of the channel. These receptors are anchored to the neural cytoskeleton, so they have reliable long-term effects. Later LTP involves recruitment of pre- and postsynaptic molecules to create new synaptic sites. A few key gene-regulatory networks are involved, with transcriptional and translational control at both the nucleus and the synapse, and control molecules including BDNF, mTOR, CREB, and CPEB. We've recently found a memory-encoding master control gene, Npas4, that encodes nuclear transcription factors (proteins that control the copying of other genes into messenger RNA), which interact with hippocampal neurons to encode episodic memory. When Npas4 is knocked out of mice, they can't learn. We've found RNA-binding proteins like Orb2 that bind to genes involved in long-term memory. A great and reasonably current text on the molecular basis of memory and learning is Mechanisms of Memory, David Sweatt, 2009. We're still figuring out the epigenomic regulation that occurs in long-term learning and memory, so you'll need to go to journals for most of that story, like this 2011 PLoS Biology paper on epigenetic regulation of learning and memory in Drosophila. The full size of the memory puzzle is becoming clearer every day. Now we just need to fund the work to complete it. We sure could use this knowledge in all kinds of good ways today, if we had it. Here's a cartoon of long-term memory formation in both Aplysia and rat hippocampus, from Learning and Memory, John Byrne (Ed.), 2008 (Vol. 4, David Sweatt, p. 14):



Regarding the future, here’s an article you guys should consider:

The Future Is Not Accelerating
Annalee Newitz

I have some bad news and some good news for you about the future. First, the bad news. The future is not coming at us any faster than it ever has. We will not become immortal cyborgs with superintelligent computer friends in the next twenty years. The good news is that means we have a lot more time to get our shit together, and possibly to save the world. Welcome to the slow future.

Sculpture by Christopher Locke

One of the big mistakes that futurists make today is suggesting that our future is accelerating because science is operating at a fever pitch. We’re churning out so many magical devices that in twenty years we’ll have transcended death, disease, and poverty. Whether they’re wild-eyed Utopians like Ray Kurzweil or pessimistic doomsayers like Bill Joy (who popularized the idea of a “gray goo” apocalypse), they’ve made the error of assuming that all aspects of our lives will change as quickly as microchips do under Moore’s Law. When you consider that our technology has advanced from the first telephones to smart phones in roughly a century, it’s easy to understand why it seems like tomorrow is arriving faster than it ever did.

Geological Time and Species Time

The problem is that very few things in our lives are like technology. Indeed, most things on the planet — including many subjects of that supposedly accelerating scientific research — are operating on a geological timescale. Evolution, climate change, and the construction of the physical universe down to its atoms are processes that we measure in millions or billions of years. To understand the future properly, it’s crucial that we listen to geologists as often as we do computer scientists. Scientists like Peter Ward and Lynn Margulis, who study billion-year changes in life on Earth, have a much better perspective on tomorrow than someone who has only studied the past century. Earth-shattering events such as climate change are almost never visible from the tiny flash of time allotted us as individual humans.

Because of this observational challenge, it is hard to speed up the process of geological discoveries, whether they relate to climate change, or to materials science that could one day give us fine control over molecules. Unlike computers, which we invented, the Earth’s processes are something we can only understand through observation. And we need time to do it. Maybe not millions of years, but certainly not just a century either.

There is another kind of slow time that we often ignore in our rush to hurtle into tomorrow at light speed. This is called species time. It is the amount of time that a species, like say Homo sapiens, is likely to exist. Most species are only around for a few million years at most — then they die out or evolve or a little bit of both. Often you hear about organisms like sharks or algae that have lasted for tens of millions or billions of years, but those numbers apply only to a general description of these creatures. Specific species of shark and algae evolve and die out over the millennia, though the same forms re-evolve over and over. In this chart (via Wikipedia) you can see what the typical lifespan of a species is. Note that mammal species like ourselves tend to last about a million years.


Most evolutionary biologists believe that H. sapiens evolved about 200 thousand years ago. So we're pretty early in our species life cycle. I know we like to think of ourselves as special creatures, and to be fair it does seem like we are the only superintelligent life that's ever existed on Earth. But it's worth keeping in mind that despite all our accomplishments, like electric blankets and cities and videogames, we are still part of a species whose lifespan is measured in tens of thousands of years.

This is particularly important when you start to think about a reasonable timeframe for the development of space travel and solar system colonization. There is strong evidence that humans first began exploring the oceans by boat about 50 thousand years ago. Reed boats are the technological advance that helped us reach the shores of Australia from Asia at that time. Now, there is mounting evidence that these same kinds of boats, lashed together with simple tools, bore our ancestors from Asia to the Americas about 15,000 years ago. But it was only about 500 years ago that ocean exploration really started to transform our civilizations. Thanks to new shipping technology, buttressed by international trade, we have begun to form a global society. Airplanes have helped too, as has instantaneous communication. But looked at from the perspective of species time, our interconnected world was 50 thousand years in the making.

What if our space probes and the Curiosity rover are the equivalent of those reed boats thousands of years ago? It’s worth pondering. We may be at the start of a long, slow journey whose climactic moment comes thousands of years from now.

In Your Lifetime

Let's return to the one timeframe that we can all grasp easily: the length of a human lifespan, which under ideal circumstances is around 75-85 years. This is also the lifespan of our computer technology, whose development appears so rapid to us in part because we actually witnessed it in real time, unlike the development of our climate, or of our species' ability to travel the planet in miraculous vessels of our own making.

I think it's obvious why we want to measure the pace of the future using technology, and make computer scientists our guides. Technological change is both familiar and easy to observe. We want to believe that other scientific and cultural changes can happen in a similarly observable way because generally we think in human time, not species or geological time. Put another way: We all live in a hyper-accelerated timeframe. Slow time is essentially inhuman time. It is what exists before and after each of our individual lives.

That said, it’s undeniable that technological change and fast human time can profoundly affect events unfolding in slow time. For example, we must act now, in our lifetimes, to prevent climate change from destroying our food security, our livelihoods, and the millions of species who share the planet with us. We must act now to keep our space programs alive. And of course we must keep innovating new computers to help us analyze everything from genomes to carbon atoms more quickly and efficiently.

Still, we can’t expect all the efforts we make in our short lifetimes to pay off in our lifetimes, too. You will not live to be 200 years old. I repeat: You will not live to be 200 years old. Life extension like that is not going to happen in our lifetimes because quite simply it takes time to analyze our genomes, then it takes more time to test them, then it takes more time to develop therapies to keep us young, and then there is a lot of government red tape and cultural backlash to deal with too. Maybe our grandchildren will have a chance to take a life-extension pill. But not us. And that has to be OK. Making scientific promises we can’t keep will do a lot of harm. Ultimately it undermines the public’s trust in both science and people who prognosticate about it.

Many Timelines, All At Once

We need to think about the future as a set of overlapping timelines. Some events take place in human time. Others exist in the slow time of Homo sapiens or the planet's carbon cycle — or even the Milky Way's collision course with Andromeda. Problems arise when we believe that all time is human time. We lose sight of long-term goals like species survival on a constantly changing planet. We fail to prioritize projects like food security and instead focus on curing aging. Both are very worthy goals. But one needs to happen now, in human time. The other will take generations.

In a sense, we are trapped in accelerated time. We cannot feel or observe the slow future because we will not live to see it. But it exists, in a way that is more vital and important than any one of us. The slow future is our best hope if we want to steer humanity toward a tomorrow where our species survives.


If your "cybertwin" is set up by Facebook, it will be available to marketers to help them figure out how best to exploit you, and to the FBI in order to entrap you. If you don't want this to be a lever against you, you had better ensure it is made entirely of free software, running on a computer that belongs to you.
