(Transcript of the speech presented at Lincoln Center, New York, at the conference Global Future 2045: Towards a New Strategy for Human Evolution.)
I am going to discuss whole brain emulation: what it takes to reverse engineer a mind. This is a term you’ve heard mentioned a few times during the conference, and several of the speakers you saw today - and more that are coming up - have talked, or will talk, about technologies that address a specific part of it. But I want to show: How does all this come together? How could you reverse engineer a mind? And I want to show: How do you actually determine the goals for something like that?
So where does that come from? How do you specify this? I should mention that this talk is really just a part - an excerpt - of a book that’s coming out: it’s called the Whole Brain Emulation Coffee Table Book. And yes, it’s a coffee table book; it’s going to have pictures and be easy to read, and every one of you who’s currently attending the Congress is going to have access to a preview of it, so that you can give comments on it, and I hope that you do.
So, I think when you’re talking about the goals of something like Whole Brain Emulation (WBE), the real challenge is first to determine: what is it that we’re trying to replicate when we reverse engineer a mind? What is this sense of being? What are we? Well, you’re all sitting here right now, listening to me, feeling the chair underneath you - but what is that? What are those sensations, these experiences? Really, that’s processed stuff; that’s something that’s already being generated inside your mind, and without that processing there’s really nothing there. In fact, you can’t really experience the universe directly. It’s all being processed through something. If I’m touching the surface of this lectern here, it’s not that I’m really aware of the touching of it. The atoms in my fingers and the atoms in the lectern never really collide. It’s the forces between them that initiate an electrical signal inside my nerve that travels up my arm, and I’m not even aware of that until it gets processed in my mind.
So, it’s really this processing that’s the important part here. We evolved to experience and to live in a certain niche, in a certain epoch of evolution; in fact, we are really well-suited to the problems of Paleolithic Earth, maybe a couple of million years ago. The problem is that things don’t always stay the same. So even though we are suited to those challenges, we don’t know if we are going to be suited to all the future challenges. The Earth isn’t always going to be a great shelter for humanity. It may come through our own hand, if we change something in the environment drastically, or from a big hammer above that can come down and change things on you. Or perhaps, eventually, as Marvin Minsky pointed out, the sun will go through its cycles and reach the point where it makes the Earth unlivable.
Now, it’s not just a matter of survival. Survival is important, but there’s also the matter of where we want to go. Where do we want to develop towards? It’s a matter of what we can experience, of the fullness of where we can go with our ability to take on challenges. If you think, for example, about the space program - if you think about Neil Armstrong - was he really out there experiencing what space is like? To some degree he was, but he could never truly experience everything, because he had to take a piece of the biosphere of Earth with him. He had to inhale his own scent inside that spacesuit, and that was part of his experience of being there. There’s that aspect as well that we need to consider.
Now, do we have an ability to adapt ourselves to different challenges? Yes, we do. It’s in fact a very human thing to do. We augment ourselves all the time. The clothes we wear, the cell phones we carry, the cars we drive: these are all augmentations that we use to fit into different places and do different things. But there are some things that we’re maybe just barely managing to augment, and these are our mental limitations. What we mostly do is distribute the burden. We specialize. We allow ourselves to depend on others to do things that they do very well. We do it as a society, and now we’re also offloading a lot of what we do - a lot of the computation, the storage, things like that - to computational devices. We’ve got databases that store information, the Internet provides us with info, and we can even augment the way we find romantic relationships by using OkCupid or something of that nature.
So we’ve got all these extra things helping us out in the technological realm, but we know that we have limited working memory. We have limited long-term memory that fades after a while and isn’t perfect. We’ve got limited sensory experience - there are certain things, like x-rays, that we can’t even see - so, really, there are some limitations we simply can’t get around when it comes to mental experience. Now, the mind being a central part of human experience - this is something that science has been talking about for a long time; it’s very clear that it is central to that experience - and we also consider it a biological machine. Even so, there are some questions being raised. For example, Miguel Nicolelis recently made a comment pointing out that the brain is built up of a vast number of analog nonlinear components working in parallel, and that this isn’t really like a computer. And that’s probably true: it’s not quite like what we do in a regular sequential computer. But then again, we know that there are tons of brains out there of various sizes, so it’s certainly not something that is impossible to engineer. And we also know that if we look at our electronics and see what’s inside them, those transistors are themselves being used in parallel, and they are also nonlinear analog components. It’s not like this is something unique to biology.
So, unless we have evidence to the contrary, it seems that the processing going on in the mind could certainly be done in another substrate, as long as that substrate is carrying out the same functions. And if you can do that, then that gives you greater adaptability to different environments and to different challenges. We call this goal, this idea, getting to Substrate-Independent Minds (SIM). Now, the goal itself is not a technique; it’s not a method of doing it. But you can imagine what you could gain from this. You could gain the ability to remember things with the precision of a database, or to experience something like extreme acceleration that you wouldn’t survive in a regular body, or to organize your thoughts and your memories in ways that you normally can’t, because right now we rely on cued retrieval rather than, say, being able to look up what happened last Tuesday.
Now, if we think of a software analogy, what we’re trying to do is build different ways of implementing the same function so that it works on another platform. It’s kind of like writing platform-independent code. You want something that, even though it requires something to run on, could run on many different kinds of machines: a PC, a Mac, whatever we want to have there.
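To make the platform-independence analogy concrete, here is a minimal Python sketch. The class names and the stand-in function are invented for illustration: what matters is only that two substrates with completely different internals implement the same observable input-output mapping.

```python
from abc import ABC, abstractmethod

class Mind(ABC):
    """The thing to preserve: the input-output function, not the substrate."""
    @abstractmethod
    def process(self, stimulus: float) -> float: ...

class BiologicalSubstrate(Mind):
    def process(self, stimulus: float) -> float:
        # stand-in for the original implementation
        return 2.0 * stimulus + 1.0

class SiliconSubstrate(Mind):
    def process(self, stimulus: float) -> float:
        # different internals, same observable function
        return stimulus + stimulus + 1.0

# substrate-independence: both implementations agree on every input
assert BiologicalSubstrate().process(3.0) == SiliconSubstrate().process(3.0)
```

Code that only depends on the `Mind` interface cannot tell the two apart, which is the sense in which the function has become independent of its substrate.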
In the last hundred years, neuroscience has mostly focused on the basic components of the brain: things like neurons, synapses, and neurotransmitters. How do you measure signals in there? How do you determine what’s in there and what it’s doing? So, looking at that fundamental level, we consider what would happen if we could recreate neural tissue at that level: can we reproduce functions of the mind from there up? That’s something we may be able to do even if we don’t yet understand all the strategies of the brain from the top down. It’s kind of like taking a software program and copying all the instructions rather than trying to come up with a new program.
So what is it that we’re trying to capture here? We’re going to see this person pop up a few times, because I’m taking you through this process: what would it take to do a Whole Brain Emulation for that person? How would all these technologies fit in? Well, you are experiencing a number of things. You’ve got phenomena in your mind that are sometimes conscious, sometimes subconscious; you’ve got subconscious processing going on, like vision, audition, motor processes, stuff like that. We want to replicate all of that experience - the subconscious and the conscious.
So what I want to make, first of all, is a distinction between simulating and emulating, just so that we can talk about these things and what they really mean, because there’s a lot of work going on right now that looks at how systems work generically. What would a piece of neocortex be doing, for example? What sort of oscillations can you see in there, what connectivity? There are projects that take statistical data from a lot of animals - looking at how the cells connect to one another, what sort of neurotransmitters are there - and use that stochastic data to put together a model with which you can then simulate dynamic activity.
Now, if you’re looking at a specific piece of tissue and you want to make something like a neuroprosthetic - the sort of thing that you saw Ted Berger talk about - then you want to get the data from that specific piece of tissue, because you want to know how what is going on in there - the memories, the stuff that it knows - is implemented. You don’t just want a generic idea. So this specific way of putting the neural circuitry together is what we call an emulation.
Now, if you’re going to try to re-implement something by doing an emulation like that, what you’re actually doing is what we call System Identification. You’re trying to identify, for all the different things that can happen to the system (this black box you’ve got input coming into), what it does to create the output. How can we predict what’s going on there? Predicting things is something that, as scientists, we try to do for all sorts of things in nature. Nature is composed of a lot of different parts, and those parts aren’t completely independent: you can often say something about this part if you know something about that part. That’s what we try to do. Now, when you’re trying to do this kind of System Identification on something you don’t know, you want to start simple. You don’t start with the most complex model. You’re not trying to describe how gases expand by looking at the movement of all the individual molecules. You want to make a model that can perhaps fit what you’re trying to look at, study the effects you are interested in, focus on those, and then find out what the scope is, what the details are that you need to get.
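The "start simple" step of system identification can be sketched in a few lines of Python. The black box below is a hypothetical stand-in (its coefficients are invented for illustration); the modeller sees only input-output pairs and fits the simplest candidate model, a line, by ordinary least squares.

```python
# Hypothetical black box: the modeller only observes inputs and outputs.
def black_box(x):
    return 0.8 * x + 0.5  # unknown internals

inputs = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [black_box(x) for x in inputs]

# Fit the simplest model, y = w*x + b, by ordinary least squares.
n = len(inputs)
mx = sum(inputs) / n
my = sum(outputs) / n
w = sum((x - mx) * (y - my) for x, y in zip(inputs, outputs)) \
    / sum((x - mx) ** 2 for x in inputs)
b = my - w * mx
# w recovers 0.8 and b recovers 0.5: the simple model already
# predicts the box's behavior, so no finer detail is needed here.
```

Only if the residual errors stayed large would you iterate to a richer model - which is exactly the drill-down logic described above.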
In neuroscience, we look at these effects - we often call them behaviors. These are things like sensory perception, learning and memory, emotional responses, consciousness, and self-awareness. Those are the kinds of effects that are interesting to us. And when we are trying to make a prediction about them, we ask: if we know the state of the system at this time and we look a small time later, what could we say about the system then, and how well could we predict what’s going on?
Of course, with all of these pieces, that’s a lot of data. If we were looking at a computer, for example, we’d know that the signals we’re interested in are the zeros and ones. In that case, what we want to replicate is how the zeros and ones on the input are being turned into the zeros and ones on the output. So we’re not going to look at things like how cosmic radiation affects the chip, how it heats up its environment, or what other kinds of noise are going on in there. We’re really just interested in how this affects the processing. When we’re talking about brains, it’s a similar problem. We don’t necessarily want to describe everything at the molecular level, unless it’s necessary to get at the effects we’re interested in. And we certainly don’t want to start replicating things that would make it just as difficult to interface with the emulation as it is to interface with us right now.
Now, neural tissue responds to a whole bunch of different things. It can respond to electric signals, to temperature, to pressure, to electromagnetic fields. There are a lot of things you can do here. A lot of the signals tend to drown in noise when you look in more detail, so you have to really study that. But as a first approximation, there may be some initial assumptions we can make about the types of signals that neurons are really suited to dealing with. We can start there, as a first system, and then iterate down. Now, one thing we know is that neurons tend to produce these things we call spikes, or action potentials, and these carry a lot of information. For example, in our sensory system, in the cochlea, the sounds that you hear are turned into trains of these spikes - times at which spiking occurs - and delivered to your neurons. In the same way, when you look at vision, the retina is turning things into spike trains. When you look at motor control - say, how I’m speaking to you right now - how are the vocal cords being driven? Through spikes and rates of spike trains and things like that. And when we look deeper into cortex, at how we store all the changes that make us characteristically who we are - the synaptic changes - they also depend on differences in spike timing. So, if a neuron here fires and then another one fires, we strengthen the synapse between them. Those are the important things to look at there. So, at least we could say that perhaps, as a first approximation, a model in which we can predict the timing of spikes in all the neurons of the system we’re looking at - be it a fruit fly, be it a human - would give us a good first-approximation emulation of a system like that.
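The spike-timing-dependent strengthening just described ("if a neuron here fires and then another one fires, we strengthen the synapse") is often modeled with a pair-based STDP rule. Here is a minimal Python sketch; the amplitudes and time constant are illustrative placeholders, not measured values.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre.
    Pre fires before post (dt > 0): potentiation, decaying with the gap.
    Post fires before pre (dt < 0): depression.
    Constants are illustrative only."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# Apply the rule to every pre/post spike pair of one synapse.
w = 0.5  # initial synaptic weight
pre_spikes = [10.0, 50.0]   # ms
post_spikes = [15.0, 45.0]  # ms
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
```

The point for emulation is that the stored quantity - the weight - is a function of relative spike times, which is why predicting spike timing is a plausible first success criterion.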
Of course, you’re not going to start with the most complex description of even that system. So, for example, later on we may want to look at things like: are glial cells important individually? Do we need to describe what all of those are doing? Those are little cells around the neurons that have a support function. Right now we describe them in a very global way - we say they have neuromodulatory effects that are more global - but perhaps we would need to look at them in more detail. We don’t know all this yet, and it has to be studied as you get closer iteratively. And if you want more of the details about this, that’s also going to be in the book. The most important point I want to make right now is that you need to have some success criteria. You need to know: what are we aiming at here? When do we call it a successful emulation? How far do you have to go with this?
And there are some subjective things going on there. Because what would you still consider your “self”? How do you say that your awareness, your experience, is fully experienced and is you? Well, I’m not the same person I was when I was five years old. I’m definitely not the same person. And if I had jumped from then to now, it probably wouldn’t seem like that was still me. But with all the pieces in between, all the gradual changes, it seems acceptable somehow. So you could think that there is some degree of change that is acceptable, that is a close enough approximation of whatever you call the system and its experiences.
Now, you’ve already heard Ted Berger talk about the hippocampal neuroprosthesis. You heard him describe looking at some black box where you know it has a function inside the brain, and it receives inputs and produces outputs. When trying to describe the functions being carried out there, you need to observe a lot of data and find out how to put it together. The good thing, as we’ve already seen from his example, is the proof of concept: at least within constrained tasks - within a constrained set of what we want to emulate or make a prosthetic for - it is indeed possible to build that.
The problem here is that, of course, you don’t want to miss any of the latent function inside the system. If you get to something more complicated than the experimental task that Ted Berger’s chip was built for, you’ve got systems so big that there are a lot of parameters in there. And if you’re just observing the input and output of a large system - say, if you were to observe the entire brain - you could watch its entire lifespan and you probably still wouldn’t be able to represent everything it could possibly do, like every way it would respond to a tune that it heard, because there are things in there that were laid down genetically. There are things that were laid down in processes of development that you couldn’t observe, and there are so many other aspects that you just can’t figure out. Plus, it’s a huge computational problem to set up the parameters for a model based on something that big. What you do when you have a big problem like that is break it down into smaller problems.
So, that’s the first step, really: if you want to do Whole Brain Emulation, you need to break it down into small problems. That’s why there’s this picture up here - basically a description of a neuron in terms of tiny compartments. You can say something about each compartment, making it a much simpler system that you can describe with just a few parameters and say more about.
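The compartment idea can be sketched directly: treat a neuron's cable as a chain of small compartments, each governed by a few parameters, coupled to its neighbors. This is a toy passive model with illustrative constants, not a calibrated simulation.

```python
# A passive multi-compartment cable: each compartment is a simple leaky
# unit coupled to its neighbors. All parameters are illustrative.
N = 5                  # number of compartments
dt = 0.01              # time step (ms)
g_leak = 0.1           # leak conductance per compartment
g_axial = 0.5          # coupling conductance between neighbors
v = [0.0] * N          # membrane potential relative to rest
i_inject = [1.0] + [0.0] * (N - 1)  # current injected at one end

for _ in range(2000):  # forward-Euler integration
    v_new = v[:]
    for k in range(N):
        axial = 0.0
        if k > 0:
            axial += g_axial * (v[k - 1] - v[k])      # current from left
        if k < N - 1:
            axial += g_axial * (v[k + 1] - v[k])      # current from right
    	# each compartment only needs local parameters and its neighbors
        v_new[k] = v[k] + dt * (-g_leak * v[k] + axial + i_inject[k])
    v = v_new
# the voltage decays monotonically along the cable away from the
# injection site: v[0] > v[1] > ... > v[N-1] > 0
```

Each compartment's update depends only on its own parameters and its neighbors, which is exactly what makes the decomposed problem tractable to identify piece by piece.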
So, what we have now, by looking at it in that way, is actually a roadmap for describing what’s going on in a piece of neural tissue or in a brain. What kinds of tools would you need? There are a number of steps here. It’s kind of a roadmap, and I like to look at it in terms of four different pillars or areas.
The first one, Validating Scope and Resolution, is really about iterative hypothesis testing. It means we’ve got this initial model, and then we want to test how well it fits all the behaviors and things we’re looking at. What do we need to do to drill down to where we’re really trying to go? This testing has to happen over and over again. We’re going to see some examples of that as we go through the various other pillars here.
Then there’s the structural part: since we’re breaking the system down into pieces, we need to know how these pieces communicate with one another.
The third one is functional characterization: we need to understand, inside each of the simpler systems, how they work. We still need to characterize the transfer functions of what’s there - make prosthetics the way Ted Berger did for that particular part of the brain.
And then we need someplace where you can put all that data together and represent what it means - a platform on which you could emulate a brain. So it’s hardware, software, those sorts of things.
Again, going back to a particular brain, a particular person - your self, your mind, your experiences - how are you going to identify all those little pieces? How would you get that structure out of there? This is what we call connectomics. Connectomics is the study, in neuroscience, of how all the pieces fit together: what are all the connections in the brain? This matters because we know that these spikes like to travel along axons and dendrites, from neuron to neuron, so for neurons that’s the main way of communicating. You could use something like MRI to look at the brain, break it up into little 3-D voxels, and say something about how those voxels communicate with one another. That’s a bit rough. You could also put a piece of tissue on an array of microelectrodes and say something about all those neurons and how they’re talking to each other - which one fires first. But that’s at small scale. So what can we do to move beyond that?
An interesting approach is one taken by Tony Zador, who works at Cold Spring Harbor Laboratory: take synthetic pieces of DNA or RNA and use them as barcodes. You put one of these barcodes inside every single neuron, so each has its host barcode, and then there is a trans-synaptic virus - say, a modified rabies virus - that can carry it right across one synaptic boundary. It’s like sending a piece of your DNA to your nearest neighbor, as if all of your neighbors were sending you postcards with their addresses on them. Then you can look at all of the snippets you find inside a cell - the host DNA and the DNA from the neighbors - and this gives you an idea of what all the connections are.
So, it’s a way of looking at the connectome at a certain level. You don’t know how many synapses there are, what types of synapses they are, or where on the dendrites they connect, so again there’s a limit to the resolution in this case.
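The "postcards with their addresses" logic can be sketched in a few lines. The neuron names, barcodes, and sequencing results below are all made up for illustration; the point is only how directed connections fall out of the barcodes recovered in each cell body.

```python
# Each neuron has a unique "host" barcode; a trans-synaptic label carries
# a presynaptic neuron's barcode into its postsynaptic partners.
# All names and barcodes here are invented for illustration.
host = {"n1": "AAGT", "n2": "CCTA", "n3": "GGAC"}

sequenced = {  # barcodes recovered by sequencing each cell body
    "n1": {"AAGT"},
    "n2": {"CCTA", "AAGT"},          # received n1's barcode
    "n3": {"GGAC", "AAGT", "CCTA"},  # received barcodes from n1 and n2
}

barcode_owner = {bc: cell for cell, bc in host.items()}

# A foreign barcode found in a cell implies a synapse from its owner.
edges = set()
for cell, codes in sequenced.items():
    for bc in codes - {host[cell]}:
        edges.add((barcode_owner[bc], cell))  # (presynaptic, postsynaptic)

# edges == {("n1", "n2"), ("n1", "n3"), ("n2", "n3")}
```

Note what the reconstruction gives and what it withholds: you get the wiring diagram (who connects to whom), but, as said above, not synapse counts, types, or dendritic locations.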
If you go beyond that, we can start looking at something called volume microscopy, which you can do in many different ways: optically, or with electron microscopy. Optically, a very new process that came along recently is from the Deisseroth lab at Stanford. They developed the CLARITY protocol, in which you replace the lipids in the brain with a hydrogel polymer that keeps the morphology of the brain intact and keeps the proteins in place. You can still examine them, find out what sorts of neurons might be in certain places, things like that, and shine light through the tissue to do optical microscopy. It’s powerful, but it won’t get you down to the tiny 40 nm pieces of axon where the synapses are. So you still can’t see the high-resolution details with that, but it’s a good tool.
So, you have to look at many things. The next one I wanted to talk about is electron microscopy - although this is actually a video of optical microscopy, from the Knife-Edge Scanning Microscope, because I didn’t have a good video ready for the electron microscope version. But I’m sure you’ll see some in the next talk, because Ken Hayworth will talk about it.
This is where you get down to 5-10 nm resolution, and you can see individual synapses and vesicles, so you can determine something about the strength and kinds of connections there. Great results have come out of the labs of Winfried Denk and Jeff Lichtman, and now also from Ken Hayworth, who is down at Janelia Farm. Recently, in 2011, there were some real breakthrough publications - by Briggman et al. and Bock et al. - showing that, just from reconstructions of this geometry, this morphology, you could make predictions about, for example, which ganglion cells in the retina were sensitive to what type of information: how they’re sensitive to, say, horizontal or vertical bars, things like that. They could make predictions about that, and they tested it by looking in the same tissue at activity recordings they had done. And they found that they could indeed predict something about it. So it was a good proof of concept.
But of course there are some issues with this. There are problems with the reliability of your measurement: if you can only do a measurement once, and you can’t go back and validate whether the activity corresponds to what you’re assuming in your model, that makes it a bit difficult. There is also a problem of volume, because right now the results we’ve seen out of those labs are from really tiny volumes of brain tissue, and if you wanted to go up to something much larger you would need automation and speed-ups, which I think Ken Hayworth will also address in the next talk. And you can’t necessarily constrain all of your parameters just based on what you can see, because not everything is obvious from the morphology. You can’t judge a book by its cover.
So, the connectome helps us break the problem down into all these pieces, and we can look at how those pieces connect. But what can we say about the functions of your mind going on in each one of these little bits? How can we characterize those models better? We need to look at the activity going on in there. So the next real challenge is large-scale, high-resolution activity mapping in the brain - the Brain Activity Map. This is the main topic of the BRAIN Initiative that recently came out. There are some straightforward approaches that you heard Ed Boyden talk about: for example, making arrays with many more microelectrodes so we can get a lot more data from the brain, maybe analyzing an entire volume of tissue in three dimensions by combining that with optoelectric methods, and getting a lot of data out using protocols we’re familiar with and know how to work with. But you can see that if you wanted to do this at the highest possible resolution - if you wanted to get data from every neuron in a brain - you’d turn it into a pin-cushion, and that could not only be a bit harmful, but might also affect what you’re trying to measure.
So there are ways to move in different directions there. Another way to do activity measurements is, for example, fluorescence microscopy, where you can look at the activity in cells with a microscope by using fluorophores that light up and show you something about activity in a cell. One place this is used is in animals that are transparent, or in small animals. We’ve got here the C. elegans nematode; it’s only got 302 neurons. It was mentioned before in the Congress, and there are at least two projects going on right now that are trying to get at all that detail, get the connectome out, and emulate that system. The company here, Nemaload, run by David Dalrymple, is actually gathering all of that activity data about these cells in the nematode so that they can get around to doing that kind of emulation.
Now, there are some other interesting approaches, such as the Molecular Ticker-Tape: the notion that - again, kind of like in Zador’s approach - you can take little snippets of DNA or RNA and use them inside a cell, circularly amplifying them so that you get a repeated version of the same synthetic DNA whose code you know; and you’ve got a channel in there - for example, a voltage-gated channel - that interferes in some way with the circular amplification, so that you have a chance of error that depends very much on the activity of the cell. With that, you can have a cellular-level or subcellular-level device - a little synthetic biology device, really - that you can use to record from many cells at the same time. This is the work that George Church is involved in, along with many others in the same group that was involved with the BRAIN Initiative. And the idea is that, yes, that way we can read out from many cells at the same time, and at high resolution.
But there are still some issues with that as well. With the Molecular Ticker-Tape there are issues of reliability: how many of these tapes do you need per cell to make it work reliably? How do you align the tapes in different cells and know that they all started recording at the same time? Where can you line up the time course in those things? How do you get the data out - all the sequencing that needs to happen? And how do you combine that technology with technology for getting the structure out as well? If you want to do electron micrographs too, how could you do that when you’ve already had to take the neural circuitry apart by extracting the DNA in order to sequence it? Well, there are some approaches you might be able to use, but you can see that there are hurdles to overcome. I think we’re going to see many of those addressed in the next few years.
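The core logic of the ticker-tape idea - activity modulates the copying error rate, so the error density along the recorded strand encodes activity over time - can be sketched as a toy simulation. All numbers (error rates, gain, tape counts) are invented for illustration, and this deliberately ignores the alignment and reliability problems just listed.

```python
# Toy Molecular Ticker-Tape: a known template is copied repeatedly, and
# the per-base misincorporation probability rises with neural activity.
# Reading the strands back, local error density encodes activity per
# time bin. All parameters are illustrative.
import random
random.seed(0)

base_err, gain = 0.01, 0.2
activity = [0.0, 0.5, 1.0, 0.2]   # hidden activity per time bin
bases_per_bin, n_tapes = 200, 50  # many tapes average out noise

errors = [0] * len(activity)
for _ in range(n_tapes):
    for t, a in enumerate(activity):
        p = base_err + gain * a   # activity-dependent error probability
        errors[t] += sum(random.random() < p for _ in range(bases_per_bin))

# Invert the error model to estimate activity in each time bin.
est = [(e / (n_tapes * bases_per_bin) - base_err) / gain for e in errors]
# est tracks [0.0, 0.5, 1.0, 0.2] up to sampling noise
```

Even this toy shows why the per-cell tape count matters: with fewer tapes, the sampling noise on the error counts would swamp small activity differences.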
But besides that, there’s another interesting approach: we can go to a technology more familiar to us, like integrated circuit technology - electronics. We heard people from UC Berkeley talk about this on the first day; their “Neural Dust” approach is very similar. This is just another way of representing that; in this case, for example, the idea was to power it with infrared light and communicate with infrared instead of ultrasound, because there are several labs working on this technology in different ways at UC Berkeley, MIT, and Harvard. The notion is that if you get down to a size like 8 microns, where these chips are small enough to fit inside the body without breaking barriers - they can even fit inside the vasculature - then you can take them to basically every part of the brain, because every neuron requires nutrition from the vasculature. So you can get into that area and then communicate with them through procedures similar to radio-frequency identification, except perhaps in the infrared domain or through ultrasound, so that you can set up a network and talk to them. The really interesting part is what we can do nowadays at that scale. The number of transistors that you could get on a block the size of a red blood cell would be about the same as the number of transistors used in the original navigation systems for cruise missiles. So it’s really not that little.
Of course, even getting down to that size isn’t really going to get you the whole picture. If you want to gather data from the entire brain, you need a hierarchical approach, where you have some systems communicating data out and gathering data, and some recording near neurons. So you have smaller and bigger pieces - some of that was also illustrated yesterday in the talk about the Neural Dust approach. What it turns into is basically a cloud of specialized probes working together as a team and recording things. But we have a huge advantage in this case: we’re working within a mature field, in the sense that integrated circuits are something we’re very familiar with. We know how to build networks between them. We have lots of engineers and designers who can work with them. We have a product path - Intel, for example, knows exactly what it’s doing in the next few years - so we know how the technology is going to be scaled down. It can take advantage of Moore’s law. Right now, some of the devices being looked at in the labs have a diameter perhaps 10 times larger than we’d like, but we can see where that’s going in the next few years and how we can scale everything down.
We still have challenges, of course. You need to power all these things, you need to get signals in and out, and to explore all the possible ways of doing this we still need to look at all the ways in which neurons can communicate - they are sensitive to many different types of signals, so there’s a lot of room to explore. And in addition to exploring these different ways of getting information in and out, powering things, and getting down to that scale, there’s also the possibility of combining a lot of the technologies that we’re looking at. So, beyond betting on many horses instead of just one (by looking at all these different technologies I just discussed), there are ways of combining that work to try to get the best data. Say, for example, you have something that does use wires near the surface of the cortex, because that’s a good way of getting a high bandwidth of data out, but you have these little Neural Dust devices deeper down. You can imagine putting things together in a good way when people are talking to each other and targeting the same goal, which is this notion of how you emulate or reconstruct an entire piece of tissue, an entire brain. And again, there is more about this in the book.
But even before we get total access, before we have this ability to emulate entire brains or put neural interfaces everywhere, you’re going to get some really interesting benefits out of working in this area. So, for example, if we put sensors inside ourselves we might be able to record the sounds and sights that we are experiencing at the moment and put them somewhere else, so that we can re-experience them in exactly the same way without worrying about degradation of those memories. In the same way, if you put recording devices inside the hippocampus - the hippocampus is responsible for episodic memory, for the ability to learn what’s going on in an event - you can imagine recording how these cells turn on as they learn what is happening at this moment, and storing that away. If you re-stimulate those cells in the same order, you experience the same event again. So this means you could put the information about that event somewhere. You could label it, tag it: “this happened on that day,” “this was when I was at the Congress.” You could recall that experience explicitly when you want to, without having to go through the cued process that we use right now. It’s a whole different way of working with our own memory. You can imagine eventually being able to take information from something like Wikipedia or the Internet, just Googling things, and it would appear in your memory, feeling and seeming like something that you had just pulled out of your own memory.
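The record-tag-replay idea can be sketched as a toy program. Everything here is hypothetical illustration - the data structures, the tag names, and the stimulation callback are placeholders I made up, not anything from an actual neural interface:

```python
# Toy model of the record-tag-replay idea: store the order in which
# hippocampal cells fired during an event under a human-readable tag,
# then "replay" them in the same order. Purely illustrative; the names
# and the stimulation callback are hypothetical placeholders.

memory_store = {}

def record_event(tag, spike_sequence):
    """Save the cell firing order observed during an event under a tag."""
    memory_store[tag] = list(spike_sequence)

def replay_event(tag, stimulate_cell):
    """Re-stimulate the same cells in the same order to cue recall."""
    for cell_id in memory_store[tag]:
        stimulate_cell(cell_id)

# Record a (made-up) firing sequence for one event, then replay it.
record_event("at the Congress", [12, 7, 7, 3, 12])
replayed = []
replay_event("at the Congress", replayed.append)
print(replayed)  # [12, 7, 7, 3, 12] - same cells, same order
```

The point of the sketch is only the shape of the operation: explicit lookup by tag replaces the cued, associative recall we use today.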
So, there are ways that you can go with this high-bandwidth neural interfacing as we go along, but even before we get to high bandwidth, simple low-bandwidth interfaces with just a few probes might be able to do really useful things. For example, they could tell us something about, well, are you stressed? What are your levels of anxiety, and can we do something about that? They could help you, through a feedback loop, to achieve wakefulness when you want to, or to go to sleep when you want to. They could help with the detection of epileptic seizures, predicting when a seizure might be about to start and maybe even preventing it. And then, of course, there’s the matter of paralysis. If you’re paralyzed, and you could just detect the will, the desire, to move in a certain way and then power functional electrical stimulation (FES) electrodes in your muscles, you could regain the ability to move those muscles at will. So, even with just simple interfaces you can already start doing some pretty cool things.
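The detect-intent-then-stimulate loop for FES can be sketched very simply. This is a minimal sketch under strong assumptions: real systems decode movement intent from multichannel recordings with trained models, not a single-feature threshold, and `decode_intent`, the threshold value, and the `stimulate` callback are all hypothetical placeholders:

```python
# Minimal sketch of an FES control loop: read a neural feature,
# detect the "will to move," and fire muscle stimulation when it
# is detected. All names and numbers are hypothetical.

def decode_intent(sample: float, threshold: float = 0.8) -> bool:
    """Pretend decoder: report intent when the feature crosses a threshold."""
    return sample > threshold

def run_loop(samples, stimulate):
    """For each incoming sample, trigger FES when intent is detected."""
    for s in samples:
        if decode_intent(s):
            stimulate()

# Made-up feature stream: two samples cross the 0.8 threshold.
pulses = []
run_loop([0.1, 0.5, 0.9, 0.95, 0.3], stimulate=lambda: pulses.append("pulse"))
print(len(pulses))  # 2
```

Even this caricature shows why low bandwidth can be enough: the loop only needs a binary decision per time step, not a full readout of brain state.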
Now, there is another step here. The final step, really, is to take all these measurements - and then what are we going to do with them? Where do we go once we have all the tools to acquire all that data? How do you turn it into a model? You need to turn it into something that makes sense, where you know: okay, this is how we fill in all those parameters. You need to be able to combine results from different tools, different ways of measuring, and know how to put those together. And eventually, of course, you need a suitable platform to run it on, something that can carry out the functions that you are looking at. There’s some theoretical work that’s gone into this, and also some practical work on neuromorphic technology, that I won’t go into in detail here - but you can again look at the book when it comes out, and at the preview that you’re all going to have access to.
Really, what we’re trying to do in this whole process - aside from figuring out how deep we need to go, and what we need to do to create this experience of “being” in an emulation - is to find the sweet spot between developing ever-more-complicated technology that gets at data at higher resolution, and filling in the system by building out model parameters, by computing, by optimizing parameters, which works in the opposite direction. If you have more measurements, the systems you have to infer become smaller and easier to predict. If you have fewer measurements, you have bigger systems, so you have a larger computational problem, and you also need to observe the system over a longer period of time.
Now, what would it really take to emulate your brain, for example? Assume for a moment a very simple approach: just use a supercomputer, no special hardware, and break each neuron apart into lots of compartments. Let’s say there were 10,000 compartments for every neuron, that emulating each compartment with the Hodgkin-Huxley equations would take about 1,200 floating-point operations per second, and that you have 100 billion neurons. What it means, in the end, is that you need to be able to store about 10 PB of data and process about 1 exaflop, or about 10^18 floating-point operations per second. This seems huge, but look, for example, at the amount of data currently being processed by Google: it’s about 24 PB a day. And when you look at what’s being targeted in exascale computing, even in the supercomputer domain we should expect to have that by 2018. But of course, that’s not a very efficient way of doing things.
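The back-of-envelope estimate above can be checked in a few lines. The neuron, compartment, and FLOP figures are the ones from the talk; the bytes-per-compartment value is my assumption, chosen so the storage figure comes out at the quoted ~10 PB:

```python
# Back-of-envelope estimate for a compartmental whole-brain simulation,
# using the figures quoted in the talk. BYTES_PER_COMPARTMENT is an
# assumption chosen to match the ~10 PB total mentioned.

NEURONS = 100e9                # ~100 billion neurons in a human brain
COMPARTMENTS_PER_NEURON = 1e4  # 10,000 compartments per neuron
FLOPS_PER_COMPARTMENT = 1200   # Hodgkin-Huxley update cost, per second
BYTES_PER_COMPARTMENT = 10     # assumed state size per compartment

compartments = NEURONS * COMPARTMENTS_PER_NEURON           # 1e15 compartments
flops = compartments * FLOPS_PER_COMPARTMENT               # FLOP/s required
storage_pb = compartments * BYTES_PER_COMPARTMENT / 1e15   # petabytes of state

print(f"{flops:.1e} FLOP/s")        # 1.2e+18, i.e. ~1 exaflop
print(f"{storage_pb:.0f} PB state") # 10 PB
```

So the compute requirement lands at about 1.2 × 10^18 FLOP/s, which is where the “about 1 exaflop” figure comes from.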
Now, if we’re looking ahead - again, this is not really a prediction, it’s more a matter of looking back and seeing what we can expect in the next couple of years - between 2008 and 2012 we saw the area of connectomics really take off and give these wonderful results. Now, in 2013, we’ve got the Human Brain Project in Europe, we’ve got the BRAIN Initiative, we’ve got things like the 2045 Initiative that are also trying to pitch in and help out, and we’ve got the Allen Institute and other institutes like it that are all working in the same domain. What could we expect to see, say, another five years from now, in 2018? If we see the same sort of development in getting activity data from the brain as we saw with structure, then perhaps by 2018 it would be reasonable to propose a project where you say: let’s take Drosophila, this fruit fly with about 100,000 neurons; we’re going to get both the activity data and the structure data, put them together, and try to make an emulation of a fruit fly brain. So perhaps by the year 2018, that’s a project you could start.
Now, of course, all of these technologies have risks and benefits, and the only way you can really look at the risks and benefits of technologies is by learning about them, because we don’t know in advance how things are going to be applied. The good thing, of course, is that all of this gets developed iteratively. It’s little steps, and as we’re going along these steps we can learn about them. You know, way back, Socrates thought that the idea of reading and writing was really bad, because it was going to change us fundamentally, at our core as humans - we wouldn’t be able to remember things the way we used to. Now we can hardly imagine a time without reading and writing. So we know that sometimes making steps like that is exactly what being human is about, and we need to learn about it as we go.
So, what would it feel like to actually become emulated like this? I don’t really know, except I think that, because we’re already used to a certain degree of change from moment to moment, we can imagine that if the amount of change is similar, it won’t feel all too strange to go into a different substrate and experience that. And as we go along making neural prosthetics, we’re going to be able to help solve problems of trauma and disease, and it helps us store away information we find very valuable - more valuable than many of the documents we make backups of. I think, in a sense, this allows a certain diversification of our species as we move forward, which gives us a much better chance of dealing with challenges in the future. There are so many things here - personal identity, self-awareness, our qualia - that I haven’t addressed, but that are also addressed in the book. If you want, take a look at that.
So, I’ll leave you with this final bit. Over the years, organizations like my own, carboncopies.org, and now the 2045 Initiative, have been trying to map out these areas. We’ve been working with the experts, with the projects that are out there, and also with you, the public, to build more understanding of what may be possible, how you would go about it, and what the goals of something like that are. And the thing is, right now we’ve got this wonderful infrastructure, this science infrastructure, an economy that is able to tackle big problems - and we don’t know for sure that we’re going to have that in 20, 40, or 60 years. How can you say for sure what you’re going to have then, right? So we have this opportunity now to understand more about ourselves, to learn what it means to exist, and to help us thrive. I think we really should grab that opportunity.
Image 1: Neural Prosthetics
Image 2: Whole Brain Emulation
Image 3: Whole Brain Emulation: The Blue Brain Project
Image 4: A Goal of Neuroscience
Image 5: General vs. Subject-Specific Prostheses
Image 6: Brain Emulation in Health Care
Image 7: A Final Thought