The human brain is a big, complicated system, with different parts doing different things. No one fully understands how it works, yet. But like many other researchers, I think I have a fairly good idea, at a high level.
What I’m going to give you here is a capsule summary of how one AI researcher, and sometime computational neuroscientist, thinks about the structure and dynamics of the human brain — the “brain according to Ben,” if you will. I’ll also briefly discuss the relation between brain function and current approaches to artificial general intelligence.
If you want an in-depth yet concise tutorial on the current state of textbook neuroscience knowledge, try this one from Columbia University. Rather than reviewing the basics in "Neuroscience 101" style detail, what I'm going to do here is give an overview of what I think are the most critical points and how they all fit together.
Reader beware, though: Neuroscientists have discovered a lot, but there are many different, widely divergent expert opinions about how to integrate the diverse data available from neuroscience into a coherent whole. The ideas I give here are just one opinion, albeit one I think is well grounded from a variety of directions.
The Big Picture
I like this picture created by IBM researcher Dharmendra Modha and his team:
As I discussed in an earlier blog post, this picture shows 300+ regions of the macaque monkey brain and how they connect to each other. Most of these correspond to similar regions in the human brain; and a similar diagram could be made for the human brain, but it would be less complete, as we’ve studied monkeys more thoroughly.
Each of these brain regions has a literature of scientific papers about it, telling you what sorts of functions it tends to carry out. In most cases, our knowledge of each brain region is badly incomplete. The nodes near the center of Modha's diagram happen to correspond to what neuropsychologists call the "executive network" — the regions of the brain that tend to become active when the brain needs to control its overall activity.
But all these different parts of the brain do seem to work according to some common underlying principles. Each of them is wired together differently, but using the same sorts of parts; and there's a lot of commonality to the dynamics occurring within each region as well.
Between Neurons and Brains
All the parts of the brain are made of cells called neurons, which connect to each other and spread electricity amongst each other. The spread of electricity is mediated by chemicals called neurotransmitters — so, one neuron doesn't simply spread electricity to another one, it activates certain neurotransmitter molecules that then deliver the charge to the other neuron. Things like mood or emotion or food or drugs affect these neurotransmitters, modulating the nature of thought.
There are also other cells in the brain, like the glia that fill up much of the space between neurons, that seem to play important roles in some kind of memory. Some folks have speculated that intelligence relies on complex quantum-physical phenomena occurring in water mega-molecules floating in between the neurons — though I have no idea if this is true or not.
The part of the brain most central for thinking and complex perception — as opposed to body movement or controlling the heart, etc. — is the cortex. And neurons in the cortex are generally organized into structures called columns. The column is the most critical structure occupying the intermediate level between neurons and the large-scale brain regions depicted in Modha's diagram above. Each column spans the six layers of the cortex, passing charge up and down the layers and also laterally to other columns. There are a lot of neurons called "interneurons" that carry out inhibition between columns — when one column gets active, it sends charge to interneurons, which then inhibit the activity of certain other columns.
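To make the inhibition idea concrete, here is a tiny Python sketch of winner-take-all dynamics among competing columns. The function name, the inhibition strength, and the activation values are my own illustrative inventions, not physiological parameters or anyone's actual cortical model.

```python
# Toy sketch of lateral inhibition between cortical columns via interneurons:
# an active column suppresses its rivals, so after a few rounds only the
# most strongly driven column remains active (a winner-take-all effect).
# All numbers here are illustrative, not physiological.

def lateral_inhibition(activations, inhibition=0.5, rounds=10):
    """Each column's activity is reduced by a fraction of its rivals' total."""
    acts = list(activations)
    for _ in range(rounds):
        total = sum(acts)
        acts = [max(0.0, a - inhibition * (total - a)) for a in acts]
    return acts

# Three columns receive input of different strengths;
# the strongest one silences the other two.
print(lateral_inhibition([1.0, 0.8, 0.3]))
```

Running this, the weaker columns' activations are driven to zero within a few rounds while the strongest column stays active — a crude cartoon of the mutual inhibition described above.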
And columns tend to be divided into substructures that are often called “mini-columns”, or sometimes just “modules.” In some cases, it seems that each mini-column represents a certain pattern observed in some input, and the column as a whole represents a “belief” about which patterns are more significant in the input.
In the visual cortex, you can have columns recognizing particular patterns in particular regions of space-time, for instance. So one column might contain neurons responding to patterns in a particular part of the visual field — where the neurons higher up in the column represent more abstract, high-level patterns. Lower-level neurons in the column might recognize the edges of a car, whereas higher-level neurons in the same column might help identify that these edges, taken together, do actually look like a car. But the functions of columns and the neurons and minicolumns inside them seem to vary a fair bit from one brain region to another.
If you’d like to dig deeper into the column/minicolumn aspect, check out this recent review of minicolumns; and this more speculative paper, which proposes a particular function and circuitry for minicolumns. A capsule summary of the literature these papers represent is:
* cortical columns are in many cases well-conceived as hierarchical pattern recognition units, using their minicolumns together to recognize patterns
* the minicolumns in various parts of cortex are implementing a variety of different sorts of microcircuitry, rather than possessing a uniform internal mini-columnar structure.
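To illustrate the first point, here is a minimal Python sketch of a two-level pattern-recognition hierarchy in the spirit of the column/minicolumn picture. The detector names, weights, and thresholds are invented for illustration; they are not drawn from actual cortical data.

```python
# Toy hierarchical pattern recognition: "minicolumns" detect simple local
# features, and a "column" pools their outputs into a more abstract judgement.
# Weights and thresholds are illustrative inventions, not biological values.

def minicolumn(weights, threshold):
    """Return a detector that fires iff its weighted input crosses threshold."""
    def detect(inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return detect

# Level 1: minicolumns detecting simple features of a 4-pixel patch.
left_edge  = minicolumn([1, 1, 0, 0], threshold=2)
right_edge = minicolumn([0, 0, 1, 1], threshold=2)

# Level 2: a column that fires only when both features are present
# ("both edges there -> this patch looks like a bar").
def column(patch):
    features = [left_edge(patch), right_edge(patch)]
    return minicolumn([1, 1], threshold=2)(features)

print(column([1, 1, 1, 1]))  # both edges present -> 1
print(column([1, 1, 0, 0]))  # only the left edge -> 0
```

The same pooling step, stacked a few more times over progressively larger patches, is the basic shape of the edges-to-car hierarchy described above.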
Glocal Memory and Complex Neurodynamics
One of the tricky things about the brain is the way it mixes up local and global structure and dynamics. Each cortical column does something on its own, but also, it stimulates and inhibits many other columns — thus potentially causing a brain-wide pattern of activity. So each column has a local and a global aspect — something I like to describe with the weird word “glocal” (see this Neurocomputing paper for a technical treatment of the concept). There’s a lot of evidence for this glocal aspect in terms of human memory — memories of specific objects or people seem to be stored in networks of hundreds to thousands of columns, but the network corresponding to, say, “Barack Obama” can be triggered into activity by stimulating just a few of the columns involved in the network.
This glocal aspect allows columns to react to each other in subtle ways. If one certain column causes a global brain activity pattern, and then other columns react to this pattern, then basically these other columns are reacting to that one certain column. This turns the collection of columns into a complex network of “actors” that act on each other. Since each column can learn and adapt based on experience — using the ability of each neuron to modify its connections to other neurons based on experience — we have a population of actors (columns) that are constantly acting on each other (by reacting to the global activation-patterns each other cause) and then adapting based on this interaction.
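The partial-cue retrieval described above can be illustrated with a minimal Hopfield-style attractor network in pure Python. This is a toy sketch of the glocal idea, not a model of actual cortical circuitry; the stored pattern and the update rule are purely illustrative.

```python
# Minimal Hopfield-style attractor network: a memory is stored across the
# WHOLE network of units, yet stimulating just a few of them reactivates
# the full pattern -- the "glocal" storage/retrieval effect in miniature.
# This is an illustrative sketch, not a model of real cortical circuitry.

def train(patterns, n):
    """Hebbian weights: each stored +1/-1 pattern becomes an attractor."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronous updates: the state settles toward the nearest attractor."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

memory = [1, 1, -1, -1, 1, -1, 1, -1]   # a distributed "memory" pattern
w = train([memory], len(memory))

cue = [1, 1, 0, 0, 0, 0, 0, 0]          # stimulate just two of the units
print(recall(w, cue) == memory)         # prints True: the whole memory reactivates
```

With only one stored pattern and a tiny cue, one update step already snaps the network into the full memory; real Hopfield nets with many stored patterns behave less cleanly, but the qualitative point — global memories, locally triggerable — is the same.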
One can prove that this kind of system is able to give rise to endlessly complex forms, and do any kind of calculation that a computer can do. In a technical paper from 2008 (and see video of the corresponding talk here), I gave some specifics regarding how certain sorts of neural circuitry could give rise to the sorts of abstraction we see in mathematics and language.
Sound complicated? Yeah, of course it’s complicated — what did you expect?
In technical terms, this sort of dynamical system is not only complicated but chaotic and complex. Via a combination of holistic and localized dynamics, cortical columns spawn complex networks that respond to the world and to each other in complex ways. The real story of brain function lies not in the division of the brain into regions, nor in the particular algorithms inside minicolumns, but in the dynamical networks that self-organize in the overall network of columns. These self-organizing networks mediate the process by which the brain’s holistic state shapes the behavior of each brain region. And they are learned via experience, meaning that the brain/body’s experience continually guides the columnar network’s global self-organization process.
And the columns and minicolumns in the different regions of the brain, underlying all this self-organizing network activity, are all architected and interconnected in slightly different ways. And they’re not architected by some clever rational designer with an eye for elegance and order — they’re architected by evolution, with its penchant for heterogeneous intercombination of elegant efficient order and confused wasteful mess.
Neuroscience and AGI
Suppose this general picture of how the brain works is correct — what would it mean for Artificial General Intelligence, for the quest to build thinking machines with intelligence at the human level or beyond?
It would mean that, once we’ve unraveled the specifics of how all the columns and their internals and interconnections work, then we could build a digital brain. We’d need a heck of a lot of computers to do it — because in the human brain all the neurons can act independently at the same time. But with the help of ongoing exponential acceleration in computing power, we could do it!
On the other hand, what do we do before brain imaging technology advances far enough to tell us how the internal details of the brain actually work? Someone could certainly try to architect an artificial brain-like system by guessing how all the neurons and columns are wired together, in all the different subsystems of the brain… or by making something up that’s generally similar to how the brain works, but different in details.
However, that seems extremely difficult — which is probably why nobody is actually trying to do this at the moment. Instead, it seems to me that what folks who advocate brain-like AGI architectures are doing, is basically like this:
* take a crude approximation of one part of the brain
* hypothesize that the whole brain basically works like that one part in detail
* try to make a quasi-simulation of that part of the brain
* make various compromises in biological accuracy to achieve more computational efficiency.
A great example of this is the HTM — Hierarchical Temporal Memory — system proposed by Jeff Hawkins’s company Numenta. Jeff Hawkins made a fortune with the PalmPilot and Treo handheld devices, and after he retired from that business, he went into neuroscience and AGI. The HTM system is basically a model of the visual cortex — maybe the auditory cortex too — but it’s being proposed as a model of the whole of intelligence. It may be an OK model of vision and audition — though there are arguments to be made even there — but it has nothing to say about action, language parsing, social reasoning and emotion, and a whole lot of other things that are critical to human intelligence. It doesn’t even have much to say about senses like smell and touch, whose corresponding brain regions don’t have the marked hierarchical structure and dynamics that the HTM model focuses on.
I’m not saying this sort of work is worthless, by any means. One of my good friends, Itamar Arel, is developing a system called DeSTIN that is somewhat like Hawkins’s HTM — but Itamar’s seems to work better. It’s a hierarchical pattern recognition system that recognizes patterns in a stream of inputs. It doesn’t have cortical columns exactly, but it’s kind of similar — it has “nodes” corresponding to different space-time regions of the observed world; they’re arranged in a hierarchy, and higher-up nodes refer to larger space-time regions and more abstract patterns. It’s pretty nice, and if you hook it up to a webcam or a robot’s camera eye, it reacts to its inputs and settles into states that tell you something about what objects and events the robot is seeing. Excellent! But my own interest in DeSTIN is largely because I can connect it to the OpenCog AGI architecture I’m working on — and in my own view, OpenCog takes care of a lot of other aspects of intelligence that DeSTIN in its current form doesn’t touch.
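The node-hierarchy idea is easy to sketch: each bottom-level node summarizes one patch of the input, and each higher node pools several children, covering a larger region at a greater level of abstraction. The max-pooling rule below is a placeholder of my own, not the actual HTM or DeSTIN learning algorithm.

```python
# Toy node hierarchy in the spirit of HTM/DeSTIN-style architectures:
# each higher layer pools groups of child-node outputs, so nodes near the
# top summarize larger regions of the input at coarser granularity.
# The pooling rule (max) is a placeholder, not HTM's or DeSTIN's algorithm.

def build_hierarchy(inputs, fan_in=2):
    """Repeatedly pool groups of `fan_in` node outputs until one root remains."""
    layers = [inputs]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([max(prev[i:i + fan_in])
                       for i in range(0, len(prev), fan_in)])
    return layers

# 8 bottom-level nodes, each reporting activity in one patch of the "visual field".
layers = build_hierarchy([0, 1, 0, 0, 1, 1, 0, 0])
for depth, layer in enumerate(layers):
    print(f"level {depth}: {layer}")   # each level covers twice the area below it
```

In a real system like HTM or DeSTIN, each node also learns temporal sequences of its inputs and passes beliefs both up and down the hierarchy; the point of this sketch is only the shape of the architecture, larger space-time regions at higher levels.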
Itamar, on the other hand, thinks he can basically take DeSTIN, implement it on a lot of machines, tweak the algorithms a little, connect it to a robot, and get advanced general intelligence (see my H+ Magazine interview with Itamar here). He has plans for an action hierarchy similar to the perception hierarchy, and then a reward hierarchy that gets a stimulus when the system has done something good or bad and passes this along to the action hierarchy, which then passes it along to the perception hierarchy. I agree that adding some stuff onto DeSTIN would be necessary to make it do anything like human-level intelligence. But I think you’d need to do a lot more than just add action and reinforcement hierarchies. I think the human brain is just a lot more complex than that, and any AGI system that’s vaguely like the human brain is going to have to be a lot more complex than that. There will have to be many different architectures corresponding to many different brain regions, each one carrying out its own functions and all connecting together appropriately.
To take just one example almost at random, the human brain is known to deal with episodic memory — memory of your life-story and the events in it — quite differently from memory of images or facts or actions. But nothing in architectures like HTM or DeSTIN tells you anything about how episodic memory works. Jeff Hawkins or Itamar would argue that the ability to deal with episodic memories effectively will just emerge from their hierarchies, if their systems are given enough perceptual experience. It’s hard to definitively prove this is wrong, because these models are all complex dynamical systems and we don’t know how to predict their behavior exactly. And yet, it really seems the brain doesn’t work this way — episodic memory has its own architecture, different in specifics from the architecture of visual or auditory perception. I suspect that if one wanted to build a closely brain-like AGI system, one would need to design fairly specialized circuits for episodic memory, plus dozens to hundreds of other specialized subsystems.
The more we learn about how the brain works, the more sensible it will be to pursue brain emulation based AGI as one among many paths. Right now, any attempt to emulate the brain in an AGI system involves an awful lot of guesswork, because our understanding of the brain is still so primitive. And my own feeling as a researcher is that, if I’m going to do that much guesswork, I might as well liberate myself from the restrictions of emulating the brain and just think about the best way to create a digital mind given the hardware available to me. So that’s what my colleagues and I are doing with OpenCog — trying to build a thinking machine that doesn’t emulate the brain in any detail, drawing some inspiration from neuroscience but just as much from cognitive psychology, computer science and other areas. But if other researchers want to apply their talent for creative guesswork to figure out how to make more closely brainlike AGI systems — more power to ‘em! I have fairly strong intuitions about what path toward AGI is best to follow at the present time — but I also do strongly believe there are going to be many different workable paths to AGI, probably leading to many different kinds of minds.
AGI systems may also one day help us better understand the complexities of neuroscience — one can easily envision an AGI system making better sense of the intricacies of Modha and Singh’s diagram, that I showed above, than any human brain. There may be a future phase where AGI and neuroscience advance in synergy, each helping the other along — though my own guess is that, after this phase, AGI will continue advancing far beyond the restrictions of the human brain architecture. I see the human brain as one particular, albeit very fascinating and relevant, way of achieving a modest amount of general intelligence. It’s fascinating to watch our understanding of general intelligence unfold, both in the context of the general intelligence machines in our heads, and more broadly.
Toward Fuller Understanding
Neuroscience, like molecular biology and many other aspects of biology, has been advancing steadily and rapidly during recent decades. It is possible that future discoveries will yield radical conceptual breakthroughs, invalidating the ideas reviewed here. My guess, though, is that this won’t be the case. There will be new insights, there will be exciting neural phenomena unknown to 2012 science. But on the whole, I suspect that ongoing empirical neuroscience discoveries will serve to fill in more and more details — until, finally, we understand how the minicolumns and other modules inside the cortical columns distributed throughout the cortex operate and cooperate, giving rise to the coordinated dynamics that bind the brain’s multiple, differently-architected regions into a single complex adaptive system.
In other words, I suspect that when we finally do thoroughly understand the brain, what we will have will be: a long list of particular variations on the structures and dynamics I’ve described here, and a solid understanding of how these variations work together to let all the numerous parts and subnetworks of the brain collaboratively do their particular jobs.
What, specifically, would need to be done to move beyond our current level of understanding of the brain? — to validate or refute the general ideas outlined here, and to try to fill in the multitudinous missing details? There are many possible routes, but by far the easiest would be to find a way of imaging the living brain with high spatial and temporal resolution. Till we have that, moving rapidly toward a fleshed-out holistic theory of brain function is going to be really tough — though we will continue to make step by step progress. So — personally, if I wanted to work on neuroscience, I’d focus on developing radically better imaging tools.
Conceptually, the picture I gave there was basically the same as the one I have described here. Back then I hadn’t done any concrete computational neuroscience work, nor much practical AI work; and the neuroscience papers I read had even more gaps than the ones I read today. There were no detailed models of minicolumnar structure then, for example; and empirical data in favor of memory glocality was very scarce. But the high-level view of the brain I abstracted from the literature was about the same as the one I limn from what I read today. Little by little, year by year, we understand more and more. Till eventually — perhaps after a breakthrough in brain imaging — we’ll see the whole picture in detail … though the picture may end up so complex that no individual human mind will be able to fully understand the whole thing!
[Image #2: Cortical columns, imaged]
[Image #3: Hypothetical “wiring diagram” of certain cortical minicolumns, from a recent neuroscience paper]
[Image #4: Hierarchical structure of the visual cortex]
[Image #5: Hierarchical structure of HTM, DeSTIN and other similar vision processing / AGI architectures, modeled conceptually on the hierarchical structure of visual cortex]