New Computer Programming Language Imitates The Human Brain
George Dvorsky
2013-08-25



For nearly 70 years, computer scientists have depended upon the Von Neumann architecture. The computer you’re working on right now still uses this paradigm — an electronic digital system built from a handful of distinct units, including an arithmetic logic unit, a control unit, memory, and input/output mechanisms. These separate units store and process information sequentially, using programming languages designed specifically for that architecture.



But the human brain, which most certainly must be a kind of computer, works very differently. It’s a massively parallel, massively redundant “computer” capable of generating approximately 10^16 processes per second. It’s doubtful that it’s as serialized as the Von Neumann model. Nor is it driven by a proprietary programming language (though, as many cognitive scientists would argue, it’s likely driven by biologically encoded algorithms). Instead, the brain’s neurons and synapses store and process information in a highly distributed, parallel way.



Which is exactly how IBM’s new programming language, called Corelet, works as well. The company disclosed its plans at the International Joint Conference on Neural Networks held this week in Dallas.





Researchers from IBM are working on a new software front-end for their neuromorphic processor chips. The company is hoping to build on its recent successes in “cognitive computing,” a line of R&D that’s best exemplified by Watson, the Jeopardy-playing AI. A new programming language is necessary because once IBM’s cognitive computers become a reality, today’s languages won’t be suited to running them. Many of today’s computers still use programming languages derived from FORTRAN, which IBM developed in the 1950s for its early mainframes.



The new software runs on a conventional supercomputer, but it simulates the functioning of a massive network of neurosynaptic cores. Each core contains its own network of 256 neurons, which operate according to a new model in which digital neurons mimic the independent nature of biological neurons. Corelets — the equivalent of “programs” — specify the basic functioning of neurosynaptic cores and can be linked into more complex structures. Each corelet has 256 inputs and 256 outputs, which are used to wire corelets to one another.



“Traditional architecture is very sequential in nature, from memory to processor and back,” explained Dr. Dharmendra Modha in a recent Forbes article. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”



So, for example, one corelet can detect motion, another can identify the shape of an object, and another can sort images by color. Each corelet runs slowly on its own, but the processing happens in parallel.
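IBM hasn’t spelled out Corelet’s syntax here, but the “LEGO block” idea — small, fixed-size units wired output-to-input into larger functions — can be sketched in plain Python. Everything below (the Corelet class and the detect_motion, detect_shape, and sort_by_color blocks) is a hypothetical illustration of the composition model, not IBM’s actual API.

    # Toy model of "LEGO-style" corelet composition. All names are
    # illustrative stand-ins; IBM's Corelet language targets neurosynaptic
    # hardware, not Python objects.

    class Corelet:
        """A block with a fixed fan-in/fan-out of 256, mirroring the
        256 inputs and 256 outputs each corelet exposes."""
        PINS = 256

        def __init__(self, name):
            self.name = name
            self.inputs = [None] * self.PINS    # upstream (corelet, pin) pairs
            self.outputs = [None] * self.PINS   # downstream (corelet, pin) pairs

        def connect(self, out_pin, other, in_pin):
            """Wire one of this corelet's outputs to another corelet's input."""
            self.outputs[out_pin] = (other, in_pin)
            other.inputs[in_pin] = (self, out_pin)

    # Compose simple building blocks into a larger function, as Modha describes:
    motion = Corelet("detect_motion")
    shape = Corelet("detect_shape")
    color_sort = Corelet("sort_by_color")

    # Route half of the motion detector's outputs to the shape detector and
    # half to the color sorter. Each block does one small job, but all of
    # them can run in parallel on separate cores.
    for pin in range(128):
        motion.connect(pin, shape, pin)
    for pin in range(128, 256):
        motion.connect(pin, color_sort, pin)

The point of the sketch is the wiring model: because every block exposes exactly 256 inputs and 256 outputs, composing corelets is just a matter of connecting pins, and each composed block can run concurrently on its own core.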



IBM has created more than 150 corelets as part of a library that programmers can tap.



Eventually, IBM hopes to create a cognitive computer scaled to 100 trillion synapses.



But there are limits to the proposed technology. Alex Knapp explains:




Of course, even those hybrid computers won’t be a replacement for the human brain. The IBM chips and architecture may be inspired by the human brain, but they don’t quite operate like it.

“We can’t build a brain,” Dr. Modha told me. “But the world is being populated every day with data. What we want to do is to make sense of that data and extract value from it, while staying true to what can be built on silicon. We believe that we’ve found the best architecture to do that in terms of power, speed and volume to get as close as we can to the brain while remaining feasible.”




Corelet could enable the next generation of intelligent sensor networks that mimic the brain’s abilities for perception, action, and cognition.








And the IBM white paper can be found here.



[Via MIT Technology Review, Forbes, IBM]

 





This computer took 40 minutes to simulate one second of brain activity



And it required 82,944 processors to do it — showing that we're still quite a ways off from being able to match the computational power of the human brain.



Top image: Japan's K Computer — a massive array consisting of over 80,000 nodes and capable of 10 petaflops (roughly 10^16 operations per second). The system requires 9.89 MW of power to function — about as much as 10,000 suburban homes. Credit: RIKEN.



The simulation — the largest general neuronal network simulation to date — was performed by a team of Japanese and German researchers on the K Computer, a Japanese machine that only two years ago ranked as the world's fastest.



According to the researchers, it took the 82,944 processors about 40 minutes to simulate one second of neuronal network activity in real, biological time. To make it work, some 1.73 billion virtual nerve cells were connected by 10.4 trillion virtual synapses.



Each virtual synapse, positioned between excitatory neurons, carried 24 bytes of memory, allowing for an accurate mathematical description of the network. The simulation itself ran on the open-source NEST software and used about one petabyte of main memory — roughly the combined memory of 250,000 desktop PCs.
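As a quick sanity check — my own back-of-envelope arithmetic, not a figure from the research team — the reported numbers fit together roughly like this:

    # Back-of-envelope check of the reported simulation figures (my own
    # arithmetic, not taken from the RIKEN/NEST announcement).

    NEURONS = 1.73e9          # virtual nerve cells
    SYNAPSES = 10.4e12        # virtual synapses
    BYTES_PER_SYNAPSE = 24    # state stored per excitatory synapse

    synapse_bytes = SYNAPSES * BYTES_PER_SYNAPSE
    print(f"Synapse state alone: {synapse_bytes / 1e15:.2f} PB")  # ~0.25 PB
    print(f"Synapses per neuron: {SYNAPSES / NEURONS:,.0f}")      # ~6,000

    # The ~1 PB of main memory the run used therefore leaves plenty of room
    # for neuron state, connectivity bookkeeping, and NEST's own overhead on
    # top of the raw 24-byte synapse records.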



The simulation wasn't designed to emulate actual brain activity (the synapses were connected at random) — just to exercise a network of that scale. And though massive in scale, the simulated network represented only about 1% of the neuronal network in the brain.



“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explained Markus Diesmann in a RIKEN release.



I'm not sure how he can make such a grandiose claim, given that this machine — as impressive as it is — needed 40 minutes just to crunch a second's worth of raw brain processing, and that it represented only 1% of the brain's entire network. (Could you imagine an array 100 times the size of the one featured above!? Though, to be sure, Moore's Law will have something to say about the physical size of such arrays by the end of the 2020s.)
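To put a rough number on that gap — my own calculation, under the simplifying (and optimistic) assumption that runtime scales linearly with network size:

    # Rough estimate of the gap between the K Computer run and real-time,
    # whole-brain simulation. My own figures; assumes runtime scales
    # linearly with network size, which is optimistic.

    WALL_CLOCK_S = 40 * 60      # 40 minutes of compute time...
    BIO_TIME_S = 1              # ...for one second of biological time
    BRAIN_FRACTION = 0.01       # the run covered ~1% of the brain's network

    slowdown = WALL_CLOCK_S / BIO_TIME_S            # 2,400x slower than real time
    whole_brain = 1 / BRAIN_FRACTION                # ~100x the K's capacity
    real_time_whole_brain = whole_brain * slowdown  # ~240,000x the K's throughput

    print(f"Slowdown vs. real time at 1% scale: {slowdown:,.0f}x")
    print(f"Capacity needed just to hold the whole network: {whole_brain:,.0f}x")
    print(f"Throughput needed for real-time whole-brain runs: {real_time_whole_brain:,.0f}x")

    # An exascale machine (~100x the K Computer) could hold the full network,
    # which is the basis of Diesmann's claim — but under these assumptions it
    # would still run roughly 2,400x slower than real time.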



Regardless, the researchers say the result will pave the way for combined simulations of the brain and the musculoskeletal system using the K computer. They're obviously hoping that scientists working on various brain mapping initiatives will latch on to their technology.





Needless to say, matching the computational power of the human brain is one thing; emulating it is something entirely different. Due to its complexity, the human brain likely won't be emulated by a computer until sometime after the 2050s. 



These two articles were originally published on io9.