New Computer Programming Language Imitates The Human Brain


By George Dvorsky
io9

Posted: Aug 25, 2013

As we pointed out earlier this week, we’re still far from being able to replicate the awesome power of the human brain. So rather than use traditional models of computing, IBM has decided to design an entirely new computer architecture — one that’s taking inspiration from nature.

For nearly 70 years, computer scientists have depended on the Von Neumann architecture. The computer you're working on right now still uses this paradigm: an electronic digital system built around a processor containing an arithmetic logic unit and a control unit, together with memory and input/output mechanisms. These separate units store and process information sequentially, and they run programming languages designed specifically for that architecture.
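To make that sequential flow concrete, here is a toy sketch in Python (purely illustrative, not anything IBM ships): a single control loop fetches one instruction at a time from memory, the arithmetic unit operates on it, and the result is written back before the next step begins.

```python
# Toy illustration of a von Neumann-style machine: one control loop
# fetches instructions from memory and executes them strictly in sequence.
memory = {"a": 2, "b": 3, "result": 0}            # shared data memory

program = [                                        # instructions stored like data
    ("LOAD", "a"),        # accumulator <- memory["a"]
    ("ADD", "b"),         # accumulator <- accumulator + memory["b"]
    ("STORE", "result"),  # memory["result"] <- accumulator
    ("HALT", None),
]

accumulator = 0
pc = 0                                             # program counter (control-unit state)

while True:
    op, arg = program[pc]                          # fetch
    pc += 1
    if op == "LOAD":                               # decode + execute (ALU work)
        accumulator = memory[arg]
    elif op == "ADD":
        accumulator += memory[arg]
    elif op == "STORE":
        memory[arg] = accumulator
    elif op == "HALT":
        break

print(memory["result"])                            # -> 5
```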

But the human brain, which most certainly must be a kind of computer, works very differently. It's a massively parallel, massively redundant "computer" capable of generating approximately 10^16 processes per second. It's doubtful that it's anywhere near as serialized as the Von Neumann model. Nor is it driven by a proprietary programming language (though, as many cognitive scientists would argue, it's likely driven by biologically encoded algorithms). Instead, the brain's neurons and synapses store and process information in a highly distributed, parallel way.
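For a rough sense of that gap, here is a back-of-envelope comparison. The 10^16 figure is the article's estimate; the ~10^11 operations per second assumed for a desktop processor is my own ballpark, not a measured number.

```python
# Back-of-envelope comparison; both figures are rough estimates.
brain_ops_per_sec = 1e16      # the article's estimate for the brain
cpu_ops_per_sec = 1e11        # assumed ~100 GFLOPS for a desktop CPU of the era

ratio = brain_ops_per_sec / cpu_ops_per_sec
print(f"~{ratio:,.0f}x")      # -> ~100,000x
```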

That highly distributed, parallel style is exactly how IBM's new programming language, called Corelet, works as well. The company disclosed its plans at the International Joint Conference on Neural Networks, held this week in Dallas.

Researchers from IBM are working on a new software front-end for their neuromorphic processor chips. The company is hoping to draw on its recent successes in "cognitive computing," a line of R&D best exemplified by Watson, the Jeopardy-playing AI. A new programming language is necessary because once IBM's cognitive computers become a reality, conventional languages won't be suited to running them. Many of today's computers still run software descended from FORTRAN, a language IBM developed in the 1950s for its early mainframes.

The new software runs on a conventional supercomputer, but it simulates the functioning of a massive network of neurosynaptic cores. Each core contains its own network of 256 digital neurons, which follow a new model designed to mimic the independent behavior of biological neurons. Corelets, the equivalent of "programs," specify the basic functioning of neurosynaptic cores and can be linked into more complex structures. Each corelet has 256 outputs and 256 inputs, which are used to connect corelets to one another.
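IBM hasn't published Corelet syntax in this piece, but the structure described above (a hidden 256-neuron core exposed only through its input and output pins) can be sketched roughly as follows. This is a hypothetical Python illustration, not IBM's actual API.

```python
# Hypothetical sketch of the structure described in the article:
# each corelet hides a 256-neuron core and exposes only its 256 inputs
# and 256 outputs, which can be wired to other corelets.

class Corelet:
    FAN = 256  # inputs and outputs per corelet, per the article

    def __init__(self, name):
        self.name = name
        self.neurons = [0] * Corelet.FAN       # internal neuron state (hidden)
        self.inputs = [None] * Corelet.FAN     # each entry: (source corelet, output pin)
        self.outputs = [0] * Corelet.FAN       # values visible to other corelets

    def connect(self, out_pin, other, in_pin):
        """Wire one of this corelet's outputs to another corelet's input."""
        other.inputs[in_pin] = (self, out_pin)

a = Corelet("feature-detector")
b = Corelet("classifier")
a.connect(0, b, 0)                             # compose corelets like building blocks
```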

“Traditional architecture is very sequential in nature, from memory to processor and back,” explained Dr. Dharmendra Modha in a recent Forbes article. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”

So, for example, one corelet can detect motion, another can identify the shape of an object, and another can sort images by color. Each corelet runs slowly on its own, but the processing happens in parallel.
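As a loose software analogy for that division of labor, the placeholder "corelets" below each do one narrow job and run concurrently on the same input. The detector functions are invented stand-ins, and ordinary Python threads stand in for IBM's parallel hardware.

```python
# Loose software analogy: several slow, single-purpose "corelets"
# processing the same frame concurrently. The detectors are placeholders.
from concurrent.futures import ThreadPoolExecutor

def detect_motion(frame):
    return "motion" if frame.get("delta", 0) > 0 else "still"

def detect_shape(frame):
    return frame.get("shape", "unknown")

def sort_by_color(frame):
    return sorted(frame.get("colors", []))

frame = {"delta": 3, "shape": "circle", "colors": ["red", "blue"]}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(f, frame) for f in (detect_motion, detect_shape, sort_by_color)]
    results = [fut.result() for fut in futures]

print(results)   # -> ['motion', 'circle', ['blue', 'red']]
```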

IBM has created more than 150 corelets as part of a library that programmers can tap.

Eventually, IBM hopes to create a cognitive computer scaled to 100 trillion synapses.

But there are limits to the proposed technology. Alex Knapp explains:

Of course, even those hybrid computers won’t be a replacement for the human brain. The IBM chips and architecture may be inspired by the human brain, but they don’t quite operate like it.

“We can’t build a brain,” Dr. Modha told me. “But the world is being populated every day with data. What we want to do is to make sense of that data and extract value from it, while staying true to what can be built on silicon. We believe that we’ve found the best architecture to do that in terms of power, speed and volume to get as close as we can to the brain while remaining feasible.”

Corelet could enable the next generation of intelligent sensor networks that mimic the brain’s abilities for perception, action, and cognition.


And the IBM white paper can be found here.

[Via MIT Technology Review, Forbes, IBM]
 


This computer took 40 minutes to simulate one second of brain activity

And it required 82,944 processors to do it — showing that we're still quite a ways off from being able to match the computational power of the human brain.

Top image: Japan's K Computer — a massive array consisting of over 80,000 nodes and capable of 10 petaflops (roughly 10^16 operations per second). The system requires 9.89 MW of power to run, roughly the consumption of 10,000 suburban homes. Credit: RIKEN.

The simulation, which is now considered the largest general neuronal network simulation to date, was performed by a team of Japanese and German researchers on the K Computer — a Japanese machine that only two years ago ranked as the world's fastest.

According to the researchers, it took the 82,944 processors about 40 minutes to simulate one second of neuronal network activity in real, biological time. And to make it work, some 1.73 billion virtual nerve cells were connected to 10.4 trillion virtual synapses.

Each virtual synapse, which sat between excitatory neurons, required 24 bytes of memory, allowing for an accurate mathematical description of the network. The simulation itself was run on the open-source NEST software and used about one petabyte of main memory — equal to the combined memory of roughly 250,000 desktop PCs.
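Those figures can be loosely cross-checked. The interpretation that the rest of the petabyte holds neuron state, spike buffers, and overhead is my assumption, not something stated by the researchers.

```python
# Back-of-envelope check of the reported figures.
synapses = 10.4e12            # 10.4 trillion virtual synapses
bytes_per_synapse = 24

synapse_memory_tb = synapses * bytes_per_synapse / 1e12
print(f"synapse parameters alone: ~{synapse_memory_tb:.0f} TB")   # ~250 TB of the ~1 PB total

slowdown = (40 * 60) / 1      # 40 minutes of compute per 1 second of biological time
print(f"slowdown factor: {slowdown:.0f}x")                        # 2400x slower than real time

fraction_of_brain = 0.01      # the network covered ~1% of the brain's neurons and synapses
print(f"scale-up still needed: {1 / fraction_of_brain:.0f}x")     # 100x more network
```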

The simulation wasn't designed to emulate actual brain activity (the synapses were connected at random) — only to test the machine's raw capacity. And though massive in scale, the simulated network represented just 1% of the neuronal network in the brain.
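NEST itself is scriptable from Python, and a randomly connected spiking network of the same general kind (though vastly smaller) looks roughly like the sketch below using the PyNEST interface. Model and function names vary between NEST versions, so treat this as an approximation rather than the researchers' actual setup.

```python
# Minimal PyNEST sketch of a randomly connected spiking network,
# in the spirit of (but vastly smaller than) the K computer run.
# API details differ across NEST versions; this follows the NEST 2.x style.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 1000)            # 1,000 integrate-and-fire neurons
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_detector")                # called "spike_recorder" in NEST 3.x

# Connect every neuron to 100 randomly chosen presynaptic partners.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(noise, neurons)
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)                                   # 1,000 ms of biological time
print(nest.GetStatus(recorder, "n_events")[0], "spikes recorded")
```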

“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers, hopefully available within the next decade,” explained Markus Diesmann in a RIKEN release.

I'm not sure how he can make such a grandiose claim, given that this machine — as impressive as it is — required 40 minutes just to crunch a second's worth of raw brain processing power, and that it represented only 1% of the brain's entire network. (Could you imagine an array 99 times larger than the one featured above? Though, to be sure, Moore's Law will have something to say about the physical size of such arrays by the end of the 2020s.)

Regardless, the researchers say the result will pave the way for combined simulations of the brain and the musculoskeletal system using the K computer. They're obviously hoping that scientists working on various brain mapping initiatives will latch on to their technology.

Needless to say, matching the computational power of the human brain is one thing; emulating it is something entirely different. Due to its complexity, the human brain likely won't be emulated by a computer until sometime after the 2050s. 

These two articles were originally published on io9.


George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).


COMMENTS


I think it is generally agreed that you don't have to have either the human brain's processing power or its inputs to achieve powerful AI.  Instead, there are advantages that AI has (e.g. memory, pre-structure, exactitude, focus, duplication, analytical ability, lack of extraneous processes, etc.) that will inevitably help it exceed human capability in many areas.

Predictably, in the medium term we can expect a human/AI team, or an augmented human, to be the most powerful and meet with the most success.  After all, we are looking to build tools that will help us build better tools.  The Law of Accelerating Returns.  If a new computer language, or a new computer processor, helps us get better results, it is just a means to an end of creating even better AI architecture, so eventually we can reach the Singularity.





interesting!





