The most surprising feature of “Cablegate” is how thoroughly old-fashioned it is. Like a WWI cavalry battle, it’s loud, it’s interesting, and it makes for a good story, but it’s also a painfully traditional conflict between fundamentally obsolete forces.
If you could live in a world that was just the way you wanted it to be, with specifications you’d chosen, customized and personalized to meet your every need and fulfill your fondest desires, would you spend all your time there? Or would you prefer to stay here, in the real world?
With the headlines screaming about “age-reversing” possibilities following the results of the Dana-Farber Cancer Institute at Harvard University with telomerase manipulation in mice, I felt a bit of cold water was in order.
Maxwell Mehlman is a professor of law and bioethics at Case Western Reserve University, and author of Wondergenes: Genetic Enhancement and the Future of Society and The Price of Perfection: Individualism and Society in the Era of Biomedical Enhancement. Max is the final speaker of the Transforming Humanity conference held this weekend at the University of Pennsylvania by the Center for Inquiry. He is speaking here on “Can Humanity Survive Evolutionary Engineering?”
In the final stretch of this exciting conference on “Transforming Humanity,” we start with an excellent overview of the enhancement debate by Ronald Lindsay, the new director of the Center for Inquiry, a lawyer and bioethicist, and author of Future Bioethics.
A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close functional connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior.
After the exciting bath of left vitriol directed at enhancement and explicitly at my efforts to articulate a technoprogressive approach to enhancement, we turn to a friendly set of papers on neuroethics and biopolitics. (Live-blogging this weekend from the conference on the ethics of human enhancement, organized by the humanist Center for Inquiry and being held at the University of Pennsylvania in Philadelphia. You can follow George Dvorsky’s thoughts over at Sentient Developments.)
Eventually, we may reach a point where humans are immortal, hyperintelligent, and don’t suffer from mental illnesses. However, we will still probably argue with those we love, want things we don’t have the ability to get, and experience stress from most of the same factors that have caused it since the dawn of time.
Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of ‘‘simulation,’’ noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
Today George Dvorsky and I are live-blogging from the conference on the ethics of human enhancement, organized by the humanist Center for Inquiry and being held at the University of Pennsylvania in Philadelphia. We’re in the Biomedical Research building with about fifty people in attendance. You can follow George’s thoughts over at Sentient Developments, and I’ll be appending his notes here as well.
We are in for a time of major decision-making as the Moore’s Law of Cameras (sometimes called “Brin’s Corollary to Moore’s Law”) takes hold and elites of all kinds are tempted to utilize surveillance in Orwellian/controlling ways, often with rationalized good intentions.
Three new Program Directors, the appointment of three additional Fellows and nine Affiliate Scholars, a dozen new contributing writers, over 600 articles published - and we still have another month to go in 2010!
A novel approach to sentence generation - SegSim, Sentence Generation by Similarity Matching - is outlined, and is argued to possess a number of desirable properties making it plausible as a model of sentence generation in the human brain, and useful as a guide for creating sentence generation components within artificial brains. The crux of the approach is to do as much as possible via similarity matching against a large knowledge base of previously comprehended sentences, rather than via complex algorithmic operations. To get the most out of this sort of matching, a certain amount of relatively simple rule-based processing needs to be done in pre- and post-processing steps. However, complex algorithmic operations are required only for the generation of sentences representing complex or unfamiliar thoughts. This, it is suggested, is the sort of sentence generation approach that makes sense in a system that - like a real or artificial brain - combines the capability for effective local application of logical rules with the capability for massively parallel, scalable, inexpensive similarity matching.
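The core of the approach described above can be illustrated with a toy sketch: match an input semantic representation against a stored base of previously comprehended sentences, then fill in the particulars with a simple rule-based post-processing step. Everything here - the knowledge base, the Jaccard similarity measure, and the slot-filling templates - is an illustrative assumption, not the authors’ actual SegSim implementation.

```python
# Hypothetical SegSim-style sketch: generation by similarity matching
# against stored sentences, plus simple rule-based slot-filling.
# The knowledge base, similarity measure, and templates are assumptions
# for illustration only.

def jaccard(a, b):
    """Set-overlap similarity between two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Knowledge base of previously "comprehended" sentences, stored as
# (semantic feature set, sentence template) pairs.
KB = [
    ({"agent", "eat", "food"}, "{agent} eats {food}."),
    ({"agent", "give", "recipient", "object"}, "{agent} gives {object} to {recipient}."),
    ({"agent", "go", "place"}, "{agent} goes to {place}."),
]

def generate(features, bindings):
    """Pick the stored sentence whose features best match the input
    (similarity matching), then fill its slots (rule-based post-processing)."""
    _, template = max(KB, key=lambda item: jaccard(item[0], features))
    return template.format(**bindings)

print(generate({"agent", "eat", "food"},
               {"agent": "The robot", "food": "an apple"}))
# prints "The robot eats an apple."
```

The point of the sketch is that retrieval does most of the work: only the slot-filling (and, in a fuller system, the pre-processing of the input representation) requires rule-based machinery, with complex algorithmic generation reserved for thoughts no stored sentence resembles.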
Over at the Journal of Evolution and Technology we’ve published a new article by Nicholas Agar, in which he summarises some of the arguments from his new book, Humanity’s End, which focuses on and critiques the work of Ray Kurzweil, and the IEET’s Nick Bostrom, James Hughes and Aubrey de Grey.
There are a lot of things to be thankful for in this world, and I’ve got a pretty good list: A loving family, the glittering splendor of the cascading galaxies, Eddie Hinton’s guitar solo on the Staples Singers’ “I’ll Take You There” ... you know, the usual stuff. But here’s something you may not think warrants much gratitude this November: The wisdom and common sense of the American people.
University of Connecticut professor emerita Susan Anderson and her research partner, husband Michael Anderson of the University of Hartford, a UConn alumnus, are teaching machines how to behave ethically.