Douglas Rushkoff talks to Brian Lehrer on WNYC about his book Get Back in the Box: Innovation from the Inside Out (Collins Business, 2005), and about his gentrified Brooklyn living situation after getting mugged while taking out the trash.
The American Civil Liberties Union’s campaign for better data privacy laws: Governments and corporations are aggressively collecting information about our purchases, medical records, voting, and other behavior. The Bush Administration’s policies, coupled with new surveillance technologies, could eliminate our right to privacy completely.
In discussions surrounding bodily-autonomy issues associated with disability, there can be a tendency on all sides to attempt to reduce everything to gut reactions and sound bites when, in fact, a more complex analysis is needed.
(Jon Lebkowsky, over in the conversation with Bruce Sterling at the Well, reminded me of one of my favorite and most difficult posts over at WorldChanging, one that’s worth bringing over here. It’s an exploration of “geoethical principles”—the values we’d need to hold, and to hold tightly, should we ever be faced with the need to engage in geoengineering. Originally written in July of 2005, here it is in its entirety:)
One of the most obscene things about the burden of global misery - the extremes of poverty and disease we see in the developing countries - is that the money is actually there to relieve it. All that we need is the political will. Every rational calculation shows that the resources available to richer nations could be put to work with truly massive impact in improving the plight of the world’s poorest people, and with no real harm to the lifestyles of any well-to-do Westerners.
On the CBC’s science show Quirks and Quarks host Bob McDonald talks to biologists about whether natural selection still shapes humanity, and to journalist Joel Garreau about whether we are now determining our own evolutionary future.
Nanofabbers are on my mind right now. They’ve shown up in some work I’m doing with IFTF; they’re the focus of a project underway with CRN; and they’re one of the manifestations of the “software control of matter” conversation underway at the EPSRC Ideas Factory.
A fairly short Episode 7 of EIW Audio is now available. As noted in the episode, I did not prepare a science lesson this time (though I hope to do more of those in the coming year); instead, I focused on transhumanism as a concept, followed by a quick discussion of bias, and then more commentary on I Was A Teenage Popsicle, since I’m quite enthused about seeing this sort of thing in literature that isn’t explicitly science fiction.
As far as the discussion of transhumanism goes, to get some perspective on why I felt the need to discuss the terminology a bit, I recommend the following links:
So, to summarize what I said in the podcast, I see transhumanism far more as an attitude than as a club. I wouldn’t agree with efforts to turn it into a club, and frankly I don’t think they’re necessary—people who have a strong need for group association will perceive it and create it even if nobody makes an attempt to “market” it as such. (See: ten gazillion “fansites” out there devoted to everything from Harry Potter to knitting.) I see the term “transhumanism” as a means to find interesting people to talk to and as a powerful information-mining tool—and that’s it. I think that groups OF transhumanists can get together and organize to accomplish goals, but that the focus should always be on the goals themselves, and not on whether so-and-so is or isn’t really a transhumanist, or how we can “recruit” other transhumanists, etc. (Not that I’ve seen much of this, though.)
Turning the concept into an identity draws focus away from things like, “How do we defeat age-related disease and death?” and “How do we improve health care?” and “How can we help to ensure maximal morphological and cognitive liberty?” and “How might biotechnology help fight disease and address issues like hunger in developing nations?”
As social animals, humans already have a built-in mechanism for organizing—yes, even autistic humans, as is evident from sites like Aspies for Freedom. There’s no real need to convince people that “something” is worth joining; rather, there’s a tremendous need to focus on specific issues and attempt to transmit accurate and clear information that will enable people to address those issues. Perceiving this, I write articles here about longevity research in the hope that actual longevity research will become more widely known and supported, and I write about neurodiversity in the hope of bringing about greater respect for, and acknowledgement of, different kinds of brains.
I am a member of the WTA, mainly because I think there is a need for people from different backgrounds and philosophical positions in such organizations (I’m especially concerned about disability rights being properly represented), but this in no way means that I feel the need to agree with all the other members on everything! Keeping dialogue going is essential, and though I can see why some people might just want to bypass such organizations altogether (I’m usually one of those people—there’s a reason I understand cats!), sometimes it’s at least worth trying the experiment.
And, in case I haven’t communicated this clearly enough already, there’s a difference between using appropriate descriptive terms (particularly when they help you find information) and mucking about with identity politics.
Psychology Today recently published an article entitled “The Girl With a Boy’s Brain,” about 24-year-old neuroscience graduate student Kiriana Cowansage. No, it’s not an article about a transgendered person, but rather about a woman described as having Asperger’s Syndrome. I was intrigued to read such an article because while AS females most certainly exist, most writings that mention autism in any form are primarily focused on boys and men.* And this article almost isn’t an exception—look back at the title.
In some of my recent posts, I have been defending the rationality of certain moderate kinds of moral relativism, while making the usual sorts of points that philosophers love about the incoherence of vulgar moral relativism. I’ve also been trying to convey some idea of why I don’t like to wear the moral relativist tag, myself - even though I’d be in good company with Gilbert Harman (for example).
This 20-minute film on “The Future of Augmented Cognition” is set in the year 2030, in a cyber-security command-and-control facility for what appears to be the global bourgeoisie. The film was underwritten by DARPA to illustrate how augmented cognition will allow workers to integrate multiple sources of information without blowing their minds. It was directed by veteran TV producer Alexander Singer. [Thanks to the Neurophilosophy blog for the tip.]
Directed by Alexander Singer
“This short film takes place in 2030 in a command center that is tasked with monitoring cyberspace activities for anomalies that could threaten the global economy. The economy, which functions largely in cyberspace, is the link between countries and is extremely susceptible to instability. As might be expected, given the ever-increasing amount of data to be analyzed even in today’s world, the workers in 2030 are inundated with information from all sources. They have so much information to contend with that they are literally unable to process it all unaided. Fortunately, AugCog technologies have matured by this point and are commonly integrated into information-rich domains, including the featured command center. The film takes viewers through a near incident that is resolved by one of the analysts in the command center and is designed to tell two sides of the AugCog story: the innumerable benefits of the application of AugCog technology and the explanation of how that technology works.
In the first fifteen minutes of the film, viewers see the events as one who is observing the analysts perform their jobs. In the second portion of the film, viewers go inside the processes that allow the analysts to work, including the mental processes that enable them to process information and make decisions – all aided by AugCog technologies. The last five minutes act almost as a stand-alone documentary in which the S&T behind the closed-loop system is defined via computer animation.
AugCog researchers and developers are commonly asked about the future applications of AugCog, and FAC explains one such scenario—in one of many information-rich and demanding environments predicted to be even more prevalent in the future than they are today. While advances in knowledge management and human-computer interaction will be necessary to allow people to function in these environments, there will always be variability between people, within the same person over time, and even within the same person in real-time, as they move through the stages of a task. Augmented cognition technologies will be key in adapting computational systems specifically to individual users in order to maximize information processing.
Thanks to input from neuroscience experts, FAC represents the future head-mounted sensor technology, as well as the entire enhanced closed-loop system. By 2030, vast strides will have been made in the way our computers process information. So, while the information barrage will remain the same, computing systems will be more capable of filtering information in real-time, helping the user increase speed and productivity.
The consideration of an operational scenario for AugCog technologies, combined with an accurate and intriguing portrayal of human information processing and closed-loop system functioning, makes The Future of Augmented Cognition a must-see for AugCog scientists, engineers, and practitioners, or anyone with an interest in any discipline that is based on human-system computing concepts.”
The New Year has provided the occasion for the usual spate of to-do lists, wish-lists, and so on for the upcoming Congress. I for one am quite pleased to note how many of these lists have testified to what I have been calling here at Amor Mundi the politics of an emerging technoprogressive mainstream.
IEET Fellow Andy Miah will be consulting for the European Union-funded, multi-institutional Nano-Bio-Raise project, the goals of which are to “establish a multi-disciplinary expert working group of scientists, ethicists and social scientists to examine ethical and societal issues relating to nanobiotechnology and its converging technologies.”
In December Andy spoke on “Genetic Tests for Performance” at The Hastings Center in New York, and on “Human Enhancement & the Bioethics of Cultural Studies” at Loughborough University. Andy will be speaking on “The Challenge from Posthumanity” at the annual conference of the Australian Sports Commission in Brisbane in March 2007.
Andy will be lecturing at the Royal College of Art on ‘Posthuman Designs’ for their ‘Design Interactions’ Masters programme, which is interested in provoking public debate about the ethics of new technologies through the construction of prototypes.
Andy also just published (with E. Rich) “Genetic Tests for Ability? Talent Identification and the Value of an Open Future,” in Sport, Education and Society, 2006, 11(3), pp.259-273.
Now that we’ve gotten false notions of “god” out of the way, we come up against the question from which He insulated us: if human beings are not the “chosen” species, then are we at least capable of transcending nature, from which we emerge?
There is little doubt that if technoscientific developments were rapidly to transform long-customary limits that have defined human capacities—life-span, scarcity as a material hurdle to good will, and so on—all within the lifetimes of many millions of people now living, this might seem, to those of us caught up in the transformation, like a kind of bacchanal, a throwing off of human boundedness altogether.
Abstract: Most agree that our lives and our world are better if we are happier. So linking the moral goal of greater happiness with our biological understanding of happiness seems obvious. Let us think of the position that it is permissible for individuals to make this linkage—to use pharmacology and other technologies in the service of increased happiness—as the ‘bio-happiness’ proposal. Several different technologies might be used in pursuit of this goal: e.g., pharmacological agents (“happy pills”) might be developed, or pre-implantation genetic diagnosis (PGD) used to select embryos with genes associated with a high level of happiness, or embryos genetically engineered for happiness. Most of the paper is devoted to defending bio-happiness against criticisms, which may be characterized as follows:
(1) Happiness is not of moral importance.
(2) Bio-happiness cannot increase our happiness.
(3) Bio-happiness will come at too great a cost to other moral values.
What key terms do you think the culturally literate layperson, policy wonk, thinker or visionary should know, but probably doesn’t? Look over George and Jamais’ lists, and then give us suggestions here.
The T word is slowly but steadily penetrating the collective consciousness, and Fukuyama’s statement on transhumanism as “the most dangerous idea in the world,” as well as less sophisticated but perhaps more widely disseminated statements, for example by representatives of the world’s religions, ensure that more and more people everywhere on the planet try to understand what transhumanism is about by reading the sources. I think transhumanism is still in a phase where “there is no such thing as bad press” (well, almost), so I welcome almost any attack, even some delirious hate pieces, with some pleasure.