Human Changing
Jamais Cascio
2005-05-25 00:00:00





Worldchanging
May 25, 2005



The question of how society changes when we can enhance aspects of human capabilities is something we touch on regularly at WorldChanging. It's at least as important a question as how society adapts to climate change or embraces new tools for networking and communication; some of us would argue it may be even more important. As a topic of discussion, it has often been relegated to fringe culture and science-fictional musing, but a series of books over the last year has brought the idea ever closer to the mainstream -- and the most recent may be set to make the question of how humankind evolves a front-page issue.


Dr. James Hughes, bioethicist and sociologist at Trinity College and director of the World Transhumanist Association, published Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future in late 2004, examining the ways in which the technological enhancement of human capabilities and lives can strengthen liberal democratic cultures, not threaten them. (I interviewed Dr. Hughes last November, shortly after Citizen Cyborg was released: Part 1, Part 2, Part 3.) In March of this year, Ramez Naam, software engineer and technology consultant, brought out More Than Human: Embracing the Promise of Biological Enhancement, focusing on the ways in which biomedical treatments can and will improve human abilities and happiness. Both of these books -- which I highly recommend reading, even if you're a skeptic about the implications of human augmentation technologies -- received highly positive reviews and greatly advanced the conversation over whether and how to enhance human capabilities through technological intervention.


But I suspect it's the most recent book in this line which will have the greatest mainstream impact. Joel Garreau's Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- And What It Means To Be Human just came out a few days ago, and I expect it to end up on the summer reading lists of policy-makers and pundits everywhere. If Joel's name is familiar, it could be because he's a senior writer for the Washington Post, covering technology and society; it also could be because of his highly regarded earlier books, The Nine Nations of North America and Edge City: Life on the New Frontier. Joel approaches his subjects with a journalist's detachment but a partisan's passion; I've known him for about a decade (he's a part of the GBN "Remarkable People" network), and he's never failed to have his finger on the pulse of the zeitgeist. If Joel's covering it, there's little doubt it will soon be a regular part of our cultural conversation.


Earlier this month, I had the extreme pleasure of hosting
a conversation between James Hughes, Ramez Naam and Joel
Garreau, exploring the implications of human enhancement
technologies. While none of the three could be termed a
"bio-conservative," there are clear differences between
their perspectives on how society can and should respond to
new technologies (the lack of a bio-conservative in the
discussion was intentional; I wanted the group to be able to
explore the edges of implications, not get tied up in
arguments over terminology or moral standing). The
conversation ran over two-and-a-half hours; the resulting
transcript is correspondingly lengthy. But I expect that
you'll find the discussion compelling and fascinating, and
well worth your time.


And, as always, we appreciate your comments to continue
the discussion.




Jamais Cascio: I think the one thing we're all in
agreement on here is that we are as a society on the edge of
a really profound transformation of how we identify
ourselves, how we interact with each other, and most broadly
how we live. At the same time I think all three of you have
different perspectives on precisely what that means. Since
I've known Joel the longest, we'll go age before beauty
here. Why don't you go ahead and give us a start, Joel?



Joel Garreau: Well my perspective may differ a little bit
from the rest of us in that I view myself as a reporter, not
an advocate. I don't think I have a dog in this fight. I lay
out three scenarios in "Radical Evolution" for how this
might play out. You're looking at a curve of exponential
change in technology. There are four technologies that are
the driving forces. I call them the GRIN technologies:
genetics, robotics, information, and nanotechnology. With
the recurring doublings of this curve of exponential change,
you have a situation in which the past 20 years is not a
guide to the next 20 years. It's a guide at best to the next
8. And the last 50 years is not a guide to the next 50
years. It's at best a guide to the next 14. That alone is
disconcerting. But then I start asking, what does this mean
for society? Because I don't really care that much about
transistors. I care about who we are, how we got that way,
where we're headed, and what makes us tick. I think that's
probably true for all of us.


I sketch out three scenarios. The first is the Heaven
scenario. That's basically the Ray Kurzweil memorial
scenario. In that, human society moves on a nice smooth
curve equivalent to the curve of technological change and in
no time at all we've conquered pain, suffering, death, and
stupidity. It's a dramatic change in the society within 10
or 20 years and it's all largely good. I take that very
seriously as a scenario, although I'm not a particular
advocate of any of these.


Then scenario two is the Hell scenario, which is the Bill
Joy memorial scenario. That's eerily a mirror image of the
Heaven scenario. It agrees that we're looking at a curve of
exponential change. It agrees that we're looking at the time
compression. But the premise of that one is that these GRIN
technologies are offering unprecedented power to
individuals. If you do that, it's equivalent to handing a
million individuals an atomic bomb and asking yourself do
you suppose one of them might go off. In that scenario we're
talking about extincting the species in 20 or 30 years.
That's the optimistic view. The pessimistic view would be
extincting the biosphere. I take that very seriously too.


(When I identify Kurzweil and Joy with these scenarios,
they're just the most obvious people to talk about. But
there are lots of others who agree.)


Then there's the third scenario, Prevail. It's not a
middle scenario between Heaven and Hell. It's way off in an
entirely different territory. The poster boy for that one is Jaron Lanier, who was important in the formulation of virtual reality. The
critique that Prevail offers is that both Heaven and Hell
assume that society is going to be pretty much driven by
technology. If you were doing summer movies of Heaven and
Hell there wouldn't be much of a plot. It would be: there
are amazing changes occurring; there's not a hell of a lot
we can do about it; hold on tight; the end. With fabulous
special effects.


With Prevail the assumption is that human history is not
necessarily linked to any driving forces, no matter how
apparently powerful. It's assuming that even if the
technology is on a smooth curve, that doesn't mean that the
changes in human nature are on a smooth curve. They'll have
farts and belches and reverses and loops like everything
else throughout all of human history. It also assumes that
the measure of change should be different from Heaven and
Hell. Heaven and Hell are both basically measuring the
number of contacts between transistors as the measure of
change. Prevail assumes that the appropriate measure is the
number of connections between human beings, not between
transistors. Does that all make sense?


Cascio: Yeah, it does. And it's a good
introduction. James? Talk to us about what you think the
next 10 to 20 years holds for us.



James Hughes: Well Joel does lay out some of the things that
are also on my time horizon. But I do have a dog in the
fight: I've always been a political activist and on the side
of democracy, equality, freedom, things like that, and one
of my concerns here is that these technologies will change
the terrain for a lot of the concerns that we've had over
the last couple hundred years about creating a more
democratic and equitable world. I want to make sure that we
chart how to best create that world given those
technologies. What I see as an initial response from a lot
of the people that I would otherwise consider to be my
allies politically is to try to shut down those processes of
change.


So my intervention in these arguments has been to get
people who are politically of good will to start thinking in
a more serious way about the positive benefits that these
technologies can bring to people's lives in the Heaven
possibilities, as Joel puts it. And also to think about the
Hell possibilities in a serious way because I don't think
that simply banning technologies or even proposing that we
ban these technologies is either a realistic option or sets
us up for the appropriate policy discussions that we need to
be having. Finally, in terms of the Prevail scenario, I do
think that a lot of the techno-wonks on the other side have
not given sufficient attention to the ways that technologies
are the product of social relations and that different kinds
of technologies can be produced by different kinds of
societies depending on how those societies are structured.


In other words I think we need to be having a lot more
discussion of intellectual property and we need to be having
a lot more discussion of equitable distribution of these
technologies. There's been a prevailing assumption that once
they're available then everyone will eventually have them or
will very quickly have them and I don't think that's
necessarily the case.


So those have been the two ways that I've been trying to
intervene. With the enthusiasts I talk about the equity and
freedom issues, and with the equity and freedom folks I talk
about the technology issues. So in terms of the way I see
the next 30 years developing, I think we're gonna have a
restructuring of our political landscape in really traumatic
ways. There's going to be a political Moore's Law, political
singularities to accompany the technological singularities
and I've been trying to figure out where the dogs to bet on
are in those fights.


Garreau: I'd like to hear more about the political
Moore's Law.


Hughes: As a social theorist, a social scientist, I've always been pretty much a technological determinist, at least in the sense that
technologies create the context for the kind of social
relations and political relations that one can have. They
change the terrain. They don't necessarily determine who the
winners are going to be but they create these new
possibilities. And so insofar as we're going to have a
human-level brain on everybody's desk in 20 years connected
to a global network of high-bandwidth connections that then
start burrowing inside our own brains and connecting us to one another, I just don't see how we could possibly go on
with the same kind of political structures and ideologies
and so forth that we've had in the past.


I think that we'll need global governance to prevent the
spread of very dangerous technologies, to have prophylactic
answers to global climate change and species extinction and
near earth asteroids and cataclysmic tsunamis and all these
kinds of things. We also need global governance and global
distribution in order to ensure that everyone gets access to
the Heaven parts of these technologies, so that everyone in
the developing world will have the life-extending shot when
it becomes available. So a lot of the political fantasies
that people like me, left-wing folks like me, have had for
many hundreds of years will have an opportunity to be brought to fruition in the next 30 years.


Cascio: Ramez?



Ramez Naam: Well I take a slightly different tack, I think, than either of the last two, and part of it I think stems from the fact that I work in R&D myself. I work in
software and I see big, complex things get built all the
time and I see how much longer they take to come to fruition
than people usually anticipate or plan for. So despite Moore's Law and so on, I think change will be a bit slower than is projected by Kurzweil or Joy. I'm pretty skeptical on nano in its most world-changing potential manifestations.


But I agree with something that I heard both James and
Joel say, that a lot of what we'll see in terms of change is
not driven by the technology itself, it's driven by human
nature and innate human desire and kind of innate emergent
features of systems in which many people interact. Things
like the fact that once you create information it can be
leveraged by many people, and that we as a society are just
creating more and more of it. Some of that information is
knowledge about how to do things like alter our genes and so
on.


But let me elaborate somewhat on that being driven by
human nature and human desires. What I would say -- let's
say an exaggeration of my stance -- is that there's nothing
new under the sun here. That these technologies, human
enhancement technologies, are a logical extension, and an
extension in degree rather than in kind, of things that
we've been doing for millennia. That we've always sought
more knowledge about the world around us, always sought more
knowledge about our own minds and bodies, and new ways and
more powerful ways to alter them. And more specifically that
people have always been looking for ways to extend their
health, extend their life spans, to advantage their
children, to have more control over the material world.


The area that I focus on is really human enhancement
rather than the whole spectrum of GRIN technologies, as Joel
put it. Those technologies, the human enhancement ones, are
very different from things like nuclear bombs, in that they
for the most part have impact on the world one person at a
time. They do not have the capacity for explosive,
exponential spread throughout the planet destroying
everything that they touch. The technologies that are
forecast to do that -- I think we should be slightly
skeptical of the claims of their coming into being. The most
dangerous technology on the planet for the next 50 years, I
suspect, will continue to be nuclear weapons.


As far as social policy goes, I do have a stance on that, which is that throughout history it seems the
societies that have flourished have been those that have
maximized individual choice. We've seen this in terms of
communist societies versus more open societies in the last
century, and before that as well. The reason for that is
that many people making individual decisions collectively
form a more intelligent network, if you will, or a better
global brain, than a few central policy makers. That's not
to say that I'm against government regulation, I'm not; I
think there are very valuable places for government to
intervene, specifically around regulating safety, around
regulating access to accurate information, and helping to
guarantee that consumers have accurate information, and in
assisting those who don't have the means to acquire these
technologies when that's necessary. An analogy I'd draw
there is to public education or vaccination programs.
There's a lot of history of doing those things in a way that does not reduce human liberty and still helps to boost equality and benefit society.


Cascio: So Mez, you feel that enhancement
technologies are, in many respects, simply a continuation of
historical trends?


Garreau: I'm not sure I agree with that, by the
way.


Cascio: Well tell us why you don't agree with
that, Joel.


Garreau: I think that this is an inflection point
in history. This is a change in direction. For the last
several hundred millennia, our technologies have been aimed
outward, at modifying our environment, in the fashion of
fire and clothes, agriculture, cities, airplanes, space
travel, and so forth. What these technologies are doing are
for the first time aimed inward, at modifying our minds,
memories, metabolisms, personalities, progeny, and,
therefore, possibly what it means to be human. When you
start increasingly blurring the line between the made and
the born, when you are increasingly controlling your own
evolution, I think that's a real inflection point in
history.


Cascio: Joel, how would you draw a difference
between the biomedical technologies for internal
transformation, internal enhancement, and something like
yoga or lengthy therapy which can have very profound effects
in terms of changing one's personality, changing one's
behavior and beliefs?


Garreau: I don't know enough to answer that
question. I mean I just don't know that much about yoga.


Cascio: I just use that as an example of something
that has been going on for a long time that has an effect
that sounds very similar to what you describe.


Garreau: It seems to me what we're talking about
in the next 20, 30, 40, 50 years is a shift as profound in
what it means to be human as was the case when we moved out
from being Cro-Magnon or Neanderthals into modern humans.
That's a pretty breathtaking thought. The question is where
does the wisdom come from to handle that kind of
transcendence. It's not that we humans haven't tried to
transcend in the past. We've tried Cartesian logic, and
we've tried Christian sanctification, and we've tried
Buddhist enlightenment, and we've tried the new Soviet man, and there are countless others. These have had limited to
mixed impact over history in terms of what it means to be
human.


Cultural evolution has made a difference over the last
8,000 years. What it means to be human today is not what it
meant to be human in the world of the nasty, brutish, and
short lives of the people who first came across the Bering Strait into North America. I do think cultural evolution
has mattered. Thus, if you're counting, I think what we're
looking at is the third evolution. The first evolution being
biological evolution that Darwin describes. The second being
cultural evolution that basically covers the history of us humans being able to store and retrieve and collect the
wisdom of billions of us collectively because of reading and
writing and storytelling. But now with this engineered
evolution, or this radical evolution, you're talking about
making big changes in the internal aspects of our minds,
memories, metabolisms, personalities, and progeny in a way
that I don't think we've seen before. I don't know enough
about yoga, but I don't think yoga has had a collective,
massive effect on what it's meant to be human and I'm afraid
this time that may be what we're talking about.


Hughes: If I can pick up on that, I emphasize more
of the continuity, in that I argue in my book that the
aspirations for transcendence which Joel references have
intellectual and cultural precursors in our spiritual
traditions, with every shaman who was trying to escape from
sickness, aging, and death through their spiritual
practices, and every religious tradition that promised a
brighter, better world. These were the precursors that show
that we have this aspiration for a dramatic transcendence of
some kind. The melding of those aspirations with rationalism
and humanism in the 16th, 17th centuries began to give birth
to what we now understand to be a transhumanist movement, a
movement toward a radical transformation of the human
condition through science and technology. And I'm very taken
with the argument also of Andy Clark when he argues in Natural-Born Cyborgs that the very first human intelligence augmentation was written language, because when we wrote numbers down on a piece of paper in order to calculate, we were using those external objects to supplement our own cognition.
We're offloading our memory into a piece of paper. We don't
have to memorize those things anymore. And those
technologies have had dramatic effects on human culture such
as the decline of the oral tradition.


But I think we have a continuity of processes and
aspirations at the same time as we have something
qualitatively different emerge. For me another example of
the political singularity that I see coming down the road is
that although we've always had people who wanted to
transcend the self - from a Buddhist point of view,
recognizing that the self didn't really exist in the first
place - one of the things I see coming about, probably in
this century, is that nano-neural networks melded with
cybernetics will make it extremely clear to people that the
self is an illusion. We will begin to backup and copy and
rewrite and be creative with our own selfhood in ways that
will eventually lead to its dissolution and the
reconfiguration of liberal democracy which is based on the
notion of individual autonomous selves. The last chapter of
Mez's book is pretty good on depicting what it might look
like when people are walking around with these nano neural
nets in their heads. But who gets a vote when we have the
Borg? Does the whole Borg get one vote or do they get a
million votes? Those questions are the ones that blow my
mind.


Cascio: Is there a fundamental difference between
augmentation and enhancement that's based on external
artifacts and augmentation and enhancement that's based on
genetic or very deeply internal changes?


Garreau: What are you thinking about when you say
augmentation?





Cascio: Well I'm trying to use a fairly broad term because I think
that to an extent I agree with the notion that things like
paper and calendars and the like are in fact technologies of
enhancement, technologies that add to individual
capabilities. So augmentation can include memory
enhancement, can include enhancements to one's physical
abilities. I don't know if you saw the report that came out
recently about a new kind of contact lens that makes it much
easier for athletes, baseball players is the example they
use, to see during the day. It's like wearing sunglasses
right against your eyeballs.


Garreau: It gives you a really spooky pink ring
around your eyeball, too.


Cascio: Exactly. So that's an example of an
artifact-based augmentation as opposed to gene doping.


Hughes: Well if they were to get LASIK, which does the same thing, that would be changing their own bodies. Are you asking whether there's a difference between wearing a contact lens and getting LASIK?


Naam: Or if there's an important difference. I
don't think there is, to be honest. Or if there is, it's really a function of how much you can do with external augmentations. I think when you talk about internal
augmentations the reason they seem more profound to people
is that people imagine them being able to have more dramatic
impacts. If I could have an internal alteration to my neural
chemistry that changed me from an introvert to an extrovert
or changed my sexual orientation that seems very profound to
people. But it would seem just as profound, I think, if it
were an external augmentation that did the same thing.


I think maybe a more interesting axis than external
versus internal is permanent versus temporary. When people think about genetic alteration of humans, and human behavior in particular, they tend to jump to the model of a parent genetically altering their children. I actually suspect that we'll see
more use of genetic techniques, whether through
pharmacogenetics and the creation of small molecule drugs or
gene therapy or things like them for people to alter
themselves rather than to alter their children. Parents,
while highly motivated around their children's welfare, are
even more motivated, it seems to me, around their children's
safety. Whereas a lot of individuals, especially young adults in their early to mid-20s, are much more willing to
take risks.


So one of the points that I try to make is that to the
extent that it's a change in human nature, I don't think
that we're seeing any permanent changes in human nature. I
think what we're gonna see is more empowerment of
individuals to alter their personality, behavior, emotional
landscape, cognition or whatnot, on either a temporary or
permanent basis as they choose. That's an additional
capability to humans. It's not one they didn't have before,
but it's one that's more dramatic I'd say.


Hughes: And it's one technology that's very
troubling because when people are able to permanently change
their own motivations, and what they consider to be
important, how do we preserve individual choice and freedom?
When you change your display in Windows it gives you a 10-second window in order to switch it back.


(LAUGHTER)


Hughes: When we have the ability to permanently
rewire our brains to want new things we should build in that
kind of fail-safe so that we switch back to our default mode
and then say "now do I really want, for the rest of my life,
to be a grub" or whatever it is that you're switching
yourself over into.


Cascio: I'm really kinda concerned about the idea
of using Windows as a metaphor for any of this.


(LAUGHTER)


Garreau: Yeah, that's the Hell scenario. A Windows
crash, what an awful way to end the species.


Cascio: Actually that's part of Jaron Lanier's argument about why he doesn't worry about the Hell scenario: computers are fallible, and the Singularity -- the robot revolution -- will end with an operating system crash.


So there are several of these axes or tensions that seem
to be coming up both in this conversation and in the broader
literature: internal versus external augmentation, the
permanent versus temporary, there's also the enhancement
versus therapy concept, that some of these proposed and
extant augmentations are valid and acceptable to society as
long as they are bringing people who have some kind of
disability up to the broadly accepted norm, whereas if
they're used for enhancing the abilities of people who are
otherwise able it's forbidden or prohibited or at the very
least discouraged.


One example from Mez's book concerns the use of Ritalin.
For people who have not been diagnosed with ADD, Ritalin
actually can be extremely valuable as a way of focusing your
attention. But you can't go out and get a Ritalin prescription, and you can't buy it over the counter, without having this particular medical diagnosis. Do you three feel that the enhancement-versus-therapy axis will continue to be an issue, or how will that play out?



Garreau: Well I have a scenario on that. Take any enhancement technology. I'm thinking of the ones that exist, like modafinil,
trade name Provigil. This is the primitive prescription drug
that allows you to stay awake without any of the side
effects of speed or caffeine like jitter or paranoia. You
always see the same path. The drug is originally aimed at
the sick. In this case it was aimed at the narcoleptics who
fall asleep uncontrollably. But within the blink of an eye
it moves on to group two, which is the needy well; in this
case it was instantly tested on Army helicopter pilots who
were young and healthy. The Army discovered that these
helicopter pilots could function splendidly for 40 hours
without sleep and then have 8 hours of sleep and then do it
again for another 40 hours.


And that's just the first iteration of this. The stuff
that's in the pipeline is much more impressive in its
effects. But the third group to be attracted to enhancements
like this is where people start getting creeped out. And
that's the merely ambitious, the people who want to stay
awake either in the immortal words of Kiss, to "rock and
roll all night and party every day," or they're just
ambitious because they want to make partner in a law firm
and they want to outperform their peers. And so they lunge
at any enhancement that you can offer. Viagra was originally
created for some other therapeutic reason but of course its
big market has been the ambitious, if you will.


I think we're going to see that path with any enhancement
and I think what freaks people out is the idea that it's
going to be used by people who simply want to have advantage
over their competitors. If you buy that path, then you're
looking in the very near term at a potential division of the
species between the Enhanced, the Naturals, and the Rest.
The Enhanced are the people who have the interest and the
money to embrace all of these enhancements. The Naturals are
the ones who could do it if they wanted to, but they're like
today's vegetarians or today's fundamentalists, and they
eschew these enhancements for either aesthetic or political
or religious reasons. The third group is the Rest and either
for reasons of geography or money, they don't have access to
these enhancements and they hate and envy the people who do.
That division could get pretty exciting pretty fast in terms
of conflict.


Cascio: I couldn't help but think as you were
talking about the ambitious people taking the Provigil, the
parallels there to mobile phones, fax machines, and being
online. At a certain point over the last decade it became an
expectation that if you were in certain businesses you had
to have a mobile phone, you had to be online all the time
and reachable all the time, such that that was not a choice.


Hughes: My answer to that complaint is that
literacy is in the same boat. When you teach people to read
are you making the illiterate less well off? Yes, in fact,
in a generally literate society employers will generally
want to hire literate people. But we don't then argue that
we shouldn't teach people to read because we're making the
illiterates worse off. We argue that we should teach
everyone to read. So if there is a substantial population of
Amish in the future who feel disenfranchised because they've
decided not to take the cognitive enhancement drugs, and
aren't able to work at what's considered the then normative
level of work productivity and cognitive performance, I
don't really think that the answer is to have a regulatory
approach. I'm not suggesting that that's Joel's answer, but
that is a lot of people's answer.



I also don't think that there's any useful distinction between
therapy and enhancement although many people will persist in
making it. My favorite example is that anti-aging medicine
will stop an awful lot of diseases. I don't see how you can
distinguish in that case between saying well this is also a
prophylactic against cancer, and saying that it will extend
my life a couple tens of decades. In terms of the
psychopharmaceuticals I'm generally in favor of
deregulation. As I said I think that there are gonna be some
psychopharmaceuticals and neuro-nano technologies which will
have very profound dangers attached to them, much more
dangerous than heroin and cocaine are today. But we see with
the Drug War today the tremendous social costs associated
with restricting people's cognitive liberty.


My final point about this is that the real distinction in
the future will be between what we have "in the Plan," that
is what we have as a matter of universal access, and what we
have in the market. Already we have "enhancements" covered
by Medicaid or Medicare or by private health insurance, like
breast reconstruction after cancer or Viagra, and so we just
stretch our boundaries of what we consider to be therapeutic
to include these cosmetic or life enhancements. At the same
time, over in the marketplace, we have things like aspirin
and Band-Aids which are indisputably therapeutic but we've
decided that there's no useful reason why they need to be
"in the Plan." So I think that's the kind of decision that
we're gonna have to make in the future. If there are drugs
or treatments or devices which threaten to radically
exacerbate inequality in society that is the point at which
you say everybody needs access to this through some kind of
universal access system - put them in the plan and give them
to everybody. But if the enhancements don't threaten those
kinds of inequalities, then we can have a debate about
whether they belong in the market or not.


Cascio: So is post-singularity Medicare the
answer?


Garreau: It's a very attractive picture that we're
painting here of things like global government and
super-Medicare and all that. I wonder whether that's
realistic politically.


Hughes: Well every other industrialized country in
the world has single-payer healthcare. We're the only one
that doesn't. So in every other country in the world
there'll be this debate. In the U.S. we have to create that
single-payer before we have that debate.


Garreau: It's not an accident that we're not
agreeing on these things in this country. It's not just an
oversight. I mean, I'm not defending this arrangement, I'm
just observing this situation that we have. It's not like
we're going to wake up some night and say "Oh, God, how
silly of us, we're going in the opposite direction from the
rest of the universe, let's change this overnight." These
politics exist for a reason. I guess I'm saying this because
I'm based in Washington, but I'm having difficulty picturing
how we're going to get the Congress to buy our more utopian
hopes and dreams.


Hughes: Joel don't you think that once we have an
anti-aging pill that Medicare will provide it?


Garreau: That's a good question. I'm interested in
guys like Francis Fukuyama and Leon Kass and people like
that in the President's Council on Bioethics. They go so far
as to defend pain and suffering as being essential to what
it means to be human. And I don't necessarily agree with
them but I give them plenty of points for style...


(LAUGHTER)


Cascio: I really wonder how many people,
especially people who have gone through the very painful
death of a loved-one, would accept that argument.



Naam: Can I just step in here a little bit? I think when we get
into these conversations we're again succumbing to wild
exaggeration of what is possible.


We will never eliminate death. We will never eliminate
pain. We will never eliminate people who have personalities
that are not exactly what they want. So we may talk about
changes to the degree of those, but you could easily argue
that aspirin should be subject to a moral debate about
whether or not pain is a good thing. Well we've got lots of
aspirin and other painkillers but that hasn't eliminated the
phenomenon.


Garreau: Right. Well, do we want to discuss the bio-conservatives' argument? These guys do exist and they are
raising interesting questions. The way we got to this was
because of the question of what's going to be politically
possible. Right now these guys have the ear of a lot of
powerful people.


Cascio: I think one thing that's interesting to me
about this is that it underscores a point that James made
earlier: that the course of these technological
transformations is greatly shaped by the nature of the
societies from which they spring. So yes, what you're
describing, Joel, is a very accurate depiction of the way
things stand in the U.S. but the U.S. is not going to be the
only place that's dealing with the onset of these kinds of
technologies.


Garreau: Right.


Cascio: So one thing that will be very interesting
to watch will be the divergence between the U.S. and Europe
and India and China -- about how these technologies are made
available. The cost. Would there be old people sneaking over
the border to Canada to get anti-aging treatments because
they aren't paid for by Medicare?


Garreau: I'm particularly interested in seeing
where the Europeans go with this. If they can't take
genetically altered food I wonder what's going to happen
when they start being offered the possibility of, as you
say, exceedingly long lifetimes or any of the other things.
Lord knows Asians have plenty of their own taboos, but they
are frequently different from the taboos that you'll find in
the West. My understanding is that the idea of mucking
around with your body to enhance it, increase muscle mass
and stuff like that -- the whole Barry Bonds thing -- just
totally perplexes them. They don't understand why we're
getting so upset. It's going to be interesting.



Take the guys at the University of Pennsylvania who are genetically enhancing the so-called Schwarzenegger mice [and cattle]. Lee
Sweeney, the guy who does this, believes that the Athens
Olympics was the last Olympics without genetically-modified
participants. I keep on wondering what's going to happen if,
for example, the entire Chinese Olympic team in 2008 comes
out looking a whole hell of a lot different from a lot of
their competitors and doing so in a fashion that you can't
test. Is that going to be like Sputnik? Is that just going
to rock people in a similar way?


Hughes: I think China is one of my biggest
questions about the future because I, of course, like most
people, never foresaw that the Soviet Union would collapse.
And as soon as it collapsed I thought that in a matter of
very short time that China would collapse as well, the
Chinese Communist Party would collapse, and I thought that Tiananmen Square was the beginning of that. It hasn't been.
China has established a relatively stable authoritarian
model of capitalism and they're extremely enthusiastic about
the GRIN technologies and they've poured a lot of money into
them. Fifty percent of all bachelor's degree graduates in
China are getting engineering degrees and we know that they
have that kind of prowess. And imagine the authoritarian application of those technologies to a whole population of a billion and a half people, and the kinds of consequences in terms of social productivity that you could have. So I think that's my big fear, that this authoritarian
capitalist techno-savvy model might become the dominant
model for a lot of the world in the coming 50 years.


Naam: I think both Joel and James make good points
about Asia and the different culture there and I think
James' concern about authoritarian models there is a good
one. If that doesn't happen, I think there's something very
interesting here which is that there is a world market
independent of whether governments decide to pay for these
technologies. It seems that the different social mores in
Asian countries -- where not just the state but individuals
are much more friendly to enhancement technologies, and
genetics in particular -- mean that these things are going
to be on the market and you can imagine that even in a
non-authoritarian model that ends up producing an awful lot
of competitive pressure on the U.S. and the European
nations. If an economically strong country like South Korea or China starts to see widespread adoption of things like
memory enhancers which are not adopted here in the U.S., and
those things give a productivity boost, you can start to
imagine people in D.C. talking about that as an economic
disadvantage for the U.S. and about how we have to address
that in the same ways they talk about college enrollments
and the number of degrees coming out of universities and so
on.


Garreau: Oh well, hell, how about the military
implications?


Hughes: Nukes would be the only thing we would
have left as an advantage.


Garreau: By the way, can we come back to that
nukes proposition?


Naam: Sure.


Garreau: Are we in agreement with Mez's statement
that the most dangerous thing for the next 50 years is going
to be nukes? Because I'm not sure I'd agree.


Hughes: I'm not. I think there's gonna be a
proliferation of weapons of mass destruction coming out of
these technologies. Although we didn't actually go to war
because of weapons of mass destruction in Iraq - there
weren't any there and we know the problems with that - we do
actually need to create a much more interventionist and
backed-up model of international action to track down
weapons of mass destruction. And they're gonna become a hell
of a lot more complicated to find because they are gonna be
coming out of labs the size of the recording studio I'm
sitting in right now.



Garreau: Right. This is the underpinning of Bill Joy's Hell scenario.
He makes a big distinction between weaponized versions of
the GRIN technologies and nukes. He says nukes require an
industrial base that is at least the size of a rogue nation
and it involves rare minerals and it's difficult and costly
and blah, blah, blah. Whereas he argues the GRIN
technologies are easily within the reach of a bright but
demented graduate student. The reason he is totally
convinced that the end of the species is nigh is because
regulating nuclear weapons is child's play compared to
regulating the GRIN technologies. Very few people want to
have a nuclear bomb go off. Lots of people want to have
brighter, healthier, more beautiful children. These enhancements are dual-use and, he would argue, would be impossible to regulate.


Hughes: We need technological police. But I want
to go back to a point that Ramez made earlier which is that
we also need to imagine the ways that free people armed with
the free use of technology can prophylactically protect
themselves. So I think there's both a libertarian argument here, that we need to give people as wide an access as possible to whatever the next version of Symantec Anti-Virus will be for your body and for your ecosystem, but we also need
to have regulatory police at an international level to track
down and close down the most dangerous operations.


Cascio: I think that's one crucial point about the
Bill Joy argument. He uses the phrase "knowledge-enabled
weapons of mass destruction" and it's important to remember
that the responses are also knowledge-enabled. Because
there's a strong correlation between the ability to develop
and apply knowledge and collaboration between numbers of
people, we'd be in a much better position to respond or develop prophylactics, as James puts it, in a scenario of knowledge-enabled problems.


Naam: I think that's a good argument. Just to come
back to why I made that statement about nukes, I think that
Joy's assessment -- that self-replicating technologies are
inherently more dangerous than nukes in the long-run because
they can spread like wild fire from a single source and
potentially be manufactured using a much smaller base than
nukes -- is correct.


The problem is not the manufacturing. It's the R&D. And
the reality out there right now is that no one knows how to
make a self-replicating nanobot. No matter what they say, no
one is anywhere near this. In fact, if they could, you could
imagine that they could build it out of macro-sized parts.
The self-replication is not necessarily something that is a
feature only of things built at this scale. It is a kind of
a systems design problem, a complexity problem similar to
complexity problems of large-scale software, but massively
larger than anything that we've faced to date.


The other approach to self-replicating agents that could be weaponized is the biological one, starting
with existing pathogens and so on. I think that's much more
plausible in the next few decades, but those things also
take a bit of time, they run up against the kind of
naturally evolved defenses that we have, and even there,
while the synthesis might happen in a room as big as the one
that James is in, the design takes a rather large
infrastructure of a different sort which is potentially
hundreds or thousands of scientists working on it, places to
test it, try it out, and so on, and those things aren't that
easy either. So overall I just think that designing any
weapon of this sort is a much harder problem than either
extreme enthusiasts or deep pessimists or those who fear
these technologies make it out to be.


Cascio: One other data point I'd throw in here is that for a variety of reasons, the "gray goo" scenario -- which you didn't mention, but alluded to -- of self-replicating disassembling nanobots is essentially impossible, at least according to work done a few years back by Ralph Merkle at the Foresight Institute. The replicating threat issue is really overblown.


Garreau: Yeah, I agree. I'm in this kind of
awkward position where I don't want to be constantly
counterpunching people with whom I basically agree. But
there are all these guys I talk to like Bill Joy who, if he
were here, would bring up certain points. I guess it's my
role to say what Bill would worry about. He's not worried
about nanobots. That's a ways down the road. He's worried
about accidents in a bio lab with a bright biology student.
He points to the Australian mouse pox incident. Australia is
one of those isolated ecosystems that has invasive species
that sometimes run amok because they don't have any natural
opponents. Apparently one of these species is mice. From time to time it's just mice everywhere. It just drives them nuts. So they're very keen on controlling mice. They were
working on a mouse contraceptive and what happened was that
they altered a gene in mouse pox and this new enhanced mouse
pox virus was 100% fatal. Every mouse died. So they then
tried it on mice that had supposedly been inoculated. And
half of those died.


It was one of these kind of "oops" moments. Up until
then, people had thought that if you genetically altered a
virus it would necessarily become less effective. But in
this case it became more effective. It was a surprise. Joy
is terrified of surprises like this. Especially because what
these researchers then went and did was publish their
findings. It's available on the internet. You can go out
there and create all the enhanced mouse pox you want today
just by going to the literature. Mouse pox doesn't affect
humans. But it's a close cousin of smallpox and this
information, this knowledge-enabled weapon, is out there on
the web right now. That just drives Bill Joy nuts. He thinks
that this is suicidal behavior and that we're going to pay
for it with hundreds of millions of deaths in the not too
distant future.


Hughes: The problem with Bill's argument is that
he's not even willing to argue for the global technology
police which I'm arguing for. He wants some kind of vow of
renunciation on the part of all technologists and I don't
think that that's a realistic policy solution to this. I
think that we need to have a more open source approach to
the prevention of these kinds of disasters. We need to have
everything out in the open as much as possible. And I think
that could go along with having a technology police. Like
the IAEA, the International Atomic Energy Agency, it's not
that certain kinds of countries aren't allowed to do certain
kinds of research, it's that we have to be able to inspect
and make sure you're not making weapons out of the stuff.


Garreau: Do you think that's realistic?


Hughes: I think that that's the kind of regime we
have to develop. The more open source solutions we have, the
more we'll be able to detect when people are doing bad stuff
and the better kinds of prophylactic measures we'll be able
to take when they do. So with the case of bioweapons and
bioterrorism, if a country was not preparing actively for
the prospect of bioterrorism that would be crazy so I think
we need to know that there are these threats out there, we
need to know what kinds of threats there are and we need to
be preparing for them.


Cascio: Well I wrote a piece a couple of years ago arguing for a strongly open-source approach to these kinds of technologies...


Garreau: To what end?


Cascio: Well precisely because of the security
question. Because you're right, these Australian scientists
discovering this thing about mouse pox and publishing that,
that would make it simpler for somebody else to do the same.
But the Australian scientists were not the only ones in the
world who could discover that about mouse pox. And if it was a black lab somewhere that discovered this about mouse pox and did try to create a weaponized version of smallpox out of it, we're in a better position now to respond to it, because the biologists of the world have access to this information, than we would be if this had all been done in secret and nobody had any ability to even recognize what was going on.


Garreau: I think you're displaying a touching
faith in the Department of Homeland Security.


Cascio: I'm not talking about homeland security.
I'm talking about the global community of biological
scientists.


Garreau: So a jar of this stuff gets opened up on
Capitol Hill tomorrow and you think we're going to be able
to respond to it correctly?


Cascio: Fast enough to save the Congress and the
President? Probably not. But I will refrain from any further
comment there.


(LAUGHTER)


Hughes: One of the few things the Bush
Administration has recently said that I agree with is that
we need to create a global system for the monitoring of
emergent infectious diseases. I remember way back when I
first got terrified by the prospect of bioterrorism. I
started interviewing folks from the Federation of American
Scientists and they said "look, SARS, avian flu, AIDS, these
are killing real people. Bioterrorism incidents have not
killed very many real people yet but these other diseases
that emerge naturally have killed real people. What we need
is a global monitoring system and an emergency fast response
set of technological solutions for dealing with emergent
diseases. Then it doesn't make any difference whether they
come out of bioterrorism or not because most of the ones
we're gonna have to deal with won't come out of
bioterrorism."



Naam: One point I'd add is that when SARS came on the scene I
think the gene sequencing of SARS happened in like eight
days after people started working on it. A lot of the same
technology that is making it much easier to develop these
weapons is also making it much easier to decode them and
figure out the cures.


Cascio: I'm personally very much in agreement with
what Mez just said.


I'd like to shift the discussion a bit. Mez said something early on that I thought really struck home for me as descriptive of a plausible scenario for the next 10 to 20 years: that there's a difference between augmentations and enhancements people do to themselves and augmentations and enhancements people do to their kids -- or, more broadly, that there's a difference between somatic and germline transformations. I'd like to have
you guys think a little bit more about that. What kinds of
scenarios do you see in terms of choices between making
changes that don't propagate down to your progeny and
changes that do? Is this something that people end up using
themselves to test things that down the road they'll trust
enough to apply to their kids?


Naam: I think it's a slightly different debate for
people. They're not thinking n generations down, they're
just thinking about caution. You know you ask people what do
you want for your baby, do you want a boy or a girl, and
nine times out of ten you get an answer that's something
like we just want a healthy baby. Of course they'll check to
see what the gender is and so on. So I think people are just
more resistant to that.


I suspect that they don't think about that very much when
they're modifying themselves. Maybe they would. But let's say you're in your early 20s: people go off and do things like get tattoos and piercings and change their hair color. And with things like gene therapy you can
imagine a number of things people in that age group would be
interested in like getting naturally fluorescing tattoos,
for instance. People are gonna, I think, view a lot of
things like that as fairly harmless, and pursue them.


Now if you told that person, gosh, in the course of
getting this it guarantees that any child you have will have
the same naturally fluorescing tattoo, I think a certain
portion of the population would think twice, others
wouldn't. But even so, in the course of seeing this
technology applied to adults, we'd probably be able to get
some kind of longitudinal safety data that'll give us an
idea of how it's gonna impact kids in the future.


Hughes: This is the loophole that we're gonna
drive germline genetic modification through in that people
have a right, in our society at any rate, to have children
in a natural way. We don't go around saying that they're not
allowed to have children because they're not genetically
correct. And if I'm given the right to change my own genome,
and also given the right to have children in an ordinary
biological way with a woman, then that means that we have
germline genetic modification because I will be able to,
theoretically at least, at some point in the future, change
my own reproductive cells. And parenthetically, one of the
bioethics statements that Pope Benedict released three years
ago said that, although he was opposed to germline genetic
modification, if a man changed his own sperm - he didn't
discuss the option of a woman changing her eggs - and then
had, in a marital relationship, the normal sexual
intercourse and had a child conceived in that way, that that
would be okay by him.


So I think this is a huge ethical loophole and I just
can't foresee a society where we go around and sterilize
people because they have changed their own reproductive cells, or refuse to allow them to have children, or
confiscate their children at birth or something. That's not
the kind of society that any of us want to see.


Garreau: Agreed. On the other hand, the scenarios
that I would paint would not be as far down the road as
this. It's easy for us to dismiss an awful lot of
hypotheticals about germline engineering because it just
sounds so horrif