The Future of Humans as a 'Meta-Species'
Ramez Naam
2011-08-11

Eddie Germino: Thanks for participating in the interview, Ramez. To start, let’s touch on one of the major themes in your book. Much of More Than Human focuses on the promise of human genetic engineering, which has the potential to cure many terrible diseases and also enhance healthy people beyond their natural limits. Won’t the development of human genetic engineering lead to a less diverse world in which the undesirable extremes of human behavior, intelligence and physique are eliminated? Frankly, who would ever want their children or themselves to be mentally handicapped, physically deformed, extremely introverted, or abnormally short?

Ramez Naam: Well, first I would say that it is going to be a very long time before, say, even 20% of the population is substantially genetically engineered.

Even if everyone were, though, I suspect it would create a much more diverse world than we have now. Yes, there will be some genes that basically no one wants, but there will be other genes that are introduced into the human genome as a result of these technologies, and there are lots of genes that are neither clear positives nor clear negatives. Do you want blonde hair, black hair, or red hair? Do you want to fit into the crowd with conventional traits, or do you want to stand out?

Or, to use another example: Would you give your child a gene that makes them twice as smart, but also doubles their risk of schizophrenia? Maybe you would, maybe you wouldn’t. Different parents will make different choices.

And in the longer run, genetic engineering won’t be limited to birth: We’ll learn how to reprogram cells in living humans. And since people are usually more willing to take chances with their own health than with their children’s, I would imagine that using genetics to enhance one’s self will really take off among adults who are willing to take bigger chances with technologies like gene therapy (or a number of others).

EG: Right. You mentioned in your book that people will someday insert artificial or animal genes into themselves for various reasons. But do you think there should be limits on the degree to which people can do this? Won’t we have real ethical and social problems if we start allowing animal-human hybrids, or people that are so genetically modified that they constitute a new species?

RN: Why would that be a problem?

I personally think the human race is headed towards becoming a kind of “meta-species” wherein different branches of humanity have gone down different roads and may by some definitions constitute separate species…but among whom the very idea of species is a bit weakened because the technologies of human transformation make it easier to modify one’s self.

EG: Continuing with that, in the last few pages of your book, you allude to humanity splitting into many new species in the future. Do you think life-extending technologies will allow people alive today to see such an event?

RN: There’s a chance people alive today will live to see radical human enhancement, or dramatic changes to human lifespan, or uploading, or the emergence of Strong A.I. A lot of children being born today are likely to live into the 22nd century, even with no further advances in medicine. So yes, there is a good chance of people alive today living to see such amazing events.

And to answer the version of this question you didn’t ask but which I hear often: Yes, there is a chance that people alive today will live to see human control over aging to such an extent that lifespan becomes almost indefinite. There’s no guarantee, though. The best thing anyone can do for themselves is to stay fit, healthy and educated, and to continue building their knowledge and resources. There’s a lot of luck involved in living a long time. There are no guarantees, either of one’s personal safety or health, or of how fast technology will develop.

Mostly, in my own life, I try to just put this question aside. I try to live in a way that is reasonably healthy and has a reasonably positive impact on the world around me, including on the development of technologies that could transform human lifespan, but I try not to obsess over whether I will live to 60, 80, 100, or 1,000. I would certainly like the option to live for a long time, but I know that not all of these things are in my control. Obsessing over things that are too far out of your control can drive you crazy. So my advice to all those transhumanists out there is: Control what you can, take care of yourself, and keep acquiring resources and knowledge that can help you in the future, but also live your life as if you could die any day.

EG: While you avoid setting a timeline for the emergence of the technologies described in your book, in Chapter 4 you write that medical treatments capable of extending human lifespan in healthy people could become available in the 2010s. What new progress has been made towards realizing this goal since More Than Human was published, and do you still think the technology will arrive during the aforementioned timeframe?

RN: It is quite possible. You need only look at some of the recent news about resveratrol [a possibly life-extending chemical compound found in red wine] to see that this is an increasingly active area. GlaxoSmithKline paid $720 million to purchase a company called Sirtris that had done pioneering research on resveratrol, aging, and the potential of synthetic analogues of resveratrol that are substantially more powerful. Why did one of the biggest pharmaceutical companies in the world shell out three-quarters of a billion dollars for a tiny biotech company? Because Glaxo sees that there is a huge potential market -- many markets actually -- for a drug that could slow aging, or which might even have a substantial effect on several of the important consequences of aging, such as diabetes or heart disease or cancer.

Now, science is a Darwinian process. Many things are tried. Few succeed. This drug might never make it to market, or it might make it but maybe only really have a positive impact on diabetics. We won’t know for a while yet, but what it shows is that pharmaceutical companies are taking this area seriously. The first true anti-aging drug might be in human trials now. If not, it is only a matter of time.

EG: Part of your book focuses on nootropics, which are drugs (either conventional or genetically based) that have the ability to temporarily alter your personality or cognitive abilities. You look favorably upon their use, partly because of the aggregate benefits they would have for the economy and society. But wouldn’t the mass use of nootropics also be deleterious in a sense, since they would mask the natural identities of users both from other people and perhaps even from themselves? Might we risk becoming totally dishonest and artificial -- always wondering, for example, whether our accomplishments were really our own or thanks to a pill, or whether a spouse truly loved us or was just under the influence of a drug that duplicated the effect?

RN: We should rethink this idea of “natural identities.” After all, are you the same person you were ten years ago? How about 20? Our models of human identity and personality need to become more flexible. This will be disorienting, I’m sure. A lot of people will lose their way altering their personalities. Some people who use Prozac end up hating it, but most find it positive overall.

As for your spouse, well, if your spouse takes a pill that makes him or her love you for just one night, you should understand that, and there may be one way to react to it. If, on the other hand, your spouse takes a pill that makes him or her love you for the next few decades, you should understand that too, and you may want to react differently to the two.

EG: Won’t nootropics and genetic engineering lead to conformity in social, professional, and academic environments as people who ordinarily wouldn’t want to modify themselves are pressured to do so for social or economic reasons?

RN: Maybe, maybe not.

Consider this: Social factors are probably a big contributor to my trips to the gym or the clothing store already, right?

I think there will be pressure to up performance: if the person next to you has a photographic memory and you don’t, that’s an obvious advantage for them in a lot of jobs, so you had better get with the program. That kind of pressure will be real.

But as far as conformity goes…there are already a lot of jobs where you have to wear a uniform, or a suit and tie, or have your hair just so. I don’t see these technologies really changing that. On the other hand, I do see enhancement tech allowing people to, say, alter their neurochemistry to up creativity temporarily. People could also alter their appearances in more profound ways -- like with blue glowing skin, red eyes, etc. -- that might not be appropriate in most workplaces, but which would increase the ability to express one’s self.

EG: So we could someday make blue or orange people? Wow…Future Shock!

RN: Yes, indeed. Cosmetic gene therapy may be a major driver of technology.

EG: You offer a very hopeful and uplifting vision of the future, which, it’s safe to say, is a rare thing. Why do you think so many people are pessimistic about the future, not just when it comes to biotechnology, but politics, society, and the economy? Why do pessimism and fearfulness seem to be the default?

RN: Well, I don’t think it’s all about fear. I think there is a lot of hope in the world, we just don’t recognize it. You know, I don’t even really care for the word “transhumanist.” It sounds like an us-vs-them dichotomy to me.

We are all transhumanists to my way of thinking. Anyone who goes to the gym, enrolls in a night class, reads a self-improvement book, buys a new car, or gets a new cell phone is a person taking action to enhance themselves. What is that, if not optimism?

EG: Tell us about your education, professional career and personal history, and how these inspire and equip you to write a book about the future of biotechnology.

RN: Well, I’m an Egyptian immigrant. I came to the U.S. at the age of three. My parents came on an exchange program. My mother, Elene Awad, came to the U.S. to get her Ph.D. in physiology. Both my parents are M.D.’s.

Maybe because I came from a medical family I chose computers: I wanted a bit of a buffer between my work and my life, I suppose. From the moment I first tried to program a computer, in middle school on…maybe a TRS-80 or a Commodore VIC-20…I saw that computers were an infinite palette. They were so flexible, even then, that it was clear to me that they’d be incredibly powerful tools that one could use in all sorts of ways. And it was also clear that they’d be a good vehicle for expression.

Because of my interests and proficiencies, I attended a state magnet school for kids interested in math and science, The Illinois Mathematics and Science Academy. I was a bad kid there: I cut class a lot and was rebellious, but I had some really inspirational teachers.

My degree is in Computer Science. I studied it at the University of Illinois at Urbana-Champaign in the early ’90s. That led me to Microsoft, at the age of 22.

At Microsoft I learned that what I did could make an impact on the world. My first job was working on an email program -- the email client for Microsoft Exchange. It was a totally horizontal communication tool: You could use it to get a project done at work, to communicate with your family, to collaborate with a fellow scientist, etc. From very early on I realized that this was, in effect, an augmentation of human intelligence and productivity -- at least when seen at a group level.

From about 1995-99, I worked on Exchange and on Microsoft Outlook. Then I went to work on another human cognition enhancer -- the web browser. I worked on some of the later versions of Internet Explorer. After Internet Explorer 6 and Windows XP were released, I decided it was time for a break from Microsoft and to try something new.

Around that time, the word “nanotechnology” was starting to get serious interest in the press and a little bit of interest from investors. I had been reading about nanotech since Drexler’s Engines of Creation, and it seemed interesting, feasible and profoundly important. So, encouraged by its new popularity, some friends and I founded a company that combined what we knew (software, and making powerful software technology easy to use) with this field. Sadly the investing climate tanked around then. It was right after the tech bubble and 9/11 -- not the best time to start a company!

It was at the end of that period that I wrote More Than Human. Just before it came out I returned to Microsoft, where I’ve been working on Live Search.

In any case, I guess you could say my whole career has been about human enhancement technologies in some way. Outlook, Internet Explorer, search, More Than Human -- I view them all as part of one thread: devoting as much of my energy as I can to improving the world, in particular by helping make it easier for people to find and share information. More Than Human just went slightly further afield.

EG: How do your colleagues at Microsoft respond to your ideas about the future?

RN: They seem to like More Than Human. To be honest, we are usually too busy trying to revolutionize the way people search the Internet to have a lot of time for other topics.

EG: Given that you are a skilled computer engineer, it is strange that you never once discuss in More Than Human the role that A.I. might play in the future. Many notable transhumanists in fact consider this to be the most critical technology that awaits us and that will shape our destiny as a species. If anything, your writings indicate that you believe computers will serve as mere supplements to human intelligence -- eventually in the form of brain-interfacing cybernetics -- well into the future. What, in your opinion, is the future of A.I., and how will it change the human race?

RN: Well I guess I’m not much of a transhumanist.

It seems pretty clear to me that humans are not the ultimate possible sentiences. Everything we know in physics, information science, biology, neuroscience, and other fields points to it being quite possible to create minds in substrates other than human brains, and also to create minds that are much more capable than our own, either by starting with the current human model and improving on it, or by designing something entirely new.

Augmentation is here already. My laptop, my cell phone, the web -- they are augmentations of my mental abilities. We have access to information in a way never before possible to any previous generation. We are already far down the path of augmenting human intelligence, and have been at least since writing.

Your question was about A.I., though. Will we create superhuman intelligences with their own volition via paths other than just augmenting humans? Probably. It’s definitely possible. If we survive long enough as a species it seems inevitable that we’ll do so, and I for one would welcome that.

At the same time, I think it’s a harder problem than most people realize. I say this as someone who works with machine learning algorithms, neural networks, Bayesians, etc., etc., all day long, every day. These things borrow some ideas from cognitive science, but they are nothing like humans, and nothing like what we would call “intelligent.” What they are is very simple in some ways, very good at specific jobs, and yet highly unpredictable and difficult to understand. They are more alien than traditional programs, even recursive ones. They are extremely hard to debug, and if you find that they do what you hoped they would do in 95% of the situations, but not the other 5%, well, it is not necessarily a simple matter to correct that.

If you don’t believe me, then go look at the code produced in a genetic algorithms contest and try to trace the logic. Or go try to understand what the weights in a multi-layer neural net mean.
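His challenge is easy to appreciate even at toy scale. The sketch below hand-builds (it does not train) a minimal 2-2-1 ReLU network that computes XOR; even here, where we know exactly what function the net implements, nothing about the raw weight values announces "XOR" -- and the learned weights of a real multi-layer net are far less legible still:

```python
# A hand-built 2-2-1 ReLU network that computes XOR exactly.
# (Illustrative sketch: these weights were chosen by hand, not learned.)
W1 = [[1.0, 1.0],   # weights from input x1 into hidden units 1 and 2
      [1.0, 1.0]]   # weights from input x2 into hidden units 1 and 2
b1 = [0.0, -1.0]    # hidden-layer biases
W2 = [1.0, -2.0]    # output weights: out = h1 - 2*h2

def relu(v):
    return v if v > 0 else 0.0

def forward(x1, x2):
    h1 = relu(x1 * W1[0][0] + x2 * W1[1][0] + b1[0])
    h2 = relu(x1 * W1[0][1] + x2 * W1[1][1] + b1[1])
    return h1 * W2[0] + h2 * W2[1]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", forward(*x))   # XOR truth table: 0, 1, 1, 0
```

Staring at `[[1, 1], [1, 1]]`, `[0, -1]`, `[1, -2]` tells you nothing by itself; the "logic" lives in the interaction of weights, biases, and nonlinearities, which is exactly why debugging the 5% failure cases of a trained net is so hard.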

I am not saying this is impossible -- far from it. Developing a more-than-human-intelligence A.I. is absolutely possible. If nothing else, there is the uploading path. But human intelligence is still one of those areas where we don’t know what we don’t know. Building an A.I. is not a simple matter of continuing to crank up the available memory and processing power of your system. We will get there, I think, but my personal bet for Strong A.I. is much further out than, say, Ray Kurzweil’s.

Also, to a large degree, aside from uploading, it isn’t really clear to me that the A.I.’s we produce will resemble humans all that much. We are not just generic general intelligences; we have a lot of evolved traits specific to our evolutionary ancestry. A.I.’s we create will not be human; they will be their own unique thing.

EG: Right, sorry about the “Transhumanist” label -- many people who speculate about the future of technology and believe in the benefits of human modification also have problems with this title, though for better or for worse, it seems to be the term most commonly applied to the movement’s adherents.

It’s interesting that you bring up Ray Kurzweil because he also dislikes the “T-word” and because he, like yourself, is an experienced computer scientist, and he is also one of the foremost futurists in the world. Kurzweil’s optimistic timetable for the emergence of Strong A.I. -- which he sees occurring by 2045 -- stems from his belief that medical nanomachines capable of entering the brain will provide scientists with hyper-accurate brainscans of human subjects within two decades, in turn allowing researchers to correlate physiological activities in the brain with different elements of conscious thought, which should then enable the creation of computer algorithms capable of supporting artificial intelligence. Firstly, what do you think of this “copying” approach to making A.I.’s? Secondly, why is it that you decided to restrict the content of More Than Human to biotechnology and a little bit of cybernetics when you also have a strong understanding of A.I. and nanotechnology, which also have the potential to enhance human abilities?

RN: No worries about calling me a transhumanist. It’s a fairly accurate label of me. I guess I would like to see the label disappear because I think most people are, to a great extent, transhumanists. People want more capabilities, more free-time, more freedom, more health, more longevity, more youth, more fitness, and more access to information. The small subset of the world who has even heard of the word “transhuman” just tends to be those with a longer time horizon and more optimism about the future of technology, but when you boil sci-fi down into concrete products that really enhance people’s lives in a safe way, the labels hardly matter: Consumerism is pretty much transhumanism in my book.

I could have included a chapter about A.I. and nanotechnology in More Than Human, but it would have pushed the book even further into the future, and I wanted to write something about the next 10-30 years that specifically focused on changes to human beings in that timeframe that had a pretty solid laboratory basis at the time of the book’s writing (2003 or so). Maybe in my next book I’ll get to dive into those more advanced topics.

As far as copying or uploading, I think it’s a very pragmatic view of translating sentience into a computer substrate. Everything we know about neuroscience says it should work, if the model is detailed enough and an accurate enough representation of the brain, and if your input data is accurate and detailed enough. Predicting exactly when it will work is tough, though. Do you need to model the system at the level of individual neurons? Individual synapses? Individual receptors and ion channels? Individual neurotransmitter molecules? The exact level at which you need to model things doesn’t change the theoretical feasibility, but it may change the timing by decades or even more. But as Ray Kurzweil points out, if progress in computing power continues on an exponential scale, even being off in your estimate of complexity by a few orders of magnitude just delays the inevitable by a decade or two. Even so, those decades could be a big deal to some, like those who are on the cusp of living that long or not.
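The exponential-progress point is easy to check with back-of-the-envelope arithmetic. Assuming capability doubles every 1.5 years (an illustrative Moore’s-law-style figure, not a number from the interview), underestimating the required complexity by a factor F costs only log2(F) doubling periods:

```python
import math

# Sketch: if hardware capability doubles every `doubling_years` years,
# being off by `factor_off` in your complexity estimate delays the
# crossover point by log2(factor_off) doubling periods.
def delay_years(factor_off, doubling_years=1.5):
    return math.log2(factor_off) * doubling_years

for f in (10, 1_000, 1_000_000):
    print(f"off by {f:>9,}x -> ~{delay_years(f):.0f} extra years")
```

Even a millionfold underestimate -- twenty doublings -- costs only about thirty years under this assumption, which is why "off by a few orders of magnitude" translates to "a decade or two."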

And in addition to the scientific issues, any engineering, computer science, or biotech project of that magnitude is simply not going to get it right on the first try. It doesn’t happen. These sorts of things move forward through trial and error, and the iterations take time. When you are iterating with potentially sentient beings, you may find yourself in some sticky ethical situations, or in a situation where what you’re doing may be perceived very negatively.

So… I think it will work, and I think uploading the human mind by copying data out of the brain is in some ways the approach to A.I. that we are most confident we can make work, but it is going to be a bumpy and potentially long and winding road between here and there.

EG: In your book, you criticize population growth projections devised by organizations like the U.N. because they cannot take into account the unforeseen impacts of future plagues, wars, or medical breakthroughs like cures for diseases. How far into the future do you think we can reliably predict the size of the population?

RN: That depends on your tolerance for error. The best estimates I have seen put the population in 2050 at around nine billion people, with a subsequent drop-off in population over the following half-century. It used to be that we thought 2050 would have ten billion people, but for quite a while the population growth estimates have been dropping, largely because fertility continues to decline, especially in the developing world.

As people (especially women) get more wealth, freedom and options they tend to start having fewer kids. A lot of Europe is now shrinking in population due to this, with immigration taking up the gap. Places like India, Pakistan and Indonesia are still growing rapidly, but the yearly rates of growth (as a percent of their populations) are actually slowing.

Purely by the demographics, the world is heading towards a flattening of population and a decline in the second half of this century, and the world is also heading for a lot of graying.
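The dynamic described above -- growth that continues, but at a rate that itself keeps falling -- can be sketched with a toy projection. The starting population, growth rate, and rate of decline below are illustrative assumptions, not U.N. figures:

```python
# Toy projection (illustrative numbers, NOT real demographic data):
# start near 7 billion in 2011 with ~1.2%/yr growth, and let the growth
# rate itself fall by a small fixed amount each year as fertility declines.
# Even that modest drag flattens the curve mid-century, then turns it down.
pop, rate = 7.0, 0.012        # billions; annual growth rate (assumed)
rate_drop = 0.00025           # assumed yearly decline in the growth rate
snapshots = {}
for year in range(2011, 2101):
    pop *= 1 + rate
    rate -= rate_drop
    if year in (2050, 2100):
        snapshots[year] = round(pop, 2)

print(snapshots)  # under these assumptions: ~9.3B in 2050, ~7.6B in 2100
```

The peak here lands in the late 2050s, when the assumed growth rate crosses zero -- a crude illustration of how a steady fertility decline produces flattening and then shrinkage without any catastrophe in the model.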

EG: Those trends you point out are pretty clear, but there’s another, oft-overlooked variable that could substantially affect world population—the emergence of new medical technologies that significantly extend human life. In your book, you attack the popular idea that such technologies will cause the population to explode, and instead estimate that they will only increase the world population by a couple hundred million by the end of this century. You go on to argue that even bigger increases to the size of the human race occurred during the 20th century without planetary disaster. But might it be the case that, in the 21st century, we are nearing a “tipping point” above which the Earth will be unable to support more people? Might the resource demands of that extra “couple hundred million” people kept alive by the technologies discussed in your book be too much for the planet to bear?

RN: Unlikely, I think.

There are only a few disastrous possible tipping points that I know of for the planet. One is runaway (non-linear) climate change, another is global nuclear war, and the third is the possibility of biological warfare or bioterrorism using extremely virulent and lethal agents.

I think your question was closer to climate change. There we are in a situation of limited knowledge: We know that the climate has in the past changed by many degrees in a decade or less. That is an epic shift in temperature, and at a rate that would strain civilization. That is a situation we should certainly avoid. We also know that humans are warming the planet with our activities. Now, a little bit of warming is not a very big deal. The risk is that we will warm the planet enough to kick off one of these large state changes via some runaway feedback effect (of which there are several possible). The problem is, it’s extremely hard for us to actually quantify how likely that is to occur. We know it would be extremely bad if it did, but we don’t know how likely it is.

So we should put some effort into insurance against this situation, both efforts to reduce the risk of it occurring, and efforts to make sure we are resilient as a society and as a planet against major climate change.

Now, what are the effects of keeping people younger longer through delayed aging? Frankly, I think they will be positive. Among those people kept alive will be scientists, engineers, researchers, environmental activists, and more. There are a lot of people who can be very useful in actually helping guard against catastrophe who are in their productive years now and who are building up experience and skills that we could use for decades longer. It’s a major loss to civilization’s capabilities when those people leave the work force or die, or even just slow down because their bodies are less energetic. Delaying aging has this great effect of keeping smart, experienced people vital and available to help with society’s problems for longer. That to me seems more likely to have a positive impact against some catastrophe than a negative effect.

I guess I am in the Julian Simon camp of thinking that a human mind, especially an educated one, is the ultimate resource available on this planet.

EG: What do you think of the FDA’s current approach to approving new drugs and medical devices? What specific policies, if any, should be changed, added or deleted to benefit consumers?

RN: Well, this is a rather large question. I would say that the FDA is best thought of as a champion of the consumer, in that the FDA forces studies of the effects of new drugs and medical devices on the health of individuals, both in terms of effectiveness and in terms of identifying and guarding against unexpected negative consequences.

All that said, I do see room for the FDA to be improved. Human trials are extremely expensive today, which deters the development of some new drugs, in particular drugs that are not going to be “blockbusters,” either because their market is too small or because they can’t be patented (naturally occurring substances, for instance).

Another issue I think is worth looking at is the question of what constitutes a disease. The FDA is chartered with approving drugs that cure some disease or restore health, but what if there were a drug that cured no disease but made people smarter? Today this is handled through loopholes or by finding some ostensible disease at which to target a drug, while in actuality it will be used mostly “off-label.”

EG: Your writings clearly indicate that your strong support for human enhancement technologies stems from a more basic appreciation of the importance of individual choice and expression. Naturally, you also tout the supremacy of social, political and economic systems that best support such human drives. But recent disasters in the world economy, like the 2008 U.S. housing bust, show us that markets can fail and that millions of people just following their natural instincts can make bad decisions that sum into a monumental crisis (what economists might call a collective action problem). Clearly, there is still a role for government regulation of markets and personal choices. With this in mind, what do you think the government’s role should be in regulating development of and access to the technologies you discuss in your book, and what types of unregulated use of biotechnologies would be the most dangerous to society?

RN: The most dangerous use of biotechnology is in playing with highly contagious pathogens like smallpox, Ebola or bird flu. Those are self-replicating agents that kill people. They are the next most dangerous thing on the planet after nuclear weapons. Indeed, in a couple of important ways they are more dangerous than nukes: A small lab funded by a terrorist group or a small nation could alter one of these pathogens, release it in a single location, and have it wipe out civilization, or at least kill millions. The relative ease and secrecy of experimenting (in theory), combined with a viral spread that means a release at a major international airport could deliver the weapon to all corners of the globe within a day or two, makes this a very, very dangerous class of organisms.

Most dangerous after that? Probably agriculturally targeted bio-weapons. A bio-weapon that had a major impact on rice or wheat yields could cause a global famine, with the deaths of millions and potential chaos worldwide.

Compared to those, biotech for human enhancement is low on the risk meter.

As for the state’s role, obviously our governments should try to keep us safe from things like nukes and bio-weapons. For something much slower-acting, like human enhancement, I think governments should focus on education, on encouraging and sponsoring good research, on making enhancement technologies available to those who can’t afford them, and on regulation only to guard against consumer fraud, grossly unsafe products, or to protect the privacy, autonomy and safety of people who can’t protect themselves.

EG: What do you think is the best argument people like environmentalist Bill McKibben -- whom you mention in More Than Human -- put forth against the sorts of human augmentations you support?

RN: There are a few arguments that give me pause. Equality is one. Will these technologies lead to some sort of superclass or deepen social divisions between the haves and have-nots?

Another is unintended consequences more generally. Human society is complex. If we mess with it too rapidly, things are bound to happen that we don’t anticipate.

I think on both of the above concerns, though, the case is stronger that we will see an improvement to the human condition and to society through enhancement technologies. Indeed, there is a lot of evidence that we have already.

EG: You cite Robert Wright’s Nonzero: The Logic of Human Destiny as a book which has had an influence on your thinking, and this shows at points in your own book. That being said, don’t you think that Wright’s belief in a future with dramatically increased shared world governance undercuts your argument that human augmentation technologies will always be available since countries will never standardize bans? Secondly, do you feel that Wright’s observation that excessively rapid change disrupts societies attacks your own thesis that human augmentation is desirable on a mass scale?

RN: Certainly we are continuing to see more and more international agreements that bind different countries together into a common legal framework. At the same time, that framework is still pretty thin. Agreements and organizations like the WTO, the Kyoto Protocol, or NAFTA certainly do have an important impact on the world stage. So does international trade, which I think Bob Wright would recognize as fitting into his thesis. Yet the majority of laws you or I are subject to are still state or local or federal, not international. I suspect that will continue to be the case for a while. It is certainly possible that we would see an international treaty barring human enhancement…but I doubt it: There are too many concrete benefits of enhancement technologies, and at the same time they are not obvious weapons of mass destruction in the way that, say, nuclear weapons are. And finally, a lot of the concerns about enhancement technologies in Western countries seem to be related to cultural beliefs that are much less widely held in Asia in particular. So that combination of factors makes me think that even if the U.S. really clamped down on human enhancement tech (which I really doubt will occur), it would still be possible to engage in some medical tourism to Thailand or some other rapidly developing country in Asia and find things freely there that are a lot harder to procure in the U.S.

EG: Might social stigmas against human augmentation along with widespread government and employer screening (biological testing and polygraph) and punishment for illegal enhancements be effective disincentives against getting augmentations?

RN: Sure, this could happen, but it seems a little unlikely to me. Social stigmas seem to be most detrimental to people and groups of low socio-economic status, like the poor, immigrants, etc. When groups are identifiable for possessing some new, clearly useful ability or trait, you do see a bit of stigma, but more often envy or a desire to join them. The first people to get cell phones didn’t seem to be ostracized. They might have been considered a bit flamboyant for their time or something, or perhaps overly geeky, but the devices they had -- as bulky, newfangled, and limited as they were -- offered clear benefits. And so what happened was that other members of society wanted those devices, and that drove competition and investment that evolved that technology into something that was much more suitable for a mass market.

EG: Isn’t there a risk that genetic technologies might have serious unforeseen health consequences that could go undetected by licensing agencies like the FDA because the problems take so long to manifest or can’t be predicted because they result from a highly complex series of genetic and biochemical interactions? Do you think it is likely that a future incident might occur in which thousands of consumers, blinded to these risks by promises of enhanced abilities, take the treatment and then suffer dire consequences later? Could such an incident or series of incidents cause a major world backlash against human genetic engineering?

RN: It’s possible. It is actually in the best interest of those who want to see these technologies advance to make sure that a good safety system is in place -- which likely means a good regulatory system -- to avoid such calamities.

It’s obvious that any complex technology that is widely deployed enough will have its downsides and its share of misuses and catastrophes. These will happen with genetic technologies just as they have in the past with automobiles, antibiotics, airplanes, and more. It is not all going to be daisies. Yet at every point we have to look at the overall costs and benefits to society. If we do this, we’ll see that we are gaining more and have always gained more through these sorts of advances than we’ve lost, and I think we’ll also see that open, democratic societies that put these decisions in the hands of individuals and families fare the best.

EG: If these choices are best left to families, should it be legal for parents to genetically engineer their children in indisputably bad ways, say, by making them paralyzed, deformed or vulnerable to disease?

RN: Not in my book. Any use of technology to harm someone directly, or to limit them, or control them, should be banned, unless there is some overriding societal interest, and that is a very hard case to make.

EG: That being said, in a future with cheap genetic screening and modification technologies, should it be legal for parents NOT to genetically engineer their children if they know from an early phase of pregnancy that the child is going to be paralyzed, deformed or vulnerable to disease? In other words, with respect to genetic engineering of offspring, is failure to protect morally equivalent to deliberate harm?

RN: Whew. This is a tough one.

I think situations like this will end up in the courts. Indeed, analogous situations have already.

In general, I would say governments should overrule the genetic decisions parents make on behalf of their children only in extremely clear cases of intentional parentally inflicted harm or severe and avoidable negligence. Honestly, I think both of those will be quite rare.

Even then, if the “negligence” is a matter of leaving the child’s genes unaltered, I think we are going to have to accept that as a parental right for a long time to come. Society will need to have accepted genetic engineering as a fairly normal and mainstream thing long before we would ever think of criticizing a parent for failing to alter the genes of their child, and criticizing is a long ways short of creating a legal mandate.

And as a practical matter, I would rather trust the parents and not the state as the genetic decision makers for their children. I fear that if we go down the road of giving governments too much control over these matters, we are opening the door to potential massive abuse down the road. Parents may make mistakes one or two children at a time; governments can make mistakes that affect hundreds of millions of people.

EG: In the book, you point out that the genetic engineering of children will be the purview of caring parents, given the technique’s expense and complications. This, you say, should mitigate the risk of parents genetically engineering their kids in some way (say, to be physically strong) and then pressuring them to conform to a lifestyle that accords with those traits (to be an athlete). But won’t this safeguard evaporate once genetic engineering becomes cheap and widespread, and any random parent can shape their child’s genome as they please?

RN: I think the bigger issue than cost is risk. Parents tend to be conservative with their kids. As a general rule, parents want their kids to do well, but even more so, they want a healthy baby, and they want to keep their children safe from harm. That means parents will tend to gravitate towards genetic treatments that are relatively tried and true, and stay away from those that are more experimental or risky. This is not hard and fast, of course, but that’s the general pattern I’d expect.

EG: Are there any features of human nature or human biology that are too sacred to be altered? What sorts of things should biotechnology never be used for?

RN: Sacred? No, not to me.

To me it comes down to not intentionally inflicting harm on others, and not being reckless in a way that could harm others.

EG: What role should faith play in informing one’s decisions to use radical technologies to upgrade themselves beyond natural human limits?

RN: That is a personal decision for you and every reader.

EG: While most of More than Human focuses on the potential uses of biotechnology towards augmenting human life, in the final chapters you discuss the role that brain-interfacing cybernetics could have. You describe a possible technology in which a small computer connected to thousands of carbon nanotube “tentacles” is installed into the brain, and the appendages extend throughout the capillaries, connecting every neuron to the computer and providing the user with fantastic new abilities like electronic telepathy and dramatically improved cognitive abilities. But you don’t discuss the serious drawbacks that such cybernetics would bring. For instance, even futuristic nanotube-based brain computers wear out and break after a while, so what are you supposed to do with all the nanotubes that accidentally break off into the bloodstream -- especially once they become numerous enough to start clogging things up? Furthermore, like all other devices, any such implant would inevitably experience a total malfunction -- either temporary or permanent -- that would suddenly leave the user with only their baseline human abilities. How will people who have grown used to superhuman mental strengths be able to function in that situation?

RN: The technical issues are, well, technical issues. Maybe we’ll create nanobots to clean up the debris our implants leave, or maybe we’ll genetically engineer the human immune system to do that, or maybe the idea I proposed just won’t work. The point of that section was to demonstrate that it is, in fact, possible to send sensory data in and out of the brain, and to give one example of one possible way that could be used to create a very powerful mental prosthesis. There are, of course, other ways.

As far as what happens when your implants fail: What happens when your cell phone battery dies and you don’t have a charger? Or when your internet connection goes down? *shrug* You just deal with it. You go read a book. You turn on the TV. You shift from your laptop to your desktop or vice versa.

EG: In your discussion of cybernetics, you mention Dr. Dobelle, who was a world-renowned neurosurgeon specializing in restoring lost vision with brain implants linked to externally worn cameras. He is also noteworthy because he moved his practice overseas to bypass ethics rules that would have blocked his work in the U.S. Do you agree with this approach to developing and delivering new technologies that help people?

RN: I would rather this sort of thing happen as infrequently as possible. Dobelle was a visionary, but at the same time other researchers managed to move forward with retinal implants just a few years later inside the U.S.

EG: One of the big themes in your book is that it is in human nature for us to struggle against our limits. Most would agree that such struggle -- whether it comes in the form of dealing with some disability, learning new knowledge, or simply exercising every day to stay in shape -- is also important to building discipline and character. Won’t the technologies you talk about rob some of the richness from life by allowing us to avoid the hard work and go straight to the reward? Are you afraid we might have a future full of lazy, hedonistic humans who just pop pills and get synthetic implants to improve themselves?

RN: Well, we have a culture today of humans who use antibiotics, caffeine, and aspirin, or who ride cars, planes and bicycles to get around. Has that removed richness from life?

Personally, I don’t think so. I think the march of technology has done a lot to enrich human life and existence while detracting comparatively little. Yes, some things that used to be challenges aren’t these days, but there are plenty of new challenges, and the pipeline of those seems to be perpetually full. A lot of people today can get by without exerting much effort, and yet, we still have this incredibly dynamic culture where at least a large subset of people push themselves quite a bit.

I’m not really worried about discipline and character. They’ll continue to be important for as far into the future as the eye can see.

EG: Where it comes to the functioning of the brain, will gene therapies, genetic engineering and cybernetics ever become so advanced that they totally antiquate pharmaceuticals, or will there always be a role for drugs? What niches, if any, will drugs always be required to fill?

RN: Interesting question. This is a bit of a technical issue. Generally, in medicine and engineering there are some advantages to using simpler tools rather than more complex ones to do the job. So to the extent that a small molecule (as most drugs are) can get the job done, there may be advantages to that over something much more complex or with more lasting consequences (like using a virus to insert a gene, for example). So my guess is that we will see small molecule drug therapy continue to be quite useful for decades to come if not longer.

EG: Thank you very much for your time and for sharing your thoughts, Ramez!

RN: Thanks!