Our final session of blogging live from Washington, D.C., at the Future Tense event on “Governing a Technologically Uncertain Future.”
8:50 AM People are coming in, milling about, and getting settled. I guarantee we’ll start on time, though. Unlike almost every other conference I’ve ever attended, this one is keeping to its published schedule like clockwork.
We have a half-day remaining on the agenda. This morning we begin with “Technology’s Challenge for Storytellers: Stranger than Fiction.” Participants are Neal Stephenson, best-selling author of Cryptonomicon, The Diamond Age, Snow Crash, Zodiac and other novels, and Sascha Meinrath, director of the Open Technology Initiative at the New America Foundation.
So far, the conversation is being led by Meinrath into a discussion of how Stephenson writes his books, with less about the topics he covers.
Asked about his 2008 novel Anathem, Stephenson says it was partly inspired by the Long Now Foundation’s 10,000-year clock, and by his musings on what sort of societal structures would be required to actually keep a clock operating for that long. He also ponders: why spend time reading the daily news when you could just as well wait until the end of the year and read a summary of all the important stuff?
Meinrath asks Stephenson if he thinks it would be a good idea to put “the writers” [of fiction] in charge of society, and Stephenson responds, “The writers? You’ve just given me an idea for a new dystopian novel.”
Stephenson says the reason some people think the future—that is, the 21st century—hasn’t turned out the way it was envisioned in the late 20th century is that most of the change over the last few decades has happened in computing rather than in the outward physical world. Those changes are far less visible, which makes it easy to underestimate how much things actually have changed.
“I’m not sure if progress has slowed down as radically as we might think, but it certainly has slowed down in the rate of building stuff,” says Stephenson. “I would love to see a new wave of physical inventing and building over the coming decades. I worry, though, that we’ve switched into an era of austerity and of being afraid of building radically new stuff.”
Stephenson laments that even though there has been plenty of talk for decades about the need to shift to sustainable and renewable forms of energy, our Western model of energy production still has not changed substantially. An alien coming to planet Earth would notice right away that we are strangely blind to the obvious sources of solar and geothermal energy; no sane person would ever have thought that it makes sense over the long term to suck stuff out of the Earth and burn it to create energy. But we are stuck with habits and practices that are very difficult to break.
On another subject, Stephenson says that many of us were led to believe by movies like 2001 that the future of computing was artificial general intelligence—HAL—but what caught almost everyone, including science fiction writers, by surprise was the emergence of the Web and its impact on personal portable computing. Although iPhones are every bit as amazing as jetpacks, they seem somehow less compelling.
Referring to the ambitious US space program of the 1960s and the interstate highway system of the 1950s—both of which were products of the Cold War mentality—Stephenson’s closing words were, “Humans, unfortunately, are far more easily motivated by abject terror than by hope.”
Next up is “The Internet’s Coming Surprises,” a conversation between Alan Davidson, Director of Government Relations and Public Policy at Google, and Columbia Law School Professor Tim Wu, author of The Master Switch: The Rise and Fall of Information Empires. The moderator is Andrés Martinez, co-director of the Future Tense Initiative, the group that organized this event.
Davidson says the pace of change is increasing, not just in computing and its applications but also in the policy work needed to manage them appropriately. Obviously, this is a huge challenge for policy makers.
Wu says that although the idea may seem obvious, it’s important to recognize that we can’t invent things that haven’t first been imagined. But most of what was imagined by science fiction writers in the 20th century, at least in the area of personal computing, already has been invented. And now we’re feeling a letdown. Have we emptied the gas tank of creativity? Do we have a shortage of imagination?
Martinez wonders if the problem might be that most of our development now is “siloed,” that we have a lack of big-picture thinkers and inventors. Davidson is touting stuff that Google is working on, like instant voice translation, as ideas that should be just as exciting as the science fiction we’ve all read.
Wu hopes that the next big thing will be human augmentation. We’re all to some extent already cyborgs, but what about taking that one step further, by embedding chips inside our bodies, for example, or maybe even by fixing death?
Turning the conversation back to concerns about regulating the Internet, Martinez asks where things might be going. Davidson says we have seen a major shift in recent weeks; the old assumption of governments that the Internet basically is beyond their control has changed—some governments now are thinking they must find ways to gain power over the Net.
Another key area that regulators (and those who oppose regulation) are watching now, Davidson adds, is the issue of data surveillance and personal privacy. Although some in the younger generation have been thought of as being less concerned about privacy, we may see a reversal, where people will object to their personal data being shared without permission.
A questioner from the audience challenges Wu about his assertion that we have run out of imagination. Wu responds by saying that we seem these days to be focused too much on the means and not enough on the ends, on our gadgets and not on how we live. He also says that recent works of speculative literature—and hence our views of the future—often are dystopian and rarely utopian, a marked contrast to the middle of the 20th century, when our thinking seemed broader and brighter.
Now we have a discussion with the provocative title, “Will Synthetic Biology End Human History?”
Participants are Drew Endy, a synthetic biologist in the Department of Bioengineering at Stanford University, and Francis Fukuyama, author of two books that certainly seem relevant to what we’ve been covering here: The End of History and Our Posthuman Future: Consequences of the Biotechnology Revolution. The moderator for this conversation is Michael Specter, a staff writer at The New Yorker and author of Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives.
Fukuyama suggests that the coming “revolution” in bio-tech is being overhyped, that a combination of political, financial, cultural, and other setbacks will limit its impact. Endy admits that progress may not happen at the rate and to the scope that some visionaries would like, but that genetic engineering will indeed have huge consequences in the next decade or two, although some of the most important consequences—both positive and negative—may be unforeseen.
Specter and Fukuyama both say the European concern about GMOs (genetically modified organisms) is almost completely without basis, because there is no evidence to show that GMOs are significantly more dangerous than anything else in the natural world.
Endy says DIYBio is unlikely to have the same impact as the personal computer revolution of the 1970s and 1980s, because there is no similar government-built infrastructure and set of tools that the enthusiastic innovators can take advantage of. He also says that over-worrying about safety by government regulators may hinder progress in the field.
Fukuyama responds that the current level of regulation may be appropriate, and that in fact he thinks there are some areas beyond syn-bio, such as reproductive technologies or stem cells, that are dangerously under-regulated. Specter wonders who gets to make the decisions about what is “moral” and what is not. Fukuyama says that is why we appoint regulatory agencies to make those decisions, staffed by people from the relevant fields who know what they’re talking about. Both Fukuyama and Specter agree, however, that in the current bureaucratic model, those agencies tend to move too slowly to deal effectively with the pace of change in emerging technologies.
Endy brings up the possibility of things like regenerative medicine, which at some point in the future could allow humans to, for example, grow a tail. Fukuyama says that the current public reaction to sports doping may give us some indication of the amount of resistance that such augmentations may encounter.
Someone in the audience asks about the potential for weaponization of biotech, especially in an age of DIYBio. Fukuyama responds that such fears may prove to be overblown, just as the world’s worries about nuclear conflagrations following Hiroshima and Nagasaki actually turned out to be handled quite well by the IAEA. Endy agrees, but he notes that this likely will require the creation of international organizations analogous to the IAEA for biotech, and that nothing like that currently exists.
And the concluding portion of this event (to be conducted while we eat box lunches at our seats) is a discussion between Amy Gutmann, president of the University of Pennsylvania and chair of the US Presidential Commission for the Study of Bioethical Issues, and Steve Coll, president of the New America Foundation. Their topic is “Public Beneficence in the Pursuit of Science.”
Gutmann says that the very day the first synthetic organism was announced, she received a letter from President Obama asking her commission (see above) to prepare a report for him on what that development meant, what the legal and ethical issues might be, and how the science should move forward to maximize benefits and minimize harm—and to have the report completed within six months. Her commission held hearings with both proponents and critics, studied the literature, and published its report a month ago (right on time).
One of the main focuses of their report, she says, is “responsible stewardship”: addressing not just the direct effects on humans and human society, but also the impacts of this emerging tech on the biosphere. A question that arose during their six months of work was whether to follow the precautionary principle and propose a moratorium on all such efforts until their full impacts can be assessed. They chose not to go that way, because there are obvious medical benefits, among others, that may be obtainable within a relatively short time and that potentially could save hundreds of thousands of lives in the developing world.
However, Gutmann says, the commission also rejected the opposite approach, which some scientists recommended, that they should just “let science rip” and avoid any and all hindering regulations. They did not think this would be responsible. But instead of proposing new laws or regulations for syn-bio at this point, the commission urged further study with another review of the technology in 18 months.
One interesting recommendation they made is to create the equivalent of a FactCheck.org for biotechnology, to be run by an independent non-profit organization. Another question yet to be answered is, which existing federal agency has jurisdiction over synthetic biology? She says creating a whole new agency is not a good idea and they did not propose it.
Coll asks, does synthetic biology have embedded within it the makings of a global catastrophic risk? Gutmann says there are huge benefits from the tech in medicine, in energy, in environmental remediation, and that for this reason, we have to find an appropriate balance of achieving the gains while avoiding the greatest dangers. And, she insists, there is no reason for anyone at this point to worry about syn-bio bringing about the end of the world. Much greater areas of concern are nuclear weapons or climate change, just to name two.
Gutmann says the aim of her commission was to navigate a road between the precautionary principle on one side and the extreme libertarians on the other side.
A commenter in the audience flips the “end of the world” question around, and suggests it may turn out that synthetic biology will not end the world but will save the world.
Okay, that’s all, folks. Hope you’ve enjoyed it!