The pace of technological change is governed by many factors — including public demand. Which is why we need to be demanding more. Here are 12 transformative technologies whose development should be expedited right now. To make this list meaningful, I only included those items that are within reasonable technological reach. Sure, it would be nice to have molecular assemblers, warp drives, and the recipe for safe artificial intelligence, but it’ll be decades before we can reasonably embark upon such projects.
Back in 2012, I was invited to spend a few weeks visiting at the Research Institute for Humanity and Nature (RIHN), a federally funded Japanese research institute based in the beautiful city of Kyoto. I was invited by my colleague Itsuki Handoh of RIHN. During my visit, Handoh and I came up with an idea for how to fuse two important lines of research on major global threats.
It is a risky business trying to predict the future. While it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster.
On September 23, the Food and Drug Administration sent Rima Laibow and Ralph Fucetola at the Natural Solutions Foundation a warning letter claiming that their allegedly nano-scale (colloidal) silver-based “Dr. Rima Recommends™ The Silver Solution” product violates the Federal Food, Drug, and Cosmetic Act (FD&C Act).
Reproducing in space, lifeboat problems, and other ethical quandaries that could arise if we travel to Mars. Disaster can happen at any moment in space exploration. “A good rule for rocket experimenters to follow is this: always assume that it will explode,” the editors of the journal Astronautics wrote in 1937, and nothing has changed: This August, SpaceX’s rocket blew up on a test flight.
Robert Frost’s famous imagery—fire or ice, take your pick—pretty much sums it up. But lately, largely unnoticed, a revolution has unfolded in the thinking about such matters, in the hands of that most rarefied of tribes, the theoretical physicists. Maybe, just maybe, ice isn’t going to be the whole story. Of course, linking the human prospect to cosmology itself is not at all new. The endings of stories are important, because we believe that how things turn out implies what they ultimately mean. This comes from being pointed toward the future, as any ambitious species must be.
Transforming the world’s energy supply will take decades. It is a very tall order. But it’s starting. The price of renewables – and energy storage – continues to plunge, putting them on a path to being cheaper than any other form of energy within the coming decade. And they continue to grow exponentially – albeit from a low baseline – spreading out into the market.
Several years ago, Bill Gates keynoted a breakfast for Seattle-based Climate Solutions, a nonprofit focused on advancing the clean energy economy and driving practical, profitable solutions to climate change. Gates opened his speech with an equation. To paraphrase: Our carbon problem = persons x services x the energy intensity of services x the carbon intensity of energy. The number of people is growing, Gates observed, and we all want more services.
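Gates’s equation is a version of the Kaya identity, and its logic is easy to see in a few lines of code. The numbers below are purely illustrative assumptions chosen for the sketch, not real data:

```python
# A minimal sketch of the Gates/Kaya-style identity described above:
# carbon = people x services per person x energy per service x carbon per unit energy.
# All figures are illustrative assumptions, not actual measurements.

def total_emissions(people, services_per_person,
                    energy_per_service, carbon_per_energy):
    """Each factor multiplies into the total; to cut emissions,
    at least one factor has to fall faster than the others rise."""
    return people * services_per_person * energy_per_service * carbon_per_energy

# Hypothetical baseline with arbitrary unit choices.
baseline = total_emissions(7e9, 10, 2.0, 0.5)

# Halving carbon intensity halves total emissions, all else being equal.
cleaner = total_emissions(7e9, 10, 2.0, 0.25)

print(cleaner / baseline)
```

The multiplicative structure is the point Gates was making: because population and services are both growing, the energy- and carbon-intensity terms must shrink dramatically for the product to approach zero.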
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Measles is one of the leading causes of death amongst children worldwide. In 2012, an estimated 122,000 people died of the disease according to the World Health Organization – equivalent to 14 deaths every hour. Yet talk to parents about this highly infectious disease, and the response is often a resounding “meh”. Why is this?
If the controversy over genetically modified organisms (GMOs) tells us something indisputable, it is this: GMO food products from corporations like Monsanto are widely suspected of endangering health. On the other hand, an individual’s right to genetically modify and even synthesize entire organisms as part of their dietary or medical regimen could someday be a human right.
Can consciousness be created in a machine? Is the mind/brain simply a computational system? IEET Fellow and University of Connecticut philosophy professor Susan Schneider was interviewed by The Humanist on these pressing topics. Humanists are starting to question what kind of technology will exist in a transhumanist world…
Pick up a jar of chili powder, and the chances are it will contain a small amount of fumed silica – an engineered nanomaterial that’s been around for over half a century. The material – which is formed from microscopically small particles of amorphous silicon dioxide – has long been considered non-toxic.
“This year alone, there have been 17,000 cases of meningitis in Nigeria, with nearly 1,000 deaths”. It’s a statement that jumped out at me watching a video from this summer’s Aspen Ideas Festival by my former University of Michigan Public Health student Utibe Effiong.
I'm back from the first Climate Engineering Conference, held in Berlin. Quite a good trip, but in many ways the highlight was the talk I gave at the Berlin Natural History Museum. The gathering took place in the dinosaur room, which holds (among other treasures) the "Berlin Specimen" Archaeopteryx, one of the most famous and most important fossils ever discovered.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not one myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
Within the next few years, autonomous vehicles—a.k.a. robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development of autonomous cars, this doesn’t seem implausible.
The WHO medical ethics panel convened Monday to discuss the ethics of using experimental treatments for Ebola in West African nations affected by the disease. I am relieved to note that this morning they released their unanimous recommendation: “it is ethical to offer unproven interventions with as yet unknown efficacy and adverse effects, as potential treatment or prevention.”
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
The World Health Organization has released a statement (in full, bottom of blog post) that they are going to convene, early next week, a panel of medical ethicists to “explore the use of experimental treatment in the ongoing Ebola outbreak in West Africa.” The statement goes on to say that “the recent treatment of two health workers from Samaritan’s Purse with experimental medicine has raised questions about whether medicine that has never been tested and shown to be safe in people should be used in the outbreak.”
I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, where the very speed of machines being put into action, and the near light speed of telecommunications whipping up public opinion to do something now, drives countries into a world war. In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred to pieces the human beings in their path. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances beyond. Wireless communications connect soldiers and machines together in a kind of world-net…
No one birth control method fits everyone, but today young women have better options than ever before. Across the United States, from New York to South Carolina to Texas to Oregon, health advocates and providers are scrambling to get the word out about long-acting yet easily reversible contraceptive methods that are now approved for use by teenagers and well liked by most who use them. (See this earlier Sightline series, Twenty Times Better Than the Pill.)
Transhumanists as a rule may prefer to contemplate implants and genetic engineering, but few if any violations of morphological freedom exceed being torn to pieces by shrapnel or dashed against concrete by an overpressure wave. In this piece I argue that the settler-colonial violence in occupied Palestine relates to core aspects of modernity and demands futurist attention both emotionally and intellectually.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible - they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design, and spontaneously self-modify to make itself safe.