Continuing our series on co-veillance, sousveillance and general citizen empowerment on our streets: last time we discussed our right, and growing ability, to use new instrumentalities, namely the cameras in our pockets, to view, record and hold others accountable.
Last week, I published a guest post at Wired UK called It's Time to Consider Restricting Human Breeding. It was an opinion article that generated many commentary stories, over a thousand comments across the web, and even a few death threats for me.
I'm back from the first Climate Engineering Conference, held in Berlin. Quite a good trip, but in many ways the highlight was the talk I gave at the Berlin Natural History Museum. The gathering took place in the dinosaur room, which holds (among other treasures) the "Berlin Specimen" Archaeopteryx, one of the most famous and important fossils ever discovered.
If you push long and hard enough for something that is logical and needed, a time may come when it finally happens! At which point – pretty often – you may have no idea whether your efforts made a difference. Perhaps other, influential people saw the same facts and drew similar, logical conclusions!
Materials and how we use them are inextricably linked to the development of human society. Yet amazing as historic achievements using stone, wood, metals and other substances seem, these are unbelievably crude compared to the full potential of what could be achieved with designer materials.
Within the next few years, autonomous vehicles—also known as robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development on autonomous cars, this doesn’t seem implausible.
I had the opportunity to see Wally Pfister’s Transcendence, with Johnny Depp, Rebecca Hall, and Morgan Freeman, only last week, more than three months after the film’s release in theaters. Before seeing the film I satisfied my Transcendence cravings with an old, still unnamed copy of Jack Paglen’s script that can be found online (it appears that Paglen’s screenplay was part of what is known as the Black List, a list of popular but unproduced screenplays in Hollywood).
On August 9, at around noon, Michael Brown and his friend Dorian Johnson were attacked by Ferguson, Missouri police officer Darren Wilson. According to the eyewitness account given by Johnson, Brown had his hands in the air and was telling Officer Wilson that he was unarmed when the officer shot him several times, killing him.
The WHO medical ethics panel convened Monday to discuss the ethics of using experimental treatments for Ebola in West African nations affected by the disease. I am relieved to note that this morning they released their unanimous recommendation: “it is ethical to offer unproven interventions with as yet unknown efficacy and adverse effects, as potential treatment or prevention.”
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
A well known and atheist-minded Transhumanist, Zoltan Istvan blames religion for an anti-cryonics law in Canada. Basically, Transhumanism is the ethical use of technology to extend human abilities, and cryonics is low-temperature preservation of a legally-dead body for resuscitation when new technology might cure the cause of death. Zoltan’s concern is that the religious views of Canadian lawmakers may have informed the law, and that this may influence other lawmakers around the world to inhibit access to cryonics likewise.
Machine ethics is a term used in different ways. The basic use is in the sense of people attempting to instill some sort of human-centric ethics or morality in the machines we build — robots, self-driving vehicles, and artificial intelligences (Wallach 2010) — so that machines do not harm humans either maliciously or unintentionally.
Robot cars and military robots have more in common than you’d think. Some accidents with self-driving cars will result in fatalities, and this may be troubling in ways that human-caused fatalities are not. But is it really worse to be killed by a robot than by a drunk driver—or by a renegade soldier?
This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.
Maria Konovalenko discusses personalized medicine services, why you should participate in clinical trials of geroprotector drug candidates, personalized science, why scientific research should be better organized, why you should be friends with people who have no harmful habits, how to create crowdfunding campaigns in the area of longevity, why you should increase your own competence, promoting the value of human longevity, and neuropreservation.
Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall that anyone has managed to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.
So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical and moral meaning of various ideas of the multiverse.
Most broadly, Social Futurism stands for positive social change through technology; i.e. to address social justice issues in radically new ways which are only just now becoming possible thanks to technological innovation. If you would like some introduction to Social Futurist ideas, you can read the introduction page at wavism.net and there are links to articles at http://IEET.org listed at the top of this post. In this post I will discuss the Social Futurist alternative to Liberal Democratic and Authoritarian states, how that model fits with our views on decentralization and subsidiarity, and its relevance to the political concept of a “Third Way“.
Jim Thomas of the ETC Group has just posted a well-reasoned article on the Guardian website on the challenges of defining the emerging technology of “synthetic biology”. The article is the latest in a series of exchanges addressing the potential risks of the technology and its effective regulation.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013). This article provides an overview of the research findings related to cognitive enhancement presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling recent advances across myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement: we find ourselves in a modern era in which many technologies appear to be on the cusp, helping to resolve pathologies while also holding much future potential for the augmentation of human capabilities.
Geoengineering has come under attack recently from conspiracy theorists, scientists, and “greens.” There have been many kinds of geoengineering proposals, including an experiment of disputed legality that poured 200,000 pounds of iron sulfate into the North Pacific, intended to boost plankton growth that would absorb carbon dioxide. The experiment did not work and angered a lot of scientists. China also recently halted its “flattening of mountains.” This article, then, is not purely about techniques for combating global warming, but about the need for people to understand that geoengineering is a must — not only a must, but also a “human right.”
Over the spring the Fundamental Questions Institute (FQXi) sponsored an essay contest the topic of which should be dear to this audience’s heart- How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots, more to check out if you’re into thinking about the future or physics, which I am guessing you might be.
A few days ago, I drove up the California coast to help my son move. The trip coincided with the attempted (3 am) launch from Vandenberg AFB of JPL's Orbiting Carbon Observatory—OCO-2—which will nail down Earth's CO2 cycle. OCO is part of a constellation of five earth-sensing satellites being launched just this year. (The first OCO failed, weirdly, and others were canceled back during the Bush Administration. Whereupon it took a while to re-start the earth-sensing programs.)
Imagine if you could take an exotic vacation billions of light years from Earth, peek in on the dinosaurs first-hand, or jump into a parallel universe where another you is living a more exciting life than yours; and you could swap places if you like.
“Why do you cry, Gloria? Robbie was only a machine, just a nasty old machine. He wasn’t alive at all.” “He was not no machine!” screamed Gloria fiercely and ungrammatically. “He was a person like you and me and he was my friend.” – Isaac Asimov (1950). Most discussions of “robot rights” play out in a seemingly distant, science-fictional future. While skeptics roll their eyes, advocates argue that technology will advance to the point where robots deserve moral consideration because they are “just like us,” sometimes referencing the movie Blade Runner. Blade Runner depicts a world where androids have human-like emotions and develop human-like relationships to the point of being indistinguishable from people. But Do Androids Dream of Electric Sheep, the novel on which the film is based, contains a small, significant difference in storyline…