With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered in respect to moral enhancement? What cross cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into despite the association it arguably has to almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon, for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.
In a powerful article at the Atlantic, “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel lined up facts and figures showing that much of the recent gain in human lifespan is about stretching out the process of decline and death rather than living well for longer. Most of us would love to live to 100 and beyond with our minds sharp and our senses clear, able to take pleasure in the world around us while contributing at least modestly to the happiness and wellbeing of others. But clear-eyed analysis shows that is not how most elderly Americans experience their final years.
I am a transhumanist, and I believe that politics is important. Let me unpack that a little: I believe that we can and should voluntarily improve the human condition using technology. That makes me a transhumanist, but aside from that single axiom I have in common with all transhumanists, we’re an increasingly diverse bunch.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison: a single watchtower surrounded by a ring of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard might be on duty or not; the prisoners could never know.
This time, let’s veer into an area wherein I actually know a thing or two! The matter of whether humanity might someday… or even should… meddle in other creatures on this planet and bestow upon them the debatable “gift” of full sapience—the ability to argue, ponder, store information, appraise, discuss, create, express and manipulate tools, so that they might join us in the problematic task of being worthy planetary managers.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Ethicists have been asking themselves a question over the last couple of years that seems to come straight out of science fiction: is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we must grapple with, and now.
“Virtually Human explores what the not-too-distant future will look like when cyberconsciousness—simulation of the human brain via software and computer technology—becomes part of our daily lives,” writes Martine Rothblatt, Ph.D., MBA, J.D.
IEET Fellow David Eagleman has written and will host a six-hour television series on The Brain for PBS. The series will premiere in 2015, and deals with tough questions of ethics and emerging neurotechnologies.
The growing body of work in the new field of “affective robotics” involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and to induce emotions toward AI in humans. The aim is to ensure that as AI becomes smarter and more powerful, it remains tractable and attractive to us. Inducing emotions is important to this effort because it is hoped that the instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer, and that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses important ethical questions that arise from them.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not one myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
Should animals be permitted to hunt and kill other animals? Some futurists believe that humans should intervene, and solve the “problem” of predator vs. prey once and for all. We talked to the man who wants to use radical ecoengineering to put an end to the carnage. A world without predators certainly sounds extreme, and it is. But British philosopher David Pearce can’t imagine a future in which animals continue to be trapped in the never-ending cycle of blind Darwinian processes.
Transhumanists as a rule may prefer to contemplate implants and genetic engineering, but few if any violations of morphological freedom exceed being torn to pieces by shrapnel or dashed against concrete by an overpressure wave. In this piece I argue that the settler-colonial violence in occupied Palestine relates to core aspects of modernity and demands futurist attention both emotionally and intellectually.
More than 80 percent of teen pregnancies are accidents. A girl with other hopes and dreams—or maybe a girl who is floundering, who hasn’t even begun to explore her hopes and dreams—finds herself unexpectedly slated for either an abortion or 4,000 diapers. Given the shame and stigma surrounding abortion in many American subcultures, that can seem like a choice between the proverbial rock and a hard place. The exciting news that launched this Sightline series is that teen pregnancy is in decline across the United States and across all major ethnic groups. Fewer and fewer young women are facing hard decisions after the fact.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
There may be as many as 80,000 American prisoners currently locked up in a SHU, or segregated housing unit. Solitary confinement in a SHU can cause irreversible psychological effects in as little as 15 days. Here’s what social isolation does to your brain, and why it should be considered torture.
So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical and moral meaning of various ideas of the multiverse.
This is the second part of my series on feminism and the basic income. In part one, I looked at the possible effects of an unconditional basic income (UBI) on women. I also looked at a variety of feminist arguments for and against the UBI. The arguments focused on the impact of the UBI on economic independence, freedom of choice, the value of unpaid work, and women’s labour market participation.
Although Jibo, designed by MIT professor Cynthia Breazeal to be the “world’s first family robot,” isn’t set to ship until 2015, folks are already excited about this little bot with a “big personality.” While there’s much to be said for Breazeal’s vision of “humanizing technology” so that the smart home of the future doesn’t “feel cold and computerized,” we might want to pause a bit before rushing to build the type of world depicted in the movie Her. Although it is easy to imagine we’ll be better off when we’ve got less to do, we don’t actually know the existential and social implications of outsourcing ever-more intimate tasks to technology.
The introduction of an unconditional basic income (UBI) is often touted as a positive step in terms of freedom, well-being and social justice. That’s certainly the view of people like Philippe Van Parijs and Karl Widerquist, both of whose arguments for the UBI I covered in my two most recent posts. But could there be other less progressive effects arising from its introduction?
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013). This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling a plethora of recent advances in myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, where we find ourselves in a modern era in which many technologies appear to be on the cusp – helping to resolve pathologies while also having much future potential for the augmentation of human capabilities.
Anesthesia was a major medical breakthrough, allowing us to lose consciousness during surgery and other painful procedures. Trouble is, we’re not entirely sure how it works. But now we’re getting closer to solving its mystery — and with it, the mystery of consciousness itself. When someone goes under, their cognition and brain activity continue, but consciousness gets shut down.
Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest on a topic that should be dear to this audience’s heart: How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.
As expected, the last case ruled on before the Supreme Court of the United States adjourned until October was the Hobby Lobby/Conestoga case. For those unaware, this case is based on the Affordable Care Act’s contraception mandate, classifying contraceptives as preventive healthcare required under all insurance plans without a co-pay. Hobby Lobby and Conestoga Wood both objected to this, saying that covering some forms of birth control, like the IUD/IUS or Plan B, violated their religious beliefs by requiring them to fund abortive medications.1
The Prime Minister of Morocco recently compared women to “lanterns” or “chandeliers,” saying that “when women went to work outside, the light went out of their homes.” His remarks, which ran counter to Morocco’s constitutionally-guaranteed rights for women, promptly provoked both street demonstrations and an “I’m not a chandelier” Twitter hashtag.