The rapidly growing field of transhumanism—an international social movement whose highest immediate priority is overcoming human death via science and technology—is facing a colossal challenge. About 85 percent of the world’s population believes in life after death, and much of that population is perfectly okay with dying because it gives them an afterlife with their perceived deity or deities—something transhumanists often refer to as “deathist” culture.
In fact, four billion people on Earth—mostly Muslims and Christians—see the overcoming of death through science as potentially blasphemous, a sin involving humans striving to be godlike. Some holy texts say blasphemy is unforgivable and will end in eternal punishment.
The ultra-nationalist political agitator Dinanath Batra sued the book's publisher, and the publisher withdrew the book from the Indian market. The lawsuit was based on a hate-speech law (Section 295A, enacted in 1927 by the British under pressure from the Muslim community) that de facto allows courts to punish religious blasphemy.
Jaron Lanier’s book “Who Owns the Future?” discusses the role that technology plays in both eliminating jobs and increasing income inequality. Early in the book Lanier quotes from Aristotle’s Politics:
If every instrument could accomplish its own work, obeying or anticipating the will of others, like the statues of Daedalus, or the tripods of Hephaestus, which, says the poet, “of their own accord entered the assembly of the Gods;” if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.
Aristotle saw that the human condition largely depends on what machines can and cannot do; moreover, we can imagine that machines will do much more.
Sticks and stones can break my bones, but words can never hurt me.
That nursery rhyme is bulls**t. Words hurt.
They don’t physically damage our bodies, but the pain is palpable. It’s also measurable in our brain activity. Social rejection activates the same parts of our brain as a punch to the face or a broken arm.
First of all, someone needs to demystify the idea that Westerners have of India. There are two modern empires in Asia: Russia and mainland China. They are empires because they rule over subjects who, given a choice, would probably not want to be part of them, and these subject regions are big chunks of territory with huge natural resources (Chechnya and other Muslim regions in the case of Russia; Tibet and Xinjiang in the case of China). India is never listed alongside them because it used to be a colony. Somehow the colonial past deters people from seeing what is relatively obvious: India too is an empire, just like China and Russia, that rules over many “conquered” regions that, given a choice, would probably secede.
What underlies a question like this is the assumption that it’s okay to compel people to work for us by withholding what they need to live. And at the same time, because they are compelled, we don’t even pay them enough to meet the very basic needs we are withholding.
A worry that is not yet on the scientific or cultural agenda is neural data privacy rights. Not even biometric data privacy rights are on the agenda yet, which is surprising given the personal data streams amassing from quantified self-tracking activities. There are several reasons why neural data privacy rights could become an important concern.
If asked to rank humanity’s problems by severity, I would give the silver medal to the need to spend so much time doing things that give us no fulfillment—work, in a word. I consider that the ultimate goal of artificial intelligence is to hand this burden off to robots that have enough common sense to perform those tasks with minimal supervision.
But some AI researchers have altogether loftier aspirations for future machines: they foresee computer functionality that vastly exceeds our own in every sphere of cognition. Such machines would not only do things that people prefer not to; they would also discover how to do things that no one can yet do. This process can, in principle, iterate—the more such machines can do, the more they can discover.
What’s not to like about that? Why do I NOT view it as a better research goal than machines with common sense (which I’ll call “minions”)?
Technological change is accelerating and transforming our world. Assuming trends persist, we will soon experience an evolutionary shift in the mechanisms of reputation, a foundation on which relationships are based. Cascading effects of the shift will revolutionize the way we relate with each other and with our machines, incentivizing unprecedented degrees of global cooperation.
In 2015, your smartphone probably has more computing power than the Apollo Guidance Computer, and yet Moore’s Law continues unabated at its fiftieth anniversary. Machines are becoming faster and smaller and smarter.
Even if they aren’t flesh, “mindclones” deserve protection.
For much of the 20th century, capital punishment was carried out in most countries. During the preceding century many, like England, had daily public hangings. Today, even Russia, with a mountainous history of government-ordered executions, has a capital-punishment moratorium. Since 1996, it has not executed a criminal through the judicial system.
If we can learn to protect the lives of serial killers, child mutilators, and terrorists, surely we can learn to protect the lives of peace-loving model citizens known as mindclones and bemans—even if they initially seem odd or weird to us.
On May 20 I participated in a four-person debate about the existence of God at Western Washington University. On the ‘yes’ side were Mike Raschko and Mark Markuly from the School of Theology and Ministry at Seattle University. On the ‘no’ side were Bob Seidensticker and me. Here are my remarks:
Recently, the Daily Kos published an article titled, I Am Pro-Choice, Not Pro-Abortion. “Has anyone ever truly been pro-abortion?” one commenter asked.
Uh. Yes. Me. That would be me.
I am pro-abortion like I’m pro-knee-replacement and pro-chemotherapy and pro-cataract surgery. As the last protection against ill-conceived childbearing when all else fails, abortion is part of a set of tools that help women and men to form the families of their choosing. I believe that abortion care is a positive social good. And I suspect that a lot of other people secretly believe the same thing. And I think it’s time we said so.
Bernie Sanders, Senator from Vermont, is campaigning to be the next U.S. President. He defines himself as a “Democratic Socialist” and praises Scandinavian nations. The U.S. citizenry is largely puzzled and aghast:
“The only thing most Americans know about socialism is they don’t like it.” - Leo Huberman
In a survey of transhumanists, 16.9% described themselves as Socialist, 4.2% Marxist, 32.7% Liberal, 27.4% Libertarian, and 15.6% Moderate. The Transhumanist Party is running a candidate in 2016 - Zoltan Istvan. I’ll be posting a series of articles on transhumanist political positions.
In this first installment, I interview four contributors to IEET.
As William Gibson always reminds us, the real role of science fiction isn’t so much to predict the future as to astound us with the future’s possible weirdness. It almost never happens that science-fiction writers get the core or essential features of this future weirdness right, and when they do, according to Gibson, it’s almost entirely by accident. Nevertheless, someone writing about the future can sometimes, and even deliberately, play the role of Old Testament prophet, seeing some danger to which the rest of us are oblivious and guessing at the traps and dangers into which we later fall. (Though let’s not forget about the predictions of opportunity.)
Frank Herbert certainly didn’t intend Dune to predict the future, but he was trying to give us a warning.
“May all that have life be delivered from suffering”, said Gautama Buddha.
The vision of a happy biosphere isn’t new. Jains, for instance, aim never to hurt another sentient being by word or deed. But all projects of secular and religious utopianism have foundered on the rock of human nature. Evolution didn’t design us to be happy.
I am interested in “secularizing” Africa because I believe this would benefit the continent intellectually, socially, and economically. To help advance this goal I support Kasese Humanist Primary School, and I co-launched BiZoHa - the world’s first atheist orphanage.
For the very first time, scientists have demonstrated that a brain implant can improve thinking ability in primates. By implanting an electrode array into the cerebral cortex of monkeys, researchers were able to restore — and even improve — their decision-making abilities. The implications for possible therapies are far-reaching, including potential treatments for cognitive disorders and brain injuries.
But there’s also the possibility that this could lead to implants that could boost your intelligence.
Michael Tooley’s article “Moral Status of Cloning Humans” defends human cloning. I am in complete agreement with it. Cloning, despite the visceral reaction it provokes, is, once understood, a tool in the transhumanist’s arsenal.
Here is a brief outline of the article, with a bit of commentary set off in parentheses.
What makes a Seattle mother spend her days trying to chip away at Bible belief rather than digging holes in the garden?
When my husband sent me the Pew Report news that the percentage of Americans who call themselves Christian has dropped from 78.4 to 70.6 over the last seven years, I responded jokingly with six words: You’re welcome. Molly Moon’s after dinner?
Not that I actually claim credit for the decline. As they say, it takes a village.
Groucho Marx, one of my favorite comedians of all time, famously wrote a telegram to a Hollywood club he had joined, that said: “Please accept my resignation. I don’t want to belong to any club that will accept me as a member.” I have recently considered sending such a letter to the skeptic and atheist movements (henceforth, SAM), but I couldn’t find the address.
Any reader of this blog knows that I am a transhumanist; I believe in using technology to overcome all human limitations. What follows is a summary of an article by Paul Lauritzen, a professor in the Department of Theology and Religious Studies at John Carroll University, a Catholic, Jesuit institution near Cleveland, Ohio. I believe his argument is worthless, and contrary to everything I believe in, but I will summarize it as best I can. As I proceed I will provide a few parenthetical comments, as well as a few critical remarks at the end.
A company in South China’s Guangdong province is building its city’s first zero-labor factory. It’s an effort to address worker shortages and rising labor costs, but the rise of semi-autonomous “smart factories” could be a sign of things to come, in China and elsewhere.
The founding of the Transhumanist Party of the United States, the intensifying of the U.S. BRAIN Initiative, and the start of Google’s project “Ending death” were important milestones in the year 2014, and potential further steps towards “transhumanist” politics. The most significant development was that the radical international technology community became a concrete political force, not by chance starting its global political initiative in the U.S. According to political scientist and sociologist Roland Benedikter, research scholar at the University of California at Santa Barbara, “transhumanist” politics has momentous growth potential but with uncertain outcomes. The coming years will probably see a dialogue between humanism and transhumanism in—and about—most crucial fields of human endeavor, with strong political implications that will challenge, and could change, the traditional concepts, identities, and strategies of Left and Right.
Here’s the question: does the existence of life in the universe reflect something deep and fundamental, or is it merely an accident and epiphenomenon? There’s an interesting new theory coming out of the field of biophysics that claims the cosmos is indeed built for life, and not merely in the sense of the so-called “anthropic principle,” which states that just by being here we can assume that all of nature’s fundamental values must be friendly to complex organisms, such as ourselves, that are able to ask such questions. The new theory claims that not just life, but life of ever-growing complexity and intelligence, is not merely likely but the inevitable result of the laws of nature.