(by Robert Bradbury, IEET Fellow Milan Cirkovic, and IEET Board Chair George Dvorsky) We critically assess the prevailing currents in the Search for Extraterrestrial Intelligence (SETI), embodied in the notion of radio-searches for intentional artificial signals as envisioned by pioneers such as Frank Drake, Philip Morrison, Michael
Papagiannis and others. In particular, we emphasize (1) the necessity of integrating SETI into a wider astrobiological and future-studies context, (2) the relevance of, and lessons to be learned from, the anti-SETI arguments, in particular Fermi’s paradox, and (3) the need for a complementary approach, which we dub Dysonian SETI. It derives from the inventive and visionary ideas of Freeman J. Dyson and his imaginative precursors, such as Konstantin E. Tsiolkovsky, Olaf Stapledon, Nikola Tesla and John B. S. Haldane, who suggested macro-engineering projects as focal points for extrapolations about the future of humanity and, by analogy, of other intelligent species. We consider the practical ramifications of Dysonian SETI and indicate some promising directions for future work.
Buddhist psychology and metaphysics focus on the emergence of selves, their drives, and their potential for developing wisdom and compassion. Buddhism has already entered into a wide-ranging dialogue with cognitive science, and can also inform and be informed by efforts to create self-aware machine minds. Buddhism suggests that there are a number of prerequisites for the development of humanlike intelligence in machines. These include embodiment, sensory interaction with the environment, and preferences and aversions. The Buddhist view of the advantages of different kinds of minds and embodiments suggests an ethical obligation not to create machine minds which are trapped in particular emotional states or cognitive loops. Rather, machine minds should be created with the capacity to dynamically evolve in compassion and wisdom. Compassion must start with empathetic feelings and a theory of mind, but for Buddhism it also requires cultivation of equanimity and ethical wisdom. Buddhism suggests the developmental cultivation of ethics from rule-based to virtue-oriented to utilitarian. Finally, thoughts are offered on what enlightenment might mean for a machine mind.
Paul Root Wolpe, senior bioethicist at NASA and a pioneer in the field of neuroethics, recently spoke to BigThink about his concerns regarding a neuroenhanced future:
Peering into his children’s and grandchildren’s future, he sees an America that rewards competitiveness and productivity over relationship-building, and suspects that future generations will face intense pressure to enhance their minds and bodies in unhealthy ways.
There’s nothing new, Wolpe says, about humans chemically altering their brains:
Paul Root Wolpe: It’s not whether. We always have done it; we always will do it. Human beings have been manipulating their brains in that manner since they first fermented grapes or discovered hallucinogenic mushrooms, or whatever was the very first time people realized that they could ingest something and change their brain’s functioning.
But now that we can do it better, more powerfully, more accurately and with fewer side effects, the temptation to do it dramatically and often will increase. So the question now becomes, what are the proper limits? What is the proper nature of that change?
Up until now, it’s been a bit of a moot question because the drugs we had carried side effects that made them undesirable. So if you take amphetamines to try to increase your attention, you’re going to have jitters, sleep disturbances and other things like that. Now you have something like Modafinil, a much more benign drug that can, in many people, enhance attention without any of those systemic side effects. And now we really have to begin to ask ourselves some interesting questions.
They did some studies, for example, with pilots. They gave some of them not Modafinil but a similar type of drug, and some they didn’t, and then they threw emergencies at them in flight simulators. And what they discovered is that the pilots who were on attention-enhancing drugs responded faster and more accurately to those emergencies.
So now we’re not just talking about whether I should take it when I want to pay attention; maybe we should make people take it who have other people’s lives in their hands – surgeons and pilots and so on. Maybe my surgeon on Modafinil will be much more able to focus on what he’s doing than my surgeon off of Modafinil.
What’s the Significance?
When faced with these complex ethical questions, it is tempting to take sides either for or against biotechnology. Utopian proponents will argue that biotech will end human suffering. Detractors will label it “unnatural” (many of them in blog posts on the equally unnatural internet).
But the reality, as always, is somewhere in the middle.
There is an unbelievable essay written - in apparent sincerity - by my colleague John C. Wright (a pretty good author, by the way), in which he asserts that the long darkness called feudalism was admirable, and that - by dismal contrast - we now live in an age that is benighted by crudely materialistic modernity and a shabby shallowness of the soul.
So, apparently there’s an Adderall drought going on in the United States. Adderall is a prescription med used by people suffering from attention deficit disorder (ADD), attention deficit hyperactivity disorder (ADHD), and narcolepsy. It’s also being increasingly used as an off-label cognitive enhancer and for recreational purposes (which I’ll get to in just a little bit).
“As an artist, I can appreciate precedent representation and objecthood crises at the cite and sight of artistic collage and assemblage. As a transhumanist, however, I’m cognizant that artistic collage and assemblage will look like mere speed bumps when compared to the transubstrationality to be encountered near a singularity spike.”
Dr. J. chats with Christian Miller, Professor of Philosophy and Director of The Character Project at Wake Forest University. They discuss the idea of virtue and moral character and its relationship to moral philosophy, personality theory, religion and neuroscience. Part 2 of 2. Also Dr. J. finishes his chat with Ted Chiang about his Hugo award winning novella “The Lifecycle of Software Objects” and the state of science fiction. (Part 2 of 2)
Dr. J. chats with Christian Miller, Professor of Philosophy and Director of The Character Project at Wake Forest University. They discuss the idea of virtue and moral character and its relationship to moral philosophy, personality theory, religion and neuroscience. Part 1 of 2.
Human morality is older than our current religions, and may go back to tendencies observable in other mammals. In a bottom-up view of morality, this talk is one man’s road to discovering an array of positive tendencies in animals at a time when competition and aggression were the only themes.
Randy Sarafan shows us how to build robots to serve the revolution:
Learning from the lessons of the 1%, I set forth to outsource our occupy-related labor to a robotic workforce. Robots obviously have many advantages over their human counterparts. For instance, robots never get tired, they don’t get cold, they don’t sleep, nor eat, don’t require tents, and when armed insurrection becomes necessary, robots are much more morally ambivalent. Additionally, we had a discussion with an unnamed member of the San Francisco police force and they confided in us that the police currently do not have any plan for dealing with robotic occupiers.
For all of those reasons and more, I present to you Occu(pi) Bot: the first in a promising line of tireless, unstoppable, robotic class warriors.
Learn how to make your own!
A few days ago, the famous comic book writer and illustrator Frank Miller issued a howl of hatred toward the young people in the Occupy Wall Street movement. Well, all right, that’s a bowdlerization. After reading even one randomly chosen paragraph, I’m sure you’ll agree that “howl” understates the red-hot fury and scatological spew of Miller’s lavishly expressed hate: “‘Occupy’ is nothing but a pack of louts, thieves, and rapists, an unruly mob, fed by Woodstock-era nostalgia and putrid false righteousness. These clowns can do nothing but harm America.”
Topics discussed in this week’s episode of George Dvorsky’s Sentient Developments podcast include the benefits of creatine, Jared Diamond’s 1987 article on how agriculture was the “worst mistake in the history of the human race”, the current state of lab-grown meats, computational pathology, a review of the documentary “How to Live Forever”, and a word (or two) on the pernicious de-radicalization of the radical future.
Tracks used in this episode:
Oneohtrix Point Never: “Replica”
The Advisory Circle: “Now Ends the Beginning”
Russian Circles: “309”
Hooray For Earth: “Pulling Back”
In this Sentient Developments Podcast George Dvorsky talks about octopus intelligence, the rise of wrongful birth suits in Israel and elsewhere, and the latest news and findings into autism. George reprises the talk he gave on designer psychologies at the H+ conference at Parson’s University in NYC earlier this year. Lastly, George discusses how religion works as a reproduction control system.
Music used in this episode:
“At Last” by Plaid
“Hours” by Tycho
“Ballad of Gloria Featherbottom” by Mux Mool
Enlightenment values presume an independent self, the rational citizen and consumer who pursues her self-interests. Since Hume, however, Enlightenment empiricists have questioned the existence of a discrete, persistent self. Today, continuing that investigation, neuroscience is daily eroding the essentialist model of personal identity. Transhumanism has yet to come to grips with the radical consequences of the erosion of the liberal individualist subject for projects of enhancement and longevity. Most transhumanist thought still reflects an essentialist idea of personal identity, even as we advance projects of radical cognitive enhancement that will change every element of consciousness. How do ethics and politics change if personal identity is an arbitrary, malleable fiction?
At the 2011 Adelaide Festival of Ideas, Julian Savulescu argues that we should be using science and technology for moral enhancement, and that the future of humanity depends on it. Julian is the Director of the Oxford centres for Neuroethics, Practical Ethics, and Science and Ethics.
What are the ways our civilization might collapse, and how might the human race become extinct?
According to sociobiologist Rebecca Costa, the answers are all staring us straight in the face. Just look at current events. Costa writes in her book The Watchman’s Rattle: Thinking Our Way Out of Extinction that human existence is threatened by “a global recession, powerful pandemic viruses, terrorism, rising crime, climate change, rapid depletion of the earth’s resources, nuclear proliferation, and failing education.”
Fortunately, Costa argues, we are remarkably equipped to counter these threats today, thanks to our current understanding of the “biological reasons for the ascension and decline of civilizations.” The problem, as Costa describes it, is that humans are governed by two clocks: the very slow-ticking clock of human evolution and the fast-accelerating clock of technological progress. Because these two clocks don’t sync up, the human brain (and the public policy our brains generate) is unable to keep up with the complex environment around us. According to Costa, we’re then left with “paleolithic emotions, medieval institutions and godlike technology.” Put all those in the blender, and look out!
So how do we stave off our collapse? The solution involves what Costa calls the most (surprisingly) controversial word in the English language: evolution. Costa asks why, if Charles Darwin’s theory is “the most important scientific principle governing life on earth,” we don’t use it as a tool to solve our problems today. In other words, why is evolution “the greatest discovery you’ve never heard of”?
Dr. J. chats with Joseph Schwartz, Professor of Political Science at Temple University and author of The Future of Democratic Equality: Reconstructing Social Solidarity in a Fragmented United States. Prof. Schwartz is a long-time leader in the Democratic Socialists of America, the largest American socialist organization. Part 2 of 2.
Dr. J. chats with Joseph Schwartz, Professor of Political Science at Temple University and author of The Future of Democratic Equality: Reconstructing Social Solidarity in a Fragmented United States. Prof. Schwartz is a long-time leader in the Democratic Socialists of America, the largest American socialist organization. Part 1 of 2.