Pulp Ethics: Exponential tech needs exponential ethics
Nicoletta Iacobacci
2016-02-08

You don’t have to look far to see science fiction becoming science fact.




These are just a few examples of what’s possible here and now. Technology waits for no one; it grows exponentially, and new technologies emerge every day. Again and again, we watch the impossible become possible as exponential technologies like these develop and increasingly affect our lives.



Ethics, however, has a hard time keeping pace. To be sure, moral guidelines are defined for existing anomalies. But technology isn’t waiting around for ethics to catch up; as it surges ahead, it gives rise to ever-newer ethical debates and creates a significant problem.



That problem boils down to this: it is becoming ever more difficult to keep up with the meanings, impacts, and possible repercussions of the paradigm shifts triggered by our own innovations.



To everyone who’s paying attention, it’s clear we’ll experience progress at lightning speed in the twenty-first century; we’ll see what amounts to millennia of progress come to life in just a few decades. It’s certain machine intelligence will surpass human intelligence, if not in our lifetimes, then at least in this century.



We’ll see technological change never before witnessed. We’ll see the merging of the biological and the non-biological. We’ll see software-based, immortal "humans."

We’ll see sentient artificial intelligence.



These innovations have the potential to dramatically augment human cognition and capabilities. They could expand the economy and give rise to other, even more powerful technologies. How we respond is crucial.





Certainly, we should focus on the responsibility and morality of how machines behave. But what about the here-and-now accountabilities of the technologists creating these innovations?



Proposals do exist for developing autonomous robotic systems programmed with the intelligence to tell right from wrong and to evaluate moral consequences. 

However, an important question remains unanswered.



How can robots be capable of moral or ethical reasoning when, at times, their inventors are not?



This poses fundamental ethical questions. If superintelligent machines (advanced robots that exceed our intelligence) become sentient, that is, capable of perception and feeling, should we treat them like human beings? Should we respect their consciousness? Or will a new dimension of “humans” emerge?



What about our notions of human dignity, morality, and agency or autonomy? Will those notions survive in their entirety? Should ethics play a role? If yes, how?



The term “ethics” comes from the Ancient Greek word ἠθικός (ethikos), which derives from the noun êthos, meaning “character, disposition, habit.” For Aristotle, the first to use the term, ethics is the attempt to offer a rational answer to the question of how humans should best live: how to attain eudaimonia, which he understood as happiness and, ultimately, a good life.



What are the criteria of contemporary eudaimonia? Can we flourish and live the good life amid abundance, augmentations, virtual worlds, and immortality? We aren’t there today, but ethics must be rebooted so that our answer to such a question can be a resounding yes.



Today, the exponential progress of computing, genetics, 3D printing, robotics, and artificial intelligence promises to drive humanity into an amazing era of abundance. We’ll soon be able to meet and exceed the basic needs of every man, woman, and child on the planet. But we shouldn’t delegate fundamental decisions to technology and superintelligence alone.



We must strive to contribute to an endangered human evolution. We can and must fight the infatuation and inertia that technology typically causes. We can and must draw up new guiding rules in each domain of technological innovation.



Over the past 50 years, scientific advances have produced remarkable progress, but some of these advances have also raised important ethical and societal issues.



For example, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a new technology that allows us to make specific changes in the DNA of humans, other animals, and plants. Think about that for a minute.



We can selectively change human DNA.



Compared to previous techniques, this new approach makes it much faster and easier to modify DNA. CRISPR consists of two components. The first is an enzyme called Cas9, a pair of molecular scissors that finds the target DNA and cuts it in two.



The second is a guide RNA, a macromolecule that tells Cas9 exactly where to cut.
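
To make that division of labor concrete, here is a toy Python sketch of the idea, with made-up sequences and a deliberately simplified cut rule (real genome editing involves far more, including a PAM sequence next to the target and the cell’s own repair machinery): the guide specifies where to act, and the “scissors” find the site and make the cut.

```python
# Toy illustration of the CRISPR division of labor (not a real genome-editing tool).
# The guide string stands in for the guide RNA: it names the target sequence.
# The function stands in for Cas9: it finds that target and snips the DNA string.

def cas9_cut(dna, guide):
    """Return the two fragments produced by cutting after the target site,
    or None if the target sequence is not found (no cut is made)."""
    site = dna.find(guide)
    if site == -1:
        return None
    cut_point = site + len(guide)  # simplified rule: cut right after the target
    return dna[:cut_point], dna[cut_point:]

# Made-up sequences, purely for illustration.
dna = "ATGGCCTTAGGCTTACCGGTAAGC"
guide = "TTAGGCTTA"

fragments = cas9_cut(dna, guide)
if fragments:
    print("Cut into:", fragments)  # ('ATGGCCTTAGGCTTA', 'CCGGTAAGC')
else:
    print("Target not found; DNA left intact.")
```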



This is where the ethics can get especially dicey.



The Cas9 enzyme, however, is still not fail-safe; it can sometimes create unwanted mutations. And that doesn’t even account for someone intentionally acting in bad faith. In either case, ethics has an important role to play here.



Jürgen Habermas, an influential social and political thinker, questions whether post-metaphysical philosophy can contribute, for instance, to the ethics of genetic intervention.



As a first step, he predicts that the general population, political public sphere, and representative parliaments may come to consider pre-implantation genetic diagnosis as morally permitted or legally tolerated—if limited to a small number of well-defined cases of severe hereditary diseases.



As a next step, he assumes genetic intervention to prevent genetic diseases will be legalized. This would, in turn, open a gray area between negative eugenics (preventing disease) and positive eugenics (improving the human race by intervention).



To what extent, though, should we use technology to create better human beings? Who will control the outcomes of this progress? Will countries have to rely on communities of people developing and using future tech to regulate themselves in the absence of new and updated ethical codes?



Contemporary transhumanists argue that NBIC (nanotechnology, biotechnology, information technology, and cognitive science) should be used to improve human and non-human natures. Techno-skeptics believe that exponentially growing technologies and practices are causing irreversible disruptions. Clearly, the topic raises real concerns.



Will we experience a positive outcome, or will we face an unknown future of mechanical humans? Can we coexist with machines that are smarter and faster than we are, or will we lose our humanity?



A human being is the only kind of being that knows it is going to die. How do we relate to the contemporary quest for immortality?



Think about what immortality really means. The principles held by those who defeat death will be the same principles held for the rest of eternity. Death will still occur (by violent causes, for example), but immortality won’t foster social progress; change will happen far too slowly. The only solution, then, is mass reprogramming.



But it’s safe to say that those with power will control any such reprogramming, not the individuals impacted by it.



Another fascinating area comes to light with prosthetics, smart drugs, and brain boosters that can enhance our physiology beyond current human limits. While these have the potential to improve our lives in innumerable ways, they also raise important ethical questions.



What are the consequences of human intervention in evolution? Do we need to protect humanity from technology’s alienating powers, if it has any?



Ultimately, we don’t know what would be lost if humans took nature into their own hands. Nor can we know whether doing so would improve the health and welfare of future generations or merely limit their freedom.



But there are a few things we do know, things we can and should do something about.



This will be the most pioneering decade in history. Exponential technologies will lead to exponential innovation. Ethics, then, must keep up with the exponential progress of technology. Contemporary philosophers should become proactive. They should support and push for ethics to leapfrog technological innovation—for ethics to be rebooted.



We must strive to take this journey. We must stay conscious of the risks we’re facing. Most importantly, we must raise a call to action for openly discussing the social repercussions these technologies could have if left only to their “makers.”



We must pursue exponential ethics.



This and other key future topics are being highlighted and discussed by over 3,000 policy makers attending the World Government Summit, held in Dubai on 8-10 February 2016.





More information at http://www.worldgovernmentsummit.org or follow #WorldGovSummit