Redefining the Relationship of Man and Machine


By Gerd Leonhard
futuristgerd.com

Posted: Jan 14, 2016

What are the challenges and opportunities facing society in the next 10 years as a result of an accelerating pace of technological development? 

(This essay is excerpted from a chapter in The Future of Business, published by Futurescapes.)

From technology disruption to furthering human happiness 

This chapter aims to provide important context for the mission-critical business decisions that we will all need to make in the next few years in strategy, business model development, marketing, and HR. Remaining relevant, unique, purposeful, and indispensable is obviously a key objective for every business everywhere, yet technology will no doubt continue to generate exponential waves of disruption at an ever-faster pace. Soon – once technology has made almost everything efficient and abundant – I believe we will need to focus on the truly human values of business, i.e. to transcend technology. Successful business will no longer be about running a well-oiled machine; rather, it will be about uniquely furthering human happiness.

Exponential and combinatorial: we’re at the pivot point 

We are witnessing dramatic digitization, automation, virtualization, and robotization all around us, in all sectors of society, government, and business – and this is only the beginning. I believe these trends will continue to grow exponentially over the next decade as we head towards a world of five to six billion Internet users by 2020, and possibly as many as 100 billion connected devices in the Internet of Things, such as sensors, wearables, and trackers. 

Beyond any doubt, machines of all kinds – both software and hardware – will play an ever larger role in our future, and progressively more intelligent machines will impact how we live our lives at every turn. Netscape founder turned venture capitalist Marc Andreessen already highlighted this phenomenon in a 2011 Wall Street Journal opinion piece entitled Why Software Is Eating the World – a prescient headline that is certain to play out in force in our imminent future.

We are already nearing the pivot point where very few ideas seem to remain in the realm of science fiction for very long. This can be witnessed in areas such as automated, real-time translation (SayHi, Google Translate, Skype Translate) and self-driving and semi-autonomous cars (Google, Tesla, Volvo). The fiction-reality boundary is also being crossed by developments such as intelligent personal agents (Cortana, Siri, Google Now), augmented and virtual reality (Microsoft HoloLens, Oculus Rift) and many other recent breakthroughs. Our world is being reshaped by developments that used to exist only in the scripts of Hollywood blockbusters such as Blade Runner, Her, Minority Report, Transcendence and The Matrix. (On that note, let's be sure not to give these blockbuster movies too much credit as far as realistic foresight is concerned.)

Technology: it’s no longer about IF or HOW but about WHY 

The urgent need for clear man-machine ethics is amplified by the view that we should probably no longer be concerned with whether technology can actually do something, but with whether it should do something. The how is being replaced by the why (followed by who, when, and where).

For example, why would we want to be able to alter our DNA so that we can shape what our babies look like? And who should be able to afford or have access to such treatments? What would be the limits? In machine intelligence, should we go beyond mere deductive reasoning and allow smart software, robots, and artificial intelligence (AI) to advance to abductive reasoning (i.e. to make unique decisions based on new or incomplete facts and rules)? If autonomous machines are to be a part of our future (as is already a certainty in the military), will we need to provide them with some kind of moral agency, i.e. a human-like capacity to decide what is right or wrong even if the facts are incomplete?
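To make the deduction/abduction distinction concrete, here is a minimal, purely hypothetical Python sketch; the rules, facts, and function names are invented for illustration and are not drawn from the chapter.

```python
# Purely illustrative toy: contrasting deductive and abductive reasoning.
# The rules, facts, and names below are invented for this sketch.

EXPLANATIONS = {
    "wet_ground": ["it_rained", "sprinkler_ran"],  # candidate causes for an observation
}

def deduce(facts):
    """Deductive step: derive only what is strictly entailed by the known facts."""
    derived = set(facts)
    if "it_rained" in derived:
        derived.add("wet_ground")  # rain always wets the ground
    return derived

def abduce(observation, facts):
    """Abductive step: commit to the most plausible explanation of an observation,
    even though the facts are incomplete and the guess may later prove wrong."""
    candidates = EXPLANATIONS.get(observation, [])
    for candidate in candidates:  # prefer an explanation already supported by evidence
        if candidate in facts:
            return candidate
    return candidates[0] if candidates else None  # otherwise make a defeasible guess

print(deduce({"it_rained"}))                    # {'it_rained', 'wet_ground'}
print(abduce("wet_ground", {"sprinkler_ran"}))  # 'sprinkler_ran'
print(abduce("wet_ground", set()))              # 'it_rained' (a guess, not a proof)
```

The deductive function never asserts more than the rules guarantee, while the abductive one commits to a best guess under incomplete information, which is exactly the kind of decision-making the question above is asking whether machines should be allowed to do.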

“Hellven” challenges 

Tremendous scientific progress in sectors such as energy, transportation, water, environment, and food can be expected in the next 10-20 years. I believe most of these achievements will have an overall positive effect on humanity, and hopefully on human happiness (which I would suggest should be the ultimate goal) as well. This would clearly be the heavenly side of the coin. 

At the same time, on the hell side we are now approaching a series of complex intersections at very high speed. Soon, every single junction we navigate could either lead to more human-centric gains or result in serious aberrations and grave dangers. It has often been said that "technology is not good or evil – it just is". It is now becoming clear that the good/bad part will probably be for us to decide, every day, globally and locally, collectively and individually. Clearly, if we assume that machines will inevitably be a large part of that future, we will need to decide both what we want them to be and, perhaps more importantly, what we want to be as humans – and we need to do it soon.

Artificial Intelligence (AI) is the most significant “hellven” challenge

Most technologies, software and hardware alike, are not only becoming much faster and cheaper but also increasingly intelligent. Rapid recent advances run the gamut from the kind of simple algorithmic intelligence it takes to win against a chess master, to the advent of thinking machines and IBM's neuromorphic chips (i.e. chips that attempt to mirror our own neural networks) and their ambitious cognitive computing initiative. Buzzwords such as AI and deep learning are already making the headlines every single day, and this is just the tip of the iceberg. Judging by the investments of leading venture capitalists and funds, AI has already become a top priority in Silicon Valley and in China – often a certain sign of what's to come.

At the same time, almost every single major information and communications technology (ICT) company already has several initiatives in this man-machine convergence arena. Google and Facebook are busy acquiring small and large companies in a wide range of AI and robotics-related fields. They clearly realize that the future is not just about big data, mobile, and connected everything. They see the next horizon as embedding the capability to make every process, every object, and every machine truly functionally intelligent, albeit not (yet) humanly intelligent as far as social or emotional traits are concerned. But maybe this is just a question of when rather than if?

Just imagine what AI could do to our everyday activities such as searching the web (as we call it today), and you can get a glimpse of what's at stake here. In the very near future, who will bother with typing a precise two-word search phrase into a box when the system already knows everything about you, your schedule, your location, your likes, your connections, your transactions, and much more? Based on the situational context, your external brain, i.e. the AI in the cloud, will already know what you need, before you even think of it, and will propose the most desirable actions as easily as today's Google Maps proposes walking directions. Hellven, once again, depends on your standpoint.
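As a rough illustration of that kind of anticipatory behaviour, consider the hypothetical sketch below; the context signals, candidate actions, and scores are assumptions made up for this example rather than a description of any real assistant.

```python
# Hypothetical sketch: an assistant proposing an action before being asked.
# All context signals, candidate actions, and scores are invented for illustration.

context = {
    "time": "08:10",
    "location": "home",
    "next_meeting": "09:00, across town",
    "weather": "rain",
}

candidate_actions = [
    ("order_taxi",         lambda c: 0.9 if c["weather"] == "rain" else 0.3),
    ("walking_directions", lambda c: 0.1 if c["weather"] == "rain" else 0.8),
    ("reschedule_meeting", lambda c: 0.2),
]

def propose(ctx):
    """Score each candidate action against the current context and return the
    most plausible suggestion - no typed search query required."""
    scored = sorted(((score(ctx), name) for name, score in candidate_actions), reverse=True)
    return scored[0][1]

print(propose(context))  # -> 'order_taxi' on a rainy morning with an early meeting
```

The point of the toy is the inversion of initiative: the user never formulates a query; the system ranks possible actions against whatever context it already holds.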

IBM, the creator of Watson Analytics, a leading commercially available AI product, appears to be betting the farm on this future. IBM is investing billions of dollars into neurosynaptic chips and cognitive computing – designed to emulate human neural systems with the intention of creating a holistic computing experience, i.e. computing that feels as natural as breathing. Computing is no longer outside of us – a thought both scary and exhilarating. Apart from IBM, Google is working on its own Global Brain project, and the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland is pushing the EU's hotly contested Human Brain Project. China's Baidu has also signaled its ambition to discover the holy grail of AI by hiring top-level researchers in the field, including Stanford's Andrew Ng, and by opening a Silicon Valley AI center. The list goes on. Clearly, man-machine convergence is at the top of the global agenda, and investors smell enormous profits.

But: machines don’t have ethics 

The AI gold rush has only just started, and this is probably a very good time to ask whether Silicon Valley's leading venture capital firms have enough foresight to consider more than their financial returns. After all, it is they who are funding commercial applications of man-machine technologies that could have catastrophic side effects for humanity. In my view, the issue of how man and machine will interrelate in the future should not be viewed from a profit-only perspective. Machines don't have ethics and neither does money. The coming combination of these two forces – both operating beyond and above human values – strikes me as even more dangerous.

Some futurist colleagues predict that we will soon reach a point where the capacity of thinking machines exceeds that of the human brain – a threshold that Ray Kurzweil, scientist and author of How to Create a Mind, expects machines to cross around 2029 on the way to what he calls The Singularity. At this point, if not earlier, even larger and deeply wicked problems will emerge. For example, if we maintain that technology does not (and will not) have ethics, it would probably be downright stupid for anyone to expect that any current or future software program, machine, or robot would be able to act based on human morals, values, or ethics. Thus, the morals of machines will emerge as a major factor in the future of humanity, and the issues around what I call Digital Ethics (see below) will quickly become more essential as technology spirals into the future.

Every algorithm will need a “humarithm” 

I coined the humarithm neologism in 2012 – a wordplay on algorithm – because I believe that chains of logic, formulas, and "if this then that" rules urgently need to be paralleled by corresponding systems of ethics, values, and assumptions: new "if we believe this, we must do that" rules. I believe that every time we offload a task to an algorithm (a machine), we will also need to think about what kind of humarithm is needed to offset the side effects, i.e. how best to deal with the unintended consequences that are certain to arise.
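A minimal sketch of what such an algorithm/humarithm pairing might look like in code; the loan scenario, threshold, and escalation policy are illustrative assumptions, not a proposal from the chapter.

```python
# Illustrative only: pairing an algorithm with a "humarithm" that encodes values.
# The loan scenario, threshold, and escalation policy are invented for this sketch.

def algorithm(application):
    """Plain 'if this then that' logic: approve whenever the score clears the bar."""
    return application["credit_score"] >= 650

def humarithm(application, algorithmic_decision):
    """'If we believe this, we must do that' logic: a value-based layer that can
    override or escalate the purely algorithmic outcome."""
    if algorithmic_decision and application.get("flagged_for_hardship"):
        return "escalate_to_human"  # we believe hardship cases deserve human review
    if not algorithmic_decision and application.get("data_incomplete"):
        return "escalate_to_human"  # we believe nobody should be declined on missing data
    return "approve" if algorithmic_decision else "decline"

application = {"credit_score": 610, "data_incomplete": True}
print(humarithm(application, algorithm(application)))  # -> 'escalate_to_human'
```

The design point is that the humarithm never replaces the algorithm; it wraps it, turning a purely mechanical outcome into a decision that reflects stated values.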

For example, we may eventually come to the conclusion that commercial airliners can be better piloted by software and robots than by human beings; most research already indicates that this is indeed the case. But if so, we must certainly think about how the passengers will feel about traveling inside a large metal tube that is steered entirely by a robot. This may well be a typical case where efficiency should not trump humanity.



Gerd Leonhard is a futurist, focusing on near-future, ‘nowist’ observations and actionable foresights in the sectors of humanity, society, business, media, technology and communications. Gerd is also an author, an executive ‘future trainer’ and a strategic advisor. He is the co-author of The Future of Music, the host of the web-TV series TheFutureShow and the CEO of TheFuturesAgency.