Psychobot


By Rick Searle
Ethical Technology

Posted: Jul 7, 2013

Singularity or something far short of it, the very real revolution in artificial intelligence and robotics is already encroaching on existential aspects of the human condition that have existed for as long as our history.

Robotics is indeed changing the nature of work, and is likely to continue to do so throughout this century and beyond. But, as in most technological revolutions, the impact of change is felt first and foremost in the field of war.

In 2012, IEET Fellow Patrick Lin had a fascinating article in the Atlantic about a discussion he had at the CIA revolving around the implications of the robotics revolution. The use of robots in war raises all kinds of questions in the area of just-war theory that have scarcely begun to be addressed. An assumption throughout Lin’s article is that robots are likely to make war more, not less, ethical, since robots can be programmed to never target civilians, or to never cross the thin line that separates interrogation from torture.

This idea, that the application of robots to war could ultimately take some of the nastier parts of the human condition out of the calculus of warfare, is also touched upon from the same perspective in Peter Singer’s Wired for War. There, Singer brings up the case of Steven Green, a US soldier charged with the premeditated rape and murder of a 14-year-old Iraqi girl. Singer contrasts the young soldier “swirling with hormones” with the calm calculations of a robot lacking such sexual and murderous instincts.

The problem with this interpretation of Green is that it relies on an outdated understanding of how the brain works. As I’ll try to show, Green is really more like a robot soldier than most human beings are.

Lin and Singer’s idea of the “good robot” as a replacement for the “bad soldier” is based on an understanding of the nature of moral behavior that can be traced, like most things in Western civilization, back to Plato. In Plato’s conception, the godly part of human nature, its reason, was seen as a charioteer tasked with guiding the chaotic human passions. People did bad things whenever reason lost control. The idea was updated by Freud with his Id (instincts), Ego (self), and Super-Ego (social conscience). The thing is, this version of why human beings act morally or immorally is most certainly wrong.

The neuroscience writer Jonah Lehrer, in his How We Decide, has a chapter, “The Moral Mind,” devoted to this very topic. The odd thing is that the normal soldier does not want to kill anybody, not even enemy combatants. Lehrer cites a study of thousands of American soldiers after WWII done by U.S. Army Brigadier General S.L.A. Marshall.

His shocking conclusion was that less than 20 percent actually shot at the enemy, even when under attack. “It is fear of killing,” Marshall wrote, “rather than fear of being killed, that is the most common cause of battle failure of the individual.” When soldiers were forced to directly confront the possibility of harming another human being- a personal moral decision- they were literally incapacitated by their emotions. “At the most vital point of battle,” Marshall wrote, “the soldier becomes a conscientious objector.”

After this study was published, the Army redesigned its training to reduce this natural moral impediment to battlefield effectiveness. “What was being taught in this environment is the ability to shoot reflexively and instantly… Soldiers are de-sensitized to the act of killing until it becomes an automatic response” (pp. 179-180).

Lehrer, of course, has been discredited as a result of plagiarism scandals, so we should accept his ideas with caution. Yet they do suggest what we already know: the existential condition of war is that it is difficult for human beings to kill one another, and well it should be. If modern training methods are meant to remove this obstruction in the name of combat effectiveness, they also remove the soldier from the actual moral reality of war. This moral reality is the reason why wars should be fought infrequently and only under the most extreme of circumstances. We should only be willing to kill other human beings under the most threatening and limited of conditions.

The designers of robot warriors are unlikely to program this moral struggle with killing into their machines. Such machines will kill or not kill fellow sentient beings as they are programmed to do. They will be truly amoral in nature, or to use a loaded and antiquated term, without a soul.

We could certainly program robots with ethical rules of war, as Singer and Lin suggest. These robots would be less likely to kill the innocent in the fear and haste of the fog of war. It is impossible to imagine robots committing the horrible crime of rape, which is far too common in war. All these things are good. The question for the farther future is: how would a machine with a human or supra-human level of intelligence experience war? What would its moral/existential reality of war be, compared to how the most highly sentient creatures today, human beings, experience combat?
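A minimal sketch in Python of what such mechanical rule-following might look like - every name here (Target, ENGAGEMENT_RULES, may_engage) is hypothetical, invented purely for illustration, and not any real weapons-control system:

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    is_surrendering: bool
    expected_civilian_harm: int  # estimated collateral casualties

# Encoded "rules of war," applied mechanically and without exception.
ENGAGEMENT_RULES = [
    lambda t: t.is_combatant,                 # never target civilians
    lambda t: not t.is_surrendering,          # never target the surrendering
    lambda t: t.expected_civilian_harm == 0,  # never accept collateral harm
]

def may_engage(target: Target) -> bool:
    # True only if every rule permits engagement. Note what is absent:
    # no reluctance, no sympathy, no moral struggle. If the rules say
    # yes, the machine engages - the amorality described above.
    return all(rule(target) for rule in ENGAGEMENT_RULES)

print(may_engage(Target(True, False, 0)))   # True: engages without hesitation
print(may_engage(Target(False, False, 0)))  # False: a rule fired, not a conscience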

Singer’s portrayal of Steven Green as a flawed human being whose “hormones” overwhelmed his reason, and who is therefore ethically inferior to the cold reason of an artificial intelligence with no such passions to control, is telling, and again rests on the flawed Plato/Freud model of human conscience. A clear way to see this is by looking inside the mind of the rapist and murderer Green, who, before he committed his crime, was quoted in the Washington Post as saying:

I came over here because I wanted to kill people…

I shot a guy here when we were out at a traffic checkpoint, and it was like nothing. Over here killing people is like squashing an ant. I mean you kill somebody and it’s like ‘All right, let’s go get some pizza’.

In other words, Green is a psychopath.

Again we can turn to Lehrer, who describes the serial killer John Wayne Gacy:

According to the court-appointed psychiatrist, Gacy seemed incapable of experiencing regret, sadness, or joy. Instead, his inner life consisted entirely of sexual impulses and ruthless rationality. (p. 169)

It is not the presence of out-of-control emotions that explains the psychopath, but the very absence of emotion. Psychopaths are unmoved by the very sympathy that makes it difficult for normal soldiers to kill. Unlike other human beings, they show no emotional response when shown depictions of violence. In fact, they are unmoved by emotions at all. For them, there are simply “goals” (set by biology or the environment) that they want to achieve. The means to those goals, including murder, are, for them, irrelevant. Lehrer quotes G.K. Chesterton:

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

Whatever the timeline, we are in the process of creating sentient beings who will kill other sentient beings, human and machine, without anger, guilt, or fear. I see no easy way out of this dilemma, for the very selective pressures of war appear to be weighted against programming such moral qualities (as opposed to rules for whom and when to kill) into our machines. Rather than ushering in an era of “humane” warfare, on the existential level- that is, in the minds of the beings actually doing the fighting- the moral dimension of war will be relentlessly suppressed. We will have created what is, in effect, an army of psychopaths.


Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history at Delaware Valley College and works for the PA Distance Learning Project.


COMMENTS


“Rather than ushering in an era of “humane” warfare, on the existential level- that is, in the minds of the beings actually doing the fighting- the moral dimension of war will be relentlessly suppressed. We will have created what is, in effect, an army of psychopaths.”

This describes succinctly the “Terminator” psychopathy and moral dilemma, and although I do not myself subscribe to the pop-culture dystopia (from the 70s-80s movie mythos - more than 30 years indoctrinated!), really this is fiction becoming reality now, and a small step to employing not merely AI but future AGI as vanguard for Humanity? Perhaps we need to re-evaluate the old movie/novella messages. Connor S. instructs, “There’s no fate but what WE make” - seems WE are not listening, yet?

I read Patrick’s comprehensive article previously, and there seems very little he leaves to question.

However, perhaps WE Humans need to evaluate the need for war more closely, which is an argument separate from the use and utility of war machines?

Why do Humans wage war?

1. The noble argument, and customary dissemination of propaganda, is to protect “freedoms” in the face of threat, and oppose oppression of peoples in this ethic?

2. Arguably the more realistic motives are for profit and gain, of land or resources, through careful manipulation of politics and international protocols?

Where these rationales persist, there will no doubt be extensive and “profitable” use of AI in warfare. Yet where there is no longer profit to be made, then even these machines become expensive and eventually redundant? I share your concerns, though, and speculate that the redundancy of war will not be happening any time soon, or for the foreseeable future of “Humanity”.

Seems that “wars” and violence for “personal profit” and gain are a “very” Human trait and characteristic, and war is the “inflation” and expansion of this characteristic?

Have certainly learned a lot about this term “Rationality” this weekend, and I will be reflecting and utilizing it with more scrutiny in future!

Rationality, Rational, Rationale, Ration - seem to have different applications?

We should evaluate our Rationale - for wars?

Also, regarding the Super-Ego: Freud is as outmoded as Jung (perhaps not quite?), but this metaphor is still worthy? (This “Social Conscience” - the origins of belief in deity perhaps?)

“His shocking conclusion was that less than 20 percent actually shot at the enemy, even when under attack. “It is fear of killing,” Marshall wrote, “rather than fear of being killed, that is the most common cause of battle failure of the individual.” When soldiers were forced to directly confront the possibility of harming another human being- a personal moral decision- they were literally incapacitated by their emotions. “At the most vital point of battle,” Marshall wrote, “the soldier becomes a conscientious objector.””

This is truth? Plato’s truth? Whence from our Social Conscience, at what age instilled?

These learned memes of non-violence are strong Luke! And this “fear” of transgression transfixiating, a mental short-circuit loop!

Here lies usefulness still for this metaphor Super-ego?


——

ps. @ Rick - You may also find the below relevant if you have not already seen this?

András Kornai - Bounding the Impact of AGI -
Winter Intelligence Oxford

“Humans already have a certain level of autonomy, defined here as capability for voluntary purposive action, and a certain level of rationality, i.e. capability of reasoning about the consequences of their own actions and those of others. Under the prevailing concept of AGI we envision artificial agents that have at least this high, and possibly considerably higher, levels of autonomy and rationality.

We use the method of bounds to argue that AGIs meeting these criteria are subject to Gewirth’s dialectical argument to the necessity of morality, compelling them to behave in a moral fashion, provided Gewirth’s argument can be formally shown to be conclusive. The main practical obstacles to bounding AGIs by means of ethical rationalism are also discussed.”

youtube.com/watch?hl=en-GB&gl=GB&client=mv-google&v=lh2DmUHRRyw&fulldescripti;


pps. @ Kris Notaro

Kris, I notice that Multimedia links/URLs at IEET are rolling, and that any links bookmarked and shared never point to the correct article? Is there any way around this?

 





@CygnusX1:

For us humans, the causes of war were stated no better than by Thucydides: fear, honor, and profit.

So far at least, these war machines we are building are plugged into those same human motives for war. I am really much less nervous about a Terminator/Skynet-type scenario than I am about this symbiosis between our worst instincts and the efficiency of machines.

Whereas fear, honor, and profit motivate the rulers when it comes to war, I think the effect of these on the individual soldier is largely staged. There is the very real fear when battle is joined, yes, but it took a whole system, one that works largely for the benefit of those at the apex of power, to get the individual into that situation in the first place. Then there are the superficial rituals of honor- medals and the like- and pay that is hugely disproportionate compared to the gains of the conqueror.

Individual human soldiers are “messy”. They need constant grooming, high salaries because of the risks, and actually have trouble harming one another. Machines are the perfect mercenaries- and that is what I am afraid of.

In terms of non-violent tendencies, I don’t think this has anything to do with the “Super-Ego”- the Nazis had a clock-like Super-Ego.

If our relatives in the animal kingdom are any example we have an innate sense of empathy and respond with revulsion and compassion when some other animal is in pain. We will, however, also violently defend ourselves and are prone to murderous rages given the right circumstance and given the right environmental feedback. These rages or crimes of passion are, I think, distinct from the cold meticulous rationality that is necessary to wage war successfully. 

Thanks for the link, but I find Ethical Rationalism like that of Alan Gewirth somewhat suspect. A psychopath like Green would respond to claims that he was contradicting his own status as an agent with a likely “who gives a damn,” and machines, unless so programmed, are unlikely to hit upon Ethical Rationalism independently. They are, and will be for the foreseeable future, programmed by us, and therefore will reflect our vices and, hopefully, our virtues as well.

So I agree with you- it is us we should be working on.





@CygnusX1 - Thank you - Fixed!





This notion that robots would somehow be “more just” or “more moral” than human soldiers because they can be programmed to take the high road is laughable.

“For those regarded as warriors, when engaged in combat the vanquishing of thine enemy can be the warrior’s only concern. Suppress all human emotion and compassion. Kill whoever stands in thy way, even if that be Lord God, or Buddha himself. This truth lies at the heart of the art of combat.”

Robots are just tools, like a gun or a sword.  Get real and understand the ultimate truth of war: kill whoever stands in thy way.  If you call this “psycho,” then OK.  Remember, victors write the history.





@ Rick

Sure enough, Super-Ego is merely metaphor, yet nurture of “Social Conscience” may be an investment for the future worthy of investigation (social science - Jonathan Haidt and his ideas on social unity, empathy, compassion, mirror neurons, war - in fact most of the areas of concern), and neither can this “Social Conscience” be disenfranchised from the minds of individuals; it is reliant upon their cooperation and participation in evolving collective philosophies/ideologies. Once again this is all a part of the miracle?

Agreed that even if AGI does eventually arrive at/achieve its “own sense” of morality in machines, this will most likely not be superior or even relevant to Human values, which may then lead us into conflicts with our creations - the Terminator scenario again?

War machines or Finance Algorithms, there is little difference concerning ideals of efficiency and application for rationality. Although I will remind again, what can be devised “for” can also be utilized “against”, such is the balance of nature, and in justification of morality? (thesis/antithesis etc)


@ dobermanmac

The argument is that both Psychopaths and uber-rational machines lack empathy and thus morality, and for all “intents” may be treated as similar, not that machines are technically Psychopaths.

That is a quote from “Kill Bill,” yes? A noble form of Art is war, yet still, we have to differentiate between this violence of aggression and violence for defense? Would the killing of a Buddha really be necessary? This is why we Humans possess “minds” - to stop and “think” (see also Libet’s Veto Function?).


——

ps. You’ve most likely seen news on this Robot Jet bomber, smarter than your average drone, and those lines and elegant design, a thing of beauty isn’t she? (Think this was also featured in the SciFi movie “Skyline” nuclear capable)

Just how close are we Humans to permitting these machines Free license to kill?


US NAVY X-47B ROBOT FIGHTER JET
COMPLETES FIRST PHASE OF TESTING

singularityhub.com/2012/06/29/us-navy-x-47b-robot-fighter-jet-completes-first-phase-of-testing/





An army of psychopaths is what we ALREADY have at the top of the command and control hierarchy! Psychopaths are masters at disguising themselves by presenting the emotions the target expects or desires. It is now well known that psychopaths tend to congregate at the top of the control hierarchies of the various industries and organizations, especially military, corporate, and government. Thus we have a literal psychopath control structure, creating and designing their militarized control grid via autonomous robotics, drones and automation, and then we expect some sort of Paradise on Earth?? Huh? Until the psychopaths in power are removed, none of the techno-utopia will ever occur. Now we have advances in neuroscience to the point THERE WILL NO LONGER BE FREE WILL, CONSCIOUSNESS AND CHOICE…period! A free-thinking mind will be gone forever, once those little nano-robots (nanites) start rewiring your brain… or alternatively, millions are convinced to “upload” to some fictitious virtual Valhalla, never to be heard from again!





@ Renegade

“THERE WILL NO LONGER BE FREE WILL, CONSCIOUSNESS AND CHOICE…period! A free-thinking mind will be gone forever, once those little nano-robots (nanites) start rewiring your brain… or alternatively, millions are convinced to “upload” to some fictitious virtual Valhalla, never to be heard from again!”

Your points are valid, yet you are also not going to permit this are you? That’s what this site is all about, arguing for ethics of emerging technologies!

Actually, if I offered you the opportunity for a Virtual Reality all of your very own, filled with your own ideology, philosophy, values, integrity, and peoples, I’m betting you would be more than content? I know I would. (The ultimate Longevity Retirement package - the future of Real Estate is Virtual?)

“Welcome to my World, won’t you come on in.. ?”

;0)





It is true we are at the CUSP of collectively choosing either Paradise or Oblivion! Look at the very titles of some of the latest sci-fi movie releases: OBLIVION, AFTER EARTH, S.T. INTO DARKNESS, WORLD WAR Z… Hollywood is not there for entertainment, it is for programming… Many predictive-programming movies later turned into reality (e.g., 9/11). The goal of the elites is to get humanity AS A WHOLE to collectively surrender their free choice and free will in favor of the global police/military surveillance grid… then the takeover will be complete… What happens when you no longer have free will? How will you know if you even have it? I, the Master Programmer, will program your reality!
Most in the 1st-world countries ALREADY live in a VR world via their smartphones, Internet, gaming, PCs, etc… Soon an INTERNET OF THINGS… where EVERYTHING is wired into the Matrix Control Grid…
A virtual Valhalla a la the CAPRICA scenario?





@ Renegade

And now “Pacific Rim”. Yes, these Hollywood blockbuster movies are used for EDUtainment, and also for indoctrination of war and advertisement for US Imperialism and military might and machinery. Yet when was it ever any different?

Video games like “Medal of Honor” - for what purpose indeed? I complained once to the makers of the game “Assassin” because it was portrayed as promoting holy war and the Crusades at a rather sensitive time after 9/11.

What can I say but reiterate that we know, and whilst we know, we have no excuse to stick our heads in the sand? Apathy is the World’s worst enemy? Yes, freedoms are under attack, right here, right now, and for tomorrow - yet at what time can we afford not to care? There is no such relaxation (outside of Virtual retirement)?


Concerning Free Will - in answer to your question, I’m sure our Free Will is limited, and minds are indeed susceptible, yet there is still room for maneuver, still the option to declare “NO!”, such like Zoey? And there begins a whole new episode in the expression of freedoms and will to action - bombing in the name of Monotheism and “the One True God” is where the Cylon crisis all began - yet where did it end? At what outcome? “All of this has happened before, and will happen again”? Not entirely implausible?

If you doubt your ability and Free Will now or in future, I can only advise to reflect upon this..

Libet Experiments

“The neurologist Benjamin Libet performed a sequence of remarkable experiments in the early 1980s that were enthusiastically, if mistakenly, adopted by determinists and compatibilists to show that human free will does not exist.

His measurements of the time before a subject is aware of self-initiated actions have had an enormous, mostly negative, impact on the case for human free will, despite Libet’s view that his work does nothing to deny human freedom.

Since free will is best understood as a complex idea combining two antagonistic concepts – freedom and determination, “free” and “will,” in a temporal sequence, Libet’s work on the timing of events can also be interpreted as supporting our “two-stage model” of free will.

Indeed, Libet himself argued that there was still room for a veto over a decision that may have been made unconsciously over 300 milliseconds before the agent is consciously aware of the decision to flex a finger, but before the action of muscles flexing. In his 2004 book, Mind Time: The Temporal Factor in Consciousness, he presented a diagram of his work.”

“Libet says the diagram shows room for a “conscious veto.” The finding that the volitional process is initiated unconsciously leads to the question: is there then any role for conscious will in the performance of a voluntary act (Libet, 1985)?

The conscious will (W) does appear 150 msec before the motor act, even though it follows the onset of the cerebral action (RP) by at least 400 msec. That allows it, potentially, to affect or control the final outcome of the volitional process. An interval of 50 msec before a muscle is activated is the time for the primary motor cortex to activate the spinal motor nerve cells, and through them, the muscles. During this final 50 msec, the act goes to completion with no possibility of its being stopped by the rest of the cerebral cortex.

The conscious will could decide to allow the volitional process to go to completion, resulting in the motor act itself. Or, the conscious will could block or “veto” the process, so that no motor act occurs.”

http://www.informationphilosopher.com/freedom/libet_experiments.html
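
To pull the quoted timings together, here is a rough worked timeline of the figures above (a sketch of the reported numbers only, not Libet’s own diagram; times in milliseconds before the muscle act at t = 0):

# A rough sketch of the timings quoted above (not Libet's own diagram).
RP_ONSET = -550            # unconscious readiness potential; W follows it by >= 400 ms
W_AWARENESS = -150         # conscious will (W) appears ~150 ms before the act
POINT_OF_NO_RETURN = -50   # final ~50 ms: motor cortex drives the muscles, no veto

veto_window_ms = POINT_OF_NO_RETURN - W_AWARENESS  # 100 ms
print(f"Conscious veto window: {W_AWARENESS} ms to {POINT_OF_NO_RETURN} ms "
      f"({veto_window_ms} ms in which the act can still be blocked)")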

 

 





Look around you today… is not everything and everyone simply another program running (plus sub-routines)? Are not most people simply biological androids? What about DNA sequencing… another program running? Although the experts say only some 2% of the total DNA is for body functioning… the rest is a mystery… What is the difference between a VR game world and what is around us? Second Life, anyone? Zoey “uploaded,” and became trapped in the virtual world… Convincing people to upload to a simulated Paradise of their own choosing seems like a great plan by the New Gods of the elites!
The Nature vs. Nurture debate… but it is hard to argue that most of humanity are anything more than programmable ‘droids… thus the rush for Brain Mapping neurotech and qubit quantum computing.
http://www.dataasylum.com/media-bioapi-chemtrails-references4.html#ultra





Update..

US unmanned drone jet makes first carrier landing
11 July 2013 Last updated at 17:53

“The US Navy X-47B drone touched down on the deck of the USS George H W Bush as it sailed off the coast of Virginia. The bat-wing aircraft can deliver guided bombs from a range of 3,200km (1990 miles). It is the first drone to land on a ship at sea.”

bbc.co.uk/news/23276968




