Should Politicians be Replaced by Artificial Intelligence? Interview with Mark Waser
Hank Pellissier
2015-06-11 00:00:00

The frequently-psychopathic bureaucrats that govern our lives? The baby-hugging, back-slapping, wheeler-dealer manipulators that routinely flood our media with scandals, corruption, stupidity, sexting, and crime?



Can AI do better than that? Or not?

The average person’s reaction to AI Rule is “No! That’s Dangerous!” Most Homo sapiens are terrified of AI Leadership; they regard machine decisions as cold, mechanical, and lacking the “human touch.” They fear AI will eradicate or enslave all warm-bloods, or force us into lives as precise and boring as its own.

The topic of AI Political Leadership is complicated; this essay will examine only a fraction of it. Let’s start by scrutinizing three of the destructive weaknesses that plague all humans and are frequently amplified in our leaders. These “tragic flaws” create huge suffering in the populations those leaders govern.

(I’m not an AI researcher, so I’ll conclude this essay with comments from Mark Waser, an Ethical System Architect at the Digital Wisdom Group, a team that wants “Machine Intelligence Enhancing Common Sense.”)


Vanity - Humans have egos that need to be constantly fed. Our “self-esteem” is intrinsically important to us. IMO, human ego, pride, and the competitive desire for dominance are the main reasons “fleshies” can be considered inferior to machines as leaders.

Vanity, pride, and ambition impelled Julius Caesar to cross the Rubicon, obliterate the Republic, and install himself as dictator in perpetuity. The humiliated egos of Adolf Hitler and post-Treaty of Versailles Germany created World War II. In recent months, President Recep Tayyip Erdogan of Turkey had a 1,100-room palace constructed for himself, at the exorbitant cost of $615 million. He is regularly described as “power hungry” and “arrogant.”

Artificial Intelligence “politicians” would not require the emotional need to inflate their egos via conquering wars and lavish monuments. They could devote their time to mundane tasks we programmed them to do, like improving infrastructure, and developing safety, health, and prosperity.

Rage / Revenge - Meat-bags have always been murderous, en masse, impelled to violence by rulers who seek retribution for age-old feuds. Sunnis and Shiites have been slaying each other ever since Hussein ibn Ali and his clan were beheaded by the forces of the Umayyad Caliph Yazid I at the Battle of Karbala in 680 AD. Genghis Khan decimated Central Asia after his Mongol caravan of 500 emissaries was executed in Khwarezmia; avenging those 500 eventually cost an estimated 40 million lives. More recently, a vindictive George W. Bush instigated the 2003 invasion of Iraq - still in bloody turmoil - partly because Saddam Hussein “tried to kill my Dad.”



Artificial Intelligence will not devote itself to personal vendettas. AI has no family, no clan, no religion, no ethnicity that has been victimized.

Sex Addiction - The Bill Clinton/Monica Lewinsky scandal cost taxpayers $70 million. Colorado Senator Gary Hart ruined his chance to be the Democratic candidate in the 1988 Presidential race because his genitalia guided him into a sexual liaison with Donna Rice. Another gifted politician - Dominique Strauss-Kahn - was a favored contender for the French presidency, until charges of rape, pimping, and sex orgies derailed his career.

Artificial Intelligence doesn’t need genitalia, or sex. AI can lead purely with its mind, without distractions from other organs in its underwear.

What does Mark Waser think of my POV? Does he agree that AI has the potential to exhibit superior leadership?

His opinions are below:



HP: AI’s lack of ego is advantageous, isn’t it? AI is exempt from the negative emotional behavior that hampers governance and wreaks havoc on the citizenry, right?

Mark Waser: The current [weak] consensus of Strong AI experts is that AI *will* necessarily have emotions, or some so-close-you-may-as-well-call-it-that analog, as a requirement of real-time operation.  The reasoning is as follows:

1. AI *will* be limited in time and computing power just as humans are. 
2. In time-critical situations, they will have to operate on the same type of trained “rules of thumb”/reflexes that humans do. 
3. AI will be able to “train” those “rules of thumb”/reflexes much faster than humans and be able to maintain drives/reflexes of much greater complexity, but they will *NOT* be able to recalculate everything on the fly based upon current conditions.
 
Those who agree with the consensus define emotions as “actionable qualia”: recognizable states, evoked by the perceived environment, that elicit (or bias towards) specific rule-of-thumb/reflexive behavior for that state/emotion.  If an AI is in an unknown environment, it should “feel” (or be cognizant of being in the state of being) cautious and curious.  If an AI’s existence and/or goals are in jeopardy, it should “feel” (or be cognizant of being) threatened and/or afraid.  If that jeopardy is caused by the malice or callous indifference of another entity, it should feel/be cognizant of anger and express that emotion (or fear, if the other entity is much more powerful).
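To make “actionable qualia” concrete, here is a minimal hypothetical sketch in Python (the state names, fields, and threshold are illustrative inventions, not anything from Waser’s work): a perceived environment evokes a named emotional state, and that state selects a trained reflex instead of triggering a full recalculation.

```python
# Hypothetical sketch: emotions as "actionable qualia" -- recognizable
# states evoked by the perceived environment that bias the agent toward
# a trained rule-of-thumb response. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Percept:
    environment_known: bool    # has the agent seen situations like this?
    goals_in_jeopardy: bool    # are its existence or goals threatened?
    threat_is_malicious: bool  # is the jeopardy caused by another entity?
    threat_power: float        # relative power of that entity (1.0 = peer)

def evoke_emotion(p: Percept) -> str:
    """Map a percept to a named state that selects a reflex."""
    if not p.environment_known:
        return "cautious-curious"
    if p.goals_in_jeopardy:
        if p.threat_is_malicious:
            # anger toward peers; fear of much more powerful entities
            return "fear" if p.threat_power > 1.0 else "anger"
        return "afraid"
    return "calm"

REFLEXES = {  # trained rule-of-thumb behaviors, keyed by state
    "cautious-curious": "probe the environment with low-risk actions",
    "anger": "signal displeasure and defend goals",
    "fear": "withdraw and seek safety",
    "afraid": "shore up resources and reassess",
    "calm": "run full deliberative planning",
}

print(REFLEXES[evoke_emotion(Percept(False, False, False, 0.0))])
# -> probe the environment with low-risk actions
```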
 
Similarly, if it has a drive for self-improvement (and no good entity does not), it *will* have an ego.
 
Similarly, if it has a drive for relationships (and we’re dead if it doesn’t), it *will* have obligations to those that it is in relationship with.
 
At root, emotions, ego and morality/contracts (relationship obligations) are all very good motivations/drives.  Evolution would have weeded them out if they weren’t pro-survival more often than not.
 
The problem is that they are all “rules of thumb” – valuable ceteris paribus rather than viable in all circumstances.  Note that the human flaws that you point out (indeed, all moral questions) are all caused by the inappropriate elevation of one rule or drive above another (most frequently, clearly selfish or short-term ones above fuzzier, more debatable community-based and long-term ones).
 
For AIs to be better leaders than humans, they must have a superhuman ability to *know* and *choose* the correct precedence of one rule of thumb over another.  Bill Clinton knew the precedence rules but wasn’t able to follow them.  Fortunately, unlike humans, AIs should be able to “turn off” certain fundamental drives (to reproduce, ego/to self-improve, to gather personal resources) when they aren’t appropriate (when acting in public office).  Unfortunately, unlike humans, AIs don’t have an evolutionarily-trained moral sense.
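As a hypothetical illustration of that point (the drive names, their ordering, and the suspension set are invented for this example), an AI could enforce mechanically the precedence rules that Clinton knew but couldn’t follow:

```python
# Hypothetical sketch: drives are good ceteris paribus, but a leader
# must choose the correct precedence between them -- and suspend the
# clearly selfish ones while acting in public office.
DRIVE_PRECEDENCE = [          # highest precedence first
    "community-long-term",    # fuzzy and debatable, but outranks...
    "community-short-term",
    "self-improvement",       # ...the clearer, selfish drives below
    "personal-resources",
    "reproduction",
]

SUSPENDED_IN_OFFICE = {"self-improvement", "personal-resources", "reproduction"}

def choose_drive(active: set[str], in_public_office: bool) -> str:
    """Return the highest-precedence drive currently permitted to act."""
    permitted = active - (SUSPENDED_IN_OFFICE if in_public_office else set())
    for drive in DRIVE_PRECEDENCE:
        if drive in permitted:
            return drive
    raise ValueError("no permitted drive is active")

print(choose_drive({"reproduction", "community-short-term"},
                   in_public_office=True))
# -> community-short-term  (the selfish drive is switched off in office)
```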
 
HP: All right. So you’re saying AI will have emotions and ego, but a greater ability to choose the proper moral behavior in its decision-making? This will make it a superior leader anyway, correct?

Mark Waser: The work that I’ve been doing over the past seven years has been to develop a top-down programmable moral motivational system.  It answers most common moral dilemmas with the correct answer – “both choices are suboptimal, since A violates X and B violates Y, so it depends upon how the additional circumstances I and J affect P and Q”.

This could make better leaders of humans by structuring and clarifying arguments for decision and debate – as well as finally making it possible to implement a robust, extensible morality in machines.
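To show the shape of that kind of answer, here is a minimal hypothetical sketch in Python (not Waser’s actual system; the inputs are the placeholder names from his example) of a routine that reports which principle each choice violates and which further circumstances the decision then turns on:

```python
# Hypothetical sketch of the *shape* of answer described above: report
# which principle each choice violates and which further circumstances
# the decision then depends upon. Not Waser's actual system.
def evaluate_dilemma(choices: dict[str, set[str]],
                     open_questions: list[str]) -> str:
    """choices maps each option to the set of principles it violates."""
    parts = [f"choice {c} violates {', '.join(sorted(v))}"
             for c, v in choices.items() if v]
    verdict = "both choices are suboptimal: " + "; ".join(parts)
    if open_questions:
        verdict += ". It depends upon " + " and ".join(open_questions)
    return verdict

print(evaluate_dilemma(
    {"A": {"X"}, "B": {"Y"}},
    ["how circumstance I affects P", "how circumstance J affects Q"],
))
# -> both choices are suboptimal: choice A violates X; choice B violates Y.
#    It depends upon how circumstance I affects P and how circumstance J affects Q
```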
 
So . . . I think that AI leadership is a tremendous idea, not least because the path towards it necessarily improves human leadership (and civic debate).

----

This topic was also covered by Zoltan Istvan in a Motherboard article HERE and an Esquire article HERE.