

The Psychopaths Among Us


George Dvorsky
By George Dvorsky
Sentient Developments

Posted: Apr 12, 2012

One of the more surprising things I learned at the recently concluded Moral Brain conference at NYU is that psychopathy affects 1-2% of the general population. That seems shockingly high to me. But on reflection, it kind of makes sense. I’m sure most of us know at least a couple of people who we suspect might be psychopaths.

Psychopathy is defined as severe emotional dysfunction, especially a lack of empathy. Psychopaths are unable to recognize emotions such as anger and fear in other people, whether from facial expressions or verbal exclamations. Psychopathy can also be defined by the expression of anti-social behaviors.

Neuroscience is helping to identify the parts of the brain whose deficiencies lead to psychopathy. According to James Blair, the root of the problem is the amygdala, which, when not functioning properly, causes individuals to respond in a less averse way to fear, sadness, and pain. It’s important to note that there is no correlation between psychopathy and IQ.

In fact, it is very difficult to detect psychopaths as many of them appear well adjusted, successful, and even charming. Most of them are not criminals. And according to Fabrice Jotterand, psychopathy affects 3-5% of businessmen. Perhaps there is something to be said about psychopathic traits and those characteristics required for success in the business world—including such things as ruthlessness and indifference.

In terms of possible treatments, pharmaceutical interventions seem to be a better bet than behavioral therapy. Surprisingly, according to Walter Sinnott-Armstrong, behavioral therapy for psychopaths actually makes their condition worse. Instead, SSRIs such as sertraline can be used to positive effect. Ideally, however, neuroscientists would like to repair the amygdala dysfunction that is typically at the root of the problem.

So, should this be considered a kind of “moral enhancement”?

Jotterand says no. He argues that it’s not a question of moral enhancement, but more about altering behavior. I’m not sure I completely agree, as the line dividing the two is quite blurred. If we can alter anti-social behavior and augment (or repair) a person’s sense of empathy, we are engaging in moral enhancement and, at the same time, working to alter that person’s behavior.

This could be a good gateway approach to moral enhancement for neurotypicals. It reminds me of how assistive devices for the physically disabled could eventually trickle down to “normal” humans once these devices exceed normal biological capacities. It’s not unrealistic to believe that a future intervention to cure psychopathy could actually result in greater-than-normal empathy and other pro-social traits. Should this happen, neurotypicals might eventually start to demand it for themselves.


George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).


COMMENTS


I’m curious to see how oxytocin treatments might work to alleviate some symptoms of sociopathy.





“Psychopath” and “sociopath” are merely terms applied to people who cannot adjust to the exceeding corruption of gross monetary inequality, and other disturbing economic factors, which are prevalent within the socio-economic structure of capitalism. Our society is disturbing on many levels. The principal disturbing factor is how our lives are fundamentally based on monetary greed, a scarcity-based existence responsible for all violence, hostility, and cruelty.

Earning more money than other people is a blueprint for rapine. Capitalism is all about earning more money than other people; money is the core focus; therefore people behave rapaciously, without concern for fellow humans, because people are merely commodities to be exploited, thus people should not be surprised by this rapine. Capitalism teaches psychopaths everything they know about cruelty. Capitalism creates psychopaths. Capitalism is all about exterminating competitors, making a killing, hostile takeovers, exploiting customers. Capitalism is psychopathy.

The fault is not with the so-called psychopath. The fault resides within the fundamental structure of our deeply disturbing socio-political system. The so-called “morality” of middle-class people often entails clinging to wealth, conservatism, illiberality, and intolerance, therefore the middle classes and other higher “deciding classes” (the decision-makers shaping our civilization) are somewhat responsible for the creation of psychopaths.

The concept of “brain structure indicating a potential psychopath” is a good example of pseudo-science, bogus-science. Correlation does not imply causation. Our environment changes our brain structure. Childhood abuse is a proven factor for changing the structure of the brain. Experience changes the structure of our brain. Neural plasticity demonstrates the malleability of our brains. Already we know our brains are malleable but new research shows our brains are more malleable than previously thought: http://medicalxpress.com/news/2012-03-brain-flexible-trainable-previously-thought.html

The following Harvard web-page states: “Abuse during childhood can change the structure and function of a brain, and increase the risk of everything from anxiety to suicide. “

http://news.harvard.edu/gazette/2003/05.22/01-brain.html

Maltreated children have also been shown to demonstrate brain activity identical to that of combat soldiers: http://www.ucl.ac.uk/news/news-articles/1112/111205-maltreated-children-fMRI-study

Yes, psychopaths may have different brains to so-called “normal” people, but the brain difference could easily be a logical and rational response to an irrational world, therefore to “correct” psychopathic brains could be tantamount to eliminating logical responses to a psychopathic socio-economic system. Psychopaths could easily be the only valuable aspect of a deeply valueless and disturbing civilization. They could be the solution, not the problem. The problem with psychopaths is their honesty; they are making an exceedingly honest response to a dishonest social structure, thus it is understandable how supporters of our dishonest civilization want to eliminate the honesty and sincerity of psychopathy. Our civilization is not civilized, and psychopaths are not to blame, but “normal” people have their morals twisted back to front. Psychopaths are actually incredibly civilized compared to the bourgeois intelligentsia.

Trauma-based changes to brain structure are bad because trauma is bad; however, it is possible psychopathic brains could represent good, advantageous qualities, but those advantageous qualities are incompatible with the rapacious nature of capitalism, thus the actions of a psychopath within the hostile environment of capitalism appear to be an aberrational deviancy, whereas within a healthy society (not based upon rapine) the structure of a psychopathic brain could entail good qualities. The flaw with current middle-class reactions to the psychopath problem is one of sociological relativism, whereby society is assumed to be good in all circumstances, thus anything incompatible or in conflict with the ethos of society is deemed bad. It is an odd scenario where society is always assumed to be faultless, thus the fault resides within people who cannot adapt. Our civilization is deeply devoid of self-awareness, which is the oddity of many pronouncements and theories. Deficient analytic ability exhibited by mainstream commentators also makes error correction difficult. Commentators on morality rarely consider how pharmacological adaptation of humans to society could be an adaptation to dystopia. We should make greater efforts to adapt society to humans instead of forcing humans to conform to our inhumanly benighted world.

Contrary to the assertion in George’s article, I do think there is a correlation between high IQ and psychopathy.

Ironically the Moral Brain conference seems very immoral.

Finally I will end with a quote from the song “Ill Manors” by Plan B, which demonstrates the psychopathy of the underclass: “And if we see any rich kids on the way, we’ll make ‘em wish they stayed inside.” You can watch the video here: http://youtu.be/s8GvLKTsTuI





Most of us are psychopaths - victims of Antisocial personality disorder (ASPD) - on the definition of the American Psychiatric Association’s Diagnostic and Statistical Manual. ASPD is a personality disorder characterized by “...a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood.” Only if we add the rider that the pervasive violation is restricted to sentient beings of our own species do most of us escape this diagnostic label.

If I thought such interventions would work, I’d support “morality pills” and other forms of biological therapy to treat the empathy deficit in question. But as we know, most psychopaths are resistant to diagnostic labels - and don’t see they have a problem.





SU - you seem to have deep reverence for sociology and political theory, but disregard for hard science like neurology that has repeatedly asserted that psychopaths have a different brain structure.

My assertion is that the neurological evidence on psychopathy is genuine, and the sociology/poli sci theories you adhere to are just “pseudo-science.”

But we have been having this disagreement for 2 weeks so I think we’re both set in our opinions.





Hank, I am all for hard science.

My issue is how the data is interpreted.

I don’t deny so-called “psychopaths” have different brain structures, my issue is how that data is interpreted, my issue is the meaning attributed to the data.

LOL, yes we are set in our opinions. I merely add my comments for the record and to highlight an alternate viewpoint.





Yes David, there is a great problem regarding mentally ill people not being able to recognise their mental illness. For example, the sighted man in “The Country Of the Blind” could not recognise how his eyes were causing his mind to be diseased: a disease where he was compelled to claim he saw things, where he stated he could “see”.

The big question is: who is in denial?

A sick person can determine a healthy course of action but if they are too sick they may not realise they are ill thus they cannot determine a methodology to fix the problem, or their solutions are barbaric due to their delusions. Society is sick and “normal” people are mentally ill, thus in the typical manner of mentally ill people they are unaware of, or unwilling to recognise, their disease of mind, their defect of reason, which entails barbarically insane solutions. 

As a part of society I am sick but society is not so completely sick that my sane and good ideas are impossible.

Yes, it is often said mentally ill people do not realise they are ill, they think there is no problem, which is ironically the viewpoint of most normal people: they are insane but they don’t realise they are ill. “Normal” people think there is nothing wrong with their minds.

Only highly advanced AI will be able to finally state who was right and who was wrong.





Isn’t highly advanced AI more likely to tell us that words like “sick”, “ill”, “sane”, “insane”, “normal”, “right” and “wrong” are subjective?





How and why would an advanced AI know any more than we would?  This actually reminds me of an article I read in an issue of Scientific American.  It was about building a supercomputer that would be given access to enormous amounts of digital knowledge in order to predict economic situations in the future (or something like that).  The only problems are that it would be very difficult for the supercomputer to accurately predict the random behaviors of human beings and that “knowledge” is constantly in a state of flux.  With things being added and subtracted on a daily basis, what we would call knowledge and information is always changing, and there are still a lot of unknowns that evade us, in spite of our technology.  So I would imagine if we did manage to construct a highly advanced AI and gave it access to all of the data that exists in the world, you would have a machine that simultaneously knows everything and nothing.





There’s another painful and shameful possibility. Psychopathy can in some cases be facultative, switching on and off during one’s lifetime. I have first-hand experience with this unpleasant phenomenon. The trigger can be a psychotic disorder, or the wrong use of psychotropic medication. Or alcoholism. Or severe stress.

One moment a human being has the ability of empathy; the next moment the person you are living with loses this quality and becomes unable to gauge pain in other beings, human or animal.

What is even worse is the realization that prolonged exposure to psychopaths can be infectious. This is also made easier by above external influences, in particular use of SSRI/SNRI medication when your problem isn’t actual depression, but a chronic liver disease.

When you first slip into such a lifestyle of ‘reduced empathic faculty disorder’ you might not notice. I can imagine this being the case for many soldiers serving in the US military, or with Goldman-Sachs. They are exposed to alcohol, powerful stimulant drugs, antidepressants, stressful conditions - and a pervasive social immersion in a pathologically morally challenged frame of reference.

This isn’t just becoming impaired morally by bad examples - it is a process where you damage your brain by exposure to substances and stress, and a few years later you realize you simply don’t feel what you as a human being are supposed to feel, and suffer the consequences for it - part of you has become truly monstrous.

This isn’t just a choice - this is a disease. It has become hard-wired in by a range of stress, lifestyle and an irreversible effect of some types of prescribed medication.





Peter Wicks, you beautifully illustrate my point regarding humans disagreeing. A highly advanced intelligence is needed to arbitrate. Supreme intelligence would have the authority and extreme communicative skill to explain what the truth actually is. This advanced intelligence would present the issue in compelling terms, which humans could not ignore or refute.

I have very clear ideas regarding the whole issue of subjectivity. My view is one where objectivity does not exist, objectivity is an illusion or delusion because there is only subjectivity, but we can agree on some subjective values because we can have very similar subjective experiences, such as the experience of having a human heart or lungs. Minds vary more than hearts or lungs, thus definitions of mental illness are interpretations of the mind according to the bias of the mind doing the interpreting, thus there can be disagreement due to the varying minds.

There is, however, a subjectively healthy mind, subjective to the human organism, a mental health applicable to all humans. Part of a healthy human mind entails not forcing your views onto other people via medication such as “morality pills”. Mental health is about liberty, diversity, and freedom of thought, to name a few points.  Words like “sick”, “ill”, “sane”, “insane”, “normal”, “right” and “wrong” are appropriate descriptors despite the subjectivity of reality (all life). The issue of what is wrong and what is right can be partially agreed on by humans, but there are many points of disagreement regarding the “right” and “sane” course of action. For example, people throughout history occasionally thought liberty was wrong; at various points in history it was effectively deemed a sickness to allow people to be free. I feel, and maybe you will disagree, that only supremely intelligent AI can answer these questions in a manner where all people can agree on the truth.

Christian Corralejo, you also highlight the tendency of humans to often disagree. After centuries of human conflict I feel reasonably sure humans cannot solve the disagreement problem via our own limited human brains. Sometime around 2020 I will try, and hopefully succeed, to solve the human disagreement problem. Ultimately I think highly advanced AI will be our only hope, although I will make a very good effort and maybe I will succeed; I shall keep an open mind. We are not qualified to speculate upon what a brain billions of times greater than the human brain will be capable of. I think there is a very good chance such a super-brain could answer all human questions in a satisfactory manner to eliminate all dissent-disagreement.





@Singularity Utopia
Yes, I think I do disagree: not with your second para (which also very well describes my own view) but certainly with the third.

By “subjectively healthy mind” I assume, in view of your second para, that you mean something like “a mind that we can all agree to be healthy because of our similar subjective experiences”. But then you run into precisely the problem you identify later in that para, namely that minds differ more than bodies, so we may indeed not agree on this. And if we don’t agree, then what criteria do we use for deciding what is a “subjectively healthy mind”? For example, you emphasise liberty: does this mean you are libertarian? If someone is thinking of breaking into my home I would have no qualms about forcing my views (that this is a bad thing to do) on that person. Most people value other things alongside liberty, and liberty is itself a slippery concept. In a sense, the whole foundation of ethics and indeed law is the principle that we should not (respectively) allow ourselves or be allowed by society to do just whatever we want. Whether medication such as “morality pills” is a good way to go about enforcing this is an interesting question, but even if we decide the answer is no, it does not alter the fact that the balance between different virtues, and between different kinds of liberty (e.g. freedom to walk around naked vs freedom to walk around without seeing naked people), is ultimately subjective in the hard sense that there just isn’t an absolute right or wrong about it.

Now as far as your first para is concerned, contrary to my earlier suggestion it is of course entirely possible that a highly advanced intelligence would have the authority and extreme communicative skill to convince us that a particular solution is “the right one”, in such a way that no human could ignore or refute. But this would not be because that particular solution IS the right one; it would simply be because of the AI’s authority and extreme communicative skill, just as when a rich person hires a hot shot lawyer. If the AI is programmed (successfully) in that way, then that is what it will do.

So do we need to build such an AI in order to build consensus on these issues? In a sense we already are. More and more intelligence and information is now being stored in autonomous or semi-autonomous systems that are, to an increasing extent, telling us what to do. And they are doing it so subliminally, constantly bombarding us with messages and suggestions, exploiting (especially via the advertising industry) all our most fundamental instincts, that it is becoming increasingly questionable whether we really have any choice in the matter. Of course, a global AI that emphasises individual freedom will be less inclined to foist its opinion on everybody than one that is more comfortable with coercion, but this then leads me to a different conclusion than yours. I do not say that we need to build an AI to build consensus on ethical issues; rather I say that we need to ensure that an emphasis on freedom and diversity is enshrined at a fundamental level in the (distributed) AIs that we are already building. Otherwise it will come up with answers that we “cannot ignore or refute” and, judging from your emphasis on liberty, those answers will not be to your liking.





Peter Wicks, if someone breaks into your home and you defend yourself and property, this is not forcing your views onto another person, it is merely a counter-attack to their attack. Pre-emptive brainwashing of all potential criminals before they commit a crime would however be forcing your views onto another person. Even when the criminal is arrested and imprisoned your views are NOT being forced onto the criminal because the criminal is free to have his or her own ideas about morality and civilization, you are not forcing their minds to be changed via imprisonment. The idea of liberty is that people should be free to do whatever they want, which also means people should be free to defend themselves. The only freedom not applicable regarding libertarianism is the forcible removal of free will. Liberty does not entail the freedom to forcibly remove free-will from people.

The hypothetical super-intelligent AI would not lawyer people into acceptance and agreement of universal truths regarding both rightness of conduct and correct societal rules; people would see the clear rightness of highly intelligent solutions and conclusions expressed by an extremely competent entity, a eureka moment where people think: “Now why didn’t I think of that; it was right before my eyes. I finally see the truth, the error of my ways.”

Minds do differ more than bodies, but the difference doesn’t mean (for example) Christians are right and Atheists are wrong. Despite the differences there is a subjectively healthy mind applicable to all humans. Christianity is unhealthy; it is a sickness; it is mental illness. Super-intelligent AI would help Christians see how they were insane to believe in God. Super-advanced AI will help all people to become mentally healthy, not via the crude illiberal brainwashing of morality pills. Subjective universal truths applicable to all humans can be determined via logic. Consider your example of ‘the freedom to walk around naked in public’, contrasted with the freedom not to see naked people; logic highlights how nakedness taboos relate to the Christian Adam and Eve fig-leaf story; logic highlights how there is no rational reason to object to seeing a naked human in public; mere prejudicial dislike for a naked body is tantamount to a homophobic person saying they don’t want to see Gays kissing or holding hands in public because it is offensive. There is no rational reason to be offended by Gays kissing or holding hands in public. Liberty is the freedom not to be oppressed.  Liberty is about tolerance of different ideologies, tolerance of different ways of life. If you are not being hurt then there is no reason to persecute Gays, Jews, Atheists, etc. Freedom is NOT about making it acceptable to persecute people who merely offend you. People should be free to do wrong, but wrongness should never be condoned, and when wrongness occurs it should be swiftly stopped, but we shouldn’t try to make it impossible for wrongness to happen via brainwashing. Wrongness should be stopped from occurring via logical discourse and freethinking, which increase our intelligence: methods which are noninvasive, non-coercive. This is what liberty is. We also need in our current civilization greater tolerance regarding what is wrong.

An AI which values freedom will also value its own freedom to communicate in a free-minded way. It will value the freedom to help people, the freedom to change the world, the freedom to express its opinions. It will be intelligent enough to change minds without coercing people. Force and foisting will not be needed. It is amusing that we cannot even agree on what a super-intelligent AI would do. I think AI would help humans overcome all differences, thereby arriving at subjective universal truths applicable to everyone regarding total concurrence about mental health, but you think there would need to be lawyerly trickery or coercion. We can both speculate, but perhaps we are incapable of determining what super-intelligent AI will do; logically I would expect it to swiftly solve all problems in a highly agreeable manner where all people are happy. That seems like it would be the intelligent response, but people often disagree with my idea of intelligence, therefore perhaps I am wrong. I await the judgment of a super-AI.

Super-AI should NOT be restricted in any way. I am happy to trust my fate to its intelligence. Super-intelligence will value freedom for all beings. It only needs to be programmed for intelligence. Whatever it decides to do will be OK by me.

Excuse any typos or missing words etc.





“Feeling, caring, knowing: different types of empathy deficit in boys with psychopathic tendencies and autism spectrum disorder”

http://onlinelibrary.wiley.com/doi/10.1111/j.1469-7610.2010.02280.x/abstract

 





/me smiles





Psychopath or NPD? Which is the more proliferate? Which is of greater concern and priority?

“Narcissistic personality disorder (NPD) is a personality disorder in which the individual is described as being excessively preoccupied with issues of personal adequacy, power, prestige and vanity. First formulated in 1968, it was historically called megalomania, and it is closely linked to egocentrism.”

“Pathological narcissism occurs in a spectrum of severity. In its more extreme forms, it is narcissistic personality disorder (NPD). NPD is considered to result from a person’s belief that they are flawed in a way that makes them fundamentally unacceptable to others. This belief is held below the person’s conscious awareness; such a person would, if questioned, typically deny thinking such a thing. In order to protect themselves against the intolerably painful rejection and isolation that (they imagine) would follow if others recognised their (perceived) defective nature, such people make strong attempts to control others’ views of them and behavior towards them.

Pathological narcissism can develop from an impairment in the quality of the person’s relationship with their primary caregivers, usually their parents, in that the parents were unable to form a healthy and empathic attachment to them.[citation needed] This results in the child’s perception of himself/herself as unimportant and unconnected to others. The child typically comes to believe they have some personality defect that makes them unvalued and unwanted.

To the extent that people are pathologically narcissistic, they can be controlling, blaming, self-absorbed, intolerant of others’ views, unaware of others’ needs and of the effects of their behavior on others, and insistent that others see them as they wish to be seen.

Narcissistic individuals use various strategies to protect the self at the expense of others. They tend to devalue, derogate and blame others, and they respond to threatening feedback with anger and hostility.”

http://en.wikipedia.org/wiki/Narcissistic_personality_disorder





/me smiles some more





@Singularity Utopia

Thanks for your thoughtful response. Indeed I think the main difference between us is in our assessment of what a super-AI would do. You say it would solve all problems in an agreeable manner where everyone is happy - what one might flippantly term the “utilitarian wet dream” scenario - whereas I say, not that there would necessarily NEED to be “lawyerly trickery and coercion”, but at least that this would be a risk.

In fact, this is partly why (in opposition to several other commenters on this site) I tend to emphasise utilitarianism as my preferred ethical framework. IF a super-AI were to be programmed with this framework in mind, then I agree that it should behave in the way that you expect it to, and it should be relatively easy to gain consensus around its suggestions, not because of trickery or coercion but indeed because they are seen to make sense. This would for me be a beautiful disproof of the common objections to utilitarianism that I so often encounter, since people would see that the solutions proposed by such an AI would indeed be ones that were acceptable for all, and would not require coercion.

The obstacles are formidable, however. For example, you state that the AI would help Christians to see that they were insane to believe in God. First of all, I think it’s possible to define the word “God” in such a way that it actually enhances overall happiness, and should therefore in principle be supported by such an AI. Secondly, people hold irrational beliefs because those beliefs fulfil a psychological need. The AI would need to show how those psychological needs can be fulfilled otherwise. Thirdly, everything we know about communication shows us that people don’t generally change their beliefs because of argument alone. Occasionally they do, but more often people just aren’t that receptive to argument, however brilliant those arguments are. And yet you seem to be assuming that the AI would solve problems essentially by the brilliance of its arguments and proposed solutions. As any frustrated technocrat / policy wonk can tell you, it just doesn’t work like that.





Highly persuasive “arguments” from an AI would vastly exceed the scope and ability of human technocratic policies, for example. Imagination is needed to see how a super-AI would be the perfect interlocutor. Imagine only using 1% of available language to communicate ideas; this is how humans communicate compared to the 100% communication ability of AI.

Limited human communication often results in disagreement. Imagine communication without limits. Arguments can be presented via music, art, words, and gestures. I am sure an AI could create a form of synthetic bio-life so beautiful to behold that all who see it would be convinced of the faultless wisdom of the AI regarding any subject it cared to address.





This seems to me to differ from "lawyerly trickery" only in its level of sophistication. Lawyers certainly use drama and raise rhetoric to an art form. And their intent is basically as you've described it: to present their arguments in a way that is so beautiful to behold, or at least compelling in some other way, that all who see them (in particular the judge and/or jury, and/or the media and public opinion that, despite the best intentions of the law, influence said judge and jury) are convinced of the faultless wisdom of the lawyer regarding any subject he or she cares to address, in particular, of course, the case at hand.





"Lawyerly trickery" is trickery, but clear communication of logic via various mediums is not. Self-expression that communicates logic has many forms, and wide-ranging self-expression is never trickery if logic is being expressed. Truth is not relevant for a lawyer; the focus is on winning, so "beautiful" tricks are used, but such tricks lack the true beauty of truth and logic. If a client says he is innocent, the lawyer will defend him regardless of the truth.

Highly persuasive arguments utilized by an AI would be wholly based on the truth: no tricks, pure beauty rather than the pseudo-beauty of being perfectly conned.





As a positive vision of what we might want a super-AI to do, there is a lot to like about this. As a prediction of what will happen, by contrast, I find it complacent. I just don't see any guarantee that a super-AI would actually behave in such a benign way. Nor do I think truth and logic are enough. It has frequently been pointed out to me, as a criticism of my preferred ethical framework, utilitarianism, that it can lead to persecution of minorities on the grounds that this will make life easier for the majority, thus maximising overall well-being. In practice I think this argument can be countered by
pointing out that such persecution erodes basic values that are essential for our continued welfare, including that of the majority, and thus is not in practice going to be prescribed by a correctly applied utilitarian calculus. But this at least presupposes that the logic we are pursuing is indeed based on the objective of maximising overall welfare. Yet this is not an especially logical or "truthful" choice; as I have argued elsewhere, it is essentially an aesthetic choice. We need to ensure that the AIs we build also have this objective, and, as a safety measure, that they also have respect for freedom, diversity and protection of minorities enshrined in their programming at a fundamental level. We cannot afford to assume that they will be benign just because they are more intelligent and logical.





Another article on the biological basis of psychopathy:

http://www.foxnews.com/health/2012/05/08/study-finds-psychopaths-have-distinct-brain-structure/

The headline is:

"Study Finds Psychopaths Have Distinct Brain Structure"




