The Problem with the Trolley Problem, or why I avoid utilitarians near subways
Rick Searle   Apr 27, 2014   Utopia or Dystopia  

Human beings are weird. At least, that is, when comparing ourselves to our animal cousins. We’re weird in terms of our use of language, our creation and use of symbolic art and mathematics, our extensive use of tools. We’re also weird in terms of our morality, and engage in strange behaviors vis-à-vis one another that are almost impossible to find anywhere else in the animal world.

We will help one another in ways that no other animal would think of, sacrificing resources, and sometimes going so far as surrendering the one thing evolution commands us to do, to reproduce, in order to aid total strangers. But don’t get on our bad side. No species has ever murdered or abused fellow members of its own kind with such viciousness or cruelty. Perhaps religious traditions have something fundamentally right: our intelligence, our “knowledge of good and evil,” really has put us on an entirely different moral plane, opening up a series of possibilities and a consciousness new to the universe, or at least to our limited corner of it.

We’ve been looking for purely naturalistic explanations for our moral strangeness at least since Darwin. Over the past decade or so there has been a growing number of hybrid explorations of this topic combining elements in varying degrees of philosophy, evolutionary theory and cognitive psychology.

Joshua Greene’s recent Moral Tribes: Emotion, Reason, and the Gap Between Us and Them is an excellent example of these. Yet it is also a book that suffers from a flaw common to almost all of these reflections on the singularity of human moral distinctiveness. What it gains in explanatory breadth comes at the cost of a diminished understanding of how life on this new moral plane is actually experienced, in the way that knowing the volume of water in an ocean, lake or pool tells you nothing about what it means to swim.

Greene, like any thinker looking for a “genealogy of morals,” peers back into our evolutionary past. Human beings spent the vast majority of our history as members of small tribal groups of perhaps a few hundred members. Wired over eons of evolutionary time into our very nature is an unconscious ability to grapple with conflicts between our interests as individuals and the interests of the tribe to which we belong, what Greene calls the conflict of Me vs. Us.

When evolution comes up with a solution to a commonly experienced set of problems it tends to hard-wire that solution. We don’t think at all about how to see, ditto how to breathe, and when we need to, it’s a sure sign that something has gone wrong. There is some degree of nurture in this equation, however. Raise a child up to a certain age in total darkness and they will go blind. The development of language skills, especially, shows this nature-nurture codependency. We are primed by evolution for language acquisition at birth and need only an environment in which a language is regularly spoken. Greene thinks our normal everyday “common sense morality” is like this: it comes naturally to us and becomes automatic given the right environment.

Why might evolution have wired human beings morally in this way, especially when it comes to our natural aversion to violence? Greene thinks Hobbes was right when he proposed that human beings possess a natural equality in the capacity to kill, a consequence of our evolved ability to plan and use tools. Even a small individual could kill a much larger human being if they planned it well and struck fast enough. Without some inbuilt inhibition against acts of personal aggression, humanity would have quickly killed itself off in a stone age version of the Hatfields and the McCoys.

This automatic common sense morality has gotten us pretty far, but Greene thinks he has found a place where it has become a bug, distorting our thinking rather than helping us make moral decisions. He sees the distortion in the wrong decisions we make when imagining how to save people from runaway trolley cars.

Over the last decade the Trolley Problem has become standard fare in any general work dealing with cognitive psychology. The problem varies, but in its most standard depiction the scenario goes as follows: a trolley is hurtling down a track and about to run over and kill five people. The only way to stop it is to push an innocent, very heavy man onto the tracks, a decision guaranteed to kill him. Do you push him?

The answer people give is almost universally no, and Greene intends to show why most of us answer this way even though, as a matter of blunt moral reasoning, saving five lives should be worth more than sacrificing one.

The problem, as Greene sees it, is that we are using our automatic moral decision making faculties to make the “don’t push call.” The evidence for this can be found in the fact that those who have their ability to make automatic moral decisions compromised end up making the “right” moral decision to sacrifice one life in order to save five.

People with compromised automatic decision making processes include those with damage to their ventromedial prefrontal cortex, persons such as Phineas Gage, whose case was made famous by the neuroscientist Antonio Damasio in his book Descartes’ Error. Gage was a 19th century railroad construction foreman who, after an accident that damaged part of his brain, became emotionally unstable and unable to make good decisions. Damasio used his case and more modern ones to show just how important emotion is to our ability to reason.

It just so happens that persons suffering from Gage-style injuries also decide the Trolley Problem in favor of saving five persons rather than refusing to push one to their death. Persons with brain damage aren’t the only ones who decide the problem in this way; autistic individuals and psychopaths do so as well.

Greene makes some leaps from this difference in how people respond to the Trolley Problem, proposing that we have not one but two systems of moral decision making, which he compares to the automatic and manual settings on a camera. The automatic mode is visceral, fast and instinctive, whereas the manual mode is reasoned, deliberative and slow. Greene thinks that those who decide to push the man onto the tracks to save five people are accessing their manual mode because their automatic settings have been shut off.

The leaps continue: Greene thinks we need manual mode to solve the types of moral problems our evolution did not prepare us for, not Me vs. Us but Us vs. Them, our society in conflict with other societies. Moral philosophy is manual mode in action, and Greene believes one version of moral philosophy is head and shoulders above the rest: Utilitarianism, which he would prefer to call Deep Pragmatism.

Needless to say, there are problems with the Trolley Problem. How, for instance, does one interpret the change in responses when a switch is substituted for a push? The number of respondents who choose to send the fat man to his death to save five people significantly increases when the push is exchanged for a switch that opens a trapdoor. Greene thinks this is because bodily distance allows our more reasoned moral thinking to come into play. However, this is not the conclusion one reaches when looking at real instances of personal versus instrumentalized killing, i.e. in war.

We’ve known for quite a long time that individual soldiers have incredible difficulty killing even enemy soldiers on the battlefield, a subject brilliantly explored by Dave Grossman in his book On Killing: The Psychological Cost of Learning to Kill in War and Society. The strange thing is that whereas an individual soldier has problems killing even one human being who is shooting at him, and can sustain psychological scars from such killing, airmen working in bombers who incinerate thousands of human beings, including men, women and children, do not seem to be affected to the same degree by either inhibitions on killing or its mental scars. The US military’s response has been to try to instrumentalize and automate killing by infantrymen in the same way that killing by airmen and sailors is dissociated. Disconnection from their instinctual inhibitions against violence allows human beings to achieve levels of violence impossible at an individual level.

It may be that persons who choose to save five at the cost of one aren’t engaging in a form of moral reasoning at all; they are merely comparing numbers. 5 > 1, so all else being equal, choose the greater number. Indeed, the only way it might be said that higher moral reasoning was being used in choosing 5 over 1 would be if the 1 that needed to be sacrificed to save 5 was the person being asked, with the question being: would you throw yourself on the trolley tracks in order to save 5 other people?

Our automatic, emotional systems are actually very good at leading us to moral decisions, and the reasons we have trouble stretching them to Us vs. Them problems might be different from the ones Greene identifies.

One reason the stretching might be difficult can be seen in a famous section of The Brothers Karamazov by the Russian novelist Fyodor Dostoyevsky. In “The Grand Inquisitor,” the character Ivan conveys to his brother Alyosha an imaginary future in which Jesus has returned to earth and is immediately arrested by authorities of the Roman church, who claim that they have already solved the problem of man’s nature: the world is stable so long as mankind is given bread and prohibited from exercising free will. Writing in the late 19th century, Dostoyevsky was an arch-conservative with a deep belief in the Orthodox Church and a prescient anxiety regarding the moral dangers of nihilism and revolutionary socialism in Russia.

In her On Revolution, Hannah Arendt pointed out that with his story of the Grand Inquisitor Dostoyevsky was conveying not only an important theological observation, that only an omniscient God could experience the totality of suffering without losing sight of the suffering individual, but an important moral and political observation as well: the contrast between compassion and pity.

Compassion by its very nature can not be touched off by the sufferings of a whole group of people, or, least of all, mankind as a whole. It cannot reach out further than what is suffered by one person and remain what it is supposed to be, co-suffering. Its strength hinges on the strength of passion itself, which, in contrast to reason, can comprehend only the particular, but has no notion of the general and no capacity for generalization. (75)

In light of Greene’s argument, this means there is a very good reason why our automatic morality has trouble with abstract moral problems: without sight of an individual to whom moral decisions can be attached, one must engage in a level of abstraction our older moral systems did not evolve to grapple with. Lacking God-like omniscience, we can give moral problems that deal with masses of individuals consistency only by applying some general rule. Moral philosophers, not to mention the rest of us, are in effect imaginary kings issuing laws for the good of their kingdoms. The problem is not only that the ultimate effects of such abstract decisions are opaque, given our general uncertainty regarding the future, but that there is no clear and absolute way of defining for whose good, exactly, we are imposing these rules.

Even the Utilitarianism that Greene thinks is our best bet for reaching common agreement on such rules has widely different and competing answers to the question “good for whom?” In its camp can be found abolitionists such as David Pearce, who advocates the re-engineering of all of nature to minimize suffering, and Peter Singer, who establishes a hierarchy of rational beings. For the latter, the more of a rational being you are, the more the world should conform to your needs, so much so that Singer thinks infanticide is morally justifiable because newborns do not yet possess the rational goals and ideas regarding the future possessed by older children and adults. Greene, who thinks Utilitarianism could serve as mankind’s common “moral currency” or language, has little to say about where his own concept of Utilitarianism falls on this spectrum, but we are very far indeed from anything like universal moral agreement on the Utilitarianism of Pearce or Singer.

21st century global society has indeed come to embrace some general norms. Past forms of domination and abuse, most notably slavery, have become incredibly rare relative to other periods of human history, and, though far from universally, women are better treated than in ages past. External relations between states have improved as well: the world is far less violent than in any other historical era, and the global instruments of cooperation and, if far too seldom, coordination are more robust.

Many factors might be credited here, but the one I would credit least is the adoption of any universal moral philosophy. Certainly one of the large contributors has been the increased range over which we exercise our automatic systems of morality. Global travel, migration, television, fiction and film all allow us to see members of the “Them” as fragile, suffering, and human, all too human, individuals like “Us.” This may not provide us with a negotiating tool to resolve global disputes over questions like global warming, but it does put us on the path to solving such problems. For a stable global society will only appear when we realize that, in more senses than not, there is no tribe on the other side, that there is no “Them,” there is only “Us.”

Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history for Delaware Valley College and works for the PA Distance Learning Project.



COMMENTS

The article scores an own goal when it asks whether you could sacrifice your own life to save several others. We admire those who die, or risk death, to save others. It’s difficult to do that, but that doesn’t mean it’s wrong.

http://stallman.org/articles/trolley-problem.html explains why the trolley problem is a bad model for real life moral questions.

Oh, I hope I didn’t score an own goal… rms how do you figure? Unless you meant to say: “

“It’s difficult to do that, but that doesn’t mean it’s NOT wrong.”

In which case I would ask: shouldn’t we be willing to sacrifice our lives to save others? If not, how many people would you be willing to let die in order to save yourself- a million, a billion, an infinite amount? That seems morally absurd to me…

I do find Stallman’s position interesting, so I’ll quote from it:

“The reason, in real life, why killing someone is ethically different from letting someone die is that real life is full of surprises: the person might not really die. If you kill him, his death is pretty certain (though not totally; just recently a man was hanged in Iran and survived). If you merely don’t take action to save him, he might survive anyway. He might jump off the track, for instance, or someone might pull him off. All sorts of things might happen. Likewise, throwing the one person onto the track might not succeed in saving the other five; how could you possibly be sure it would? You might find that you had done nothing but cause one additional death. Thus, in real life it is a good principle to avoid actively killing someone now, even if that might result in other deaths later.”

Though, I am not sure how one explains the fact that a significantly larger number of people are willing to sacrifice the life of the “fat man” when a push is substituted with a switch?

The problem with most thought experiments is that they are flawed from the premise, so it is difficult to draw conclusions and consensus?

For example, would you push a child on the track to save five adults? Would you kill a dog or cat if you were a loving pet owner?

Numbers may factor in utilitarian calculus, and certainly killing from a distance (with dutiful training) does too; such is the danger with drone warfare, which you have also highlighted. Yet Human drone operators and their flaws will most likely be soon and swiftly replaced by full drone automation and total machine efficiency, eliminating any further accountability of Humans and their flawed and fickle consciences?


And what of the observations of Stanley Milgram? (I know you have highlighted this previously, as I have myself)? Is there a deeper connection to the indifference to the suffering of others than merely the practicable?


The Milgram Experiment

www.simplypsychology.org/milgram.html


@CygnusX1:

“The problem with most thought experiments is that they are flawed from the premise, so it is difficult to draw conclusion and consensus?”

I completely agree that these experiments give us limited insight. I do find it somewhat of an interesting correlation that you have this doubling or so of the number of people willing to sacrifice one individual once you change the scenario from a push to a switch and the fact that human beings seem to find instrumentalized killing so much easier than the face-to-face sort.

“Yet Human drone operators and their flaws will most likely be soon and swiftly replaced by full drone automation and total machine efficiency, eliminating any further accountability of Humans and their flawed and fickle consciences?”

This is precisely why I was looking at Greene’s Moral Tribes. I am currently working on an anthology on machine ethics and am writing a chapter on war. There are a lot of people arguing that robots in warfare will make warfare more ethical > as in our machines will not violate the laws of war when push comes to shove as humans are prone to do. But I think you’re totally right, these machines will not possess any natural aversion to killing > depending on the programmer they will kill with impunity and have no capacity to understand the existential characteristics of war or to perceive the moral universe which we inhabit and which makes killing a meaningful rather than just instrumental act.

I think these thought experiments can be helpful (they can only be “flawed” if we think they are telling us something they are not telling us) but I agree one must be careful drawing conclusions.

The question about what role moral philosophy in general - and utilitarianism in particular - may have played in the adoption of global norms is an interesting one, and I am certainly less convinced than you seem to be, Rick, that its role has been minor to negligible. If Greene is really postulating a stark opposition between “automatic” and “manual” reasoning - that is to say instinctive, rapid reasoning and slower, cerebral reasoning - and crediting the latter with better moral outcomes, then I would certainly disagree. Clearly life works out best - both individually and collectively - when those systems are working in harmony, and I think the emergence of moral philosophy has been precisely an example of this happening. It is clear that our automatic moral instincts cannot be relied upon to produce good results in all circumstances, and unless we are more dismissive of the influence that our philosophical traditions have had on actual behaviour (via politics, laws, cultural norms and so on) than I would consider justified, then surely it must be helping us to bring our more cerebral reasoning skills to the aid of our automatic reactions. Basically, we have trained ourselves to promote certain automatic behaviours and suppress others, in real time, in part because of politics, laws, and changing cultural norms - we can all think of examples of this - and surely some of those changes in wider society have been influenced and informed by moral philosophy.

Certainly utilitarianism features much ambiguity, but I am certainly with Greene in calling it Deep Pragmatism, combined with altruism. The question about whether one would throw oneself in front of the trolley is indeed a good one, and much less contrived than the original version, and it shows how utilitarianism applied properly is actually far more moral than most of us are ever likely to achieve. Most of us are just not that altruistic.

@Peter:

“If Greene is really postulating a stark opposition between “automatic” and “manual” reasoning - that is to say instinctive, rapid reasoning and slower, cerebral reasoning - and crediting the latter with better moral outcomes, then I would certainly disagree.”

Greene is postulating that our automatic moral behavior works well on the tribal or nation-state level, but fails us on the level of Us vs. Them questions, that is, disputes between peoples, states, etc., including situations where you are faced with the need to help someone not belonging to your tribe > sending money to famine victims on the other side of the world for instance.

It’s not that I think moral philosophy has been without any effect here, but it’s certainly been less effective than satellite TV that allows us to SEE famine victims on the other side of the world > that is to tap into our automatic systems.

I wonder. In a sense, yes of course we empathise more directly and instinctively when we see the pictures. On the other hand, we all know how fickle and sometimes downright counterproductive this kind of “telescopic philanthropy” can be. So yes, the increasing interconnectedness of the world has a far more direct influence, but by no means an exclusively helpful one. By contrast, pragmatic moral philosophy based around maximising overall well-being while avoiding the delusions of moral realism (which so quickly leads us to demonise those whose moral and other preferences differ from our own) seems to be a more reliable, steadfast and indeed essential guide. Indeed, this is basically what makes participating in the discussions we have here worthwhile in my view.

I suppose the question for me is how do you get someone to perform a moral act? Or to phrase it differently, what makes an act compelling in a moral sense? Greene seems to think we can reason our way to these moral acts. I’ve never been sure reason really takes us there, or as Hume said “ `Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. `” You need to start from first principles, and those are born of emotion and intuition, not reason of itself.

@ Rick

“I do find it somewhat of an interesting correlation that you have this doubling or so of the number of people willing to sacrifice one individual once you change the scenario from a push to a switch and the fact that human beings seem to find instrumentalized killing so much easier than the face-to-face sort.”

The key factor seems to be in our ability to distance ourselves from the accountability and responsibility as Milgram indicates. The creativity in technology applied to safeguard oneself from dangers of conflict and harm, permitting Humans to kill at distance has been evolving since the slingshot and longbow, and where the attractiveness for use of guns has evolved “culturally” as some perception of skill in hitting a target at distance, especially as promoted in the pride of snipers - even though still a cowardly and dishonorable trade?

Still think the Trolley scenario is dubious, as changing the parameters only slightly would change an individual’s moral choices.

Groupthink also weighs heavily on our contemporary moral landscape and as also indicated by Milgram, lends the weight of “authority” and excuses not to think for ourselves?

Would I push a stranger on a track to save others - sorry to disappoint, but don’t think I could. Would I sacrifice myself? Sorry to disappoint, but not at all sure, (such is the usefulness of this thought experiment). I can guarantee what I would do however, try dismally to find a way to derail the trolley despite the likelihood of failure - stoopid huh?


Regarding the evolving use of automated warfare - I guess you are aware that Patrick Lin writes extensively on this subject also, and customarily leaves no stone unturned, it’s worth checking out his articles if you have not already.

Personally I can see some positives in the use of impartial machines in warfare, (the arguments over the utility of war aside). Cruelty, rage, anger, hatred and Psychopathy at least may be erased from warfare by the dispassion of machines. Yet to eliminate this from the motives of the Humans is still yet a complex study, (Patrick has also written articles about using machines for torture a la Milgram).

Yet imagine a future balanced(?) conflict involving automated war machines on both sides, where one side is winning and destroying such machines, the opposing side will have to resort to use of Humans once more? An insurgency against automated robot machines and drones, and one step closer to a Terminator scenario?

AGI war machines need not be Self-reflexive to be a real threat and apply “Utility” in the despatch of Humans - and perhaps even a future existential threat to all Humans depending on its authority and autonomy and the “laziness” of Humans?

I certainly agree - with both Rick and Hume - that reason alone does not lead to morality. As Rick says you have to start from first principles, and these are born of emotion and intuition.

However,  I think we need to distinguish clearly between the two questions Rick asks at the beginning of his comment, namely how we can get people (including ourselves) to act morally, and what makes an act morally compelling. These are indeed two different questions, and I suspect that it is the conflation of the two that leads many to distrust utilitarianism. For me, utilitarianism is mainly helpful in the latter context, i.e. in its ability to clarify (to some extent - of course ambiguities still remain) whether an act, and even more especially a legal or cultural rule, is morally valid. Once that has been clarified, we can then set about figuring out how to ensure that it happens, and certainly in real-time stressful situations reason is likely to be of limited impact. Mindfulness and learning good habits - and incentivising others to do the same - are much more likely to be effective.

@CygnusX1:

“The key factor seems to be in our ability to distance ourselves from the accountability and responsibility as Milgram indicates. The creativity in technology applied to safeguard oneself from dangers of conflict and harm, permitting Humans to kill at distance has been evolving since the slingshot and longbow, and where the attractiveness for use of guns has evolved “culturally” as some perception of skill in hitting a target at distance, especially as promoted in the pride of snipers - even though still a cowardly and dishonorable trade?”

Largely, agreed. There is a great deal of continuity here: Drones and soon to become mainstream robots for land warfare are part of the evolution of war at a distance, especially since the mechanization of warfare with long range guns on ships and air warfare almost since its inception. “Accountability and responsibility” are indeed key.

Yes, Lin’s writings on the topic are very important, hopefully I will be able to bring something new to the table outside of the very extensive ground he has covered.

I too can see some positives in the use of such weapons. The danger I see is that their use might obscure two existential aspects of war: risk and moral injury. That is, states may be more willing to engage in skirmishes if their flesh and blood soldiers are not put at risk. And soldiers may become so distanced from the battlefield that they no longer understand on a moral level the depth of what they are being asked to do.

I may throw a very rough draft up on the IEET sometime in the near future to see if I can get helpful feedback such as the kind you have given me now > thanks.

BTW the anthology is still searching for authors. If you are aware of any relatively new writers who might have something interesting to add on machine ethics, please pass along their details.

@Peter:

“For me, utilitarianism is mainly helpful in the latter context, i.e. in its ability to clarify (to some extent - of course ambiguities still remain) whether an act, and even more especially a legal or cultural rule, is morally valid.”

I think it is great that you use utilitarianism in this way, as long as you recognize that others use different tools > Kantian ethics, or Aristotelian ethics, or even ethics derived from and supported by religious traditions. Most of society’s disputes are not good vs. bad but between different versions of the Good, which is fine by me because no one version of the Good represents the Good in its fullness.

I guess I don’t really believe in “the Good in its fullness”. What does that even mean? The moral philosophy that makes most sense to me is utilitarianism (ambiguities notwithstanding), but I would be lying if I said that I am 100% committed to applying mindfulness and developing good habits in the service of becoming more moral in a utilitarian sense. I also have more selfish intentions.

So indeed, I can hardly complain just because some people draw inspiration from ethical frameworks that I find less compelling than utilitarianism. That really would be hypocritical, and also fairly obviously counterproductive in terms of maximising overall well-being according to any reasonable definition. Not only do I recognise that others use different tools, I welcome it.

What I will not do, though (I’m not saying you are suggesting I should, but just to be clear…) is to buy into the view that these ethical systems are somehow morally equivalent. It is one thing to recognise and even welcome the fact that people buy into different ethical systems, religious traditions and so on, and another thing to completely relinquish any claim regarding the superiority of one’s own preferred framework. Other things being equal, I think utilitarianism does the best job at clarifying what kind of actions, laws and habits are morally sound, and which need to be changed.

Of course, other things never are equal. An ethical system is only as good as the results it produces (at least from utilitarianism’s own, pragmatic perspective), and that depends on the extent to which and how it is likely to be applied in practice. But for the moment I think we are better off defending utilitarianism against criticisms that are based on confusion and misunderstanding rather than rallying to the side of the critics. Of course there are legitimate criticisms to make, but for the moment that seems to me less useful than exposing and dismantling the illegitimate ones.

I’ll admit “the Good in its fullness” sounds quite slippery, but here is what I mean by it: different moral philosophies tend to focus on a narrow set of aspects of the human condition as the ultimate ones: utilitarianism > happiness, Aristotelianism > virtue, Rawlsian ethics > equality, Libertarianism > personal autonomy etc. Religions share this quality as well > Buddhism > compassion, Islam > humility, Christianity > forgiveness…

As to moral equivalence, there is no way to judge between these different systems, or better, there is no meta-morality by which they can be ranked > although each claims to be a meta-morality. I actually think this inability to definitively decide between them, the tensions between them and their diversity, is a very good thing. It tends to keep us from going in one direction alone, which would atrophy human possibility. No one view captures all of what we are or can be.

I agree that there is no meta-morality by which different ethical/religious systems can be ranked, and I agree to some extent (though it really depends on how they are formulated) that each claims to be a meta-morality. By definition they cannot be, since by definition meta-morality means something lying outside the moral framework by which the moral framework can be judged. This is basically what makes me a moral subjectivist rather than a moral realist, and I have the impression that you are too.

So why does utilitarianism make more sense to me as a moral framework? Because ultimately, taken as a whole, I care more about whether people (including perhaps non-human persons) are happy than whether they are virtuous, autonomous, compassionate, humble or forgiving, or how much inequality there is. I care about these things as means to an end, but the ultimate end, for me, has to be if not “happiness” at least well-being, or some such quality that is not primarily a behavioural one.

Part of the reason I am insisting on this is that I suspect that if people saw these things more clearly more of them would actually embrace utilitarianism in this sense. For example, how many people really prefer a scenario where everyone is equally miserable to one where inequalities persist but people are on average happier? And if they do, what is the cause of this? Does this really correspond to a genuine, deep moral preference, or are they actually displaying a degree of envy and/or guilt that, if they saw things more clearly, they would realise is not actually particularly honourable? This is a suspicion, and not a claim I am making with any great confidence, but it does form part of my motivation for advocating utilitarianism as strongly as I do.

Interesting thoughts. As you say, people were more desperate back then than *some* of us are today, so they simply couldn’t afford the luxury of existential doubt and subjectivism. Moral realism reigned, because it had to. Of course this didn’t mean their beliefs were *objectively correct*, but they thought they were - and, as you say, even those who didn’t strictly believe still prayed “just in case”, as someone might light a candle in church today without worrying much about “evidence” and whether it’s likely to work. And why not? As Giulio would say, it does no harm.

And I also agree that this sheds important light on why religion/spirituality persists today, candles in church being of course just the tip of the iceberg. On this blog I have tended up to now to put more effort into critiquing religion than into defending it, but let’s face it: it still plays an essential role for many people. And will continue to do so for quite some time to come.

What we should not do, though, IMO, is to cling to objectivist illusions that we no longer really need. It’s a bad example for others, and it leaves us ill-equipped to thrive in this weird future that we are heading towards. Those of us whose circumstances allow it need to let go of our more naïve beliefs (and all our beliefs are to some extent naïve), and allow ourselves simply to be aware. Then we can re-enter the fray, better equipped to thrive, prosper, learn, be compassionate, and set a good example for others to follow.

instamatic >” In Medieval Europe people would often look up into the sky and see God; perhaps they were intoxicated on wine and or food impurities.”

Peter >“Interesting thoughts. As you say, people were more desperate back then than *some* of us are today, so they simply couldn’t afford the luxury of existential doubt and subjectivism.”

I think we need to remind ourselves that human beings have always had the same underlying cognitive architecture and that, unless there is something organically wrong with that architecture, we can’t see, as in actually physically see, something that is not there.

Medieval theology is full of laments that God has “hidden his face”; it is riddled with very modern-sounding doubts, hence the need for FAITH - you don’t need to have faith regarding something whose existence is obvious. For the medievals the religious world was a kind of map which they overlaid on the world their eyes and ears could sense.

We use these sorts of maps too - we can’t see gravity etc. The end of our faith in the idea that these maps were reality itself, rather than just our convenient approximation of it, was the end of our faith in objective truth. (For an excellent discussion of how science is just a set of these extremely effective maps see Stephen Hawking’s book The Grand Design.) Maps are useful or not useful based on what you are using them for and where you want to go. Using religious maps to navigate your way through the natural world ends up getting a person horribly lost. Perhaps the same can be said of using scientific maps to navigate moral or spiritual worlds?

Yes I think I basically agree with that. Where I still think Gould goes wrong is to assume that religious maps are still adequate as means for navigating moral or spiritual worlds. We’ve discussed utilitarianism (and other secular ethical frameworks) with regard to the former; and to meaningfully discuss the latter we would need to define better what we mean. Also, alongside the risk of using scientific maps inappropriately is the risk that we fail to use them where they are appropriate. In fact, at the risk of contradicting myself, I think scientific maps these days (what with cognitive psychology and so on) can be quite effective in navigating moral and let’s say aesthetic/emotional worlds, though I do agree they are insufficient on their own.

@ Rick..

I noticed you posted a link on Twitter to an article at The Atlantic, “Is One of the Most Popular Psychology Experiments Worthless?”, which proposes retiring the Trolley problem, and it got me thinking about a better test of subjective morality.

Firstly, any Trolley problem scenario must attempt to be totally impartial to gain the best results, and not provoke moral decisions flavoured by personal and contemporary social prejudices or profiling.

Examples of bias include ageism (the use of children and the elderly as stand-ins for victims) as well as social profiling: fat/thin/tall/small etc. The article mentions that the use of a fat man distracted from the test and caused group hilarity and laughter.

So this got me thinking about reducing the test further, and it crossed my mind that THE fundamental flaw with the Trolley problem is that the protagonist is required to “act to save”, which causes a greater moral dilemma than the reverse: “not acting” in order to save the greater number of victims.

I guess that ideally using both styles of analogy, supporting both the will to act and the choice to refrain from acting, would give even greater accuracy in evaluating an individual.

So what about the test itself?

Well, along similar lines to Schrödinger’s cat-in-a-box thought experiment: imagine two rooms, both linked to the potential release of cyanide gas. One room contains five (or even two) individuals, but the door is locked and you cannot open it.

The other room contains one individual and the door opens from the outside, so you could free the occupant quite easily. However, by doing so you would cause the release of cyanide gas killing the occupants in the other room.

Yet if you do nothing, and don’t open the door for the single occupant, then the gas will instead be released in there killing the individual.

This means you have opportunity to save more lives by not acting and doing nothing.


I will hazard a guess that in this example those who would not usually sacrifice the life of an individual in the Trolley problem would be more willing to sacrifice the individual if their conscience was eased by doing nothing instead of something.

What do you think?

 

Sure,

I think if you made inaction an option most people would choose it. Though if that were the case, I wonder if you’d run into the issue that behavioral economists have “discovered”: most people are too lazy to make choices. This is why you get many more people donating their organs when doing so is a matter of opting out rather than opting in. It’s also why privacy settings on something like Facebook are opt-out; they know most people won’t go to the trouble of clicking otherwise.

That’s the thing about completely artificial scenarios like the Trolley Problem: they are absolutely useless as simulations of real-world morality, but they do have a way of engendering conversations, which sometimes leads to reflection and questioning about the way real human beings make moral decisions and act.

@ Rick

I agree with everything you have said, and opt-out scenarios are indeed widespread, knowingly employed to take advantage of human behaviour and laziness - so yes, we also need to remove any laziness bias from the test.

Remember, the test is not to measure an individual’s laziness to act, but subjective morality in facing a dilemma, so we need to reason through and tweak the thought experiments to achieve this.

On reflection I think I have made the test too “easy” for an individual to choose to do nothing, (and this is of absolute importance for measuring the effects and manipulations of real-world political events and dilemmas - Gaza, Syria, Ebola, food banks etc).

Like Facebook, governments can utilize the same manipulation of behaviourism to achieve their political aims and end game - all the more reason we should all be aware of the Trolley problem?


So, some tweaks to the test above..

We must ensure that the test subject chooses, (synonymous with acting, I know, but bear with me):

1. The doors to either room may be opened from the outside.
2. By opening a door to release the occupant(s) the cyanide gas is released in the other room.
3. Most importantly, a choice must be made within a time limit, or else poison gas will be released in both rooms and kill all occupants.
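For what it’s worth, the three rules above amount to a simple payoff table, and the utilitarian ranking falls straight out of it. Here is a minimal sketch - the names, the occupant counts and the function are my own illustration, not part of the thought experiment as you stated it:

```python
# Hypothetical model of the tweaked two-room scenario: opening either
# door gases the OTHER room; letting the timer run out gases both.
ROOM_A_OCCUPANTS = 5  # the larger group (could just as well be 2)
ROOM_B_OCCUPANTS = 1  # the single individual

def casualties(choice: str) -> int:
    """Deaths resulting from each possible choice.

    'open_a'  - free room A, gas released in room B
    'open_b'  - free room B, gas released in room A
    'timeout' - no choice made in time, gas released in both rooms
    """
    if choice == "open_a":
        return ROOM_B_OCCUPANTS
    if choice == "open_b":
        return ROOM_A_OCCUPANTS
    if choice == "timeout":
        return ROOM_A_OCCUPANTS + ROOM_B_OCCUPANTS
    raise ValueError(f"unknown choice: {choice}")
```

Written out like this, inaction (“timeout”) is strictly the worst outcome by the numbers, which is exactly what makes your version harder to “solve” by laziness than the original.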

Now, have I made it even easier to make a utilitarian decision? I think perhaps I have?

It is easier to do absolutely nothing and permit others to die than to face even our own consciences. In fact this is exactly what tyrant leaders of feudal nations do regularly?

So we need to make the participant of the test aware of the consequences of their choices also, and before they take the test?

Imagine the rooms have one way glass so you can see the effects of the gas and death inside.

Individuals before taking the test could be shown a video of the results of cyanide gas on the single occupant! This should provoke reflection for test subjects on the results of their actions before they take the test?


The point I am still making here and in all of this, is that the Trolley problem forces violent action choices upon the test subject, and this causes conflict between one’s ethics and moral subjectivity - many still cannot kill to save others by virtue of the structure of the test.

What do you think?

 

 

 

@CygnusX1:

Traveling to a dark skies park, today and Monday. (I might even be able to actually see Cygnus the Swan 😊)

I’ll try to answer your question when I return.

RE:

“So we need to make the participant of the test aware of the consequences of their choices also, and before they take the test?

Imagine the rooms have one way glass so you can see the effects of the gas and death inside.

Individuals before taking the test could be shown a video of the results of cyanide gas on the single occupant! This should provoke reflection for test subjects on the results of their actions before they take the test?”

My intention here was obviously not to execute persons for the sake of a test, but to use actors. However, this now seems totally redundant for the sake of a “thought experiment”. And on “further” reflection it really all depends on how far we extend beyond the utility of a “thought experiment” to provide a test that requires not merely reflection/contemplation but actions?

Perhaps this is why Stanley Milgram used audio tapes of actors, which may be more suitable and more convincing than viewing occupants in rooms, (and again avoids all sorts of personal bias by test subjects towards the occupants).

In the end, the test no matter what format still seems to indicate a measure of utility and by numbers - so we are back to square one?

 

@CygnusX1:

“What do you think?”

I don’t know. The scenarios you’ve set up remind me of ones found in the post-9/11 show 24, where there were always bombs about to go off that could only be stopped by torturing some terrorist.

The problem I had with that is that such a completely imaginary scenario weakens public support for constitutional protections. The fact of the matter is that it’s almost never the case that the bomb is going to go off right NOW unless the terrorist is tortured, or that the kidnapping victim will die unless you break a few bones.

I am not a pacifist, nor do I hold to some absolute version of morality where rules cannot be violated whatever the consequence (Kantian etc). In fact very few of us are pacifists - Pete Seeger of all people once said that if his country was invaded he’d pick up a gun, and I feel the same. What the Trolley Problem and others like it bring out is that most of us hold that under some circumstances killing another human being is not the absolute worst thing you could do. What is good about us is how difficult we find it to make this leap to become killers even when we are forced to make such decisions for an obvious greater good; thank God we are more than mere computers, which would decide such dilemmas through mere cost/benefit analysis.

@ Rick

I see the above torture example as different from the Trolley test, in that the latter is an attempt to evaluate will to action and subjective morality without any prior prejudice or motivation to violence; i.e. it is by happenstance that the test subject is confronted with a dilemma to solve - there is no objective to fulfil. Although your scenario is similar to Milgram’s tests in attempting to see how far humans will go under the least amount of influence and coercion, so it is also equally important.

So what is the Trolley test all about, what are we hoping to achieve? What is it a measure of?


“What is good about us is how difficult we find it to make this leap to become killers even when we are forced to make such decisions for an obvious greater good, or thank God we are more than mere computers who would decide such dilemmas through mere cost/benefit analysis.”


And isn’t this why we use the Trolley test: to hopefully deduce in which direction human behaviour is heading? Will humans and their inherent natures ever be better than they are today?

One thing is for certain: humanity is walking a knife edge. In times of economic boom and affluence humans are less unhappy and perhaps even more peaceful? Yet in times of global economic crisis, (orchestrated or not), suffering, angst, fear, hatred and chaos emerge. And where we once thought humanity was making progress, there seems to be a tendency of regression towards aggression?

What would Jesus do/say? (if he really existed that is).

As usual Rush have something to say about all of this, and your last comment has somewhat prompted its entry here..

 

“Lock And Key”


I don’t want to face the killer instinct
Face it in you or me

We carry a sensitive cargo
Below the waterline
Ticking like a time bomb
With a primitive design
Behind the finer feelings
This civilized veneer
The heart of a lonely hunter
Guards a dangerous frontier

The balance can sometimes fail
Strong emotions can tip the scale

Don’t want to silence a desperate voice
For the sake of security
No one wants to make a terrible choice
On the price of being free
I don’t want to face the killer instinct
Face it in you or me
So we keep it under lock and key

It’s not a matter of mercy
It’s not a matter of laws
Plenty of people will kill you
For some fanatical cause
It’s not a matter of conscience
A search for probable cause
It’s just a matter of instinct
A matter of fatal flaws

No reward for resistance
No assistance, no applause

Don’t want to silence a desperate voice
For the sake of security
No one wants to make a terrible choice
On the price of being free
I don’t want to face the killer instinct
Face it in you or me
So we keep it under lock and key

We don’t want to be victims
On that we all agree
So we lock up the killer instinct
And throw away the key

 

 

I love Rush! They’re the only band I know that could turn the social contract into a rock song.

I took a trip down memory lane with a listen:

https://www.youtube.com/watch?v=IWNPP8di9-g

Ah yes..

The 70s were the flamboyant years, and for those brave and mature enough - the moustache.
In the 80s it was the blazer and mullet.

Sweet #Nostalgia

Personally I feel “Rush in Rio” was perhaps their best commercially recorded concert, grab it if you can for around $10 with over 2 hours of concert + an excellent backstage documentary featuring the importance of soup before a gig.

In the meantime you can watch most of it song by song online; search for “Rush in Rio” on YouTube.

