Double your life span: Walker on Singer on extended longevity

IEET Director Mark Walker has an interesting article responding to a piece that Peter Singer published in the 1990s, in which Singer considers the possibility of an anti-aging drug and concludes that, on the scenario presented: “we should recommend against any further development of the anti-aging drug.”

As one might expect, Singer’s analysis proceeds along utilitarian lines. The particular scenario developed is not a super-optimistic one: the drug will enable us to live to about 150 - more or less doubling the normal life expectancy - but with an average level of happiness, during the second half of this long life, that is lower than the level experienced during the first half. More precisely, the level of health experienced will be that of someone in her 60s or 70s today. In addition, we will not experience the second half of life with the same “freshness” as younger people experience theirs. Singer deduces from this that individuals who take the drug will have less average happiness - averaged, that is, over a full (extended) life - than they experienced during their first 70 or 80 years. However, we may still imagine them as enjoying the extra years; they just don’t enjoy the later ones quite as much as the earlier ones.

Walker offers an interesting analysis, in which he defends longevity research against Singer’s attack, even on the assumption that it would lead to an additional 70 or 80 years lived in less-than-optimal health. He points out that the available empirical studies do not support the idea that people are less happy in old age than at earlier times, despite inferior health. In fact, the happiness curve seems to be U-shaped: actually at its lowest point when we are in our late 30s or early 40s. (This may seem surprising, since that is exactly when we are at our peak in many ways, but perhaps that itself creates pressures which are not yet experienced when we are younger, and which start to recede as we move deeper into middle age. I’ll avoid any further speculation about such issues.)

I want to defend Walker’s prolongevist position, though on rather different grounds. But first, here is the strength of Singer’s case ...

Assume for the sake of argument that Singer is right about the facts. If we take the anti-aging drug we will typically live to be 150, and we will have happy lives for the second 70 to 80 years, but our lives will be less happy (perhaps not by much) through that period than during the first 70 to 80 years. It seems apparent that, once a society with this technology gets going, the average happiness being experienced per person in the society at any point in time will be less than is currently the case. If the population is kept constant - which might be required for Malthusian reasons, as Singer discusses - the total sum of happiness being experienced at any given point in time will also be less. Moreover, if we take a space-time block of the society’s experience, the amount of happiness that exists in it will be less than in a space-time block of equal duration chosen from the history of our current society.
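
(To make the arithmetic concrete, here is a minimal sketch in Python. The population size and the happiness figures are purely illustrative assumptions of mine, not Singer’s; only the shape of the comparison matters.)

```python
# A toy model of Singer's scenario. All numbers are illustrative
# assumptions: h1 is average happiness per person-year during the first
# 75 years of life, h2 the lower average during the drug's extra 75 years.

N = 1_000_000   # people alive at any moment (population held constant)
h1 = 1.0        # happiness per person-year, first half of life
h2 = 0.8        # happiness per person-year, second half of life (h2 < h1)

# Current society: everyone alive is living (at most) their first 75 years.
avg_now = h1

# Drug society at steady state: with a uniform age distribution, half of
# those alive at any moment are in each half of the 150-year life.
avg_drug = (h1 + h2) / 2

print(avg_now, avg_drug)            # 1.0 vs 0.9: average per person falls
print(N * avg_now, N * avg_drug)    # total at any moment falls too

# Yet every individual who takes the drug is better off over a whole life:
print(75 * h1)            # 75.0 happy person-years without the drug
print(75 * h1 + 75 * h2)  # 135.0 happy person-years with it
```

The space-time block comparison follows immediately: a block’s total is just the per-moment total multiplied by the block’s duration, so it falls by the same proportion.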

On grounds similar to these, Singer argues that we should not develop a drug with the specified effects.

Yet - and here comes my defence - this seems paradoxical. If I take the drug, I will be able to enjoy many extra years of happy life. It seems that this gives me a good reason to take the drug; those extra years of happy life seem to be something that it is rational for me to want, and taking the drug is a means to achieve that rationally-desired end. So, if I take the drug, no one need feel sorry for me. I have not done something self-destructive. It looks as if I am actually better off.

Nor is this a situation where it is collectively irrational for each person to take the drug. It is not as if a rational choice by each individual in isolation leads to an outcome in which each is worse off than if some other option had been chosen. On the contrary, if we all take the drug, each one of us is better off than she would have been if she had not taken it. This is not a prisoner’s dilemma type of problem; it’s not a problem of social coordination.

In other words, a society in which each person takes the drug is a society in which each individual is better off than he or she would otherwise have been - and each individual is also better off than otherwise-comparable individuals in our current society!

I think this thought experiment actually dramatises some of the problems underlying utilitarianism, for Singer has clearly (in my view) drawn a paradoxical and wrong conclusion.

First, note that it is rational for me to take the drug if it is available. It is also rational for me to hope that it is developed, so that I can take it. It is also rational for me to fear interference (for example, by the legislature) to stop the development of the drug. Indeed, it is rational for me to agitate against legislative interference.

Might it be that my actions - in agitating for the drug’s continued development, and actually taking it if it becomes available - are nonetheless morally impermissible? No.

If I have a moral duty to maximise the average amount (or the total amount) of utility in my society, or the universe, at any point in time, or within any space-time block, then, yes, I have acted in breach of that duty. But where could any such duty come from? Why should I accept that it exists? It has not been commanded by God, or some other mighty being - and even if I’m wrong about that, why should we obey a mighty being’s commands in circumstances where it’s not in our individual interests to do so? Nor is it written into the metaphysical structure of the universe, whatever that might amount to. We need to locate a naturalistic, human-level grounding for such a duty, and there’s none at hand.

A duty like this might seem to make sense as the content of a (non-morally) good moral norm that we have adopted in order to give expression to our sympathies - except that the norm would go too far, since the particular case we’re considering is one in which there’s nobody to feel sorry for! Every actual person involved is better off, by taking the drug, than he or she would otherwise have been. Far from feeling guilty at doing what is in our interests, we can be pleased that we have acted in a way that has not placed any individual in a position where we need look on her with pity. If we ever reach a position in history where we have agitated successfully for the development of the drug, all the other people who will thus have an opportunity to take it will have reason to thank us for our efforts. We will not have frustrated their life plans or caused them suffering; we will have helped them achieve something that they (rationally) value.

This thought experiment helps show the difference between utilitarian approaches, which try to maximise happiness (in the sense of pleasure, or preference-satisfaction, or something similar), and my more sceptical and pluralist approach to morality, which regards moral norms as justified to the extent that they protect us from things that we fear, help us to preserve things we value, help us to give expression to our sympathies, and so on. On this approach, a set of moral norms that actually becomes a new thing to fear is given a low or negative rating. This approach treats morality as our servant - it is something that human beings collectively invented to meet a variety of widespread human desires and interests. It is not a set of highly abstract, objective requirements, such as a requirement, from “out there” somewhere, to maximise the total or average utility in the universe, or in some part of it.

(I don’t deny that there may sometimes be justification for a moral norm requiring us to do certain things that only “harm” people whose very existence is contingent on the action prohibited by the norm. But I think that, in so far as such norms really can be justified, they need to be grounded in something other than an abstract requirement to maximise overall utility. For example, they might be grounded in a desire for future human societies to flourish in certain ways, or a desire to avoid situations where individuals will experience actual suffering or be stuck with lives that are, in some sense, limited or blighted. The appeal will most likely be to our sympathies and/or to certain perfectionist values.)

If we agitate for the anti-aging drug, and take it once it is available, we thereby act rationally. We also act consistently with any set of positive moral norms that we have good reason to commit ourselves to (or assign a high justification rating), given the sorts of beings we actually are, with the sort of psychology that we actually have. If we develop the drug, every actual person who takes it will be better off, and no one need, as a result of taking the drug, turn into an appropriate object for our compassion. Even on Singer’s not-very-optimistic scenario, agitating for the anti-aging drug’s development is the rational thing to do, and it is at least morally permissible if not morally required.


Russell Blackford
Russell Blackford Ph.D. is a fellow of the IEET, an attorney, science fiction author and critic, philosopher, and public intellectual. Dr. Blackford serves as editor-in-chief of the IEET's Journal of Evolution and Technology. He lives in Newcastle, Australia, where he is a Conjoint Lecturer in the School of Humanities and Social Science at the University of Newcastle.



COMMENTS

Russell: My article takes Singer’s assumption and shows that his conclusion does not follow. I think you do a good job of thinking about how to cut off Singer’s argument at the root. Your point seems consistent with Griffin’s claims that ethics should fit our “human moral torso”, and that utilitarianism fails miserably at this.

I have a different worry about utilitarianism (and indeed all types of consequentialism). Reading Mill, one would think that the goal of utilitarianism is to make people happier. It is interesting, however, that the very notion of ‘persons’ at some level seems to drop out of consequentialism. Take world A, which has a billion people, each of whom experiences on average 1 unit of happiness per year and lives to be 150. World B has the same number of people and the same average level of happiness, but they live only to be 75. Which world should one prefer as a utilitarian? It seems we might as well flip a coin, because after 150 years there will be exactly the same amount of aggregate utility: 150 billion units in both cases. Now take the “mayfly” world C: people grow rapidly and age rapidly, so their average lifespan is 1 year. Assuming that they have the same population and the same average level of happiness at any point in time, the aggregate happiness after 150 years will be the same: 150 billion units. But this means that all three worlds are equal, and so one might as well flip a coin to choose which is better. It seems that the idea of a person here almost completely drops out. It is a matter of indifference whether in a certain period there are 150 billion people or a billion people, so long as the aggregate happiness stays the same.

I think utilitarians must imagine people being like grocery bags in which you put happiness. It does not matter whether we have, as in world A, a billion grocery bags that are full after 150 years, or, as in world B, 2 billion grocery bags that are half full after 150 years. I haven’t quite got my mind around how to make this point, but it seems that utilitarians think of persons as mere sacks of happiness. So long as we aren’t worried about being environmentally wasteful with the grocery bags, utilitarianism is profoundly indifferent to the sacks themselves.
This worry is different, but I think it might tend to point in the same direction for ethical thought.

Cheers,

Mark
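
(Mark’s worlds A, B and C can be checked in a few lines. This sketch, in Python, simply restates the figures from his comment; the one assumption I add is clean generational replacement when counting the distinct persons - the “grocery bags”.)

```python
# Worlds A, B and C from the comment above: same population alive at any
# moment, same average happiness per person-year; only lifespan differs.

POP = 10**9     # people alive at any moment
AVG = 1.0       # happiness units per person per year
HORIZON = 150   # years over which we aggregate

for world, lifespan in [("A", 150), ("B", 75), ("C", 1)]:
    aggregate = POP * AVG * HORIZON         # 150 billion units in every world
    bags = POP * (HORIZON // lifespan)      # distinct persons ("grocery bags")
    print(f"world {world}: aggregate = {aggregate:,.0f}, "
          f"bags = {bags:,}, per bag = {aggregate / bags}")
```

The aggregate is identical in all three worlds; only how full each “bag” ends up - 150 units, 75 units, or 1 unit - changes, and that is precisely what aggregate utilitarianism is indifferent to.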

Wow.

Singer’s argument is the same kind of argument that says we can raise average wealth in our world by killing the poor. He’s quite right of course - but that doesn’t mean the suggestion is remotely ethical.

The problem with Singer’s measure of average happiness is that once a person is dead, you can conveniently remove them from the statistics. It may be problematic and bizarre to propose that we measure the happiness of the dead - but I think we can safely say that the living are at least happier on average than the dead. If that’s true, then it’s easy to see that longevity enhances average happiness.
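
(That point can also be put as toy arithmetic. In the sketch below - my own illustrative framing, not the commenter’s - each person-year alive scores 1 unit, each person-year dead scores 0, and the dead are kept in the statistics over a fixed 150-year window.)

```python
# Keep the dead in the denominator: score each person-year alive as 1
# and each person-year dead as 0, over a fixed 150-year window.

WINDOW = 150

def avg_happiness(lifespan, window=WINDOW):
    """Average per-person happiness when the dead stay in the statistics."""
    alive = min(lifespan, window)
    return (alive * 1.0 + (window - alive) * 0.0) / window

print(avg_happiness(75))   # 0.5 - normal lifespan
print(avg_happiness(150))  # 1.0 - drug-extended lifespan: the average rises
```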

