Is life absurd? Should we bother with it? Does it matter either way? Rightly or wrongly, Thomas Nagel’s 1971 article, “The Absurd”, is one of the most celebrated and widely cited contributions to the literature on these questions. I certainly am struck by how frequently people refer to it in conversations I have with them about this topic. It seems like anyone with even a dim awareness of the literature will have heard of Nagel’s piece.
I’ve been meaning to recommend Michael Huemer’s latest book — The Problem of Political Authority — for some time. I don't have much to say about it, except that it is the most comprehensive and tightly argued defence of political anarchism that I’ve ever come across. It is a book of two halves. In the first half, Huemer looks at the problem of political authority, which he says consists of two sub-problems. The first is the problem of political legitimacy, i.e. does the state have the right to make certain laws and enforce them by coercion? The second is the problem of political obligation, i.e. do people have an obligation to obey the laws made by the state?
Right now it’s Sunday afternoon. There is a large pile of washed, but as yet un-ironed, clothes on a seat in my living room. I know the ironing needs to be done, and I’ve tried to motivate myself to do it. Honestly. The ironing board is out, as is the iron, I have lots of interesting things I could watch or listen to while I do the ironing, and I have plenty of free time in which to do it. But instead I’m in my office writing this blog post. Why?
It has oft been observed that people are uneasy about the prospect of advanced enhancement technologies. But what is the cause of this unease? Is there any rational basis to it? I’m currently trying to work my way through a variety of arguments to this effect. At the moment, I’m looking at Saskia Nagel’s article “Too Much of a Good Thing? Enhancement and the Burden of Self-Determination”, which appeared a couple of years back in the journal Neuroethics.
Okay, it’s been a while, but at long last I’m going to finish off my series on Michael Hauskeller’s article “Human Enhancement and the Giftedness of Life”. To recap, in this article Hauskeller tries to refine, rehabilitate, and reconstruct Sandel’s giftedness argument against enhancement. I’m covering this as part of an ongoing series of posts looking at hyperagency-based objections to enhancement.
Is there something disturbing about the drive for human enhancement? Is it unwise? Likely to reduce the quality and meaning of our lives? Likely to deprive us of something of great value? Several prominent philosophers argue that it is. Among them is Michael Sandel, who several years back argued that enhancement was unwise because it caused us to lose our appreciation for the giftedness of our lives. More precisely, he challenged proponents of enhancement on the grounds that its pursuit would give rise to a state of hyperagency, i.e. a state in which virtually every aspect of our lives is open to our control and manipulation.
A while back, I wrote a post about Michael Sandel’s case against human enhancement. As noted at the time, Sandel’s central claim was that enhancement was bad because it caused us to lose appreciation for the gifted nature of our lives. On the face of it, this doesn’t look to be a particularly persuasive argument, and indeed it has been repeatedly criticised in the literature since it was originally presented (see the earlier post for some examples of this). But maybe there is more to Sandel’s argument than meets the eye? Michael Hauskeller certainly seems to think so. In his 2011 article, “Human Enhancement and the Giftedness of Life”, he tries to rehabilitate, refine and reconstruct Sandel’s argument, defending it from common criticisms, and turning it into a powerful objection to the human enhancement project. Over the next few posts I want to take a fairly detailed look at Hauskeller’s attempted rehabilitation.
As mentioned in an earlier post, I’ve recently begun reading two books on the ethics of human enhancement. One of those books is called Humanity’s End and it’s by Nicholas Agar. Agar seems like an interesting character. In an earlier book he defended a liberal position on positive eugenics. This suggested he had a willingness to embrace certain forms of enhancement. And yet in this book he offers an argument against radical human enhancement. There’s not necessarily an incompatibility between the two positions, but it’s an interesting shift nonetheless.
This is the second part in my series looking at pornography and the free speech principle. The series is focusing on the arguments analysed in Andrew Koppelman’s article “Is Pornography “Speech”?”. In part one, we looked at Frederick Schauer’s argument. In this post, we will look at John Finnis’s argument. Both authors suggest that pornography is not covered by the FSP.
This post considers whether or not pornography should be covered by the free speech principle (FSP). According to this principle, all (or most) forms of speech should be free from government censorship and regulation. But this raises the question: which types of symbolic productions are covered by the FSP? And is pornography among them?
Democratic Legitimacy and the Enhancement Project

Klaming and Vedder (2010) have argued that enhancement technologies that improve the epistemic efficiency of the legal system (“epistemic enhancements”) would benefit the common good. But there are two flaws in Klaming and Vedder’s argument. First, they rely on an under-theorised and under-specified conception of the common good. When theory and specification are supplied, their common good justification (CGJ) for enhancing eyewitness memory and recall becomes significantly less persuasive. And second, although aware of such problems, they fail to give due weight and consideration to the tensions between the individual good and the common good.
A couple of weeks back, I looked at David Owens’s article “Disenchantment”. In this article, Owens argues that the ability to manipulate and control all aspects of human life — which is, arguably, what is promised to us by enhancement technologies — would lead to disenchantment. Those of you who read my analysis of Owens’s article will know that I wasn’t too impressed by his arguments. Since then I’ve been wondering whether there might be a better critique of enhancement, one which touches upon similar themes, but which is defended by more rigorous arguments.
This post is the second part in a short series about the meaning of life. The series is working primarily off Gianluca di Muzio’s article “Theism and the Meaning of Life”, which appeared in the journal Ars Disputandi back in 2006. However, I’m trying to develop some formal reconstructions of Di Muzio’s arguments so as to improve my understanding of the arguments in this debate. My central contention, which is worth keeping in mind while reading this post, is that the debate is concerned with finding the necessary and sufficient conditions for a life that is worth living (i.e. meaningful = worthwhile).
Di Muzio’s article starts by critiquing Craig’s conception of meaning and then offers an alternative conception that does not depend on the truth of theism. While I enjoyed the article, I was frustrated by the lack of formality in it (this might just be an annoying idiosyncrasy of mine, for which I apologise). In particular, I was frustrated by the failure to formalise Craig’s argument and then to map out each step in the critique of that argument. Admittedly, this is not an easy thing to do, for two reasons: (i) in contrast to his other work, Craig doesn’t go to the bother of formalising his own argument on the meaning of life; and (ii) his comments on meaning are quite rhetorical and enthymematic.
One often hears it claimed that future artificial intelligences could have significant, possibly decisive, advantages over us humans. This claim plays an important role in the debate surrounding the technological singularity, so considering the evidence in its favour is a worthy enterprise. This post attempts to do just that by examining a recent article by Kaj Sotala entitled “Advantages of Artificial Intelligences, Uploads and Digital Minds”.
Death looms large for most of us, even if we try not to think about it. But should we be worried at the prospect of our eventual demise? Should we do everything we can to avoid it (e.g. by opting for cryopreservation)? Or should we approach it with indifference and equanimity?
This is the second part in a brief series looking at whether human enhancement — understood as the use of scientific knowledge and technology to improve the human condition — would rob our lives of meaning and value. The focus is on David Owens’s article “Disenchantment”. The goal is to clarify the arguments presented by Owens, and to subject them to some critical scrutiny.
Although there are some enthusiasts, many people I talk to are deeply ambivalent about the prospects of human enhancement, particularly in its more radical forms. To be sure, this ambivalence might be rooted in human prejudice and bias toward the status quo, but I’m curious to see whether there is any deeper, more persuasive reason to share that unease.
This is the second (and final) part in my series looking at the arguments from Muehlhauser and Helm’s (MH’s) paper “The Singularity and Machine Ethics”. As noted in part one, proponents of the Doomsday Argument hold that if a superintelligent machine (AI+) has a decisive power advantage over human beings, and if that machine has goals and values antithetical to those we human beings think are morally ideal, then this spells our doom. The naive response to this argument is to claim that we can avoid this outcome by programming the AI+ to “want what we want”. One of the primary goals of MH’s paper is to dispute the credibility of this response. The goal of this series of blog posts is to clarify and comment upon the argument they develop.
This is the first of two posts on Muehlhauser and Helm’s article “The Singularity and Machine Ethics”. It is part of my ongoing, but completely spontaneous and unplanned, series on the technological singularity. Before reading this, I would suggest reading my earlier attempts to provide some overarching guidance on how to research this topic (link is above). Much of what I say here is influenced by my desire to “fit” certain arguments within that framework. That might lead to some distortion of the material I’m discussing (though hopefully not), but if you understand the framework you can at least appreciate what I’m trying to do (even if you don’t agree with it).
I want to begin with the problem. A couple of weeks back — the 24th of December to be precise — I published a post offering a general framework for understanding and participating in (philosophical) debates about the technological singularity. The rationale behind the post was simple: I wanted to organise the dialectical terrain surrounding the technological singularity into a number of core theses, each of which could be supported or rejected by a number of arguments. Analysing, developing and defending the premises of those arguments could then be viewed as the primary role of philosophical research about this topic. Obviously, I was aware that people are engaged in this kind of research anyway, but I had hoped that the framework might provide some utility for those who are new to the area, while also enabling existing researchers to see how their work connects with that of others.