In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems, the most obvious being that it is inconsistent with causal determinism.
I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.
I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive-enhancing drugs actually work?
Street defends a form of constructivist antirealism, which I find quite attractive. I was thus pleasantly surprised to find that she had also recently written a paper dealing with one of my favourite topics in the philosophy of religion: the problem of evil and its moral implications. It’s a very good paper too, one that I’m sure will provide plenty of fodder for discussion.
The paper tries to fuse traditional concerns about the problem of evil with recent work in population ethics. The result is an interesting, and somewhat novel, atheological argument. As is the case with every journal club, I will try to kick-start the discussion by providing an overview of the paper’s main arguments, along with some questions you might like to ponder about its effectiveness.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not one myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.
In the first two entries, I looked at some of Bostrom’s conceptual claims about the nature of agency, and the possibility of superintelligent agents pursuing goals that may be inimical to human interests. I now move on to see how these conceptual claims feed into Bostrom’s case for an AI doomsday scenario.
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
Voltaire once said that “work saves a man from three great evils: boredom, vice and need.” Many people endorse this sentiment. Indeed, the ability to seek and secure paid employment is often viewed as an essential part of a well-lived life. Those who do not work are reminded of the fact. They are said to be missing out on a valuable and fulfilling human experience. The sentiment is so pervasive that some of the foundational documents of international human rights law — including the UN Declaration of Human Rights (UDHR Art. 23) and the International Covenant on Economic, Social and Cultural Rights (ICESCR Art. 6) — recognise and enshrine the “right to work”.
This is the second part of my series on feminism and the basic income. In part one, I looked at the possible effects of an unconditional basic income (UBI) on women. I also looked at a variety of feminist arguments for and against the UBI. The arguments focused on the impact of the UBI on economic independence, freedom of choice, the value of unpaid work, and women’s labour market participation.
The introduction of an unconditional basic income (UBI) is often touted as a positive step in terms of freedom, well-being and social justice. That’s certainly the view of people like Philippe Van Parijs and Karl Widerquist, both of whose arguments for the UBI I covered in my two most recent posts. But could there be other less progressive effects arising from its introduction?
This post is part of an ongoing series I’m doing on the unconditional basic income (UBI). The UBI is an income grant payable to a defined group of people (e.g. citizens, or adults, or everyone) within a defined geo-political space. The income grant could be set at various levels, with most proponents thinking it should be at or above subsistence level, or at least at the maximum that is affordable in a given society. In my most recent post, I looked at Van Parijs’s famous defence of the UBI. Today, I look at Widerquist’s critique of Van Parijs, as well as his own preferred justification for the UBI.
I want to write a few posts about the basic income over the next couple of months. This is part of an ongoing interest I have in the future of work and solutions to the problem of technological unemployment. I’ll start by looking at a debate between Philippe van Parijs and Elizabeth Anderson about the justice of an unconditional basic income (UBI).
Should we worry that only X% of CEOs, or politicians or philosophers (or whatever) are women? Is there something unjust or morally defective about a society with low percentages of women occupying these kinds of roles? That’s what we’re looking at in this series of posts, based on Janet Radcliffe Richards’s (RR’s) paper “Only X%: the Problem of Sex Inequality”.
Let’s start with a thought experiment. Suppose that in a given population 50% of people have blue eyes and 50% have brown eyes. Suppose further that there is no evidence to suggest that eye colour has any effect on cognitive ability; indeed, suppose that everything we know suggests that cognitive ability is equally distributed among blue and brown-eyed people. Now imagine that in this population 80% of all senior academics and professors are blue-eyed. What conclusions should we draw about the justice of this society?
This is the second part in my series on the ethics of benign carnivorism. The series is working off Jeff McMahan’s article “Eating animals the nice way”. Benign carnivorism (BC) is the view that it is ethically permissible to eat farmed meat, so long as the animals being reared have lived good lives (that they otherwise would not have lived) and have been killed painlessly.
Is it morally permissible to eat farmed meat? According to a position known as “benevolent carnivorism” it can be. I’ll offer a more detailed characterisation of this position below, but in general terms benevolent carnivorism (BC from here on out) is the view that it is permissible to eat farmed meat so long as the animals one eats live good lives (that they would not otherwise have lived) and are painlessly killed.
The Borg are the true villains of the Star Trek universe. True, the Klingons are warlike and jingoistic, the Romulans are devious and isolationist, and the Cardassians are just plain devious, but their methods and motivations are, for want of a better word, all too human-like. The Borg are truly alien: a hive-like superorganism, bent upon assimilating every living thing into their collective mind. To hardy individualists, this is the epitome of evil.
I think I might be a bit of a stoic. It’s not that I agree with stoic metaphysics or logic — I actually know very little about those things — but if asked about my general attitude toward life, I would describe it as being stoic. For me, that means that I try to live in the moment as much as possible, to constantly factor in the arbitrariness of the world around me, and to regularly practice negative visualisations.
In 1774, Goethe published the novel The Sorrows of Young Werther. The novel consists of a series of letters from a young, sensitive artist by the name of Werther. Over the course of these letters, we learn that Werther has become involved in a tragic love triangle. He believes that in order to resolve the love triangle, some member of it will have to die. Not being inclined to commit murder, Werther resolves to kill himself. This he duly does by shooting himself in the head.
Transhumanists want to liberate themselves from the limitations of the human body. Anarchists want to liberate themselves from the limitations of contemporary human social structures. You might think that these two goals are compatible: that the liberatory ethos of transhumanism could complement that of anarchism.
The notorious 1982 video game Custer’s Revenge requires the player to direct their crudely pixellated character (General Custer) to avoid attacks so that he can rape a Native American woman who is tied to a stake. The game, unsurprisingly, generated a great deal of controversy and criticism at the time of its release. Since then, video games with similarly problematic content, but far more realistic imagery, have been released. For example, in 2006 the Japanese company Illusion released the game RapeLay, in which the player stalks and rapes a mother and her two daughters.
This is going to be the final part in my series on Nicholas Agar’s book Truly Human Enhancement. In the most recent entry, I went through the first part of the argument in chapter 4. To briefly recap, that argument contends that radical enhancement may lead to the disintegration of personal identity (in either a metaphysical or evaluative sense).