What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global “neural network”: a constantly updated, massively interconnected control system for the world. Imagine what it will be like when every “thing” in your home, place of work, school, city, state and country is connected to a smart device.
The paper introduces a novel critique of the Kalam Cosmological argument. Or rather, a novel critique of a specific sub-argument offered in favour of the Kalam. As you may be aware, the Kalam argument makes three key claims: (i) that the universe must have begun to exist; (ii) that anything that begins to exist must have a cause of its existence; and (iii) that in the case of the universe, the cause must be God.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would consist of a single watchtower, surrounded by a ring of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not; the prisoners would never know, and so would have to behave as if they were always being watched.
I seem to work a lot. At least, I think I work a lot. Like many in the modern world, I find it pretty hard to tell the difference between work and the rest of my life. Apart from when I’m sleeping, I’m usually reading, writing or thinking (or doing some combination of the three). And since that is essentially what I get paid to do, it is difficult to distinguish between work and leisure. Of course, reading, writing and thinking are features of many jobs. The difference is that, as an academic, I have the luxury of deciding what I should be reading, writing and thinking about.
Consider the following passage from Richard Dawkins’s book Unweaving the Rainbow: “We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people…”
I have been blogging for nearly five years (hard to believe). In that time, I’ve written over 650 posts on a wide variety of topics: religion, metaethics, applied ethics, philosophy of mind, philosophy of law, technology, epistemology, philosophy of science and so on.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function, and our minds along with them. This will result in our deaths. Little wonder, then, that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced substrate has proved so attractive to futurists and transhumanists.
In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems, the obvious one being that it seems inconsistent with causal determinism.
I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.
I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive-enhancing drugs actually work?
Street defends a form of constructivist antirealism, which I find quite attractive. I was thus pleasantly surprised to find that she had also recently written a paper dealing with one of my favourite topics in the philosophy of religion: the problem of evil and its moral implications. It’s a very good paper too, one that I’m sure will provide plenty of fodder for discussion.
The paper tries to fuse traditional concerns about the problem of evil with recent work in population ethics. The result is an interesting, and somewhat novel, atheological argument. As is the case with every journal club, I will try to kick-start the discussion by providing an overview of the paper’s main arguments, along with some questions you might like to ponder about its effectiveness.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not a vegetarian myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.
In the first two entries, I looked at some of Bostrom’s conceptual claims about the nature of agency, and the possibility of superintelligent agents pursuing goals that may be inimical to human interests. I now move on to see how these conceptual claims feed into Bostrom’s case for an AI doomsday scenario.
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
Voltaire once said that “work saves a man from three great evils: boredom, vice and need.” Many people endorse this sentiment. Indeed, the ability to seek and secure paid employment is often viewed as an essential part of a well-lived life. Those who do not work are reminded of the fact. They are said to be missing out on a valuable and fulfilling human experience. The sentiment is so pervasive that some of the foundational documents of international human rights law — including the UN Declaration of Human Rights (UDHR Art. 23) and the International Covenant on Economic, Social and Cultural Rights (ICESCR Art. 6) — recognise and enshrine the “right to work”.
This is the second part of my series on feminism and the basic income. In part one, I looked at the possible effects of an unconditional basic income (UBI) on women. I also looked at a variety of feminist arguments for and against the UBI. The arguments focused on the impact of the UBI on economic independence, freedom of choice, the value of unpaid work, and women’s labour market participation.
The introduction of an unconditional basic income (UBI) is often touted as a positive step in terms of freedom, well-being and social justice. That’s certainly the view of people like Philippe Van Parijs and Karl Widerquist, both of whose arguments for the UBI I covered in my two most recent posts. But could there be other, less progressive effects arising from its introduction?
This post is part of an ongoing series I’m doing on the unconditional basic income (UBI). The UBI is an income grant payable to a defined group of people (e.g. citizens, or adults, or everyone) within a defined geo-political space. The income grant could be set at various levels, with most proponents thinking it should be at or above subsistence level, or at least at the maximum that is affordable in a given society. In my most recent post, I looked at Van Parijs’s famous defence of the UBI. Today, I look at Widerquist’s critique of Van Parijs, as well as his own preferred justification for the UBI.
I want to write a few posts about the basic income over the next couple of months. This is part of an ongoing interest I have in the future of work and solutions to the problem of technological unemployment. I’ll start by looking at a debate between Philippe van Parijs and Elizabeth Anderson about the justice of an unconditional basic income (UBI).
Should we worry that only X% of CEOs, or politicians or philosophers (or whatever) are women? Is there something unjust or morally defective about a society with low percentages of women occupying these kinds of roles? That’s what we’re looking at in this series of posts, based on Janet Radcliffe Richards’s (RR’s) paper “Only X%: the Problem of Sex Inequality”.
Let’s start with a thought experiment. Suppose that in a given population 50% of people have blue eyes and 50% have brown eyes. Suppose further that there is no evidence to suggest that eye colour has any effect on cognitive ability; indeed, suppose that everything we know suggests that cognitive ability is equally distributed among blue and brown-eyed people. Now imagine that in this population 80% of all senior academics and professors are blue-eyed. What conclusions should we draw about the justice of this society?
This is the second part in my series on the ethics of benign carnivorism. The series is working off Jeff McMahan’s article “Eating animals the nice way”. Benign carnivorism (BC) is the view that it is ethically permissible to eat farmed meat, so long as the animals being reared have lived good lives (that they otherwise would not have lived) and have been killed painlessly.