Consider your smartphone for a moment. It provides you with access to a cornucopia of information. Some of it is general, stored on publicly accessible internet sites, and capable of being called up to resolve any pub debate one might be having (how many U.S. presidents have been assassinated? or how many times have Brazil won the World Cup?). Some of it is more personal, and includes a comprehensive databank of all emails and text message conversations you have had, your calendar appointments, the number of steps you have taken on any given day, books read, films watched, calories consumed and so forth.
Back in 1973, Bernard Williams published an article on the desirability of immortality, entitled “The Makropulos Case: Reflections on the Tedium of Immortality”. It used the story of Elina Makropulos, from Janáček’s opera The Makropulos Affair, to argue that immortality would not be desirable. According to the story, Elina Makropulos is given the elixir of life by her father. The elixir allows Elina to live for three hundred years at her current biological age. After this period has elapsed, she must choose whether to take the elixir again and live for another three hundred years. She takes it once, lives her three hundred years, and then chooses to die rather than continue. Why? Because she has become bored with her existence.
I’ve met Erik Parens twice; he seems like a thoroughly nice fellow. I say this because I’ve just been reading his latest book Shaping Our Selves: On Technology, Flourishing and a Habit of Thinking, and it is noticeable how much of his personality shines through in the book. Indeed, the book opens with a revealing memoir of Parens’s personal life and experiences in bioethics, specifically in the enhancement debate. What’s more, Parens’s frustrations with the limiting and binary nature of much philosophical debate are apparent throughout the book.
I have recently been working my way through some of the arguments in Derk Pereboom’s book Free Will, Agency and Meaning in Life. The book presents the most thorough case for hard incompatibilism of which I am aware. Hard incompatibilism is the view that free will is not compatible with causal determinism, and, what’s more, probably doesn’t even exist. In previous entries, I’ve looked at Pereboom’s critique of non-compatibilist theories of free will. In this post, I want to look at his famous argument against compatibilism.
The term “libertarianism” is used in two senses in philosophical circles. The first, and perhaps more famous sense, is as a name for a family of political theories that prioritise individual freedom; the second, and perhaps less famous (except among the cognoscenti), is as a specific view on the nature of free will. It is the latter sense that concerns me in this post.
What makes us free, if we are free? In other words, what conditions must be satisfied in order for us to say of any particular agent that they have (or lack) free will? This is something that philosophers have long debated. Indeed, the free will debate is almost nauseating in its persistence and intricacy.
So I have another paper coming out. It’s about plea-bargaining, brain-based lie detection and the innocence problem. I wasn’t going to write about it on the blog, but then somebody sent me a link to a recent article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Since my article touches upon the same issues, I thought it might be worth offering a summary of its core argument.
Samuel Scheffler made quite a splash last year with his book Death and the Afterlife. It received impressive recommendations and reviews from numerous commentators, and was featured in a variety of popular outlets, including the Boston Review and the New York Review of Books. I’m a bit late to the party, having only got around to reading it in the past week, but I think I can see what all the fuss was about.
I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is going to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Hence why I wrote the article.
Some people think that neuroscience will have a significant impact on the law. Some people are more sceptical. A recent book by Michael Pardo and Dennis Patterson — Minds, Brains and Law: The Conceptual Foundations of Law and Neuroscience — belongs to the sceptical camp. In the book, Pardo and Patterson make a passionate plea for conceptual clarity when it comes to the interpretation of neuroscientific evidence and its potential application in the law. They suggest that most neurolaw hype stems from conceptual confusion. They want to throw some philosophical cold water on the proponents of this hype.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
Regular readers will know that I have recently been working my way through Erik Wielenberg’s fascinating new book Robust Ethics. In the book, Wielenberg defends a robust non-natural, non-theistic, moral realism. According to this view, moral facts exist as part of the basic metaphysical furniture of the universe. They are sui generis, not grounded in or constituted by other types of fact.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.
There are two basic types of ethical fact: (i) values, i.e. facts about what is good, bad, or neutral; and (ii) duties, i.e. facts about what is permissible, obligatory or forbidden. In this post I want to consider whether or not there is a defensible non-theistic account of values. In other words, is it possible for values to exist in a godless universe?
We are entering the age of robotics. Robots will soon be assisting us in our homes; stacking our warehouses; driving our cars; delivering our Amazon purchases; providing emergency medical care; and generally taking our jobs. There’s lots to ponder as they do so. One obvious question — obvious at least to lawyers — is whether the age of robotics poses any unique challenges to our legal system.
Roughly (I’ll refine later on) the “technological singularity” (or “singularity” for short, in the right context) is the name given to the point in time at which greater-than-human machine intelligence is created. The concept (and name) was popularised by the science fiction author Vernor Vinge in the 1980s and 90s, though its roots can be traced further back in time to the work of John von Neumann and I.J. Good.
Advances in robotics and artificial intelligence are going to play an increasingly important role in human society. Over the past two years, I’ve written several posts about this topic. The majority of them focus on machine ethics and the potential risks of an intelligence explosion; others look at how we might interact with and have duties toward robots.
I am really looking forward to Frank Pasquale’s new book The Black Box Society: The Secret Algorithms that Control Money and Information. The book looks to examine and critique the ways in which big data is being used to analyse, predict and control our behaviour. Unfortunately, it is not out until January 2015. In the meantime, I’m trying to distract myself with some of Pasquale’s previously published material.
What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global “neural network”: a constantly-updated, massively interconnected, control system for the world. Imagine what it will be like when every “thing” in your home, place of work, school, city, state and country is connected to a smart device.
The paper introduces a novel critique of the Kalam cosmological argument. Or rather, a novel critique of a specific sub-argument offered in support of the Kalam. As you may be aware, the Kalam argument makes three key claims: (i) that the universe must have begun to exist; (ii) that anything that begins to exist must have a cause of its existence; and (iii) that in the case of the universe, the cause must be God.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a ring of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not; since the prisoners could never tell, they would have to behave as if they were always being watched.
I seem to work a lot. At least, I think I work a lot. Like many in the modern world, I find it pretty hard to tell the difference between work and the rest of my life. Apart from when I’m sleeping, I’m usually reading, writing or thinking (or doing some combination of the three). And since that is essentially what I get paid to do, it is difficult to distinguish between work and leisure. Of course, reading, writing and thinking are features of many jobs. The difference is that, as an academic, I have the luxury of deciding what I should be reading, writing and thinking about.
Consider the following passage from Richard Dawkins’s book Unweaving the Rainbow: “We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people…”
I have been blogging for nearly five years (hard to believe). In that time, I’ve written over 650 posts on a wide variety of topics: religion, metaethics, applied ethics, philosophy of mind, philosophy of law, technology, epistemology, philosophy of science and so on.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function, and when they do, our minds will cease along with them. This will result in our deaths. Little wonder then that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced, substrate has proved so attractive to futurists and transhumanists.
In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems, the most obvious being that it is inconsistent with causal determinism.
I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.
I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive enhancing drugs actually work?