I had the good fortune to be asked back on to the Robot Overlordz podcast this week. I am the guest on episode #163 during which I chat with the hosts (Mike Johnston and Matt Bolton) about the ethical, legal and social implications of sex robots. We also talk about related issues from the world of AI and futurism.
An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead either to a reductio of the doomsayer’s position, or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Money has long fascinated me, and not for the obvious reasons. Although I’d like to have more of it, my interest is largely philosophical. It is the ontology of money that has always disturbed me. Ever since I was a child, collecting old coins and hoarding my pocket money, I’ve wondered why it is that certain physical tokens can function as money and others cannot. What is money made from? What is it grounded in? Why do certain monetary systems fail and others succeed?
They are glib and superficially charming. They have a grandiose sense of self-worth. They are often pathological liars and routinely engage in acts of cunning and manipulation. If they do something wrong, they are without remorse.
William Lane Craig has a pretty dispiriting take on the atheistic view of life: If there is no God, then man and the universe are doomed. Like prisoners condemned to death, we await our unavoidable execution. There is no God, and there is no immortality. And what is the consequence of this? It means that life itself is absurd. It means that the life we have is without ultimate significance, value or purpose. (Craig 2008, 72)
Should prospective parents have to apply for parental licences? The argument seems obvious. Having children is a serious business. Negligent or irresponsible parents risk causing long-term harms to their offspring, harms that often have spillover effects on the rest of society. A licensing system should help us to filter out such parents. Therefore, a licensing system would benefit children and society at large. QED
The campaign for the introduction of a universal basic income (UBI) has been gaining ground in recent years. What was once a slightly obscure proposal, beloved by certain political theorists and welfare reformists, is now being embraced as a potential solution to the threat of technological unemployment. I myself have written about it on several occasions, mainly focusing on different political and philosophical arguments in favour of its introduction.
Human beings have long desired immortality. In his book on the topic, cleverly-titled Immortality, Stephen Cave argues that this desire has taken on four distinct forms over the course of human history. In the first, people seek immortality by simply trying to stay alive, either through the help of magic or science. In the second, people seek resurrection, sometimes in the same physical form and sometimes in an altered plane of existence.
Publish or perish, or so they say. That’s the rule in academia. But not all publications are created equal. I’ve “published” over 700 posts on this blog (and republished many on other blogs), and although I think there are advantages to having done so, I’d be lying if I said these publications were academically “significant”. They’re certainly not significant from the perspective of the administrators and overseers lurking within the groves of academe. If you want to please these people you must produce peer-reviewed publications (preferably double or triple-blind peer-reviewed publications) in high impact academic journals. That’s where the game is.
I’m trying to wrap my head around the extended mind hypothesis (EMH). I’m doing so because I’m interested in its implications for the debate about enhancement and technology. If the mind extends into the environment outside the brain/bone barrier, then we are arguably enhancing our minds all the time by developing new technologies, be they books and abacuses or smartphones and wearable tech. Consequently, we should have no serious principled objection to technologies that try to enhance directly inside the brain/bone barrier.
Democracy is the worst form of government except for all those other forms which from time to time we have tried. Granting this, we might be inclined to wonder what sorts of democratic decision-making procedures are possible. This is a question that Christian List sets out to answer in his paper “The Logical Space of Democracy”. In this post, I want to share the logical space alluded to in his title.
You may have heard of the Marquis de Condorcet (Nicolas de Condorcet). He was an 18th century French philosopher, mathematician and social theorist. He was a champion of the Enlightenment, and a leading participant in the French revolution. He is probably most famous today for three things. First, his jury theorem which showed how, under certain conditions, majority voting can get us closer to the truth. Second, his voting method which proposed that winners of elections be determined by pairing each candidate against every other candidate and figuring out who won each of those contests.
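The pairwise-contest idea at the heart of Condorcet’s voting method is simple enough to sketch in a few lines of Python. This is a minimal illustration, not a full voting library; the candidate names and ballots below are hypothetical:

```python
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every rival in head-to-head
    majority contests, or None if no such candidate exists (the
    famous Condorcet paradox)."""
    def prefers(ballot, a, b):
        # Ballots rank candidates best-first: a lower index means preferred.
        return ballot.index(a) < ballot.index(b)

    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_votes = sum(1 for ballot in ballots if prefers(ballot, a, b))
        b_votes = len(ballots) - a_votes
        if a_votes > b_votes:
            wins[a] += 1
        elif b_votes > a_votes:
            wins[b] += 1

    for c in candidates:
        if wins[c] == len(candidates) - 1:  # beat every other candidate
            return c
    return None

# Three voters rank three candidates (hypothetical ballots).
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(condorcet_winner(ballots, ["A", "B", "C"]))  # → A
```

Note that the function can return `None`: with ballots like A>B>C, B>C>A, C>A>B, the pairwise contests cycle and no candidate beats all the others, which is precisely the voting paradox also associated with Condorcet’s name.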
Democracies are the preferred form of modern government. Democracies pay homage to the notion that we are all moral equals. This means that no one human has an intrinsic right to exercise domination or control over another. No one human has the right to impose coercive rules on others.
Work is a dominant feature of contemporary life. Most of us spend most of our time working. Or if not actually working then preparing for, recovering from, and commuting to work. Work is the focal point, something around which all else is organised. We either work to live, or live to work.
Consider your smartphone for a moment. It provides you with access to a cornucopia of information. Some of it is general, stored on publicly accessible internet sites, and capable of being called up to resolve any pub debate one might be having (how many U.S. presidents have been assassinated? or how many times have Brazil won the World Cup?). Some of it is more personal, and includes a comprehensive databank of all emails and text message conversations you have had, your calendar appointments, the number of steps you have taken on any given day, books read, films watched, calories consumed and so forth.
Back in 1973, Bernard Williams published an article about the desirability of immortality. The article was entitled “The Makropulos Case: Reflections on the Tedium of Immortality”. The article used the story of Elina Makropulos — from Janacek’s opera The Makropulos Affair — to argue that immortality would not be desirable. According to the story, Elina Makropulos is given the elixir of life by her father. The elixir allows Elina to live for three hundred years at her current biological age. After this period has elapsed, she has to choose whether to take the elixir again and live for another three hundred. She takes it once, lives her three hundred years, and then chooses to die rather than live another three hundred. Why? Because she has become bored with her existence.
I’ve met Erik Parens twice; he seems like a thoroughly nice fellow. I say this because I’ve just been reading his latest book Shaping Our Selves: On Technology, Flourishing and a Habit of Thinking, and it is noticeable how much of his personality shines through in the book. Indeed, the book opens with a revealing memoir of Parens’s personal life and experiences in bioethics, specifically in the enhancement debate. What’s more, Parens’s frustrations with the limiting and binary nature of much philosophical debate are apparent throughout his book.
I have recently been working my way through some of the arguments in Derk Pereboom’s book Free Will, Agency and Meaning in Life. The book presents the most thorough case for hard incompatibilism of which I am aware. Hard incompatibilism is the view that free will is not compatible with causal determinism, and, what’s more, probably doesn’t even exist. In previous entries, I’ve looked at Pereboom’s critique of non-compatibilist theories of free will. In this post, I want to look at his famous argument against compatibilism.
The term “libertarianism” is used in two senses in philosophical circles. The first, and perhaps more famous sense, is as a name for a family of political theories that prioritise individual freedom; the second, and perhaps less famous (except among the cognoscenti), is as a specific view on the nature of free will. It is the latter sense that concerns me in this post.
What makes us free, if we are free? In other words, what conditions must be satisfied before we can say of any particular agent that he or she has, or lacks, free will? This is something that philosophers have long debated. Indeed, the free will debate is almost nauseating in its persistence and intricacy.
So I have another paper coming out. It’s about plea-bargaining, brain-based lie detection and the innocence problem. I wasn’t going to write about it on the blog, but then somebody sent me a link to a recent article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Since my article touches upon the same thing, I thought it might be worth offering a summary of its core argument.
Samuel Scheffler made quite a splash last year with his book Death and the Afterlife. It received impressive recommendations and reviews from numerous commentators, and was featured in a variety of popular outlets, including the Boston Review and the New York Review of Books. I’m a bit late to the party, having only got around to reading it in the past week, but I think I can see what all the fuss was about.
I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is going to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Hence why I wrote the article.
Some people think that neuroscience will have a significant impact on the law. Some people are more sceptical. A recent book by Michael Pardo and Dennis Patterson — Minds, Brains and Law: The Conceptual Foundations of Law and Neuroscience — belongs to the sceptical camp. In the book, Pardo and Patterson make a passionate plea for conceptual clarity when it comes to the interpretation of neuroscientific evidence and its potential application in the law. They suggest that most neurolaw hype stems from conceptual confusion. They want to throw some philosophical cold water on the proponents of this hype.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
Regular readers will know that I have recently been working my way through Erik Wielenberg’s fascinating new book Robust Ethics. In the book, Wielenberg defends a robust non-natural, non-theistic, moral realism. According to this view, moral facts exist as part of the basic metaphysical furniture of the universe. They are sui generis, not grounded in or constituted by other types of fact.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he does so in light of those three distinguishing features.
There are two basic types of ethical fact: (i) values, i.e. facts about what is good, bad, or neutral; and (ii) duties, i.e. facts about what is permissible, obligatory and forbidden. In this post I want to consider whether or not there is a defensible non-theistic account of values. In other words, is it possible for values to exist in a godless universe?