Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers.
This post examines how natural lawyers view the connection between facts about human nature and ethical norms. To guide me through this difficult topic, I’m going to use an article by one of the leading contemporary natural lawyers, Patrick Lee. The article in question is called “Human Nature and Moral Goodness” and it appeared in the book The Normativity of the Natural: Human Goods, Human Virtues, and Human Flourishing.
Abstract: Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots has displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, I argue that the opposing hypothesis—that prostitution will be resilient to technological unemployment—is also worth considering. Indeed, I argue that increasing levels of technological unemployment in other fields may well drive more people into the sex work industry. Furthermore, I argue that no matter which hypothesis you prefer—displacement or resilience—you can make a good argument for the necessity of a basic income guarantee, either as an obvious way to correct for the precarity of sex work, or as a way to disincentivise those who may be drawn to prostitution.
This series of blog posts is looking at arguments in favour of sousveillance. In particular, it is looking at the arguments proffered by one of the pioneers and foremost advocates of sousveillance: Steve Mann. The arguments in question are set forth in a pair of recent papers, one written by Mann himself, the other with the help of co-author Mir Adnan Ali. Part one clarified what was meant by the term “sousveillance”, and considered an initial economic argument in its favour. To briefly recap, “sousveillance” refers to the general use of veillance technologies (i.e. technologies that can capture and record data about other people) by persons who are not in authority.
Steve Mann has been described as the world’s first cyborg, and as a pioneer in wearable computing. He is certainly the latter. I’m not so sure about the former (I believe Mann rejects the title himself). He is also one of the foremost advocates for sousveillance in the contemporary era. Sousveillance is the inverse of surveillance. Instead of recording equipment solely being used by those in authority to record data about the rest of us, sousveillance advocates argue for a world in which ordinary citizens can turn the recording equipment back onto the authorities (and one another). This is thought to be beneficial in numerous ways.
I’ve written a few posts about mind-uploading, focusing mainly on its risks and philosophical problems. In each of these posts I’ve drawn distinctions between different varieties of “uploading” and suggested that some are less prone to risks and problems than others. So far, the distinctions I’ve drawn have been of my own choosing, based on what I’ve read about the topic over the years. But in his article “A Framework for Approaches to Transfer of a Mind’s Substrate”, Sim Bamford offers an alternative, slightly more sophisticated, framework for thinking about these issues. I want to share that framework in this post.
The Gamer’s Dilemma is the title of an article by Morgan Luck. We covered that article in part one. In brief, the article argues that there is something puzzling about attitudes toward virtual acts which, if they took place in the real world, would be immoral. To be precise, there is something puzzling about attitudes toward virtual murder and virtual paedophilia.
Modern video games give players the opportunity to engage in highly realistic depictions of violent acts. Among these is the act of virtual murder: the player’s character intentionally kills someone in the game environment without good cause. Most avid gamers don’t seem overly concerned about this (reputed links between video games and violence notwithstanding). Nevertheless, when the possibility of other immoral virtual acts — say virtual paedophilia — is raised, people become rather more squeamish. Why is this? And is this double-standard justified?
This is the second part in a short series of posts on predictive algorithms and the virtues of transparency. The series is working off some ideas in Tal Zarsky’s article “Transparent Predictions”. The series is written against the backdrop of the increasingly widespread use of data-mining and predictive algorithms and the concerns this has raised.
Transparency is a much-touted virtue of the internet age. Slogans such as the “democratisation of information” and “information wants to be free” trip lightly off the tongue of many commentators; classic quotes, like Brandeis’s “sunlight is the best disinfectant” are trotted out with predictable regularity. But why exactly is transparency virtuous? Should we aim for transparency in all endeavours? Over the next two posts, I look at four possible answers to that question.
This is the third (and final) part in my ongoing series about the rationality of mind-uploading. The series deals with something called Searle’s Wager, which is an argument against the rationality of mind-uploading. The argument was originally developed by Nicholas Agar in his 2011 book Humanity’s End. This series, however, covers a debate between Agar and Neil Levy in the pages of the journal AI and Society. The first two parts discussed Levy’s critique; this part discusses Agar’s response.
This is the second in a series of posts looking at Searle's Wager and the rationality of mind-uploading. Searle's Wager is an argument that was originally developed by the philosopher Nicholas Agar. It claims that uploading one's mind to a computer (or equivalent substrate) cannot be rational because there is a risk that it might entail death. I covered the argument on this blog back in 2011. In this series, I'm looking at a debate between Nicholas Agar and Neil Levy about the merits of the argument. The current focus is on Levy's critique.
A couple of years ago I wrote a series of posts about Nicholas Agar’s book Humanity’s End: Why we should reject radical enhancement. The book critiques the arguments of four pro-enhancement writers. One of the more interesting aspects of this critique was Agar’s treatment of mind-uploading. Many transhumanists are enamoured with the notion of mind-uploading, but Agar argued that mind-uploading would be irrational due to the non-zero risk that it would lead to your death. The argument for this was called Searle’s Wager, as it relied on ideas drawn from the work of John Searle.
This is the second and final part in my series on the different conceptions of political freedom. The series is working from Philip Pettit's article "The Instability of Freedom as Noninterference: The Case of Isaiah Berlin". In this article, Pettit analyses three different conceptions of political freedom—freedom as non-frustration; freedom as non-interference; and freedom as non-domination—and makes an argument for the non-domination conception.
Freedom is an important ideal in liberal political theory, but what exactly does it entail? What do we have to do in order to achieve the ideal of political freedom? How will we know if we have achieved it? The first step to answering these questions will be to provide some concrete conception of what it means to be free.
As an addendum to my recent series on libertarianism and the basic income, I thought I would look at another political philosophy and the case it makes for the same proposal. The philosophy in question is civic republicanism, which has its roots in antiquity, but has most recently been defended by the philosopher Philip Pettit.
This is the third, and final, part in my series on libertarianism and the basic income. To quickly recap, the universal basic income (UBI) is a proposal for reforming the way in which welfare is paid, moving away from a selective and conditional system of payment to a universal and unconditional one. Libertarianism is a political philosophy associated with individual rights, the celebration of the free market, and the minimal state.
This is the second part in my series on libertarianism and the basic income. The universal basic income (UBI) is a proposal for reforming the way in which welfare is paid. It is thought to be radical because it is paid to everyone, regardless of their work status, or other sources of income. Libertarianism, on the other hand, is a political philosophy associated with robust negative and property rights, the promotion of the free market, and a minimal state.
I have recently become interested in the case for an unconditional basic income (UBI). In large part, this has been prompted by an increasing fascination with the phenomenon of technological unemployment and its future progression. Some argue that increasing levels of technological unemployment, and the capital-labour income inequality that comes with it, would be best addressed by something like the UBI. This strikes me as a prima facie plausible argument.
Chemical castration has been legally recognised and utilised as a form of treatment for certain types of sex offender for many years. This is based on the belief that it can significantly reduce recidivism rates amongst this class of offenders. Its usage varies around the world. Nine U.S. states currently allow for it, as well as several European countries. Typically, it is presented as an “option” to sex offenders who are currently serving prison sentences. The idea is that if they voluntarily submit to chemical castration, they can serve a reduced sentence.
This is the second (and final) post in my short series on Michael Hauskeller’s article “Forever Young? Life Extension and the Ageing Mind”. In the article, Hauskeller casts a critical eye over the life extensionist project. According to many leading proponents of life extension, the goal is not just to prolong life indefinitely, but to prolong youth. Hauskeller argues that this goal is unobtainable because youth is dependent on both mind and body. And although it may be possible to halt the aging of the body, it will never be possible to halt the aging of the mind.
There are a lot of people out there who would prefer not to die. There are also many people trying to make this a reality by working seriously on the science of life extension. The goal, it would seem, is to reverse (or at least halt) the aging process, and allow us to live indefinitely. Let’s call the people who share this goal the “extensionists”.
In this, the final part, we will do two further things. First, we will step back from the particular arguments for and against the legitimacy of mental illness, and focus on Neil Pickering’s meta-philosophical diagnosis of the problems inherent in the debate. Then, having sharpened our appreciation for the meta-philosophical issues, we will consider what is probably the most recent and widely-discussed attempt to define “illness” in such a way that it (properly) includes mental illnesses: Jerome Wakefield’s Harmful Dysfunction analysis.
This is the second post in a brief series looking at the philosophy of mental illness. As noted in part one, some people are suspicious about the concept of mental “illness”. To call something an illness is to deem it worthy of medical scrutiny and treatment. This makes sense — so they argue — when dealing with things like broken bones, viruses, clotted arteries, bacterial infections, cancerous tumours and so forth. They all involve clear, objectively assessable physical effects and causes. Mental illness is not the same: it involves more nebulous, less tractable effects and causes, ones that are not always open to the same level of objective assessment.
It may be a push, but I think it is fair to say that no branch of modern medicine faces the same existential challenges as psychiatry. To give a sense of the problem, a quick browse through Amazon reveals a plethora of books, many published within the past ten years, that either directly challenge the legitimacy of mental illness, call into question the medicalisation of the mind, or dispute the unholy alliance between “pharma” and psychiatry.
Life expectancy increased dramatically over the course of the 20th century. In the UK and US — to take two obvious examples — it increased by approximately 30 years. Further increases are projected in the future. In addition to this, many hope, and some demand, that advances in medical technology will dramatically increase lifespan (a subtly different concept from life expectancy) in the coming century. It may soon come to pass that lifespans of 120 to 150 years are no longer confined to the realms of science fiction.
Human beings have long performed sexual acts with artifacts. Ancient religious rituals oftentimes involved the performance of sexual acts with statues, and down through the ages a vast array of devices for sexual stimulation and gratification have been created. Little wonder then that a perennial goal among roboticists and AI experts has been the creation of sex robots (“sexbots”): robots from whom we can receive sexual gratification, and with whom we may even be able to achieve an emotional connection.