This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.
This is the second post in my series on Nicholas Agar's new book Truly Human Enhancement. The book offers an interesting take on the enhancement debate. It tries to carve out a middle ground between bioconservatism and transhumanism, arguing that modest enhancement (within or slightly beyond the range of human norms) is prudentially valuable, but that radical enhancement (well beyond the range of human norms) may not be.
Nicholas Agar has written several books about the ethics of human enhancement. In his latest, Truly Human Enhancement, he tries to stake out an interesting middle ground in the enhancement debate. Unlike the bioconservatives, Agar is not opposed to the very notion of enhancing human capacities. On the contrary, he is broadly in favour of it. But unlike the radical transhumanists, he does not embrace all forms of enhancement.
I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. Thus, for example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.
We’ve all been there. A good-natured dispute among friends escalates; one party insults the honour of another; and the situation can only be resolved with a duel. The two parties face each other down with pistols, and take alternating steps toward one another. They must decide when to shoot. Victory means life and honour restored; loss means death and dishonour. What will the outcome be?
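The duel described above is a classic timing game from game theory. As a rough illustration (the specific model is my own assumption, not the post’s): suppose each duellist’s hit probability rises linearly as the two approach, and that a shot is audible, so a miss hands certain victory to the opponent (the so-called “noisy duel”). The standard result for that model is that each party should fire at the first moment when the two hit probabilities sum to at least 1.

```python
# A toy sketch of the "noisy duel" timing game. Assumptions (mine):
# accuracy is a function of the number of steps taken toward the
# opponent, and a missed shot means the opponent wins for certain,
# so firing too early is as fatal as firing too late.

def first_optimal_step(steps, acc_a, acc_b):
    """Return the first step at which acc_a(s) + acc_b(s) >= 1,
    the classic firing point for the noisy duel."""
    for s in range(steps + 1):
        if acc_a(s) + acc_b(s) >= 1:
            return s
    return steps

steps = 10
acc_a = lambda s: s / steps          # duellist A: accuracy rises linearly
acc_b = lambda s: 0.5 * s / steps    # duellist B: a worse shot

print(first_optimal_step(steps, acc_a, acc_b))  # → 7
```

Note that the better shot gains nothing by firing earlier: in this model the optimal firing moment is the same for both parties, which is part of what makes the game interesting.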
Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers.
This post examines how natural lawyers view the connection between facts about human nature and ethical norms. To guide me through this difficult topic, I’m going to use an article by one of the leading contemporary natural lawyers, Patrick Lee. The article in question is called “Human Nature and Moral Goodness” and it appeared in the book The Normativity of the Natural: Human Goods, Human Virtues, and Human Flourishing.
Abstract: Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots has displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, I argue that the opposing hypothesis—that prostitution will be resilient to technological unemployment—is also worth considering. Indeed, I argue that increasing levels of technological unemployment in other fields may well drive more people into the sex work industry. Furthermore, I argue that no matter which hypothesis you prefer—displacement or resilience—you can make a good argument for the necessity of a basic income guarantee, either as an obvious way to correct for the precarity of sex work, or as a way to disincentivise those who may be drawn to prostitution.
This series of blog posts is looking at arguments in favour of sousveillance. In particular, it is looking at the arguments proffered by one of the pioneers and foremost advocates of sousveillance: Steve Mann. The arguments in question are set forth in a pair of recent papers, one written by Mann himself, the other with the help of co-author Mir Adnan Ali. Part one clarified what was meant by the term “sousveillance”, and considered an initial economic argument in its favour. To briefly recap, “sousveillance” refers to the general use of veillance technologies (i.e. technologies that can capture and record data about other people) by persons who are not in authority.
Steve Mann (pictured) has been described as the world’s first cyborg, and as a pioneer in wearable computing. He is certainly the latter. I’m not so sure about the former (I believe Mann rejects the title himself). He is also one of the foremost advocates for sousveillance in the contemporary era. Sousveillance is the inverse of surveillance. Instead of recording equipment solely being used by those in authority to record data about the rest of us, sousveillance advocates argue for a world in which ordinary citizens can turn the recording equipment back onto the authorities (and one another). This is thought to be beneficial in numerous ways.
I’ve written a few posts about mind-uploading, focusing mainly on its risks and philosophical problems. In each of these posts I’ve drawn distinctions between different varieties of “uploading” and suggested that some are less prone to risks and problems than others. So far, the distinctions I’ve drawn have been of my own choosing, based on what I’ve read about the topic over the years. But in his article “A Framework for Approaches to Transfer of a Mind’s Substrate”, Sim Bamford offers an alternative, slightly more sophisticated, framework for thinking about these issues. I want to share that framework in this post.
The Gamer’s Dilemma is the title of an article by Morgan Luck. We covered that article in part one. In brief, the article argues that there is something puzzling about attitudes toward virtual acts which, if they took place in the real world, would be immoral. To be precise, there is something puzzling about attitudes toward virtual murder and virtual paedophilia.
Modern video games give players the opportunity to engage in highly realistic depictions of violent acts. Among these is the act of virtual murder: the player’s character intentionally kills someone in the game environment without good cause. Most avid gamers don’t seem overly concerned about this (reputed links between video games and violence notwithstanding). Nevertheless, when the possibility of other immoral virtual acts — say virtual paedophilia — is raised, people become rather more squeamish. Why is this? And is this double standard justified?
This is the second part in a short series of posts on predictive algorithms and the virtues of transparency. The series is working off some ideas in Tal Zarsky’s article “Transparent Predictions”. The series is written against the backdrop of the increasingly widespread use of data-mining and predictive algorithms and the concerns this has raised.
Transparency is a much-touted virtue of the internet age. Slogans such as the “democratisation of information” and “information wants to be free” trip lightly off the tongue of many commentators; classic quotes, like Brandeis’s “sunlight is the best disinfectant” are trotted out with predictable regularity. But why exactly is transparency virtuous? Should we aim for transparency in all endeavours? Over the next two posts, I look at four possible answers to that question.
This is the third (and final) part in my ongoing series about the rationality of mind-uploading. The series deals with something called Searle’s Wager, which is an argument against the rationality of mind-uploading. The argument was originally developed by Nicholas Agar in his 2011 book Humanity’s End. This series, however, covers a debate between Agar and Levy in the pages of the journal AI and Society. The first two parts discussed Levy’s critique; this part discusses Agar’s response.
This is the second in a series of posts looking at Searle's Wager and the rationality of mind-uploading. Searle's Wager is an argument that was originally developed by the philosopher Nicholas Agar. It claims that uploading one's mind to a computer (or equivalent substrate) cannot be rational because there is a risk that it might entail death. I covered the argument on this blog back in 2011. In this series, I'm looking at a debate between Nicholas Agar and Neil Levy about the merits of the argument. The current focus is on Levy's critique.
A couple of years ago I wrote a series of posts about Nicholas Agar’s book Humanity’s End: Why we should reject radical enhancement. The book critiques the arguments of four pro-enhancement writers. One of the more interesting aspects of this critique was Agar’s treatment of mind-uploading. Many transhumanists are enamoured with the notion of mind-uploading, but Agar argued that mind-uploading would be irrational due to the non-zero risk that it would lead to your death. The argument for this was called Searle’s Wager, as it relied on ideas drawn from the work of John Searle.
This is the second and final part in my series on the different conceptions of political freedom. The series is working from Philip Pettit's article "The Instability of Freedom as Noninterference: The Case of Isaiah Berlin". In this article, Pettit analyses three different conceptions of political freedom—freedom as non-frustration; freedom as non-interference; and freedom as non-domination—and makes an argument for the non-domination conception.
Freedom is an important ideal in liberal political theory, but what exactly does it entail? What do we have to do in order to achieve the ideal of political freedom? How will we know if we have achieved it? The first step to answering these questions will be to provide some concrete conception of what it means to be free.
As an addendum to my recent series on libertarianism and the basic income, I thought I would look at another political philosophy and the case it makes for the same proposal. The philosophy in question is civic republicanism, which has its roots in antiquity, but has most recently been defended by the philosopher Philip Pettit.
This is the third, and final, part in my series on libertarianism and the basic income. To quickly recap, the universal basic income (UBI) is a proposal for reforming the way in which welfare is paid, moving away from a selective and conditional system of payment to a universal and unconditional one. Libertarianism is a political philosophy associated with individual rights, the celebration of the free market, and the minimal state.
This is the second part in my series on libertarianism and the basic income. The universal basic income (UBI) is a proposal for reforming the way in which welfare is paid. It is thought to be radical because it is paid to everyone, regardless of their work status, or other sources of income. Libertarianism, on the other hand, is a political philosophy associated with robust negative and property rights, the promotion of the free market, and a minimal state.
I have recently become interested in the case for an unconditional basic income (UBI). In large part, this has been prompted by an increasing fascination with the phenomenon of technological unemployment and its future progression. Some argue that increasing levels of technological unemployment, and the associated capital-labour income inequality that comes with this, would be best solved by something like the UBI. This strikes me as a prima facie plausible argument.
Chemical castration has been legally recognised and utilised as a form of treatment for certain types of sex offender for many years. This is done in the belief that it can significantly reduce recidivism rates amongst this class of offenders. Its usage varies around the world. Nine U.S. states currently allow for it, as well as several European countries. Typically, it is presented as an “option” to sex offenders who are currently serving prison sentences, the idea being that if they voluntarily submit to chemical castration they can serve a reduced sentence.
This is the second (and final) post in my short series on Michael Hauskeller’s article “Forever Young? Life Extension and the Ageing Mind”. In the article, Hauskeller casts a critical eye over the life extensionist project. According to many leading proponents of life extension, the goal is not just to prolong life indefinitely, but to prolong youth. Hauskeller argues that this goal is unobtainable because youth is dependent on both mind and body. And although it may be possible to halt the aging of the body, it will never be possible to halt the aging of the mind.
There are a lot of people out there who would prefer not to die. There are also many people trying to make this a reality by working seriously on the science of life extension. The goal, it would seem, is to reverse (or at least halt) the aging process, and allow us to live indefinitely. Let’s call the people who share this goal the “extensionists”.