IEET Affiliate Scholar John Danaher has a new paper coming out in the journal Neuroethics. This one argues that directly augmenting the brain might be the most politically appropriate method of moral enhancement. This paper brings together his work on enhancement, the extended mind, and the political consequences of advanced algorithmic governance. Details below:
I have worked hard to get where I am. I come from a modest middle class background. Neither of my parents attended university. They grew up in Ireland in the 1950s and 1960s, at a time when the economy was only slowly emerging from its agricultural roots. I and my siblings were born and raised in the 1970s and 1980s, in an era of high unemployment and emigration. Things started to get better in the 1990s as the Irish economy underwent its infamous ‘Celtic Tiger’ boom. I did well in school and received a (relatively) free higher education, eventually pursuing a masters and PhD in the mid-to-late 2000s.
The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?
Everyone knows about the Turing Test. It was first proposed by Alan Turing in his famous 1950 paper ‘Computing Machinery and Intelligence’. The paper started with the question ‘Can a machine think?’. Turing noted that philosophers would be inclined to answer that question by hunting for a definition. They would identify the necessary and sufficient conditions for thinking and then try to see whether machines met those conditions. They would probably do this by closely investigating the ordinary language uses of the term ‘thinking’ and engaging in a series of rational reflections on those uses. At least, Oxbridge philosophers in the 1950s would have been inclined to do it this way.
Some people are frightened of the future. They think humanity is teetering on the brink. Something radical must be done to avoid falling over the edge. This is the message underlying Ingmar Persson and Julian Savulescu’s book Unfit for the Future. In it they argue that humanity faces several significant existential risks (e.g. anthropocentric climate change, weapons of mass destruction, loss of biodiversity etc.).
[This is the text of a talk I’m delivering at the ICM Neuroethics Network in Paris this week]
Santiago Guerra Pineda was a 19-year-old motorcycle enthusiast. In June 2014, he took his latest bike out for a ride. It was a Honda CBR 600, a sports motorcycle with some impressive capabilities. Little wonder, then, that he opened it up once he hit the road. But maybe he opened it up a little too much? He was clocked at over 150mph on the freeway near Miami Beach in Florida. He was going so fast that the local police decided it was too dangerous to chase him. They only caught up with him when he ran out of gas.
I would like to be a better swimmer, a better runner, a better guitarist, a better singer, a better lecturer, a better writer, a better organiser, a better partner, and generally a better person. But how can I achieve all these things? I have no method. I approach things haphazardly, hoping that sheer repetition will lead to betterment. This hope is probably forlorn.
Our smartphones, smart watches, and smart bands promise a lot. They promise to make our lives better, to increase our productivity, to improve our efficiency, to enhance our safety, to make us fitter, faster, stronger and more intelligent. They do this through a combination of methods. One of the most important is outsourcing, i.e. taking away the cognitive and emotional burden associated with certain activities. Consider the way in which Google Maps allows us to outsource the cognitive labour of remembering directions. This removes a cognitive burden and potential source of anxiety, and enables us to get to our destinations more effectively. We can focus on more important things. It’s clearly a win-win.
Seneca was a wealthy Roman stoic and advisor to the emperor Nero. In the third of his Letters from a Stoic, entitled ‘On True and False Friendship’, he makes the following observation:
As to yourself, although you should live in such a way that you trust your own self with nothing which you could not entrust even to your own enemy, yet, since certain matters occur which convention keeps secret, you should share with a friend at least all your worries and reflections.
NOTE: This is a guest post by Iason Gabriel from St. John’s College Oxford. I recently did a series on Iason’s excellent article ‘Effective Altruism and its Critics’. In this post, Iason develops his counterfactual critique of effective altruism. Be sure to check out more of Iason’s work on his academia page.
This is going to be my final post on the topic of effective altruism (for the time being anyway). I’m working my way through the arguments in Iason Gabriel’s article ‘Effective Altruism and its Critics’. Once I finish, Iason has kindly agreed to post a follow-up piece which develops some of his views.
IEET Affiliate Scholar John Danaher has a new paper coming out in the journal Bioethics. It’s about the philosophy of education and student use of cognitive enhancement drugs. It suggests that universities might be justified in regulating their students’ use of enhancement drugs, but only in a very mild, non-compulsory way, and proposes that a system of voluntary commitment contracts might be an interesting way to do this. The details are below.
After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What’s wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.
IEET Affiliate Scholar John Danaher is hiring a research assistant as part of his Algocracy and Transhumanism project. It’s a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.
This is the second in a two-part series (read Part I here) looking at the ethics of intimate surveillance. In part one, I explained what was meant by the term ‘intimate surveillance’, gave some examples of digital technologies that facilitate intimate surveillance, and looked at what I take to be the major argument in favour of this practice (the argument from autonomy).
‘Intimate Surveillance’ is the title of an article by Karen Levy - a legal and sociological scholar currently based at NYU. It shines a light on an interesting and under-explored aspect of surveillance in the digital era. The forms of surveillance that capture most attention are those undertaken by governments in the interests of national security, or by corporations in the interests of profit.
The debate about algorithmic governance (or as I prefer ‘algocracy’) has been gathering pace over the past couple of years. As computer-coded algorithms become ever more woven into the fabric of economic and political life, and as the network of data-collecting devices that feed these algorithms grows, we can expect that pace to quicken.
Here’s an interesting idea. It’s taken from Aaron Wright and Primavera de Filippi’s article ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’. The article provides an excellent overview of blockchain technology and its potential impact on the law. It ends with an interesting historical reflection. It suggests that the growth of blockchain technology may give rise to a new type of legal order: a lex cryptographia. This is similar to how the growth in international trading networks gave rise to a lex mercatoria and how the growth in the internet gave rise to a lex informatica.
I was first introduced to the work of Ian Morris last summer. Somebody suggested that I read his book Why the West Rules for Now, which attempts to explain the differential rates of human social development between East and West over the past 12,000 years. I wasn’t expecting much: I generally prefer narrowly focused historical works, not ones that attempt to cover the whole of human history. But I was pleasantly surprised.
In 1651, Thomas Hobbes published Leviathan. It is arguably the most influential work of political philosophy in the modern era. The distinguished political theorist Alan Ryan believes that Hobbes’s work marks the birth of liberalism. And since most of the Western world now lives under liberal democratic rule, there is a sense in which we are all living in the shadow of Leviathan.
On the 8th of August 1963, a gang of fifteen men boarded the Royal Mail train heading from London to Glasgow. They were there to carry out a robbery. In the end, they made off with £2.6 million (approximately £46 million in today’s money). The robbery had been meticulously planned. Using information from a postal worker (known as “the Ulsterman”), the gang waylaid the train at a signal crossing in Ledburn, Buckinghamshire.
What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionise the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience.
This post focuses on a particular argument about the ethics of body-based trades, in particular surrogacy and reproductive labour. The argument comes from Anne Phillips and is presented in her book Our Bodies, Whose Property?
I feel like there is a lot of exploitation in the world. When I buy clothes, I worry that they have been made by exploited workers, labouring in appalling conditions in sweatshops in developing countries. When I use my mobile phone, I worry that the coltan that is used to manufacture the chips has been sourced from exploited workers in conflict zones, and that the phones themselves have been assembled by exploited workers in large factory complexes somewhere in Asia. Of course, I still buy the clothes and use the phone (like pretty much everybody else). So the question arises: should I worry about the exploitation?
This post is a bit of a departure for me. I’m not an economist. Not by any stretch of the imagination. I dabble occasionally in economics-related topics, particularly those concerning technology and economic theory, but I rarely get involved in the traditional core of economics — in topics like property prices, economic growth, debt, wealth inequality and the like. But it’s precisely those topics that I want to get involved with in this post.
I am currently editing a book with Neil McArthur on the social, legal and ethical implications of sex robots. As part of that effort, I’m trying to develop a clearer understanding of the typical objections to the creation of sex robots. I have something of a history on this topic. I’ve developed objections to (certain types of) sex robots in my own previous work; and critiqued the objections of others, such as the Campaign Against Sex Robots, on this blog. But I have yet to step back and consider the structural properties these objections might share.