Seneca was a wealthy Roman Stoic and advisor to the emperor Nero. In the third of his Letters from a Stoic, entitled ‘On True and False Friendship’, he makes the following observation:
As to yourself, although you should live in such a way that you trust your own self with nothing which you could not entrust even to your own enemy, yet, since certain matters occur which convention keeps secret, you should share with a friend at least all your worries and reflections.
(NOTE: This is a guest post by Iason Gabriel from St. John’s College Oxford. I recently did a series on Iason’s excellent article ‘Effective Altruism and its Critics’. In this post, Iason develops his counterfactual critique of effective altruism. Be sure to check out more of Iason’s work on his academia page.)
This is going to be my final post on the topic of effective altruism (for the time being anyway). I’m working my way through the arguments in Iason Gabriel’s article ‘Effective Altruism and its Critics’. Once I finish, Iason has kindly agreed to post a follow-up piece which develops some of his views.
IEET Affiliate Scholar John Danaher has a new paper coming out in the journal Bioethics. It’s about the philosophy of education and student use of cognitive enhancement drugs. It argues that universities might be justified in regulating their students’ use of enhancement drugs, but only in a very mild, non-compulsory way, and suggests that a system of voluntary commitment contracts might be an interesting proposal. The details are below.
After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What’s wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.
IEET Affiliate Scholar John Danaher is hiring a research assistant as part of his Algocracy and Transhumanism project. It’s a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.
This is the second in a two-part series (read Part I here) looking at the ethics of intimate surveillance. In part one, I explained what was meant by the term ‘intimate surveillance’, gave some examples of digital technologies that facilitate intimate surveillance, and looked at what I take to be the major argument in favour of this practice (the argument from autonomy).
‘Intimate Surveillance’ is the title of an article by Karen Levy, a legal and sociological scholar currently based at NYU. It shines light on an interesting and under-explored aspect of surveillance in the digital era. The forms of surveillance that capture most attention are those undertaken by governments in the interests of national security or corporations in the interests of profit.
The debate about algorithmic governance (or as I prefer ‘algocracy’) has been gathering pace over the past couple of years. As computer-coded algorithms become ever more woven into the fabric of economic and political life, and as the network of data-collecting devices that feed these algorithms grows, we can expect that pace to quicken.
Here’s an interesting idea. It’s taken from Aaron Wright and Primavera de Filippi’s article ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’. The article provides an excellent overview of blockchain technology and its potential impact on the law, and ends with an interesting historical reflection: it suggests that the growth of blockchain technology may give rise to a new type of legal order, a lex cryptographia. This is similar to how the growth in international trading networks gave rise to a lex mercatoria and how the growth of the internet gave rise to a lex informatica.
I was first introduced to the work of Ian Morris last summer. Somebody suggested that I read his book Why the West Rules - For Now, which attempts to explain the differential rates of human social development between East and West over the past 12,000 years. I wasn’t expecting much: I generally prefer narrowly focused historical works, not ones that attempt to cover the whole of human history. But I was pleasantly surprised.
In 1651, Thomas Hobbes published Leviathan. It is arguably the most influential work of political philosophy in the modern era. The distinguished political theorist Alan Ryan believes that Hobbes’s work marks the birth of liberalism. And since most of the Western world now lives under liberal democratic rule, there is a sense in which we are all living in the shadow of Leviathan.
On the 8th August 1963, a gang of fifteen men boarded the Royal Mail train heading from London to Glasgow. They were there to carry out a robbery. In the end, they made off with £2.6 million (approximately £46 million in today’s money). The robbery had been meticulously planned. Using information from a postal worker (known as “the Ulsterman”), the gang waylaid the train at a signal crossing in Ledburn, Buckinghamshire.
What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionise the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience.
This post focuses on a particular argument about the ethics of body-based trades, in particular surrogacy and reproductive labour. The argument comes from Anne Phillips and is presented in her book Our Bodies, Whose Property?
I feel like there is a lot of exploitation in the world. When I buy clothes, I worry that they have been made by exploited workers, labouring in appalling conditions in sweatshops in developing countries. When I use my mobile phone, I worry that the coltan that is used to manufacture the chips has been sourced from exploited workers in conflict zones, and that the phones themselves have been assembled by exploited workers in large factory complexes somewhere in Asia. Of course, I still buy the clothes and use the phone (like pretty much everybody else). So the question arises: should I worry about the exploitation?
This post is a bit of a departure for me. I’m not an economist. Not by any stretch of the imagination. I dabble occasionally in economics-related topics, particularly those concerning technology and economic theory, but I rarely get involved in the traditional core of economics — in topics like property prices, economic growth, debt, wealth inequality and the like. But it’s precisely those topics that I want to get involved with in this post.
I am currently editing a book with Neil McArthur on the social, legal and ethical implications of sex robots. As part of that effort, I’m trying to develop a clearer understanding of the typical objections to the creation of sex robots. I have something of a history on this topic. I’ve developed objections to (certain types of) sex robots in my own previous work; and critiqued the objections of others, such as the Campaign Against Sex Robots, on this blog. But I have yet to step back and consider the structural properties these objections might share.
This post is the first substantive entry in my series about effective altruism. In a previous post, I offered a general introduction to the topic of effective altruism (EA) and sketched out a taxonomy of the main objections to the practice. In that post, I adopted a ‘thick’ definition of EA, which holds that one ought to do the most good one can do, assuming a welfarist and consequentialist approach to ethics, and favouring evidentially robust policy interventions.
My life is filled with trivial, time-wasting tasks. As an academic, teaching and research are the most valuable activities I perform. And yet as I progress in my career I find myself constantly drawn away from these two things to focus on administrative tasks. While efficient administration is important in large organisations (like universities), it feels like a major time-sink to someone like me because (a) I am not ultimately rewarded for being good at it (career progression depends far more on research and, to a lesser extent, teaching) and (b) I don’t have any aptitude for or interest in it.
I think metaphors are important. They can help to organise the way we think about something, highlighting its unappreciated features, and allowing us to identify possibilities that were previously hidden from view. They can also be problematic, biasing our thought in unproductive ways, and obscuring things that should be in plain view. Good metaphors are key.
From the days of the Acheulean hand-axe on, humans have always had a symbiotic relationship with technology. How far will that relationship go? One haunting vision of the future is provided by the Borg — one of the main villains of the Star Trek universe.
This post is the second in a short series looking at the arguments against the use of fully autonomous weapons systems (AWSs). As I noted at the start of the previous entry, there is a well-publicised campaign that seeks to pre-emptively ban the use of such systems on the grounds that they cross a fundamental moral line and fail to comply with the laws of war. I’m interested in this because it intersects with some of my own research on the ethics of robotic systems. And while I’m certainly not a fan of AWSs (I’m not a fan of any weapons systems), I’m not sure how strong the arguments of the campaigners really are.
The effective altruism (EA) movement has been gaining quite a lot of attention recently. Although EA ideas have been common in academic circles for years, two major books have been published in the past year presenting the central tenets of the movement to the wider public. The first was from the movement’s godfather, Peter Singer, and was called The Most Good You Can Do. The second was from the movement’s precocious young figurehead Will MacAskill and was called Doing Good Better. MacAskill’s book in particular received widespread media coverage, no doubt in part fuelled by the impressive resume of its young author.
The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.
East Coast Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA
Email: director @ ieet.org phone:
West Coast Contact: Managing Director, Hank Pellissier
425 Moraga Avenue, Piedmont, CA 94611
Email: hank @ ieet.org