[Note: This is (roughly) the text of a talk I delivered at TEDxWHU on the 4th February 2017. A video of the talk should be available within a few weeks.]
There is a cave about 350km from here, in the Swabian Jura. It is called the Hohle Fels (this picture is the entrance to it). Archaeologists have been excavating it since the late 1800s and have discovered a number of important artifacts from the Upper Paleolithic era. In June 2005, they announced an interesting discovery.
It is a noticeable feature of intellectual life that many people research the same topics, but do so using different conceptual and disciplinary baggage, and consequently fail to appreciate how the conclusions they reach echo or complement the conclusions reached by others.
(The following is, roughly, the text of a talk I delivered to the IP/IT/Media law discussion group at Edinburgh University on the 25th of November 2016. The text is much longer than what I actually presented and I modified some of the concluding section in light of the comments and feedback I received on the day. I would like to thank all those who were present for their challenging and constructive feedback. All of this builds on a previous post I did on the ‘logical space of algocracy’)
It’s been a while since I wrote something about theism and morality. There was a time when I couldn’t go more than two weeks without delving into the latest paper on divine command theory and moral realism. More recently I seem to have grown disillusioned with that particular philosophical joy ride. But last week Erik Wielenberg’s new paper ‘Euthyphro and Moral Realism: A Reply to Harrison’ managed to cross my transom. I decided I should read it.
Fellows Kevin LaGrandeur and John Danaher were interviewed by Future Left about the potential impact of automation and computerization on the future of the American workforce. Their comments form part of an initiative to get the American presidential candidates to address this issue in their platforms, and they are also included in an article here.
I use pen and paper to do most of my serious thinking. Whether it is outlining blogposts or academic papers, taking notes or constructing arguments, I pretty much always take out my trusty A4 pad and pen when I run into a cognitive trough. To be sure, I often mull ideas over in my head for a long time beforehand, but when I want to move beyond my muddled and incoherent thoughts, I will grab for my pen and paper. I am sure that many of you do the same. There is something cognitively different about thinking outside your head: creating an external representation of your thoughts reveals their strengths and weaknesses in a way that internal dialogue never can.
There is a famous story about an encounter between Henry Ford II (CEO of Ford Motors) and Walter Reuther (head of the United Automobile Workers Union). Ford was showing Reuther around his factory, proudly displaying all the new automating technologies he had introduced to replace human workers. Ford gloated, asking Reuther ‘How are you going to get those robots to pay union dues?’. Reuther responded with equal glee ‘Henry, how are you going to get them to buy your cars?’.
Robust moral realism is the view that moral facts exist, but that they are not reducible to non-moral or natural facts. According to the robust realist, when I say something like ‘It is morally wrong to torture an innocent child for fun’, I am saying something that is true, but whose truth is not reducible to the non-moral properties of torture or children. Robust moral realism has become surprisingly popular in recent years, with philosophers like Derek Parfit, David Enoch, Erik Wielenberg and Russell Shafer-Landau all defending versions of it.
China Miéville’s novel Embassytown is a challenging and provocative work of science fiction. It is set in Embassytown, a colonial outpost of the human-run Bremen empire, located on Arieka, a planet on the edge of the known universe. The native alien race are known as the Ariekei and they have an unusual language. They have two speaking orifices and as a result speak two words at the same time.
I would like to be happier. I would like to live a good life. But I often get it wrong. Once upon a time I thought that getting a PhD would make me happy. It didn’t. It made me painfully aware of my own ignorance and more anxious about the future. Another time I thought that going on holidays to Spain for a week would make me happy: what could be better than a week relaxing in the sunshine, without a care in the world? Surely it would be just the balm that my overactive mind needed? But it didn’t make me happy either. It was too hot and I quickly got bored. By the end of the week I was itching to get home.
Contrast these two scenarios. First, I’m in the supermarket. I want to remember what I need to buy but I’m not the kind of guy who writes things down in lists. I just keep the information stored in my head and then jog my memory when I arrive at the store. If I’m lucky, the list of items immediately presents itself to my conscious mind. I remember what I need to buy. Second, I’m in the supermarket. I want to remember what I need to buy. But I’m hopelessly forgetful so I have to write things down in a list. I take the list from my pocket and look at the items. Now, I remember what I needed to buy.
IEET Affiliate Scholar John Danaher has a new paper coming out in the journal Neuroethics. This one argues that directly augmenting the brain might be the most politically appropriate method of moral enhancement. This paper brings together his work on enhancement, the extended mind, and the political consequences of advanced algorithmic governance. Details below:
I have worked hard to get where I am. I come from a modest middle class background. Neither of my parents attended university. They grew up in Ireland in the 1950s and 1960s, at a time when the economy was only slowly emerging from its agricultural roots. My siblings and I were born and raised in the 1970s and 1980s, in an era of high unemployment and emigration. Things started to get better in the 1990s as the Irish economy underwent its infamous ‘Celtic Tiger’ boom. I did well in school and received a (relatively) free higher education, eventually pursuing a masters and PhD in the mid-to-late 2000s.
The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?
Everyone knows about the Turing Test. It was first proposed by Alan Turing in his famous 1950 paper ‘Computing Machinery and Intelligence’. The paper started with the question ‘Can a machine think?’. Turing noted that philosophers would be inclined to answer that question by hunting for a definition. They would identify the necessary and sufficient conditions for thinking and then they would try to see whether machines met those conditions. They would probably do this by closely investigating the ordinary language uses of the term ‘thinking’ and engaging in a series of rational reflections on those uses. At least, Oxbridge philosophers in the 1950s would have been inclined to do it this way.
Some people are frightened of the future. They think humanity is teetering on the brink. Something radical must be done to avoid falling over the edge. This is the message underlying Ingmar Persson and Julian Savulescu’s book Unfit for the Future. In it they argue that humanity faces several significant existential risks (e.g. anthropogenic climate change, weapons of mass destruction, loss of biodiversity etc.).
[This is the text of a talk I’m delivering at the ICM Neuroethics Network in Paris this week]
Santiago Guerra Pineda was a 19-year old motorcycle enthusiast. In June 2014, he took his latest bike out for a ride. It was a Honda CBR 600, a sports motorcycle with some impressive capabilities. Little wonder then that he opened it up once he hit the road. But maybe he opened it up a little bit too much? He was clocked at over 150mph on the freeway near Miami Beach in Florida. He was going so fast that the local police decided it was too dangerous to chase him. They only caught up with him when he ran out of gas.
I would like to be a better swimmer, a better runner, a better guitarist, a better singer, a better lecturer, a better writer, a better organiser, a better partner, and generally a better person. But how can I achieve all these things? I have no method. I approach things haphazardly, hoping that sheer repetition will lead to betterment. This hope is probably forlorn.
Our smart phones, smart watches, and smart bands promise a lot. They promise to make our lives better, to increase our productivity, to improve our efficiency, to enhance our safety, to make us fitter, faster, stronger and more intelligent. They do this through a combination of methods. One of the most important is outsourcing, i.e. taking away the cognitive and emotional burden associated with certain activities. Consider the way in which Google Maps allows us to outsource the cognitive labour of remembering directions. This removes a cognitive burden and potential source of anxiety, and enables us to get to our destinations more effectively. We can focus on more important things. It’s clearly a win-win.
Seneca was a wealthy Roman stoic and advisor to the emperor Nero. In the third of his Letters from a Stoic, entitled ‘On True and False Friendship’, he makes the following observation:
As to yourself, although you should live in such a way that you trust your own self with nothing which you could not entrust even to your own enemy, yet, since certain matters occur which convention keeps secret, you should share with a friend at least all your worries and reflections.
NOTE: This is a guest post by Iason Gabriel from St. John’s College, Oxford. I recently did a series on Iason’s excellent article ‘Effective Altruism and its Critics’. In this post, Iason develops his counterfactual critique of effective altruism. Be sure to check out more of Iason’s work on his academia page.
This is going to be my final post on the topic of effective altruism (for the time being anyway). I’m working my way through the arguments in Iason Gabriel’s article ‘Effective Altruism and its Critics’. Once I finish, Iason has kindly agreed to post a follow-up piece which develops some of his views.
IEET Affiliate Scholar John Danaher has a new paper coming out in the journal Bioethics. It’s about the philosophy of education and student use of cognitive enhancement drugs. It argues that universities might be justified in regulating their students’ use of enhancement drugs, but only in a very mild, non-compulsory way, and it proposes that a system of voluntary commitment contracts might be an interesting way to do this. The details are below.
After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What’s wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.
IEET Affiliate Scholar John Danaher is hiring a research assistant as part of his Algocracy and Transhumanism project. It’s a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.
This is the second in a two-part series (read Part I here) looking at the ethics of intimate surveillance. In part one, I explained what was meant by the term ‘intimate surveillance’, gave some examples of digital technologies that facilitate intimate surveillance, and looked at what I take to be the major argument in favour of this practice (the argument from autonomy).