The last couple of months have seen major victories for marriage equality. In May, Ireland voted to legalise same-sex marriage in a national referendum — the first country in the world to do so by popular vote. In June, the US Supreme Court issued a landmark 5-4 decision legalising same-sex marriage throughout the United States. These were important steps toward building a fairer and more just society. If marriage is to continue to exist as a legally recognised relationship status, then it is important that it do so in an egalitarian and inclusive manner. I don’t think anyone should doubt this.
The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.
I have recently been working my way through David Roden’s book Posthuman Life: Philosophy at the Edge of the Human. It is a unique and fascinating work. I am not sure that I have ever read anything quite like it. In the book, Roden defends a position which he refers to as speculative posthumanism. This holds, roughly, that the future we are creating through technological change could give rise to truly weird and alien forms of posthuman life.
Let’s assume technological unemployment is going to happen. Let’s assume that automating technologies will take over the majority of economically productive labour. It’s a controversial assumption, to be sure, but one with some argumentative basis. Should we welcome this possibility? On previous occasions, I have outlined some arguments for thinking that we should. In essence, these arguments claimed that if we could solve the distributional problems arising from technological unemployment (e.g. through a basic income guarantee), then freedom from work could be a boon in terms of personal autonomy, well-being and fulfilment.
You have probably noticed it already. There is a strange logic at the heart of the modern tech industry. The goal of many new tech startups is not to produce products or services for which consumers are willing to pay. Instead, the goal is to create a digital platform or hub that will capture information from as many users as possible — to grab as many ‘eyeballs’ as you can. This information can then be analysed, repackaged and monetised in various ways. The appetite for this information-capture and analysis seems to be insatiable, with ever-increasing volumes of information being extracted and analysed from an ever-expanding array of data-monitoring technologies.
This post is a bit of an experiment. As you may know, I have written a series of articles looking at how big data and algorithm-based decision-making could affect society. In doing so, I have highlighted some concerns we may have about a future in which many legal-bureaucratic decisions are either taken over by or made heavily dependent on data-mining algorithms and other artificial intelligence systems. I have even referred to such a future state of governance as being a state of ‘algocracy’ (rule by algorithm).
If you voluntarily consume alcohol and then go out and commit a criminal act, should you be held responsible for that act? Many people seem to think that you should. Indeed, within the criminal law, there is an oft-repeated slogan saying that “voluntary intoxication is no excuse”.
Consent is moral magic. It transforms an impermissible act into a permissible one. But deciding when and whether to respect a particular token or signal of consent is an ethically fraught business. Can children consent to medical treatment? Can adults with early stage dementia consent to give away all their earthly possessions? Is a smile or a nod sufficient for consent? Is it possible to consent to something by doing or saying nothing? Can you consent to have something done to you while you are asleep, if you provided the consent in writing in advance? Questions of this nature abound.
I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.
This post continues my discussion of the arguments in Nicholas Carr’s recent book The Glass Cage. The book is an extended critique of the trend towards automation. In the previous post, I introduced some of the key concepts needed to understand this critique. As I noted then, automation arises whenever a machine (broadly understood) takes over a task or function that used to be performed by a human (or non-human animal). Automation usually takes place within an intelligence ‘loop’.
How do we relate to technology? How does it relate to us? These are important questions, particularly in light of the increasingly ubiquitous and often hidden roles that modern computing technology plays in our lives. We have always relied on different forms of technology, from stone axes to trains and automobiles. But modern computing technology has some important properties. When it incorporates artificially intelligent programmes, and utilises robotic action-implementation systems, it has the ability to interfere with, and possibly supersede, human agency.
I had the good fortune to be asked back on to the Robot Overlordz podcast this week. I am the guest on episode #163 during which I chat with the hosts (Mike Johnston and Matt Bolton) about the ethical, legal and social implications of sex robots. We also talk about related issues from the world of AI and futurism.
An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting is its potential implication. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could lead to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Money has long fascinated me, and not for the obvious reasons. Although I’d like to have more of it, my interest is largely philosophical. It is the ontology of money that has always disturbed me. Ever since I was a child, collecting old coins and hoarding my pocket money, I’ve wondered why it is that certain physical tokens can function as money and others cannot. What is money made from? What is it grounded in? Why do certain monetary systems fail and others succeed?
They are glib and superficially charming. They have a grandiose sense of self-worth. They are often pathological liars and routinely engage in acts of cunning and manipulation. If they do something wrong, they are without remorse.
William Lane Craig has a pretty dispiriting take on the atheistic view of life: If there is no God, then man and the universe are doomed. Like prisoners condemned to death, we await our unavoidable execution. There is no God, and there is no immortality. And what is the consequence of this? It means that life itself is absurd. It means that the life we have is without ultimate significance, value or purpose. (Craig 2008, 72)
Should prospective parents have to apply for parental licences? The argument seems obvious. Having children is a serious business. Negligent or irresponsible parents risk causing long-term harms to their offspring, harms that often have spillover effects on the rest of society. A licensing system should help us to filter out such parents. Therefore, a licensing system would benefit children and society at large. QED
The campaign for the introduction of a universal basic income (UBI) has been gaining ground in recent years. What was once a slightly obscure proposal, beloved by certain political theorists and welfare reformists, is now being embraced as a potential solution to the threat of technological unemployment. I myself have written about it on several occasions, mainly focusing on different political and philosophical arguments in favour of its introduction.
Human beings have long desired immortality. In his book on the topic, cleverly-titled Immortality, Stephen Cave argues that this desire has taken on four distinct forms over the course of human history. In the first, people seek immortality by simply trying to stay alive, either through the help of magic or science. In the second, people seek resurrection, sometimes in the same physical form and sometimes in an altered plane of existence.
Publish or perish, or so they say. That’s the rule in academia. But not all publications are created equal. I’ve “published” over 700 posts on this blog (and republished many on other blogs), and although I think there are advantages to having done so, I’d be lying if I said these publications were academically “significant”. They’re certainly not significant from the perspective of the administrators and overseers lurking within the groves of academe. If you want to please these people you must produce peer-reviewed publications (preferably double or triple-blind peer-reviewed publications) in high impact academic journals. That’s where the game is.
I’m trying to wrap my head around the extended mind hypothesis (EMH). I’m doing so because I’m interested in its implications for the debate about enhancement and technology. If the mind extends into the environment outside the brain/bone barrier, then we are arguably enhancing our minds all the time by developing new technologies, be they books and abacuses or smartphones and wearable tech. Consequently, we should have no serious principled objection to technologies that try to enhance directly inside the brain/bone barrier.
Democracy is the worst form of government except for all those other forms which from time to time we have tried. Granting this, we might be inclined to wonder what sorts of democratic decision-making procedures are possible? This is a question that Christian List sets out to answer in his paper “The Logical Space of Democracy”. In this post, I want to share the logical space alluded to in his title.
You may have heard of the Marquis de Condorcet (Nicolas de Condorcet). He was an 18th century French philosopher, mathematician and social theorist. He was a champion of the Enlightenment, and a leading participant in the French revolution. He is probably most famous today for three things. First, his jury theorem which showed how, under certain conditions, majority voting can get us closer to the truth. Second, his voting method which proposed that winners of elections be determined by pairing each candidate against every other candidate and figuring out who won each of those contests.
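The pairwise logic of Condorcet’s voting method can be made concrete with a short sketch. The function below is my own illustration (the names and the ballot representation are not from any particular source): it checks each candidate against every rival in head-to-head majority contests and returns the candidate, if any, who wins them all.

```python
def condorcet_winner(candidates, ballots):
    """Return the Condorcet winner, if one exists.

    ballots: a list of rankings, each a list of candidates
    ordered from most to least preferred.
    """
    def beats(a, b):
        # a beats b head-to-head if a strict majority of ballots rank a above b
        wins = sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, rival) for rival in candidates if rival != c):
            return c
    return None  # no Condorcet winner (e.g. a voting cycle)

# Three voters ranking three candidates:
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(condorcet_winner(["A", "B", "C"], ballots))  # prints "A"
```

Note that the method can fail to produce a winner at all: with cyclical preferences (A beats B, B beats C, C beats A) the function returns `None`, which is the famous Condorcet paradox.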
Democracies are the preferred form of modern government. Democracies pay homage to the notion that we are all moral equals. This means that no one human has an intrinsic right to exercise domination or control over another. No one human has the right to impose coercive rules on others.
Work is a dominant feature of contemporary life. Most of us spend most of our time working. Or if not actually working then preparing for, recovering from, and commuting to work. Work is the focal point, something around which all else is organised. We either work to live, or live to work.
Consider your smartphone for a moment. It provides you with access to a cornucopia of information. Some of it is general, stored on publicly accessible internet sites, and capable of being called up to resolve any pub debate one might be having (how many U.S. presidents have been assassinated? or how many times have Brazil won the World Cup?). Some of it is more personal, and includes a comprehensive databank of all emails and text message conversations you have had, your calendar appointments, the number of steps you have taken on any given day, books read, films watched, calories consumed and so forth.
Back in 1973, Bernard Williams published an article about the desirability of immortality. The article was entitled “The Makropulos Case: Reflections on the Tedium of Immortality”. The article used the story of Elina Makropulos — from Janacek’s opera The Makropulos Affair — to argue that immortality would not be desirable. According to the story, Elina Makropulos is given the elixir of life by her father. The elixir allows Elina to live for three hundred years at her current biological age. After this period has elapsed, she has to choose whether to take the elixir again and live for another three hundred. She takes it once, lives her three hundred years, and then chooses to die rather than live another three hundred. Why? Because she has become bored with her existence.