John Danaher Topics




Algocracy and other Problems with Big Data (Series Index)

by John Danaher

What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global “neural network”: a constantly updated, massively interconnected control system for the world. Imagine what it will be like when every “thing” in your home, place of work, school, city, state and country is connected to a smart device.



Finitism and the Beginning of the Universe

by John Danaher

The paper introduces a novel critique of the Kalam cosmological argument; or, more precisely, of a specific sub-argument offered in its support. As you may be aware, the Kalam argument makes three key claims: (i) that the universe must have begun to exist; (ii) that anything that begins to exist must have a cause of its existence; and (iii) that in the case of the universe, the cause must be God.



Sousveillance and Surveillance: What kind of future do we want?

by John Danaher

Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not.



Should we abolish work?

by John Danaher

I seem to work a lot. At least, I think I work a lot. Like many in the modern world, I find it pretty hard to tell the difference between work and the rest of my life. Apart from when I’m sleeping, I’m usually reading, writing or thinking (or doing some combination of the three). And since that is essentially what I get paid to do, it is difficult to distinguish between work and leisure. Of course, reading, writing and thinking are features of many jobs. The difference is that, as an academic, I have the luxury of deciding what I should be reading, writing and thinking about.



Dawkins and the “We are going to die” Argument

by John Danaher

Consider the following passage from Richard Dawkins’s book Unweaving the Rainbow: “We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people…”



Can blogging be academically valuable? 7 reasons for thinking it might be

by John Danaher

I have been blogging for nearly five years (hard to believe). In that time, I’ve written over 650 posts on a wide variety of topics: religion, metaethics, applied ethics, philosophy of mind, philosophy of law, technology, epistemology, philosophy of science and so on.



Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (2): Pigliucci’s Pessimism

by John Danaher

This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.



Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (1): Chalmers’s Optimism

by John Danaher

The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function and along with them so too will our minds. This will result in our deaths. Little wonder then that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced, substrate has proved so attractive to futurists and transhumanists.



Are hierarchical theories of freedom and responsibility plausible?

by John Danaher

In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems. The obvious one being that it is inconsistent with causal determinism.



Steven Pinker’s Guide to Classic Style

by John Danaher

I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.



Do Cognitive Enhancing Drugs Actually Work?

by John Danaher

I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive enhancing drugs actually work?



Why the Price of Theism is Normative Skepticism

by John Danaher

Sharon Street defends a form of constructivist antirealism, which I find quite attractive. I was thus pleasantly surprised to find that she had also recently written a paper dealing with one of my favourite topics in the philosophy of religion: the problem of evil and its moral implications. It’s a very good paper too, one that I’m sure will provide plenty of fodder for discussion.



Karlsen on God and the Benefits of Existence

by John Danaher

The paper tries to fuse traditional concerns about the problem of evil with recent work in population ethics. The result is an interesting, and somewhat novel, atheological argument. As is the case with every journal club, I will try to kick start the discussion by providing an overview of the paper’s main arguments, along with some questions you might like to ponder about its effectiveness.



Are we morally obliged to eat some meat? (Part 1 and 2)

by John Danaher

I’ve recently been looking into the ethics of vegetarianism, partly because I’m not a vegetarian myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.



Bostrom on Superintelligence (6): Motivation Selection Methods

by John Danaher

This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.



Are we heading for technological unemployment? An Argument

by John Danaher

We’re all familiar with the headlines by now: “Robots are going to steal our jobs”, “Automation will lead to joblessness”, and “AI will replace human labour”. It seems like more and more people are concerned about the possible impact of advanced technology on employment patterns. Last month, Lawrence Summers worried about it in the Wall Street Journal but thought maybe the government could solve the problem. Soon after, Vivek Wadhwa worried about it in the Washington Post, arguing that there was nothing the government could do. Over on the New York Times, Paul Krugman has been worrying about it for years.



An Ethical Framework for the Use of Enhancement Drugs

by John Danaher

Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?



Bostrom on Superintelligence (5): Limiting an AI’s Capabilities

by John Danaher

This is the fifth part of my series on Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. So far in the series, we’ve covered why Bostrom thinks superintelligent AIs might pose an existential risk to human beings. We’ve done this by looking at some of his key claims about the nature of artificial intelligence (the orthogonality thesis and the instrumental convergence thesis); and at the structure of his existential risk argument.



Bostrom on Superintelligence (4): Malignant Failure Modes

by John Danaher

This is the fourth post of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I started my discussion of Bostrom’s argument for an AI doomsday scenario. Today, I continue this discussion by looking at another criticism of that argument, along with Bostrom’s response.



Bostrom on Superintelligence (3): Doom and the Treacherous Turn

by John Danaher

In the first two entries, I looked at some of Bostrom’s conceptual claims about the nature of agency, and the possibility of superintelligent agents pursuing goals that may be inimical to human interests. I now move on to see how these conceptual claims feed into Bostrom’s case for an AI doomsday scenario.



Bostrom on Superintelligence (2): The Instrumental Convergence Thesis

by John Danaher

This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.



Bostrom on Superintelligence (1): The Orthogonality Thesis

by John Danaher

In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?



Should we have a right not to work?

by John Danaher

Voltaire once said that “work saves a man from three great evils: boredom, vice and need.” Many people endorse this sentiment. Indeed, the ability to seek and secure paid employment is often viewed as an essential part of a well-lived life. Those who do not work are reminded of the fact. They are said to be missing out on a valuable and fulfilling human experience. The sentiment is so pervasive that some of the foundational documents of international human rights law — including the UN Declaration of Human Rights (UDHR Art. 23) and the International Covenant on Economic, Social and Cultural Rights (ICESCR Art. 6) — recognise and enshrine the “right to work”.



Feminism and the Basic Income (Part Two)

by John Danaher

This is the second part of my series on feminism and the basic income. In part one, I looked at the possible effects of an unconditional basic income (UBI) on women. I also looked at a variety of feminist arguments for and against the UBI. The arguments focused on the impact of the UBI on economic independence, freedom of choice, the value of unpaid work, and women’s labour market participation.



Feminism and the Basic Income (Part One)

by John Danaher

The introduction of an unconditional basic income (UBI) is often touted as a positive step in terms of freedom, well-being and social justice. That’s certainly the view of people like Philippe Van Parijs and Karl Widerquist, both of whose arguments for the UBI I covered in my two most recent posts. But could there be other less progressive effects arising from its introduction?



Widerquist on Freedom and the Basic Income

by John Danaher

This post is part of an ongoing series I’m doing on the unconditional basic income (UBI). The UBI is an income grant payable to a defined group of people (e.g. citizens, or adults, or everyone) within a defined geo-political space. The income grant could be set at various levels, with most proponents thinking it should be at or above subsistence level, or at least at the maximum that is affordable in a given society. In my most recent post, I looked at Van Parijs’s famous defence of the UBI. Today, I look at Widerquist’s critique of Van Parijs, as well as his own preferred justification for the UBI.



Parasitic Surfers and the Unconditional Basic Income: A Debate

by John Danaher

I want to write a few posts about the basic income over the next couple of months. This is part of an ongoing interest I have in the future of work and solutions to the problem of technological unemployment. I’ll start by looking at a debate between Philippe van Parijs and Elizabeth Anderson about the justice of an unconditional basic income (UBI).



Radcliffe-Richards on Sexual Inequality and Justice (Part Two)

by John Danaher

Should we worry that only X% of CEOs, or politicians or philosophers (or whatever) are women? Is there something unjust or morally defective about a society with low percentages of women occupying these kinds of roles? That’s what we’re looking at in this series of posts, based on Janet Radcliffe-Richards’s (RR’s) paper “Only X%: the Problem of Sex Inequality”.



Radcliffe-Richards on Sexual Inequality and Justice (Part One)

by John Danaher

Let’s start with a thought experiment. Suppose that in a given population 50% of people have blue eyes and 50% have brown eyes. Suppose further that there is no evidence to suggest that eye colour has any effect on cognitive ability; indeed, suppose that everything we know suggests that cognitive ability is equally distributed among blue and brown-eyed people. Now imagine that in this population 80% of all senior academics and professors are blue-eyed. What conclusions should we draw about the justice of this society?
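The intuition behind this thought experiment can be made quantitative: if eye colour really had no bearing on ability, a large skew among senior posts would be astronomically unlikely under a fair selection process. A minimal sketch (the figure of 100 senior posts is an assumption for illustration; the thought experiment itself gives only percentages):

```python
from math import comb

# Suppose 100 senior academic posts are filled, and each appointment is
# equally likely to go to a blue-eyed or brown-eyed candidate (p = 0.5),
# as the equal-ability assumption implies. What is the chance that 80 or
# more of the 100 posts end up held by blue-eyed people?
n = 100
p_tail = sum(comb(n, k) for k in range(80, n + 1)) / 2**n
print(p_tail)  # astronomically small (well below one in a hundred million)
```

On these assumptions, observing an 80/20 split is overwhelming evidence that the selection process is not treating the two groups alike, which is exactly the inference the thought experiment invites.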



The Ethics of Benign Carnivorism (Part Two)

by John Danaher

This is the second part in my series on the ethics of benign carnivorism. The series is working off Jeff McMahan’s article “Eating animals the nice way”. Benign carnivorism (BC) is the view that it is ethically permissible to eat farmed meat, so long as the animals being reared have lived good lives (that they otherwise would not have lived) and have been killed painlessly.

