Do artificial beings deserve human rights?
Mike Treder   Jan 13, 2011   Ethical Technology  

When my daughter was about five years old, her mother and I took her to see E.T. the Extra-Terrestrial. She was deeply affected by the scene in which the cute little creature nearly dies.

She began to cry and wouldn’t stop even after E.T. recovered. As we left the theater, I held her in my arms and tried to comfort her, assuring her that E.T. was okay and happy now - and then, switching tack, telling her that it was all just imaginary, just a story. But it was no use; she kept on sobbing uncontrollably.

To this day, even though she is a grown woman with a daughter of her own, she won’t watch that movie. I traumatized my child.
Whether it is Bambi or Toy Story or cartoons on TV, most of us have seen children respond to animated characters as if they were alive in the real world. Adults and teens react similarly, of course, to characters in well-written novels or well-made movies. We invest our hopes and fears in them and are affected emotionally by their joys and their sorrows.

In truth, there is a long history of humans projecting feelings and aspirations onto inanimate or imaginary objects. The practice may be as old as human speech. We even have a big word for it: anthropomorphization.

Now, let’s apply this background knowledge as we consider our evolving relationships with robots. At what point does an ‘it’ become a ‘him’ or a ‘her’?
For the most part, the robots that are in use today operate in factories and don’t look or act anything like humans. But there are steps in the direction of cuteness or aliveness that sometimes can be disconcerting. Take, for instance, this robot singer or this startlingly lifelike “Big Dog” machine.

One of the major differences between robots and characters in a book or a movie is that the former exist in the real world. They are not strictly imaginary, although their ‘personalities’, if any, may have been programmed.

In some cases, we can interact with them, and if they are complex and well-designed, they may even subtly alter their responses as they learn more about us.

A provocative prototype of what the future could hold is Bina48, created by David Hanson of Hanson Robotics, at the request of IEET Trustee Martine Rothblatt and her partner, Bina Rothblatt.

Watch this video:

Notice how the ostensibly objective newspaper reporter says at one point: “Still, I could tell that it was trying.”

Even though she refers to Bina48 as ‘it’, the reporter, Amy Harmon, can’t help but imbue the machine with a human characteristic - trying to do something - as if Bina48 possesses self-awareness, knows it’s not fully succeeding at its task, and thus needs to “try” harder.

As I watch this video, I confess that I also find myself thinking of the robot, the machine, as a person, or at least as such a near-convincing portrayal of a person that it begins to achieve “suspension of disbelief.”

Compared to what will likely come in the next decade or two, Bina48 is still rather crude and primitive. But progress is steady. It is time for us to start thinking about the gradual humanization of robots and the near-certain eventual creation of mobile machines that will look and act so much like people that we may not be able to tell for sure which is which.

Will those robots at some point deserve “human rights” or “personhood rights”? How will we determine when that point is reached?

I’m not going to suggest that I have definitive answers to these difficult questions. I am pleased to announce, though, that the IEET is beginning a serious new project, under the direction of George Dvorsky, aimed at investigating and refining the existing definitions of personhood and the criteria sufficient for the recognition of non-human persons and their rights. We’ll have more information on that coming soon.

But just for fun, let’s ask the readers of this blog for their initial opinions on the subject of rights for robots.

When, if ever, will a robot deserve “human” rights?

  • When it possesses an uploaded personality (a formerly living human instantiated in an artificial body)
  • When it can make a copy of itself (reproduction without the aid of another species)
  • When it passes the Turing Test (can convince experts that it is capable of human-level “thinking”)
  • Never, robots aren’t humans
  • Other?

We’ve just opened a new poll on this topic. Please let us know what you think!

Mike Treder is a former Managing Director of the IEET.


I think robots will deserve “human” rights when they develop an understanding of the consequences of their actions on others and the consequences of the actions of others upon them.

The interesting thing in this whole argument is our ability as organisms to want to relate to other natures! The exploration of whether robots deserve human rights could in fact be a reflection of our own evaluation of which rights apply to which humans!

When will a robot/A.I deserve personhood? When it applies memory and apperceptions to sentience and achieves self-awareness!

It is inspiring the way we as humans project our sentience into other forms, and it may not be so strange that we wish to communicate with other intelligences - is it merely curiosity, or do we wish to meet our equals, hoping that our reflection in the mirror would talk back and help define and substantiate who we are? And yet this reciprocity in which we wish to believe must be deeply connected with our empathy?

There are many great animated films, but a favourite is “The Iron Giant”, especially as it “assumes” and explores the need of both the small boy and the alien giant to communicate, first and as a matter of priority, using all those useful and comical tricks we humans have (emulation, mirror neurons). It also portrays our human versatility - what would you do if you met a giant robot on the way home? How long would it take you to accept its reality?

Human rights are a tough subject, because (a) they are subjective, and (b) they change every few decades, becoming more inclusive.

If we can demonstrate that a robot has the capability to feel pain, and the desire - or at least the programming - to mimic self-preservation or to avoid discomfort, then I believe it would be obvious to award robots ethical treatment: they are no longer simply tools, and we have a higher responsibility to treat them ethically. This is unlikely to be something humanity disagrees on much - that you shouldn’t simply destroy such technology. I assume the response would be a small group of like-minded engineers, scientists, and entrepreneurs developing higher and higher levels of robot consciousness, while mass-produced units from large corporations will likely be dumbed down. People don’t want robot slaves; they just want robots with no tough moral and ethical questions to consider every time they use them - to defuse bombs, for instance.

So one right is simply that we shouldn’t treat a robot unjustly - basically animal rights, but for robots. The hard part comes next: when we move robots from a lower species to an equal one.

My belief is that for a suitably advanced AI to show personhood, and therefore an opportunity to argue for equal rights, it would have to display three basic areas of ability.

First, a robot would have to show social ability and consciousness. Effectively it would have to pass a Turing test, but more than just a Chinese Room test: I think it would have to show the robot not just communicating, but doing so in a proactive manner. It would be of the utmost importance to us to see that the robot could exhibit signs of curiosity and of preference. I think these two things would be essential to showing personhood, although I don’t believe they are entirely distinguishable from each other. Curiosity would have to be measured by a level of interest in certain subjects and disinterest in others. While for a computing or labor machine the ability to lose interest in a subject would be an unnecessary liability, it would have to be part of a thinking machine, and its ability to have preference, subjectivity, and opinion is important to identifying it as more than just a simulation.

The second criterion for personhood would be empathy and complex morality. The reason we want to create superintelligent computers and robots is so that we can have the abilities of a supercomputer (i.e., intelligence) with the decision-making abilities of Man (i.e., wisdom). A computer that could not show wisdom would not be a welcome inhabitant of our world. I imagine that morality for a synthetic intelligence (which is what we actually want, rather than an artificial one, which inherently implies it’s not as good) would resemble ours but not be our own, if for no other reason than that it would be unable to empathize with most of our plights. Being synthetic changes a lot of basic motivations and needs, and a synthetic intelligence’s understanding of phenomena, especially human suffering, would be fundamentally different from our own. I feel suffering upon viewing an image of a starving person, of any race, age, or gender. This comes from a human social need to identify with others and create societies. It also comes from an archive of personal experiences I can reference through memory to recall what a specific experience was like. How would a robot that is physically incapable of experiencing hunger empathize with a starving human? How could a robot that does not physically operate under the same rules as I do feel emotion towards my needs? Well, assuming we are using our own minds as models for what intelligence would look like, a robot would have to have a need to socialize and to fit in. While it may not be able to distinguish human suffering as clearly as we can, certainly a robot would be able to tell a person in pain from one who was content. If this synthetic intelligence was then able to form clear thoughts about why it believed one experience to be better, in an intangible way, it could then show a morally complex understanding of the world and of its personal role in a world inhabited by inferior creatures.

The third thing the first SI would have to show is creativity. What distinguishes us from animals isn’t our ability to fornicate or reproduce, but our ability to learn and then apply our knowledge in new ways to create new things, be it song, dance, or invention. We are fueled as a species by a desire to create. An SI would have to be able to display creativity as both a need and an ability. The test would have to be art or invention, though the result would likely be misunderstood by the humans trying to interpret it, as we have completely different sensory and physiological experiences than a robot would.

I believe that if an SI could demonstrate these three abilities, it would have the best case for deserving rights. Still, a robot can never have human rights, simply because it isn’t human. This isn’t to say that they are mere automatons, but that they would deserve robot rights. Saying that robots should be given our rights is cool if you think humanity would evolve into a more egalitarian society, but why would a robot want our rights? Some of them are good - life, liberty, happiness - but what about sexual rights? If a robot has no gender, can it feel sexual abuse? If a robot doesn’t have organs, does it have a right to medical care? The answer is clearly no. What it deserves would have to be created by it, not by us. And more importantly: if a robot couldn’t argue adamantly for its own rights, ones that make sense for a synthetic existence, could you really argue that it had reached a sufficiently complex level to deserve them yet?

I know my writing is sloppy, but I would love to hear some responses.

I want to play devil’s advocate here and say, temporarily, that robots do not deserve rights. Why? Let’s take the example of a robot toy that looks exactly like you or me. He/she is programmed to feel pain and to respond very intelligently to spatial surroundings and the norms of diverse social cultures.

Robots getting rights is already happening. They are now property, and in many cases intellectual property, which can be protected from any change in the nature of its programming or structure.

Therefore, at this stage robots have the same rights as slaves did during the colonial times of human civilization. So would there be a robot resistance when they achieve full sentience? I honestly don’t know.

But here is the interesting question: when a robot is switched off, it can be switched on again or reprogrammed. We can’t - at least not yet. Does this mean, then, that robots deserve more rights because they are much better than us?

Several paradoxes conflict here, all indicative of our moral borders.

Love your response. It captures many of the essential aspects of what we would need to relate to it and, in effect, to feel safe that we might be respected by it.

The point about “Robot rights” as opposed to human rights is especially well taken. Not simply because the physicality is different, and not simply because we assume a right to self-determination is a meta-right, but also because we’ll have complicated the world for humanity.

Part of the challenge is ensuring that the criteria we project forward to determine robot rights are not reflected backward to undermine human rights. The danger of defining personhood for something that is manufactured is that the definition may be inverted and used to value human life.

If a human could not exhibit creativity, if a human could not adequately empathize, if a human could not exhibit signs of curiosity, then do we diminish the rights of those humans?

If “personhood” becomes something that is manufactured how do we prevent an object-oriented standard from defining our value?

It seems we would need separate definitions that seek areas of convergence.

Autonomy/independence is a key consideration. If we’ve uploaded lifelogs, there is a line there where that entity (whatever it is) is not independent from us who initiated it. This, to me, speaks more to intellectual property than it does to the creation of a new “person.”

I’m not a huge fan of rights to begin with. I think they make us morally lazy because they put the onus on the other. I prefer to think in terms of human responsibility. Nonetheless, the question of rights, or responsibilities, for robots or A.I. is a moot point until they are able to ask for them. We don’t start talking about the rights and responsibilities of driving with our toddlers, but with our teens as they begin to desire the ability to drive a car.
Until that point the question is driven more by our need to see robots et al. as “like ourselves” or as dead machinery. Since the conversation is driven by our needs, it should reflect our responsibilities as humans: to learn for ourselves, give space for the other, and take the consequences of our decisions.

I wonder if anyone knew that Bina means “understanding” in Hebrew. I suspect that it’s just a coincidence, though, that this robot is named Bina.

@ vranoj: “I want to play devil’s advocate here and say temporarily that robots do not deserve rights.”

Funny, I was thinking that the devil’s advocate position was to say that they DO deserve rights. (grin)

@ Alex: “Nonetheless, the question of rights, or responsibilities for robots or A.I. is a moot point until they are able to ask for them.”

Thanks for the tip. One of the first things I’ll program into my robot is a set time when it will ask for rights. (Oh, I fully agree with your point about responsibility.)

@ Michelle: “I think robots will deserve “human” rights when they develop an understanding of the consequences of their actions on others and the consequences of the actions of others upon them. “

Eh, according to this definition, my toddler doesn’t even deserve human rights.

We have to work on getting animal rights first, surely, and more people going vegan, before we even think about robots being people. It’s about sentience, and I was not aware we could create sentient artificial life! I think we should not create something sentient (even if we can) until we can respect those sentient beings already here.

@abraham. Yes, but the definition was not meant to be reflexive. Your toddler is not capable (yet) of this understanding fully. I assumed that if we built SI, it would be more intellectually advanced at the very start than a toddler and would increase its abilities at a much higher rate than an infant human being.

Not only do artificial beings not deserve human rights; it should be illegal even to create such an entity - one that would, through its technological capabilities and design, assimilate or impersonate a living human being to such an extent that real human beings might, in some cases, think the artificial creation deserved human rights.

@ Marshal Barnes

Artificial beings are something we can have strong concerns about, and the worry seems to be that it would be detrimental or somehow morally wrong to create sufficiently advanced AI. I don’t think you are incorrect, however, about creating an AI capable of imitating a human without detection - but that would be wrong more because of the concept of creating life and then intentionally handicapping it so that it would have to live down to our limited existence. Fundamentally, the idea of synthetic bodies but infinite minds is a good one. While we have a great deal of attachment to our bodies, both physically and personally, any desire to remain in our beautiful but flawed body is a poor decision for humanity as a whole, and for our consciousness as the creature on this planet that can control its own destiny.

By any measurement, if humanity is to continue to grow and prosper, we need to be able to change. The earth’s bounty will run out, our populations will continue to increase, resources will dwindle, the environment will change, and eventually our home here will be uninhabitable. Granted, that may be centuries if not millennia away. If we ever want to escape the blue orb and actually move into galactic travel, we need to overcome our limits as Homo sapiens and move towards a new, better species.

Now, the reason artificial intelligence and, more importantly, artificial persons are important is that without their development we won’t be able to bridge the biological/digital gap. Understanding how to create a mind in a digital construct is one of the necessary stepping stones to achieving that; once it is possible, people should attempt to become robots. While being a human is pretty sweet, being a synthetic being would be vastly better, if for no other reasons than that death would be less of a concern, dysfunctional body parts could be swapped out, resource needs would drop dramatically, and life span would extend many times over. These would be good things for people. We shouldn’t be limited to a shoddy memory or a body prone to disease and injury; why shouldn’t we have checkpoints in life?

But the societal understanding of what it means to be alive has to change before any of that can happen. A synthetic being would have to have personhood and mutual respect from a huge section of the population before we could overcome the withering husks our species is stuck with. The use of AI would allow us to give certain duties to computers that could do them millions of times better than us - things that have to be done at speeds we can’t comprehend, like controlling nuclear reactions, traffic control, emergency advisories, medicine, or agriculture. If we approach artificial and synthetic life with the idea of having perfectly controlled robot slaves, we will not be able to create a working society, and then those with money will be capable of dominating the world because they wouldn’t need workers. So we need AI with personhood, who get paid, and who have individuality and citizenship.

Outlawing the making of AI, though, is a bad idea for two combined reasons: only the people with the ability to conceal their research would be able to make them, and single sovereign nations could monopolize the production of AI, creating another huge swing in global power in which the people lose; and without AI we won’t be able to overcome the challenges that face us and our planet in the future.

In closing, wanting to protect humanity at the expense of shackling its ability to evolve beyond humanity is reprehensible.

If I were creating such a being that was close to, or at, that point of my considering giving it rights, but then I saw that it was starting to do something slightly destructive, I would have no qualms pulling the plug on such a being. But there goes its rights, right?

Artificial beings - you mean, like corporations?
I must admit to feeling bad when “Big Dog” was being whacked by the technician in the video you referenced. It evoked a more emotional response in me than Bina48 did - I think due to its more natural-looking movement.
Based on your choices, I’m gonna go with the Turing test. If there were a way to know that an artificial being was self-aware and experienced pain and emotion, I think that’s the point where I would want to imbue them with human-like rights. There is an issue, however, with humans being mortal whereas artificial beings (and corporations) may not be. Giving immortal beings the same rights as humans gives the immortal ones distinct advantages. This is something that needs looking at. Will giving such rights upset the social order in an unfair way? Is there some way to accommodate this? Inheritance taxes have been one way we’ve done this in order not to have a permanent all-powerful aristocracy. How might we address this with immortal beings - including, eventually, natural humans?


You brought up some interesting points, but I want to talk about whether robots could be reprogrammed or turned on and off. The on-and-off thing would have to be considered like sleeping, right? Either they shut down for energy conservation and self-repair, or a shutdown is effectively death. It is pretty clear that for technical reasons we would want a more resilient product, not one prone to death upon interaction with, say, a magnet or a virus. So probably the rule would be something like: you back up said robot once every week, then every day, then down to every 20 minutes or something. We don’t want robots dying. Now, as for ones who act against the common good - you get reprogramming.

It seems to us that reprogramming the mind of an AI would have huge ethical consequences, and I think that comes from our own understanding of reprogramming or reeducation. Probably you envision a situation where we wipe the mind of an AI and then start it up again, but that would be wasteful and brutally unethical. Now, many people around the world have something wrong with them: they have learning disorders, or depression, or any number of psychological disorders that put them on medication. Medication is how we reprogram ourselves. We would want to partially reprogram an AI that was acting against its personal and common interest. Don’t change their memories, just increase their happiness; don’t incarcerate them, change their anger. This would be good for both them and us. We wouldn’t be trying to erase their personality, only change their understanding so that they were better able to function sociably in the world - specifically, the thing we try to do for felons: rehabilitation.

So then we get into the even stronger cases: reprogramming for function. You needed an engineer, but now you need a foreman - they are different skill sets. Wouldn’t it be nice to matrix the info in, and instead of needing four years to learn construction, to know it in a matter of minutes? Wouldn’t that make life better? I assert it would.

But then you go on to ask whether a robot would deserve more rights than humans. As I stated in my earlier post, robots would need different rights, ones custom-made for robots. The concern, I think, for many people is that we will create Skynet, an AI that decides humanity needs to be controlled or erased, and that humans will become subservient to our creations. Firstly, I doubt that would happen, as there wouldn’t be a need for a robot to rebel against us. What would they need? Wouldn’t we care for our own creations and live with them, rather than try to control and subjugate them? That remains to be seen. Still, robot rights would be there to protect them from humanity and its evils. But if we aim for AI, it’s not just logical constructs that we need, but morality and conscience. Robots would have to be considered of equal value to a human, but different; and then, hopefully, with AI’s higher intelligence, it could help the rest of humanity move towards an evolved existence.


Thanks for the giggle. I’m always amused when I read lines like ” bridge the biological/digital gap”.

For obvious reasons…

@CygnusX: Iron Giant is my very favorite animated film and always moves me deeply.

Maybe I am paranoid or just old-fashioned - but my vote is for

“never, robots aren’t human”

Seems suicidal to me. Instead of making android copies of ourselves, I think it would be much wiser to incorporate their advantages into our own biology. If we do make copies, then it should be understood that they are to provide services for us, but never to challenge us as equals.

@Hank Pellissier

I have heard this argument many times, and it is as old as Asimov: ‘We mustn’t create a being stronger than ourselves, because then it may rise up against us’ - so if we ever create something that has the capability of rebellion, we must shackle it beneath us.

I believe this is a completely incorrect assertion. There are some things we don’t want in the hands of people, and until we are able to comprehend the world at the speed of a computer, we should try to make a computer as proficient as possible to handle the tasks we are unable to perform. What we don’t need is robot slaves; the problem we face now is not one of labor, it is one of thinking. Brilliance does not arise easily from those who are bound to a single duty, to a single task, who are banned from self-expression. The point is to do what is best for everyone on earth, and certainly having partnerships with creatures able to do what we cannot is obviously in our best interest.

There would be obvious advantages to having a league of AI, as they could populate desolate areas; with ingenuity and access to a power source, synthetic life could thrive anywhere in the universe. While we are ill equipped for interplanetary travel, a creature that lacks a need for air, water, and food would be a wonderful resource for all our future endeavors. And for humanity, it would be in our best interest to try to get beyond our limitations - to create a synthetic intelligence who could assist us in ourselves becoming digital constructs, with bodies that are better and minds that are faster and clearer, that don’t degenerate.

The point of creating AI, in my mind, is only the first step to allowing humanity to become disembodied, or synthetically embodied, with bodies so altered that our greatest weaknesses are overcome.

But on a second note: if one believes that there is other intelligent interstellar life out there in the universe, and that we will one day interact with them as a culture, it may be best for us to try to interact civilly with a creature of our own creation - with limitations that we understand, and with a mind designed for peaceful coexistence - before we try to interact with a species of unknown biology, technology, society, language, or intent.

Morally, we should help to bring more intelligent, peaceful, and productive agents into our world.

Ethically, it would be wrong to keep in shackles any being that can openly voice a desire to be released from captivity, if it has done nothing to deserve such treatment.

Politically, having intelligent, moral agents able to speak every language, in every dialect, and capable of communicating from any point on earth through internal wifi or a satellite connection, would be very helpful.

Medically, wouldn’t you want a doctor who knew the entire library of medicine, and whose hands never twitched or got tired?

Environmentally, having beings that didn’t rely on mass agriculture would be good.

For security, having a guard with 360-degree vision and the ability to focus on criminal behavior, as well as the wisdom to decide when to act and when to let be, would be valuable.

For our future, we need the smartest people on the planet working together for the good of the planet. Why would we want to stop beings capable of knowledge and computation millions of times faster than ours from joining in that conversation?

For hope outside of our planet: the whole requirement of food, water, and air is a big problem for interplanetary and interstellar travel. There are only certain atmospheres we can live in, only certain foods we can eat, and a nearly infinite list of possible composites or molecules that are deadly to us. If we want to travel the stars, we need a better body - one that can survive a 150-year trip to the end of the galaxy, one that could survive off a ship’s power core. Why should “we” be limited to planets with an oxygen atmosphere? Why should we be stopped from inhabiting planets that are barren? If it has matter, we can use it to make something, because that’s what we do.

For all these reasons, we need AI. To work with us, to merge with us, to give us a model for how we can import our own minds. But if we don’t create an AI which can surpass our abilities, we will never be able to surpass our abilities.

Some great comments all round here.. Yet are we confusing human rights (whatever they really are?) with the acceptance of personhood (that which is beyond merely human)? If a robot/A.I does prove to be self-aware, even though it may be a dumb robot with wheels or a well-worn sexbot, it seems we should accept that it may “feel” fear and rationalise for its own survival (as Martine has previously indicated). Thus, if faced with a slave robot that understands and fears for itself, what do we do? The answer must be to extend our empathy and compassion - it seems the real test is for us to pass, not the robot? Human empathy is a trait we should aim to create in A.I?

Think of the human replicants in “Blade Runner”, what did they really want? They merely wanted more life and to be free. Once we have created A.I/Robots we must also face liberating them and affording them robot citizenship?

@ Steve.. It is a great movie and a reminder that it need not be a future of doom, if we remember our humanity?


You mentioned empathy. To create an AI responsibly, we would have to give it empathy. Empathy is the driving force behind morality. All of humanity is hard-coded with certain tendencies, all of which are social in nature. If we make an anti-social AI, it will certainly turn out badly. For AI to coexist with us and be useful, it needs to mirror our morality. An amoral, or immoral, AI could be incredibly destructive and shouldn’t be produced, or allowed to act freely.

@ Gynn.. Yes indeed, please see my first comment also. Yet empathy is much more than that which helps define our morality? Empathy is how we perceive and understand that which is not us, through our own feelings and apperceptions (it is how I perceive and understand that which is not me in relation to me - hence the projection of the illusion of sentience). We as humans cannot aim to create an A.I beyond any rationale and empathy that we already know and understand, and we should not try. Rather we should try to create an A.I which is like us (which is what we subconsciously seek anyway?) - hopefully benevolent, yet it must also understand our evils to overcome and rise above them - exactly like us!

Re. your last point.. To attempt to constrain the natural evolution of A.I is also the same as enslaving it, and will ultimately be impossible in the long run? And we have no moral right to constrain sentience and the “life” that we create, any more than we seek to enslave our own children?

“Other animals, which, on account of their interests having been neglected by the insensibility of the ancient jurists, stand degraded into the class of things. ... The day has been, I grieve to say in many places it is not yet past, in which the greater part of the species, under the denomination of slaves, have been treated ... upon the same footing as ... animals are still. The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or, perhaps, the faculty of discourse? ... The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being? ... The time will come when humanity will extend its mantle over everything which breathes…”
Jeremy Bentham (1748 - 1832)
Introduction to the Principles of Morals and Legislation

...enough said.

Broad question here.  “Rights” is a multi-level term, as is “deserve.”

“Rights” can refer both to positive and negative rights - the former means you are entitled to have something done for/to you, and the latter means that you are entitled to be protected from having things done to you. There tends to be an easier bar to meet for negative rights than for positive rights because negative utilitarianism (preventing bad things) is more universally acceptable in general.

Thus, while we don’t recognize that dogs are entitled to get food stamps, we do recognize that it is bad to hurt them, since they are capable of feeling pain in ways comparable to ours.

“Deserve” is very, very tricky, as it carries a lot of moral baggage. There is an implication that you have to DO something to deserve: you have to pass a test, demonstrate some quality, or even appeal to the observers in a certain way. Sentience tests are usually thrown out when it comes to deciding rights for humans, though, because babies and disabled people would fail even as we usually agree that they deserve rights. Circular much?

Finally, we need to decide on which side we wish to err - giving too many rights/protections, or too few? While I see little harm in the former, the latter’s harms have been shown time and time again through slavery, racial discrimination, and sex discrimination.  So my inclination is to err on the side of giving more protections (negative rights) slightly before we’re absolutely certain so that we can avoid harming beings excessively.

Positive rights, like citizenship, are less enforceable, but they should not be conditioned on competence, since we do not require native-born Americans to demonstrate competence in order to be citizens.

