Artificial Intelligence, Real Rights
Jamais Cascio
2005-01-04


There have been no significant recent breakthroughs in AI
research to make one think that R2D2 is just around the
corner, but the combination of steady advances in hardware
sophistication and new advances in cognitive science suggests
that such breakthroughs are entirely possible. As
"traditional" approaches to AI have faltered, it's quite
possible that a breakthrough will come more as an "aha!"
moment, the realization of a new paradigm, than as the culmination of a long
history of close-but-not-quite attempts. But even absent
Microsoft Conscious Self-Awareness for Windows, there are
good reasons to consider ahead of time what we will
and will not accept as "proof" of consciousness, and what
limitations there should be on the rights of self-aware
non-humans. At the very least, we should be aware of how the
idea of self-aware machines can be abused:

 



According to Wendell Wallach,
co-author of the forthcoming book Robot Morality,
corporations that own computers and robots might seek to
encourage a belief in their autonomy in order to escape
liability for their actions. "Insurance pressures might
move us in the direction of computer systems being
considered as moral agents," Wallach notes. Given the
close association between rights and responsibilities in
legal and ethical theory, such a move might also lead to
a consideration of legal personhood for computers. The
best way to push back against the pressures to treat
computers as autonomous would be to think carefully
about what moral agency for a computer would mean, how
we might be able to determine it, and the implications
of that determination for our interaction with machines.


The fact that at least parts of any putative AI software
will have been written by humans is also worth bearing in
mind. If the "ethics engine" and "morality subroutines"
ultimately come down to programming decisions, we must be
cautious about trusting the machine's statements -- just
as we have had well-founded reasons to be concerned about the
reliability of electronic voting systems. One problem is
that efforts to make machines more "sociable" in both
behavior and appearance short-circuit our logical reactions
and appeal directly to our emotions:



Chris Malcolm, at the U.K. Institute
of Informatics at the University of Edinburgh, tells the
hypothetical tale of the "indestructible robot," the
creative challenge posed by a physicist to a robot
designer. After some tinkering, the roboticist comes
back with a small, furry creature, places it on a table,
hands the physicist a hammer, and invites him to destroy
it. The robot scampers around a bit, but when the
physicist raises the hammer, the machine turns over on
its back, emits a few piteous squeals, and looks up at
its persecutor with enormous, terror-stricken eyes. The
physicist puts the hammer down. The "indestructible"
robot survives, a beneficiary of the human instinct to
protect creatures that display the "cute" features of
infancy.


But being careful about how we think about thinking
machines isn't just an issue for our own self-defense; it's
a way of thinking about human rights and ethics, too:



Even specifying why we should deny
rights to intelligent machines -- thinking carefully about
what separates the human from the nonhuman, those to
whom we grant moral and legal personhood and those to
which we do not -- will help us to understand, value, and
preserve those qualities that we deem our exclusive
patrimony. We can come to appreciate what science can
tell us and what it cannot, and how our empirical habits
of mind are challenged by our moral intuitions and
religious convictions. So the issue of A.I. rights might
allow us to probe some of the more sensitive subjects in
bioethics, for example, the legal status of the unborn
and the brain-dead, more freely than when we consider
those flesh-and-blood subjects head on. In short, it
provides a way to outflank our discomfort with some of
the thorniest challenges in bioethics.


Such considerations may have even broader implications.
It's not unreasonable to ask, for example, why we would
consider extending human rights to machines if we don't
extend them to our closest relations, the Great Apes, or to
demonstrably intelligent animals like Cetaceans. But that
question begs to be turned around -- why don't we
extend greater rights to Bonobos and Dolphins? Is it out of
a sickening fear of what that would mean about how humans
have behaved towards those creatures? Bonobos, our closest
genetic relatives, have nearly disappeared in the wild
due to hunting. Taken one way, that's
a tragic story of an animal driven to the brink of
extinction; taken another, it's genocide.


If, for reasons of logic or fear, we shy away from
extending full legal rights to self-aware machines, Soskis
offers another possibility, albeit one that leads to its
own ethical trap:



Christopher Stone suggests various
gradations of what he calls "legal considerateness" that
we could grant A.I. in the future. One possibility would
be to treat A.I. machines as valuable cultural
artifacts, to accord them landmark status, so to speak,
with stipulations about their preservation and
disassembly. Or we could take as a model the Endangered
Species Act, which protects certain animals not out of
respect for their inalienable rights, but for their
"aesthetic, ecological, historical, recreational, and
scientific value to the Nation and its people." We could
also employ a utilitarian argument for their protection,
similar to Kant's justification of certain protections
of animals and Jefferson's argument for protection of
slaves, based on the possibility that if we don't afford
that protection, individuals might learn to mistreat
humans by viewing the mistreatment of robots.

Would we really consider offering a limited protection to
conscious non-human "persons" that echoes the past treatment of
slaves? After all, if a machine's emotions and ethics and
reactions are entirely derived from software, there's no
reason why we couldn't program robots to want to be
slaves, to enjoy it, to see their enslavement as entirely
right and proper. If that sentence fills you with disgust...
ask yourself why. How would that differ from how we use
machines now?


How to ethically treat apparently self-aware machines is
not high on the list of immediate problems facing the planet
right now, but that doesn't mean it's not worthy of some
consideration. We are always better off imagining how to
handle potential problems than leaving them until they boil
over. Even if we don't get the particulars right, we will at
least have established some ground rules for asking good
questions. And, as Soskis notes, thinking about machine
ethics is a useful pathway to thinking about human ethics --
an issue which can always use further consideration.