Proteus, meet Leviathan: just where are the moral limits?
Russell Blackford
2005-05-01






We face the prospect of technological
modifications to our own physical and
cognitive capabilities, to our evolved
human psychology, or to the structure
of the human body.
kinds of persons: AIs, cyborgs, enhanced
humans, or uplifted non-human animals.
All this raises questions about what
moral limits we should recognise and
what legal constraints we should impose.




Much of the discussion is being driven
by what seems like an intellectually
bankrupt sort of natural law theory that
grounds morality in our "given" human
nature. That nature is portrayed as
possessing an inner wisdom that derives
either from a supernatural source or
from some kind of teleological
fine-tuning of the natural realm. The
arguments seem to be
underpinned by a theological or
quasi-theological worldview.



While this approach should be rejected,
does it follow that there is no
connection at all between human nature
and these issues? Put that way, the
claim seems counterintuitive. In the
end, is there more to be said than that
human enhancement should be uncontrolled
by any moral consensus, and left
unregulated? Or than that it should be
regulated solely in the interests of
safety, efficiency, access, and equity?



Proteus and Leviathan


I am not in favour of attempting to
create an extensive moral consensus in
modern pluralistic societies.
Individuals vary greatly in their
desires, ideals, and values, and the
shared, core morality should
accommodate this variety if we are to
flourish in diverse ways with a degree
of harmony and toleration (one value, I
assume, on which almost all of us could
agree). In
particular, pluralistic societies need
to accept the widespread existence of
partialist, perfectionist, and
competitive values. It is even more
important that the law not be used to
stigmatise and persecute individuals
whose beliefs and values may lie
outside the mainstream, but who are
psychologically disposed, and
consciously committed, to being good
neighbours and citizens.




This suggests that innovations such as
human enhancement should be tolerated,
even by those with different ideals, and
should receive minimal regulation by the
law - minimal in at least one sense.
Quite detailed regulation might be
required to ensure (for example) that
safety standards are met and that ways
are found to expand people's access to
some enhancements. However, while the
regulation may need to be extensive in
its formulation and its procedures, it
should impose only reasonable
restrictions on individual choice. It
should not proceed on the assumption
that human enhancement is a morally
wrong act, and that morality should be
enforced in this area, or on the
assumption that human enhancement is,
like murder, rape, theft, or fraud,
intrinsically anti-social. Current
statutes that contain criminal
provisions analogous to those for such
crimes seem to be wrong in principle.



However, there might be some limits.
When we contemplate the prospect of a
shapeshifting, protean future for our
species, we should also keep in mind the
need to sustain the social contract.



Proteus, meet Leviathan. Leviathan, meet
Proteus.




No skyhooks


I have a "no-skyhooks" view of
morality. On this view, morality is a
human invention - something we need in
order to carry on as social animals -
and this casts doubt on any
transhumanist claims that our humanity
is of no significance at all.



Traditional social contract theories see
a justified morality as consisting of
those rules which we would agree to as
rational, self-interested beings who
desire to obtain the benefits of social
living. What seems to be wrong with this
is that it cannot account for our
attitude of at least some (moral?)
concern for non-human animals - not to
mention our concern for those human
beings (e.g. the very young, the very
sick, the intellectually handicapped)
who might have little to offer, whether
in the way of social productivity or in
the way of refraining from using any
threat advantage.



A more plausible reconstruction of
contractarian morality is to see it as
the invention of creatures who already
have important evolved capacities: for
direct empathy with each other (even
young babies respond empathically to the
cries of other babies); for imaginative
sympathy with any being that can be
imagined to be suffering; and for
entering into what we perceive as fair
bargaining behaviour. There may be
others. It is difficult to imagine how
society, and hence the social contract,
would be possible if we did not possess
these capacities as part of our evolved
human nature. They not only make the
social contract possible; they also
make possible direct appeals to our
sympathies on behalf of those who fall
outside it.




I believe that I have here an
explanation (and even a justification)
for our double standard toward human
beings and other animals: avoid cruelty
to them; but ascribe extensive
rights to us. My approach also
coheres with our commonsense moral
acceptance of economic and social
competition (against those philosophers
who believe we should be impartial
toward everyone, including ourselves
and our loved ones), and much else that
most
modern ethical theories cannot easily
handle.



Implications for emerging
technologies


One implication is that we should not
risk creating intelligent beings who are
likely to be looked upon with a lesser
degree of empathy than we feel for each
other. The presence of such beings in
our societies could be disastrous. For
that reason, I think there is a strong
case against creating conscious AIs, and
perhaps also against uplifting non-human
animals. We might be capable of
imaginative sympathy with their
interests, but we would not have the
bonds of direct empathy that we have
with creatures whose repertoire of
facial expressions we have evolved to be
able to "read". We should also avoid
creating intelligent beings who will not
be able to empathise directly with us.
This imposes some limit on how
far we can go in devising "new humans"
with the capabilities that amount to
personhood, but with quite different
psychologies from our own.



No doubt our ability to sympathise with
the suffering of non-human animals is
considerable, as would be our ability to
sympathise with the various interests of
new kinds of persons whom we might
create. However, we have good reason not
to create intelligent beings who would
be handicapped in their dealings with
us, such as by extreme deviations in
appearance, or by lacking our repertoire
of expressions. Nor should we create
intelligent beings who might be
psychologically able to act towards us
in something like the manner of
sociopaths.



I believe we also have good reason not
to allow too large a capability gap to
emerge between enhanced and unenhanced
strata of our societies - this could
also start to put strains on the social
contract and our ability to continue to
gain the benefits of social living.




The above analysis gives some comfort to
those who support George Annas's view
that human enhancement could lead to
genocidal conflicts. In the extreme,
that might be possible, but it would
seem to be a high-risk scenario only
with widespread use of unwise
enhancements, or the creation of persons
whom we would not relate to with
empathy. My analysis provides a good
reason not to create Frankenstein's
monster, if we could, but it allows a
great deal that falls short of this.



In particular, it gives no reason to ban
human reproductive cloning outright (as
opposed to regulating it for safety
reasons): reproductive cloning would
have no tendency to create intelligent
beings who would lack such things as the
normal human expressive repertoire (or
normal human psychological responses to
it). Cloning would not even put us on a
slippery slope towards the creation of
such beings, since we can establish a
principle forbidding their creation.
Innovations such as therapeutic cloning
and gene therapy are even less likely to
lead to serious threats to our ability
to function together and obtain the
benefits of social living.



Conclusion: a new moral paradigm
for emerging technologies


I offer this analysis as one new
paradigm for considering which uses of
emerging technologies are tolerable (or
desirable) and which are not. I think
it's time to try on some new moral
paradigms.




 



Russell Blackford
is an Australian writer,
literary and cultural critic, and student of
philosophy and bioethics. He has a Master of
Bioethics degree from the School of Philosophy
and Bioethics, Monash University, where he is
now a graduate student enrolled in a philosophy
PhD program.