Live-blogging from the Transforming Humanity Conference: Emerging Tech, Hybrid Mice and Smart Drugs
J. Hughes
2010-12-04 00:00:00



Wendell Wallach, co-author of the book
Moral Machines, is speaking on Navigating the Future: Managing the Combinatorial Impact of Emerging Technologies. Wendell reviewed the array of emerging technologies, from genetics to nanotechnology, arguing that their convergence and combinatorial synergies will have the most unpredictable and dramatic effects. While he thinks many Luddite anxieties are overblown and insufficient to merit bans on technological innovation, he also doesn't think the regulatory apparatus is currently adequate to deal with accelerating innovation. We need a new set of anticipatory democratic agencies and bodies to monitor and manage emerging technology policy. George took better notes than I did:
How will we navigate the promise and perils of emerging technologies that enhance human capacities?

Less than 200 years ago our ancestors were provincial, superstitious, unsanitary, unscientific, and filled with racial, sexual and class prejudices. So much has clearly changed since then.

Germ and sanitation revolutions: invisible agents transmit disease; washing hands; waste treatment and water purification; hygiene; life expectancy: 1850 - men 38.3, women 40.5; 2007 - 78 years. Basically a doubling in life expectancy.

Tech revolution: regenerative medicine, genetic engineering, synbio and AL, nano, neuroprosthetics, neuropharma, data mining, AI, convergent technologies.

Convergent technologies: the synergistic (and often unpredictable) implications of combining them.

Are we inventing the human species as we have known it out of existence?

Existing policy mechanisms: Laws and regulations, Professional codes of ethics, Research ethics, lab practices and procedures, etc.

Collectively, this has been a robust series of protections. Grounds for criticism: underfunded oversight, hampered productivity, piecemeal coverage.

Time: The pace of scientific discovery is the central issue for determining the adequacy of existing oversight mechanisms: exponential growth; a more skeptical scholarly community; unfulfilled predictions; complexity thwarts easy progress; tremendous confusion; mediating is incredibly difficult; generational differences.

Existential risks: Two kinds:
(1) Speculative threats: designer pathogens, grey goo, robot takeovers.
(2) Alterations in human nature, character or presentation.

How do existential risks play in public policy?

Short of a clear-cut danger or crisis, we get periodic mini-crises - public education, working through issues, etc.

In a democratic society the public should give at least tacit approval to the futures it is creating. But can the future be predicted?

The middle area: Combinatorial impact: Life extension; mixing cognitive enhancers; cyborg soldiers.

Re: cyborg soldiers: Biomarkers/Screening/Resilience - polymorphisms of the FKBP5 gene (Binder et al.) combined with childhood abuse, etc., seem to be indicative of low resilience. Screening techniques: a warrior class, profiles required for high-risk occupations, social engineering (new forms of discrimination; PTSD: long-term suffering, costs). Is it ethical to send someone to the frontline who has a genetic propensity for PTSD?

Assessing risks: We need a mechanism for evaluating when dangerous thresholds have been or are about to be crossed, and a new credible vehicle for monitoring and tracking the impacts of emerging technologies. A think tank? An agency? Easier said than done, because who will take it seriously? First step: expert workshops, which are credible, applying the Danish model (using a small group of the public that represents the larger population).

Pieces: Foresight and planning (anticipatory ethics, forward engagement); potential massive combinatorial impact; TechCast; etc.

Related issues for expert workshops: adequacy of existing policy mechanisms, public education, reports, monitoring, etc.

With proper attention the time available for creative action expands.
Upshot: He is for reviving the Congressional Office of Technology Assessment, as well as experiments like the Danish technology jury model. We need more think tanks and expert workshops. We need new bodies to advise national executives about the economic, social and national security implications of emerging technologies.






Monika Piotrowska is a doctoral philosophy student at the University of Utah working on the moral and legal status of human-animal hybrids, specifically human-mouse neural tissue chimeras. She is speaking on Causes that Make a Moral Difference: Examining the Moral Status of the Human Neuron Mouse. She is inspired by the report of an ethics expert working group convened to consider the implications of a mouse with human neurons, published as "Thinking about the Human Neuron Mouse" in AJOB. The group was responding to criticisms that introducing human neurons into animals might give them more human status. In response to the report, Mark Sagoff asks why it is relevant that the neurons are human. That is just anthropocentrism (or "human-racism"). What if introducing dolphin neurons into a mouse gave it more human characteristics than human neurons would? Isn't it the characteristics that we are introducing that are morally relevant, rather than the humanness of the tissues or genes?

The most famous critic of anthropocentrism is Peter Singer. He argues that discriminating morally between humans and animals is indefensible. What is actually relevant is their cognitive and emotional capacities. We shouldn't coerce creatures with autonomy, torture creatures that experience pain, or deny the subjectivity of creatures with subjectivity, whether they are human or not.

Philosophy and law do, however, often make distinctions between the status of things on the basis of their origins, even if they are otherwise identical. She references Searle's distinction between organic brains and machine brains with identical outputs, but whose internal workings he assumes are in fact different. Human vs. mouse origins can therefore be morally significant even if the resulting creature is behaviorally identical. (Setting aside that mice will never be behaviorally identical to humans no matter their neural tissue, this argument implies that causal origins have actually made the two functionally, internally different. This is not a refutation of non-anthropocentric ethics.)

Inferring the mental states of uplifted mice is an epistemological problem. Does a rat's avoidance of painful stimuli mean that it has the same experience of "pain" that humans do? If not, we also can't conclude that it doesn't have that experience through some other pathway or modality. Lobsters may have different pain pathways than we do.

In response to this problem Martha Farah argues that arguments from analogy are useful in inferring the experiences of animals. If we know that a mental capacity is related to a gene or cell in humans, and that capacity is increased by introducing that gene or cell in an animal, then we can assume that the animal has some version of the same capacity.

The capacities can be arrayed from the easy to test, such as pain, empathy and intentionality, to the hard to test, such as sentience and self-awareness. If that is the case, then it would be morally relevant whether the cells came from humans or dolphins: we don't know directly about the mental experience of dolphins, but we do about that of humans.

Read George's notes here






Katherine Drabiak-Syed is a bioethics faculty member at Indiana University working on the legal and regulatory issues around genomics. She is speaking on Reining in the Psychopharmacological Enhancement Train. She is annoyed that the American Academy of Neurology issued guidelines for neurologists on how they can legally and ethically prescribe cognitive-enhancing drugs, specifically drugs like modafinil, to well patients. (Also see discussion by Susan Gilbert and Dan Larriviere.)

She thinks this is an illegal off-label prescription which enlists patients in a risky medical experiment, driven by a "society stuck in overdrive." If this spreads, employers may begin to require the use of these drugs for workers in jobs like truck driving, piloting, medical internships and other jobs where wakefulness is critical. But modafinil is not indicated for healthy people, and has a number of possible (albeit very rare) side effects. (Since 90% of modafinil use is off-label, with very few reports of adverse events, the empirical evidence would seem to support modafinil's safety, however.) Users could become addicted, and sustained avoidance of sleep could make people sick and degrade their mental health. The drugs may then mask the degradation of their decision-making. Federal law says that taking a drug solely for the purpose of the experience makes it a controlled substance. (Uh, alcohol, chocolate, caffeine, etc.?)

Therefore prescribing cognitive enhancers would constitute a violation of professional ethics and of the Controlled Substances Act. (I think this account needs a strong dose of Richard Glen Boire's thinking on cognitive liberty.)

Read George's notes here