Hughes on Regulating AI at the Singularity Summit

2007-09-21

IEET Executive Director James Hughes spoke at the 2007 Singularity Summit in San Francisco on "Waiting for the Great Leap...Forward?"

Abstract: Sentient, self-willed, greater-than-human machine minds are very likely within the coming fifty years. But to ensure that they don't threaten the welfare of the rest of the minds on the planet, a number of steps need to be taken. First, given their radically different architecture and origins, developing a software capacity for recognizing, relating to, and perhaps having empathy for human sentience should be a design goal, even if machine minds are likely to evolve beyond human perspectives and emotional traits. Second, building on the global networks established to identify and respond to computer viruses, governments and cyber-security firms need to develop detectors for, and counter-measures against, self-willed machine intelligence that may emerge, evolve, or be accidentally or maliciously released. Those detectors and counter-measures may or may not involve machine minds as well. Third, human beings should aggressively pursue cognitive enhancement and cyber-augmentation in order to give themselves a competitive chance against machine minds, economically and in the event of conflict. Fourth, since machine intelligence, self-willed or zombie, is likely to displace the need for most human occupations by the middle of the century, industrialized countries will need to renegotiate the relationship between education, work, income, and retirement, extracting a general social wage from robotic productivity to lift all boats, not just those of the shrinking group of workers and owners of capital. Finally, in order to ensure that we do not recapitulate slavery, we will need to be much clearer about what kinds of minds, organic and machine, have what kinds of responsibilities and are owed which kinds of rights. Machine minds with a capacity to understand and obey the obligations of a democratic polity should be granted the rights to own property, vote, and so on. Minds wishing to exercise capacities as dangerous as weapons or motor vehicles should be licensed to do so, while even more dangerous capacities (AI equivalents of bombs) will need to be restricted to control by, or be integrated into the functioning of, accountable democratic governance.

All the talks can be accessed here.

http://www.singinst.org/summit2007/audio/ss07-jameshughes.mp3