Taking Aim: The Problematic Nature of Artificial Intelligence in a Time of Social Conflict
William Sims Bainbridge
January 2, 2019

First, I must mention that I myself have long worked with artificial intelligence, programming multi-agent systems based on neural nets and rule-based reasoning, believing that artificial intelligence could be of great benefit for social science.  In May 1993 I organized a workshop on Artificial Social Intelligence at the National Center for Supercomputing Applications at the University of Illinois, in collaboration with six creative sociologists who were highly competent in this area: Edward E. Brent, Kathleen M. Carley, David R. Heise, Michael W. Macy, Barry Markovsky and John Skvoretz.  Our report was published in the 1994 volume of the Annual Review of Sociology, and here is the abstract I wrote for it:

Sociologists have begun to explore the gains for theory and research that might be achieved by artificial intelligence technology: symbolic processors, expert systems, neural networks, genetic algorithms, and classifier systems. The first major accomplishments of artificial social intelligence (ASI) have been in the realm of theory, where these techniques have inspired new theories as well as helping to render existing theories more rigorous. Two application areas for which ASI holds great promise are the sociological analysis of written texts and data retrieval from the forthcoming Global Information Infrastructure. ASI has already been applied to some kinds of statistical analysis, but how competitive it will be with more conventional techniques remains unclear. To take advantage of the opportunities offered by ASI, sociologists will have to become more computer literate and will have to reconsider the place of programming and computer science in the sociological curriculum. ASI may be a revolutionary approach with the potential to rescue sociology from the doldrums into which some observers believe it has fallen.

That vision was plausible and based on extensive citation of work already done, but even after a quarter century reality has not caught up with it.  My own work was primarily in the realm of theory, writing programs that simulated social movements, religious beliefs, and the emergence of multiple competing subcultures.  The titles of the following publications will suggest the scope of that subarea of ASI, and a minimal sketch of this style of simulation appears after the list:

1984 "Computer Simulations of Cultural Drift," Journal of the British Interplanetary Society 37: 420-429.

1987 Sociology Laboratory. Belmont, California: Wadsworth. (including software with 12 educational computer simulation programs)

1995 "Neural Network Models of Religious Belief," Sociological Perspectives 38: 483-495.

1995 "Minimum Intelligent Neural Device: A Tool for Social Simulation," Mathematical Sociology, 20: 179-192.

2006 God from the Machine: Artificial Intelligence Models of Religious Cognition. Walnut Creek, California: AltaMira.

2018 Computer Simulation of Space Societies.  Cham, Switzerland: Springer.
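
To make that style of simulation concrete, here is a minimal sketch in Python, written for this essay rather than drawn from any of the publications above.  It assumes only the numpy library, and every name and parameter is illustrative: each agent carries a small vector of "beliefs," loosely analogous to the weights of a minimum intelligent neural device, and repeated interactions with similar agents allow distinct subcultures to emerge.

```python
# Hypothetical minimal sketch (not the published code): agents with tiny
# belief vectors influence one another, and subcultures emerge as clusters.
import numpy as np

rng = np.random.default_rng(42)

N_AGENTS, N_BELIEFS, N_STEPS = 100, 8, 20_000
LEARNING_RATE = 0.05        # how strongly an agent shifts toward a partner
SIMILARITY_THRESHOLD = 0.0  # only sufficiently similar agents interact

# Each row is one agent's belief vector, kept at unit length.
beliefs = rng.normal(size=(N_AGENTS, N_BELIEFS))
beliefs /= np.linalg.norm(beliefs, axis=1, keepdims=True)

for _ in range(N_STEPS):
    a, b = rng.choice(N_AGENTS, size=2, replace=False)
    similarity = beliefs[a] @ beliefs[b]  # cosine, since rows are unit length
    if similarity > SIMILARITY_THRESHOLD:
        # Homophily: agent a pulls its beliefs toward those of agent b.
        beliefs[a] += LEARNING_RATE * (beliefs[b] - beliefs[a])
        beliefs[a] /= np.linalg.norm(beliefs[a])

# Count emergent subcultures: groups of agents whose vectors nearly coincide.
sims = beliefs @ beliefs.T
n_clusters = 0
unassigned = set(range(N_AGENTS))
while unassigned:
    seed = unassigned.pop()
    cluster = {j for j in unassigned if sims[seed, j] > 0.95}
    unassigned -= cluster
    n_clusters += 1
print(f"{n_clusters} subcultures emerged from {N_AGENTS} agents")
```

Runs of this kind typically settle into a small number of internally homogeneous clusters, which is the qualitative phenomenon the publications above explored at much greater depth.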

One point to note is that artificial intelligence is not a new idea.  The current frenzy of interest among students in computer science programs could be described as a revival of the Cybernetics Movement that arose at the end of the Second World War and of the 1956 Dartmouth Summer Research Project on Artificial Intelligence.  One theory worth exploring is that ambitious young people are flocking to AI because other opportunities have faded, either because progress in other fields of science and engineering has stalled, or because increasing inequality across the social classes has downgraded many skilled professions.

This raises a deeper issue.  Clearly, the world has entered a period of turmoil, although the degree of the chaos is unknown.  In a happier time, it seemed safe to advocate development of powerful AI techniques, because we had some confidence they would be used for human benefit.  But we cannot make that assumption now.  As a practical matter, it may not be possible to establish policies slowing AI advances, or taking AI out of the hands of evil actors.  But we can perform research on the social implications of the various technologies called AI, on the activism of the people promoting and developing these technologies, and more generally on what new forms of governance and social organization will be required for the humane new civilization we seek to create.

A starting point is to discuss the distinct issues about AI that we can now identify, and below I will name eight of them.  Each paragraph begins with a sentence that should be read as an hypothesis, rather than as my personal view or something I want to convince you to believe.

1. Pro-AI rhetoric contains many exaggerations, for example calling machine learning an advanced form of artificial intelligence when it really is just an ornate addition to classical multivariate statistical methods, such as factor analysis, which is nearly a century old (a numerical sketch of this kinship appears after the eighth point below).  The definition of "artificial intelligence" is actually very uncertain.  For example, in conversations with senior AI computer scientists, I find that they do not consider recommender systems to be examples of AI, although this widely-used technology incorporates machine learning and a range of other complex methods.  Perhaps they want to reserve the term AI for pure machine intelligence, while recommender systems are collaborations between people and computers that provide advice to people, for example the Netflix or MovieLens systems for recommending movies that a particular user might like to see.  A classic joke runs: "If it works, it ain't AI."  Yes, "AIn't contAIns AI."  More seriously, AI advocates are typically very vague in explaining why the AI they are developing will have radically new abilities, when the two forms most widely (and vaguely) discussed are iterative data-fitting techniques that will use bigger datasets, and neural nets with more levels of nodes and connections.  So the AI community itself needs to be studied.

2. Artificial stupidity may be the widespread result if excessive enthusiasm for AI causes governments and corporations to automate their services using inferior methods.  Consider what happens when you call a company on the telephone with a question or request, and encounter an AI agent using crude speech recognition to distinguish "yes" from "no" when you respond to one of its often irrelevant questions.  To be sure, both you and the company benefit when the cost of transmitting reliable information is reduced, and I imagine that a job in which you would need to answer hundreds of phone calls each day could become rather tedious.  There does exist a well-developed field of research called human-computer interaction (HCI), with many journals and conferences, but it tends to focus on the design of HCI systems, rather than their desirability, undesirability, or ethical, legal and social implications.

3. The AI fad is drawing attention away from the humanities and social sciences, which are much more important in creating a new civilization that is both peaceful and creative.  A painful barrier is the politicization of these fields.  Yes, right-wing politicians and voters oppose government funding of the humanities and social sciences, but their left-wing opponents are not really supporters.  In the case of the social sciences in the United States, for example, back in the 1970s the Democratic Party began to move away from sociology, when a number of studies found that some of the party's social programs were not functioning well.  Given the low funding levels and inferior public status of social science, it would take years of special initiatives to strengthen it sufficiently for it to play a major role in establishing better governmental policies.  I have long been an advocate of a fresh approach in the humanities that would combine qualitative and quantitative methods, for example in a questionnaire study at the 1978 World Science Fiction Convention that led to an early recommender-system publication, the book Dimensions of Science Fiction (Harvard University Press, 1986).  There may be many ways the humanities and social sciences could evolve, if properly funded and encouraged, and some would use AI rather than oppose it.

4. As in the earlier case of the spaceflight social movement, which gave us nuclear-armed ICBMs rather than colonies on Mars, AI is magnifying the danger of war in an increasingly conflicted world.  In the Second World War it was perhaps ironic that nasty Germany developed long-range missiles like the V-2 that were prototype spaceships, while nice America developed the atom bomb and used it to evaporate two Japanese cities.  At the end of that conflict, physicists on both sides were surprised: the Germans that the Americans had been able to develop nuclear weapons, and the Americans that the Germans had not really tried to do so.  Now that important international treaties limiting missiles and nuclear weapons are being abandoned, we seem to have entered a MAD period of Mutually Assured Destruction, in which war is deterred by the knowledge that both sides would lose.  But that logic may not apply to the great variety of AI military applications, because they can be applied gradually, causing escalation of hostilities.  On September 7, 2018, the US Defense Advanced Research Projects Agency announced a $2 billion AI initiative, and other governments are doing the same.  Research on what hostile governments are developing in secret is next to impossible, so a political movement to ban AI military applications may be necessary, perhaps banning nuclear weapons at the same time.

5. It is reasonable to worry that AI would greatly reduce the need for human workers, thus causing a surge in unemployment, given that the principle of creative destruction, which earlier created more new jobs than it destroyed, may not apply once technology is able to duplicate fundamental human abilities.  This is an area in which many kinds of social science research are both possible and valuable, and indeed any policy revolutions need to be based on high-quality research rather than just ideology or imagination.  A guaranteed basic income and universal health care must climb high political barriers before enactment, and it is not at all clear what their consequences for self-respect and social structure might be.  An idea I find attractive but uncertain is decommercialization of culture, which could give a much greater fraction of the creative public a basis for social respect.  In the journal Science in 2002 I noted that current laws seem prejudiced against the majority of musicians: "Copyright provides protection for distribution companies and for a few celebrities, thereby helping to support the industry as currently defined, but it may actually harm the majority of performers. This is comparable to Anatole France's famous irony, 'The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges.' In theory, copyright covers the creations of celebrities and obscurities equally, but only major distribution companies have the resources to defend their property rights in court. In a sense, this is quite fair, because nobody wants to steal unpopular music, but by supporting the property rights of celebrities, copyright strengthens them as a class in contrast to anonymous musicians."  Thus, perhaps in many areas, we could mitigate the negative employment impacts of AI and related information technologies by abandoning some traditional norms like copyright.

6. Automation concentrates power in a small number of large corporations and a technical elite, increasing inequality across the social classes.  This may sound like a bland statement of the Marxist theory of class conflict, yet it is presented here as an hypothesis that needs study, and that may apply in some industries but not others.  Currently, there is much debate about the apparent power that a few really vast corporations have gained, including Google, Facebook, Amazon and Apple.  Ethical debates are flourishing within these particular companies, and it may be that the century-old Iron Law of Oligarchy proposed by Robert Michels still applies: that societies oscillate between democracy and dictatorship, continually returning to rule by a somewhat large elite.  An even more problematic theory is the Frontier Thesis of Frederick Jackson Turner, which suggests that freedom requires the existence of a disruptive frontier like the Wild West, one that supported democracy even in more settled regions.  The assertion by Vannevar Bush that science is the endless frontier remains unconfirmed, while the Star Trek notion that space is the final frontier becomes worrisome when we recall that nobody has been to the Moon since 1972.  AI may be the last human frontier, if it transfers discovery from human minds to artificial intelligences.

7. Whether through privacy violations, oppressive propaganda personalized to be most effective in controlling individual victims, or automatic infliction of punishments, AI may be used intentionally by governments to reduce the freedom of their citizens.  It is hard to imagine a more difficult issue, in terms of both logic and ethics, because some degree of control over the population is one of the duties of good governments.  Do we really object to the police using AI to catch criminals?  What about the application of AI methods like natural language processing to block slander and prejudice in social media?  It may be possible to find a middle ground, for example allowing free debate about a particular religion, but not insulting individuals for being members of it.  However, a number of dictatorial governments are reported to have begun using AI techniques to censor Internet communications.

8. Despite the excessive dramatization of AI's dangers in popular movies like The Terminator (1984) and The Matrix (1999), development of artificial general intelligence could conceivably result in the extinction of humanity.  Literary science fiction has long suggested the possibility of personality uploading, as summarized in my 2014 book Personality Capture and Emulation, so there can be positive ways in which AI evolution could move beyond Homo sapiens.  In 1994 Robin Hanson rather cogently suggested in "If Uploads Come First" that copying human memories into computers could achieve artificial general intelligence long before it could be done on a purely mathematical basis.  The Terminator film series explored the possibility of humanization of an AI individual, and The Matrix series did the same on a social scale.  Among the challenging questions about complete upload of a human personality are whether the result would be immortality only for the elite class, and whether cyber embodiment of humans would permit colonization of our solar system and of the distant stars that will take centuries to reach by any feasible mode of transport.  So there are philosophical issues to discuss, and scientific questions worthy of research, even long before this technological revolution comes to its good or evil end.
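
As promised under the first point, here is a hedged numerical sketch of the claim that much machine learning is continuous with classical multivariate statistics.  It is written for this essay, not taken from any cited source; it assumes Python with numpy, and every name and parameter in it is illustrative.  A one-layer "neural" model with linear activations, trained by ordinary gradient descent, recovers essentially the same low-dimensional structure that principal component analysis, a close cousin of century-old factor analysis, extracts directly.

```python
# Illustrative sketch: a gradient-trained linear autoencoder finds the same
# subspace as classical principal component analysis (PCA).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 observations of 10 variables driven by 2 latent factors.
n, p, k = 500, 10, 2
loadings = rng.normal(size=(p, k))
factors = rng.normal(size=(n, k))
X = factors @ loadings.T + 0.1 * rng.normal(size=(n, p))
X -= X.mean(axis=0)  # center the variables, as both methods assume

# Classical answer: the top-k principal subspace, computed via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:k].T  # p x k orthonormal basis

# "Machine learning" answer: a linear autoencoder X -> X @ W @ W.T, trained
# by plain gradient descent on mean squared reconstruction error.
W = 0.01 * rng.normal(size=(p, k))
lr = 1e-4
for _ in range(5000):
    err = X @ W @ W.T - X
    grad = 2 * (X.T @ err @ W + err.T @ X @ W) / n
    W -= lr * grad

# Compare the two subspaces: cosines near 1 mean they coincide.
Q, _ = np.linalg.qr(W)  # orthonormalize the learned basis
overlap = np.linalg.svd(pca_basis.T @ Q, compute_uv=False)
print("cosines of principal angles:", np.round(overlap, 3))  # ~ [1. 1.]
```

If the printed cosines are close to 1, the subspace discovered by the gradient-trained model coincides with the classical one, which is the sense in which the newer technique can be read as an ornate elaboration of the older.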

Extensive discussions on topics like these eight have long flourished at IEET.org and a few other Internet sites, but I am finding it difficult to map the full range of reasonable online debate.  Therefore, I have just set up a Facebook group called Artificial Intelligence Meditations at www.facebook.com/groups/2036731969726102 with the motto "Take AIM!"  This Facebook group can serve as a communication hub where thoughtful people share information that provides insights into the potential harm of AI, and post links to research that evaluates its actual consequences.  Already, the group's page links to IEET's list of AI blogs, and to several comparable sources representing a range of viewpoints.  It is a closed Facebook group, whose existence is visible to the public but whose content and membership are not.  Members of the group will be free to copy links and information from AIM over to their own Facebook pages and use them in their own scholarly and scientific projects.  Please consider joining, and by whatever other means you wish, share ideas, information, and online source links concerning AI development issues in this turbulent period of human history.