Here’s How We Can Make AI for Tracking Covid Safer
Kevin LaGrandeur   Jul 9, 2020   Ethical Technology  

During the protests against the killing of George Floyd around the country early this month, government watchdogs found that Predator drones were circling above the protesters in several cities.  There has rightfully been a great hue and cry from both political parties about this, because such surveillance arguably violates the protesters' First and Fourth Amendment rights: it puts a damper on their rights to peaceably assemble and to free speech, and it also violates their protection against unreasonable searches.

Meanwhile, over the past four months, various governments and tech corporations have been scrambling to find ways to use AI to help fight the Covid-19 virus.  Some of the uses to which AI has been put against the virus are helpful and harmless. One of many examples is an application called Daily Check, which uses Amazon's Alexa.  This AI-based diagnostic application automatically administers a questionnaire to seniors to see whether they have Covid symptoms; it alerts only a designated caregiver, so privacy is protected. And testing is ubiquitous, quick, and persistent.  It also serves a population that is often hard to reach.  Another AI application with seemingly little downside is Google's use of its DeepMind AI to predict more quickly the protein structure of the virus that causes Covid-19.  This can help scientists develop treatments for the virus.

The problematic innovations are AI-based applications used for tracking and isolating people with the virus.  

There are three big problems with these kinds of tools: 

  1. Privacy: In many tracking apps, collected data is sent to a central repository, and individuals often have no control over what is done with it; for instance, you can never delete the data you've given.  Even if that data is anonymized (and in some instances it isn't), centralized collection is not the best practice for subjects' security: if there is a mistake or a data breach, whatever data has been collected is vulnerable.
  2. Information security: Even without breaches, some apps (like Fitbit and Apple Health) used to collect data from subjects usually operate via third parties, and what those third parties do with the information is often not as strictly controlled as what the first-party software does. This also increases privacy risks.
  3. General lack of cybersecurity: Another big problem with information security is the general lack of cybersecurity for all databases these days; most businesses and government agencies simply don't devote enough attention and money to it. The evidence is the rampant reports of breaches by hackers over the last five years, including massive breaches at hospitals, at companies like Yahoo and Equifax, and at the federal government's Office of Personnel Management.

But perhaps the biggest problem with AI-based applications used for tracking the Covid virus is their potential for later abuse by governments.  Some bad actors are already trying to re-purpose tracking applications for oppressive surveillance of citizens.  In China, for instance, according to the New York Times, the government is looking for new ways to use the tracking app that it forced most people to load onto their smartphones.  During the outbreak, the government used the application to track and isolate sick citizens.  Now, the main idea is to use it for surveillance of the population.  One way officials are trying to coerce people to keep and use the application, according to the Times, is to make the "app…a digital assistant for accessing local services of all kinds, not just medical ones," for instance by unlocking coupons for stores.

Artificial Intelligence is increasingly being used to watch us without our knowledge or permission.  This is an extreme danger to all of us, no matter our political persuasion, and it calls for more organized and careful regulation of AI by our elected leaders.

A good start would be guidelines for current AI applications being developed to fight Covid.  Here are five that I and a number of scientists and technology experts recommend:

  1. The most privacy-preserving data options should be used
  2. The application should have limited purpose
  3. Using the application should be voluntary
  4. Data should be shared transparently and openly (so that all countries and doctors have access to everything)
  5. Time limits should be built into the application’s usability
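To make the first, third, and fifth guidelines concrete, here is a minimal, purely illustrative sketch (not the code of any real tracking app; all names are hypothetical) of a contact log designed around them: data stays on the user's own device rather than in a central repository, only a salted hash of a peer's rotating token is stored rather than any identity, and a retention window deletes entries automatically so the data cannot outlive its limited purpose.

```python
import hashlib
import os

# Hypothetical privacy-by-design sketch: local-only storage,
# anonymized identifiers, and a built-in time limit on the data.

RETENTION_SECONDS = 14 * 24 * 3600  # data expires after 14 days

class LocalContactLog:
    """Keeps contact records on the device; nothing is uploaded."""

    def __init__(self):
        self._entries = []  # list of (timestamp, hashed token)

    def record_contact(self, peer_token: bytes, now: float):
        # Store only a salted hash, so even a device breach
        # reveals no directly identifying information.
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + peer_token).hexdigest()
        self._entries.append((now, digest))

    def purge_expired(self, now: float):
        # Enforce the time limit: delete anything older than
        # the retention window, with no opt-out for the operator.
        self._entries = [(t, d) for (t, d) in self._entries
                         if now - t < RETENTION_SECONDS]

    def __len__(self):
        return len(self._entries)
```

The design choice the sketch illustrates is that the privacy guarantees live in the data structure itself, not in a policy document: because the raw token is never stored and old entries are purged mechanically, there is nothing for a government or third party to later re-purpose.
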

Kevin LaGrandeur
IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.


