Google’s Anti-Facial Recognition Policy for Glass is Deadly
B. J. Murphy   Dec 4, 2013   Ethical Technology  

I believe Google is making a huge mistake in completely banning facial recognition systems for its Glass product. In my opinion, such a system could be used to help save thousands of lives. But then, we're so caught up on absolute privacy that we're willing to sacrifice actual, physical lives to ensure our privacy remains untainted. Such individualist dogma is deadly.

According to the Amber Alert webpage, "A child goes missing in the United States every 40 seconds," and "More than 700,000 children go missing annually." That is an absolutely frightening statistic! Much more frightening than the prospect that some Glass user may know my name.

How far are we willing to go to ensure absolute privacy isn't diminished whatsoever? When does the right of privacy begin interfering with the right of safety? Can the two come together in harmony, or are they destined to be in conflict until society finally reaches a decision over one or the other?

I understand the desire for privacy, but as I've argued in the past, as we as a society become more public and technologically open-source, the idea of privacy slowly fades away. That isn't to say that some forms of privacy can't be maintained. Surely we should have the right to say 'yes' or 'no' over whether or not our private data is to be shared publicly. That level of freedom and choice could easily maintain a sense of privacy for each individual.

But then, when it comes to missing children, or even missing adults, should we not be willing to sacrifice a portion of our privacy to ensure the safety of those who've gone missing? It doesn't even have to be that large a peek into anyone's private life - simply a facial recognition map, a name, and whether or not they're reported missing, or even possibly wanted.

Picture this with me: It's 2014 and only a few months have passed since the commercial launch of Google Glass. Hundreds of thousands of people have already acquired the device, scattered across the United States. A mandatory app is included with Glass, connected to the Amber Alert system. The app has Glass quietly scanning each face you cross paths with, but it doesn't reveal names, nor does it alert you that it's scanning. For all you know, it's a normal day like any other.

Now, as you're walking down a street, you pass an adult male with a pre-teen female. You don't pay them much attention. Just another couple of people walking by, as far as you're concerned. But Glass knows something you don't - the little girl has been reported missing.

As a result, without alerting you, the app quietly takes a snapshot of the girl and her unknown captor, contacts a 911 operator program, and delivers the GPS coordinates of where the photo was taken and the direction in which the girl was walking. The police show up, arrest the captor, and contact the parents of the missing child to inform them that she's been found safe.

This was able to occur because a parent - or family member, guardian, etc. - had allowed the missing child's name and facial recognition map to be archived in an Amber Alert system program, which connects via an app on Glass. Was the child's "privacy" diminished? Yes. But she's also alive because of it, and a kidnapper is off the streets, unable to harm anyone else.
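The hypothetical app's flow - silently compare each scanned face against an opt-in registry and, on a hit, report a location to authorities without notifying the wearer - could be sketched roughly as follows. Everything here is an illustrative assumption (the list-of-floats "embedding" stands in for a real facial recognition map, and the threshold, field names, and functions are invented, not any actual Glass or Amber Alert API):

```python
import math

MATCH_THRESHOLD = 0.92  # assumed similarity cutoff for declaring a match

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def check_face(embedding, amber_alert_db, location):
    """Compare one scanned face against the Amber Alert registry.

    Returns a report for the 911 operator program on a match, or None.
    Either way, the wearer is never notified.
    """
    for entry in amber_alert_db:
        if cosine_similarity(embedding, entry["embedding"]) >= MATCH_THRESHOLD:
            return {
                "name": entry["name"],
                "gps": location,
                "action": "notify_911",
            }
    return None
```

The key design point in the scenario is that the match result flows only to the operator program; the wearer's display never changes.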

Isn't this very real prospect of technologically-enhanced safety worth sacrificing a bit of our own privacy? While I'm not a parent, if any of my family were to go missing, their privacy would be the last thing I'd be concerned about. And if I'd gone missing, I'd want everyone to do all they could to find me, even if it meant sacrificing my own privacy.

Google Glass launches commercially just next year. And with Google's determination to ban facial recognition on Glass, we must ask ourselves: at what price?

B.J. Murphy is a Technoprogressive Transhumanist activist within the East Coast region of the U.S. He's worked with the asteroid mining company Planetary Resources as a member of their Planetary Community Vanguard, helping campaign funding for the ARKYD 100 Space Telescope, an open-source means of space exploration. He is a Writer, Editor, and Social Media Manager for SeriousWonder.com and runs his own blog called The Proactionary Transhumanist. He's a co-author of both Longevitize!: Essays on the Science, Philosophy & Politics of Longevity and The Future of Business: Critical Insights On a Rapidly Changing World From 60 Futurists.



COMMENTS

Why can't we have both? That is, no facial recognition under normal circumstances, but the kind of latent Amber Alert or missing-person system you propose? That way we could avoid, or at least dampen, the potential for manipulation that the design-fiction piece "Sight" (pictured at the top of your post) depicts, AND leverage the capacity of the technology for social good. It's not the precautionary or the proactionary principle but something I'd call Design Foresight.

The problem is, as we've already seen, the surveillance state accepts no limits. If you allow for this, you'll soon have any number of agencies demanding a full-time hookup, with full user ID and identification of all passersby. After all, if it saves just one, and what do you have to hide anyway?

Google has already encountered Leviathan, and is trying not to give it an in.

If your boss can watch you and you can watch your boss, it does not
balance out: rather, it gives your boss more power over you.  See
http://ieet.org/index.php/IEET/more/stallman20121208.

Beyond that, privacy is necessary for democracy.  See
http://www.gnu.org/philosophy/surveillance-vs-democracy.html.
To make whistleblowers and democracy safe, we must redesign
systems so that they do not accumulate dossiers about everyone.

One way to do this is to make them remember only people who are
specifically sought on legal grounds.

For instance, if face recognition systems and license-plate
recognition systems can only “see” people and cars that are being
sought, under a court order, plus invalid plates, they would do some
good while respecting most people’s privacy.  They would be able to
spot kidnaped children, to the extent such systems have any chance at
it.
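The design described above amounts to a data-minimization constraint: the recognizer may hold entries only for people sought under an active court order, and every other face must be discarded on the spot, leaving no dossier. A minimal sketch of that constraint (the class, threshold, and list-of-floats embeddings are all illustrative assumptions, not a real system):

```python
def similarity(a, b):
    # Toy similarity score: squared distance mapped into (0, 1].
    d = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1.0 / (1.0 + d)

class CourtOrderRecognizer:
    """Recognizer that can only 'see' people named in a court order."""

    def __init__(self, threshold=0.9):
        self.warrants = []   # only court-authorized entries live here
        self.threshold = threshold
        self.match_log = []  # matches are recorded; non-matches never are

    def add_warrant(self, name, embedding):
        self.warrants.append({"name": name, "embedding": embedding})

    def observe(self, embedding, location):
        """Check one face. Non-matching faces leave no trace at all."""
        for w in self.warrants:
            if similarity(embedding, w["embedding"]) >= self.threshold:
                self.match_log.append({"name": w["name"], "gps": location})
                return w["name"]
        return None  # the embedding goes out of scope; nothing is stored
```

The point of the sketch is what is absent: there is no code path that records a face unless it matches a warrant, so no dossier on the general public can accumulate.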

I doubt a kidnaped child will walk past you on the street with a
captor, acting as if everything were ok, such that you would need face
recognition to tell you to call 911.  But that’s a side issue.

Ah, the ol’ “won’t someone PLEASE think of the children!??!?” argument, probably the most reliable of the horsemen of the infocalypse.

So you propose we do have the facility for population-level facial recognition, but only allow this facility to exist in a narrowly-defined space that is only accessible to the powerful? With the added bonus of not even being made explicit to those doing the powerful’s data gathering? Gee, that doesn’t sound like it has “POTENTIAL FOR ABUSE” scrawled all over it in jagged red letters, at all. I mean, the NSA has already clearly demonstrated their trustworthiness, so of course they would only stick to the strictly-defined circumstances you outline. Clandestine crowdsourced monitoring of dissidents, love interests or pretty much anyone else some anonymous cubicle dweller has the whim to go after wouldn’t happen in a trillion years, right?

@rms:

“If your boss can watch you and you can watch your boss, it does not
balance out: rather, it gives your boss more power over you.”

If your boss can already watch you and you can’t watch back (which is pretty much where we are currently), it at least would tip the balance back in your direction.

Sensors are only going to get cheaper, smaller and more ubiquitous. Trotting out tired old emotionalisms to justify an even greater sharpening of the already gross power imbalance is not going to get us closer to a happy future. Seeing this technological trajectory for what it is, with the aforementioned tendency of secrecy to be abused, I fail to see how anyone can think anything but Brinian equiveillance is the answer. Not an ideal answer, but the one reality has dealt us.

21st Century: more surveillance cameras than people?

“Tax discs displayed on car windscreens, a staple of British motoring for almost a century, are to disappear, George Osborne will announce in his Autumn Statement.

Instead of displaying a disc to prove that a car is fully taxed, motorists will instead register their car online.

Traffic cameras will then automatically track vehicles on the road and identify those that are not registered for road tax.”

uk.finance.yahoo.com/news/autumn-statement-2013-tax-discs-073429932.html


Sousveillance will not be enough to tip the balance of power, or even to equal it, although it will help. Just "imagine" the litigation and financial prospects of each human taking each other and the government to court? Lawyers may become the demi-gods?

I agree that many areas in privacy, including facial recognition, are bound to fade away as technology advances. However, I doubt that the child kidnapping argument is a strong one. Stranger abductions of children are extremely rare - most abductions are by non-custodial parents. These abductions are crimes, but it is hard to argue that they pose much of a threat to the children involved. The likelihood of detecting a stranger-abducted child walking down the street is not much greater than the likelihood of detecting Elvis.

I think we need to limit use of population-level facial recognition
regardless of what entity is using it.

Requiring a court order is the way we stop the state from searching
your house without grounds.  Requiring face recognition systems to be
controlled by court order, so that they don’t track anyone in the
absence of a search warrant for that person, will limit face
recognition by the state (and by your boss).

@SHaGGGz

  If your boss can already watch you and you can’t watch back (which is pretty
  much where we are currently), it at least would tip the balance back
  in your direction.

That is exaggeration, and it is defeatist.

First, it’s an exaggeration.  If you work for a company, your boss
can’t watch you everywhere.  Even the US government can’t watch all
its employees everywhere, though it has too much surveillance capacity.

We need to make sure your boss, and the state, can’t watch you
everywhere.  We need this for our own privacy and we need this for
democracy.  “Brinian equiveillance” is not a solution, it is a swindle.

Second, it is defeatist.  It claims that privacy has been defeated,
it is doomed, so let’s join in destroying it.  That is nuts.

We are not defeated; we have not yet begun to fight for privacy.

The first part of that comment argues against trusting the NSA.  I
agree completely.

The way to stop trusting the NSA is to reduce the data the NSA can get.

The NSA looks at any and all data that businesses store.  “Brinian
equiveillance”, the idea that everything should be monitored by someone,
means the NSA will know everything.  That would require us to trust
the NSA 100%.

The way to reduce what the NSA knows is to limit the collection of
digital dossiers that the NSA can look at.

See http://www.gnu.org/philosophy/surveillance-vs-democracy.html
for various ways of doing this.

