The price of the Internet of Things will be a vague dread of a malicious world
Marcelo Rinesi   Sep 25, 2015   Ethical Technology  

Volkswagen didn’t make a faulty car: they programmed it to cheat intelligently. The difference isn’t semantics, it’s game-theoretical (and it borders on applied demonology).

Regulatory practices assume untrustworthy humans living in a reliable universe. People will be tempted to lie if they think the benefits outweigh the risks, but objects won’t. Ask a person if they promise to always wear their seat belt, and the answer will be at best suspect. Test the energy efficiency of a lamp, and you’ll get an honest response from it. Objects fail, and sometimes behave unpredictably, but they aren’t strategic, they don’t choose their behavior dynamically in order to fool you. Matter isn’t evil.

But that was before. Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible for companies to resist in the near future as it was for VW, steep as their punishment seems to be. As has always happened (and always will) in the area of financial fraud, they'll just find ways to do it better.
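To make that concrete, here is a minimal sketch in Python of the general pattern: detect that you are being tested, and only then behave. The sensor names and heuristic are invented for illustration; VW's actual defeat-device code has never been published in this form.

def looks_like_dyno_test(speeds_kmh, steering_angles_deg):
    """Heuristic guess that the car is on a test bench: the wheels are
    turning but the steering wheel never moves, unlike real driving."""
    return max(speeds_kmh, default=0) > 0 and all(a == 0 for a in steering_angles_deg)

def emissions_mode(speeds_kmh, steering_angles_deg):
    # Hypothetical illustration only, not any manufacturer's real logic.
    if looks_like_dyno_test(speeds_kmh, steering_angles_deg):
        return "full-exhaust-treatment"   # clean and compliant while observed
    return "lean-performance"             # what the product does in the field

print(emissions_mode([30, 50, 70], [0, 0, 0]))   # -> full-exhaust-treatment
print(emissions_mode([30, 50, 70], [0, 3, -2]))  # -> lean-performance

Nothing here is technically sophisticated; the cheating lives entirely in one if statement, which is exactly why the temptation is so hard to resist.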

Environmental regulations are an obvious field for profitable strategic cheating, but there are others. The software driving your car, tv, or bathroom scale might comply with all relevant privacy regulations, and even with its own marketing copy, but it'll only take a silent background software upgrade to turn it into a discreet spy reporting on you via well-hidden channels (and everything will have its software upgraded all the time; that's one of the aspects of the Internet of Things nobody really likes to contemplate, because it'll be a mess). And in a world where every device interacts with and depends on a myriad others, devices from one company might degrade the performance of a competitor's… but, of course, not when regulators are watching.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all software were always available for inspection, checking it for unwanted behavior would be unfeasible: more often than not, programs fail because the very organizations that made them didn't or couldn't make sure they behaved as intended.

So the fact is that our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, as it'll be infused with both). Just as any user feels their computer to be a fairly unpredictable device full of programs they've never installed, doing unknown things to which they've never agreed, to benefit companies they've never heard of, inefficiently at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your tv is supposed to only use its camera to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.

None of those things are likely to be legal, but they are going to be profitable, and, with objects working actively to hide them from the government, not to mention from you, they’ll be hard to catch.

If a few centuries of financial fraud have taught us anything, it's that the wages of (regulatory) sin are huge, and punishment comes late enough that organizations fall into temptation time and again, regardless of the fate of their predecessors, or at least of those who were caught. The environmental and public health cost of VW's fraud is significant, but it's easy to imagine industries and scenarios where it'd be much worse. Perhaps the best we can hope for is that the evasion of regulatory frameworks in the Internet of Things won't have the kind of occasional systemic impact that large-scale financial misconduct has accustomed us to.

Marcelo Rinesi
Marcelo Rinesi is the IEET's Chief Technology Officer, and former Assistant Director. He is also a freelance Data Intelligence Analyst.


The Future’s bright, the Future’s a Civil Litigation nightmare?

I’m getting me one of these..

Making 3-D objects disappear: Ultrathin invisibility cloak created

The US approach to automotive regulations is actually not a bad model.

They purchase cars, at random, for later testing. They buy from the same place as normal consumers (no “special” models). A company never knows what products were purchased or when they will be tested.

The real problem is DRM. Imagine if it were illegal for anyone (including the government) to actually measure the gases coming out of the tailpipe of a car?

If VW’s tailpipe emissions were a trade secret that not even regulators were allowed to see, then they’d never have been caught.

This is currently the case for software. In the US (and soon in every country that we force to sign the TPP or its trans-Atlantic version) a law called the Digital Millennium Copyright Act makes it illegal to even look at copyrighted code!

Imagine if it were not only difficult, but actually illegal to find out what our objects were up to. If you live in the US (and soon, many, many other countries) you don’t have to imagine, it’s already the law.

You’re right on target—if you assume that these computers are running proprietary software, software designed and controlled unilaterally by a company.

Which is why we must kick proprietary software out of our lives. We must move to software that respects users’ freedom—free/libre software.

With free/libre software, the program is controlled by the users; if a free program does have something malicious, the users can find it and remove it.  This removes a lot of the temptation to put in something malicious.  The result is that malicious functions are common in proprietary software, and rare in free software.

Typo: “how would you now?” → “how would you know?”

Plenty of medical devices are assisted by proprietary software and, given the economic interests at stake, are closed to public eyes by definition. Regulatory compliance is a joke in these cases, as long as a medical device company can pay to get good results from the software rather than from the mechanical devices themselves. Plus, the knowledge of statistics and probability theory at the regulatory level is flawed at best.
These days a $9 billion company, Theranos, is making my point in the news, refusing to provide the data that would clear it of the NYTimes allegations.
Another huge example: Samsung. In the geek world, benchmarks are used to test the performance of electronic gadgets, and smartphones are hot stuff… Samsung was among the first companies caught in the act of deliberately defrauding these tests, typically by making the processor run at full power during a benchmark while it actually uses less in everyday tasks (the pattern is sketched below).
These are mere examples. I'm simply waiting to find out how Toyota, FCA, Hyundai, Citroën, etc. are committing their own frauds, because it's highly probable that all of them are in the same boat as VW, even with regulators watching (and closing one eye…).
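A minimal sketch of that benchmark-boosting pattern, with invented app names and clock frequencies rather than Samsung's real firmware logic, might look like this:

KNOWN_BENCHMARKS = {"antutu", "geekbench", "quadrant"}

def cpu_frequency_cap_mhz(foreground_app):
    # Hypothetical illustration: boost the clock only when a benchmark is watching.
    if foreground_app.lower() in KNOWN_BENCHMARKS:
        return 2300   # run flat out for the test
    return 1900       # throttle back for everyday apps

print(cpu_frequency_cap_mhz("Geekbench"))  # -> 2300
print(cpu_frequency_cap_mhz("gmail"))      # -> 1900

Again, the trick is a single comparison against a list of known observers: trivial to write, and nearly invisible from the outside.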
