I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. People are concerned, for example, about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.
When the Partially Examined Life discussion of human enhancement (Episode 91) turned to the topic of digital technology, the philosophical oxygen was sucked out of the room. Sure, folks conceded that philosopher of mind Andy Clark (not mentioned by name, but implicitly referenced) has interesting things to say about how technology upgrades our cognitive abilities and extends the boundaries of where our minds are located. But everything else was more or less dismissed as concerning not terribly deep uses of “appliances”.
Yes. Yes we can. The last year has brought with it revelations of massive government-run domestic spying machinery in the US and UK. On the horizon is more technology that will make it even easier for governments to monitor and track everything that citizens do. Yet I'm convinced that, if we're sufficiently motivated and sufficiently clever, the future can be one of more freedom rather than less.
"I believe that we have turned a corner: we have finally attained Peak Indifference to Surveillance. We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilization will only decline from here on in." - Cory Doctorow
Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers.
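To make the shortlisting idea concrete, here is a deliberately toy sketch, not any revenue service's actual model: a hand-weighted linear score over a few invented "red flag" features, with the top scorers shortlisted for audit. Every field name and weight below is made up for illustration.

```python
# Toy illustration of predictive shortlisting (all features and
# weights are invented; real audit-selection models are far richer).

def audit_risk_score(tax_return):
    """Combine a few invented red-flag features into a single score."""
    score = 0.0
    score += 2.0 * tax_return.get("deduction_to_income_ratio", 0.0)
    score += 1.5 * tax_return.get("cash_business", 0)        # 1 if cash-heavy business
    score += 1.0 * tax_return.get("round_number_entries", 0)  # count of suspiciously round figures
    return score

def shortlist(returns, top_n=2):
    """Return the top_n returns ranked by descending risk score."""
    return sorted(returns, key=audit_risk_score, reverse=True)[:top_n]

returns = [
    {"id": "A", "deduction_to_income_ratio": 0.9, "cash_business": 1, "round_number_entries": 3},
    {"id": "B", "deduction_to_income_ratio": 0.1, "cash_business": 0, "round_number_entries": 0},
    {"id": "C", "deduction_to_income_ratio": 0.5, "cash_business": 1, "round_number_entries": 1},
]

print([r["id"] for r in shortlist(returns)])  # → ['A', 'C']
```

Even this crude sketch shows where the worries about such systems come from: whoever sets the features and weights quietly decides who gets scrutinised, and the people being scored rarely see either.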
Facebook turns ten this year (yes, only ten), which means that if the company were a person she wouldn’t even remember when Friends was a hit TV show. That reference is meant to jolt anyone over 24 with the recognition of just how new the whole transparency culture, of which Facebook is the poster child, really is. Nothing so young can be considered a permanent addition to the human condition; it may be a mere epiphenomenon, like the fads and fashions we foolishly embraced in our youth, the mullet and the tie-dyed jeans, now left behind, lost in the haze of our stupidities and mistakes in judgement.
An RFID chip is a small electronic device that can be implanted in a human body. Such a chip is embedded to carry information about its bearer, such as identity credentials or contractual data (for example, a debit payment token). This creates a risk: because the chip simply transmits its stored information to any reader, that information can be intercepted and cloned.
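The cloning risk comes from the static nature of the simplest tags. The following toy simulation (no real RFID stack; class names and the payload are invented) shows why: a passive tag that answers every read with the same fixed payload can be "cloned" by recording a single read and replaying it.

```python
# Toy simulation of cloning a static RFID tag. A real attack would
# involve radio hardware; here we only model the protocol weakness.

class PassiveTag:
    """A minimal tag that always answers with the same stored payload."""
    def __init__(self, payload):
        self.payload = payload          # e.g. an identity or debit-account token

    def respond(self):
        return self.payload             # static response: no challenge, no secret

def skim(tag):
    """An attacker's reader: one read captures everything the tag will ever say."""
    return tag.respond()

genuine = PassiveTag(payload="ID:4471|ACCT:debit-0032")   # invented example token
captured = skim(genuine)                # a single covert read in passing
clone = PassiveTag(payload=captured)    # replay the capture verbatim

print(clone.respond() == genuine.respond())  # → True: a verifier cannot tell them apart
```

Tags that use challenge–response authentication rather than a static payload close this particular hole, which is why the bare "broadcast your ID" design is the worrying one.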
All the recent news about government data leaks has got me thinking about the role of government secrecy in a democracy. It seems to me that in order for the government to serve all the functions we ask of it, some temporary veil of secrecy may be necessary and ethically permissible. The problem is, if there isn’t a way for members of a democracy to know what their government is doing on their behalf, the democracy may lose its legitimacy.
This series of blog posts is looking at arguments in favour of sousveillance. In particular, it is looking at the arguments proffered by one of the pioneers and foremost advocates of sousveillance: Steve Mann. The arguments in question are set forth in a pair of recent papers, one written by Mann himself, the other with the help of co-author Mir Adnan Ali. Part one clarified what was meant by the term “sousveillance”, and considered an initial economic argument in its favour. To briefly recap, “sousveillance” refers to the general use of veillance technologies (i.e. technologies that can capture and record data about other people) by persons who are not in authority.
Steve Mann (pictured) has been described as the world’s first cyborg, and as a pioneer in wearable computing. He is certainly the latter. I’m not so sure about the former (I believe Mann rejects the title himself). He is also one of the foremost advocates for sousveillance in the contemporary era. Sousveillance is the inverse of surveillance. Instead of recording equipment solely being used by those in authority to record data about the rest of us, sousveillance advocates argue for a world in which ordinary citizens can turn the recording equipment back onto the authorities (and one another). This is thought to be beneficial in numerous ways.
The National Security Agency monitored the communications of other governments ahead of and during the 2009 United Nations climate negotiations in Copenhagen, Denmark, according to the latest document from whistleblower Edward Snowden. The document, with portions marked "top secret," indicates that the NSA was monitoring the communications of other countries ahead of the conference, and intended to continue doing so throughout the meeting.
The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all talking about the darker potential of such technology.
Someone interviewing me for a magazine asked me what current technology tomorrow’s children would find obsolete. I almost answered “The Internet.” Then I decided to think about that answer a little bit because it’s pretty scary. Then I decided it’s true. Shortly, humans may find today’s wide open Internet as archaic as we now find phones that are wired to walls. Here’s why. There are three huge pressures on the internet as we know it today – the one where I can write this essay, post it on my website, and you can find it and read it. Whoever you are.
This is the second part in a short series of posts on predictive algorithms and the virtues of transparency. The series is working off some ideas in Tal Zarsky’s article “Transparent Predictions”. The series is written against the backdrop of the increasingly widespread use of data-mining and predictive algorithms and the concerns this has raised.
Transparency is a much-touted virtue of the internet age. Slogans such as the “democratisation of information” and “information wants to be free” trip lightly off the tongue of many commentators; classic quotes, like Brandeis’s “sunlight is the best disinfectant” are trotted out with predictable regularity. But why exactly is transparency virtuous? Should we aim for transparency in all endeavours? Over the next two posts, I look at four possible answers to that question.
A well-intentioned grandmother accidentally hurt her grandkids’ feelings. She took screenshots of their delightful Instagram photos and proudly uploaded them to Facebook for all of her social network friends to see. If the younger generation didn’t set their accounts to private, could Grandma possibly have committed a faux pas? All she did was lovingly pass along publicly available information!
It would be nice to believe that the road to civility could be paved by following simple formulae, like Frank Bruni’s New Year’s exhortation, “Tweet less, read more”. Unfortunately, uncomplicated Op-Ed advice doesn’t translate into effective results in the messy real world.
The current level of general surveillance in society is incompatible with human rights. To recover our freedom and restore democracy, we must reduce surveillance to the point where it is possible for whistleblowers of all kinds to talk with journalists without being spotted. To do this reliably, we must reduce the surveillance capacity of the systems we use.
If we look back to the early days when the Internet was first exploding into public consciousness, in the 1980s, and even more so in the boom years of the ’90s, what we often find is a kind of utopian sentiment around this new form of “space”. It wasn’t only that a whole new plane of human interaction seemed to be unfolding into existence almost overnight; it was that “cyberspace” seemed poised to swallow the real world, a prospect which some viewed with hopeful anticipation and others with doom.
What kind of privacy will be left for humans in a future world of ubiquitous computing, with sensors everywhere, and with algorithms that draw alarmingly reliable inferences about our intentions and plans?
Communications technology use is growing at a near-exponential rate on a global scale.[1] A recent United Nations study shows that more people have access to cell phones than toilets: 6 billion of the world’s 7 billion people (85 percent) have access to mobile phones, while only 4.5 billion (64 percent) have access to working toilets.[2]
I believe Google is making a huge mistake in completely banning facial recognition systems for its Glass product. In my opinion, such a system could be used to help save thousands of lives. But then, we’re so damn caught up on absolute privacy that we’re willing to sacrifice actual, physical lives to ensure our privacy remains untainted. Such individualist dogma is deadly.
There’s a new “viral” video making the rounds. It’s a 15-minute pro gay-marriage film that interviews children about the concepts of prejudice, fairness and gay marriage. All the children in the video except one seem to think that basic principles of fairness should apply to men marrying men and women marrying women. However, throughout the video, one kid insists gay marriage “is just wrong.” When pressed for why this is so, the boy (who appears to be a five- or six-year-old) can provide no reason for his assertion.
As we learn more and more details regarding government spying, it seems more and more foolhardy to trust our security to third-party businesses. The state requires information on its subjects to be effective. From the first census in Egypt more than 5,000 years ago, states have sought personal information on their citizens, especially tyrannical states, where informants and secret police gather information on any and all potentially subversive activities.
Big data generates big myths. To help society set realistic expectations, the right kind of skepticism is needed. Kate Crawford, Principal Researcher at Microsoft Research and Visiting Professor at MIT’s Center for Civic Media, does a fantastic job of explaining why folks are too optimistic about the promise of what big data can offer. She rightly argues that too much faith in it inclines us to misunderstand what data reflects, overestimate the political efficacy of information, and become insensitive to civil rights concerns.
Healthcare providers are establishing electronic health record (EHR) systems at an astonishing rate, due in part to the Health Information Technology for Economic and Clinical Health (HITECH) Act. The HITECH Act was created as a part of the American Recovery and Reinvestment Act of 2009.
For Google, there was Innocence of Muslims. For Twitter, there were, and still are, rape threats. For Facebook, now there are decapitations. Facebook’s controversy is the newest in a long line of quagmires that make companies, or at least their customers, question American platitudes about free speech. It comes after Facebook briefly declined to ban a video of the brutal decapitation of a woman in Mexico, allowing it to go viral.