How to Stop Facebook From Making Us Pawns in Its Corporate Agenda
Evan Selinger
2014-07-01 00:00:00
In the case of Facebook’s most recently published study, the company used the words of some of you (and we can’t know whose) in ways you certainly did not intend: it tweaked the News Feed based on emotional indicators to measure the effect this would have on users’ moods. But this study is not unique. Social media companies regularly manipulate how user posts appear; the abuse of socially shared information has become a collective problem that requires a collective response.

This is a call to action. We should work together to demand that companies promise not to make us involuntary accomplices in corporate activities that compromise other people’s autonomy and trust.

Why Individual Responsibility Is Far From Enough

Many, though certainly not all, social media users are probably aware that their posts are curated for other people. Yet it’s still quite easy to fall into the trap of thinking that our mediated reality is the same as everyone else’s. In this mindset, the only way our words can prove harmful is when we make bad judgments about what we post. This perspective exerts a powerful hold on our imaginations because it suggests every time we log on, it’s up to us to do the right thing and make good judgments because others will be reading what we write. Unfortunately, this atomistic and choice-driven outlook ignores a deeper structural reality and erroneously frames the common good as protected by each user exhibiting sensitivity and self-control.

Through the lens of this overly reductive view of cyber-citizenship, each person does his or her part to promote the common good by accepting responsibility for three don’ts: don’t deliberately say something that hurts another person’s feelings; don’t disclose sensitive information that can harm your own reputation; and don’t let prying eyes peek, by using your privacy settings.

And yet, the case at issue demonstrates that individual discretion only goes so far when companies can take control of our information, repurpose it to the potential detriment of others, and keep us in the dark about processes that are hard to keep in mind while staring at the friendly “What’s on your mind?” box, all the while avoiding liability through lengthy and obtuse language in a Terms of Service agreement.

Even if the experiment produced a seemingly modest outcome and didn’t profoundly affect anyone’s life, a happy result couldn’t be presupposed at the outset; if it could, there would have been no need to run an experiment at all. Hypothetically, your sharing a problem to get it off your chest could, combined with others’ attempts to do the same, have been used in a way that left some of your friends (perhaps ones with emotional disorders) sadder than they would otherwise have been.
