Monthly Archives: August 2013

Because Everyone Needs Facebook

Facebook has rolled out several proposed updates to its privacy policy that ultimately give Facebook even more control over its users’ information.  Coming on the heels of a $20 million settlement over Facebook’s use of users’ information in advertisements and “sponsored stories,” Facebook has responded by requiring users to give it permission to do just that:

You give us permission to use your name, profile picture, content, and information in connection with commercial, sponsored, or related content (such as a brand you like) served or enhanced by us.

A prior clause that suggested any permission was “subject to the limits you place” has been removed.

This is why people don’t trust Facebook. The comments sections to these proposed changes are full of thousands of people demanding that Facebook leave their personal information alone, without any awareness that that ship has sailed.  I don’t begrudge Facebook’s efforts to find unique and data-centric methods to make money, but as someone who is already reluctant to share too much about myself on Facebook, I can’t be certain that these policy changes won’t lead to Facebook having me “recommend” things to my friends that I have no association with.

But no one is going to “quit” Facebook over these changes.  No one ever quits Facebook.  As a communications and connectivity platform, it is simply invaluable to users.  These changes will likely only augment Facebook’s ability to deliver content to users, but as someone who’s been with Facebook since early on, I’ve watched it transform from a safe lil’ club into a walled Wild West where everyone’s got their eye on everyone.


Digital Market Manipulation Podcast

[audio http://www.futureofprivacy.org/wp-content/uploads/FPFCast.Calo_.mp3]

The other week, Rebecca Rosen wrote up a fantastic overview of Professor Ryan Calo’s new paper on “Digital Market Manipulation” in The Atlantic.  “What Does It Really Matter If Companies Are Tracking Us Online?” she provocatively asked in her headline.

Conveniently, I was scheduled to speak with Professor Calo about his essay Consumer Subject Review Boards — A Thought Experiment, which looks at how institutional review boards (IRBs) were put in place to ensure ethical human testing standards and suggests a similar framework could be brought to bear on consumer data projects.

I was able to ask him about the concept of digital market manipulation, which seems to move beyond mere “privacy” concerns into questions of fundamental fairness and equality.

Framing Big Data Debates

If finding the proper balance between privacy risks and Big Data rewards is the big public policy challenge of the day, we can start by having a serious discussion about what that policy debate should look like. In advance of my organization’s workshop on “Big Data and Privacy,” we received a number of paper submissions that attempted to frame the debate between Big Data and privacy. Is Big Data “new”?  What threats exist?  And what conceptual tools exist to address any concerns?

As part of my attempt to digest the material, I wanted to look at how several scholars attempted to think about this debate.

This question is especially timely in light of FTC Chairwoman Edith Ramirez’s recent remarks on the privacy challenge of Big Data at the Aspen Forum this week. Chairwoman Ramirez argued that “the fact that ‘big data’ may be transformative does not mean that the challenges it poses are, as some claim, novel or beyond the ability of our legal institutions to respond.” Indeed, a number of privacy scholars have suggested that Big Data does not so much present new challenges but rather has made old concerns ever more pressing.

Read More…

From Cyberspace to Big Data Podcast


In the run-up to the Future of Privacy Forum’s “Big Data and Privacy” workshop with the Stanford Center for Internet & Society, I’ve taken to podcasting again, speaking with scholars who couldn’t attend the conference.  First up was Professor Bill McGeveran, who prepared an essay looking over lessons from the 2000 Stanford symposium on “Cyberspace and Privacy: A New Legal Paradigm?”

Of course, now the buzzword has moved from cyberspace to big data.  McGeveran suggests big data is really seeing a replay of the same debates cyberspace saw a decade ago.  Among the parallels he highlights are (1) the propertization of data, (2) technological solutions like P3P, (3) First Amendment questions, and (4) the challenges posed by the privacy myopia.

Enter the Nexus?

In 2032, a group of genetically engineered neo-Nazis create a super virus that threatens to wipe away the rest of humanity. Coming on the heels of a series of outbreaks involving psychotropic drugs that effectively enslave their users, this leads to the Chandler Act, which places sharp restrictions on “research into genetics, cloning, nanotechnology, artificial intelligence, and any approach to creating ‘superhuman’ beings.” The Emerging Risks Directorate is launched within the Department of Homeland Security, and America’s war on science begins.

This is the world in which technologist Ramez Naam sets his first novel, the techno-thriller Nexus. Nexus is a powerful drug, oily and bitter, that allows human minds to be linked together into a mass consciousness. A hodgepodge of American graduate students develops a way to layer software into Nexus, allowing enterprising coders to upload programs into the human brain. It’s shades of The Matrix, but it’s hardly an impossible idea.

Read More…

The Toobin Principle as a Corollary to the Snowden Effect

Jay Rosen has a fantastic piece today on PressThink on what he calls the “Toobin principle.”  In effect, Jeffrey Toobin and a number of media figures have criticized Edward Snowden as a criminal or, worse, a traitor, even as they admit that his revelations have led to a worthwhile and, more importantly, newsworthy debate. For his part, Rosen asks whether there can “even be an informed public and consent-of-the-governed for decisions about electronic surveillance”?

I would add only the following observations. First, an informed public may well be the only real mechanism for preserving individual privacy over the long term. As we’ve seen, the NSA has gone to great lengths to explain that it was acting under appropriate legal authority, and the President himself stressed that all three branches of government approved of these programs. But that hasn’t stopped abuses — as identified in currently classified FISC opinions — or, and I think this is key, stopped government entities from expanding these programs.

This also raises the bigger, looming concern of what all of this “Big Data” means. One of the big challenges surrounding Big Data today is that companies aren’t doing a very good job communicating with consumers about what they’re doing with all this data.  Innovation becomes a buzzword to disguise a better way to market things to us. Like “innovation,” national security has long been used as a way to legitimize many projects. However, with headlines like “The NSA is giving your phone records to the DEA. And the DEA is covering it up,” I believe it is safe to say that the government now faces the same communications dilemma as private industry.

In a recent speech at Fordham Law School, FTC Commissioner Julie Brill cautioned that Big Data will require industry to “engage in an honest discussion about its collection and use practices in order to instill consumer trust in the online and mobile marketplace.”  That’s good advice — and the government ought to take it.
