As part of my day job, I recently recapped the Federal Trade Commission’s workshop on “Big Data” and discrimination. My two key takeaways were, first, that regulators and the advocacy community wanted more “transparency” into how industry is using big data, particularly in positive ways, and second, that there was a pressing need for industry to take affirmative steps to implement governance systems and stronger “institutional review board”-type mechanisms to overcome the transparency hurdle that the opacity of big data presents.
But if I’m being candid, I think we really need to start narrowing our definitions of big data. Big data has become a term that gets attached to a wide array of different technologies and tools that really ought to be addressed separately. We just don’t have a standard definition. The Berkeley School of Information recently asked forty different thought leaders how they would define big data, and got essentially forty different definitions. While there’s a common understanding of big data as more volume, more variety, and greater velocity, I’m not sure how any of those terms provides a foundation for talking about practices or rules, let alone ethics.
At the FTC’s workshop, big data was spoken of in the context of machine learning and data mining, the activities of data brokers and scoring profiles, wearable technologies, and the greater Internet of Things. No one ever set ground rules as to what “Big Data” meant as a tool for inclusion or exclusion. At one point, a member of the civil rights community was focused on big data largely as the volume of communications being produced by social media, while another panelist was discussing consumer loyalty cards. Maybe there’s some overlap, but the risks and rewards can be very different.