Privacy protections for the past 40 years have concentrated on two procedural mitigations: informed consent and anonymization. Informed consent attempts to turn the collection, handling, and processing of data into matters of individual choice, while anonymization promises to render privacy concerns irrelevant by decoupling data from identifiable subjects. This familiar pairing dates to the historic 1973 report to the Department of Health, Education & Welfare, Records, Computers and the Rights of Citizens, which articulated a set of principles in which informed consent played a pivotal role (what have come to be known as the Fair Information Practice Principles, or FIPPs) and proposed distinct standards for the treatment of statistical records (that is, records not identifiable with specific individuals).
In the years since, as threats to privacy have expanded and evolved, researchers have documented serious cracks in the protections afforded by informed consent and anonymity.a Nevertheless, many continue to see them as the best and only workable solutions for coping with privacy hazards.b They do not deny the practical challenges, but their solution is to try harder—to develop more sophisticated mathematical and statistical techniques and new ways of furnishing notice tuned to the cognitive and motivational contours of users. Although we applaud these important efforts, the problem we see with informed consent and anonymization is not only that they are difficult to achieve; it is that, even if they were achievable, they would be ineffective against the novel threats to privacy posed by big data. The cracks become impassable chasms because, against these threats, anonymity and consent are largely irrelevant.1