Technology allows companies to collect more data, in more detail, about their users than ever before. Sometimes that data is sold to third parties; other times it’s used to improve products and services.
To protect users’ privacy, anonymization techniques can be used to strip away any piece of personally identifiable data and let analysts access only what’s strictly necessary. As the Netflix Prize competition showed in 2007, though, that can go awry. The richness of the data allows users to be identified through sometimes surprising combinations of variables, like the dates on which an individual watched certain movies. A simple join between an anonymized dataset and one of the many publicly available, non-anonymized ones can re-identify anonymized data.
Aggregated data is not much safer either under some circumstances! For example, say we have two summary statistics: one is the number of users, including Frank, that watch one movie per day and the other is the number of users, without Frank, that watch one movie per day. Then, by comparing the counts, we could tell if Frank watches one movie per day.
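This differencing attack can be sketched in a few lines of code; the names and counts below are made up for illustration:

```python
# Hypothetical example: two aggregate counts that differ only in
# whether Frank's record is included.
daily_watchers = {"alice", "bob", "carol", "frank"}

count_with_frank = len(daily_watchers)
count_without_frank = len(daily_watchers - {"frank"})

# Comparing the two aggregates reveals Frank's private habit.
frank_watches_daily = (count_with_frank - count_without_frank) == 1
print(frank_watches_daily)  # True
```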
Differential privacy formalizes the idea that a query should not reveal whether any one person is present in a dataset, much less what their data is. Imagine two otherwise identical datasets, one with your information in it and one without. Differential privacy ensures that the probability that a query will produce a given result is nearly the same whether it’s run on the first or the second dataset. The idea is that if an individual’s data doesn’t significantly affect the outcome of a query, then they might be willing to give their information up, as it’s unlikely that the information could be tied back to them. Note, though, that the result of a query can harm an individual regardless of their presence in the dataset. For example, if an analysis of a medical dataset finds a correlation between lung cancer and smoking, then the health insurance cost for a particular smoker might increase whether or not they took part in the study.
More formally, differential privacy requires that the probability of a query producing any given output changes by at most a multiplicative factor when a record (e.g. an individual) is added to or removed from the input. The largest such multiplicative factor quantifies the privacy guarantee. This sounds harder than it actually is, and the next sections will iterate on the concept with various examples, but first we need to define a few terms.
We will think of a dataset as a collection of records from a universe $\mathcal{X}$. One way to represent a dataset is with a histogram $x \in \mathbb{N}^{|\mathcal{X}|}$, in which each entry $x_i$ represents the number of elements in the dataset equal to $i \in \mathcal{X}$. For example, say we collected data about the coin flips of three individuals; then, given the universe $\mathcal{X} = \{\text{head}, \text{tail}\}$, our dataset $x$ would have two entries: $x_\text{head}$ and $x_\text{tail}$, where $x_\text{head} + x_\text{tail} = 3$. Note that in reality a dataset is more likely to be an ordered list of rows (i.e. a table), but the histogram representation makes the math a tad easier.
Given the previous definition of a dataset, we can define the distance between two datasets $x$ and $y$ with the $\ell_1$ norm as:

$$\|x - y\|_1 = \sum_{i=1}^{|\mathcal{X}|} |x_i - y_i|$$
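A quick sketch of this distance on histogram-encoded datasets:

```python
def l1_distance(x, y):
    """l1 distance between two histogram-encoded datasets."""
    return sum(abs(a - b) for a, b in zip(x, y))

# Two coin-flip datasets over the universe {head, tail}:
# y is x with one individual (who flipped head) removed.
x = [2, 1]  # 2 heads, 1 tail
y = [1, 1]  # 1 head, 1 tail
print(l1_distance(x, y))  # 1
```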
A mechanism is an algorithm that takes a dataset as input and returns an output, which can really be anything: a number, a statistical model, or some aggregate. Using the previous coin-flipping example, if mechanism $M$ counts the number of individuals in the dataset, then $M(x) = x_\text{head} + x_\text{tail} = 3$. In practice, though, we will specifically consider randomized mechanisms, where the randomization is used to provide privacy protection.
A mechanism $M$ satisfies $\epsilon$-differential privacy if for every pair of datasets $x, y$ such that $\|x - y\|_1 \leq 1$, and for every subset $S \subseteq \text{Range}(M)$:

$$\Pr[M(x) \in S] \leq e^\epsilon \Pr[M(y) \in S]$$
What’s important to understand is that the previous statement is just a definition. The definition is not an algorithm, but merely a condition that must be satisfied by a mechanism to claim that it satisfies differential privacy. Differential privacy allows researchers to use a common framework to study algorithms and compare their privacy guarantees.
Let’s check whether our counting mechanism $M$ satisfies differential privacy. Can we find a counter-example for which:

$$\Pr[M(x) \in S] \leq e^\epsilon \Pr[M(y) \in S]$$

is false? Given $x, y$ such that $\|x - y\|_1 = 1$ and $S = \{M(x)\}$, then:

$$\Pr[M(x) \in S] = 1 \qquad \Pr[M(y) \in S] = 0$$

i.e. $1 \leq e^\epsilon \cdot 0 = 0$, which is clearly false; hence mechanism $M$ doesn’t satisfy differential privacy.
A powerful property of differential privacy is that mechanisms can easily be composed. The composition theorems rely on the key assumption that the mechanisms operate independently given the data.
Let $x$ be a dataset and $g$ an arbitrary function. Then, the sequential composition theorem asserts that if each $M_i(x)$ is $\epsilon_i$-differentially private, then $g(M_1(x), \ldots, M_k(x))$ is $\sum_{i=1}^{k} \epsilon_i$-differentially private. Intuitively this means that given an overall fixed privacy budget, the more mechanisms are applied to the same dataset, the more the available privacy budget for each individual mechanism will decrease.
The parallel composition theorem asserts that given disjoint partitions $x_1, \ldots, x_k$ of a dataset $x$, if $M_i(x_i)$ is $\epsilon$-differentially private for an arbitrary partition $x_i$, then $g(M_1(x_1), \ldots, M_k(x_k))$ is $\epsilon$-differentially private. In other words, if a set of $\epsilon$-differentially private mechanisms is applied to a set of disjoint subsets of a dataset, then the combined mechanism is still $\epsilon$-differentially private.
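A small sketch of how the two theorems translate into privacy-budget accounting; the $\epsilon$ values below are made up for illustration:

```python
# Sequential composition: mechanisms run on the SAME dataset
# consume privacy budget additively.
sequential_eps = [0.1, 0.3, 0.2]   # epsilon of each mechanism (illustrative)
total_sequential = sum(sequential_eps)   # overall budget spent: 0.6

# Parallel composition: mechanisms run on DISJOINT partitions
# cost only the largest epsilon among them.
parallel_eps = [0.1, 0.3, 0.2]
total_parallel = max(parallel_eps)       # overall budget spent: 0.3

print(total_sequential, total_parallel)
```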
The first mechanism we will look into is “randomized response”, a technique developed in the 1960s by social scientists to collect data about embarrassing or illegal behavior. Study participants answer a yes-no question in secret using the following mechanism $M$:
- Flip a biased coin with probability $\alpha$ of landing heads;
- If heads, then answer truthfully with the true answer $d$;
- If tails, flip a second biased coin with probability $\beta$ of landing heads and answer “yes” for heads and “no” for tails.
```python
from random import random

def randomized_response_mechanism(d, alpha, beta):
    if random() < alpha:
        return d
    elif random() < beta:
        return 1
    else:
        return 0
```
Privacy is guaranteed by the noise added to the answers. For example, when the question refers to some illegal activity, answering “yes” is not incriminating, as that answer occurs with non-negligible probability whether or not it reflects reality, assuming $\alpha$ and $\beta$ are tuned properly.
Let’s try to estimate the proportion $p$ of participants whose true answer is “yes”. Each participant’s response can be modeled with a Bernoulli variable $X_i$, which takes the value 0 for “no” and 1 for “yes”. We know that:

$$\mathbb{E}[X_i] = \alpha p + (1 - \alpha)\beta$$

Solving for $p$ yields:

$$p = \frac{\mathbb{E}[X_i] - (1 - \alpha)\beta}{\alpha}$$

Given a sample of size $n$, we can estimate $\mathbb{E}[X_i]$ with the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then, the estimate $\hat{p}$ of $p$ is:

$$\hat{p} = \frac{\bar{X} - (1 - \alpha)\beta}{\alpha}$$
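A quick simulation of the estimator, reusing the mechanism defined above; the sample size and the 30% true “yes” rate are arbitrary choices for illustration:

```python
from random import random

def randomized_response_mechanism(d, alpha, beta):
    if random() < alpha:
        return d
    elif random() < beta:
        return 1
    else:
        return 0

def estimate_p(responses, alpha, beta):
    """Invert the bias of the mechanism:
    E[X] = alpha*p + (1 - alpha)*beta, so p = (mean - (1 - alpha)*beta)/alpha."""
    mean = sum(responses) / len(responses)
    return (mean - (1 - alpha) * beta) / alpha

# Simulate 100000 participants with a true "yes" rate of 30%.
true_answers = [1 if random() < 0.3 else 0 for _ in range(100000)]
responses = [randomized_response_mechanism(d, 0.5, 0.5) for d in true_answers]
print(estimate_p(responses, 0.5, 0.5))  # close to 0.3
```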
To determine how accurate our estimate is, we will need to compute its standard deviation. Assuming the individual responses $X_i$ are independent, and using basic properties of the variance,

$$\mathrm{Var}(\hat{p}) = \mathrm{Var}\!\left(\frac{\bar{X} - (1 - \alpha)\beta}{\alpha}\right) = \frac{\mathrm{Var}(X_i)}{n\,\alpha^2}$$

By taking the square root of the variance we can determine the standard deviation of $\hat{p}$. It follows that the standard deviation is proportional to $\frac{1}{\sqrt{n}}$, since the other factors do not depend on the number of participants. Multiplying both $\hat{p}$ and its standard deviation by $n$ yields the estimated number of participants that answered “yes” and its accuracy expressed in number of participants, which is proportional to $\sqrt{n}$.
The next step is to determine the level of privacy that the randomized response method guarantees. Let’s pick an arbitrary participant. The dataset is represented with either 0 or 1 depending on whether the participant would truthfully answer “no” or “yes”. Let’s call the two possible configurations of the dataset $x_0$ and $x_1$ respectively. We also know that $\|x_0 - x_1\|_1 \leq 1$. All that’s left to do is to apply the definition of differential privacy to our randomized response mechanism $M$:

$$\frac{\Pr[M(x_1) = 1]}{\Pr[M(x_0) = 1]} = \frac{\alpha + (1 - \alpha)\beta}{(1 - \alpha)\beta} \leq e^\epsilon$$

The definition of differential privacy applies to all possible configurations of the dataset and outputs, e.g.:

$$\frac{\Pr[M(x_0) = 0]}{\Pr[M(x_1) = 0]} = \frac{\alpha + (1 - \alpha)(1 - \beta)}{(1 - \alpha)(1 - \beta)} \leq e^\epsilon$$
The privacy parameter $\epsilon$ can be tuned by varying $\alpha$ and $\beta$. For example, it can be shown that the randomized response mechanism with $\alpha = \frac{1}{2}$ and $\beta = \frac{1}{2}$ satisfies $\ln 3$-differential privacy.
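We can check this numerically by computing the worst-case ratio of the response probabilities for $\alpha = \beta = \frac{1}{2}$:

```python
from math import log

alpha = beta = 0.5

# P(response | true answer), from the mechanism's definition.
p_yes_given_1 = alpha * 1 + (1 - alpha) * beta   # 0.75
p_yes_given_0 = alpha * 0 + (1 - alpha) * beta   # 0.25
p_no_given_1 = 1 - p_yes_given_1                 # 0.25
p_no_given_0 = 1 - p_yes_given_0                 # 0.75

# Worst-case ratio over all outputs and neighboring datasets.
worst_ratio = max(p_yes_given_1 / p_yes_given_0, p_no_given_0 / p_no_given_1)
epsilon = log(worst_ratio)
print(worst_ratio, epsilon)  # 3.0, i.e. epsilon = ln(3)
```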
The proof applies to a dataset that contains only the data of a single participant, so how does this mechanism scale to multiple participants? It follows from the parallel composition theorem that the combination of the $\epsilon$-differentially private mechanisms applied to the disjoint datasets of the individual participants is $\epsilon$-differentially private as well.
The Laplace mechanism is used to privatize a numeric query. For simplicity we are going to consider only counting queries $f$, i.e. queries that count individuals, hence we can assume that adding or removing an individual will affect the result of the query by at most 1.
The Laplace mechanism works by perturbing a counting query $f$ with noise distributed according to a Laplace distribution centered at 0 with scale $b = \frac{1}{\epsilon}$:

$$\text{Lap}(x \mid b) = \frac{1}{2b} e^{-\frac{|x|}{b}}$$

Then, the Laplace mechanism is defined as:

$$M(x) = f(x) + Y$$

where $Y$ is a random variable drawn from $\text{Lap}(\frac{1}{\epsilon})$.
```python
from numpy.random import laplace

def laplace_mechanism(data, f, eps):
    return f(data) + laplace(0, 1.0/eps)
```
It can be shown that the mechanism preserves $\epsilon$-differential privacy. Given two datasets $x, y$ such that $\|x - y\|_1 \leq 1$, and a function $f$ which returns a real number from a dataset, let $p_x$ denote the probability density function of $M(x)$ and $p_y$ the probability density function of $M(y)$. Given an arbitrary real point $z$,

$$\frac{p_x(z)}{p_y(z)} = \frac{e^{-\epsilon |f(x) - z|}}{e^{-\epsilon |f(y) - z|}} = e^{\epsilon \left(|f(y) - z| - |f(x) - z|\right)} \leq e^{\epsilon |f(x) - f(y)|}$$

by the triangle inequality. Then, since $f$ is a counting query and thus $|f(x) - f(y)| \leq 1$,

$$\frac{p_x(z)}{p_y(z)} \leq e^{\epsilon |f(x) - f(y)|} \leq e^{\epsilon}$$
What about the accuracy of the Laplace mechanism? From the cumulative distribution function of the Laplace distribution it follows that if $Y \sim \text{Lap}(b)$, then $\Pr[|Y| \geq t\,b] = e^{-t}$. Hence, let $b = \frac{1}{\epsilon}$ and $t = \ln\frac{1}{\delta}$:

$$\Pr\!\left[|M(x) - f(x)| \geq \frac{1}{\epsilon}\ln\frac{1}{\delta}\right] = \delta$$

where $0 < \delta \leq 1$. The previous equation sets a probabilistic bound on the accuracy of the Laplace mechanism that, unlike the randomized response, does not depend on the number of participants $n$.
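A quick simulation confirms the tail bound empirically; the sample size and $t = 1$ are arbitrary choices:

```python
import numpy as np

eps = np.log(3)
t = 1.0
n_samples = 200000

rng = np.random.default_rng(0)
noise = rng.laplace(0, 1.0 / eps, n_samples)

# Empirical probability that the noise exceeds t/eps in absolute value;
# the bound predicts exactly exp(-t).
empirical = np.mean(np.abs(noise) >= t / eps)
print(empirical, np.exp(-t))  # both close to 0.368
```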
The same query can be answered by different mechanisms with the same level of differential privacy. Not all mechanisms are created equal, though; performance and accuracy have to be taken into account when deciding which one to pick.
As a concrete example, let’s say there are $n$ individuals and we want to implement a query that counts how many of them possess a certain property $P$. Each individual can be represented with a Bernoulli random variable:
```python
from numpy.random import binomial

n, p = 10000, 0.3  # arbitrary illustrative values
participants = binomial(1, p, n)
```
We will implement the query using both the randomized response mechanism, which we know by now satisfies $\ln 3$-differential privacy with $\alpha = \beta = \frac{1}{2}$, and the Laplace mechanism, which satisfies $\ln 3$-differential privacy as well when $\epsilon = \ln 3$.
```python
def randomized_response_count(data, alpha, beta):
    # Apply the mechanism to each individual response, then invert the bias.
    randomized_data = np.array([randomized_response_mechanism(d, alpha, beta)
                                for d in data])
    return len(data) * (randomized_data.mean() - (1 - alpha)*beta)/alpha

def laplace_count(data, eps):
    return laplace_mechanism(data, np.sum, eps)

r = randomized_response_count(participants, 0.5, 0.5)
l = laplace_count(participants, log(3))
```
Note that while the randomized response mechanism is applied to each individual response and the results are later combined into a single estimated count, the Laplace mechanism is applied directly to the count, which is intuitively why the former is noisier than the latter. How much noisier? We can easily simulate the distribution of the accuracy of both mechanisms with:
```python
def randomized_response_accuracy_simulation(data, alpha, beta, n_samples=1000):
    return [randomized_response_count(data, alpha, beta) - data.sum()
            for _ in range(n_samples)]

def laplace_accuracy_simulation(data, eps, n_samples=1000):
    return [laplace_count(data, eps) - data.sum()
            for _ in range(n_samples)]

r_d = randomized_response_accuracy_simulation(participants, 0.5, 0.5)
l_d = laplace_accuracy_simulation(participants, log(3))
```
As mentioned earlier, the error of the randomized response count grows with the square root of the number of participants:
[Figure: Randomized Response Mechanism Accuracy]
while the error of the Laplace count is a constant:
[Figure: Laplace Mechanism Accuracy]
You might wonder why one would use the randomized response mechanism if its accuracy is worse than the Laplace mechanism’s. The thing about the Laplace mechanism is that the users’ private data has to be collected and stored, since the noise is applied to the aggregated data. So even with the best of intentions, there is the remote possibility that an attacker might get access to it. The randomized response mechanism, though, applies the noise directly to the individual responses, so only the perturbed responses are collected! With the latter mechanism no individual’s information can be learned with certainty, but an aggregator can still infer population statistics.
That said, the choice of mechanism is ultimately a question of which entities to trust. In the medical world, one may trust the data collectors (e.g. researchers), but not the general community who will be accessing the data. Thus one collects the private data in the clear, but then derivatives of it are released on request with protections. However, in the online world, the user is generally looking to protect their data from the data collector itself, and so there is a need to prevent the data collector from ever accumulating the full dataset in the clear.
The algorithms presented in this post can be used to answer simple counting queries. There are many more mechanisms out there used to implement complex statistical procedures like machine learning models. The concept behind them is the same though: there is a certain function that needs to be computed over a dataset in a privacy-preserving manner, and noise is used to mask an individual’s original data values.
One such mechanism is RAPPOR, an approach pioneered by Google to collect frequencies of an arbitrary set of strings. The idea behind it is to collect vectors of bits from users where each bit is perturbed with the randomized response mechanism. The bit-vector might represent a set of binary answers to a group of questions, a value from a known dictionary or, more interestingly, a generic string encoded through a Bloom filter. The bit-vectors are aggregated and the expected count for each bit is computed in a similar way as shown previously in this post. Then, a statistical model is fit to estimate the frequency of a candidate set of known strings. The main drawback with this approach is that it requires a known dictionary.
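A simplified sketch of the per-bit perturbation idea follows. Note that this is not the full RAPPOR protocol, which additionally memoizes a permanent randomized response per value and uses Bloom-filter encoding; the function name and parameters are illustrative:

```python
from random import random

def perturb_bit_vector(bits, alpha=0.5, beta=0.5):
    """Apply the randomized response mechanism independently to each bit."""
    out = []
    for b in bits:
        if random() < alpha:
            out.append(b)   # report the true bit
        elif random() < beta:
            out.append(1)   # random "yes"
        else:
            out.append(0)   # random "no"
    return out

# A user's (hypothetical) one-hot encoded answer.
true_vector = [0, 1, 0, 0]
print(perturb_bit_vector(true_vector))
```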
Later on, the approach was improved to infer the collected strings without the need for a known dictionary, at the cost of accuracy and performance. To give you an idea, to estimate a distribution over an unknown dictionary of 6-letter strings, in the worst case a sample size on the order of 300 million is required; the sample size grows quickly as the length of the strings increases. That said, the mechanism consistently finds the most frequent strings, which enables learning the dominant trends of a population.
Even though the theoretical frontier of differential privacy is expanding quickly, there are only a handful of implementations out there that, by ensuring privacy without the need for a trusted third party as RAPPOR does, suit the kind of data collection schemes commonly used in the software industry.