Differential Privacy for Privacy-Preserving Data Analysis: An Introduction to our Blog Series

Does your organization want to aggregate and analyze data to learn trends, but in a way that protects privacy? Or perhaps you are already using differential privacy tools, but want to expand (or share) your knowledge? In either case, this blog series is for you.

Why are we doing this series? Last year, NIST launched a Privacy Engineering Collaboration Space to aggregate open source tools, solutions, and processes that support privacy engineering and risk management. As moderators for the Collaboration Space, we’ve helped NIST gather differential privacy tools under the topic area of de-identification. NIST also has published the Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management and a companion roadmap that recognized a number of challenge areas for privacy, including the topic of de-identification. Now we’d like to leverage the Collaboration Space to help close the roadmap’s gap on de-identification. Our end-game is to support NIST in turning this series into more in-depth guidelines on differential privacy.
Each post will begin with conceptual basics and practical use cases, aimed at helping professionals such as business process owners or privacy program personnel learn just enough to be dangerous (just kidding). After covering the basics, we’ll look at available tools and their technical approaches for privacy engineers or IT professionals interested in implementation details. To get everyone up to speed, this first post will provide background on differential privacy and describe some key concepts that we’ll use in the rest of the series.

The Challenge

How can we use data to learn about a population, without learning about specific individuals within the population? Consider these two questions:

  1.  “How many people live in Vermont?”
  2. “How many people named Joe Near live in Vermont?”

The first reveals a property of the whole population, while the second reveals information about one person. We need to be able to learn about trends in the population while preventing the ability to learn anything new about a particular individual. This is the goal of many statistical analyses of data, such as the statistics published by the U.S. Census Bureau, and machine learning more broadly. In each of these settings, models are intended to reveal trends in populations, not reflect information about any individual person.
But how can we answer the first question, “How many people live in Vermont?” (which we’ll refer to as a query), while preventing the second question from being answered: “How many people named Joe Near live in Vermont?” The most widely used solution is called de-identification (or anonymization), which removes identifying information from the dataset. (We’ll generally assume a dataset contains information collected from many individuals.) Another option is to allow only aggregate queries, such as an average over the data. Unfortunately, we now understand that neither approach actually provides strong privacy protection. De-identified datasets are subject to database-linkage attacks. Aggregation only protects privacy if the groups being aggregated are sufficiently large, and even then, privacy attacks are still possible [1, 2, 3, 4].

Differential Privacy

Differential privacy [5, 6] is a mathematical definition of what it means to have privacy. It is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy.
Informally, differential privacy guarantees the following for each individual who contributes data for analysis: the output of a differentially private analysis will be roughly the same, whether or not you contribute your data. A differentially private analysis is often called a mechanism, and we denote it ℳ.
Figure 1: Informal Definition of Differential Privacy
Figure 1 illustrates this principle. Answer “A” is computed without Joe’s data, while answer “B” is computed with Joe’s data. Differential privacy says that the two answers should be indistinguishable. This implies that whoever sees the output won’t be able to tell whether or not Joe’s data was used, or what Joe’s data contained.
We control the strength of the privacy guarantee by tuning the privacy parameter ε, also called a privacy loss or privacy budget. The lower the value of the ε parameter, the more indistinguishable the results, and therefore the more each individual’s data is protected.
Figure 2: Formal Definition of Differential Privacy
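In symbols, the standard formulation from [5, 6] states that a mechanism ℳ satisfies ε-differential privacy if, for any two datasets D and D′ that differ in the data of a single individual, and for any set S of possible outputs:

Pr[ℳ(D) ∈ S] ≤ e^ε · Pr[ℳ(D′) ∈ S]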

We can often answer a query with differential privacy by adding some random noise to the query’s answer. The challenge lies in determining where to add the noise and how much to add. One of the most commonly used mechanisms for adding noise is the Laplace mechanism [5, 7].
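As a minimal sketch of how this might look in practice (our illustration, not part of the original post; the dataset and parameter values are made up), the Laplace mechanism can be written in a few lines of Python:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return a differentially private answer by adding Laplace noise.

    The noise is drawn from a Laplace distribution with scale
    sensitivity / epsilon, following the Laplace mechanism of [5, 7].
    """
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# Illustrative counting query: "How many people live in Vermont?"
# A counting query has sensitivity 1, because adding or removing one
# person changes the count by at most 1.
true_count = 623_989  # hypothetical true answer
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.1)
print(private_count)
```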
Queries with higher sensitivity require adding more noise in order to satisfy a particular `epsilon` amount of differential privacy, and this extra noise has the potential to make results less useful. We will describe sensitivity and this tradeoff between privacy and utility in more detail in future blog posts.
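To get a rough feel for that tradeoff, one can re-run the sketch above with different values of ε (again an illustrative experiment, not from the original post): smaller ε means a larger noise scale and therefore noisier, but more private, answers.

```python
# Continuing the sketch above: smaller epsilon -> larger noise scale.
for eps in [1.0, 0.1, 0.01]:
    samples = [laplace_mechanism(true_count, sensitivity=1, epsilon=eps)
               for _ in range(3)]
    print(eps, [round(s) for s in samples])
```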

Benefits of Differential Privacy

Differential privacy has several important advantages over previous privacy techniques:

  1. It assumes all information is identifying information, eliminating the challenging (and sometimes impossible) task of accounting for all identifying elements of the data.
  2. It is resistant to privacy attacks based on auxiliary information, so it can effectively prevent the linking attacks that are possible on de-identified data.
  3. It is compositional — we can determine the privacy loss of running two differentially private analyses on the same data by simply adding up the individual privacy losses for the two analyses. Compositionality means that we can make meaningful guarantees about privacy even when releasing multiple analysis results from the same data.  Techniques like de-identification are not compositional, and multiple releases under these techniques can result in a catastrophic loss of privacy.
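As a simple illustration of composition: if one differentially private analysis is run with ε = 0.1 and a second analysis is run on the same data with ε = 0.2, basic sequential composition guarantees that releasing both results together satisfies (0.1 + 0.2) = 0.3-differential privacy.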

These advantages are the primary reasons why a practitioner might choose differential privacy over some other data privacy technique. A current drawback of differential privacy is that it is fairly new, and robust tools, standards, and best practices are not easily accessible outside of academic research communities. However, we expect this limitation can be overcome in the near future due to increasing demand for robust and easy-to-use solutions for data privacy.

Coming Up Next

Stay tuned: our next post will build on this one by exploring the security issues involved in deploying systems for differential privacy, including the difference between the central and local models of differential privacy.
Before we go: we want this series and subsequent NIST guidelines to contribute to making differential privacy more accessible. You can help. Whether you have questions about these posts or can share your knowledge, we hope you’ll engage with us so we can advance this discipline together.

References

[1] Garfinkel, Simson, John M. Abowd, and Christian Martindale. “Understanding database reconstruction attacks on public data.” Communications of the ACM 62.3 (2019): 46-53.
[2] Gadotti, Andrea, et al. “When the signal is in the noise: Exploiting Diffix’s sticky noise.” 28th USENIX Security Symposium (USENIX Security 19). 2019.
[3] Dinur, Irit, and Kobbi Nissim. “Revealing information while preserving privacy.” Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. 2003.
[4] Sweeney, Latanya. “Simple demographics often identify people uniquely.” Health (San Francisco) 671 (2000): 1-34.
[5] Dwork, Cynthia, et al. “Calibrating noise to sensitivity in private data analysis.” Theory of Cryptography Conference. Springer, Berlin, Heidelberg, 2006.
[6] Wood, Alexandra, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, James Honaker, Kobbi Nissim, David R. O’Brien, Thomas Steinke, and Salil Vadhan. “Differential privacy: A primer for a non-technical audience.” Vand. J. Ent. & Tech. L. 21 (2018): 209.
[7] Dwork, Cynthia, and Aaron Roth. “The algorithmic foundations of differential privacy.” Foundations and Trends in Theoretical Computer Science 9, no. 3-4 (2014): 211-407.
