Local differential privacy

Local differential privacy is a model of differential privacy with the added restriction that even if an adversary has access to an individual's responses in the database, that adversary will still be unable to learn too much about the individual's personal data. This contrasts with global differential privacy, a model of differential privacy that incorporates a central aggregator with access to the raw data.[1]

History

In 2003, Alexandre V. Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant[2] gave a definition equivalent to local differential privacy. In 2008, Kasiviswanathan et al.[3] gave a formal definition conforming with the standard definition of differential privacy.

The prototypical example of a locally differentially private mechanism is the randomized response survey technique proposed by Stanley L. Warner in 1965, predating modern discussions of privacy.[4] Warner's innovation was the introduction of the “untrusted curator” model, in which the entity collecting the data may not be trustworthy. Before users' responses are sent to the curator, the answers are randomized in a controlled manner that guarantees differential privacy while still allowing valid population-wide statistical inferences.
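The following is a minimal sketch of randomized response for a single yes/no question, written in Python; the truth probability p = 0.75 and the function names are illustrative assumptions, not details fixed by Warner's paper.

```python
import random

def randomized_response(true_answer: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p and its negation
    with probability 1 - p (assumed parameterization: 1/2 < p < 1)."""
    return true_answer if random.random() < p else not true_answer

def estimate_yes_fraction(reports: list[bool], p: float = 0.75) -> float:
    """Debias the observed fraction of 'yes' reports. If pi is the true
    fraction, the expected observed fraction is p*pi + (1 - p)*(1 - pi),
    which this function inverts."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Example: 10,000 respondents, 30% of whom truly answer "yes".
reports = [randomized_response(random.random() < 0.3) for _ in range(10_000)]
print(estimate_yes_fraction(reports))  # close to 0.3 in expectation
```

The curator never sees a respondent's true answer, yet the debiased estimate converges to the true population fraction as the number of respondents grows.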

ε-local differential privacy

Definition of ε-local differential privacy

Let $\varepsilon$ be a positive real number and $\mathcal{A}$ be a randomized algorithm that takes a user's private data as input. Let $\operatorname{im}(\mathcal{A})$ denote the image of $\mathcal{A}$. The algorithm $\mathcal{A}$ is said to provide $\varepsilon$-local differential privacy if, for all pairs $x, x'$ of a user's possible private data and all subsets $S$ of $\operatorname{im}(\mathcal{A})$:

$$\Pr[\mathcal{A}(x) \in S] \leq e^{\varepsilon} \cdot \Pr[\mathcal{A}(x') \in S],$$

where the probability is taken over the randomness used by the algorithm.
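As a concrete instance of the definition, binary randomized response (report the true bit $x$ with probability $p$, and $1 - x$ with probability $1 - p$, for an illustrative choice $1/2 < p < 1$) can be checked directly:

```latex
% Worst-case likelihood ratio of binary randomized response over
% inputs x, x' and any output bit s:
\[
\frac{\Pr[\mathcal{A}(x) = s]}{\Pr[\mathcal{A}(x') = s]} \le \frac{p}{1 - p},
\]
% so the mechanism provides eps-local differential privacy with
\[
\varepsilon = \ln\frac{p}{1 - p},
\qquad \text{e.g. } p = \tfrac{3}{4} \implies \varepsilon = \ln 3 .
\]
```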

The main difference from the standard definition of differential privacy is that, in the standard (central) definition, the probabilities concern the outputs of an algorithm that takes all users' data, whereas here they concern an algorithm that takes a single user's data.

Some formulations instead take an algorithm that has all users' data as input and outputs a collection of all responses (such as the definition in Raef Bassily, Kobbi Nissim, Uri Stemmer, and Abhradeep Guha Thakurta's 2017 paper[5]).

Deployment

Local differential privacy has been deployed by several internet companies:

  • RAPPOR,[6] which Google used to collect data from users, such as other running processes and Chrome home pages
  • Private Count Mean Sketch (and variants),[7] which Apple used to collect emoji usage, word usage, and other data from iPhone users; a simplified frequency-estimation sketch in this spirit appears below
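The deployed systems rely on more elaborate encodings (Bloom filters in RAPPOR; hashing and sketching in Count Mean Sketch), but a minimal sketch in the same spirit applies per-bit randomized response to a one-hot encoding of a categorical value. The parameters p = 0.75, q = 0.25 and all names below are illustrative assumptions, not the deployed algorithms.

```python
import random

def perturb_one_hot(value: int, k: int,
                    p: float = 0.75, q: float = 0.25) -> list[int]:
    """One-hot encode `value` in {0, ..., k-1}, then report each bit as 1
    with probability p if the true bit is 1 and with probability q if it
    is 0. This satisfies eps-LDP with eps = ln(p(1-q) / (q(1-p)))."""
    return [1 if random.random() < (p if i == value else q) else 0
            for i in range(k)]

def estimate_counts(reports: list[list[int]],
                    p: float = 0.75, q: float = 0.25) -> list[float]:
    """Debias per-category sums: E[sum_i] = c_i * p + (n - c_i) * q,
    where c_i is the true count of category i among n users."""
    n = len(reports)
    return [(s - n * q) / (p - q)
            for s in (sum(col) for col in zip(*reports))]
```

Each user sends only a noisy bit vector, yet the aggregator can recover approximate category frequencies across the whole population.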

References

  1. "Local vs. global differential privacy - Ted is writing things". desfontain.es. Retrieved 2020-02-10.
  2. Evfimievski, Alexandre V.; Gehrke, Johannes; Srikant, Ramakrishnan (June 9–12, 2003). "Limiting privacy breaches in privacy preserving data mining". Proceedings of the Twenty-Second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. pp. 211–222. doi:10.1145/773153.773174. ISBN 1581136706.
  3. Kasiviswanathan, Shiva Prasad; Lee, Homin K.; Nissim, Kobbi; Raskhodnikova, Sofya; Smith, Adam D. (2008). "What Can We Learn Privately?". 2008 49th Annual IEEE Symposium on Foundations of Computer Science. pp. 531–540. arXiv:0803.0924. doi:10.1109/FOCS.2008.27. ISBN 978-0-7695-3436-7.
  4. Warner, Stanley L. (1965). "Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias". Journal of the American Statistical Association. 60 (309): 63–69. doi:10.1080/01621459.1965.10480775.
  5. Bassily, Raef; Nissim, Kobbi; Stemmer, Uri; Thakurta, Abhradeep Guha (2017). "Practical Locally Private Heavy Hitters". Advances in Neural Information Processing Systems. 30. pp. 2288–2296. arXiv:1707.04982. Bibcode:2017arXiv170704982B.
  6. Erlingsson, Úlfar; Pihur, Vasyl; Korolova, Aleksandra (2014). "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response". Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. arXiv:1407.6981. Bibcode:2014arXiv1407.6981E. doi:10.1145/2660267.2660348.
  7. "Learning with Privacy at Scale". 2017. Cite journal requires |journal= (help)