A smart privacy-preserving learning method by fake gradients to protect users items in recommender systems (Q2228145)

From MaRDI portal
Property / full work available at URL: https://doi.org/10.1155/2020/6683834 / rank
Normal rank
Property / OpenAlex ID: W3111409109 / rank
Normal rank
Property / cites work: Our Data, Ourselves: Privacy Via Distributed Noise Generation / rank
Normal rank
Property / cites work: Theory of Cryptography / rank
Normal rank
Property / cites work: Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias / rank
Normal rank

Latest revision as of 15:19, 24 July 2024

Language: English
Label: A smart privacy-preserving learning method by fake gradients to protect users items in recommender systems
Description: scientific article

    Statements

    A smart privacy-preserving learning method by fake gradients to protect users items in recommender systems (English)
    16 February 2021
    Summary: In this paper, we study the problem of protecting privacy in recommender systems. We focus on protecting which items a user has rated and propose a novel privacy-preserving matrix factorization algorithm. In our algorithm, the user submits fake gradients alongside real ones so that the central server cannot distinguish which items the user actually selected. We keep the Kullback-Leibler divergence between the real and fake gradient distributions small, making the two hard to tell apart. Through theory and experiments, we show that our algorithm reduces to a time-delayed SGD, which can be proved to converge well, so that accuracy does not decline. Our algorithm achieves a good tradeoff between privacy and accuracy.
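    The fake-gradient idea in the summary can be sketched in code. The following is a minimal illustration, not the authors' implementation: all names, dimensions, and the particular way fake gradients are sampled (matching the empirical mean and standard deviation of the real item gradients, so the two distributions stay close) are assumptions for the sake of the example. The user updates their own factor vector locally and sends the server item gradients for both rated and unrated (fake) items.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical tiny problem: 5 users, 20 items, rank-4 factors
    n_users, n_items, k = 5, 20, 4
    U = 0.1 * rng.standard_normal((n_users, k))  # user factors (kept locally)
    V = 0.1 * rng.standard_normal((n_items, k))  # item factors (held by server)

    # Hypothetical ratings: user 0 rated items 2 and 7
    rated0 = {2: 4.0, 7: 5.0}

    def user_update(u, rated, U, V, lr=0.05, n_fake=3):
        """One local SGD step: compute real item gradients for rated items,
        then add fake gradients for unrated items, sampled to mimic the real
        gradients' statistics so the server cannot single out rated items."""
        grads = {}
        for i, r in rated.items():
            err = U[u] @ V[i] - r
            U[u] -= lr * err * V[i]   # local update of the user factor
            grads[i] = err * V[i]     # real item gradient, to be sent to server
        # Fake gradients drawn near the real gradients' empirical distribution
        real = np.stack(list(grads.values()))
        mu, sigma = real.mean(axis=0), real.std(axis=0) + 1e-8
        unrated = [i for i in range(V.shape[0]) if i not in rated]
        for i in rng.choice(unrated, size=n_fake, replace=False):
            grads[int(i)] = rng.normal(mu, sigma)  # mimics real gradient stats
        return grads

    grads = user_update(0, rated0, U, V)
    # Server applies every received item gradient, real and fake alike
    for i, g in grads.items():
        V[i] -= 0.05 * g
    ```

    From the server's viewpoint the update touches five items, only two of which were actually rated; applying the fake gradients perturbs some unrated item factors, which is the accuracy cost the paper's time-delayed SGD analysis is meant to bound.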