Transfer Learning in Property Inference Attacks

Collaborative learning enables participants to train a joint machine learning model without explicitly revealing their private training data, and is thus a natural candidate for tasks requiring sensitive data. However, it has been shown that the model updates observed during training on a source task leak unintended information, allowing an adversary to infer target properties of the clients' private data. We replicate such attacks for a wide range of target properties on a dataset of face images using two different models, and explore how the privacy loss varies with the source task. Finally, we use transfer learning to compute affinity measures between source and target tasks, and show that they are good predictors of the privacy loss, particularly for completely unrelated source and target tasks.
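As a rough illustration (not the project's actual code), the sketch below shows the shape of such a passive property inference attack: the adversary records model updates observed during collaborative training, labels them by whether the victim's batch exhibited the target property, and trains a binary meta-classifier on the flattened updates. The simulated updates and all names here are hypothetical placeholders for what an adversary would collect with auxiliary data.

```python
# Hedged sketch of a passive property inference attack on observed
# model updates (in the spirit of Melis et al.'s unintended feature
# leakage attack). Everything here is illustrative, not project code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_update(has_property: bool, dim: int = 512) -> np.ndarray:
    """Stand-in for a flattened gradient update the adversary recorded.

    In a real attack these come from training rounds; here we simulate
    a faint trace of the target property in a few coordinates.
    """
    update = rng.normal(size=dim)
    if has_property:
        update[:8] += 0.5  # hypothetical property signal
    return update

# Adversary's auxiliary dataset: updates labeled with the property.
labels = rng.integers(0, 2, size=2000).astype(bool)
updates = np.stack([simulate_update(label) for label in labels])

X_train, X_test, y_train, y_test = train_test_split(
    updates, labels, test_size=0.25, random_state=0
)

# Meta-classifier: predicts the target property from a single update.
attack = RandomForestClassifier(n_estimators=200, random_state=0)
attack.fit(X_train, y_train)
print(f"attack accuracy: {attack.score(X_test, y_test):.2f}")
```

In the actual setting, the labeled updates are produced by running the collaborative training protocol on auxiliary data with and without the target property; the meta-classifier is then applied to the victim's real updates.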

Semester project with Theresa Stadler and Carmela Troncoso at EPFL’s Security and Privacy Engineering Lab (SPRING)

Christian Knabenhans
Ph.D. student in security and privacy

Doctoral student at EPFL. Applied cryptography, privacy-enhancing technologies, usable security.