Research
Publications
2025
Should We Forget About Certified Unlearning? Evaluating the Pitfalls of Noisy Methods
John Abascal, Eleni Triantafillou, Matthew Jagielski, Nicole Mitchell, Peter Kairouz
Black-Box Privacy Attacks on Shared Representations in Multitask Learning
John Abascal, Nicolás Berrios, Alina Oprea, Jonathan Ullman, Adam Smith, Matthew Jagielski
arXiv preprint arXiv:2506.16460, 2025
2024
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation
Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea
arXiv preprint arXiv:2405.20485, 2024
TMI! Finetuned Models Leak Private Information from their Pretraining Data
John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman
Proceedings on Privacy Enhancing Technologies (PETS) 2024
2023
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman
IEEE Symposium on Security and Privacy (S&P) 2023
Abstract
Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model. We design an efficient property inference attack, SNAP, which achieves higher attack success while requiring less poisoning than prior work. For example, on the Census dataset, SNAP achieves a 34% higher success rate than prior work while being 56.5× faster.
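For readers unfamiliar with the threat model, the sketch below illustrates a generic shadow-model property inference attack. It is not SNAP itself (which additionally uses data poisoning to amplify the property's signal and avoids training large numbers of shadow models); the synthetic data, the choice of "global property," and the white-box parameter features are all illustrative assumptions.

```python
# Toy shadow-model property inference attack (illustrative only, not SNAP).
# An adversary trains "shadow" models on datasets with and without a global
# property, then learns a meta-classifier that infers the property for an
# unseen target model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(frac_property, n=2000):
    """Synthetic binary-classification data. The 'global property' is the
    fraction of records with feature[0] == 1 (a hypothetical stand-in for,
    e.g., a demographic attribute in tabular Census-style data)."""
    X = rng.normal(size=(n, 5))
    X[:, 0] = (rng.random(n) < frac_property).astype(float)
    y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def train_model(frac_property):
    X, y = make_dataset(frac_property)
    return LogisticRegression(max_iter=1000).fit(X, y)

def model_features(model):
    # White-box summary of the model: flattened parameters. A black-box
    # attack would instead use query responses on chosen inputs.
    return np.concatenate([model.coef_.ravel(), model.intercept_])

# Shadow models: half trained on data WITH the property (60% of records
# have feature[0] == 1), half WITHOUT it (10%).
shadow_X, shadow_y = [], []
for frac, label in [(0.6, 1), (0.1, 0)]:
    for _ in range(50):
        shadow_X.append(model_features(train_model(frac)))
        shadow_y.append(label)

meta = LogisticRegression(max_iter=1000).fit(np.array(shadow_X), shadow_y)

# Attack an unseen target model whose training data does have the property.
target = train_model(0.6)
print("property inferred as present:", meta.predict([model_features(target)])[0] == 1)
```

The meta-classifier step is the expensive part of this baseline; SNAP's efficiency gains cited above come from replacing much of this shadow-model machinery, so the sketch should be read only as background for the problem setting.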
For the most up-to-date list of publications, please see my Google Scholar profile.