It all started in September 2023 with the launch of the 2023 NeurIPS Machine Unlearning Competition and a dream to compete as part of my day job. It all ended shortly after with a long-winded ‘no’ from legal.

It all started back up again with support to continue looking into Machine Unlearning, with the hope that it could become a fruitful research direction for our division.

In our preparation for the competition, we (myself, Collin Abidi, and Cole Frank, supervised by Shannon Gallagher) had been analyzing the competition’s evaluations and the unlearning literature, and had come to the conclusion that Machine Unlearning evaluations needed work. Long story short, we published an Extended Abstract at the 7th Deep Learning Security and Privacy Workshop, co-located with the 45th IEEE Symposium on Security and Privacy. You can read it here.

Based on evaluations that had been demonstrated in the literature but were not yet common practice, we recommended the use of:

  • Stronger adversarial attacks and worst-case metrics for assessing threats from sophisticated attackers
  • Update-leak attacks, which exploit differences between a model before and after unlearning, for analyzing a crucial threat model
  • Iterative unlearning evaluations for studying how performance changes over successive unlearning requests (a sketch follows this list)
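
To give a flavor of the third recommendation, here is a minimal sketch of what an iterative unlearning evaluation harness could look like. Everything here is hypothetical scaffolding, not code from our paper or tooling: `unlearn`, `utility`, and `attack_advantage` stand in for whatever unlearning method and metrics are under study.

```python
from typing import Any, Callable, Iterable

def iterative_unlearning_eval(
    model: Any,
    forget_batches: Iterable[list],
    unlearn: Callable[[Any, list], Any],            # hypothetical unlearning method
    utility: Callable[[Any], float],                # e.g. held-out accuracy
    attack_advantage: Callable[[Any, Any], float],  # per-example attack score
) -> list[dict]:
    """Apply an unlearning method one forget batch at a time,
    recording utility and a worst-case privacy metric each round."""
    history = []
    for round_idx, batch in enumerate(forget_batches):
        model = unlearn(model, batch)  # method under evaluation
        history.append({
            "round": round_idx,
            "utility": utility(model),
            # Worst case over the batch rather than the mean: a single
            # strongly identifiable example already counts as a leak.
            "worst_case_advantage": max(
                attack_advantage(model, x) for x in batch
            ),
        })
    return history
```

Tracking `utility` and `worst_case_advantage` across rounds is what lets you see whether repeated unlearning requests degrade the model or re-expose supposedly forgotten data, rather than evaluating a single unlearning request in isolation.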

We began implementing a suite of tools to help promote and standardize these evaluations (but the effort was cut short by funding constraints).