Awesome Machine Unlearning (A Survey of Machine Unlearning)
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
[NeurIPS D&B '25] The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods with easy feature extensibility.
Privacy Testing for Deep Learning
Python package for measuring memorization in LLMs.
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
Reveals the vulnerabilities of SplitNN (split learning).
What does gpt-oss tell us about OpenAI's training data?
A repository collecting literature on copyright protection in deep learning.
A unified evaluation suite for membership inference and machine text detection.
Source code for EMNLP 2024 Findings paper: Code Membership Inference for Detecting Unauthorized Data Use in Code Pre-trained Language Models.
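The two entries above both evaluate membership inference: deciding whether a given example was part of a model's training set. As a rough illustration of the core idea, here is a minimal sketch of the classic loss-threshold baseline, assuming a generic PyTorch classifier that returns logits; the names `model`, `per_sample_loss`, and `threshold` are illustrative, not taken from any repository listed here.

```python
# A minimal sketch of the loss-threshold membership inference baseline:
# examples the model has memorized tend to have unusually low loss.
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_loss(model, x, y):
    # Cross-entropy loss of each example, without averaging over the batch.
    return F.cross_entropy(model(x), y, reduction="none")

@torch.no_grad()
def predict_membership(model, x, y, threshold):
    # Low loss suggests the model trained on the example: predict "member".
    return per_sample_loss(model, x, y) < threshold
```

In practice the threshold is calibrated on data known to be outside the training set, e.g. `threshold = per_sample_loss(model, x_out, y_out).quantile(0.05)`, and the attack is scored by its member/non-member ROC curve.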
Model extraction attack: exploratory implementation and analysis for learning purposes.
Experiments at the intersection of ML security and privacy: adversarial attacks and defenses (FGSM/PGD, adversarial training), differential privacy (DP-SGD, (ε, δ) guarantees), federated learning privacy (secure aggregation), and auditing (membership inference, model inversion). PyTorch notebooks plus evaluation scripts.
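To make the attack side of the entry above concrete, here is a minimal FGSM sketch in PyTorch, assuming inputs normalized to [0, 1] and a classifier returning logits; `fgsm_attack` and `epsilon` are illustrative names, not drawn from that repository.

```python
# A minimal FGSM sketch: perturb the input by epsilon in the direction of
# the sign of the loss gradient with respect to the input.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # x: batch of inputs in [0, 1]; y: integer class labels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid input range
    return x_adv.detach()
```

PGD is the iterative variant of the same step, re-projecting onto the epsilon-ball after each update; adversarial training simply feeds these perturbed batches back into the training loss.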