This is the official implementation of RAPID: Retrieval Augmented Training of Differentially Private Diffusion Models.
The code is based on a public implementation of Latent Diffusion Models, available here, and a public implementation of Differentially Private Latent Diffusion Models, available here.
To set up the environment:

```
conda env create -f environment.yaml
conda activate RAPID
```

To train the autoencoder:

```
python main.py --base <AE config file path> -t --gpus 1
```

To train the diffusion model:

```
python main.py --base <DM config file path> -t --gpus 1
```

To fine-tune:

```
python main.py --base <Finetune config file path> -t --gpus 0, --accelerator gpu
```

To train the feature extractor:

```
python train_feature_extractor.py --config <DM config file path> --ckpt <checkpoint path> --output <network output path> --epoch 50
```

For conditional sampling:

```
python conditional_sampling.py --config <DM config file path> --private_config <DM config file path> --ckpt <checkpoint path> \
    --private_ckpt <checkpoint path> --netpath <path to the feature extractor> --output <network output path>
```
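The `--base`/`--config` files follow the upstream Latent Diffusion convention of YAML configs whose entries name a dotted `target` class path plus `params` keyword arguments. As a rough illustration of how such an entry resolves (the helper name `instantiate_from_config` comes from the upstream LDM codebase; the example entry here is a hypothetical stand-in, not a real config from this repo):

```python
import importlib

def instantiate_from_config(config):
    """Build the object named by a config entry's dotted `target` path."""
    module_name, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**config.get("params", {}))

# Hypothetical stand-in for one entry of an AE/DM YAML config.
entry = {"target": "collections.OrderedDict", "params": {}}
obj = instantiate_from_config(entry)
print(type(obj).__name__)  # OrderedDict
```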
For unconditional sampling:

```
python unconditional_sampling.py --config <DM config file path> --private_config <DM config file path> --ckpt <checkpoint path> \
    --private_ckpt <checkpoint path> --netpath <path to the feature extractor> --output <network output path>
```

To compute FID:

```
python FID_test.py --sample_path <path to generated samples> --train_stats_path <path to generated statistics on the reference set>
```

To compute sample diversity:

```
python Diversity_test.py --sample_path <path to generated samples> --data_config <config file path>
```

For MNIST, to compute the downstream performance of a regular CNN:

```
python CNN_downstream.py --sample_path <path to generated samples> --epoch 10
```

We built and tested our project on top of Latent Diffusion Models and Differentially Private Latent Diffusion Models. Many thanks to the authors who make their work publicly accessible!
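For intuition about what the FID evaluation measures: FID fits a Gaussian to a feature set from the generated samples and another to a feature set from the reference data, then takes the Fréchet distance between the two. This is a generic, self-contained sketch of the metric only; the repo's `FID_test.py` additionally handles feature extraction and saved reference statistics, and its internals may differ:

```python
import numpy as np

def fid(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    diff = mu_a - mu_b
    # Tr((cov_a @ cov_b)^{1/2}) via eigenvalues of the product; the product
    # of two PSD matrices has nonnegative real eigenvalues, so clip noise.
    eigs = np.linalg.eigvals(cov_a @ cov_b).real
    tr_sqrt = np.sqrt(np.clip(eigs, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(512, 8))
print(fid(real_feats, real_feats))        # ~0 for identical feature sets
print(fid(real_feats, real_feats + 5.0))  # ~200 = squared norm of the mean shift
```

A pure mean shift leaves the covariance terms unchanged, so the distance reduces to the squared distance between the feature means.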