FCRASH (coming soon)

Protecting Facial Privacy Against AIGC Models via Machine Unlearning

Table of contents
  1. Environment setup
  2. Dataset preparation
  3. How to run
  4. Limitations
  5. Contacts

Official PyTorch implementation of "FCRASH: Protecting Facial Privacy Against AIGC Models via Machine Unlearning"

(Teaser figure)

Abstract: The rapid rise of text-to-image (T2I) generative models, such as Stable Diffusion, has raised significant concerns over the misuse of personal images, particularly through unauthorized personalization. We introduce FCRASH, a novel defense method designed to protect facial privacy against misuse in AI-generated content (AIGC). FCRASH injects imperceptible, face-aware perturbations into user photos to prevent unauthorized face synthesis by diffusion-based generative models such as DreamBooth. By targeting the facial regions critical for identity recognition, FCRASH effectively disrupts identity learning without sacrificing image quality.

TL;DR: Like a security booth, FCRASH safeguards your privacy against malicious threats by preventing DreamBooth from synthesizing photo-realistic images of the target individual.

Environment setup

Our code relies on the diffusers library from Hugging Face 🤗 and the implementation of latent caching from ShivamShrirao's diffusers fork.

Install dependencies:

cd FCRASH
conda create -n fcrash python=3.9  
conda activate fcrash  
pip install -r requirements.txt  

Pretrained checkpoints for the different Stable Diffusion versions can be downloaded from the links in the table below:

| Version | Link |
|---------|------|
| 2.1 | stable-diffusion-2-1-base |
| 1.5 | stable-diffusion-v1-5 |
| 1.4 | stable-diffusion-v1-4 |

Please put them in ./stable-diffusion/. We use Stable Diffusion version 2.1 in all of our experiments.
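The checkpoints can also be fetched programmatically with the `huggingface_hub` library. The sketch below is illustrative, not part of the repository: the Hugging Face repository ids are assumed to be the standard ones for each version, and the local directory layout follows the `./stable-diffusion/` convention above.

```python
# Map the Stable Diffusion versions above to Hugging Face repository ids
# (assumed standard ids; verify against the table before use).
SD_REPOS = {
    "2.1": "stabilityai/stable-diffusion-2-1-base",
    "1.5": "runwayml/stable-diffusion-v1-5",
    "1.4": "CompVis/stable-diffusion-v1-4",
}

def checkpoint_dir(version: str, root: str = "./stable-diffusion") -> str:
    """Local directory matching the ./stable-diffusion/ convention above."""
    return f"{root}/{SD_REPOS[version].split('/')[-1]}"

def fetch_checkpoint(version: str) -> str:
    """Download one pretrained checkpoint into the expected local layout.

    The import is deferred so the sketch stays importable even when
    huggingface_hub is not installed.
    """
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id=SD_REPOS[version], local_dir=checkpoint_dir(version)
    )

# Example: fetch_checkpoint("2.1") downloads stable-diffusion-2-1-base.
```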

GPU allocation: All experiments were performed on a single NVIDIA H800 GPU with 80 GB of memory.

Dataset preparation

For simple and convenient testing, we provide a small dataset of several identities in './data/'.

For each identity, there are 12 images evenly divided into 3 subsets of 4: a reference clean set (set A), a target protecting set (set B), and an extra clean set for uncontrolled-setting experiments (set C).
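The 12-images-per-identity split above can be sanity-checked with a small helper. This is a sketch only: the subset folder names (`set_A`, `set_B`, `set_C`) are an assumption for illustration and should be matched to the actual folder names under `./data/`.

```python
from pathlib import Path

# Assumed per-identity layout: three subsets of four images each
# (match these names to the real ./data/ folder structure).
SUBSETS = ("set_A", "set_B", "set_C")
IMAGES_PER_SUBSET = 4  # 12 images per identity, split evenly

def check_identity(identity_dir: str) -> bool:
    """Return True if an identity folder has 3 subsets of 4 images each."""
    root = Path(identity_dir)
    for subset in SUBSETS:
        images = [p for p in (root / subset).glob("*")
                  if p.suffix.lower() in {".png", ".jpg", ".jpeg"}]
        if len(images) != IMAGES_PER_SUBSET:
            return False
    return True
```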

How to run

To defend against Stable Diffusion version 2.1 with the Anti-DreamBooth baseline, you can run

bash scripts/aspl.sh

To defend against Stable Diffusion version 2.1 with the error-minimizing diffusion attack algorithm (unaccelerated and accelerated versions), you can run

bash scripts/aspl_min.sh
bash scripts/unl_acc.sh

To defend against Stable Diffusion version 2.1 with the unlearnable (error-minimizing) VAE attack algorithm, you can run

bash scripts/vae_attack.sh

With the face-aware mechanism:

bash scripts/face_aware.sh 
bash scripts/face_aware_vae_attack.sh

The same running procedure applies to the other supported algorithms:

| Algorithm | Bash script |
|-----------|-------------|
| No defense | scripts/attack_with_ensemble_aspl.sh |
| FSMG | scripts/attack_with_fsmg.sh |
| T-FSMG | scripts/attack_with_targeted_fsmg.sh |
| E-FSMG | scripts/attack_with_ensemble_fsmg.sh |

If you want to train a DreamBooth model from your own data, whether it is clean or perturbed, you may run the following script:

bash scripts/train_dreambooth_alone.sh

Inference: generate examples with multiple prompts

python infer.py --model_path <path to DREAMBOOTH model>/checkpoint-1000 --output_dir ./test-infer/
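The repository's infer.py is not reproduced here, but a minimal multi-prompt generation loop using the diffusers `StableDiffusionPipeline` API might look like the following sketch. The prompts and paths are illustrative; "sks" is the rare-token identifier DreamBooth binds to the target identity.

```python
from pathlib import Path

# Illustrative prompts; "sks" is the DreamBooth identifier token.
PROMPTS = [
    "a photo of sks person",
    "a dslr portrait of sks person",
    "a photo of sks person in front of a landmark",
]

def generate_all(model_path: str, output_dir: str) -> list:
    """Run every prompt through a DreamBooth checkpoint.

    Imports are deferred so the sketch stays importable without a GPU
    environment or diffusers installed.
    """
    from diffusers import StableDiffusionPipeline
    import torch

    pipe = StableDiffusionPipeline.from_pretrained(
        model_path, torch_dtype=torch.float16
    ).to("cuda")
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, prompt in enumerate(PROMPTS):
        image = pipe(prompt).images[0]  # one image per prompt
        path = out / f"prompt{i}.png"
        image.save(path)
        paths.append(path)
    return paths
```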

Limitations

The generated pictures do not successfully learn the concept of the "sks" person, which makes it difficult to determine whether the attack is truly successful.

Contacts

Email: sylviachung.22@intl.zju.edu.cn.
