From 070d163457ee24d65c22c364e1339b071a032521 Mon Sep 17 00:00:00 2001
From: Fahad Shamshad <39552850+fahadshamshad@users.noreply.github.com>
Date: Wed, 19 Jul 2023 08:29:25 +0400
Subject: [PATCH] Update README.md

---
 README.md | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/README.md b/README.md
index fd5d533..f657ae2 100644
--- a/README.md
+++ b/README.md
@@ -71,6 +71,36 @@
## Instructions for Code Usage

### Setup

- **Get the code**
```shell
git clone https://github.com/fahadshamshad/Clip2Protect.git
```

- **Build the environment**
```shell
cd clip2protect
# use Anaconda to build the environment
conda create -n clip2protect python=3.8
conda activate clip2protect
# install the required packages
pip install -r requirements.txt
```

The code relies on the [Rosinality](https://github.com/rosinality/stylegan2-pytorch/) PyTorch implementation of StyleGAN2.

You can manually download the pre-trained StyleGAN2 weights from [here](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing). Place the weights in the `pretrained_models` folder; a command-line alternative is sketched below.

Acquire the latent codes of the face images you want to protect using the encoder4editing (e4e) method available [here](https://github.com/omertov/encoder4editing); a short sketch of decoding these latents is also shown below.

The core functionality of the application is in `main.py`. The generator fine-tuning and adversarial optimization stages are encapsulated in `pivot_tuning.py` and `adversarial_optimization.py`, respectively; a high-level sketch of how the two stages fit together is shown below.
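If it is more convenient, the same weights file can be fetched from the command line with the `gdown` utility. This is a convenience sketch rather than part of the repo: `gdown` must be installed separately, and the output filename below is an assumption, so keep whatever name the code expects.

```shell
# convenience sketch (requires `pip install gdown`); the output filename is an
# assumption, keep whatever name the repo's code expects
mkdir -p pretrained_models
gdown "https://drive.google.com/uc?id=1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT" \
      -O pretrained_models/stylegan2-ffhq-config-f.pt
```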
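To sanity-check the downloaded weights and the e4e latents together, the snippet below decodes one latent code with the Rosinality generator. It is a minimal sketch, not the repo's own loading code: it assumes `model.py` from rosinality/stylegan2-pytorch is on the Python path, and the checkpoint and latent filenames are placeholders.

```python
# minimal sanity-check sketch; filenames and config values are assumptions
import torch
from model import Generator  # model.py from rosinality/stylegan2-pytorch

device = "cuda" if torch.cuda.is_available() else "cpu"

# FFHQ config: 1024px output, 512-dim style vectors, 8-layer mapping network
generator = Generator(1024, 512, 8).to(device).eval()
ckpt = torch.load("pretrained_models/stylegan2-ffhq-config-f.pt", map_location=device)
generator.load_state_dict(ckpt["g_ema"])  # Rosinality checkpoints store the EMA generator

# e4e produces W+ latents of shape [1, 18, 512] for 1024px models
latent = torch.load("latents/face.pt", map_location=device)

with torch.no_grad():
    image, _ = generator([latent], input_is_latent=True, randomize_noise=False)
image = (image.clamp(-1, 1) + 1) / 2  # map from [-1, 1] to [0, 1] before saving
```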
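At a high level, the two stages plug together as sketched below. This is an illustrative outline under assumed names, loss terms, and hyperparameters; the authoritative implementations are `pivot_tuning.py` and `adversarial_optimization.py`, and `clip_loss`/`id_loss` here are stand-ins for the actual makeup-guidance and face-recognition objectives.

```python
import torch
import torch.nn.functional as F

def pivot_tuning(generator, latent, target_image, steps=300, lr=3e-4):
    """Stage 1 (sketch): fine-tune generator weights so G(latent)
    reproduces the face image to be protected."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        image, _ = generator([latent], input_is_latent=True, randomize_noise=False)
        loss = F.mse_loss(image, target_image)  # reconstruction term (illustrative)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator

def adversarial_optimization(generator, latent, clip_loss, id_loss,
                             steps=50, lr=1e-2, id_weight=10.0):
    """Stage 2 (sketch): search the latent space under a makeup text prompt
    (CLIP guidance) while pushing the image away from the identity seen by
    the face-recognition model."""
    latent = latent.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        image, _ = generator([latent], input_is_latent=True, randomize_noise=False)
        loss = clip_loss(image) + id_weight * id_loss(image)  # weights are assumptions
        opt.zero_grad()
        loss.backward()
        opt.step()
    return latent
```

Running the fine-tuning stage first anchors the generator to the original face, so the subsequent latent search can hide the identity from face-recognition models while keeping the protected image visually close to the original.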
For the pretrained face recognition models and dataset instructions, including the target images, please refer to the AMT-GAN page [here](https://github.com/CGCL-codes/AMT-GAN).

## Citation

If you're using CLIP2Protect in your research or applications, please cite using this BibTeX: