This README gives a basic overview of the scope of this project, sample results, and the steps needed to replicate the work, either from scratch or using pre-trained models. Reproducing the results from scratch is a very involved process and includes training all of the models. In either case, the data processing needs to be done. Details are described below.
The full research paper is available at: https://arxiv.org/abs/2203.00108
TL;DR
DeepFakes are synthetic videos generated by swapping the face in an original image with the face of somebody else. In this paper, we describe our work to develop general, deep-learning-based models to classify DeepFake content. We propose a novel framework for using Generative Adversarial Network (GAN)-based models, which we call MRI-GAN, that utilizes perceptual differences in images to detect synthesized videos. We test our MRI-GAN approach and a plain-frames-based model using the DeepFake Detection Challenge Dataset. Our plain-frames-based model achieves 91% test accuracy, and a model which uses our MRI-GAN framework with Structural Similarity Index Measurement (SSIM) for the perceptual differences achieves 74% test accuracy. The results of MRI-GAN are preliminary and may be improved further by modifying the choice of loss function, tuning hyper-parameters, or using a more advanced perceptual similarity metric.
MRI-GAN generates the MRI of an input image. The MRI of a DeepFake image contains artifacts that highlight the regions of synthesized pixels, while the MRI of a non-DeepFake image is just a black image.
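As an illustration of this idea, the sketch below (a toy example of ours, not this repo's actual pipeline) uses SSIM from scikit-image to turn the perceptual difference between an original frame and a candidate frame into an MRI-like image; identical inputs produce an all-black result:

```python
# Toy illustration of the SSIM-based "MRI" idea (not this repo's actual code).
# Requires scikit-image >= 0.19 for the channel_axis argument.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_mri(original: np.ndarray, candidate: np.ndarray) -> np.ndarray:
    """Return an MRI-like uint8 image: bright where the frames differ perceptually."""
    # full=True returns the per-pixel SSIM map in addition to the mean score.
    _, ssim_map = structural_similarity(
        original, candidate, channel_axis=-1, data_range=255, full=True)
    diff = 1.0 - ssim_map.mean(axis=-1)   # per-pixel dissimilarity, 0 if identical
    if diff.max() == 0:                   # identical frames -> all-black MRI
        return np.zeros(diff.shape, dtype=np.uint8)
    return (255 * diff / diff.max()).astype(np.uint8)
```

Here `original` and `candidate` are uint8 RGB arrays of the same shape; the scaling and function name are our own choices for illustration.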
To reproduce the results from scratch, follow the steps below. Note: this is a very involved process.

- Set up the development environment. We used conda for our Python distribution and related libraries on Ubuntu 20.04. Create a new environment using the command below and activate it. We have provided our `environment.yml` in the codebase.

  `conda env create -f environment.yml`
- Download the datasets and extract them:
  - DFDC dataset from https://ai.facebook.com/datasets/dfdc/
  - Celeb-DF-v2 dataset from https://github.com/yuezunli/celeb-deepfakeforensics
  - FFHQ dataset from https://github.com/NVlabs/ffhq-dataset
  - FDF dataset from https://github.com/hukkelas/FDF
- Configure the paths and other params.

  Note: These paths could have been simplified by using relative paths, but due to the huge size of the datasets and the limitation of available storage space, we set an absolute path for each entity, for the flexibility to choose where to save individual outcomes. The downside of this choice is that all paths have to be set individually, which can be tedious.

  `config.yml` is the key configuration file that controls the whole flow. Update the dataset paths as needed. You need to update all paths that start with `/home/directory`; the other filenames do not need to be changed. (A quick sanity check for these paths is sketched after this section.)

  - DFDC dataset configuration
    - Update `['data_path']['dfdc']['train']`: path of the training set
    - Update `['data_path']['dfdc']['valid']`: path of the validation set
    - Update `['data_path']['dfdc']['test']`: path of the test set
    - Update all key-value pairs under `['features']['dfdc']['landmarks_paths']` to point to where you want to save the generated landmarks for DFDC
    - Update all key-value pairs under `['features']['dfdc']['crop_faces']` to point to where you want to save the extracted face images for DFDC
    - Update `['features']['dfdc']['mri_path']`: path where all MRIs will be saved. These MRIs are used for MRI-GAN training
    - Update `['features']['dfdc']['train_mrip2p_faces']`: after MRI-GAN is trained, it is used to predict MRIs of DFDC, and all predicted MRIs are saved here. Update the paths for `valid_mrip2p_faces` and `test_mrip2p_faces` in the same way
  - Celeb-DF-v2 dataset configuration
    - `['data_path']['celeb_df_v2']['real']`: path of the real samples (Celeb-real)
    - `['data_path']['celeb_df_v2']['fake']`: path of the fake samples (Celeb-synthesis)
    - `['features']['celeb_df_v2']['landmarks_path']['train']`: path where landmarks will be saved
    - `['features']['celeb_df_v2']['crop_faces']['train']`: path where extracted faces will be saved
  - FDF dataset configuration
    - `['data_path']['fdf']['data_path']`: path of the samples (cc-by-nc-sa-2/128)
    - `['data_path']['fdf']['landmarks_path']['train']`: path where landmarks will be saved
    - `['features']['fdf']['json_filename']`: path to a .json file where landmarks will be saved
    - `['features']['fdf']['crops_path']`: path where extracted faces will be saved
  - FFHQ dataset configuration
    - `['data_path']['ffhq']['data_path']`: path of the samples (images1024x1024)
    - `['features']['ffhq']['json_filename']`: path to a .json file where landmarks will be saved
    - `['features']['ffhq']['crops_path']`: path where extracted faces will be saved
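Since every path must be set by hand, a quick sanity check can save a failed run later. The snippet below is our own helper (not part of the codebase); it loads `config.yml` with PyYAML and reports every configured absolute path that does not exist yet:

```python
# Hypothetical helper (not in this repo): walk config.yml and report
# configured absolute paths that do not exist on disk yet.
import os
import yaml  # PyYAML

def missing_paths(node, prefix=""):
    if isinstance(node, dict):
        for key, value in node.items():
            yield from missing_paths(value, f"{prefix}['{key}']")
    elif isinstance(node, str) and node.startswith("/") and not os.path.exists(node):
        yield prefix, node

with open("config.yml") as f:
    config = yaml.safe_load(f)

for key_path, value in missing_paths(config):
    print(f"missing: {key_path} -> {value}")
```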
- Data pre-processing. Enter the following commands in sequence:
  - `python data_preprocess.py --gen_aug_plan` (select random video files in the DFDC training set and make a plan to apply various random combinations of augmentations and distractions. This command generates the plan and saves it in a .pkl file)
  - `python data_preprocess.py --apply_aug_to_all` (execute the plan generated in the previous step. This command reads the .pkl file and executes the plan one by one for each video file selected from the DFDC training set)
  - `python data_preprocess.py --extract_landmarks` (use a pre-trained MTCNN to extract the landmarks of each face detected in the video frames; every 10th frame of each video is used by default. Landmarks are extracted for each video in the train, validation, and test sets, and are saved in a separate .json file per video. A conceptual sketch of this step follows this list)
  - `python data_preprocess.py --crop_faces` (save the faces from the landmarks .json files for each video)
  - `python data_preprocess.py --gen_mri_dataset` (generate the MRI-DF dataset, i.e. the images of perceptual dissimilarity for 50% of the DFDC train set, as mentioned in the paper)
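Conceptually, the landmark-extraction step looks like the sketch below, using the `facenet_pytorch` implementation of MTCNN. The real logic lives in `data_preprocess.py`; the frame-sampling loop and the .json layout here are illustrative assumptions, not the repo's actual schema:

```python
# Conceptual sketch of the landmark-extraction step (the real implementation is
# in data_preprocess.py; the JSON layout below is illustrative only).
import json
import cv2
from facenet_pytorch import MTCNN

mtcnn = MTCNN(keep_all=True)  # keep_all=True detects every face in a frame

def extract_landmarks(video_path, out_json, every_nth=10):
    cap = cv2.VideoCapture(video_path)
    results, idx = {}, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:  # every 10th frame by default
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            boxes, probs, landmarks = mtcnn.detect(rgb, landmarks=True)
            if boxes is not None:
                results[idx] = {"boxes": boxes.tolist(),
                                "landmarks": landmarks.tolist()}
        idx += 1
    cap.release()
    with open(out_json, "w") as f:
        json.dump(results, f)
```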
- MRI-GAN training
  - Configure `config.yml`. The parameters under the `['MRI_GAN']['model_params']` section can be tweaked: `tau` can be adjusted for different results (an illustrative sketch of how such a weight can blend loss terms follows this list), and `batch_size` can be changed depending upon the GPU memory available on your machine.
  - `python train_MRI_GAN.py --train_from_scratch` (train the MRI-GAN model. Check the help for the `--train_resume` option to resume training if it was stopped earlier. Logs will be generated and saved under the `logs/<date_time_stamp>` directory, and model weights will also be saved in the same directory)
  - `cp logs/<date_time_stamp>/MRI_GAN/checkpoint_best_G.chkpt assets/weights/MRI_GAN_weights.chkpt` (copy the trained MRI-GAN weights)
  - `python data_preprocess.py --gen_dfdc_mri` (use the trained MRI-GAN to predict the MRIs of the DFDC dataset)
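The exact MRI-GAN objective is defined in the paper and in `train_MRI_GAN.py`. Purely as an illustration of how a blending weight like `tau` can trade a pixel-wise term off against a perceptual (SSIM-based) term, consider the following sketch; the loss shape and the third-party `pytorch_msssim` package are our assumptions, not the repo's definition:

```python
# Illustration only: one way a weight like tau can blend a pixel-wise loss with
# a perceptual SSIM-based loss. See the paper / train_MRI_GAN.py for the actual
# MRI-GAN objective; pytorch_msssim is a third-party package.
import torch.nn.functional as F
from pytorch_msssim import ssim

def blended_generator_loss(predicted_mri, target_mri, tau=0.3):
    # predicted_mri, target_mri: float tensors of shape (N, C, H, W) in [0, 1]
    pixel_term = F.mse_loss(predicted_mri, target_mri)
    perceptual_term = 1.0 - ssim(predicted_mri, target_mri, data_range=1.0)
    return tau * pixel_term + (1.0 - tau) * perceptual_term
```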
- Train and test the DeepFake detection model
  - `python data_preprocess.py --gen_deepfake_metadata` (generate the metadata .csv files used by the PyTorch DataLoader classes; a rough sketch of such a dataset class follows this list)
  - Using the plain-frames method:
    - Configure `config.yml`. The parameters under the `['deep_fake']['model_params']` section can be tweaked. For the plain-frames method set the following params: `'train_transform': 'complex'` and `'dataset': 'plain'`. `batch_size` can be changed depending upon the GPU memory available on your machine.
    - `python deep_fake_detect.py --train_from_scratch` (start training from scratch. Also check the `--train_resume` command-line option if you want to resume previously started training. After all epochs are done, testing of the model will start)
    - `python deep_fake_detect.py --test_saved_model <path>` (test a model which was saved on disk; e.g. if the training was killed before all epochs were completed, this option can be used to test the model saved during the training process)
  - Using the MRI-based method:
    - Configure `config.yml`. The parameters under the `['deep_fake']['model_params']` section can be tweaked. For the MRI-based method set the following params: `'train_transform': 'simple'` and `'dataset': 'mri'`. `batch_size` can be changed depending upon the GPU memory available on your machine.
    - `python deep_fake_detect.py --train_from_scratch` (start training from scratch. Also check the `--train_resume` command-line option if you want to resume previously started training. After all epochs are done, testing of the model will start)
    - `python deep_fake_detect.py --test_saved_model <path>` (test a model which was saved on disk; e.g. if the training was killed before all epochs were completed, this option can be used to test the model saved during the training process)
- Check the `--help` of all the scripts mentioned above to see more utility options, e.g. to resume training of a model if training was stopped in between.
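For reference, the generated metadata .csv files are consumed by PyTorch datasets roughly like the sketch below. The column names and label encoding are assumptions for illustration; the real dataset classes live in this repo:

```python
# Rough sketch of a Dataset over the generated metadata .csv. The column names
# "filename"/"label" and the label encoding are assumed here for illustration.
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class DeepFakeFrames(Dataset):
    def __init__(self, csv_path, transform=None):
        self.df = pd.read_csv(csv_path)
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, i):
        row = self.df.iloc[i]
        image = Image.open(row["filename"]).convert("RGB")
        if self.transform:
            image = self.transform(image)
        label = torch.tensor(float(row["label"]))  # 1 = fake, 0 = real (assumed)
        return image, label

# loader = DataLoader(DeepFakeFrames("train_metadata.csv"), batch_size=32, shuffle=True)
```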
Download all pre-trained model weights to reproduce the results:

- MRI-GAN (model with tau = 0.3 and the Generator with the lowest loss): https://drive.google.com/uc?id=1qEfI96SYOWCumzPdQlcZJZvtAW_OXUcH
- DeepFake detection models
  - Plain-frames-based: https://drive.google.com/uc?id=1_Pxv6ptxqXKtDJNkodkDmMTD_KRo08za
  - MRI-based: https://drive.google.com/uc?id=1xKzehNuq1B1th-_-U6OG9v2Q2Odws6VG
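If you prefer fetching these from a script, the third-party `gdown` package (`pip install gdown`, not a stated dependency of this repo) can download Google Drive links. In the sketch below, only the MRI-GAN output path is prescribed by this README (it matches the training step above); the other two filenames are our own placeholders:

```python
# Convenience sketch using the third-party gdown package (pip install gdown).
# Only the MRI-GAN output path comes from this README; the other two filenames
# are placeholders, so save them wherever you like.
import gdown

gdown.download("https://drive.google.com/uc?id=1qEfI96SYOWCumzPdQlcZJZvtAW_OXUcH",
               "assets/weights/MRI_GAN_weights.chkpt")  # MRI-GAN, tau = 0.3
gdown.download("https://drive.google.com/uc?id=1_Pxv6ptxqXKtDJNkodkDmMTD_KRo08za",
               "plain_frames_model.chkpt")              # plain-frames detector
gdown.download("https://drive.google.com/uc?id=1xKzehNuq1B1th-_-U6OG9v2Q2Odws6VG",
               "mri_based_model.chkpt")                 # MRI-based detector
```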
Use the model to test a given video file:

- Download all pre-trained model weights.
- Run the command-line app: `python detect_deepfake_app.py --input_videofile <path to video file> --method <detection method>`. The detection method can be `plain_frames` or `MRI`.
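For example, to check a clip named `sample.mp4` (a placeholder filename) with the plain-frames detector: `python detect_deepfake_app.py --input_videofile sample.mp4 --method plain_frames`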
Cite our work as:

Pratikkumar Prajapati and Chris Pollett, "MRI-GAN: A Generalized Approach to Detect DeepFakes using Perceptual Image Assessment," arXiv preprint arXiv:2203.00108 (2022).

or

```
@misc{2203.00108,
  Author = {Pratikkumar Prajapati and Chris Pollett},
  Title = {MRI-GAN: A Generalized Approach to Detect DeepFakes using Perceptual Image Assessment},
  Year = {2022},
  Eprint = {arXiv:2203.00108},
}
```