FaceRecognition with MTCNN using ArcFace

- Liveness detector capable of spotting fake faces and performing anti-face spoofing in face recognition systems
- Our FaceRecognition system first checks whether a face is real or fake
- If it is a fake face, it gives a warning
- Otherwise it proceeds to face recognition
```shell
git clone https://github.com/naseemap47/FaceRecognition-MTCNN-ArcFace.git
cd FaceRecognition-MTCNN-ArcFace
pip3 install -r requirements.txt
```
You can use:

```shell
streamlit run app.py
```
Args:

- `-i`, `--source`: RTSP link or webcam-id
- `-n`, `--name`: name of the person
- `-o`, `--save`: path to save dir
- `-c`, `--conf`: min prediction confidence (0 < conf < 1)
- `-x`, `--number`: number of images to collect
Example:

```shell
python3 take_imgs.py --source 0 --name JoneSnow --save data --conf 0.8 --number 100
```
📖 Note:
Repeat this process for every person we need to detect on CCTV, webcam or video.
The save dir contains one folder per person, and each folder contains the image data collected for that person.
Structure of Save Dir:

```
├── data_dir
│   ├── person_1
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
│   ├── person_2
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
│   ├── ...
```
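The expected layout can be checked with a short stdlib sketch before moving on to normalization (the helper name `count_images` is ours, not part of the repo):

```python
import os

def count_images(data_dir):
    """Return {person_name: number_of_images} for a dataset
    laid out as data_dir/person_name/*.jpg."""
    counts = {}
    for person in sorted(os.listdir(data_dir)):
        person_dir = os.path.join(data_dir, person)
        if os.path.isdir(person_dir):
            counts[person] = sum(
                1 for f in os.listdir(person_dir)
                if f.lower().endswith((".jpg", ".jpeg", ".png"))
            )
    return counts
```

Running it on the save dir gives a quick sanity check that every person has roughly the number of images requested with `--number`.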
It normalizes all images under the dataset dir and writes them to the save dir, mirroring the structure of the collected data dir.
Args:

- `-i`, `--dataset`: path to dataset/dir
- `-o`, `--save`: path to save dir
Example:

```shell
python3 norm_img.py --dataset data/ --save norm_data
```
Structure of Normalized Data Dir:

```
├── norm_dir
│   ├── person_1
│   │   ├── 1_norm.jpg
│   │   ├── 2_norm.jpg
│   │   ├── ...
│   ├── person_2
│   │   ├── 1_norm.jpg
│   │   ├── 2_norm.jpg
│   │   ├── ...
│   ├── ...
```
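The exact transform `norm_img.py` applies is not spelled out here; face pipelines commonly combine alignment/cropping with pixel normalization. A generic per-image pixel normalization (our own sketch, not the repo's code) looks like this in numpy:

```python
import numpy as np

def normalize_pixels(img):
    """Generic per-image normalization sketch: scale uint8 pixels to
    float32 in [0, 1], then standardize to zero mean / unit variance.
    The repo's norm_img.py may instead (or additionally) align and
    crop the face region."""
    x = img.astype(np.float32) / 255.0
    return (x - x.mean()) / (x.std() + 1e-7)
```

Standardizing inputs this way is a common preprocessing step before feeding crops into an embedding network.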
Args:

- `-i`, `--dataset`: path to norm dir
- `-o`, `--save`: path to save .h5 model, eg: dir/model.h5
- `-l`, `--le`: path to label encoder
- `-b`, `--batch_size`: batch size for model training
- `-e`, `--epochs`: epochs for model training
Example:

```shell
python3 train.py --dataset norm_data/ --batch_size 16 --epochs 100
```
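The label encoder saved via `--le` maps person folder names to integer class ids and back. A minimal pure-Python stand-in (the helper `build_label_encoder` is hypothetical, for illustration only) shows the idea:

```python
def build_label_encoder(names):
    """Map sorted unique person names to integer class ids.
    Returns (name -> id dict, id -> name list) — a minimal stand-in
    for the pickled label encoder the repo saves with --le."""
    classes = sorted(set(names))
    to_id = {c: i for i, c in enumerate(classes)}
    return to_id, classes

# Example: folder names collected from the dataset dir
to_id, classes = build_label_encoder(["person_2", "person_1", "person_2"])
```

At inference time the classifier's predicted id is looked up in `classes` to recover the person's name.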
Args:

- `-i`, `--source`: path to video, webcam-id or image
- `-m`, `--model`: path to saved .h5 model, eg: dir/model.h5
- `-c`, `--conf`: min prediction confidence (0 < conf < 1)
- `-lm`, `--liveness_model`: path to liveness.model
- `--le`, `--label_encoder`: path to label encoder
Example:

```shell
python3 inference_img.py --source test/image.jpg --model models/model.h5 --conf 0.85 \
    --liveness_model models/liveness.model --label_encoder models/le.pickle
```
To exit the window, press the Q key.
Example:

```shell
# Video (mp4, avi, ...)
python3 inference.py --source test/video.mp4 --model models/model.h5 --conf 0.85 \
    --liveness_model models/liveness.model --label_encoder models/le.pickle

# Webcam
python3 inference.py --source 0 --model models/model.h5 --conf 0.85 \
    --liveness_model models/liveness.model --label_encoder models/le.pickle
```
To exit the window, press the Q key.
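ArcFace-style recognizers typically compare L2-normalized embeddings by cosine similarity, falling back to "Unknown" below a threshold. A numpy sketch of that matching step (function name, gallery layout and threshold are our assumptions, not the repo's API):

```python
import numpy as np

def cosine_match(query, gallery, threshold=0.5):
    """Return (best_name, score) for the gallery embedding most similar
    to the query embedding; embeddings are L2-normalized before the
    dot product. gallery is {person_name: embedding_vector}."""
    q = query / np.linalg.norm(query)
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = float(q @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return "Unknown", best_score
    return best_name, best_score
```

The `--conf` flag plays an analogous gating role in the repo's inference scripts: low-confidence predictions are rejected rather than mislabeled.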
Liveness detector capable of spotting fake faces and performing anti-face spoofing in face recognition systems
If you want to create a custom Liveness model,
follow the instructions below 👇:
Collect Positive and Negative data using data.py
Args:

- `-i`, `--source`: source - video path or camera-id
- `-n`, `--name`: positive or negative
Example:

```shell
cd Liveness
python3 data.py --source 0 --name positive  # for positive
python3 data.py --source 0 --name negative  # for negative
```
Train the Liveness model using the collected positive and negative data
Args:

- `-d`, `--dataset`: path to input dataset
- `-p`, `--plot`: path to output loss/accuracy plot
- `-lr`, `--learnig_rate`: learning rate for model training
- `-b`, `--batch_size`: batch size for model training
- `-e`, `--epochs`: epochs for model training
Example:

```shell
cd Liveness
python3 train.py --dataset data --batch_size 8 --epochs 50
```
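Liveness training is a binary classification over the two folders collected above. A stdlib sketch of turning that folder layout into (path, label) pairs (the helper name is ours; the folder names follow `data.py`'s `--name` values):

```python
import os

def liveness_pairs(data_dir):
    """Return [(image_path, label), ...] with label 1 for 'positive'
    (real face) and 0 for 'negative' (fake/spoofed face), matching the
    folders written by data.py --name positive / --name negative."""
    pairs = []
    for folder_name, label in (("positive", 1), ("negative", 0)):
        folder = os.path.join(data_dir, folder_name)
        if os.path.isdir(folder):
            for f in sorted(os.listdir(folder)):
                pairs.append((os.path.join(folder, f), label))
    return pairs
```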
Run inference with your custom Liveness model
Args:

- `-m`, `--model`: path to trained Liveness model
- `-i`, `--source`: source - video path or camera-id
- `-c`, `--conf`: min prediction confidence (0 < conf < 1)
Example:

```shell
cd Liveness
python3 inference.py --source 0 --conf 0.8
```
To exit the window, press the Q key.