---
authors:
title: Hand Gesture Recognition CMPN450 Project
abstract: |
  This project was built with tools learned in the Pattern Recognition course, using the paper [Static Hand Gesture Recognition for Sign Language Alphabets using Edge Oriented Histogram and Multi Class SVM](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=04de47bdd4a0753b33866a7cc445e6817e7a264d) as a reference.
date: \today
toc: true
---
\pagebreak
- Use the PIL module to read images from disk and exclude any that are corrupted.
- Sort the files in increasing order of file name (parsed as integers).
- Split the dataset into training, validation, and test sets (70%, 10%, 20%).
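
The loading and splitting steps above can be sketched as follows. The folder path, file extension, and exact split logic are assumptions for illustration; only the 70/10/20 ratio comes from the text.

```python
from pathlib import Path
from PIL import Image

def load_valid_sorted(folder):
    """Collect image paths sorted by integer file name, skipping corrupted files."""
    paths = sorted(Path(folder).glob("*.jpg"), key=lambda p: int(p.stem))
    valid = []
    for p in paths:
        try:
            with Image.open(p) as im:
                im.verify()  # raises an exception if the file is corrupted
            valid.append(p)
        except Exception:
            continue  # exclude corrupted images
    return valid

def split_dataset(items, train=0.70, val=0.10):
    """70% / 10% / 20% train / validation / test split, preserving order."""
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```
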
- Read the image using OpenCV, then resize it to 200x200 pixels.
- Convert the image to grayscale.
- Segment the hand from the background using skin detection in HSV space and thresholding (binary + Otsu).
- Apply morphological operations (erosion + dilation) to remove noise.
- Apply Canny edge detection to keep the hand's edges and discard uninformative regions.
- Used the Edge Oriented Histogram (EOH) descriptor to extract features from the image.
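
A simplified, NumPy-only sketch of the idea behind EOH: histogram the gradient orientations of the edge map. The referenced paper computes histograms per block; this global version is only meant to convey the descriptor's core.

```python
import numpy as np

def eoh_features(edges, bins=8):
    """Simplified Edge Oriented Histogram: a magnitude-weighted histogram of
    gradient orientations over edge pixels (the paper bins per block)."""
    gy, gx = np.gradient(edges.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientations folded into [0, pi)
    on = edges > 0                           # consider edge pixels only
    hist, _ = np.histogram(ang[on], bins=bins, range=(0, np.pi), weights=mag[on])
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalized feature vector
```
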
- Used RandomizedSearchCV to find the best hyperparameters for the classifier.
- Used SVC with the best hyperparameters.
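
A sketch of the search-then-fit step, on synthetic stand-in data (the real inputs would be the EOH feature vectors). The parameter grid here is hypothetical; the text does not specify the search space actually used.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Stand-in data; in the project these would be EOH feature vectors and labels
X, y = make_classification(n_samples=200, n_features=32, n_classes=3,
                           n_informative=8, random_state=0)

# Hypothetical search space -- the actual grid is not given in the text
param_dist = {"C": [0.1, 1, 10, 100],
              "gamma": ["scale", 0.01, 0.001],
              "kernel": ["rbf", "linear"]}

search = RandomizedSearchCV(SVC(), param_dist, n_iter=10, cv=3, random_state=0)
search.fit(X, y)
best_svc = search.best_estimator_  # SVC refit with the best hyperparameters
```
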
- Tested with different classifiers.
- KNN
- Random Forest
- Logistic Regression
    - AdaBoost
- 2-layer NN
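
The comparison across classifiers could be sketched like this, again on synthetic stand-in data and with default hyperparameters (the project's actual settings are not given):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=32, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "KNN": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    # Two hidden layers standing in for the "2-layer NN"; sizes are assumed
    "2-layer NN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                random_state=0),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
```
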
- Used a confusion matrix to analyze the performance of the classifier.
- Used a classification report to analyze the performance of the classifier (best accuracy = 94.5%).
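
Both evaluation tools come from scikit-learn; a minimal sketch with toy labels standing in for the test-set predictions:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Toy labels standing in for the real test-set ground truth and predictions
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])

cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted
report = classification_report(y_true, y_pred, output_dict=True)
accuracy = report["accuracy"]          # per-class precision/recall/F1 also included
```
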
- Preprocessing: use stronger shadow-removal methods (classical ML or deep learning).
- Classification: use a CNN to both extract features and classify the images.
\pagebreak
- Image preprocessing
- Image preprocessing
- Input utils, Feature Extraction
- Classification, Output Utils