This repository contains a Wild Edible Plant Classifier that compares the performance of three state-of-the-art CNN architectures: MobileNet v2, GoogLeNet, and ResNet-34. The artefact was created as part of my BSc dissertation and classifies 35 classes of wild edible plants using Transfer Learning.
Figure 1. The 35 Wild Edible Plant classes used in the project.
The classes of wild edible plants used are listed in table 1, accompanied by the number of images per class within the dataset. The dataset, composed of Flickr images, was obtained through the Flickr API using the rudimentary scripts in the `data_gathering/` folder. It can be found on Kaggle here and contains 16,535 images, with 400 to 500 images per class.
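For reference, the sketch below shows the kind of request `1. get_urls.py` could make against the Flickr REST API to collect image URLs for a class. It is a minimal sketch only: the helper name `get_image_urls`, the query text, and the page size are illustrative assumptions, not the repository's actual implementation.

```python
import requests

API_URL = "https://www.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # obtained from Flickr's developer portal

def get_image_urls(query: str, per_page: int = 500) -> list:
    """Search Flickr for a plant class name and return direct image URLs."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": query,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
        "extras": "url_m",  # ask Flickr to include a medium-size image URL
    }
    resp = requests.get(API_URL, params=params)
    resp.raise_for_status()
    photos = resp.json()["photos"]["photo"]
    # Not every photo record includes 'url_m', so filter those out
    return [p["url_m"] for p in photos if "url_m" in p]

urls = get_image_urls("dandelion plant")
print(f"Found {len(urls)} image URLs")
```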
The project is divided into three Jupyter Notebooks. The first visualises a sample of the plant classes, stored in a zip file inside the `dataset` folder, and covers steps 4 to 6 of the Machine Learning Pipeline diagram (figure 2). The second notebook focuses on tuning the CNN models, and the third visualises their results.
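As a rough illustration of the first notebook's visualisation step, the following sketch plots one image per class directly from `dataset/sample.zip`. The archive's internal layout (one image file per class, named after the class) is an assumption for the example.

```python
import zipfile
from io import BytesIO

import matplotlib.pyplot as plt
from PIL import Image

# Assumed layout: one representative image per class inside sample.zip
with zipfile.ZipFile("dataset/sample.zip") as archive:
    names = [n for n in archive.namelist() if n.lower().endswith((".jpg", ".png"))]
    fig, axes = plt.subplots(5, 7, figsize=(14, 10))  # 35 classes -> 5x7 grid
    for ax, name in zip(axes.flat, names[:35]):
        with archive.open(name) as f:
            img = Image.open(BytesIO(f.read()))
        ax.imshow(img)
        ax.set_title(name.rsplit("/", 1)[-1].split(".")[0], fontsize=8)
        ax.axis("off")
    plt.tight_layout()
    plt.show()
```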
Table 1. A detailed list of the Wild Edible Plant classes with the number of images per class within the dataset.
Figure 2. Machine Learning Pipeline diagram.
The file structure used for the artefact is outlined below and reflects the Machine Learning (ML) pipeline illustrated above.
```
.
+-- data_gathering
|   +-- 1. get_urls.py
|   +-- 2. get_images.py
|   +-- get_filenames.py
|   +-- img_rename.py
|   +-- resize_images.py
+-- dataset
|   +-- sample.zip
+-- functions
|   +-- model.py
|   +-- plotting.py
|   +-- tuning.py
|   +-- utils.py
+-- saved_models
|   +-- best_googlenet.pt
|   +-- best_mobilenetv2.pt
|   +-- best_resnet34.pt
+-- 1. wep_classifier_initial.ipynb
+-- 2. wep_classifier_tuning.ipynb
+-- 3. visualise_results.ipynb
+-- LICENSE
+-- README.md
+-- requirements.txt
```
As mentioned earlier, the `data_gathering/` folder contains the scripts used to gather and prepare the Flickr image data (steps 2 and 3 in the ML pipeline). The `functions/` folder holds the classes and functions that implement the artefact's functionality, covering the remaining parts of the pipeline (steps 4 to 7).
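As an illustration of what the model-building code in `functions/model.py` presumably does, here is a minimal transfer-learning sketch using `torchvision`. The function name `build_model` and the freeze-everything-but-the-head strategy are assumptions; the three architectures and the 35 output classes come from the project description.

```python
import torch.nn as nn
from torchvision import models

def build_model(name: str, n_classes: int = 35) -> nn.Module:
    """Load an ImageNet-pretrained backbone and swap in a new classifier head."""
    if name == "mobilenetv2":
        model = models.mobilenet_v2(pretrained=True)
        head_attr, in_feats = "classifier", model.classifier[1].in_features
    elif name == "googlenet":
        model = models.googlenet(pretrained=True)
        head_attr, in_feats = "fc", model.fc.in_features
    elif name == "resnet34":
        model = models.resnet34(pretrained=True)
        head_attr, in_feats = "fc", model.fc.in_features
    else:
        raise ValueError(f"Unknown architecture: {name}")

    # Freeze the pretrained feature extractor...
    for param in model.parameters():
        param.requires_grad = False

    # ...then attach a fresh, trainable head sized for the 35 plant classes
    new_head = nn.Linear(in_feats, n_classes)
    if head_attr == "classifier":
        model.classifier[1] = new_head
    else:
        model.fc = new_head
    return model
```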
The artefact code is run within three Jupyter Notebooks: `1. wep_classifier_initial.ipynb` (steps 4 to 6), `2. wep_classifier_tuning.ipynb` (step 7), and `3. visualise_results.ipynb` (step 6, applied to the step 7 results).
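To give a flavour of the result visualisation in the third notebook, the sketch below plots training and validation loss curves side by side for each model. The `histories` data structure, the metric keys, and the function name are hypothetical, not the signatures used in `functions/plotting.py`.

```python
import matplotlib.pyplot as plt

def plot_losses(histories: dict) -> None:
    """Plot train/validation loss curves per model.

    `histories` maps a model name to {'train_loss': [...], 'valid_loss': [...]}.
    """
    fig, axes = plt.subplots(1, len(histories), figsize=(5 * len(histories), 4),
                             sharey=True, squeeze=False)
    for ax, (name, hist) in zip(axes[0], histories.items()):
        epochs = range(1, len(hist["train_loss"]) + 1)
        ax.plot(epochs, hist["train_loss"], label="train")
        ax.plot(epochs, hist["valid_loss"], label="validation")
        ax.set_title(name)
        ax.set_xlabel("Epoch")
        ax.legend()
    axes[0][0].set_ylabel("Loss")
    plt.tight_layout()
    plt.show()
```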
This project requires a Python 3 environment, which can be created by following the instructions below.
- Create (and activate) a new environment.

  - Linux or Mac:

    ```bash
    conda create --name wep
    source activate wep
    ```

  - Windows:

    ```bash
    conda create --name wep
    activate wep
    ```
- Clone the repository, navigate to the `wep-classifier/` folder, and install the required dependencies. (Note: the `requirements.txt` file within this folder details the list of required dependencies.)

  ```bash
  git clone https://github.com/Achronus/wep-classifier.git
  cd wep-classifier
  conda install -c conda-forge jupyterlab
  conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
  pip install -r requirements.txt
  ```
- Create an IPython kernel for the `wep` environment.

  ```bash
  python -m ipykernel install --user --name wep --display-name "wep"
  ```
- Run the `jupyter-lab` command to start JupyterLab and access the Jupyter Notebooks.
A list of the packages used and their versions is provided in `requirements.txt`.
This section highlights useful documentation links relevant to the project.