# Deep-Learning-fire-segmentation

Implementation of several state-of-the-art deep learning models for fire semantic segmentation. For more details on the implemented architectures, loss functions, attention modules, and the image types used, please refer to the paper cited below.

The different architectures and loss functions are used to create segmentation masks for images from the Corsican Fire Database, available upon request from the University of Corsica. We also used fused images generated by the FIRe-GAN model and the VGG19 method; the full implementation of the FIRe-GAN model is available here, and the full implementation of the VGG19 method is available here.

The program should, however, run adequately with any kind of images. As long as you follow the naming conventions (prefixes) specified below, you should be able to use any dataset of images in PNG format. You will probably only need to adjust the cross-validation function (specifically, the pathing convention we use) to match your particular dataset and needs.

Alternatively, you can use only the functions that create the models and/or loss functions, and instantiate and use them in your own training/testing pipelines.
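For instance, a minimal sketch of such standalone use (the factory names `create_model` and `create_loss` below are hypothetical placeholders; check the repository sources for the actual functions and their signatures):

```python
import tensorflow as tf

# NOTE: create_model and create_loss are hypothetical placeholders for the
# factory functions in this repository; their real names and signatures
# may differ. train_dataset and val_dataset are your own tf.data pipelines.
model = create_model(input_shape=(512, 512, 3))
loss_fn = create_loss("dice")

# Plug the model and loss into your own training pipeline.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=loss_fn,
              metrics=["accuracy"])
model.fit(train_dataset, epochs=50, validation_data=val_dataset)
```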

## System requirements

These models were trained and tested on an NVIDIA DGX-1 workstation with two NVIDIA P100 GPUs under Ubuntu 16.04.6 LTS, CUDA version 11.1, Python 3.6.9, and TensorFlow 2.3.0.

## Configuration

The code can run with any number of GPUs, as long as they are detected. You need to specify the IDs of the GPUs that you wish to use inside the main.py file; this allows you to control GPU usage in shared environments. If no GPUs are detected, the program should still run appropriately, although training and inference times will increase significantly.
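For reference, a minimal sketch of restricting TensorFlow to specific GPU IDs (the variable names are illustrative; the actual code in main.py may differ):

```python
import tensorflow as tf

gpu_ids = [0, 1]  # IDs of the GPUs you wish to use (illustrative values).

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Make only the selected GPUs visible to TensorFlow.
    tf.config.set_visible_devices([gpus[i] for i in gpu_ids], "GPU")
else:
    print("No GPUs detected; running on CPU (training/inference will be slower).")
```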

The code can perform one of three tasks per run: training, testing, or cross-validation. You need to set the task you wish to execute in the config.ini file, alongside the corresponding configuration options.
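As an illustration, such a task switch can be read from config.ini with Python's configparser, as in the sketch below (the section and key names are hypothetical; see the actual config.ini for the real ones):

```python
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

# "SETTINGS" and "task" are hypothetical names; consult config.ini itself.
task = config["SETTINGS"]["task"]

if task == "training":
    pass  # run the training routine
elif task == "testing":
    pass  # run the testing routine
elif task == "cross-validation":
    pass  # run the cross-validation routine
else:
    raise ValueError(f"Unknown task: {task}")
```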

**Important:** The program assumes that the source images have a given prefix (e.g., visible_1.png), as it replaces this prefix with "mask" (e.g., mask_1.png) to preserve the pairing between source images and generated masks.
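Conceptually, the pairing rule behaves like this small sketch (a simplification; the actual replacement logic in the code may differ):

```python
def mask_name(source_name: str, prefix: str) -> str:
    """Derive the mask filename by swapping the source prefix for 'mask'."""
    return source_name.replace(prefix, "mask", 1)

print(mask_name("visible_1.png", "visible"))  # -> mask_1.png
```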

## Running the program

After you have finished setting the configuration options in the config.ini file, you can run the program with:

```bash
python main.py
```

**Important:** Please note that the cross-validation option requires you to arrange the image folds as follows:

```
.
├── ...
├── Cross_val_dir
│   ├── 1                        # Fold number one.
│   │   ├── Train                # Training images.
│   │   │   ├── Visible          # Visible training images.
│   │   │   ├── NIR              # Near-infrared training images.
│   │   │   ├── GT               # Ground truths.
│   │   │   └── ...
│   │   ├── N-smoke              # Non-smoke fire images.
│   │   │   ├── Visible          # Visible images.
│   │   │   ├── NIR              # Near-infrared images.
│   │   │   ├── GT               # Ground truths.
│   │   │   ├── TEST_RESULTS     # The program saves the generated masks here;
│   │   │   │                    # it expects this directory at this exact path in each fold.
│   │   │   └── ...
│   │   └── Smoke                # Smoke fire images.
│   │       ├── Visible          # Visible images.
│   │       ├── NIR              # Near-infrared images.
│   │       ├── GT               # Ground truths.
│   │       ├── TEST_RESULTS     # The program saves the generated masks here.
│   │       └── ...
│   ├── 2                        # Fold number two.
│   │   └── ...
│   └── ...
└── ...
```

Since our focus was testing on fire images, we employ and assume this file structure. In your local code, feel free to change this function to whatever best suits your needs.
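If you do adapt it, a minimal pathlib sketch of walking this fold layout may be a useful starting point (directory names are taken from the tree above; everything else is illustrative):

```python
from pathlib import Path

cross_val_dir = Path("Cross_val_dir")  # adjust to your dataset location

# Walk the numbered folds and the splits shown in the tree above.
for fold in sorted(p for p in cross_val_dir.iterdir() if p.is_dir()):
    for split in ("Train", "N-smoke", "Smoke"):
        visible = sorted((fold / split / "Visible").glob("*.png"))
        nir = sorted((fold / split / "NIR").glob("*.png"))
        gt = sorted((fold / split / "GT").glob("*.png"))
        print(f"Fold {fold.name} / {split}: "
              f"{len(visible)} visible, {len(nir)} NIR, {len(gt)} GT images")
```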

## Citation

```bibtex
@Article{Ciprian-Sanchez21-segmentation,
  AUTHOR = {Ciprián-Sánchez, Jorge Francisco and Ochoa-Ruiz, Gilberto and Rossi, Lucile and Morandini, Frédéric},
  TITLE = {Assessing the Impact of the Loss Function, Architecture and Image Type for Deep Learning-Based Wildfire Segmentation},
  JOURNAL = {Applied Sciences},
  VOLUME = {11},
  YEAR = {2021},
  NUMBER = {15},
  ARTICLE-NUMBER = {7046},
  URL = {https://www.mdpi.com/2076-3417/11/15/7046},
  ISSN = {2076-3417},
  DOI = {10.3390/app11157046}
}
```