Wheat Detection Challenge

This repository contains the code for the "Wheat Detection Challenge".

The solution is based on EfficientDet detectors, augmentations, weighted boxes fusion, and pseudo-labelling. Below you will find a description of the full pipeline and instructions for running training and inference on the competition data or on your own data.

The solution has been packaged with Docker to simplify environment preparation.

You can also install the code as a package.

Table of contents

  • Requirements
  • Models
  • Training
  • Inference
  • Stop service

Requirements

Software

  • Ubuntu 18.04
  • Docker (19.03.6, build 369ce74a3c)
  • Docker-compose (version 1.27.4, build 40524192)
  • Nvidia-Docker (Nvidia Driver Version: 396.44, nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04 for GPU)

Packages and software are specified in the Dockerfile and requirements.txt.

Hardware

Recommended minimal configuration:

  • Nvidia GPU with at least 11GB Memory *
  • Disk space 20+ GB (free)
  • 16 GB RAM

* It is possible to run inference on a CPU, but training requires a GPU.

Models

EfficientDet models from Ross Wightman's efficientdet-pytorch. All base models were pre-trained on the MS COCO dataset.
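For orientation, the sketch below shows one way such a detector can be built with the effdet package and adapted to the single wheat-head class. The model variant, image size and checkpoint path are assumptions and may differ from the configs used in src/.

import torch
from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain
from effdet.efficientdet import HeadNet

# Build the base detector; the d5 variant is an assumption.
config = get_efficientdet_config('tf_efficientdet_d5')
net = EfficientDet(config, pretrained_backbone=False)

# Load COCO-pretrained weights unpacked into models/ (hypothetical file name).
# net.load_state_dict(torch.load('models/efficientdet_d5_coco.pth'))

# Adapt to the competition: one class (wheat head) and 512x512 inputs.
config.num_classes = 1
config.image_size = (512, 512)  # a plain int in older effdet versions
net.class_net = HeadNet(config, num_outputs=config.num_classes)

model = DetBenchTrain(net, config)  # newer effdet versions: DetBenchTrain(net)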

Pretrained models can be downloaded here (kaggle.json required).

Training

Prepare environment

Install docker

Install Docker Engine and Docker Compose

Git clone

Clone the current repository.

Starting service

Build the docker image, start the docker-compose service in daemon mode, and install the requirements inside the container.

$ make build && make start && make install

Dataset

The dataset is available on the Kaggle platform.

The script for downloading it is in scripts/download_dataset.sh.

You need a Kaggle account; create a .kaggle directory and copy your kaggle.json into it to access the data.

$ make dataset
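For reference, the download can also be done from Python with the kaggle API; the competition slug below is an assumption.

# Rough Python equivalent of scripts/download_dataset.sh; requires .kaggle/kaggle.json.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads kaggle.json
api.competition_download_files('global-wheat-detection', path='data/')  # assumed slug
# The downloaded archive then needs to be unzipped into data/.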

Make folds

python -m src.folds.make_folds
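The sketch below illustrates one common way such folds are made for this kind of data, assigning folds per image and stratifying by image source; the actual logic and column names in src/folds/make_folds.py may differ.

import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Hypothetical illustration of fold creation; boxes from one image stay in one fold.
df = pd.read_csv('data/train.csv')  # one row per bounding box (assumed layout)
images = df.groupby('image_id')['source'].first().reset_index()  # one row per image

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
images['fold'] = -1
for fold, (_, val_idx) in enumerate(skf.split(images, images['source'])):
    images.loc[val_idx, 'fold'] = fold

df = df.merge(images[['image_id', 'fold']], on='image_id')
df.to_csv('data/train_folds.csv', index=False)  # hypothetical output path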

Images preprocessing and augmentations

The original tiles were scaled to 512x512 px and split location-wise, as in notebooks/.

The notebook for combining tiles is in notebooks/.

The data were normalised using ImageNet statistics.

The images were augmented using the albumentations library.
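A minimal sketch of an albumentations pipeline of this kind (the concrete transforms and probabilities used in src/ may differ); bbox_params ensures the bounding boxes are transformed together with the image, and A.Normalize applies the ImageNet statistics mentioned above.

import albumentations as A
from albumentations.pytorch import ToTensorV2

# Illustrative augmentation pipeline; the actual transforms live in src/.
train_transforms = A.Compose(
    [
        A.Resize(512, 512),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.HueSaturationValue(p=0.3),
        A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
        ToTensorV2(),
    ],
    bbox_params=A.BboxParams(format='pascal_voc', label_fields=['labels']),
)

# Usage: sample = train_transforms(image=image, bboxes=boxes, labels=labels)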

Run training

$ make train
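make train runs the full training pipeline inside the container. For orientation only, a single training step with the effdet DetBenchTrain wrapper (the model sketched in the Models section, fed by a DataLoader over the wheat dataset) looks roughly like this; the target format depends on the effdet version, and the optimizer and learning rate are assumptions.

import torch

# Rough outline of a training step, not the project's exact code.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)  # assumed optimizer/lr

model.train()
for images, targets in train_loader:  # train_loader: a DataLoader over the wheat data
    # Recent effdet versions expect targets as {'bbox': (B, N, 4) yxyx, 'cls': (B, N)}.
    output = model(images, targets)
    loss = output['loss']             # total detection loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()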

Inference

Get model weights

  • Unpack the model weights to the models/ directory

Start inference (the models/ directory should contain the pretrained models).

$ make inference

After the pipeline finishes, the final predictions will appear in the data/preds/ directory.
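Predictions from the individual models (and test-time augmentations) are merged with weighted boxes fusion, as mentioned in the overview. A minimal example with the ensemble_boxes package, with illustrative thresholds:

from ensemble_boxes import weighted_boxes_fusion

# Boxes are normalised to [0, 1] in (x1, y1, x2, y2) format, one list per model.
boxes_list = [
    [[0.10, 0.10, 0.30, 0.30], [0.50, 0.50, 0.70, 0.70]],  # model 1
    [[0.12, 0.11, 0.31, 0.29]],                            # model 2
]
scores_list = [[0.9, 0.6], [0.8]]
labels_list = [[0, 0], [0]]

# iou_thr and skip_box_thr are illustrative, not the values used in the pipeline.
boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=None, iou_thr=0.55, skip_box_thr=0.4,
)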

To run on your own data, change the paths and file names for your folders in scripts/stage4. Alternatively, run

$ docker exec open-cities-dev \
    python -m src.predict_tif \
      --configs configs/stage3-srx50-2-f0.yaml \
      --src_path <path/to/your/tif/file.tif> \
      --dst_path <path/for/result.tif> \
      --batch_size 4 \
      --gpu '0'

Stop service

After everything is done, stop the docker container.

$ make stop

Bring everything down, removing the container entirely

$ make clean
