Lane Detector implementation in Keras


Overview

A TensorFlow learning project to create a model for a real-time multi-lane detector application. The goals of this project:

  • Learn Python and TensorFlow.
  • Create a model for a multi-lane detector and a custom dataset pipeline built from TensorFlow ops.
  • Convert the model to a quantized TF-Lite model and try model inference on a Qualcomm Hexagon DSP (Snapdragon 835/Hexagon 682) in real time.

Instead of detecting lanes from the source camera image directly, the image is first transformed with a perspective matrix, and lanes are detected in the perspective image. The reasons for doing this:

  • Different camera types and camera installations produce a similar image style.
  • Detection is constrained to the lane region, which reduces the input image size of the model.
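
A minimal sketch of such a transform with OpenCV is shown below. The source points and the 256x256 output size are assumptions for illustration; the actual matrix depends on the camera type and installation.

    import cv2
    import numpy as np

    # Hypothetical source points on the road region of the camera image and
    # destination corners of a 256x256 bird's-eye view; real values depend on
    # the camera setup.
    src_points = np.float32([[560, 460], [720, 460], [1180, 720], [100, 720]])
    dst_points = np.float32([[0, 0], [256, 0], [256, 256], [0, 256]])

    # 3x3 perspective matrix; warping every frame with it keeps the model input
    # style similar across cameras and limits the input to the lane region.
    M = cv2.getPerspectiveTransform(src_points, dst_points)

    frame = cv2.imread("frame.jpg")                       # any source camera frame
    bird_eye = cv2.warpPerspective(frame, M, (256, 256))  # model input image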

The main network architecture:

  • The input of the model is the perspective image, and the outputs are anchor offsets, class probabilities, and instance data called embeddings.

  • The input image is split into multiple anchors (n x n):

    • Each anchor is responsible for prediction only if a lane crosses it.
    • The predicted data are the x offset, the class probability, and the embeddings.
    • Each lane has a unique embedding index within an image for instance segmentation; see the link for more details about embeddings.
  • Our model is built from (a Keras sketch follows this list):

    • A ResNet-block-based backbone.
    • 3 branches as raw outputs for training:
      • x_cls : class probability at each anchor (lane or background).
      • x_offsets : offsets at each anchor (only the x offset is used).
      • x_embeddings : embedding data for instance segmentation.
    • OutputMuxer : a data muxer that muxes the raw outputs.
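
As a rough illustration, a Keras sketch of this layout follows. The block depths, filter counts, class count, and embedding dimension are assumptions, not the exact model in this repository; only the three raw branches (x_cls, x_offsets, x_embeddings) mirror the description above.

    import tensorflow as tf
    from tensorflow.keras import layers

    def residual_block(x, filters):
        # Simple ResNet-style block; the real backbone details may differ.
        shortcut = x
        y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        y = layers.Conv2D(filters, 3, padding="same")(y)
        if shortcut.shape[-1] != filters:
            shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
        return layers.ReLU()(layers.Add()([shortcut, y]))

    def build_lane_model(input_size=256, embedding_dim=4):
        # 256x256 perspective image in, 32x32 anchor grid out
        # (sizes taken from the post-process example; treat them as assumptions).
        inputs = layers.Input((input_size, input_size, 3))
        x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)  # 128x128
        for filters in (64, 128):
            x = residual_block(x, filters)
            x = layers.MaxPooling2D()(x)                                                # 64x64 -> 32x32
        x = residual_block(x, 128)

        # Three raw output branches, one prediction set per anchor cell.
        x_cls = layers.Conv2D(2, 1, activation="softmax", name="x_cls")(x)         # lane vs. background
        x_offsets = layers.Conv2D(1, 1, name="x_offsets")(x)                       # x offset only
        x_embeddings = layers.Conv2D(embedding_dim, 1, name="x_embeddings")(x)     # instance embeddings
        return tf.keras.Model(inputs, [x_cls, x_offsets, x_embeddings])

    model = build_lane_model()
    model.summary()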

Dependencies

  • TensorFlow 2.4.0-dev20200815
  • numpy
  • opencv-python
  • matplotlib

How to use it

Training

  1. We use the "TuSimple Lane Detection Challenge" dataset for training. Please download the dataset from the TuSimple GitHub and decompress it into the following directory structure (a label-parsing sketch follows this list):

    • ${DatasetsPath}/train_set/
      • clips/
      • label_data_0313.json
      • ...
    • ${DatasetsPath}/test_set
      • clips/
      • test_tasks_0627.json
      • ...
  2. Modify the TuSimple_dataset_path element in config.json to match your environment.

  3. Run train_tflite.py to start training:

     > python3 train_tflite.py
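
For reference, here is a minimal sketch of reading the TuSimple label files, which store one JSON record per line with raw_file, lanes (x coordinates per lane, -2 where the lane has no point), and h_samples (the shared y coordinates). The project's own input pipeline may differ, and "/path/to/train_set" stands in for the TuSimple_dataset_path value in config.json.

    import json
    import os

    def load_tusimple_labels(train_set_dir, label_file="label_data_0313.json"):
        # Each line is one labeled frame: image path plus lane points.
        samples = []
        with open(os.path.join(train_set_dir, label_file)) as f:
            for line in f:
                record = json.loads(line)
                image_path = os.path.join(train_set_dir, record["raw_file"])
                lanes = [
                    [(x, y) for x, y in zip(lane_xs, record["h_samples"]) if x >= 0]
                    for lane_xs in record["lanes"]
                ]
                samples.append((image_path, lanes))
        return samples

    samples = load_tusimple_labels("/path/to/train_set")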
    

TF-Lite model conversion

Once training finishes, the model must be converted to a TF-Lite model for the mobile platform. Run generate_tflite_model.py to convert it; the converted model is named after the "tflite_model_name" element in config.json.

> python3 generate_tflite_model.py

Note: Even though TF 2.x uses the new (MLIR-based) converter by default for TF-Lite conversion, our model hits several problems with it, such as the conversion finishing but failing with an allocate_tensor error, or a double Dequantize node error. To get through the conversion for this learning project, converter.experimental_new_converter is set to False on TensorFlow 2.4.0-dev20200815.
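
A minimal conversion sketch along those lines is shown below. The quantization settings, representative dataset, and file names are assumptions, not the exact contents of generate_tflite_model.py.

    import numpy as np
    import tensorflow as tf

    def representative_dataset_gen():
        # Hypothetical calibration data; in practice yield real perspective images.
        for _ in range(100):
            yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

    model = tf.keras.models.load_model("trained_model")          # hypothetical path to the trained model
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.experimental_new_converter = False                 # fall back to the old converter (see note)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:                  # real name comes from tflite_model_name
        f.write(tflite_model)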

Test the model

test_tflite_model.py loads and tests the TF-Lite model converted in the previous step; it runs inference on the TF-Lite model and renders the inference result (a minimal loading sketch follows the command).

> python3 test_tflite_model.py
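
A minimal sketch of loading and running the converted model with the TF-Lite Interpreter; the real script also decodes and renders the three outputs.

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Placeholder input with the model's expected shape and dtype; in practice
    # this is a perspective-transformed camera frame.
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    raw_outputs = [interpreter.get_tensor(d["index"]) for d in output_details]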

The post-processor is described below. By default, with_post_process is set to False, which disables the post-processor and renders the default output. Enable this flag if you need post-processing.

The goal of the post-process step after inference is to remove the data in rows where the variance of the x offsets is larger than a threshold. For example, with 32x32 anchors on 256x256 images, post-processing enabled, and a threshold of 10 (pixels), anchors in area A are kept and averaged as the final output, while anchors in area B are removed because the variance in their row exceeds the threshold (a filtering sketch follows).
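
A sketch of that per-row filtering, assuming the anchors of one lane instance have already been grouped by their embeddings and decoded into pixel x positions; the exact grouping and threshold handling in the repository may differ.

    import numpy as np

    def filter_rows(anchor_x, valid_mask, threshold=10.0):
        # anchor_x   : (rows, cols) decoded x position per anchor, in pixels (32x32 grid assumed)
        # valid_mask : (rows, cols) bool, True where the anchor predicted this lane
        # Returns one averaged x per row, or None when the row is dropped.
        results = []
        for row in range(anchor_x.shape[0]):
            xs = anchor_x[row][valid_mask[row]]
            if xs.size == 0:
                results.append(None)                 # no lane anchors in this row
            elif np.var(xs) > threshold:
                results.append(None)                 # spread too large ("area B"): drop the row
            else:
                results.append(float(xs.mean()))     # keep and average ("area A")
        return results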

TF-lite Hexagon Delegate test (Snapdragon 835/Hexagon 682)

The following shows the results of running benchmark_model on an HTC U11+ (Snapdragon 835/Hexagon 682):

./benchmark_model --graph=model_quant.tflite --use_hexagon=true

./benchmark_model --graph=model_quant.tflite --use_hexagon=true --enable_op_profiling=true
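
The benchmark above drives the Hexagon delegate through benchmark_model. If you instead want to load the delegate from Python on the device, it can be done roughly as follows; the delegate libraries must already be installed on the device, and the paths here are assumptions.

    import tensorflow as tf

    # Sketch only: libhexagon_delegate.so and the matching libhexagon_nn_skel
    # libraries must be present on the device.
    hexagon = tf.lite.experimental.load_delegate("libhexagon_delegate.so")
    interpreter = tf.lite.Interpreter(
        model_path="model_quant.tflite",
        experimental_delegates=[hexagon],
    )
    interpreter.allocate_tensors()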

TODO

  • Model
    • Add curve fitting to the post-process step.
  • Android
    • Open the camera/video and get image data.
    • Implement the perspective transformation with OpenGL ES PBO to transform images for model inference.
    • Render the inference result.
