SegFormer-pytorch

Implementation of SegFormer in PyTorch

Building the blocks of the SegFormer architecture (an illustrative sketch of each block follows the list):

  1. Overlap Patch Embedding - a method to convert an image into a sequence of overlapping patches.
  2. Efficient Self-Attention - the first core component of all Transformer-based models.
  3. Mix-FeedForward (Mix-FFN) module - the second core component; together with Self-Attention, it forms a single Transformer block.
  4. Transformer block - Self-Attention + Mix-FFN + Layer Norm form a basic Transformer block.
  5. Decoder head - contains MLP layers that fuse the multi-scale encoder features.
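
A minimal sketch of overlap patch embedding, assuming the paper's stage-1 settings (patch size 7, stride 4); the class and argument names here are illustrative, not this repo's actual API:

```python
import torch
import torch.nn as nn

class OverlapPatchEmbed(nn.Module):
    """Convert an image into a sequence of overlapping patch embeddings."""
    def __init__(self, in_chans=3, embed_dim=64, patch_size=7, stride=4):
        super().__init__()
        # stride < patch_size, so neighbouring patches overlap
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size,
                              stride=stride, padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                 # x: (B, C, H, W)
        x = self.proj(x)                  # (B, embed_dim, H/stride, W/stride)
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)  # (B, N, embed_dim), N = H*W
        return self.norm(x), H, W
```

Because the stride is smaller than the kernel size, adjacent patches share pixels, which preserves local continuity across patch borders.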
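A minimal sketch of efficient self-attention using the sequence-reduction trick from the SegFormer paper; the `sr_ratio` value and layer names are assumptions:

```python
import torch
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=1, sr_ratio=8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # shrink the spatial size of K/V by sr_ratio before attention,
            # cutting the cost from O(N^2) to O(N^2 / sr_ratio^2)
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):           # x: (B, N, C), N = H*W
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
        if self.sr_ratio > 1:
            x_ = x.transpose(1, 2).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)
            x_ = self.norm(x_)
        else:
            x_ = x
        kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)       # these are the maps visualized below
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```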
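A minimal sketch of Mix-FFN: an MLP with a 3x3 depth-wise convolution in between, which leaks enough positional information that no explicit positional encoding is needed. The expansion ratio of 4 is the paper's default; other names are assumptions:

```python
import torch
import torch.nn as nn

class MixFFN(nn.Module):
    def __init__(self, dim, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        # depth-wise 3x3 conv operates on the spatial layout of the tokens
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x, H, W):           # x: (B, N, C), N = H*W
        x = self.fc1(x)
        B, N, C = x.shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = self.dwconv(x)                # mix neighbouring tokens spatially
        x = x.flatten(2).transpose(1, 2)
        x = self.act(x)
        return self.fc2(x)
```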
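A minimal sketch of one Transformer block, wiring the two modules sketched above together with pre-norm residual connections (layer names are assumptions):

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, dim, num_heads=1, sr_ratio=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = EfficientSelfAttention(dim, num_heads, sr_ratio)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = MixFFN(dim)

    def forward(self, x, H, W):
        x = x + self.attn(self.norm1(x), H, W)   # self-attention sub-layer
        x = x + self.ffn(self.norm2(x), H, W)    # Mix-FFN sub-layer
        return x
```

Each encoder stage stacks several of these blocks after its overlap patch embedding.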
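A minimal sketch of the all-MLP decoder head: each encoder stage's feature map is projected to a common width, upsampled to 1/4 resolution, concatenated, fused, and classified. The channel widths shown match the MiT-B1 encoder; this repo's actual head may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderHead(nn.Module):
    def __init__(self, in_dims=(64, 128, 320, 512), embed_dim=256, num_classes=19):
        super().__init__()
        # one linear projection per encoder stage
        self.linears = nn.ModuleList([nn.Linear(d, embed_dim) for d in in_dims])
        self.fuse = nn.Sequential(
            nn.Conv2d(embed_dim * 4, embed_dim, kernel_size=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        self.classify = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, feats):             # feats: 4 maps (B, C_i, H_i, W_i)
        target = feats[0].shape[2:]       # 1/4-resolution spatial size
        outs = []
        for f, linear in zip(feats, self.linears):
            B, C, H, W = f.shape
            f = linear(f.flatten(2).transpose(1, 2))     # (B, N, embed_dim)
            f = f.transpose(1, 2).reshape(B, -1, H, W)   # back to a feature map
            f = F.interpolate(f, size=target, mode='bilinear', align_corners=False)
            outs.append(f)
        x = self.fuse(torch.cat(outs, dim=1))
        return self.classify(x)           # (B, num_classes, H/4, W/4)
```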

Here is the result of a model trained on the BDD100K drivable-area task: highway-seg

Here are the attention maps from the video above: highway-attn

