PyTorch implementation of "Designing Network Design Spaces", Radosavovic et al., CVPR 2020.
Paper | Official Implementation
RegNet offers a very nice design space for neural network architectures. The RegNet design space consists of networks with a simple, regular structure, which the authors call Regular Networks (RegNets). The RegNet design space has a higher concentration of models that perform well and generalise well. RegNet models are very efficient and run up to 5 times faster than EfficientNet models on GPUs.
RegNet models have also been used as a backbone in Tesla's FSD stack.
- The main goal of the paper is to aid the understanding of network design and discover design principles that generalize across settings.
- Explore the structural aspects of network design and arrive at a low-dimensional design space consisting of simple, regular networks.
- Network width and depth can be explained by a quantized linear function.
The basic structure of models in the AnyNet design space consists of a simple stem, followed by the network body that performs the majority of the computation, and a final network head that predicts the class scores. The stem and head are kept as simple as possible. The network body consists of 4 stages that operate at progressively lower resolutions.
The structure of the network body is determined by the block widths wi, network depths di, bottleneck ratios bi, and group widths gi. The degrees of freedom at stage i are the number of blocks di in the stage, the block width wi, and other block parameters such as stride, padding, and so on.
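As a rough illustration (not this repo's actual implementation), a minimal stem/body/head skeleton might look like the following; the `Stage` module here is a simplified stand-in without residual connections or the full X block details:

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """Simplified stand-in for one stage of d_i blocks (not the real X block);
    assumes the group width g_i divides the bottleneck width."""
    def __init__(self, w_in, w_out, depth, bottleneck, group_w):
        super().__init__()
        blocks = []
        for j in range(depth):
            stride = 2 if j == 0 else 1              # first block downsamples
            w_b = int(round(w_out / bottleneck))     # bottleneck width
            blocks.append(nn.Sequential(
                nn.Conv2d(w_in if j == 0 else w_out, w_b, 1, bias=False),
                nn.Conv2d(w_b, w_b, 3, stride=stride, padding=1,
                          groups=w_b // group_w, bias=False),
                nn.Conv2d(w_b, w_out, 1, bias=False),
                nn.BatchNorm2d(w_out),
                nn.ReLU(inplace=True),
            ))
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x):
        return self.blocks(x)

class AnyNet(nn.Module):
    """Stem -> body (4 stages at progressively lower resolution) -> head."""
    def __init__(self, ds, ws, bs, gs, stem_w=32, num_classes=1000):
        super().__init__()
        # Simple stem: a single strided 3x3 convolution.
        self.stem = nn.Sequential(
            nn.Conv2d(3, stem_w, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(stem_w),
            nn.ReLU(inplace=True),
        )
        # Body: one Stage per (d_i, w_i, b_i, g_i) tuple.
        stages, w_in = [], stem_w
        for d_i, w_i, b_i, g_i in zip(ds, ws, bs, gs):
            stages.append(Stage(w_in, w_i, d_i, b_i, g_i))
            w_in = w_i
        self.body = nn.Sequential(*stages)
        # Head: global average pooling + linear classifier.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(w_in, num_classes)
        )

    def forward(self, x):
        return self.head(self.body(self.stem(x)))

net = AnyNet(ds=(1, 2, 4, 7), ws=(32, 64, 160, 384), bs=(1, 1, 1, 1), gs=(8, 8, 8, 8))
print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```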
Subsequent design spaces are obtained by refining the design space, i.e. by adding more constraints on the above parameters. The design space is refined keeping the following goals in mind:
- Simplify the structure of the design space.
- Improve the interpretability of the design space.
- Improve or maintain the design space quality.
- Maintain model diversity in the design space.
- Uses the X block (a standard residual bottleneck block with group convolution) in each stage of the network
- AnyNetX has 16 degrees of freedom
- Each network has 4 stages
- Each stage has 4 parameters (network depth di, block width wi, bottleneck ratio bi, group width gi)
- bi ∈ {1,2,4}
- gi ∈ {1,2,3,...,32}
- wi <= 1024
- di <= 16
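For intuition, a design space can be viewed as a distribution over these 16 parameters. A hedged sketch of sampling one AnyNetX configuration under the constraints above (the paper additionally samples widths in multiples of 8 and uses log-uniform sampling, which is simplified here; the function name is illustrative, not from this repo):

```python
import random

def sample_anynetx():
    """Sample one AnyNetX configuration: 4 stages x 4 parameters = 16 DOF."""
    cfg = []
    for _ in range(4):                      # 4 stages
        d = random.randint(1, 16)           # blocks per stage, di <= 16
        w = 8 * random.randint(1, 128)      # block width, wi <= 1024
        b = random.choice([1, 2, 4])        # bottleneck ratio bi
        g = random.randint(1, 32)           # group width gi
        cfg.append(dict(d=d, w=w, b=b, g=g))
    return cfg

print(sample_anynetx())
```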
AnyNetX(A) is the same as the AnyNetX design space described above.
In this design space,
- the bottleneck ratio bi is fixed (shared) for all stages.
- the performance of models in the AnyNetX(B) space is almost equal to that of AnyNetX(A) in the average and best-case scenarios.
- bi <= 2 seems to work best.
In this design space,
- the group width gi is shared across all stages.
- AnyNetX(C) has 6 fewer degrees of freedom compared to AnyNetX(A).
- gi > 1 seems to work best (a sketch of the B and C constraints follows below).
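A small sketch of how the B and C refinements restrict such a sampler: a single bottleneck ratio and a single group width are drawn once and shared across all four stages (again, an illustrative helper rather than code from this repo):

```python
import random

def sample_anynetx_c():
    """AnyNetX(C): shared bottleneck ratio b and shared group width g."""
    b = random.choice([1, 2, 4])    # AnyNetX(B): one b for all stages
    g = random.randint(1, 32)       # AnyNetX(C): one g for all stages
    cfg = []
    for _ in range(4):
        d = random.randint(1, 16)
        w = 8 * random.randint(1, 128)
        cfg.append(dict(d=d, w=w, b=b, g=g))
    return cfg
```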
In the AnyNetX(D) design space, the authors observed that good networks have increasing stage widths, i.e. w(i+1) > wi.
In the AnyNetX(E) design space, it was observed that as the stage width wi increases, the depth di likewise tends to increase, except for the last stage.
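These two observations can be written as simple checks on a sampled configuration (illustrative helper, not part of the repo; it assumes the list-of-dicts format from the sampler sketched earlier):

```python
def widths_and_depths_increase(cfg):
    """AnyNetX(D): stage widths should increase across stages.
    AnyNetX(E): depths should also increase, except possibly at the last stage."""
    ws = [stage["w"] for stage in cfg]
    ds = [stage["d"] for stage in cfg]
    widths_ok = all(ws[i + 1] >= ws[i] for i in range(len(ws) - 1))
    depths_ok = all(ds[i + 1] >= ds[i] for i in range(len(ds) - 2))
    return widths_ok and depths_ok
```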
Please refer to Section 3.3 of the paper.
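A minimal sketch of that quantized linear parameterization: per-block widths follow u_j = w_0 + w_a * j and are then snapped to w_0 times a power of w_m and rounded to a multiple of 8 (the RegNetX-200MF-like parameters below are approximate and shown only for illustration):

```python
import numpy as np

def quantized_linear_widths(w_0, w_a, w_m, depth, q=8):
    """Per-block widths from the quantized linear rule (Section 3.3)."""
    u = w_0 + w_a * np.arange(depth)               # u_j = w_0 + w_a * j
    s = np.round(np.log(u / w_0) / np.log(w_m))    # quantization exponents s_j
    w = w_0 * np.power(w_m, s)                     # snap to w_0 * w_m**s_j
    return (np.round(w / q) * q).astype(int)       # round to a multiple of q

# Approximate RegNetX-200MF parameters: w_0=24, w_a=36.44, w_m=2.49, depth=13.
widths = quantized_linear_widths(24, 36.44, 2.49, 13)
print(widths)  # blocks with equal width form a stage, e.g. 24, 56, 152 x4, 368 x7
```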
Import any of the following variants of RegNet using
from regnet import regnetx_002 as RegNet002
from regnet import Xblock, Yblock # required if you want to use YBlock instead of Xblock. Refer to paper for more details on YBlock
RegNet variants available are:
- regnetx_002
- regnetx_004
- regnetx_006
- regnetx_008
- regnetx_016
- regnetx_032
- regnetx_040
- regnetx_064
- regnetx_080
- regnetx_120
- regnetx_160
- regnetx_320
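For a quick smoke test of any variant, construct it and run a forward pass; the input resolution and output shape below are assumptions based on a standard ImageNet-style setup:

```python
import torch
from regnet import regnetx_002 as RegNet002, Xblock

model = RegNet002(block=Xblock, num_classes=10)
x = torch.randn(2, 3, 224, 224)   # batch of 2 RGB images
logits = model(x)
print(logits.shape)               # expected: torch.Size([2, 10])
```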
Import the TrainingConfig and Trainer classes from regnet and use them to train the model as follows:
from regnet import TrainingConfig, Trainer
model = RegNet002(block=Xblock, num_classes=10)
training_config = TrainingConfig(max_epochs=10, batch_size=128, learning_rate=3e-4, weight_decay=5e-4, ckpt_path="./regnet.pt")
trainer = Trainer(model=model, train_dataset=train_dataset, test_dataset=test_dataset, configs=training_config)
trainer.train()
Note: you need not use the TrainingConfig and Trainer classes if you want to write your own training loop; just importing the respective model would suffice.
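If you do write your own loop, a minimal sketch in plain PyTorch might look like this (it assumes `train_dataset` and the `model` constructed above are already defined; the optimizer choice and hyperparameters are illustrative):

```python
import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()
loader = DataLoader(train_dataset, batch_size=128, shuffle=True)

model.train()
for epoch in range(10):
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```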
- Implement model checkpointing every 'x' epochs
[1] https://github.com/signatrix/regnet
[2] https://github.com/d-li14/regnet.pytorch
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
MIT