update MogaNet and docs (pre-release #33)
Lupin1998 committed Nov 30, 2022
1 parent 08c75e6 commit 09fcbf5
Showing 49 changed files with 1,380 additions and 183 deletions.
7 changes: 4 additions & 3 deletions README.md
@@ -47,7 +47,7 @@ The main branch works with **PyTorch 1.8** (required by some self-supervised met

## Installation

-There are quick installation steps for develepment:
+There are quick installation steps for development:

```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
```

@@ -79,14 +79,15 @@ Please then, see [Tutorials](docs/en/tutorials) for more tech details:

## Overview of Model Zoo

-Please refer to [Model Zoos](docs/en/model_zoos) for various backbones, mixup methods, and self-supervised algorithms. We also provide the paper lists of [Awesome Mixups](docs/en/awesome_mixups) for your reference. Checkpoints and traning logs will be updated soon!
+Please refer to [Model Zoos](docs/en/model_zoos) for various backbones, mixup methods, and self-supervised algorithms. We also provide the paper lists of [Awesome Mixups](docs/en/awesome_mixups) for your reference. Checkpoints and training logs will be updated soon!

* Backbone architectures for supervised image classification on ImageNet.

<details open>
<summary>Currently supported backbones</summary>

- [x] [VGG](https://arxiv.org/abs/1409.1556) (ICLR'2015) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/vgg/)]
+- [x] [InceptionV3](https://arxiv.org/abs/1512.00567) (CVPR'2016) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/inception_v3/)]
- [x] [ResNet](https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html) (CVPR'2016) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- [x] [ResNeXt](https://arxiv.org/abs/1611.05431) (CVPR'2017) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- [x] [SE-ResNet](https://arxiv.org/abs/1709.01507) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
@@ -144,7 +145,7 @@ Please refer to [Model Zoos](docs/en/model_zoos) for various backbones, mixup me
<details open>
<summary>Currently supported datasets for mixups</summary>

-- [x] [ImageNet](https://dl.acm.org/doi/10.1145/3065386) [[download](http://www.image-net.org/challenges/LSVRC/2012/)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
+- [x] [ImageNet](https://arxiv.org/abs/1409.0575) [[download](http://www.image-net.org/challenges/LSVRC/2012/)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- [x] [CIFAR-10](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) [[download](https://www.cs.toronto.edu/~kriz/cifar.html)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cifar10/)]
- [x] [CIFAR-100](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) [[download](https://www.cs.toronto.edu/~kriz/cifar.html)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cifar100/)]
- [x] [Tiny-ImageNet](https://arxiv.org/abs/1707.08819) [[download](http://cs231n.stanford.edu/tiny-imagenet-200.zip)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/tiny_imagenet/)]
54 changes: 54 additions & 0 deletions configs/classification/_base_/datasets/imagenet/basic_sz299_4xbs64.py
@@ -0,0 +1,54 @@
# dataset settings
data_source_cfg = dict(type='ImageNet')
# ImageNet dataset
data_train_list = 'data/meta/ImageNet/train_labeled_full.txt'
data_train_root = 'data/ImageNet/train'
data_test_list = 'data/meta/ImageNet/val_labeled.txt'
data_test_root = 'data/ImageNet/val/'

dataset_type = 'ClassificationDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline = [
    dict(type='RandomResizedCrop', size=299, interpolation=3),  # bicubic
    dict(type='RandomHorizontalFlip'),
]
test_pipeline = [
    dict(type='Resize', size=342, interpolation=3),
    dict(type='CenterCrop', size=299),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
# prefetch
prefetch = True
if not prefetch:
    train_pipeline.extend([dict(type='ToTensor'), dict(type='Normalize', **img_norm_cfg)])

data = dict(
    imgs_per_gpu=64,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        data_source=dict(
            list_file=data_train_list, root=data_train_root,
            **data_source_cfg),
        pipeline=train_pipeline,
        prefetch=prefetch,
    ),
    val=dict(
        type=dataset_type,
        data_source=dict(
            list_file=data_test_list, root=data_test_root, **data_source_cfg),
        pipeline=test_pipeline,
        prefetch=False,
    ))

# validation hook
evaluation = dict(
    initial=False,
    interval=1,
    imgs_per_gpu=128,
    workers_per_gpu=4,
    eval_param=dict(topk=(1, 5)))

# checkpoint
checkpoint_config = dict(interval=1, max_keep_ckpts=1)
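A note on the `prefetch` flag above: when it is enabled, `ToTensor` and `Normalize` are left out of the CPU-side training pipeline, the loader yields raw image tensors, and normalization happens in batch on the GPU. A minimal sketch of that pattern (illustrative only; the class below is not OpenMixup's actual prefetcher API):

```python
import torch

class GPUNormalizer:
    """Sketch of GPU-side normalization for the `prefetch=True` path.

    Assumes the dataloader yields uint8 NCHW image batches; mean/std
    correspond to `img_norm_cfg` above (ImageNet statistics on a 0-1 scale).
    """

    def __init__(self, mean, std, device='cpu'):  # use 'cuda' in practice
        self.mean = torch.tensor(mean, device=device).view(1, 3, 1, 1)
        self.std = torch.tensor(std, device=device).view(1, 3, 1, 1)

    def __call__(self, batch_uint8):
        x = batch_uint8.to(self.mean.device, non_blocking=True).float() / 255.0
        return (x - self.mean) / self.std

norm = GPUNormalizer(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
fake_batch = torch.randint(0, 256, (4, 3, 299, 299), dtype=torch.uint8)
print(norm(fake_batch).shape)  # torch.Size([4, 3, 299, 299])
```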
@@ -23,7 +23,7 @@
]
test_pipeline = [
    dict(type='Resize', size=284, interpolation=3),  # 0.90
-   dict(type='CenterCrop', size=224),
+   dict(type='CenterCrop', size=256),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
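For context, the `# 0.90` comment records the intended crop-to-resize ratio, which the old `CenterCrop` size did not match; the quick check below is illustrative arithmetic, not part of the commit (the same fix is repeated in the next hunk):

```python
# Crop-to-resize ratio implied by each CenterCrop size at Resize(284):
print(f"old: {224 / 284:.2f}")  # 0.79 -- inconsistent with the '# 0.90' comment
print(f"new: {256 / 284:.2f}")  # 0.90 -- matches the comment
```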
@@ -48,7 +48,7 @@
]
test_pipeline = [
    dict(type='Resize', size=284, interpolation=3),  # 0.90
-   dict(type='CenterCrop', size=224),
+   dict(type='CenterCrop', size=256),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
2 changes: 1 addition & 1 deletion configs/classification/_base_/models/moganet/moga_small.py
@@ -7,7 +7,7 @@
    mix_args=dict(),
    backbone=dict(
        type='MogaNet',
-       arch="tiny",
+       arch="small",
        init_value=1e-5,
        drop_path_rate=0.1,
        stem_norm_cfg=dict(type='BN', eps=1e-5),
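With this one-line fix, `moga_small.py` now builds the "small" variant instead of the "tiny" one it previously instantiated. A quick way to verify is to load the config and inspect the resolved backbone settings (a sketch assuming mmcv, which OpenMixup depends on, is installed):

```python
from mmcv import Config

cfg = Config.fromfile(
    'configs/classification/_base_/models/moganet/moga_small.py')
assert cfg.model.backbone.arch == 'small'  # was "tiny" before this commit
```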
35 changes: 35 additions & 0 deletions configs/classification/imagenet/alexnet/README.md
@@ -0,0 +1,35 @@
# AlexNet

> [ImageNet classification with deep convolutional neural networks](https://dl.acm.org/doi/10.1145/3065386)

## Abstract

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

<div align=center>
<img src="https://user-images.githubusercontent.com/44519745/204873304-0a481bc9-dbfc-4bb1-9139-5b499cff6ec4.png" width="90%"/>
</div>

## Results and models

We provide the implementation of AlexNet with a PyTorch-style training setting.

### ImageNet-1k

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config |
|:---:|:---:|:---:|:---:|:---:|:---:|
| AlexNet | 61.1 | 0.72 | 62.5 | 83.0 | [config](./alexnet_4xb64_cos_ep100.py) |

## Citation

```
@article{2017Krizhevsky,
author = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E.},
title = {ImageNet Classification with Deep Convolutional Neural Networks},
year = {2017},
journal = {Commun. ACM},
month = {may},
pages = {84--90},
numpages = {7}
}
```
36 changes: 36 additions & 0 deletions configs/classification/imagenet/alexnet/alexnet_4xb64_cos_ep100.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
_base_ = [
    '../../_base_/datasets/imagenet/basic_sz224_4xbs64.py',
    '../../_base_/default_runtime.py',
]

# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='AlexNet',
        num_classes=1000,
        cls_head=True),
    head=dict(
        type='ClsHead',  # normal CE loss
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=False, multi_label=False, in_channels=None, num_classes=None)
)

# data
data = dict(imgs_per_gpu=64, workers_per_gpu=4)

# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)

# fp16
use_fp16 = False
fp16 = dict(type='mmcv', loss_scale='dynamic')
# optimizer args
optimizer_config = dict(update_interval=1, grad_clip=None)

# lr scheduler
lr_config = dict(policy='CosineAnnealing', min_lr=1e-6)

# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=100)
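For reference, the `CosineAnnealing` policy above decays the learning rate from the base value (0.01 here) to `min_lr` over training; a minimal sketch of the schedule, assuming per-epoch granularity (mmcv's hook can also anneal per iteration):

```python
import math

def cosine_lr(t, base_lr=0.01, min_lr=1e-6, max_t=100):
    """Cosine annealing from base_lr (t=0) down to min_lr (t=max_t)."""
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t / max_t))

print(cosine_lr(0))    # 0.01    (start of training)
print(cosine_lr(50))   # ~0.005  (halfway)
print(cosine_lr(100))  # ~1e-6   (end of training)
```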
35 changes: 35 additions & 0 deletions configs/classification/imagenet/inception_v3/README.md
@@ -0,0 +1,35 @@
# Inception V3

> [Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/abs/1512.00567)

## Abstract

Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.

<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/177241797-c103eff4-79bb-414d-aef6-eac323b65a50.png" width="45%"/>
</div>

## Results and models

This page is based on the documentation in [MMClassification](https://github.com/open-mmlab/mmclassification).

### ImageNet-1k

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Inception V3\* | 23.83 | 5.75 | 77.57 | 93.58 | [config](./inception_v3_4xb64_cos_ep100.py) | [model](https://download.openmmlab.com/mmclassification/v0/inception-v3/inception-v3_3rdparty_8xb32_in1k_20220615-dcd4d910.pth) |

*Models with \* are converted from the [official repo](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py#L28). The config files of these models are only for inference; we don't ensure their training accuracy.*

## Citation

```
@inproceedings{szegedy2016rethinking,
title={Rethinking the inception architecture for computer vision},
author={Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jon and Wojna, Zbigniew},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)},
pages={2818--2826},
year={2016}
}
```
@@ -0,0 +1,36 @@
_base_ = [
    '../../_base_/datasets/imagenet/basic_sz299_4xbs64.py',
    '../../_base_/default_runtime.py',
]

# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='InceptionV3',
        num_classes=1000,
        aux_logits=False),
    head=dict(
        type='ClsHead',  # normal CE loss
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=False, multi_label=False, in_channels=None, num_classes=None)
)

# data
data = dict(imgs_per_gpu=64, workers_per_gpu=4)

# optimizer
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)

# fp16
use_fp16 = False
fp16 = dict(type='mmcv', loss_scale='dynamic')
# optimizer args
optimizer_config = dict(update_interval=1, grad_clip=None)

# lr scheduler
lr_config = dict(policy='CosineAnnealing', min_lr=1e-6)

# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=100)
@@ -0,0 +1,36 @@
_base_ = [
    '../../_base_/datasets/imagenet/basic_sz299_4xbs64.py',
    '../../_base_/default_runtime.py',
]

# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='InceptionV3',
        num_classes=1000,
        aux_logits=True),
    head=dict(
        type='ClsHead',  # normal CE loss
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=False, multi_label=False, in_channels=None, num_classes=None)
)

# data
data = dict(imgs_per_gpu=64, workers_per_gpu=4)

# optimizer
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)

# fp16
use_fp16 = False
fp16 = dict(type='mmcv', loss_scale='dynamic')
# optimizer args
optimizer_config = dict(update_interval=1, grad_clip=None)

# lr scheduler
lr_config = dict(policy='CosineAnnealing', min_lr=1e-6)

# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=100)
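The only difference from the preceding config is `aux_logits=True`, which enables Inception V3's auxiliary classifier head. During training, its logits are usually combined with the main loss under a small weight; the torchvision-based sketch below is illustrative (the 0.4 weight is a common convention, not something this commit specifies):

```python
import torch
import torchvision.models as models

model = models.inception_v3(weights=None, aux_logits=True)
model.train()
criterion = torch.nn.CrossEntropyLoss()

x = torch.randn(2, 3, 299, 299)  # Inception V3 expects 299x299 inputs
target = torch.randint(0, 1000, (2,))
out = model(x)                   # InceptionOutputs(logits, aux_logits) in train mode
loss = criterion(out.logits, target) + 0.4 * criterion(out.aux_logits, target)
loss.backward()
```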