v0.2.3, update configs
Lupin1998 committed Jun 17, 2022
1 parent 7566583 commit ea81250
Showing 198 changed files with 6,858 additions and 278 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -2,7 +2,7 @@
# OpenMixup

**News**
- * OpenMixup v0.2.3 is released, which supports new self-supervised and mixup methods (e.g., [A2MIM](https://arxiv.org/abs/2205.13943)), and adds new features as [#6](https://github.com/Westlake-AI/openmixup/issues/6). The online document is available.
+ * OpenMixup v0.2.3 is released, which supports new self-supervised and mixup methods (e.g., [A2MIM](https://arxiv.org/abs/2205.13943)) and backbones ([UniFormer](https://arxiv.org/abs/2201.09450)), updates the [online document](https://westlake-ai.github.io/openmixup/) and config files, and adds new features as [#6](https://github.com/Westlake-AI/openmixup/issues/6).
* OpenMixup v0.2.2 is released, which supports new self-supervised methods ([BarlowTwins](https://arxiv.org/abs/2103.03230), [SimMIM](https://arxiv.org/abs/2111.09886), etc.), backbones ([ConvMixer](https://arxiv.org/pdf/2201.09792.pdf), [MLPMixer](https://arxiv.org/pdf/2105.01601.pdf), [VAN](https://arxiv.org/pdf/2202.09741v2.pdf), etc.), and losses as [#5](https://github.com/Westlake-AI/openmixup/issues/5).
* OpenMixup v0.2.1 is released, which supports new methods as [#4](https://github.com/Westlake-AI/openmixup/issues/4) (bugs fixed).
* OpenMixup v0.2.0 is released, which supports new features as [#3](https://github.com/Westlake-AI/openmixup/issues/3). We have reorganized configs and fixed bugs.
@@ -28,9 +28,6 @@
data_train_root = 'data/ImageNet/train'
data_test_list = 'data/meta/ImageNet/val_labeled.txt'
data_test_root = 'data/ImageNet/val/'
-# Notice: Though official DeiT settings use `RepeatAugment`, we achieve competitive performance
-# without it. This repo removes `RepeatAugment`.
-sampler = "DistributedSampler"

dataset_type = 'ClassificationDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
@@ -28,9 +28,6 @@
data_train_root = 'data/ImageNet/train'
data_test_list = 'data/meta/ImageNet/val_labeled.txt'
data_test_root = 'data/ImageNet/val/'
-# Notice: Though official DeiT settings use `RepeatAugment`, we achieve competitive performance
-# without it. This repo removes `RepeatAugment`.
-sampler = "DistributedSampler"

dataset_type = 'ClassificationDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
@@ -0,0 +1,88 @@
# Refers to `_RAND_INCREASING_TRANSFORMS` in pytorch-image-models
rand_increasing_policies = [
    dict(type='AutoContrast'),
    dict(type='Equalize'),
    dict(type='Invert'),
    dict(type='Rotate', magnitude_key='angle', magnitude_range=(0, 30)),
    dict(type='Posterize', magnitude_key='bits', magnitude_range=(4, 0)),
    dict(type='Solarize', magnitude_key='thr', magnitude_range=(256, 0)),
    dict(type='SolarizeAdd', magnitude_key='magnitude', magnitude_range=(0, 110)),
    dict(type='ColorTransform', magnitude_key='magnitude', magnitude_range=(0, 0.9)),
    dict(type='Contrast', magnitude_key='magnitude', magnitude_range=(0, 0.9)),
    dict(type='Brightness', magnitude_key='magnitude', magnitude_range=(0, 0.9)),
    dict(type='Sharpness', magnitude_key='magnitude', magnitude_range=(0, 0.9)),
    dict(type='Shear',
        magnitude_key='magnitude', magnitude_range=(0, 0.3), direction='horizontal'),
    dict(type='Shear',
        magnitude_key='magnitude', magnitude_range=(0, 0.3), direction='vertical'),
    dict(type='Translate',
        magnitude_key='magnitude', magnitude_range=(0, 0.45), direction='horizontal'),
    dict(type='Translate',
        magnitude_key='magnitude', magnitude_range=(0, 0.45), direction='vertical'),
]
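# Note (added for clarity, following timm's `_RAND_INCREASING_TRANSFORMS`
# convention): a higher magnitude level always means a stronger augmentation,
# so ranges such as Posterize (4, 0) and Solarize (256, 0) run downward --
# the applied value interpolates from the first endpoint toward the second
# as the sampled magnitude rises.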

# dataset settings
data_source_cfg = dict(type='ImageNet')
# ImageNet dataset
data_train_list = 'data/meta/ImageNet/train_labeled_full.txt'
data_train_root = 'data/ImageNet/train'
data_test_list = 'data/meta/ImageNet/val_labeled.txt'
data_test_root = 'data/ImageNet/val/'

dataset_type = 'ClassificationDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline = [
    dict(type='RandomResizedCrop', size=384, interpolation=3),  # bicubic
    dict(type='RandomHorizontalFlip'),
    dict(type='RandAugment',
        policies=rand_increasing_policies,
        num_policies=2, total_level=10,
        magnitude_level=9, magnitude_std=0.5,  # DeiT or Swin
        hparams=dict(
            pad_val=[104, 116, 124], interpolation='bicubic')),
    dict(
        type='RandomErasing_numpy',  # before ToTensor and Normalize
        erase_prob=0.25,
        mode='rand', min_area_ratio=0.02, max_area_ratio=1 / 3,
        fill_color=[104, 116, 124], fill_std=[58, 57, 57]),  # RGB
]
test_pipeline = [
    dict(type='Resize', size=384, interpolation=3),  # 1.0
    dict(type='CenterCrop', size=384),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
# prefetch
prefetch = True
if not prefetch:
    train_pipeline.extend([dict(type='ToTensor'), dict(type='Normalize', **img_norm_cfg)])

data = dict(
    imgs_per_gpu=64,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_source=dict(
            list_file=data_train_list, root=data_train_root,
            **data_source_cfg),
        pipeline=train_pipeline,
        prefetch=prefetch,
    ),
    val=dict(
        type=dataset_type,
        data_source=dict(
            list_file=data_test_list, root=data_test_root, **data_source_cfg),
        pipeline=test_pipeline,
        prefetch=False,
    ))

# validation hook
evaluation = dict(
    initial=False,
    interval=1,
    imgs_per_gpu=128,
    workers_per_gpu=4,
    eval_param=dict(topk=(1, 5)))

# checkpoint
checkpoint_config = dict(interval=10, max_keep_ckpts=1)
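As an aside, these config files are plain Python and can be loaded with mmcv's Config utility. A minimal sketch (the file path below is hypothetical; point it at the actual dataset config in your checkout) showing how to inspect the fields defined above:

from mmcv import Config

# Hypothetical path -- adjust to the real location of the dataset config.
cfg = Config.fromfile('configs/classification/_base_/datasets/imagenet_rand_aug_384.py')
print(cfg.data.imgs_per_gpu)  # 64
print(cfg.prefetch)           # True: ToTensor/Normalize stay out of the CPU
                              # pipeline (see the `if not prefetch` branch above)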
@@ -40,7 +40,6 @@
        magnitude_level=9, magnitude_std=0.5,
        hparams=dict(
            pad_val=[104, 116, 124], interpolation='bicubic')),
-    dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
    dict(
        type='RandomErasing_numpy',  # before ToTensor and Normalize
        erase_prob=0.25,
2 changes: 1 addition & 1 deletion configs/classification/_base_/default_runtime.py
@@ -6,7 +6,7 @@
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
-        dict(type='TensorboardLoggerHook')  # Bug: remove TensorboardLoggerHook in PyTorch 1.10
+        # dict(type='TensorboardLoggerHook')  # Bug: remove TensorboardLoggerHook in PyTorch 1.10
    ])
# yapf:enable
# runtime settings
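If TensorBoard logging is still wanted, one option is to gate the hook on the runtime PyTorch version. A small sketch, under the assumption that the incompatibility is specific to PyTorch 1.10:

import torch

# Assumption: the hook only misbehaves on PyTorch 1.10; enable it elsewhere.
hooks = [dict(type='TextLoggerHook')]
if not torch.__version__.startswith('1.10'):
    hooks.append(dict(type='TensorboardLoggerHook'))
log_config = dict(interval=50, hooks=hooks)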
@@ -0,0 +1,19 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvMixer',
        arch='1024/20',
        act_cfg=dict(type='GELU'),
    ),
    head=dict(
        type='ClsMixupHead',  # mixup CE + label smooth
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=True,
        in_channels=1024, num_classes=1000)
)
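For intuition, the `alpha` and `mix_mode` lists above pair element-wise: mixup draws its mixing ratio from Beta(0.8, 0.8) and cutmix from Beta(1.0, 1.0), with one mode typically sampled per batch. A minimal plain-PyTorch sketch of that behavior (illustrative only, not OpenMixup's actual implementation):

import random

import numpy as np
import torch

def mix_batch(x, y, alphas=(0.8, 1.0), modes=('mixup', 'cutmix')):
    """Pick one mix mode per batch and blend images accordingly."""
    i = random.randrange(len(modes))
    lam = float(np.random.beta(alphas[i], alphas[i]))
    perm = torch.randperm(x.size(0))
    if modes[i] == 'mixup':
        x = lam * x + (1.0 - lam) * x[perm]  # pixel-wise blend
    else:  # cutmix: paste a random box from the permuted batch
        h, w = x.shape[2:]
        rh, rw = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
        cy, cx = random.randrange(h), random.randrange(w)
        y1, y2 = max(cy - rh // 2, 0), min(cy + rh // 2, h)
        x1, x2 = max(cx - rw // 2, 0), min(cx + rw // 2, w)
        x = x.clone()
        x[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
        lam = 1.0 - (y2 - y1) * (x2 - x1) / float(h * w)  # box-area correction
    # loss would be lam * CE(out, y) + (1 - lam) * CE(out, y[perm])
    return x, y, y[perm], lam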
@@ -0,0 +1,19 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvMixer',
        arch='1536/20',
        act_cfg=dict(type='GELU'),
    ),
    head=dict(
        type='ClsMixupHead',  # mixup CE + label smooth
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=True,
        in_channels=1536, num_classes=1000)
)
@@ -0,0 +1,19 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvMixer',
        arch='768/32',
        act_cfg=dict(type='ReLU'),
    ),
    head=dict(
        type='ClsMixupHead',  # mixup CE + label smooth
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=True,
        in_channels=768, num_classes=1000)
)
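All three ConvMixer heads use cross-entropy over smoothed targets. A sketch of label smoothing in the (1 - eps) * onehot + eps / K form, which is assumed here to correspond to mode='original' (illustrative, not the repository's LabelSmoothLoss):

import torch
import torch.nn.functional as F

def smooth_ce(logits, target, eps=0.1):
    """Cross-entropy against (1 - eps) * onehot + eps / K targets."""
    k = logits.size(1)
    logp = F.log_softmax(logits, dim=1)
    smoothed = F.one_hot(target, k).float() * (1 - eps) + eps / k
    return -(smoothed * logp).sum(dim=1).mean()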
22 changes: 22 additions & 0 deletions configs/classification/_base_/models/convnext/convnext_base.py
@@ -0,0 +1,22 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvNeXt',
        arch='base',
        out_indices=(3,),  # x-1: stage-x
        act_cfg=dict(type='GELU'),
        drop_path_rate=0.5,
        gap_before_final_norm=True,
    ),
    head=dict(
        type='ClsMixupHead',
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=False,
        in_channels=1024, num_classes=1000)
)
22 changes: 22 additions & 0 deletions configs/classification/_base_/models/convnext/convnext_large.py
@@ -0,0 +1,22 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvNeXt',
        arch='large',
        out_indices=(3,),  # x-1: stage-x
        act_cfg=dict(type='GELU'),
        drop_path_rate=0.5,
        gap_before_final_norm=True,
    ),
    head=dict(
        type='ClsMixupHead',
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=False,
        in_channels=1536, num_classes=1000)
)
22 changes: 22 additions & 0 deletions configs/classification/_base_/models/convnext/convnext_small.py
@@ -0,0 +1,22 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvNeXt',
        arch='small',
        out_indices=(3,),  # x-1: stage-x
        act_cfg=dict(type='GELU'),
        drop_path_rate=0.4,
        gap_before_final_norm=True,
    ),
    head=dict(
        type='ClsMixupHead',
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=False,
        in_channels=768, num_classes=1000)
)
22 changes: 22 additions & 0 deletions configs/classification/_base_/models/convnext/convnext_tiny.py
@@ -0,0 +1,22 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvNeXt',
        arch='tiny',
        out_indices=(3,),  # x-1: stage-x
        act_cfg=dict(type='GELU'),
        drop_path_rate=0.1,
        gap_before_final_norm=True,
    ),
    head=dict(
        type='ClsMixupHead',
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=False,
        in_channels=768, num_classes=1000)
)
22 changes: 22 additions & 0 deletions configs/classification/_base_/models/convnext/convnext_xlarge.py
@@ -0,0 +1,22 @@
# model settings
model = dict(
    type='MixUpClassification',
    pretrained=None,
    alpha=[0.8, 1.0,],
    mix_mode=["mixup", "cutmix",],
    mix_args=dict(),
    backbone=dict(
        type='ConvNeXt',
        arch='xlarge',
        out_indices=(3,),  # x-1: stage-x
        act_cfg=dict(type='GELU'),
        drop_path_rate=0.5,
        gap_before_final_norm=True,
    ),
    head=dict(
        type='ClsMixupHead',
        loss=dict(type='LabelSmoothLoss',
            label_smooth_val=0.1, num_classes=1000, mode='original', loss_weight=1.0),
        with_avg_pool=False,
        in_channels=2048, num_classes=1000)
)
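Across the five ConvNeXt configs only `arch`, `drop_path_rate`, and the head's `in_channels` vary; everything else is shared. Note `with_avg_pool=False` because `gap_before_final_norm=True` already pools inside the backbone. The per-arch values, copied from the configs above:

# (drop_path_rate, head in_channels) per arch, copied from the configs above.
CONVNEXT_VARIANTS = {
    'tiny':   (0.1, 768),
    'small':  (0.4, 768),
    'base':   (0.5, 1024),
    'large':  (0.5, 1536),
    'xlarge': (0.5, 2048),
}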
13 changes: 13 additions & 0 deletions configs/classification/_base_/models/densenet/densenet121.py
@@ -0,0 +1,13 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='DenseNet', arch='121',
        out_indices=(3,),  # x-1: stage-x
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=1024, num_classes=1000)
)
13 changes: 13 additions & 0 deletions configs/classification/_base_/models/densenet/densenet161.py
@@ -0,0 +1,13 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='DenseNet', arch='161',
        out_indices=(3,),  # x-1: stage-x
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=2208, num_classes=1000)
)
13 changes: 13 additions & 0 deletions configs/classification/_base_/models/densenet/densenet169.py
@@ -0,0 +1,13 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='DenseNet', arch='169',
        out_indices=(3,),  # x-1: stage-x
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=1664, num_classes=1000)
)
13 changes: 13 additions & 0 deletions configs/classification/_base_/models/densenet/densenet201.py
@@ -0,0 +1,13 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='DenseNet', arch='201',
        out_indices=(3,),  # x-1: stage-x
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=1920, num_classes=1000)
)
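In each DenseNet config the head's `in_channels` equals the backbone's final feature width, which follows from the growth rate and block sizes; for DenseNet-121 (growth rate 32, blocks of 6/12/24/16 layers, channels halved at each of the three transitions) that is ((((64 + 6*32)/2 + 12*32)/2 + 24*32)/2 + 16*32) = 1024. The four widths, copied from the configs above:

# Head in_channels per DenseNet arch, copied from the four configs above.
DENSENET_WIDTH = {'121': 1024, '161': 2208, '169': 1664, '201': 1920}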
@@ -0,0 +1,15 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='EfficientNet',
        arch='b0',
        out_indices=(6,),  # x-1: stage-x
        norm_cfg=dict(type='BN', eps=1e-3),
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=1280, num_classes=1000)
)
@@ -0,0 +1,15 @@
# model settings
model = dict(
    type='Classification',
    pretrained=None,
    backbone=dict(
        type='EfficientNet',
        arch='b1',
        out_indices=(6,),  # x-1: stage-x
        norm_cfg=dict(type='BN', eps=1e-3),
    ),
    head=dict(
        type='ClsHead',
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        with_avg_pool=True, in_channels=1280, num_classes=1000)
)
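Both EfficientNet configs share the 1280-channel head and a BatchNorm epsilon of 1e-3, which matches the TensorFlow default used by the reference implementation rather than PyTorch's 1e-5. Summarized from the two configs above:

# Shared settings copied from the two EfficientNet configs above.
EFFNET_NORM = dict(type='BN', eps=1e-3)  # TF-style epsilon, not PyTorch's default 1e-5
EFFNET_HEAD_IN_CHANNELS = 1280           # same final feature width for b0 and b1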