### Download data

#### Pre-processed data from Drive

We provide the versions of Pascal-VOC 2012 and MS-COCO 2017 used in this work at [icloud drive](https://www.icloud.com/iclouddrive/036FM-VRSeRfHsRqFxav-dGoA#RePRI). Because of its size, the data folder has been sharded. Download all the shards and use the `cat` command to reassemble the original archive before extracting it (see the sketch after the tree below). Here is the structure of the data folder for you to reproduce:

```
data
├── coco
| ├── annotations
| ├── train
| ├── train2014
| ├── val
| └── val2014
└── pascal
| ├── JPEGImages
| └── SegmentationClassAug
```
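
To reassemble the sharded download, here is a minimal sketch, assuming the archive is a zip split into shards named `data.zip.aa`, `data.zip.ab`, … (the actual shard names are whatever the drive serves):

```bash
# Concatenate the shards back into a single archive (shard names are
# hypothetical), then extract it at the root of this repo.
cat data.zip.* > data.zip
unzip data.zip
```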

#### From scratch

**Pascal**: The JPEG images come from the PascalVOC 2012 toolkit, which can be downloaded at [PascalVOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar); the pre-processed ground-truth masks are available at [SegmentationClassAug](https://mycuhk-my.sharepoint.com/personal/1155122171_link_cuhk_edu_hk/_layouts/15/onedrive.aspx?id=%2Fpersonal%2F1155122171%5Flink%5Fcuhk%5Fedu%5Fhk%2FDocuments%2FTPAMI%20Submission%2FPFENet%5Fcheckpoints%2Fgt%5Fvoc%2Ezip&parent=%2Fpersonal%2F1155122171%5Flink%5Fcuhk%5Fedu%5Fhk%2FDocuments%2FTPAMI%20Submission%2FPFENet%5Fcheckpoints&originalPath=aHR0cHM6Ly9teWN1aGstbXkuc2hhcmVwb2ludC5jb20vOnU6L2cvcGVyc29uYWwvMTE1NTEyMjE3MV9saW5rX2N1aGtfZWR1X2hrL0VSZ3lTb05ZYjdoQnF2REJFOHo0cVZzQmg2dTNLaVdOQllEWUJNZWcxemdFS0E_cnRpbWU9ZTVBTWNtdTgyRWc).
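
For reference, a hedged sketch of fetching and unpacking the Pascal images; the VOCdevkit/VOC2012 layout is standard for this tarball, while the final move into data/pascal/ is an assumption based on the tree above:

```bash
# Download and extract the VOC2012 trainval archive.
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xf VOCtrainval_11-May-2012.tar
# Assumed step: place the JPEG images where the data/ tree expects them.
mv VOCdevkit/VOC2012/JPEGImages data/pascal/JPEGImages
```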

**Coco**: Coco 2014 train and validation images and annotations can be downloaded at [Coco](https://cocodataset.org/#download). Once this is done, you will need to generate the subfolders coco/train and coco/val (ground-truth masks). Both folders can be generated by executing the Python script data/coco/create_masks.py, which relies on the pycocotools package available at https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools.
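
A minimal sketch of that step, assuming create_masks.py takes no arguments and reads the downloaded annotations in place (its exact CLI is not documented here):

```bash
# The mask-generation script relies on pycocotools.
pip install pycocotools
# Assumed invocation: writes the masks to data/coco/train and data/coco/val.
python data/coco/create_masks.py
```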
The train/val splits are directly provided in lists/. How they were obtained is explained at https://github.com/dvlab-research/PFENet.

### Download pre-trained models

We directly provide the full pre-trained models at [icloud drive](https://www.icloud.com/iclouddrive/036FM-VRSeRfHsRqFxav-dGoA#RePRI). You can download them and extract them directly at the root of this repo. This includes ResNet-50 and ResNet-101 backbones on Pascal-5i, and ResNet-50 on Coco-20i.
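
A minimal sketch of the extraction step, assuming the drive serves a single zip archive (the file name `models.zip` is a placeholder):

```bash
# Extract the pre-trained models at the root of this repo; the archive
# name below is hypothetical.
unzip models.zip -d .
```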

## Overview of the repo

Data are located in data/. All the code is provided in src/. Default configuration files can be found in config/.

## Training (optional)

If you want to use the pre-trained models, this step is optional. Otherwise, you will need to create the `initmodel/` folder and fill it with ImageNet-pretrained backbones, as explained at https://github.com/dvlab-research/PFENet. Then you can train your own models from scratch with the scripts/train.sh script, as follows.

```bash
bash scripts/train.sh {data} {fold} {[gpu_ids]} {layers}
```
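
For example, a hypothetical invocation for Pascal, fold 0, on GPU 0, with a ResNet-50 backbone (the argument values below are illustrative, not prescriptive):

```bash
bash scripts/train.sh pascal 0 [0] 50
```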
