From aa54b19b696623ff8a072a19c7392d43f4c914f7 Mon Sep 17 00:00:00 2001
From: Malik Boudiaf <40210629+mboudiaf@users.noreply.github.com>
Date: Sun, 5 Mar 2023 17:14:37 -0500
Subject: [PATCH] Update README with new valid file (#23)

---
 README.md | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index e7d60279..50223dab 100644
--- a/README.md
+++ b/README.md
@@ -27,13 +27,9 @@ pip install git+https://github.com/luizgh/visdom_logger.git
 
 ### Download data
 
-#### All pre-processed from Google Drive
+#### Pre-processed data from iCloud Drive
 
-We provide the versions of Pascal-VOC 2012 and MS-COCO 2017 used in this work at https://drive.google.com/file/d/1Lj-oBzBNUsAqA9y65BDrSQxirV8S15Rk/view?usp=sharing. You can download the full .zip and directly extract it at the root of this repo.
-
-#### If the previous download failed
-
-Here is the structure of the data folder for you to reproduce:
+We provide the versions of Pascal-VOC 2012 and MS-COCO 2017 used in this work at [icloud drive](https://www.icloud.com/iclouddrive/036FM-VRSeRfHsRqFxav-dGoA#RePRI). Because of its size, the data folder has been split into shards. Download all the shards and use the `cat` command to reassemble the original archive before extracting it (a sketch is given below the folder tree). Here is the structure of the data folder for you to reproduce:
 
 ```
 data
@@ -47,6 +43,18 @@ data
 | ├── JPEGImages
 | └── SegmentationClassAug
 ```
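+
+A minimal sketch of the reassembly step, assuming the shards are named `data.zip.*` (check the actual file names on the drive):
+
+```bash
+# Concatenate the downloaded shards back into a single archive,
+# then extract it at the root of this repo.
+cat data.zip.* > data.zip
+unzip data.zip
+```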
+
+#### From scratch
+
 **Pascal** : The JPEG images can be found in the PascalVOC 2012 toolkit to be downloaded at [PascalVOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar) and [SegmentationClassAug](https://mycuhk-my.sharepoint.com/personal/1155122171_link_cuhk_edu_hk/_layouts/15/onedrive.aspx?id=%2Fpersonal%2F1155122171%5Flink%5Fcuhk%5Fedu%5Fhk%2FDocuments%2FTPAMI%20Submission%2FPFENet%5Fcheckpoints%2Fgt%5Fvoc%2Ezip&parent=%2Fpersonal%2F1155122171%5Flink%5Fcuhk%5Fedu%5Fhk%2FDocuments%2FTPAMI%20Submission%2FPFENet%5Fcheckpoints&originalPath=aHR0cHM6Ly9teWN1aGstbXkuc2hhcmVwb2ludC5jb20vOnU6L2cvcGVyc29uYWwvMTE1NTEyMjE3MV9saW5rX2N1aGtfZWR1X2hrL0VSZ3lTb05ZYjdoQnF2REJFOHo0cVZzQmg2dTNLaVdOQllEWUJNZWcxemdFS0E_cnRpbWU9ZTVBTWNtdTgyRWc) (pre-processed ground-truth masks).
 
 **Coco** : Coco 2014 train, validation images and annotations can be downloaded at [Coco](https://cocodataset.org/#download). Once this is done, you will have to generate the subfolders coco/train and coco/val (ground truth masks). Both folders can be generated by executing the python script data/coco/create_masks.py (note that the script uses the package pycocotools that can be found at https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools):
@@ -63,11 +71,7 @@ The train/val splits are directly provided in lists/. How they were obtained is
 
 ### Download pre-trained models
 
-#### Pre-trained backbones
-First, you will need to download the ImageNet pre-trained backbones at https://drive.google.com/drive/folders/1Hrz1wOxOZm4nIIS7UMJeL79AQrdvpj6v and put them under initmodel/. These will be used if you decide to train your models from scratch.
-
-#### Pre-trained models
-We directly provide the full pre-trained models at https://drive.google.com/file/d/1iuMAo5cJ27oBdyDkUI0JyGIEH60Ln2zm/view?usp=sharing. You can download them and directly extract them at the root of this repo. This includes Resnet50 and Resnet101 backbones on Pascal-5i, and Resnet50 on Coco-20i.
+We directly provide the full pre-trained models at [icloud drive](https://www.icloud.com/iclouddrive/036FM-VRSeRfHsRqFxav-dGoA#RePRI). You can download them and directly extract them at the root of this repo. This includes Resnet50 and Resnet101 backbones on Pascal-5i, and Resnet50 on Coco-20i.
 
 ## Overview of the repo
 
@@ -76,7 +80,9 @@ Data are located in data/. All the code is provided in src/. Default configurati
 
 ## Training (optional)
 
-If you want to use the pre-trained models, this step is optional. Otherwise, you can train your own models from scratch with the scripts/train.sh script, as follows.
+If you want to use the pre-trained models, this step is optional. Otherwise, you will need to create and fill the `initmodel/` folder with ImageNet-pretrained models, as explained in https://github.com/dvlab-research/PFENet. Then, you can train your own models from scratch with the scripts/train.sh script, as follows.
 
 ```python
 bash scripts/train.sh {data} {fold} {[gpu_ids]} {layers}
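+# For example, with assumed values (fold 0 of Pascal-5i on GPU 0, ResNet-50 backbone):
+# bash scripts/train.sh pascal 0 [0] 50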