Muzhi Zhu1, Hengtao Li1, Hao Chen1, Chengxiang Fan1, Weian Mao2,1, Chenchen Jing1, Yifan Liu2, Chunhua Shen1
1Zhejiang University, 2The University of Adelaide
- [2023/07/14] Our work SegPrompt is accepted by Int. Conf. Computer Vision (ICCV) 2023! 🎉🎉🎉
- [2023/08/30] We release our new benchmark LVIS-OW.
Please follow the installation instructions in Mask2Former, then install the additional dependencies:
pip install torchshow
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu113.html
pip install lvis
pip install setuptools==59.5.0
pip install seaborn
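To quickly verify that the environment is set up correctly, an import check along these lines can be run (a minimal sketch, not part of the official setup; it only assumes the packages installed above plus Detectron2/Mask2Former):

```python
# Quick environment sanity check (a minimal sketch, not part of the official setup).
import torch
import detectron2
import torch_scatter  # the wheel must match your torch/CUDA version (see the pip command above)
import lvis
import torchshow

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)
```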
Here we provide our proposed new benchmark, LVIS-OW.
First, prepare the COCO and LVIS datasets and place them under $DETECTRON2_DATASETS, following Detectron2.
The dataset structure is as follows:
datasets/
  coco/
    annotations/
      instances_{train,val}2017.json
    {train,val}2017/
  lvis/
    lvis_v1_{train,val}.json
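Before training, you can optionally confirm the layout with a small check like the following (a minimal sketch; it only tests for the files listed above and assumes $DETECTRON2_DATASETS points at the datasets/ directory, defaulting to ./datasets):

```python
# Check that the expected COCO/LVIS files exist under $DETECTRON2_DATASETS (sketch only).
import os

root = os.environ.get("DETECTRON2_DATASETS", "datasets")
expected = [
    "coco/annotations/instances_train2017.json",
    "coco/annotations/instances_val2017.json",
    "coco/train2017",
    "coco/val2017",
    "lvis/lvis_v1_train.json",
    "lvis/lvis_v1_val.json",
]
for rel in expected:
    path = os.path.join(root, rel)
    print(("ok     " if os.path.exists(path) else "MISSING"), path)
```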
We reorganize the dataset and divide the categories into Known, Seen, and Unseen splits to better evaluate open-world models. The JSON files can be downloaded from here.
Alternatively, you can generate them directly from the COCO and LVIS JSON files with the following command:
bash tools/prepare_lvisow.sh
After you have obtained lvis_v1_train_ow.json and lvis_v1_val_resplit_r.json, refer to here to register the training and test sets. You can then use our benchmark for training and testing.
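For reference, registration with Detectron2's LVIS helper looks roughly like the sketch below; the dataset names and file locations here are illustrative placeholders, so follow the linked code for the registration actually used in this repo:

```python
# Illustrative registration of the LVIS-OW splits with detectron2.
# Dataset names ("lvis_ow_train", "lvis_ow_val_r") and json locations are hypothetical.
import os
from detectron2.data.datasets.lvis import register_lvis_instances

root = os.environ.get("DETECTRON2_DATASETS", "datasets")

register_lvis_instances(
    "lvis_ow_train", {},
    os.path.join(root, "lvis/lvis_v1_train_ow.json"),
    os.path.join(root, "coco/"),  # LVIS reuses the COCO images
)
register_lvis_instances(
    "lvis_ow_val_r", {},
    os.path.join(root, "lvis/lvis_v1_val_resplit_r.json"),
    os.path.join(root, "coco/"),
)
```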
python tools/eval_lvis_ow.py --dt-json-file output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json
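The --dt-json-file argument points at the instance-segmentation results written during inference. These follow the standard LVIS results format, so they can also be inspected with the plain lvis API as in the sketch below (paths are illustrative); note that tools/eval_lvis_ow.py remains the authoritative script, since it additionally reports metrics on the Known/Seen/Unseen splits:

```python
# Standard LVIS-style evaluation of the results file (a sketch; not the repo's own metric script).
from lvis import LVIS, LVISResults, LVISEval

gt_path = "datasets/lvis/lvis_v1_val_resplit_r.json"  # illustrative location
dt_path = "output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json"

lvis_gt = LVIS(gt_path)
lvis_dt = LVISResults(lvis_gt, dt_path)
lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type="segm")
lvis_eval.run()
lvis_eval.print_results()
```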
We thank the following repos for their great work:
If you find this project useful for your research, please cite our paper.
For non-commercial academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.