
Trained discriminator not working after netD.eval() or shuffle = False #10

Open
1214710638 opened this issue Jul 4, 2022 · 8 comments


@1214710638

Hello, thanks for open-sourcing your code. I noticed that in demo_CrossDatasetOpenSet_testing.ipynb you don't set netD.eval() before testing, and the dataloader is defined with shuffle=True. I trained and tested the model following your demo; however, when I set netD.eval() or shuffle=False in the dataloader, the test results are poor and the discriminator does not work at all. Am I missing something, or do you have any suggestions?
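
For reference, the setup I'm testing is essentially the following sketch (the toy discriminator and dummy features are stand-ins, not your actual netD; only the netD.eval() and shuffle=False lines matter here):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the trained discriminator; the real netD comes
# from the training demo. What matters is that it contains BatchNorm.
netD = nn.Sequential(
    nn.Linear(512, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)

# Dummy features standing in for the real closed-/open-set test features.
test_set = TensorDataset(torch.randn(256, 512))

netD.eval()  # freeze BatchNorm: use running stats instead of per-batch stats
loader = DataLoader(test_set, batch_size=64, shuffle=False)  # fixed order

with torch.no_grad():
    for (feats,) in loader:
        scores = netD(feats)  # per-sample open-set confidence scores
```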

@1214710638 1214710638 changed the title Trained discriminator not working when after netD.eval() or shuffle = False Trained discriminator not working after netD.eval() or shuffle = False Jul 4, 2022
@aimerykong
Owner

aimerykong commented Jul 4, 2022 via email

@1214710638
Author

Thanks for your reply. It took me some time to generate the two curves. Here they are:
[Two screenshots: test result curves, with and without netD.eval()]
As you can see, the discriminator does not work properly after netD.eval().

@1214710638
Author

[Two screenshots: mean confidence score curves, with and without netD.eval()]
Here are the curves for the mean confidence scores as well; you can see that with netD.eval() the closed- and open-set scores are inseparable.

@aimerykong
Owner

aimerykong commented Jul 4, 2022 via email

@1214710638
Author

I checked more setups, and none of them worked in eval mode. I looked closely at your released test demo and discussed it with some other people. Producing test results in train mode with batched inputs might amount to a kind of cheating: in train mode, batch norm computes its statistics from the given batch, and in your setup each batch consists entirely of open-set samples (or entirely of closed-set samples). You therefore indirectly use the label information, or at least the test distribution, to produce the test results, which is an unfair comparison against other methods and could undermine the claims in the paper. You might want to take a closer look at this and clarify it with more experiments.
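
To make this concrete, here is a toy sketch (the BatchNorm layer and the synthetic closed/open features are my assumptions, not your code) showing that in train mode the same sample gets a different output depending on its batchmates, while eval mode gives a batch-independent result:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4)

# Synthetic features: "closed-set" near 0, "open-set" shifted to +3.
closed = torch.randn(32, 4)
open_set = torch.randn(32, 4) + 3.0

sample = open_set[:1]  # one fixed open-set sample, scored twice below

# Train mode: normalization uses the statistics of the current batch, so
# the output for `sample` depends on what else is in the batch.
bn.train()
pure = bn(torch.cat([sample, open_set[1:]]))[0]  # batch of only open-set
mixed = bn(torch.cat([sample, closed[1:]]))[0]   # batch mixed with closed-set
print(pure, mixed, sep="\n")  # different outputs for the very same sample

# Eval mode: normalization uses frozen running stats, so the output for
# `sample` is identical regardless of batch composition.
bn.eval()
print(bn(torch.cat([sample, open_set[1:]]))[0])
print(bn(torch.cat([sample, closed[1:]]))[0])  # same as the line above
```

In other words, a batch made purely of open-set samples gets normalized toward zero mean, so the batch composition itself leaks the very label the discriminator is supposed to predict.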

@aimerykong
Owner

aimerykong commented Jul 5, 2022 via email

@hutudebug

In my experiments, even with .train() mode turned on in the test phase, the performance still depends heavily on the batch size, the order of the test sequence, etc., which makes this idea hard to follow :(
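
A quick toy check of the order dependence (synthetic features, not the actual OpenGAN pipeline): the same samples get different train-mode outputs depending purely on how the test sequence is batched.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4).train()  # train mode, as in the released test demo

# Mixed test set: 64 "closed" samples near 0, 64 "open" samples near +3.
feats = torch.cat([torch.randn(64, 4), torch.randn(64, 4) + 3.0])

def forward_in_order(order, batch_size=64):
    """Run the whole set through bn in the given order and batch size."""
    out = torch.empty_like(feats)
    for i in range(0, len(order), batch_size):
        idx = order[i:i + batch_size]
        out[idx] = bn(feats[idx])
    return out

natural = forward_in_order(torch.arange(128))     # classes grouped per batch
shuffled = forward_in_order(torch.randperm(128))  # classes mixed per batch
print((natural - shuffled).abs().max())  # nonzero: same samples, different outputs
```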

@libo-huang

> In my experiments, even with .train() mode turned on in the test phase, the performance still depends heavily on the batch size, the order of the test sequence, etc., which makes this idea hard to follow :(

When testing in eval mode I also get random results; that is, netD.eval() in OpenGAN doesn't work for solving open-set problems. Can anyone help check whether we are wrong somewhere?
