That's a great question! Here is how the accuracy (Acc) is reported in each table:
All experiments on UCI datasets & synthetic binary data (Tables 1, 2, 9, 10, 11):
We report the test accuracy at the final epoch of training;
Synthetic noisy CIFAR datasets (Tables 3, 4):
Since our primary purpose here is to compare the potential/optimal performance of NLS and LS, we report the best-achieved test accuracy in Tables 3 & 4, so that the comparison is not made at a suboptimal local point of training.
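The two reporting conventions above can be sketched as follows (a minimal illustration with hypothetical names, not the paper's actual code):

```python
def summarize_accuracies(per_epoch_test_acc):
    """Given a list of per-epoch test accuracies, return (final, best).

    final_acc: accuracy at the last training epoch (Tables 1, 2, 9-11);
    best_acc:  best accuracy achieved over all epochs (Tables 3 & 4).
    """
    final_acc = per_epoch_test_acc[-1]
    best_acc = max(per_epoch_test_acc)
    return final_acc, best_acc

# Toy example: accuracy peaks mid-training, then degrades slightly.
accs = [0.62, 0.71, 0.78, 0.76, 0.74]
print(summarize_accuracies(accs))  # (0.74, 0.78)
```

The gap between the two numbers is exactly why the choice of convention matters when comparing methods.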
Comparing with existing methods (Tables 5, 6):
When comparing with existing methods on synthetic noisy CIFAR datasets (Table 5) and real-world noisy datasets (Table 6), we report the final test accuracy after training. Specifically, for positive label smoothing we adopt a smooth rate of 0.6; for negative label smoothing we choose a smooth rate of -6.0.
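For readers unfamiliar with negative smooth rates: under generalized label smoothing, the smoothed target is a mix of the one-hot label and the uniform distribution, and a negative rate simply extrapolates past the one-hot label. A hedged sketch (illustrative names, not the authors' implementation):

```python
import numpy as np

def smoothed_targets(labels, num_classes, smooth_rate):
    """Generalized label smoothing: (1 - r) * one_hot + r * uniform.

    Positive r (e.g. 0.6) is standard label smoothing; negative r
    (e.g. -6.0) is negative label smoothing, which pushes the target
    class above 1 and non-target classes below 0. Rows still sum to 1.
    """
    one_hot = np.eye(num_classes)[labels]
    uniform = np.full((len(labels), num_classes), 1.0 / num_classes)
    return (1.0 - smooth_rate) * one_hot + smooth_rate * uniform

# With 10 classes and the rates quoted above:
ls = smoothed_targets([0], 10, 0.6)    # target 0.46, others 0.06
nls = smoothed_targets([0], 10, -6.0)  # target 6.4, others -0.6
```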
Thanks for your great work!
I wonder whether you report the best accuracy rather than the final accuracy during training in Tables 1-4?