Hi!

I am wondering: which of the implemented sampling strategies handles unbalanced data best?

I believe that if I query the top 10,000 most uncertain instances but 99% of them belong to the same class, this would not help much for the next training iteration, right?
Thank you in advance!
For unbalanced data, where the estimator has barely been trained on the minority classes, the uncertainty measure typically fails to capture epistemic uncertainty, so it won't (necessarily) sample the minority classes. Diversity-based active learning handles this much better than purely uncertainty-based AL. I've written some diversity-based implementations privately and will look to submit a PR in the near future.
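As a rough illustration of the idea (not part of the library), here is a minimal sketch of a hybrid uncertainty + diversity strategy: take a large set of uncertain candidates, then cluster them and pick one point per cluster so the batch is spread out in feature space rather than dominated by a single class. The function name and parameters are hypothetical; it only assumes the classifier exposes a scikit-learn-style `predict_proba`, and you would need to adapt the return value to whatever query-strategy interface you use.

```python
# Hypothetical sketch: diversify a batch drawn from the most uncertain candidates.
import numpy as np
from sklearn.cluster import KMeans

def uncertainty_diversity_batch(classifier, X_pool, n_instances=10, candidate_pool=1000):
    """Select a diverse batch from the most uncertain candidates.

    Assumes `classifier` exposes `predict_proba` (scikit-learn convention).
    Returns indices into X_pool.
    """
    # 1. Uncertainty: 1 - max class probability (least-confidence measure).
    proba = classifier.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)

    # 2. Keep more uncertain candidates than we will finally query.
    n_candidates = min(candidate_pool, len(X_pool))
    candidate_idx = np.argsort(-uncertainty)[:n_candidates]

    # 3. Diversity: cluster the candidates and take the point closest to each
    #    cluster centre, so the batch covers different regions of feature space.
    km = KMeans(n_clusters=n_instances, n_init=10, random_state=0)
    labels = km.fit_predict(X_pool[candidate_idx])

    selected = []
    for c in range(n_instances):
        members = candidate_idx[labels == c]
        if len(members) == 0:
            continue
        dists = np.linalg.norm(X_pool[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)
```

This directly addresses the "top 10,000 uncertain instances but 99% one class" concern: clustering forces the final batch to spread across the candidate set instead of concentrating in the majority class's uncertain region.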