Custom metrics for evaluation #13
Hi @ogabrielluiz! First of all, thanks for showing interest in this humble project, as well as for your suggestion. I think this is a wonderful idea that would make pyss3 better, allowing users to define their own custom evaluation metrics (not only in code but also in the 3D evaluation plot). Plus, I think it shouldn't be too hard to actually implement, and hence shouldn't take too much time to get working. Lately, I've been very busy trying to finish a job that I must get done by the end of this month. Nevertheless, I'll try to give it a shot this weekend, what do you think? ☕ 💻 😃
That would be awesome. Looking forward to seeing your approach. In case it takes longer than you expected, I can try my hand at it too.
@ogabrielluiz! I'm really sorry that I couldn't "give it a shot" this last weekend as I was expecting. Unfortunately, I'm still very behind with the work that I have to finish by the end of this month, which is extremely important.

Meanwhile, how do you think it should be implemented from the point of view of the user? I was thinking of adding a function to `Evaluation` called something like `add_metric`:

```python
Evaluation.add_metric("f2-score", my_f2_score_function)
```

which would then make any subsequent evaluation include our "f2-score" as well:

```python
Evaluation.test(clf, x_test, y_test)  # should now also include our "f2-score" among the printed results
```

And:

```python
best_s, best_l, best_p, _ = Evaluation.grid_search(
    clf, x_test, y_test,
    s=s_vals, l=l_vals, p=p_vals,
    metric="f2-score"  # <- now, our new metric should also be accepted
)
```

The new metric would also be included in the 3D Evaluation Plot. What do you think about it? The idea behind it is to treat new user-defined metrics exactly like any other built-in metric... I don't think I'll be able to implement any of this until 2 or 3 weeks from now. Would you like me to send you a "collaborator" request, so you can get full access to this repo in case you want to help me out? Any type of help would be really appreciated, for instance, adding a new Jupyter Notebook to the "examples" folder as if it were a tutorial for this new functionality (pretending this function is already implemented). Not only would that help me test it, but it would also help users learn how to use custom metrics for evaluations once it is actually implemented (you'll also be added as a contributor in the README file).
I think that would be a great solution. Even better than what I was thinking. I could definitely help with that. Count me in! I'll try to implement the functionality too, if you don't mind.
Excellent! And I don't mind at all; any kind of help would be really appreciated, that's very kind of you. Just make sure your pull requests follow the "seven rules of a great Git commit message" from "How to Write a Git Commit Message" and you're good to go, buddy! 😀 Let me know if you need any help; we could even use Discord (or Slack) if needed 👍
Hi!
A way to pass a scorer function (e.g., one built with sklearn's make_scorer) to Evaluation would make pyss3 even greater.
Any plans on this?
This is a very interesting project.
Thank you!
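For reference, a scorer built with scikit-learn's make_scorer differs from a plain metric function: it is called as scorer(estimator, X, y) and internally runs estimator.predict(X) before applying the wrapped metric. A minimal sketch of such a scorer (the F2 choice is just an example, echoing the discussion above):

```python
from sklearn.metrics import make_scorer, fbeta_score

# Wraps the F2 metric into a callable with signature scorer(estimator, X, y);
# extra keyword arguments are forwarded to the wrapped metric.
f2_scorer = make_scorer(fbeta_score, beta=2, average="macro")
```

Supporting scorers like this in Evaluation would be a design decision beyond the plain add_metric(name, function) proposal above, since a scorer needs access to the classifier and the raw documents, not just the predicted labels.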