
Improving the warnings when there is a gradient mismatch #27

Open
seabbs opened this issue Jul 19, 2024 · 4 comments

@seabbs

seabbs commented Jul 19, 2024

Firstly this is a great tool to have and I was pleased to find it!

Issue

We are using this package in CDCgov/Rt-without-renewal#340 to help improve our code base and chase down issues.

We currently have a few gradient mismatches, but the current warnings do not indicate which part of our benchmark suite an issue is coming from, which makes them tricky and tedious to chase down.

Potential solution

The addition of some information to the warning that helps localise the problem. I assume this would need to be information about the user-supplied model, as information about any overarching suite organisation won't be available to the package.
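
For concreteness, a minimal sketch of the kind of enriched warning this asks for. This is not the package's actual code: `check_gradients`, `grad_candidate`, and `adbackend` are placeholder names, and it assumes `nameof(model)` returns the model function's name (as it does for DynamicPPL models).

```julia
# Sketch only, not the package's actual warning code. The point is that the
# warning carries the name of the user-supplied model, so it can be traced
# back to a specific call site.
function check_gradients(model, adbackend, grad_candidate, grad_reference; rtol = 1e-6)
    if !isapprox(grad_candidate, grad_reference; rtol = rtol)
        @warn "Gradient mismatch detected" model = nameof(model) backend = adbackend
    end
    return nothing
end
```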

I am happy to have a go at implementing the above suggestion, or other suggestions, if there is interest.

Alternative

This is an edge case caused by introducing benchmarking all at once into an existing code base. If we had had it from the beginning, or if we go through our code now and fix these issues, this should not be a problem as we add functionality iteratively going forward.

Alternatively, there may already be a feature here that we have missed and could use directly.

@torfjelde
Member

Glad you find it useful!

The addition of some information to the warning that helps localise the problem.

Anything particular you have in mind here? Are you talking about what the potential issues causing this could be? Or do you mean more specifically exactly which function is causing the issue? The former is easy enough to add, while the latter is quite a bit more tricky (though technically not impossible) 😬

@seabbs
Author

seabbs commented Jul 22, 2024

So either would be great functionality. I was thinking of something more limited, which was just identifying which of our calls to make_turing_suite was throwing this issue, as we are running our benchmarks all together for many models.
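
To illustrate that setup (hypothetical names, not the code in the linked repo): when suites are built for many models in one pass, a warning that does not mention the model cannot be traced back to a particular call.

```julia
using TuringBenchmarking  # the package providing make_turing_suite

# `models` is a hypothetical Dict mapping a label to a Turing model instance;
# it stands in for the many models we benchmark together.
suites = Dict(
    label => make_turing_suite(model)
    for (label, model) in pairs(models)
)
# A gradient-mismatch warning emitted inside this loop currently gives no
# hint about which `label` (i.e. which model) it belongs to.
```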

The simplest way to do that seems like it would be making the model passed as the first argument to make_turing_suite part of the warning message.

Either of your suggestions might make that redundant.

@torfjelde
Member

I was thinking of something more limited, which was just identifying which of our calls to make_turing_suite was throwing this issue, as we are running our benchmarks all together for many models.

But given that make_turing_suite only takes a Model, is this not something that's fairly easy to add from a user perspective by simply, say, printing the name of the model you're about to call make_turing_suite for, or just catching the error if you have error_on_fail=true and then printing / informing which model you just tried constructing the suite for? I feel like I might be misunderstanding something here 🤔
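
For reference, a rough sketch of this user-side workaround. `labelled_suite` is a hypothetical helper, and `error_on_fail = true` is simply echoed from the comment above (the exact keyword may differ; check the package docs).

```julia
using TuringBenchmarking  # the package providing make_turing_suite

# Hypothetical helper: attach a label to each call so a failure can be traced
# back to the model that produced it.
function labelled_suite(label, model; kwargs...)
    try
        return make_turing_suite(model; kwargs...)
    catch err
        @error "make_turing_suite failed" label exception = (err, catch_backtrace())
        rethrow()
    end
end

# Usage, echoing the error_on_fail keyword mentioned above:
# suite = labelled_suite("my_model", my_model; error_on_fail = true)
```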

@seabbs
Author

seabbs commented Jul 31, 2024

No, I don't think it is a huge amount of effort, but it also seems like a lot of boilerplate for users to add across a whole testing suite?

My expectation was that this package was designed for that kind of deployment, rather than interactive running of a single suite, and so this would make sense as a feature for a range of users. If that isn't the case, I am happy to implement it in our wrapper and close this out.
