[API] design for generic optimizer #93
PS: I'm happy to try writing this if you would like me to. Not right now, as I am busy, but maybe early Oct.
Hello @fkiraly, I took some time to understand the changes you are proposing. I will show you how I interpreted them, so please correct me if I misunderstood something. It appears that you want to change the API of Hyperactive so that it is possible to use different optimization backends. This also necessitates implementing an interface (Experiment) that is adapted to certain optimizers. I would be open to the possibility of optionally selecting other optimization backends for the experiment.
I do not understand this example, because it would already be covered by the sklearn integration. A separate experiment class for each package (sklearn, xgboost, pytorch) would heavily decrease the flexibility of the interface.
Hyperactive does not fit an estimator at that point in the API; it runs the optimization setup. The fit method makes sense in the sklearn integration.
This would be used only for adaptation inside the sklearn adapter. The optimizer optimises the experiment. You would need at least one experiment per package or unified API, no? But not one per unified API and optimizer.
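The separation being discussed (one experiment per package, not one per package and optimizer) can be illustrated with a small hypothetical sketch. Nothing here is actual Hyperactive code; the function names and the toy scoring function are invented for illustration. The point is that any optimizer only depends on the experiment's call interface, so the same experiment object works with every backend:

```python
import random

def sklearn_cv_experiment(params):
    # stand-in for "CV-score an sklearn estimator with these params";
    # a real experiment would wrap an estimator, data, and a CV splitter
    return -(params["x"] - 2) ** 2

def random_search(experiment, space, n_iter=100):
    # optimizer 1: sample the search space at random
    candidates = [{k: random.choice(v) for k, v in space.items()}
                  for _ in range(n_iter)]
    return max(candidates, key=experiment)

def grid_search(experiment, space):
    # optimizer 2: exhaustively try every value of "x"
    return max(({"x": v} for v in space["x"]), key=experiment)

# the same experiment works with either optimizer, with no
# per-optimizer adapter in between
space = {"x": [0, 1, 2, 3]}
best_a = random_search(sklearn_cv_experiment, space)
best_b = grid_search(sklearn_cv_experiment, space)
```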
I just mean, why not call
I think there is a small degree of miscommunication. Would you like me to write a design document, or a draft PR (for demo purposes only)?
That would be great! :-)
Partially implemented here - feedback appreciated!
Relevant comment: #85 (comment)
From our earlier discussion, I would design a generic interface as follows:

- Two base classes, `BaseOptimizer` and `BaseExperiment` (or `BaseEvaluator`, etc). Both inherit from `skbase` `BaseObject`, so they provide a dataclass-like, sklearn-like composable interface. `__init__` args must always be explicit, never positional `*args` or `**kwargs`. The `skbase` tag system can be used to collect all the tags, e.g., from GFO, things like the type of optimizer (particle etc.), whether it is computationally expensive, or the soft dependencies required for it.
- `BaseExperiment` has a `score` method with the same signature as your "model" currently; its `__call__` also redirects to `score`, so it can be used with the current signature. That is the "basic" interface, but we could also add an interface for gradients, to also cover gradient-based optimizers!
- A `BaseExperiment` could, for instance, evaluate an `sklearn` classifier by cv on a dataset, so it could be `SklearnExperiment(my_randomforest, X, y, KFold(5))`.
- `BaseOptimizer` has `__init__`, which passes parameters only, and `add_search`, which has almost the current signature: it takes a `BaseExperiment` descendant instance, and one more object which configures the search space. Search behaviour like `n_iter` would not be passed in `add_search`, but should be an `__init__` arg.
- The optimization would then be run via a `fit` method, as that would be compliant with multiple API naming choices, though I would not mind `run` or `optimize` etc. This method sets attributes on self ending in `_`, so they are visible via `get_fitted_params`.
Thoughts?
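The proposed interface can be sketched in plain Python. This is a minimal sketch only, without the `skbase` `BaseObject` machinery, the tag system, or `get_fitted_params`; `RandomSearchOptimizer` and the internals are invented for illustration, not part of any existing API:

```python
from abc import ABC, abstractmethod
import random


class BaseExperiment(ABC):
    """An experiment maps a parameter dict to a scalar score."""

    @abstractmethod
    def score(self, params):
        """Evaluate the experiment at the given parameters."""

    def __call__(self, params):
        # __call__ redirects to score, so instances keep the
        # current objective-function signature
        return self.score(params)


class SklearnExperiment(BaseExperiment):
    """Illustrative adapter: CV-score an sklearn estimator on (X, y)."""

    def __init__(self, estimator, X, y, cv):
        # explicit __init__ args only, no *args/**kwargs
        self.estimator = estimator
        self.X = X
        self.y = y
        self.cv = cv

    def score(self, params):
        # imported lazily so the sketch runs without sklearn installed
        from sklearn.base import clone
        from sklearn.model_selection import cross_val_score
        est = clone(self.estimator).set_params(**params)
        return cross_val_score(est, self.X, self.y, cv=self.cv).mean()


class BaseOptimizer(ABC):
    """Search behaviour in __init__; experiments bound via add_search."""

    def __init__(self, n_iter=10):
        # behavioural settings like n_iter live in __init__ ...
        self.n_iter = n_iter
        self._searches = []

    def add_search(self, experiment, search_space):
        # ... while add_search only binds an experiment to a search space
        self._searches.append((experiment, search_space))

    def fit(self):
        # run the optimization; results land in attributes ending in "_"
        self.best_params_, self.best_score_ = self._run()
        return self

    @abstractmethod
    def _run(self):
        """Return (best_params, best_score)."""


class RandomSearchOptimizer(BaseOptimizer):
    """Minimal concrete optimizer: uniform random sampling."""

    def _run(self):
        best_params, best_score = None, float("-inf")
        for _ in range(self.n_iter):
            for experiment, space in self._searches:
                params = {k: random.choice(v) for k, v in space.items()}
                s = experiment.score(params)
                if s > best_score:
                    best_params, best_score = params, s
        return best_params, best_score
```

Usage would then look like `opt = RandomSearchOptimizer(n_iter=25); opt.add_search(SklearnExperiment(my_randomforest, X, y, KFold(5)), search_space); opt.fit()`, after which results are readable from `opt.best_params_` and `opt.best_score_`.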