I'm not sure if this is the best place to ask this question, so please let me know if there's a more appropriate forum!
I have been performing ODE parameter estimation (fitting to experimentally observed data) using NLPModelsIpopt and recently tested MadNLP on the same problems. For moderately sized test problems (90k variables, 800k Hessian nonzeros) with noiseless synthetic data, MadNLP worked well and ran faster than Ipopt (using the same linear solver, Ma97), so I was hoping to move my workflow to MadNLP entirely. However, on larger problems (180k variables, 1.6M Hessian nonzeros), or on problems with noisy data (both synthetic data with artificial noise and real experimental data), the solver either hits the iteration limit, or appears to make good progress for several hundred iterations before suddenly entering restoration and failing. In contrast, these same problems (and much larger ones) solve to optimality in Ipopt without issue, using the same underlying NLPModel.
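For reference, the two solver calls look roughly like this (a minimal sketch: `nlp` is a placeholder name for the NLPModel described above, and the MadNLPHSL wrapper is assumed for access to Ma97):

```julia
using NLPModels, NLPModelsIpopt, MadNLP, MadNLPHSL

# `nlp` is the same AbstractNLPModel in both calls (hypothetical name)
stats_ipopt  = ipopt(nlp; linear_solver = "ma97")       # converges, even on large/noisy problems
stats_madnlp = madnlp(nlp; linear_solver = Ma97Solver)  # hits max_iter or fails in restoration
```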
I'm curious what's going on, and whether there are optimizer options I could adjust that might help. Would anyone here be able to point me in a direction to debug this?
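In case it helps frame the question, these are the kinds of knobs I have in mind (a sketch only; option names follow MadNLP's documented interface, and the values are illustrative rather than recommendations):

```julia
results = madnlp(
    nlp;                            # hypothetical model from the sketch above
    linear_solver = Ma97Solver,
    max_iter      = 10_000,         # raise the iteration cap
    tol           = 1e-6,           # loosen the convergence tolerance
    print_level   = MadNLP.DEBUG,   # verbose logging to see where it stalls
)
```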
This type of convergence issue is difficult to debug, but if you could provide a simple example that reproduces it, that would help us improve the convergence behavior of MadNLP.
Hi @sshin23. I have attached a zip file containing an example problem -- the script `examples/hh_sde_example.jl` generates an NLPProblem called `vap`, which can then be passed to either NLPModelsIpopt or MadNLP. Example logs from both are also in the `examples` folder; you can see that Ipopt solves rather quickly, while MadNLP goes into restoration and unfortunately fails. The example problem performs parameter estimation for a 4D Hodgkin-Huxley equation, discretized via Simpson-Hermite transcription. It is similar to the approach described in your paper here, but incorporates the synchronization-based control given in this paper to regularize convergence. The code in `src` computes the necessary derivatives using SymPy (this is probably not ideal). Please let me know if you have any questions -- thanks!
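In case it is useful, driving the example should look something like this (a sketch based on the description above, not verified against the zip contents):

```julia
include("examples/hh_sde_example.jl")     # defines the problem `vap`

using NLPModelsIpopt, MadNLP, MadNLPHSL
ipopt(vap; linear_solver = "ma97")        # solves quickly (see attached log)
madnlp(vap; linear_solver = Ma97Solver)   # enters restoration and fails (see attached log)
```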