usage with JuMP? #107
Are there any plans to make Optim solvers available with JuMP? If I want to implement a pure-Julia NLP solver for use with JuMP, is Optim the right place to do it?
cc: @mlubin
Comments
I'd certainly love to see more complex solvers in pure Julia -- and Optim does seem to be the default place to include them.
Basically, I'd like to have an NLP solver that exploits sparsity (unlike NLopt). I'm not convinced the Optim API is enough to describe complex NLPs with sparse constraint Jacobians, however. You really want some kind of …
For JuMP, constrained nonlinear solvers are more of a priority than the unconstrained or box-constrained Newton-type solvers that are currently all that's available in Optim (see #50), so we haven't yet worked on interoperability between Optim and JuMP.
I'd argue that the Optim API needs substantial redesign even without trying to bring in new functionality. @timholy already started thinking about it a bit for constrained optimization. What are the properties you'd want to see in …
What concerns me about the …
@stevengj, are you referring to the JuMP model object or the MathProgBase interface?
Ah, looking in MathProgBase it seems that you do send all of that information to the solver, for the most part, so I guess it's all in there. I guess if the objective and constraint and Jacobian calculations are interrelated (i.e. they can share computations), then they can just cache the shared computations somewhere (e.g. in the Model)?
Right. The MathProgBase interface is supposed to be low level and more friendly towards solvers than users. Sharing computations has come up before: https://groups.google.com/d/msg/julia-opt/z8Ld4-kdvCI/FskMAOJ_9bAJ
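As a concrete illustration of caching shared work on the caller's side (a minimal sketch with made-up helper names, not part of MathProgBase or Optim): the objective and gradient callbacks below close over a small cache keyed on the last evaluation point, so the shared residual A*x - b is computed only once per point.

```julia
# Sketch: share the residual computation between f(x) = ½‖Ax − b‖² and
# its gradient ∇f(x) = Aᵀ(Ax − b) by caching it, keyed on the last x seen.
function make_callbacks(A::AbstractMatrix, b::AbstractVector)
    last_x = fill(NaN, size(A, 2))
    resid  = similar(b)

    function refresh!(x)
        if x != last_x                 # recompute only at a new point
            resid .= A * x .- b        # the "shared" expensive computation
            copyto!(last_x, x)
        end
        return resid
    end

    f(x)        = 0.5 * sum(abs2, refresh!(x))
    grad!(g, x) = (g .= A' * refresh!(x); g)
    return f, grad!
end

# Calling grad! at the same x as f reuses the cached residual.
A, b = randn(5, 3), randn(5)
f, grad! = make_callbacks(A, b)
x = randn(3); g = zeros(3)
f(x); grad!(g, x)
```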
I see, thanks @mlubin.
In pure Julia? Capable of handling nonlinear equality as well as non-convex inequality constraints? What type of algorithm are you thinking of, and what do you plan on using for the sparse linear algebra kernels? You could potentially use https://github.com/JuliaSparse/MultiFrontalCholesky.jl as the basis for a SNOPT-like SQP algorithm. It would be interesting to compare against Ipopt, obviously (it's not pure Julia, but can it solve your NLPs here?). MathProgBase is definitely the right place to hook the solver API up to. For an evaluation-based solver, I'd really endeavor to separate the details of where the function evaluations come from, or any interrelationships they may have, from the implementation of the core optimization algorithm. The former can be set up so that user-defined callbacks can accomplish pretty much anything you might want, but coupling the function-evaluation design to the optimization algorithm's internals could cause trouble as soon as anyone tries to, say, hook up various AD implementations to the solver.
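For reference, here is a rough sketch of what the evaluation side of that separation looks like through MathProgBase's nonlinear interface: the solver only ever talks to an AbstractNLPEvaluator, regardless of whether the evaluations come from JuMP's AD or from user callbacks. (Simplified; dimensions would normally come from loadproblem!, and initialize would be called once per solve rather than per point.)

```julia
using MathProgBase, SparseArrays

# Evaluate objective, gradient, constraints, and the sparse constraint
# Jacobian at a point x, using only the abstract evaluator interface.
function evaluate_point(d::MathProgBase.AbstractNLPEvaluator, x::Vector{Float64}, ncon::Integer)
    MathProgBase.initialize(d, [:Grad, :Jac])     # request the derivative features we use

    fx = MathProgBase.eval_f(d, x)                # objective value
    g  = zeros(length(x))
    MathProgBase.eval_grad_f(d, g, x)             # objective gradient (in place)

    c = zeros(ncon)
    MathProgBase.eval_g(d, c, x)                  # constraint values (in place)

    rows, cols = MathProgBase.jac_structure(d)    # sparsity pattern of the constraint Jacobian
    vals = zeros(length(rows))
    MathProgBase.eval_jac_g(d, vals, x)           # nonzero values, in pattern order

    return fx, g, c, sparse(rows, cols, vals, ncon, length(x))
end
```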
In pure Julia, inequality constraints only (a nonempty feasible region, not necessarily convex), via the CCSA algorithm. I think the MathProgBase interface should be sufficient, and it sounds like the code should go into its own package rather than Optim. I agree that the optimization algorithm should not know much about the function, but there should be a mechanism by which the objective and the gradient etc. can share computations; caching it on the user's side might be enough there.
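If such a package implemented the MathProgBase solver interface, using it from JuMP would then just be a matter of passing the solver object to the model. A hypothetical sketch in MathProgBase-era JuMP syntax (CCSASolver and the CCSA package name are made up; any compliant nonlinear solver plugs in the same way, e.g. IpoptSolver() from Ipopt.jl):

```julia
using JuMP
# using CCSA   # hypothetical package that would export CCSASolver

m = Model(solver = CCSASolver())          # made-up solver; swap in any MathProgBase solver

@variable(m, x[1:2] >= 0)
@NLobjective(m, Min, (x[1] - 1)^2 + (x[2] - 2)^2)
@NLconstraint(m, x[1]^2 + x[2]^2 <= 4)    # inequality constraints only, per CCSA

status = solve(m)
xopt = getvalue(x)
```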
Bummer. You said NLP, so I thought you meant general NLPs and got excited. "A solver for a restricted subclass of NLPs" would be more precise terminology, though admittedly a mouthful.
It is - that's what all the MDAO libraries do when they interface with solvers like Ipopt or SNOPT, for function and gradient evaluations coming from big, expensive, coupled CFD/FEM simulations. If you're using JuMP for modeling and function evaluations, then you won't really need special handling for intermediate data reuse (aside from the new …