Add algo enum #114
Conversation
/cc @nbelakovski
Other than the comment about using the enum type (which occurs in 3 or 4 other places besides the one I mentioned), it looks good to me.
This simplifies the API even further: an enum selects the algorithm instead of five different functions, one per algorithm:

```
enum prima_algorithm {PRIMA_COBYLA, PRIMA_BOBYQA, ...};

int prima_minimize(int algorithm, prima_problem *problem, prima_options *options, prima_result *result);
```
LGTM. I'll push on @zaikunzhang to get this merged and then rebase the callback PR and my Python bindings branch.
TBH, I personally prefer the version before this PR. The reason is that some users (like myself) do not need all five solvers; one can then selectively compile and include some, but not all, source files in the downstream project. Using a single entry point
@TLCFEM A couple of questions:
I'm also curious how you're solving this problem at the moment. Do you have some custom build scripts on top of prima to build only the algorithm you want?
It may not be a problem for some, but it could be a problem for others. Maybe they simply do not want to include dead code (dead in the sense that it will not be used in their particular use cases). Keeping the solvers independent allows linkers to omit unused code, reduce binary size, etc. Similar logic can often be seen elsewhere; I also do not use boost for the same reason. I would say adding a unified API while keeping the independent APIs would be good. This allows flexibility, and people who really do not want the extra code can just remove the corresponding part.
Currently I am not using it in production. I see the implementation is mostly sequential. I presume it can be easily parallelised by either OpenMP or a third-party LAPACK implementation, as I'd like to discuss with you here. I do plan to employ some solvers in large-scale problems (thousands to millions of DoFs). But at the moment I am mostly trying small problems and checking what the performance would be with parallelised linear algebra operations.
You make a good point about boost, but this library won't grow beyond these 5 solvers (in fact it doesn't and won't include COBYQA, an improvement over COBYLA, because the focus of this library is on Powell's solvers). Also, unlike boost, we have very few external dependencies (in many cases none). I could see an argument for putting all of the common code into a common library, building separate libraries on top of it for the various algorithms, and then providing a "master" library on top of those that bundles all of them. But in the interest of avoiding premature optimization, I would prefer to defer that work until it actually becomes a problem. For what it's worth, the Fortran examples include makefiles that compile only the necessary algorithm, so at least there is a reference for compiling just the algorithm you need.
Yeah, I am not very concerned, as I can just extract part of the library and modify the C source files accordingly. And one can always directly call the
Circling back to this: @zaikunzhang and I discussed @TLCFEM's concerns before merging. @zaikunzhang was going to add a comment about it, but I guess it slipped through. The main reason for not exposing the individual algorithms is that it makes common things, like scaling the input, very tedious to do.
We should provide only a single function (e.g.