
[REVIEW]: A parallel global multiobjective framework for optimization: pagmo #2338

Closed
38 tasks done
whedon opened this issue Jun 15, 2020 · 92 comments
Labels: accepted, published (Papers published in JOSS), recommend-accept (Papers recommended for acceptance in JOSS), review

Comments

@whedon

whedon commented Jun 15, 2020

Submitting author: @bluescarni (Francesco Biscani)
Repository: https://github.com/esa/pagmo2-paper
Version: v2.15.0
Editor: @eloisabentivegna
Reviewer: @dgoldri25, @jangmys
Archive: 10.5281/zenodo.4013250

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3"><img src="https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3/status.svg)](https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dgoldri25 & @jangmys, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @eloisabentivegna know.

Please try to complete your review in the next six weeks.

Review checklist for @dgoldri25

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@bluescarni) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @jangmys

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@bluescarni) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon

whedon commented Jun 15, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @dgoldri25, @jangmys it looks like you're currently assigned to review this paper 🎉.


⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

[screenshot: repository 'watching' settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon

whedon commented Jun 15, 2020

Reference check summary:

OK DOIs

- 10.5281/zenodo.3702783 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Jun 15, 2020

@eloisabentivegna

Dear @dgoldri25 and @jangmys, thanks for agreeing to review this submission! As you can see, I have started the review process, whereby the original issue has been closed and a new (this) one has been opened, with further directions for you. Please take a moment to go over the instructions and checklists above, and let me know if anything is unclear.

Please notice that this is a slightly unusual submission with respect to standard JOSS practice, in that you will not find the source code under the paper repository above, as the submission consists of two separate packages:

https://github.com/esa/pagmo2
https://github.com/esa/pygmo2

This should not impact the review process, but let me know if you need further clarifications.

I look forward to your comments!

@jangmys

jangmys commented Jul 29, 2020

Concerning the point "State of the field":

The paper (which is globally very well written) starts directly with the general formulation of the optimization problem.
While the "Summary" provides some general motivation for designing (parallel) frameworks for multi-objective optimization, I think it would be useful and important to give an overview of existing frameworks and to explain what differentiates pagmo/pygmo from them: why should I choose pagmo/pygmo instead of, say, jMetal, jMetalPy, DEAP, HeuristicLab, ParadisEO, Mallba or Opt4j (plus NLOpt)?

I am aware that a complete comparison of functionalities/documentation/etc. is a huge task which is certainly beyond the scope of this paper; however, at least a summary comparison with existing frameworks would clearly enhance the quality of the paper and give interested readers additional motivation to use pagmo. Maybe this comparison could be restricted to frameworks that offer some parallel processing support... In the (now 8 years old) paper by Parejo, José Antonio, et al., "Metaheuristic optimization frameworks: a survey and benchmarking.", Soft Computing 16.3 (2012): 527-561 (https://core.ac.uk/reader/51388224), the authors perform a complete comparative study of metaheuristic optimization frameworks. The results of this study are (at least partially) outdated, but it could still be a good starting point (and/or a reference to give to the reader!).

@jangmys

jangmys commented Jul 29, 2020

The paper says "The parallel evaluation can be performed by multiple threads, processes, nodes in an HPC cluster or even by GPU devices (via, e.g., OpenCL or CUDA)" (p.3) - I could not find a "GPU batch evaluator" in the code (please help me out if I missed it).

Of course, conceptually, a batch of solutions could be evaluated on a GPU. Technically, however, there are some challenges, for example: (1) no std library in device code -> what data structures for individuals on the device?; (2) the need to explicitly copy problem data to the device and a separate implementation of the __device__ fitness function (alternatively, "Unified Memory" + __host__ __device__ lambdas... which are still experimental: performance?); (3) thread-data mapping on the device (only one thread per fitness evaluation, or the possibility of parallelizing the fitness evaluation itself?).

If a GPU batch evaluator has already been implemented (and I just didn't see it), the paper should provide some explanation, and instructions for using GPU acceleration should appear in the documentation; otherwise, if GPU support for batch evaluation is future work, the paper should clarify this.
Concerning a "unified" heterogeneous programming approach that fits well into pagmo's design, I think Intel oneAPI/SYCL/Data Parallel C++ is an interesting alternative to CUDA/OpenCL.

@jangmys

jangmys commented Jul 29, 2020

Minor remarks (typos):
Summary: cuncurrently -> concurrently
p.2: non linear ; non linearly -> non-linear ; non-linearly
p.3: Cuncurrent fitness -> Concurrent

@jangmys

jangmys commented Jul 29, 2020

I was able to install pygmo/pagmo easily, following the instructions, and the provided tutorials helped me to get started quickly.
While I haven't used the software myself before doing this review, some colleagues in my department have been using pagmo/pygmo in their research for some time now; as far as I know, the fact that pagmo/pygmo is well documented and very accessible has been an important factor in their choice. An active community of users and the possibility of finding help in case of difficulties and/or exchanging with the developers (e.g. on Gitter) is also a very positive point. Overall, the pagmo/pygmo framework is a valuable and (I've got the impression) already quite well-established optimization tool.

Therefore, in order to increase the software's visibility and allow researchers to properly cite pagmo/pygmo (the last (only?) pagmo-related paper I found is from 2010, "A global optimization toolbox for massively parallel engineering optimisation"), publishing this paper in JOSS definitely makes sense.
However, before I can recommend accepting the article, I think it is important that the issues I mentioned above are addressed.

@eloisabentivegna

Thanks for your comments, @jangmys!

Could you raise any future issues in the respective repositories? This will ensure your suggestions will enter the package histories and be properly credited. You can create a mention here by using this issue's URL in a repository's issue, if you wish (see https://joss.readthedocs.io/en/latest/reviewer_guidelines.html#guiding-principles).

@eloisabentivegna

@dgoldri25, do you concur with @jangmys' suggestions regarding a comparison with the state of the art? Is this why you have left the corresponding box unticked?

@davidfgold

@eloisabentivegna I second @jangmys' suggestion. While I found the paper to be high quality overall, the review of the state of the art should be expanded.

@eloisabentivegna

@bluescarni, does the request by @jangmys and @dgoldri25 make sense to you? Can you expand the literature review?

@bluescarni

@eloisabentivegna @jangmys @dgoldri25 thanks for the review!

We can certainly expand the literature review. Currently all the authors are still on vacation, so apologies for the late reply. We should be able to revise the paper next week.

@bluescarni

@whedon generate pdf

@whedon

whedon commented Aug 25, 2020

@bluescarni

@eloisabentivegna @jangmys @dgoldri25 @darioizzo we have added a section about related projects/frameworks and fixed the typos. Please let us know if there is anything else.

@jangmys regarding your question about the GPU: what pagmo provides is an API to compute the fitness evaluations of a group of independent decision vectors in a (possibly) parallel fashion. Such an API can then be used by optimisation algorithms capable of taking advantage of parallel fitness evaluation (in pagmo we have a handful of such algorithms, mostly genetic/evolutionary ones). Thus you are absolutely right that we don't directly address the specifics of GPU programming from within pagmo (such as data transfer, the compilation model, etc.). This is left to the author of an optimisation problem, who is free to choose between CUDA/OpenCL/SYCL for the implementation of the batch evaluation API in his/her specific optimisation problem. We expanded a bit on this point in the latest revision of the paper; please let us know if this clarifies the matter.
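The batch-evaluation idea described here can be sketched in plain Python. This is a hypothetical mock, not pagmo's actual classes (the class name, the toy problem, and the thread-pool choice are all illustrative assumptions): a problem exposes a per-individual fitness evaluation plus an optional batch variant, and how the batch is parallelised (threads, processes, GPU kernels) is up to the implementor.

```python
# Hypothetical sketch of pagmo's batch fitness evaluation idea.
# A problem exposes a per-individual fitness() and an optional
# batch_fitness() that evaluates many independent decision
# vectors at once; the parallelisation strategy is left to the
# author of the problem (here, a simple thread pool).
from concurrent.futures import ThreadPoolExecutor


class SphereProblem:
    """Toy single-objective problem: f(x) = sum(x_i^2)."""

    def fitness(self, x):
        # Evaluate a single decision vector.
        return [sum(v * v for v in x)]

    def batch_fitness(self, xs):
        # Evaluate a batch of independent decision vectors in
        # parallel; a CUDA/OpenCL/SYCL kernel could go here instead.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(self.fitness, xs))


prob = SphereProblem()
batch = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
print(prob.batch_fitness(batch))  # -> [[0.0], [5.0], [25.0]]
```

An evolutionary algorithm that needs the fitness of a whole generation can then call `batch_fitness` once instead of looping over `fitness`, which is exactly the hook where parallel or accelerated evaluation plugs in.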

@jangmys

jangmys commented Aug 27, 2020

@bluescarni @eloisabentivegna @dgoldri25 @darioizzo I think the additional paragraph addresses the existing state of the art appropriately (and I have therefore checked the corresponding box). I also agree with the "Concurrent fitness evaluation" section, which now clarifies that there is no "out-of-the-box" GPU support, but rather a clean interface that could allow users to build one on their own. From my side there are no further remarks and I recommend accepting the paper. I hope you'll be able to keep up the good work on this framework in the future ;-)

@davidfgold

@bluescarni @eloisabentivegna @jangmys @darioizzo I concur with @jangmys' assessment; I've also checked the box for "State of the Field".

I recommend accepting the paper as well.

@eloisabentivegna

Thanks, @jangmys and @dgoldri25! It sounds like we are ready to accept the paper. @arfon, is there something special we need to do because of the double code repository?

@arfon

arfon commented Sep 1, 2020

@arfon, is there something special we need to do because of the double code repository?

We need the authors to make a single archive with e.g. Zenodo of all of the software associated with the submission. This will likely require a little extra work on the part of the authors compared with some of the automated methods for doing this with GitHub.

@bluescarni

@arfon would it be enough to make an archive of the latest releases of pagmo and pygmo (i.e., from the released tarballs on github)?

@arfon

arfon commented Sep 1, 2020

@arfon would it be enough to make an archive of the latest releases of pagmo and pygmo (i.e., from the released tarballs on github)?

We would like the archive to include any changes that have been made as a result of this review. Would that be the case if you used the latest releases?

@bluescarni

@arfon No, there were no code changes as a result of the review.

@bluescarni

@arfon where should the archive be uploaded?

@arfon

arfon commented Sep 3, 2020

@arfon where should the archive be uploaded?

We need an archive DOI from e.g. Zenodo or figshare.

@whedon

whedon commented Sep 13, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1023/A:1008202821328 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None

@bluescarni

@eloisabentivegna @dgoldri25 @jangmys Thanks for the review and all the help!

@danielskatz Thanks for the review; I updated the paper with your corrections/suggestions. Regarding the missing links for the Jakob and Johnson references, I used the bibtex entries suggested in the documentation of those software projects. Perhaps those entries don't play well with JOSS's bib style?

@danielskatz

danielskatz commented Sep 13, 2020

Unfortunately, bibtex entries are often built for a particular style and don't work quite right with other styles. I think that, rather than putting URLs in a note field, you should put them in a url field (going from https://joss.readthedocs.io/en/latest/submitting.html#example-paper-and-bibliography). Can you try this?
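The suggested change amounts to something like this (a hypothetical entry for illustration; the key, title, and URL are made up):

```bibtex
% Before: URL buried in a note field, which some bibliography
% styles render oddly or drop entirely
@misc{some_software,
  title = {Some Software},
  note  = {\url{https://example.com/some-software}},
}

% After: URL in its own field, where the style can pick it up
@misc{some_software,
  title = {Some Software},
  url   = {https://example.com/some-software},
}
```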

@bluescarni

@whedon generate pdf

@whedon

whedon commented Sep 13, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@bluescarni

@whedon check references

@whedon

whedon commented Sep 13, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1023/A:1008202821328 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None

@bluescarni

@danielskatz Cheers for the suggestion; it seems the URLs are now showing up in the bibliography.

@danielskatz

@whedon accept

@whedon

whedon commented Sep 13, 2020

Attempting dry run of processing paper acceptance...

@whedon

whedon commented Sep 13, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1023/A:1008202821328 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Sep 13, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1725

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1725, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

@whedon accept deposit=true

@whedon

whedon commented Sep 13, 2020

Doing it live! Attempting automated processing of paper acceptance...

whedon added the accepted and published (Papers published in JOSS) labels on Sep 13, 2020
@whedon

whedon commented Sep 13, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon

whedon commented Sep 13, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02338 joss-papers#1726
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02338
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@danielskatz

Congratulations to @bluescarni (Francesco Biscani) and co-author!!

Thanks to @dgoldri25 & @jangmys for reviewing, and @eloisabentivegna for editing!

@whedon

whedon commented Sep 13, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02338/status.svg)](https://doi.org/10.21105/joss.02338)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02338">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02338/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02338/status.svg
   :target: https://doi.org/10.21105/joss.02338

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@bluescarni

@danielskatz @eloisabentivegna @dgoldri25 @jangmys great to hear, and thank you all!

@darioizzo

@danielskatz @eloisabentivegna @dgoldri25 @jangmys thanks indeed!

@arfon

arfon commented Sep 14, 2020

🎉 and congratulations to @eloisabentivegna for editing her first JOSS paper through to completion!

@eloisabentivegna

Thanks to you, @arfon and @danielskatz, for your kind guidance and patience. :)
