
[REVIEW]: Limbo: A Flexible High-performance Library for Gaussian Processes modeling and Data-Efficient Optimization #545

Closed
18 tasks done
whedon opened this issue Jan 23, 2018 · 21 comments

@whedon

whedon commented Jan 23, 2018

Submitting author: @Aneoshun (Antoine Cully)
Repository: https://github.com/resibots/limbo
Version: V2.0
Editor: @arfon
Reviewer: @dfm
Archive: 10.5281/zenodo.1298561

Status

[status badge]

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0"><img src="http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0/status.svg)](http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dfm, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @arfon know.

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (V2.0)?
  • Authorship: Has the submitting author (@Aneoshun) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
@whedon
Author

whedon commented Jan 23, 2018

Hello human, I'm @whedon. I'm here to help you with some common editorial tasks. @dfm it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: 'Not watching' setting]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

@whedon
Author

whedon commented Jan 23, 2018

Attempting PDF compilation. Reticulating splines etc...

@whedon
Author

whedon commented Jan 23, 2018

https://github.com/openjournals/joss-papers/blob/joss.00545/joss.00545/10.21105.joss.00545.pdf

@arfon
Member

arfon commented Jan 23, 2018

@dfm - thanks for agreeing to review this submission. Any questions along the way please shout!

@dfm

dfm commented Jan 31, 2018

Hi all. I've had a crazy week so I probably won't get to this until next week. Please feel free to ping me if you don't hear anything from me by the end of next week. Sorry for the delay!

@Aneoshun

Aneoshun commented Feb 1, 2018

Hi @dfm,
Sure, of course.
Good luck with your crazy week, and thank you in advance for your time. We really appreciate it.
Best regards,

@Aneoshun

Hi @dfm,
I hope you are doing well and that you got through your crazy week as smoothly as you hoped.
As discussed, this is a friendly reminder regarding this review.

Best regards,

@arfon
Member

arfon commented Feb 27, 2018

> Hi all. I've had a crazy week so I probably won't get to this until next week. Please feel free to ping me if you don't hear anything from me by the end of next week. Sorry for the delay!

Friendly reminder to get to this sometime soonish @dfm

@arfon
Member

arfon commented Mar 11, 2018

👋 @dfm - any chance you can take a look at this soon?

@dfm

dfm commented Mar 11, 2018

Ugh. Sorryyyy! This all looks great, but I haven't found time to actually go through and check all the boxes and give feedback. I will make sure that I do before the end of the day on Wednesday. Sorry again!

@dfm

dfm commented Mar 13, 2018

Hi team, thank you so much for your patience here! I've gone through this and I've included some comments below. This is a really impressive piece of software and I am excited to use it in my own work. I think that there are a few things that could improve the documentation and make everything consistent with the JOSS guidelines, but it shouldn't be too onerous.

Installation:

  • Overall, installation isn't too bad and the required dependencies can be easily installed following the directions here. Some of the optional deps were a bit annoying (classic C++ 😄), but the docs here were complete.
  • The science tap on Homebrew has been discontinued so brew install homebrew/science/nlopt throws an error. Perhaps the suggestion of using that should be removed.
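If the tap is indeed gone, a likely workaround (assuming nlopt has migrated to Homebrew core, as deprecated homebrew/science formulae generally did) is:

```shell
# The deprecated science tap no longer works:
#   brew install homebrew/science/nlopt   # -> error
# If nlopt moved to Homebrew core, this should suffice:
brew install nlopt
```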

Performance:

  • There are strong claims in the paper when it comes to performance, but I didn't find that there was sufficient discussion on the benchmark pages of the docs to demonstrate this clearly. I think that it would be useful to include more discussion of what exactly is being tested in each experiment, why it matters, and a paragraph discussing the interpretation of the figures.
  • It's not entirely obvious to me that GPy is the right benchmark for the GP comparisons. I'm not really sure what would be better, but there should (at least) be some discussion of the fact that GPy is much more feature rich and easy to use. The focus of GPy is not performance and it is written in pure-Python (+numpy).

Documentation:

  • Overall the documentation is pretty complete, but I think that it would be much more useful if some of the earlier tutorial pages included more discussion of what is going on. Right now, the tutorials jump into source code pretty quickly and I think that a bit more theory (right on the tutorial pages!) would be useful.
  • Specifically related to the JOSS requirements, some of the text from the paper could probably be added to the documentation home page to make the statement of need clearer.
  • It would also be very useful to have examples of how to visualize the output of what is going on in each step of the optimization and tips on how to identify/debug models that aren't performing as expected.
  • In the Quick Start example, ./waf --exp test fails with Cannot read the folder 'PATH_TO_LIMBO/limbo/exp/test' for the correct value of PATH_TO_LIMBO.
  • I think that the Basic Example needs a longer introduction to explain what is going on. In my experience, this is where users will try to start, and I think that adding more details here would go a long way.
  • The full source code for the Basic Example includes the necessary includes and namespace, but the code snippets above should too.
  • Again, I think that the Basic Example needs more discussion at the end about what to expect as output, how to interpret it, and how to visualize what is going on.
  • The Advanced Example should probably have a listing of the full source code as well. Currently, the snippet for eval_func is missing the template and, even after fixing that, the code won't build on my machine (after I copied the listings directly). The error log is here: advanced.log

Thanks again for your patience! I hope that these comments are useful for improving the impact of this impressive library. Let me know if you have any questions.

@arfon
Member

arfon commented Mar 22, 2018

@Aneoshun - please let me know when you've had a chance to incorporate @dfm's review feedback.

@Aneoshun

Aneoshun commented Jun 1, 2018

Dear @dfm and @arfon,

Thank you very much for your comments and your patience. We have addressed all of them and we believe that we made the documentation better.

You can see the changes that we made in this pull request: resibots/limbo#257. Please also find below our response to your comments:

Installation:

  • Overall, installation isn't too bad and the required dependencies can be easily installed following the directions here. Some of the optional deps were a bit annoying (classic C++ 😄), but the docs here were complete.

    • We are happy to read this.
  • The science tap on Homebrew has been discontinued so brew install homebrew/science/nlopt throws an error. Perhaps the suggestion of using that should be removed.

Performance:

  • There are strong claims in the paper when it comes to performance, but I didn't find that there was sufficient discussion on the benchmark pages of the docs to demonstrate this clearly. I think that it would be useful to include more discussion of what exactly is being tested in each experiment, why it matters, and a paragraph discussing the interpretation of the figures.

  • It's not entirely obvious to me that GPy is the right benchmark for the GP comparisons. I'm not really sure what would be better, but there should (at least) be some discussion of the fact that GPy is much more feature rich and easy to use. The focus of GPy is not performance and it is written in pure-Python (+numpy).

    • We have added another library to our regression benchmark: LibGP, a C++ library for Gaussian Processes (https://github.com/mblum/libgp). We also included on this page (http://www.resibots.eu/limbo/reg_benchmarks.html) a paragraph explaining that GPy is a Python library with many more features that is designed to be easy to use. Moreover, GPy can achieve performance comparable to C++ libraries in the hyper-parameter optimization part because it relies on numpy and scipy, which essentially call C code with MKL bindings (almost identical to what we do in Limbo).
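For readers unfamiliar with what the regression benchmark exercises, the core computation shared by all of the compared libraries (Limbo, GPy, LibGP) is the exact GP posterior. Here is a minimal numpy sketch, purely illustrative and not taken from any of these libraries, with an arbitrary squared-exponential kernel and fixed hyper-parameters:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between two sets of row vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X, y, X_star, noise=1e-6):
    # Exact GP regression posterior mean: K_* (K + sigma^2 I)^{-1} y.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(X_star, X) @ np.linalg.solve(K, y)

# Noise-free training data from sin(x): the posterior mean should
# nearly interpolate the training targets.
X = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]
y = np.sin(X[:, 0])
mu = gp_posterior_mean(X, y, X)
```

The O(n^3) linear solve above is exactly the step whose wall-clock time such benchmarks compare; GPy dispatches it to LAPACK via numpy/scipy, which is why it can stay competitive with C++ implementations despite being written in Python.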

Documentation:

  • Overall the documentation is pretty complete, but I think that it would be much more useful if some of the earlier tutorial pages included more discussion of what is going on. Right now, the tutorials jump into source code pretty quickly and I think that a bit more theory (right on the tutorial pages!) would be useful.

    • We offer in the documentation a quick introduction to Bayesian Optimization in the guide “Introduction to Bayesian Optimization (BO)” (http://www.resibots.eu/limbo/guides/bo.html), in which we introduce the main concepts of BO. However, we agree that such concepts need to be known before starting the Limbo tutorials. For this reason, and following your suggestion, we added a sentence at the beginning of the basic example that invites interested readers to refer to this introduction (http://www.resibots.eu/limbo/tutorials/basic_example.html).
  • Specifically related to the JOSS requirements, some of the text from the paper could probably be added to the documentation home page to make the statement of need clearer.

  • It would also be very useful to have examples of how to visualize the output of what is going on in each step of the optimization and tips on how to identify/debug models that aren't performing as expected.

  • In the Quick Start example, ./waf --exp test fails with Cannot read the folder 'PATH_TO_LIMBO/limbo/exp/test' for the correct value of PATH_TO_LIMBO.

    • We have not been able to reproduce this problem. Are you sure that you ran the command: ./waf --create test before attempting to compile (as indicated in the documentation)?
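For reference, the sequence from the quick-start documentation that avoids the reported error (commands as given in the thread; run from the Limbo root directory):

```shell
# Create the experiment scaffold first; --exp expects exp/test to exist.
./waf --create test
# Now the build can find PATH_TO_LIMBO/exp/test:
./waf --exp test
```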
  • I think that the Basic Example needs a longer introduction to explain what it going on. In my experience, this is where users will try to start, and I think that adding more details here would go a long way.

  • The full source code for the Basic Example includes the necessary includes and namespace, but the code snippets above should too.

  • Again, I think that the Basic Example needs more discussion at the end about what to expect as output, how to interpret it, and how to visualize what is going on.

  • The Advanced Example should probably have a listing of the full source code as well. Currently, the snippet for eval_func is missing the template and, even after fixing that, the code won't build on my machine (after I copied the listings directly). The error log is here: advanced.log

In general, we believe that your comments really helped us to improve the documentation and the library. We hope you will like these changes.

Best regards,

@arfon
Member

arfon commented Jun 15, 2018

👋 @dfm - please take another look at this when you get a chance.

@dfm

dfm commented Jun 25, 2018

Hi @arfon and @Aneoshun,

Thanks (again) for your patience!

This looks really great. I think that the docs are much clearer now and I'm happy to check off the rest of the checkboxes and recommend this for publication. Thanks again and congrats!

@arfon
Member

arfon commented Jun 25, 2018

Thanks @dfm

@Aneoshun - At this point could you make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission.

@jbmouret

Hi @arfon

Here is the DOI from Zenodo: 10.5281/zenodo.1298561

@whedon generate pdf

@arfon
Member

arfon commented Jun 26, 2018

@whedon set 10.5281/zenodo.1298561 as archive

@whedon
Author

whedon commented Jun 26, 2018

OK. 10.5281/zenodo.1298561 is the archive.

@arfon arfon added the accepted label Jun 26, 2018
@arfon
Member

arfon commented Jun 26, 2018

@dfm - many thanks for your review here ✨

@jbmouret - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.00545 ⚡️:rocket: :boom:

@arfon arfon closed this as completed Jun 26, 2018
@whedon
Author

whedon commented Jun 26, 2018

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippet:

[![DOI](http://joss.theoj.org/papers/10.21105/joss.00545/status.svg)](https://doi.org/10.21105/joss.00545)

This is how it will look in your documentation:

[DOI badge image]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@whedon whedon added published Papers published in JOSS recommend-accept Papers recommended for acceptance in JOSS. labels Mar 2, 2020