
[REVIEW]: Pakman: a modular, efficient and portable tool for approximate Bayesian inference #1716

Closed · 38 tasks done
whedon opened this issue Sep 6, 2019 · 62 comments
Labels: accepted · published (Papers published in JOSS) · recommend-accept (Papers recommended for acceptance in JOSS) · review

whedon commented Sep 6, 2019

Submitting author: @ThomasPak (Thomas Pak)
Repository: https://github.com/ThomasPak/pakman
Version: v1.1.0
Editor: @jedbrown
Reviewers: @jmlarson1, @gonsie
Archive: 10.5281/zenodo.3697312

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/323183e9a1c9b05ca849cbf9982b7eaf"><img src="https://joss.theoj.org/papers/323183e9a1c9b05ca849cbf9982b7eaf/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/323183e9a1c9b05ca849cbf9982b7eaf/status.svg)](https://joss.theoj.org/papers/323183e9a1c9b05ca849cbf9982b7eaf)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@jmlarson1 & @gonsie, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @jedbrown know.

Please try to complete your review in the next two weeks.

Review checklist for @jmlarson1

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@ThomasPak) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @gonsie

(Identical checklist items to those listed for @jmlarson1 above.)

whedon commented Sep 6, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @jmlarson1, @gonsie it looks like you're currently assigned to review this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

[screenshot: watching settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf


whedon commented Sep 6, 2019

Attempting PDF compilation. Reticulating splines etc...

whedon commented Sep 6, 2019

[link to compiled article proof]


jedbrown commented Sep 6, 2019

@jmlarson1, @gonsie 👋 Welcome and thanks for agreeing to review! The comments from @whedon above outline the review process, which takes place in this thread (possibly with issues filed in the Pakman repository and mentioning the URL of this issue). I'll be watching this thread if you have any questions.

jmlarson1 commented Sep 10, 2019

Some things of note:

  1. I received the following warning when trying to build.
CMake Warning at examples/epithelial-cell-growth/CMakeLists.txt:3 (message):
  CHASTE_DIR ("") is not a valid directory, cannot build epithelial cell
  growth example.
  2. I tried to run the tests, but they hang:
[jlarson@x1carbon build]$ ctest 
Test project /home/jlarson/software/pakman/build
      Start  1: MPIMasterSweepStandardSimulatorMatch
 1/32 Test  #1: MPIMasterSweepStandardSimulatorMatch ..........   Passed    0.03 sec
      Start  2: MPIMasterSweepStandardSimulatorError
 2/32 Test  #2: MPIMasterSweepStandardSimulatorError ..........   Passed    0.02 sec
      Start  3: SerialMasterSweepStandardSimulatorMatch
 3/32 Test  #3: SerialMasterSweepStandardSimulatorMatch .......   Passed    0.01 sec
      Start  4: SerialMasterSweepStandardSimulatorError
 4/32 Test  #4: SerialMasterSweepStandardSimulatorError .......   Passed    0.01 sec
      Start  5: MPIMasterRejectionStandardSimulatorMatch
 5/32 Test  #5: MPIMasterRejectionStandardSimulatorMatch ......   Passed    0.04 sec
      Start  6: MPIMasterRejectionStandardSimulatorError
 6/32 Test  #6: MPIMasterRejectionStandardSimulatorError ......   Passed    0.02 sec
      Start  7: SerialMasterRejectionStandardSimulatorMatch
 7/32 Test  #7: SerialMasterRejectionStandardSimulatorMatch ...   Passed    0.02 sec
      Start  8: SerialMasterRejectionStandardSimulatorError
 8/32 Test  #8: SerialMasterRejectionStandardSimulatorError ...   Passed    0.00 sec
      Start  9: MPIMasterSMCStandardSimulatorMatch
 9/32 Test  #9: MPIMasterSMCStandardSimulatorMatch ............   Passed    0.20 sec
      Start 10: MPIMasterSMCStandardSimulatorError
10/32 Test #10: MPIMasterSMCStandardSimulatorError ............   Passed    0.02 sec
      Start 11: SerialMasterSMCStandardSimulatorMatch
11/32 Test #11: SerialMasterSMCStandardSimulatorMatch .........   Passed    0.13 sec
      Start 12: SerialMasterSMCStandardSimulatorError
12/32 Test #12: SerialMasterSMCStandardSimulatorError .........   Passed    0.00 sec
      Start 13: RunMPISimulatorMatch
  3. Aren't the examples prior-sampler.py and simulator.py in pakman/examples/biased-coin-flip just wrappers around numpy (and therefore not a good example of the code being used)?

  4. The paper does not clearly state "what problems the software is designed to solve and who the target audience is".

  5. The code does not seem to be well documented. Consider the example script:
    https://github.com/ThomasPak/pakman/blob/master/examples/sis-model/simulator.py
    The run_SIS_simulation routine does not state what the various arguments represent.

  6. The paper does not sufficiently summarize the state of the field.

@ThomasPak

Hi @jmlarson1, thanks for agreeing to review this project and for your comments. Please find my responses to points 1, 2, and 3 below. I will get back to you on the other points as soon as possible.

  1. This warning shows up when Chaste is not installed, or when the environment variable CHASTE_DIR is not set to the Chaste source directory. It is not an error with regard to building Pakman itself; it only informs the user that the epithelial cell growth example cannot be built and/or executed. The wiki tutorial for this example is found here and contains instructions on building the example.

    The reason for this warning is that, unlike the other two examples, the epithelial cell growth example is not a self-contained example. It requires Chaste as a dependency, which is an open-source simulation software package for problems in physiology and biology that is too big to pull into Pakman.

    Chaste users write applications by creating a project subfolder within the Chaste source tree, writing the application code, and then running CMake on Chaste and building it. More info on that here. Therefore, Pakman needs access to a Chaste source tree so that it can add the project subfolder, run CMake on Chaste, and build the binary for the epithelial cell growth example.

    I understand that this warning may lead users to mistakenly believe that their Pakman build is somehow compromised. Perhaps it would make more sense to let the epithelial cell growth example be controlled by a CMake flag instead? In that way, CMake would only build and run the epithelial cell growth example when explicitly requested to do so. Please let me know your thoughts on this.

  2. Apologies for that. I added a few tests yesterday at the end of the day, and they ran fine on my laptop, but clearly they are not very portable: they do not work for you, and Travis CI is also not (entirely) successful at running them. I tried them on a machine from my department today, and they do not work there either.

    The test was intended to check the functionality of the utility executable utils/run-mpi-simulator. This executable allows a user to run an MPI simulator in the terminal as if it were a standard simulator. See this for an explanation of the difference between a standard simulator and an MPI simulator (a minimal sketch of the standard-simulator protocol appears after this list).

    Briefly, whereas a standard simulator reads an epsilon value and a parameter value from the standard input, an MPI simulator receives those values from the parent MPI process that spawned it. Normally, that parent MPI process is Pakman. I wrote run-mpi-simulator as an MPI program that reads those epsilon and parameter values from the standard input and pretends to be Pakman by spawning the given MPI simulator and relaying those values to the MPI simulator in the same way Pakman would. The purpose of run-mpi-simulator is for the user to be able to test out the MPI simulator in the terminal.

    However, the MPI standard does not mention standard input, and as a result, different MPI implementations handle standard input differently. I had forgotten about this, but I have now corrected it in commit 0d1fa18 and updated the wiki tutorial accordingly. Now, run-mpi-simulator reads the epsilon and parameter values from an input file instead of its standard input. This should be portable across MPI implementations, and it is working on Travis CI, so please pull the latest commit and give it another try.

  3. You are correct that the biased coin flip is not a good example of using Pakman. The simulator for this example is very simple and amounts to just a few numpy operations. Therefore, running a simulation takes very little time, and the parallelisation overhead dominates the total runtime. Pakman was indeed designed for problems where simulation times are relatively long (on the order of multiple seconds at least), so that it benefits from parallelisation, and that is not the case here.

    However, I deliberately chose this example to be as simple as possible, for two reasons. First, the biased coin example is first and foremost a guide on how to use Pakman, and it introduces the general workflow, primarily through its wiki tutorial.

    The second reason is that in this simple case, the analytical likelihood is well characterised, so we know what the results of running Pakman should converge to. Hence, it demonstrates that the correct results are being generated.

    Because of your comment, however, I realised that not every user would realise this. Would you suggest that I add a paragraph to the wiki tutorial to warn the user about this?
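To make the standard-simulator protocol from point 2 concrete, here is a minimal Python sketch of a biased-coin standard simulator. The input/output conventions shown (one value per line on stdin, an accept/reject token on stdout) are assumptions for illustration; the Pakman wiki is the authoritative reference for the actual protocol.

```python
#!/usr/bin/env python3
# Minimal sketch of a "standard simulator" for a biased coin.
# Assumed protocol (illustrative only): epsilon on the first line of
# stdin, the parameter (heads probability p) on the second; print
# "accept" or "reject" on stdout.
import sys
import numpy as np

N_FLIPS = 20          # coin flips per simulated dataset (illustrative)
OBSERVED_HEADS = 14   # hypothetical observed datum

def main():
    epsilon = float(sys.stdin.readline())
    p = float(sys.stdin.readline())

    # Simulate the dataset: number of heads in N_FLIPS biased flips.
    simulated_heads = np.random.binomial(N_FLIPS, p)

    # Accept when the simulated data lie within epsilon of the observation.
    print("accept" if abs(simulated_heads - OBSERVED_HEADS) <= epsilon else "reject")

if __name__ == "__main__":
    main()
```

Because the coin flip has a conjugate Beta posterior (with a uniform prior and k heads in n flips, the posterior is Beta(k + 1, n - k + 1)), output from such a simulator can be checked directly against the analytical result, which is the check described in point 3.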


gonsie commented Sep 17, 2019

Hi,

Do I understand pakman correctly?

Pakman has the ability to launch ABC simulators in parallel. Using the stdout from the simulators, pakman applies an ABC sweep, rejection, or smc scheme. The simulators (and models they are running) should be supplied by users of pakman.

It would be helpful to have some more comprehensive documentation around the pakman options. That is, it would be nice for the --help text to appear somewhere in the web documentation so that the user could get a sense of what arguments are required for the different masters and controllers.

Do you have any examples / performance numbers of this working on an HPC cluster, possibly with a batch scheduler (e.g., slurm)? Does pakman performance scale as node-count increases?

@ThomasPak

Hi @gonsie,

You have summarised it very well! The only things I would add are that the stdin of the simulators is also used to pass on parameters, and that the sweep functionality is not an ABC method; it simply performs a parameter sweep.

I agree that documentation on the pakman options would be helpful. I will add a wiki page listing the pakman options, similar to the --help text. I am also thinking of adding a "Getting Started" section to README.md that would include using the --help text to work out the right command-line options.

I have used pakman on an HPC cluster, but I have not yet benchmarked its performance there. I will do so and get back to you when I have the results.


kthyng commented Oct 22, 2019

Hi @jmlarson1, @gonsie, and @ThomasPak — how are things going? What are the next steps for this review?

@ThomasPak

Hi @kthyng, the hold-up is because of me: I have yet to implement the changes that the reviewers have asked for. I was very busy for the past month because I had to write a progress report for my course and be examined on it. Fortunately, that is now behind me, so I can focus on this submission again.

@ThomasPak

Hi @jmlarson1,

Apologies for the delay, but I have now addressed points 4 through 6 of your comments. I have improved and expanded the documentation of the code. Also, I have added a paragraph to the paper to clearly state the problem my software is intended to solve and the intended target audience, and another paragraph to better summarise the state of the field.

@ThomasPak

Hi @gonsie,

As per your suggestion, I have added a Wiki page documenting the command-line interface of Pakman here: https://github.com/ThomasPak/pakman/wiki/Command-line-interface

I have performed a benchmark on my university's HPC cluster. The cluster uses Slurm for job scheduling, and I ran the test on 16 nodes with 16 CPU cores each; the CPUs were all Intel Xeon E5-2650 processors. Briefly, I submitted a job that used Pakman to run 4096 simulations of scaling/scaling-simulator.sh with varying numbers of CPU cores, ranging from 1 to 256 in powers of 2. The resulting computation times are given below.

Number of CPU cores | Computation time
                  1 | 3h 35m 50.286s
                  2 | 1h 49m 06.142s
                  4 |     57m 08.339s
                  8 |     30m 02.199s
                 16 |     15m 43.652s
                 32 |     10m 28.441s
                 64 |      3m 53.066s
                128 |      2m 26.269s
                256 |      1m 19.147s

Moreover, I have plotted the speedup and efficiency (defined with respect to computation times under ideal linear scaling) using scaling/plot-scaling.py below.

[speedup and efficiency plots]

The computation times scale roughly linearly, although there is a drop in efficiency at higher core counts. Part of this can be attributed to the overhead of message passing, while part of it may be due to inefficiencies in the code. Since the average duration of a single simulation in this example is roughly three seconds, the parallel overhead would have a smaller impact for longer-running simulations.

Overall, I believe that these results demonstrate that Pakman scales fairly well with increased computational resources.
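For readers who want to recompute the speedup and efficiency figures from the table, here is a short Python sketch; the timings are transcribed from the table above, and scaling/plot-scaling.py in the repository is the authoritative script.

```python
# Sketch: recompute speedup and efficiency from the benchmark table above.
# Speedup S(n) = T(1) / T(n); efficiency E(n) = S(n) / n, where T(n) is
# the measured computation time on n CPU cores.
times = {  # cores -> seconds, transcribed from the table above
    1: 3 * 3600 + 35 * 60 + 50.286,
    2: 1 * 3600 + 49 * 60 + 6.142,
    4: 57 * 60 + 8.339,
    8: 30 * 60 + 2.199,
    16: 15 * 60 + 43.652,
    32: 10 * 60 + 28.441,
    64: 3 * 60 + 53.066,
    128: 2 * 60 + 26.269,
    256: 1 * 60 + 19.147,
}

t1 = times[1]
for n in sorted(times):
    speedup = t1 / times[n]
    efficiency = speedup / n
    print(f"{n:4d} cores: speedup {speedup:6.1f}x, efficiency {efficiency:4.2f}")
```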


gonsie commented Nov 6, 2019

Thanks @ThomasPak. Maybe you want to document the performance graphs/details in the article? Or perhaps on a wiki page? Otherwise, I think it looks good.

@jmlarson1

I believe the checklist requirements have been satisfied.

@ThomasPak

@gonsie I have put the results of this performance test on the Pakman wiki because I may add more tests in the future. Please have a look at https://github.com/ThomasPak/pakman/wiki/Benchmark%3A-HPC-cluster

@ThomasPak

Hi @gonsie and @jmlarson1, what are the next steps to complete the review process?

@jedbrown

I see the checklists are complete; thanks. @gonsie @jmlarson1 Please let me know if you have any further concerns.

@ThomasPak

> I see the checklists are complete; thanks. @gonsie @jmlarson1 Please let me know if you have any further concerns.

Hi @gonsie @jmlarson1, have you had time to look into this?

@jmlarson1

I have no further concerns.

@ThomasPak

Hi @gonsie, do you have any further concerns?


gonsie commented Dec 18, 2019

Yep, it looks good.

@jedbrown

@whedon generate pdf


whedon commented Dec 18, 2019

Attempting PDF compilation. Reticulating splines etc...

whedon commented Dec 18, 2019

[link to compiled article proof]

@jedbrown

@whedon check references

@ThomasPak

@whedon generate pdf

whedon commented Mar 4, 2020

[link to compiled article proof]

@ThomasPak

@whedon generate pdf

whedon commented Mar 4, 2020

[link to compiled article proof]

@ThomasPak

@jedbrown No worries!

> With regard to the experiment in Figure 2, is there a set of parameters that would acceptably reproduce the analytic result?

Yes. The reason the computational results differ from the analytical results is that the tolerance series used ends at a tolerance of 35. Whenever the target tolerance in ABC SMC is nonzero, the computed posterior is only an approximation of the true posterior. If the target tolerance were zero, the computed posterior would reproduce the analytical result.

Thanks for asking this question; because of it, I discovered an error in the caption of Figure 2. In particular, I wrote that the reasons for the discrepancy were insufficient summary statistics and a nonzero tolerance. However, no summary statistics were used, so the only reason for the discrepancy is the nonzero tolerance.

> Does Pakman have any functionality to inform the user when their choices are insufficient and thus can't be trusted?

Pakman currently does not have this functionality because there is no easy and straightforward way to check the quality of a computed posterior. The most general method is posterior predictive checking, which involves generating 'fake' data from the fitted model and comparing it to the real data. We did not include this in the scope of Pakman, although we do not rule out incorporating such functionality in the future.

> Also, is there a reason you use \langle 75, 70, ..., 35 \rangle as notation for what I understand to just be a list (in which case braces \{75, ..., 35\} are more common)?

We used angle brackets to stress that the order of the elements is important. If we used braces, the sequence could be mistaken for a set instead of an ordered list.

> Finally, please update the bibliography to use full journal names instead of the (PubMed?) abbreviations like Stat. Appl. Genet. Mol. Biol. If that renders alright, I think we'll be ready to archive.

I have made these changes.

Thanks for your comments and please let me know if you are happy to proceed with archiving.
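As an aside, the tolerance effect discussed above can be illustrated with a generic ABC rejection sampler. This is a toy sketch of the general ABC idea, not Pakman's implementation, and the coin-flip setup (20 flips, 14 observed heads) is a hypothetical choice: as the tolerance epsilon shrinks toward zero, the accepted samples converge to the true posterior.

```python
# Toy ABC rejection sampler illustrating the effect of the tolerance:
# smaller epsilon gives a closer approximation to the true posterior,
# at the cost of more rejected simulations. Generic sketch, not Pakman code.
import numpy as np

rng = np.random.default_rng(0)
N_FLIPS = 20        # hypothetical experiment: 20 coin flips
OBSERVED = 14       # hypothetical observed number of heads

def abc_rejection(epsilon, n_samples=2000):
    accepted = []
    while len(accepted) < n_samples:
        p = rng.uniform()                    # draw parameter from uniform prior
        sim = rng.binomial(N_FLIPS, p)       # simulate a dataset
        if abs(sim - OBSERVED) <= epsilon:   # ABC accept/reject step
            accepted.append(p)
    return np.array(accepted)

for eps in (5, 2, 0):  # decreasing tolerance schedule, cf. ABC SMC
    post = abc_rejection(eps)
    print(f"epsilon={eps}: posterior mean {post.mean():.3f}, sd {post.std():.3f}")
```

With a uniform prior and 14 heads observed in 20 flips, the epsilon = 0 run recovers the exact Beta(15, 7) posterior (mean about 0.68), while the nonzero-epsilon runs yield progressively broader approximations, analogous to the discrepancy described above for Figure 2.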


jedbrown commented Mar 4, 2020

Thanks for your explanation and updates. At this point, here are the next steps:

  • Make a tagged release of your software (v1.0.1 or whatever is appropriate), and list the version tag of the archived version here. Annotated tags are preferred.
  • Archive the reviewed software in Zenodo or a similar service (e.g. figshare, an institutional repository)
  • Check that the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (which should match the paper title) and the author list (make sure the list is correct and that people who only made a small fix are not on it); you may also add the authors' ORCIDs.
  • Please list the DOI of the archived version here.

I can then move forward with accepting the submission.

@ThomasPak

Thanks for the quick response! I followed the steps that you outlined, and the DOI is 10.5281/zenodo.3697312.


jedbrown commented Mar 5, 2020

@whedon set v1.1.0 as version


whedon commented Mar 5, 2020

OK. v1.1.0 is the version.


jedbrown commented Mar 5, 2020

@whedon set 10.5281/zenodo.3697312 as archive


whedon commented Mar 5, 2020

OK. 10.5281/zenodo.3697312 is the archive.


jedbrown commented Mar 5, 2020

@whedon accept


whedon commented Mar 5, 2020

Attempting dry run of processing paper acceptance...

whedon added the recommend-accept label on Mar 5, 2020

whedon commented Mar 5, 2020

Reference check summary:

OK DOIs

- 10.1073/pnas.0306899100 is OK
- 10.1098/rsif.2008.0172 is OK
- 10.1016/j.cels.2016.12.002 is OK
- 10.1093/bioinformatics/bty361 is OK
- 10.1371/journal.pcbi.1002970 is OK
- 10.1371/journal.pcbi.1005387 is OK
- 10.1093/biomet/asp052 is OK
- 10.1515/sagmb-2012-0069 is OK
- 10.1007/s11222-012-9328-6 is OK
- 10.1073/pnas.1304382110 is OK
- 10.1126/sciadv.1701676 is OK
- 10.1093/bioinformatics/btt763 is OK
- 10.1093/bioinformatics/btq278 is OK
- 10.1111/j.2041-210X.2011.00179.x is OK
- 10.1186/1471-2105-11-116 is OK

MISSING DOIs

- None

INVALID DOIs

- None


whedon commented Mar 5, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published. Check final proof 👉 openjournals/joss-papers#1352

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1352, then you can now move forward with accepting the submission by compiling again with the flag deposit=true, e.g.

@whedon accept deposit=true

@ThomasPak

The paper PDF and Crossref deposit XML look good to me 👍.


Kevin-Mattheus-Moerman commented Mar 6, 2020

Updated comment: the following is no longer needed.

@ThomasPak I'm helping with the final processing of your paper. Can you check the point below?

The following sentence needs rephrasing; please remove the word "embarrassingly": "Fortunately, some ABC algorithms have a simulation workload that is embarrassingly parallel, or proceed through a sequence of embarrassingly parallel simulation workloads."


jedbrown commented Mar 7, 2020

@Kevin-Mattheus-Moerman Other wordings are possible, but it is a widely used technical term, and the sentence wouldn't have the same meaning if you simply cut "embarrassingly".

@Kevin-Mattheus-Moerman

@ThomasPak @jedbrown Okay, in that case ignore my comment.

@Kevin-Mattheus-Moerman

@whedon accept deposit=true

whedon added the accepted and published labels on Mar 7, 2020

whedon commented Mar 7, 2020

Doing it live! Attempting automated processing of paper acceptance...


whedon commented Mar 7, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


whedon commented Mar 7, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check the final PDF and Crossref metadata that were deposited 👉 Creating pull request for 10.21105.joss.01716 (joss-papers#1359)
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01716
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...


whedon commented Mar 7, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01716/status.svg)](https://doi.org/10.21105/joss.01716)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01716">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01716/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01716/status.svg
   :target: https://doi.org/10.21105/joss.01716

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@ThomasPak

Thanks to the editor and reviewers for making this possible! 🎉 🎉 🎉
