[REVIEW]: cca-zoo #3823

Closed
60 tasks done
whedon opened this issue Oct 13, 2021 · 63 comments

@whedon

whedon commented Oct 13, 2021

Submitting author: @jameschapman19 (James Chapman)
Repository: https://github.com/jameschapman19/cca_zoo
Version: v1.10.8
Editor: @emdupre
Reviewers: @robbisg, @hugorichard, @ejolly
Archive: 10.5281/zenodo.5786616

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/3f5f0727ab300be64719665f2e43b055"><img src="https://joss.theoj.org/papers/3f5f0727ab300be64719665f2e43b055/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/3f5f0727ab300be64719665f2e43b055/status.svg)](https://joss.theoj.org/papers/3f5f0727ab300be64719665f2e43b055)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@robbisg & @hugorichard & @ejolly, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @emdupre know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Review checklist for @robbisg

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jameschapman19) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software, 2) Report issues or problems with the software, and 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @hugorichard

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jameschapman19) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software, 2) Report issues or problems with the software, and 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @ejolly

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jameschapman19) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software, 2) Report issues or problems with the software, and 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon

whedon commented Oct 13, 2021

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @robbisg, @hugorichard, @ejolly it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: repository watch settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper PDF after making changes to the paper's .md or .bib files, type:

@whedon generate pdf

@whedon

whedon commented Oct 13, 2021

Wordcount for paper.md is 1397

@whedon

whedon commented Oct 13, 2021

Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.14 s (436.4 files/s, 106033.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          36            779           1686           4621
Jupyter Notebook                 3              0           4301           2375
reStructuredText                14            216            139            255
TeX                              1             26              0            225
Markdown                         2             47              0            216
YAML                             4             17             17             91
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            62           1097           6151           7818
-------------------------------------------------------------------------------


Statistical information for the repository '701fd338069e8ec2384f9b55' was
gathered on 2021/10/13.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Hao-Ting Wang                    2            82             25            0.20
James Chapman                  423         17935          13864           58.33
jameschapman                    55         12783           9824           41.47

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
Hao-Ting Wang                48           58.5          4.1                6.25
James Chapman              7038           39.2          2.5                7.22

@whedon

whedon commented Oct 13, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- 10.2307/2333955 may be a valid DOI for title: Relations between two sets of variates
- 10.1016/0304-4076(76)90010-5 may be a valid DOI for title: Canonical ridge and econometrics of joint production
- 10.1162/0899766042321814 may be a valid DOI for title: Canonical correlation analysis: An overview with application to learning methods
- 10.1007/s11336-011-9206-8 may be a valid DOI for title: Regularized generalized canonical correlation analysis
- 10.1093/biostatistics/kxp008 may be a valid DOI for title: A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis
- 10.2202/1544-6115.1329 may be a valid DOI for title: Quantifying the association between gene expressions and DNA-markers by penalized canonical correlation analysis
- 10.2202/1544-6115.1406 may be a valid DOI for title: Sparse canonical correlation analysis with application to genomic data integration
- 10.1111/biom.13043 may be a valid DOI for title: An iterative penalized least squares approach to sparse canonical correlation analysis
- 10.3389/fninf.2016.00049 may be a valid DOI for title: Pyrcca: regularized kernel canonical correlation analysis in python and its applications to neuroimaging
- 10.1007/978-1-4612-4228-4_3 may be a valid DOI for title: The canonical correlations of matrix pairs and their numerical computation
- 10.1109/allerton.2015.7447071 may be a valid DOI for title: Stochastic optimization for deep CCA via nonlinear orthogonal iterations
- 10.1109/cvpr.2007.383137 may be a valid DOI for title: Tensor canonical correlation analysis for action classification
- 10.1109/tnn.2007.891186 may be a valid DOI for title: Variational Bayesian approach to canonical correlation analysis

INVALID DOIs

- None

@whedon

whedon commented Oct 13, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@emdupre

emdupre commented Oct 14, 2021

Hi everyone ! 👋 Thanks again for agreeing to review this submission ! The review will take place in this issue.

Whenever possible, please open relevant issues on the linked software repository (and cross-link them with this issue) rather than discussing them here. This helps to make sure that feedback is translated into actionable items to improve the software. If you aren't sure how to get started, please see the Reviewing for JOSS guide -- and, of course, feel free to ping me with any questions !

@jameschapman19, one small formatting note : It looks like your paper.bib file lacks DOI fields for the included entries. This is causing whedon to complain about missing DOIs. Can you please add a DOI field for each entry ? You can see a first guess for the relevant DOIs -- to be confirmed for correctness ! -- in whedon's comment here.

@jameschapman19

Hi @emdupre. I have updated the DOIs for the ones that @whedon flagged and checked them. I guess the ones that weren't flagged are OK? The JOSS template bib doesn't include DOIs for all of its references.

@emdupre

emdupre commented Oct 15, 2021

@whedon check references

@whedon

whedon commented Oct 15, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.2307/2333955 is OK
- 10.1016/0304-4076(76)90010-5 is OK
- 10.1162/0899766042321814 is OK
- 10.1007/s11336-011-9206-8 is OK
- 10.1093/biostatistics/kxp008 is OK
- 10.2202/1544-6115.1329 is OK
- 10.2202/1544-6115.1406 is OK
- 10.1111/biom.13043 is OK
- 10.3389/fninf.2016.00049 is OK
- 10.1007/978-1-4612-4228-4_3 is OK
- 10.1109/allerton.2015.7447071 is OK
- 10.1109/cvpr.2007.383137 is OK
- 10.1109/tnn.2007.891186 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Oct 27, 2021

👋 @robbisg, please update us on how your review is going (this is an automated reminder).

@whedon

whedon commented Oct 27, 2021

👋 @ejolly, please update us on how your review is going (this is an automated reminder).

@whedon

whedon commented Oct 27, 2021

👋 @hugorichard, please update us on how your review is going (this is an automated reminder).

@robbisg

robbisg commented Nov 3, 2021

Sorry for the delay, I will review the package this week.

@hugorichard

hugorichard commented Nov 3, 2021

@emdupre, here is my review:

This is a really nice piece of software. I believe it is really useful to have all these CCA methods gathered in one place.

I recommend acceptance.

As a side note, I believe the visions of mvlearn and cca-zoo coincide; in my opinion it would be interesting to study how the two could be merged.

A few comments:

  • The statement of need is missing from the documentation
  • I don't like having the examples as notebooks. I believe it is much better to do it as in sklearn (with .py files that generate figures, along with some text that describes why the experiment is performed and why the model is useful).
  • I believe the tests are not rigorous enough. In many tests, nothing is actually checked (there is no assert statement). In other tests, it is only checked that the performance of some methods is lower than that of others. It would be more convincing to test that the method actually works: take some synthetic data that match the CCA model and show that the method can learn projections such that the correlation of the projected data is above 0.9 (a minimal sketch of such a test follows this list).
  • I get an ImportError: cannot import name 'delayed' from 'sklearn.utils.fixes' after running pip install cca-zoo. It is fixed if I pin scikit-learn to version 0.24, so something needs to be fixed there.
  • The tests take extremely long to run, much longer than sklearn's, even though the library is far smaller. This should definitely be fixed.
  • One test fails on my end when I run py.test test_models.py, with the error ValueError: Unknown projection '3d'
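
To illustrate the kind of test I have in mind, here is a minimal sketch. The import path, constructor argument, and fit/transform signatures are assumptions based on cca-zoo's scikit-learn-style API, to be checked against the actual package:

import numpy as np
from cca_zoo.models import CCA  # import path assumed


def test_cca_recovers_shared_signal():
    rng = np.random.default_rng(0)
    n, p, q = 500, 10, 10
    # Synthetic data following the CCA model: one shared latent variable
    # mapped linearly into each view, plus small view-specific noise.
    z = rng.standard_normal((n, 1))
    X = z @ rng.standard_normal((1, p)) + 0.1 * rng.standard_normal((n, p))
    Y = z @ rng.standard_normal((1, q)) + 0.1 * rng.standard_normal((n, q))

    model = CCA(latent_dims=1).fit([X, Y])  # fit signature assumed
    zx, zy = model.transform([X, Y])

    # The learned projections should recover the shared signal, so the
    # canonical correlation on the training data should be close to 1.
    corr = np.corrcoef(zx[:, 0], zy[:, 0])[0, 1]
    assert abs(corr) > 0.9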

@robbisg

robbisg commented Nov 9, 2021

Hi everyone,

@emdupre I will not open an issue in the cca-zoo repo, since on my side there are only a few requests, which can be addressed in this thread. But if you prefer, I can open issues! ;)

Review

I think cca-zoo is a very useful package for relating multivariate datasets. Personally, I have used only very basic implementations of these techniques, and I am very happy to have the possibility of using the more advanced techniques included in this tool!

Paper

The paper is well written; it presents an extensive comparison with other similar packages and describes the implementation philosophy.
In my view, these edits should be included:

  • Although you are presenting a software tool, I believe that a brief mathematical formulation of the problem, as presented in the tutorial notebook, would help a partially expert reader understand the application of the tool (something as short as the formula after this list would do)
  • Likewise, a very simple example of the application of these techniques would help a non-expert reader to understand the package.
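
For instance, the classical two-view CCA objective (standard material, stated here from memory rather than quoted from the paper) could be given as:

$$
(w_x^*,\, w_y^*) = \operatorname*{argmax}_{w_x,\, w_y} \; \frac{w_x^\top \Sigma_{XY}\, w_y}{\sqrt{w_x^\top \Sigma_{XX}\, w_x}\,\sqrt{w_y^\top \Sigma_{YY}\, w_y}}
$$

where $\Sigma_{XX}$ and $\Sigma_{YY}$ are the within-view covariance matrices and $\Sigma_{XY}$ is the cross-covariance matrix, i.e. the weights maximise the correlation between the two projected views.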

Software

The software follows the scikit-learn standard, which is very useful for users; moreover, the CI and tests are valuable efforts that make the package more robust.
I didn't have any test or import errors, and by inspecting the GitHub CI it seems that everything is OK, including across different Python versions.

  • I agree with @hugorichard that the examples could be "prettified" as in scikit-learn or mvlearn; this is quite easy using sphinx-gallery.
  • I think you could also add the mathematical formulation of these algorithms to the documentation.

I think the tool is in very good shape; I recommend acceptance.

Thanks @jameschapman19 and @emdupre, and sorry for the delay.

@jameschapman19

I was wary of replying as I don't know the process, but it feels appropriate to say thanks to you both. I agree with all of these comments and will action them.

  • Examples in .py files (another benefit of this, I have realised, is that they can be tested cleanly on commit).
  • Smaller-dimensionality tests should speed things up and improve the nature of the tests (and also make hard tests on results less unwieldy, i.e. I can put the correct array results in the script if they aren't high-dimensional). I notice this is essentially how the scikit-learn tests work. I also agree that a quick-and-easy approach is simply to check that correlations are reasonably high (and perhaps that regularisation makes them lower). I took this approach a little in the deep tests (i.e. DCCA achieving higher correlation than linear CCA).
  • Maths in the docs is a great idea.

@emdupre

emdupre commented Nov 9, 2021

Thanks both @hugorichard and @robbisg for your reviews, and @jameschapman19 for your thoughtful response !

Please do feel free to continue discussions in thread, though as specific issues on cca-zoo arise from this review process it will be helpful to cross-link them here !

@emdupre

emdupre commented Nov 9, 2021

👋 hi @ejolly ! I just wanted to check in on this review.

Please let me know if you're encountering any technical difficulties that I can help with, or if you have a timeline for when you expect to be able to complete this !

@ejolly

ejolly commented Nov 9, 2021

Hey @emdupre, sorry for the delay. I'm hoping to have this done by the end of this week or the start of next at the latest!

@ejolly

ejolly commented Nov 11, 2021

Hey @emdupre and @jameschapman19, here's my review:

Review

  • Super cool package, and it's nice to see all these algorithms (including their deep learning variants) in one place!
  • I found the manuscript clear and adhering to JOSS requirements
  • The API reference is also great and nicely documented
  • Recommend acceptance

Suggested Changes

  • I do agree with the other reviewers that it would be nice to move some of the math + high-level explanations from the tutorial notebooks into a dedicated page in the documentation site. This would help non-experts get acquainted with the methods and more clearly see how they could use cca-zoo for their own problems
  • I'd also recommend using sphinx-gallery to combine your tutorials with your documentation (a minimal configuration sketch follows this list). You might consider adding another step to your CI pipeline to build and deploy your documentation after your test suite finishes. This has a few advantages:
    • It ensures that your live docs automatically stay updated with any new commits
    • It executes the code in the tutorial notebooks, which serves as a secondary layer of testing that your test suite might not cover
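
To make this concrete, the sphinx-gallery setup is only a few lines in the Sphinx conf.py. This is a sketch, not the project's actual configuration; the paths are placeholders to be adapted to the repository layout:

# docs/conf.py -- sphinx-gallery configuration (sketch; paths are placeholders)
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "../examples",   # directory containing the .py example scripts
    "gallery_dirs": "auto_examples",  # output directory for the rendered gallery
}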

Minor suggested changes

  • Not strictly necessary, but it might be nice to have a pip install cca-zoo[all] option (see the sketch after this list for one way such an extra might be declared)
  • Since the intended audience seems to be folks who are familiar with scikit-learn, you might consider adopting the American-English spelling of "center" (as opposed to "centre") for method and function arguments, as scikit-learn does in its API and documentation, for consistency and user experience.
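
For the [all] extra, here is a sketch of how it might be declared in setup.py. The extras grouping and the package names are illustrative guesses, not cca-zoo's actual optional dependencies:

# setup.py -- sketch of an extras_require declaration; the extras and package
# names below are illustrative, not cca-zoo's real dependency groups.
from setuptools import setup

deep = ["torch"]
probabilistic = ["numpyro"]

setup(
    name="cca-zoo",
    extras_require={
        "deep": deep,
        "probabilistic": probabilistic,
        "all": deep + probabilistic,
    },
)

With a declaration like this, pip install cca-zoo[all] would pull in every optional dependency at once.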

Tests and Issues

  • I had no issues running the full testing suite in a fresh environment on macOS with Python 3.8.12
  • There were a few issues with one of the tutorial notebooks, which I noted here

@emdupre

emdupre commented Nov 15, 2021

Thanks @ejolly for your review !

It looks like jameschapman19/cca_zoo#74, jameschapman19/cca_zoo#75, jameschapman19/cca_zoo#76, and jameschapman19/cca_zoo#77 were all created in response to reviewer feedback, so I'll monitor those issues to see how the revisions are going.

@hugorichard @robbisg @ejolly if you noted any additional points that you consider not to be covered by those issues, please let us know here !

@whedon

whedon commented Dec 15, 2021

Attempting dry run of processing paper acceptance...

@whedon added the recommend-accept label Dec 15, 2021
@whedon

whedon commented Dec 15, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.2307/2333955 is OK
- 10.1016/0304-4076(76)90010-5 is OK
- 10.1162/0899766042321814 is OK
- 10.1007/s11336-011-9206-8 is OK
- 10.1093/biostatistics/kxp008 is OK
- 10.2202/1544-6115.1329 is OK
- 10.2202/1544-6115.1406 is OK
- 10.1111/biom.13043 is OK
- 10.3389/fninf.2016.00049 is OK
- 10.1007/978-1-4612-4228-4_3 is OK
- 10.1109/allerton.2015.7447071 is OK
- 10.1109/cvpr.2007.383137 is OK
- 10.1109/tnn.2007.891186 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Dec 15, 2021

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2827

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2827, then you can move forward with accepting the submission by compiling again with the flag deposit=true, e.g.

@whedon accept deposit=true

@kyleniemeyer

Hi @jameschapman19, I'm just doing some final checks before accepting.

Can you capitalize "python" in the paper's title? There are some uses throughout the paper where it should be capitalized as well.

Also, the Bach and Jordan reference needs a few more details. This is a technical report, published by the University of California, Berkeley, Department of Statistics as Tech. Rep. 688. You can also add the URL https://statistics.berkeley.edu/sites/default/files/tech-reports/688.pdf.

Lastly, please confirm that there are no DOIs available for the Wenwen et al. and Wong et al. references.

@emdupre

emdupre commented Dec 15, 2021

Thank you for catching those points, @kyleniemeyer !

Can you capitalize "python" in the paper's title? There are some uses throughout the paper where it should be capitalized as well.

@jameschapman19, when you're making those changes please update it in the zenodo archive as well, just so those two documents match !

@jameschapman19

Thanks @kyleniemeyer - I was able to track down a DOI for Wenwen et al. too.

@emdupre I made a related change to the references in the docstrings themselves, so it's best to also change the version to v1.10.8 (but don't worry if it's too much trouble).

@emdupre

emdupre commented Dec 17, 2021

@emdupre I made a related change to the references in the docstrings themselves so best to also change to 1.10.8 (but don't worry if it's too much trouble).

Thank you, @jameschapman19 ! To confirm, did you also create a corresponding archive for the new release ? Can you list the DOI here if so ?

@jameschapman19

New DOI: 10.5281/zenodo.5786616

Just realised there is also the concept DOI, which always resolves to the most recent version: 10.5281/zenodo.4382739

I don't know what is more appropriate for JOSS but I'm happy with either.

@kyleniemeyer

@whedon set 10.5281/zenodo.5786616 as archive

We prefer the DOI that points to the specific version

@whedon

whedon commented Dec 17, 2021

OK. 10.5281/zenodo.5786616 is the archive.

@kyleniemeyer

@whedon generate pdf

@whedon

whedon commented Dec 17, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kyleniemeyer

@jameschapman19 sorry, the Wenwen et al. reference still needs one final change: the DOI field should not have the extra doi= in it; just use doi={10.1049/cje.2017.08.004}

@jameschapman19

🤦 fixed

@emdupre

emdupre commented Dec 18, 2021

@whedon generate pdf

@whedon

whedon commented Dec 18, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@emdupre

emdupre commented Dec 18, 2021

@whedon set v1.10.8 as version

@whedon

whedon commented Dec 18, 2021

OK. v1.10.8 is the version.

@kyleniemeyer

@whedon accept deposit=true

@whedon

whedon commented Dec 18, 2021

Doing it live! Attempting automated processing of paper acceptance...

@whedon added the accepted and published labels Dec 18, 2021
@whedon

whedon commented Dec 18, 2021

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon

whedon commented Dec 18, 2021

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.03823 joss-papers#2836
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.03823
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@kyleniemeyer

Congratulations @jameschapman19 on your article's publication in JOSS!

Many thanks to @robbisg, @hugorichard, and @ejolly for reviewing this submission, and @emdupre for editing.

@whedon

whedon commented Dec 18, 2021

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.03823/status.svg)](https://doi.org/10.21105/joss.03823)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.03823">
  <img src="https://joss.theoj.org/papers/10.21105/joss.03823/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.03823/status.svg
   :target: https://doi.org/10.21105/joss.03823

This is how it will look in your documentation:

[DOI badge image]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
