
[PRE REVIEW]: Learning from Crowds with Crowd-Kit #5898

Closed · editorialbot opened this issue Sep 28, 2023 · 30 comments
Labels: pre-review, Python, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

@editorialbot (Collaborator) commented Sep 28, 2023

Submitting author: @dustalov (Dmitry Ustalov)
Repository: https://github.com/Toloka/crowd-kit
Branch with paper.md (empty if default branch):
Version: v1.2.1
Editor: @arfon
Reviewers: @jorgedch
Managing EiC: Arfon Smith
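
For context on the software under review: Crowd-Kit is a Python library for aggregating noisy crowdsourced annotations into consensus labels. A minimal usage sketch is shown below; it assumes the `DawidSkene` aggregator and the task/worker/label DataFrame convention described in the repository README, so treat the exact names as illustrative rather than definitive.

```python
# Illustrative sketch only: aggregate noisy categorical labels with Dawid-Skene.
# Assumes crowd-kit exposes DawidSkene and accepts a DataFrame with
# 'task', 'worker', and 'label' columns, per the repository README.
import pandas as pd
from crowdkit.aggregation import DawidSkene

annotations = pd.DataFrame(
    [
        {"task": "t1", "worker": "w1", "label": "cat"},
        {"task": "t1", "worker": "w2", "label": "cat"},
        {"task": "t1", "worker": "w3", "label": "dog"},
        {"task": "t2", "worker": "w1", "label": "dog"},
        {"task": "t2", "worker": "w2", "label": "dog"},
    ]
)

# fit_predict returns one aggregated (consensus) label per task as a pandas Series.
consensus = DawidSkene(n_iter=100).fit_predict(annotations)
print(consensus)
```

The authoritative examples and the full list of aggregation methods are in the Toloka/crowd-kit repository linked above.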

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/2684e43cf35482812ae02396b3312fad"><img src="https://joss.theoj.org/papers/2684e43cf35482812ae02396b3312fad/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/2684e43cf35482812ae02396b3312fad/status.svg)](https://joss.theoj.org/papers/2684e43cf35482812ae02396b3312fad)

Author instructions

Thanks for submitting your paper to JOSS @dustalov. Currently, there isn't a JOSS editor assigned to your paper.

@dustalov if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). You can search the list of people that have already agreed to review and may be suitable for this submission.

Editor instructions

The JOSS submission bot @editorialbot is here to help you find and assign reviewers and start the main review. To find out what @editorialbot can do for you type:

@editorialbot commands

editorialbot added the pre-review and Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning labels on Sep 28, 2023
@editorialbot (Collaborator Author)

Hello human, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot (Collaborator Author)

Software report:

github.com/AlDanial/cloc v 1.88  T=0.16 s (626.9 files/s, 179648.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          78           1687           2258           5004
Jupyter Notebook                 6              0          15669           2141
YAML                             8              5              3            709
TeX                              1             37              0            425
Markdown                         5            120              0            310
TOML                             1              0              0              3
-------------------------------------------------------------------------------
SUM:                            99           1849          17930           8592
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot (Collaborator Author)

Wordcount for paper.md is 2354

@editorialbot (Collaborator Author)

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1145/1866029.1866078 is OK
- 10.2307/2334029 is OK
- 10.1145/2433396.2433420 is OK
- 10.1609/aaai.v35i7.16730 is OK
- 10.2307/2346806 is OK
- 10.1109/ASRU.1997.659110 is OK
- 10.1162/neco.1997.9.8.1735 is OK
- 10.1287/opre.2013.1235 is OK
- 10.18653/v1/D19-5904 is OK
- 10.1145/3397271.3401239 is OK
- 10.1007/978-3-319-10602-1_48 is OK
- 10.17863/CAM.45912 is OK
- 10.1109/ICASSP.2010.5494979 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1609/aaai.v32i1.11506 is OK
- 10.1007/s11263-016-0940-3 is OK
- 10.14778/3055540.3055547 is OK

MISSING DOIs

- 10.1609/hcomp.v1i1.13088 may be a valid DOI for title: SQUARE: A Benchmark for Research on Computing Crowd Consensus

INVALID DOIs

- None

@editorialbot (Collaborator Author)

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot (Collaborator Author)

Five most similar historical JOSS papers:

Efficiently Learning Relative Similarity Embeddings with Crowdsourcing
Submitting author: @stsievert
Handling editor: @ajstewartlang (Active)
Reviewers: @hoechenberger, @stain, @jorgedch
Similarity score: 0.8092

A Framework to Quality Control Oceanographic Data
Submitting author: @castelao
Handling editor: @kthyng (Active)
Reviewers: @jessicaaustin, @evanleeturner
Similarity score: 0.7988

Fireworks: Reproducible Machine Learning and Preprocessing with PyTorch
Submitting author: @smk508
Handling editor: @arokem (Retired)
Reviewers: @dirmeier
Similarity score: 0.7975

Kinetics Toolkit: An Open-Source Python Package to Facilitate Research in Biomechanics
Submitting author: @felixchenier
Handling editor: @meg-simula (Retired)
Reviewers: @alcantarar, @melund
Similarity score: 0.7969

BioPsyKit: A Python package for the analysis of biopsychological data
Submitting author: @richrobe
Handling editor: @osorensen (Active)
Reviewers: @zen-juen, @espenhgn
Similarity score: 0.7942

⚠️ Note to editors: If these papers look like they might be a good match, click through to the review issue for that paper and invite one or more of the authors before considering asking the reviewers of these papers to review again for JOSS.

arfon added the waitlisted label (Submissions in the JOSS backlog due to reduced service mode) on Sep 28, 2023
@arfon (Member) commented Sep 28, 2023

@dustalov - thanks for your submission to JOSS. We're currently managing a large backlog of submissions and the editor most appropriate for your area is already rather busy.

For now, we will need to waitlist this paper and process it as the queue reduces. Thanks for your patience!

@arfon (Member) commented Nov 24, 2023

@editorialbot invite @ajstewartlang as editor

👋 @ajstewartlang – I realize you're at/close to editorial capacity right now but I'm also trying to get some of the older submissions from our backlog assigned. Would you be willing to take this one on for us?

@editorialbot (Collaborator Author)

Invitation to edit this submission sent!

@ajstewartlang

> @editorialbot invite @ajstewartlang as editor
>
> 👋 @ajstewartlang – I realize you're at/close to editorial capacity right now but I'm also trying to get some of the older submissions from our backlog assigned. Would you be willing to take this one on for us?

Sorry @arfon but I'm absolutely out of bandwidth at the moment.

@arfon (Member) commented Dec 9, 2023

> Sorry @arfon but I'm absolutely out of bandwidth at the moment.

No problem, thanks for getting back to me!

@arfon (Member) commented Dec 9, 2023

@editorialbot assign me as editor

@editorialbot (Collaborator Author)

Assigned! @arfon is now the editor

arfon removed the waitlisted label (Submissions in the JOSS backlog due to reduced service mode) on Dec 9, 2023
@arfon (Member) commented Dec 9, 2023

@stsievert @chrislintott @hoechenberger @stain @jorgedch – 👋 would any of you be willing to review this submission for JOSS? The submission under consideration is Learning from Crowds with Crowd-Kit (#5898)

The review process at JOSS is unique: it takes place in a GitHub issue, is open, and author-reviewer-editor conversations are encouraged. You can learn more about the process in these guidelines: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Based on your experience, we think you might be able to provide a great review of this submission. Please let me know if you think you can help us out!

Many thanks
Arfon

@arfon (Member) commented Jan 6, 2024

👋 @dustalov – as you can see, I'm struggling to find reviewers here. Do you have any suggestions for people who might be able to give a good review here?

@dustalov commented Jan 8, 2024

Hi @arfon, thank you for letting me know. I checked the reviewer guidelines and have three questions:

  1. if the potential reviewer has no business relationship with us, but they are a user of our library, would it be a COI? (I guess not, but let's clarify this, please)
  2. is it sufficient for the reviewers to write to this issue?
  3. how many reviewers do we need?

@arfon (Member) commented Jan 9, 2024

> if the potential reviewer has no business relationship with us, but they are a user of our library, would it be a COI? (I guess not, but let's clarify this, please)

Users are absolutely fine – in fact this can make for a great reviewer.

> is it sufficient for the reviewers to write to this issue?

Reviews happen on GitHub yes. If you can suggest GitHub handles here I can ping them to ask.

> how many reviewers do we need?

At least two, and ideally three.

@jorgedch

Hi @arfon, I can be a reviewer. Recently we had a baby, so I'll probably need something between 4-8 weekends (sorry for the large margin) to be finished if it's ok :)

@arfon (Member) commented Jan 13, 2024

> Hi @arfon, I can be a reviewer. Recently we had a baby, so I'll probably need something between 4-8 weekends (sorry for the large margin) to be finished if it's ok :)

Great, thank you @jorgedch! I'll go ahead and start the review while we wait for a second reviewer to be identified. 6-8 weeks is perfectly fine for a review time.

@arfon (Member) commented Jan 13, 2024

@editorialbot add @jorgedch as reviewer

@editorialbot (Collaborator Author)

@jorgedch added to the reviewers list!

@arfon (Member) commented Jan 13, 2024

@editorialbot start review

@editorialbot (Collaborator Author)

OK, I've started the review over in #6227.

@arfon (Member) commented Jan 13, 2024

@dustalov – we still need one or two more reviewers here but I'll get the review started with the first reviewer for now. @jorgedch @dustalov – see you over in #6227 where the actual review will take place.

@dustalov

Thank you, I'm on it!

@dustalov

We have interacted with a lot of labs doing crowdsourcing research, which prevents me from asking them for a review per the guidelines. I'm still on it, though it's proving harder than I expected.

@dustalov commented Feb 6, 2024

@arfon: could you please add @mitchellg as a reviewer?

@arfon (Member) commented Feb 7, 2024

@mitchellg – can you affirm over in #6227 that you're happy to be a reviewer here?

@mitchellg

done!

@editorialbot (Collaborator Author)

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
