
[REVIEW]: ollamar: An R Package for running large language models #7211

Open · editorialbot opened this issue Sep 10, 2024 · 15 comments
Labels: R, review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

@editorialbot (Collaborator) commented Sep 10, 2024

Submitting author: @hauselin (Hause Lin)
Repository: https://github.com/hauselin/ollama-r
Branch with paper.md (empty if default branch): joss
Version: v1.2.0.9000
Editor: @crvernon
Reviewers: @KennethEnevoldsen, @elenlefoll
Archive: Pending

Status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6"><img src="https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6/status.svg)](https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@KennethEnevoldsen & @elenlefoll, your review will be checklist-based. Each of you will have a separate checklist that you should update as you carry out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @crvernon know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @KennethEnevoldsen

@editorialbot (Collaborator Author)

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot (Collaborator Author)

Software report:

github.com/AlDanial/cloc v 1.90  T=0.02 s (1603.8 files/s, 190658.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               20            494            640           1413
Markdown                         8            178              0            587
YAML                             4             26              9            160
Rmd                              1            112            195            125
TeX                              1             14              0             89
-------------------------------------------------------------------------------
SUM:                            34            824            844           2374
-------------------------------------------------------------------------------

Commit count by author:

   139	Hause Lin
     9	Tawab Safi

@editorialbot (Collaborator Author)

Paper file info:

📄 Wordcount for paper.md is 1189

✅ The paper includes a Statement of need section

@editorialbot (Collaborator Author)

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.48550/arXiv.2408.11707 is OK
- 10.1002/widm.1531 is OK
- 10.48550/arXiv.2404.07654 is OK
- 10.48550/arXiv.2408.05933 is OK
- 10.48550/arXiv.2403.12082 is OK
- 10.48550/arXiv.2408.11847 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Enhancing propaganda detection with open source la...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot (Collaborator Author)

License info:

🟡 License found: Other (Check here for OSI approval)

@crvernon

👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above.

Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #7211 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this: JOSS reviews are by nature iterative, and any early feedback you can provide to the author will be very helpful in meeting this schedule.

@editorialbot (Collaborator Author)

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@KennethEnevoldsen commented Sep 10, 2024

Review checklist for @KennethEnevoldsen

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hauselin/ollama-r?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hauselin) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@KennethEnevoldsen

  • There seem to be two license files. I believe these can be combined into one.
  • Substantial scholarly effort: I believe this is a borderline case. This package is quite young and provides a wrapper around Ollama. I believe this in itself is quite valuable, but I am unsure whether there is a workflow in place to ensure maintainability and compatibility. This could, e.g., be ensured using a scheduled test. I would love a section on this added to the paper, and would suggest it in place of the tutorials, which are best kept in the documentation (as they will otherwise become outdated).
  • Installation: I have created an issue here: Broken link in the readme hauselin/ollama-r#25
  • State of the field: I believe the comparison to existing packages is lacking. From the paper I am unsure what rollama lacks that ollama-r adds; it would be nice if this were clearer. I also believe there are other packages with similar aims, such as tidychatmodels, which are not currently included.
  • Quality of writing: While I appreciate the simple usage examples, I would remove these from the paper and instead spend some time on the reasoning behind the implementation.
  • References: see state of the field.
  • Documentation: I believe the readability of the documentation can be improved. For instance, there is a header called "Notes", which seems like it should be reformatted. Fold-out menus could also be used to ease navigation.
  • Community guidelines: I can't find any community guidelines.

@hauselin

Thanks @KennethEnevoldsen for your review and comments! I've made changes to the repo/docs to address your comments, which definitely clarified things a lot. Let me know if they resolve your concerns (see responses below). Your other comments relate to the paper itself, which I'll address later.

> There seem to be two license files. I believe these can be combined into one.

The R community has workflows and package structures that produce multiple license files, and these usually aren't combined into one; ollamar closely follows these R community conventions. For example, see the multiple license files in ggplot2 and dplyr, two of the most widely used R libraries.
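For anyone unfamiliar with the convention, here's roughly what the layout looks like for an MIT-licensed CRAN package (a sketch of the general R/CRAN pattern with placeholder values, not necessarily ollamar's exact files):

```
# DESCRIPTION (excerpt): declares the license and points to the stub file
License: MIT + file LICENSE

# LICENSE: the short two-field stub that CRAN expects
YEAR: 2024
COPYRIGHT HOLDER: <package authors>

# LICENSE.md: the full license text, which GitHub's license detection reads
```

This is why a single R repository can legitimately contain more than one license file.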

> Substantial scholarly effort: I believe this is a borderline case. This package is quite young and provides a wrapper around Ollama. I believe this in itself is quite valuable, but I am unsure whether there is a workflow in place to ensure maintainability and compatibility. This could, e.g., be ensured using a scheduled test. I would love a section on this added to the paper, and would suggest it in place of the tutorials, which are best kept in the documentation (as they will otherwise become outdated).

Glad you think it's valuable! There are several workflows in place. First, the package already has GitHub continuous integration and deployment, so it is tested on macOS, Linux, and Windows whenever the repository changes (there are many test cases, all of which run on every update). Second, because it is hosted on CRAN, the same tests are also run regularly on CRAN's servers to ensure maintainability and compatibility (see the regular test results here; note that on 2024-09-10 a few CRAN servers were down, so tests failed on certain Linux machines and the results page might not load). Note also that for a library to be hosted on CRAN, it has to satisfy many strict requirements regarding maintainability and compatibility (otherwise, CRAN will inform the author and take the package down).
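For a rough sketch of what this kind of setup looks like (illustrative only; the branch names and cron schedule below are placeholders, not our exact workflow file), an R CMD check workflow via the r-lib actions can also run on a schedule, which covers the scheduled-test suggestion:

```yaml
# .github/workflows/R-CMD-check.yaml (illustrative sketch)
on:
  push:
    branches: [main]
  pull_request:
  schedule:
    - cron: "0 6 * * 1"  # weekly run, even when the repo itself is quiet

jobs:
  R-CMD-check:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: r-lib/actions/setup-r@v2
      - uses: r-lib/actions/setup-r-dependencies@v2
        with:
          extra-packages: any::rcmdcheck
      - uses: r-lib/actions/check-r-package@v2
```

The cron trigger is the piece that catches upstream breakage (e.g., Ollama API changes) between commits.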

> Documentation: I believe the readability of the documentation can be improved. For instance, there is a header called "Notes", which seems like it should be reformatted. Fold-out menus could also be used to ease navigation.
>
> Community guidelines: I can't find any community guidelines.

I've restructured the site so the home page focuses on installation and basic usage (and added a table of contents on the right). I've also added a Get started page that uses fold-out menus for ease of navigation. The old "Notes" section no longer exists; it has been integrated into the rest of the documentation. There's a new Community section in the right sidebar that links to the contributing guide and code of conduct.

I've also updated the installation instructions so they are clearer, especially for different operating systems.
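For reference, installation reduces to the standard R idioms (a minimal sketch; test_connection() is the package's helper for checking that a local Ollama server is reachable):

```r
# Stable release from CRAN
install.packages("ollamar")

# Or the development version from GitHub
# install.packages("remotes")
remotes::install_github("hauselin/ollama-r")

library(ollamar)
test_connection()  # verify the local Ollama server is reachable
```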

@KennethEnevoldsen commented Sep 10, 2024

Thanks for the quick fixes!

> The R community has workflows and package structures that produce multiple license files, and these usually aren't combined into one
>
> Second, because it is hosted on CRAN, the same tests are also run regularly on CRAN's servers to ensure maintainability and compatibility

Thanks for the clarification. It has been a while since I wrote packages in R, and they were only for internal projects, so I didn't know CRAN regularly ran tests (definitely nice to know).

Just to clear up any remaining worries: what happens if the Ollama community pushes an update with breaking changes? As I understand it, that would require an update from you. It might be ideal to add information about compatible versions to allow users to resolve compatibility issues.

> I've made changes to the repo/docs to address your comments, which definitely clarified things a lot

I totally agree; it makes navigation noticeably easier. Once the updates for the paper are in, I will do a full run-through of the code examples in the docs and run the tests.

Optional:

  • I note that you do not use a CITATION.cff file for citations. I would recommend adding it, but it is of course up to you.
  • To increase the visibility of the package, it might be worth checking whether the Ollama folks want to add your library to their list of libraries. I understand if you would rather do this after the review.

@hauselin commented Sep 10, 2024

> Just to clear up any remaining worries: what happens if the Ollama community pushes an update with breaking changes? As I understand it, that would require an update from you. It might be ideal to add information about compatible versions to allow users to resolve compatibility issues.

I've added the versions that have been tested to the updated README (https://github.com/hauselin/ollama-r/blob/4fca9c0546b45e7ea998e600e8112de17e028340/README.md?plain=1#L32C1-L36C16). Let me know if this is good, @KennethEnevoldsen. Ollama should be relatively stable (86k stars and almost 7k forks), so it's unlikely they'll introduce breaking changes. But if they do, the official Python and JS libraries (and the hundreds of apps/tools already built on top of Ollama) will break too. ollamar's design philosophy is similar to those two libraries', and it is modular and follows good software design practices, so it should not be difficult to update.

Regarding your two optional comments:

> To increase the visibility of the package, it might be worth checking whether the Ollama folks want to add your library to their list of libraries. I understand if you would rather do this after the review.

It actually already is in their list of libraries, just in a different section. I think the section you referred to is for their official libraries; other libraries are listed lower down on the same page.

> I note that you do not use a CITATION.cff file for citations. I would recommend adding it, but it is of course up to you.

There is a citation file here; it's located in that directory because, again, I'm following R package development conventions. It is picked up by GitHub (if you go to the main page, you can see "Cite this repository" in the right sidebar). If I add a CITATION.cff, the automated checks flag it and leave this note: "Found the following CITATION file in a non-standard place: CITATION.cff. Most likely 'inst/CITATION' should be used instead."
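For anyone unfamiliar with the R convention, inst/CITATION is a small R script built from bibentry() calls. A hypothetical sketch (our actual file may differ):

```r
# inst/CITATION -- hypothetical sketch of the convention
citHeader("To cite ollamar in publications, use:")

bibentry(
  bibtype = "Misc",
  title   = "ollamar: An R package for running large language models",
  author  = person("Hause", "Lin"),
  year    = "2024",
  url     = "https://github.com/hauselin/ollama-r"
)
```

R then renders this entry for users who run citation("ollamar").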

@crvernon commented Oct 8, 2024

👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - just checking in to see how things are going with this review. Could you each post a short update here?

Also, @elenlefoll, I don't see that you have created your checklist yet. Are you still able to conduct this review?

Thanks!

@KennethEnevoldsen

I am waiting for the updated article draft, as noted here:

> Once the updates for the paper are in, I will do a full run-through of the code examples in the docs and run the tests.

Notably, the missing state-of-the-field and design-considerations sections, as these will allow me to evaluate whether the code lives up to its intent.

However, I understand that @hauselin was waiting for the second review before making too many changes.

@hauselin commented Oct 8, 2024

Yes, @crvernon, I've already revised the codebase based on @KennethEnevoldsen's suggestions. What's left are the changes to the paper itself, and waiting for the second review before revising the paper makes more sense to me (but @crvernon, if you think it makes sense for me to revise the paper at this point, let me know).
