From 2147f400fb29c731f4bd5df67118a3b0cde5e826 Mon Sep 17 00:00:00 2001 From: Hause Lin Date: Tue, 10 Sep 2024 11:00:46 -0400 Subject: [PATCH] Update docs --- .github/CODE_OF_CONDUCT.md | 13 +++++++++++++ .github/CONTRIBUTING.md | 16 ++++++++++++++++ README.Rmd | 21 +++++++-------------- README.md | 26 +++++++------------------- _pkgdown.yml | 4 ++++ 5 files changed, 47 insertions(+), 33 deletions(-) create mode 100644 .github/CODE_OF_CONDUCT.md create mode 100644 .github/CONTRIBUTING.md diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..dbc0ea0 --- /dev/null +++ b/.github/CODE_OF_CONDUCT.md @@ -0,0 +1,13 @@ +# Contributor Code of Conduct + +As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. + +We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion. + +Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct. + +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team. + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers. 
+ +This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.0.0, available at http://contributor-covenant.org/version/1/0/0/ diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md new file mode 100644 index 0000000..6b54c87 --- /dev/null +++ b/.github/CONTRIBUTING.md @@ -0,0 +1,16 @@ +# Community guidelines and contributing + +## Report issues or seek support + +Open a [GitHub issue](https://github.com/hauselin/ollama-r/issues) with a concise description of the problem, including steps to reproduce and your environment. Check existing/closed issues before posting. + +## Contribute to ollamar + +Before you make a substantial pull request, you should always file an issue and make sure someone from the team agrees that it’s a problem. + +Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code. Refer to [R packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for R package development guidelines. + +- We use [roxygen2](https://roxygen2.r-lib.org/), with Markdown syntax, for documentation. + +- We use [testthat](https://testthat.r-lib.org/) for testing. Contributions with test cases included are easier to accept. + + diff --git a/README.Rmd b/README.Rmd index c35db85..48bfa13 100644 --- a/README.Rmd +++ b/README.Rmd @@ -48,13 +48,13 @@ If you use this library, please cite [this paper](https://doi.org/10.31234/osf.i } ``` -## Ollama R versus Ollama Python/JavaScript +## Ollama R vs Ollama Python/JS This library has been inspired by the official [Ollama Python](https://github.com/ollama/ollama-python) and [Ollama JavaScript](https://github.com/ollama/ollama-js) libraries. If you're coming from Python or JavaScript, you should feel right at home. Alternatively, if you plan to use Ollama with Python or JavaScript, using this R library will help you understand the Python/JavaScript libraries as well. ## Installation -1.
Download and install [Ollama](https://ollama.com). +1. Download and install the [Ollama](https://ollama.com) app. - [macOS](https://ollama.com/download/Ollama-darwin.zip) - [Windows preview](https://ollama.com/download/OllamaSetup.exe) @@ -123,7 +123,7 @@ pull("llama3.1") # download a model (the equivalent bash code: ollama run llama list_models() # verify you've pulled/downloaded the model ``` -### Delete a model +### Delete model Delete a model and its data (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#delete-a-model)). You can see what models you've downloaded with `list_models()`. To download a model, specify the name of the model. @@ -132,7 +132,7 @@ list_models() # see the models you've pulled/downloaded delete("all-minilm:latest") # returns a httr2 response object ``` -### Generate a completion +### Generate completion Generate a response for a given prompt (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion)). @@ -185,7 +185,7 @@ messages <- create_message("What is in the image?", images = "image.png") chat("benzie/llava-phi-3", messages, output = "text") ``` -#### Streaming responses +#### Stream responses ```{r eval=FALSE} messages <- create_message("Tell me a 1-paragraph story.") @@ -195,8 +195,7 @@ chat("llama3.1", messages, output = "text", stream = TRUE) # chat(model = "llama3.1", messages = messages, output = "text", stream = TRUE) # same as above ``` - -#### Format and prepare messages for the `chat()` function +#### Format messages for chat Internally, messages are represented as a `list` of many distinct `list` messages. Each list/message object has two elements: `role` (can be `"user"` or `"assistant"` or `"system"`) and `content` (the message text). The example below shows how the messages/lists are presented. 
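For reviewers reading this patch, the message structure described in the hunk above can be sketched in R. This is an illustrative sketch only (it uses `create_message()`, which the README documents; the plain-`list` form shown is equivalent by construction):

```r
# Each message is a list with two elements: `role` and `content`.
# Built by hand:
messages <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user", content = "What is in the image?")
)

# Or with the create_message() helper, which returns one such list:
msg <- create_message("What is in the image?", role = "user")
# msg$role is "user"; msg$content is "What is in the image?"
```

Either form can be passed to `chat()` as the `messages` argument.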
@@ -301,7 +300,7 @@ e3 <- embed("llama3.1", "Hello, how are you?", normalize = FALSE) e4 <- embed("llama3.1", "Hi, how are you?", normalize = FALSE) ``` -### Parsing `httr2_response` objects with `resp_process()` +### Parse `httr2_response` objects with `resp_process()` `ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this library returns an `httr2_response` object by default. @@ -428,9 +427,3 @@ bind_rows(lapply(resps, resp_process, "df")) # get responses as dataframes # 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z ``` - -## Community guidelines - -Contribute: Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code. Refer to [R packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for R package development guidelines. - -Report issues or seek support: Open a [Github issue](https://github.com/hauselin/ollama-r/issues) with a concise description of the problem, including steps to reproduce and your environment. Check existing/closed issues before posting. diff --git a/README.md b/README.md index 0381a1e..1492ad6 100644 --- a/README.md +++ b/README.md @@ -55,7 +55,7 @@ entry: } ``` -## Ollama R versus Ollama Python/JavaScript +## Ollama R vs Ollama Python/JS This library has been inspired by the official [Ollama Python](https://github.com/ollama/ollama-python) and [Ollama @@ -67,7 +67,7 @@ libraries as well. ## Installation -1. Download and install [Ollama](https://ollama.com). +1. Download and install the [Ollama](https://ollama.com) app. 
- [macOS](https://ollama.com/download/Ollama-darwin.zip) - [Windows preview](https://ollama.com/download/OllamaSetup.exe) @@ -150,7 +150,7 @@ pull("llama3.1") # download a model (the equivalent bash code: ollama run llama list_models() # verify you've pulled/downloaded the model ``` -### Delete a model +### Delete model Delete a model and its data (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#delete-a-model)). @@ -162,7 +162,7 @@ list_models() # see the models you've pulled/downloaded delete("all-minilm:latest") # returns a httr2 response object ``` -### Generate a completion +### Generate completion Generate a response for a given prompt (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion)). @@ -216,7 +216,7 @@ messages <- create_message("What is in the image?", images = "image.png") chat("benzie/llava-phi-3", messages, output = "text") ``` -#### Streaming responses +#### Stream responses ``` r messages <- create_message("Tell me a 1-paragraph story.") @@ -226,7 +226,7 @@ chat("llama3.1", messages, output = "text", stream = TRUE) # chat(model = "llama3.1", messages = messages, output = "text", stream = TRUE) # same as above ``` -#### Format and prepare messages for the `chat()` function +#### Format messages for chat Internally, messages are represented as a `list` of many distinct `list` messages. 
Each list/message object has two elements: `role` (can be @@ -348,7 +348,7 @@ e3 <- embed("llama3.1", "Hello, how are you?", normalize = FALSE) e4 <- embed("llama3.1", "Hi, how are you?", normalize = FALSE) ``` -### Parsing `httr2_response` objects with `resp_process()` +### Parse `httr2_response` objects with `resp_process()` `ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this @@ -480,15 +480,3 @@ bind_rows(lapply(resps, resp_process, "df")) # get responses as dataframes # 2 llama3.1 assistant negative 2024-08-05T17:54:27.657525Z # 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z ``` - -## Community guidelines - -Contribute: Fork the repository, create a branch for your changes, and -submit a pull request with documented and tested code. Refer to [R -packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for -R package development guidelines. - -Report issues or seek support: Open a [Github -issue](https://github.com/hauselin/ollama-r/issues) with a concise -description of the problem, including steps to reproduce and your -environment. Check existing/closed issues before posting. diff --git a/_pkgdown.yml b/_pkgdown.yml index 8861074..a928a40 100644 --- a/_pkgdown.yml +++ b/_pkgdown.yml @@ -3,6 +3,10 @@ url: https://hauselin.github.io/ollama-r home: title: Ollama R Library description: Run Ollama language models in R. + sidebar: + structure: [links, license, community, citation, dev, toc] + toc: + depth: 4 template: bootstrap: 5
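As a closing usage sketch of the `resp_process()` workflow that the README hunks above describe (assumes a running Ollama server with the `llama3.1` model pulled; the output formats shown are the ones the README itself uses):

```r
library(ollamar)

# generate() returns an httr2_response object by default
resp <- generate("llama3.1", "Tell me a 1-sentence story.")

# resp_process() parses the response into the format you need
resp_process(resp, "df")    # a data frame (model, role, content, timestamp)
resp_process(resp, "text")  # just the response text
```

The same pattern extends to multiple responses, as in the README's `bind_rows(lapply(resps, resp_process, "df"))` example.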