Commit

Update docs
hauselin committed Sep 10, 2024
1 parent 85bcdea commit 2147f40
Showing 5 changed files with 47 additions and 33 deletions.
13 changes: 13 additions & 0 deletions .github/CODE_OF_CONDUCT.md
@@ -0,0 +1,13 @@
# Contributor Code of Conduct

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.0.0, available at http://contributor-covenant.org/version/1/0/0/
16 changes: 16 additions & 0 deletions .github/CONTRIBUTING.md
@@ -0,0 +1,16 @@
# Community guidelines and contributing

## Report issues or seek support

Open a [GitHub issue](https://github.com/hauselin/ollama-r/issues) with a concise description of the problem, including steps to reproduce and your environment. Check existing/closed issues before posting.

## Contribute to ollamar

Before you make a substantial pull request, you should always file an issue and make sure someone from the team agrees that it’s a problem.

Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code (a minimal sketch follows the list below). Refer to [R packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for R package development guidelines.

- We use [roxygen2](https://roxygen2.r-lib.org/), with Markdown syntax, for documentation.
- We use [testthat](https://testthat.r-lib.org/) for testing. Contributions with test cases included are easier to accept.
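
For example, a new exported function would pair roxygen2 Markdown comments with a matching testthat test. The `add_numbers()` function below is purely illustrative and not part of ollamar:

```r
# R/add_numbers.R
#' Add two numbers
#'
#' A toy example showing roxygen2 Markdown documentation.
#'
#' @param x,y Numeric values.
#' @return The sum of `x` and `y`.
#' @examples
#' add_numbers(1, 2)
#' @export
add_numbers <- function(x, y) {
  x + y
}

# tests/testthat/test-add_numbers.R
test_that("add_numbers() sums its inputs", {
  expect_equal(add_numbers(1, 2), 3)
})
```

Running `devtools::document()` and `devtools::test()` locally before submitting helps keep the pull request easy to review.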


21 changes: 7 additions & 14 deletions README.Rmd
Original file line number Diff line number Diff line change
@@ -48,13 +48,13 @@ If you use this library, please cite [this paper](https://doi.org/10.31234/osf.i
}
```

## Ollama R versus Ollama Python/JavaScript
## Ollama R vs Ollama Python/JS

This library has been inspired by the official [Ollama Python](https://github.com/ollama/ollama-python) and [Ollama JavaScript](https://github.com/ollama/ollama-js) libraries. If you're coming from Python or JavaScript, you should feel right at home. Alternatively, if you plan to use Ollama with Python or JavaScript, using this R library will help you understand the Python/JavaScript libraries as well.

## Installation

1. Download and install [Ollama](https://ollama.com).
1. Download and install the [Ollama](https://ollama.com) app.

- [macOS](https://ollama.com/download/Ollama-darwin.zip)
- [Windows preview](https://ollama.com/download/OllamaSetup.exe)
@@ -123,7 +123,7 @@ pull("llama3.1") # download a model (the equivalent bash code: ollama run llama
list_models() # verify you've pulled/downloaded the model
```

### Delete a model
### Delete model

Delete a model and its data (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#delete-a-model)). You can see what models you've downloaded with `list_models()`. To delete a model, specify its name.

@@ -132,7 +132,7 @@ list_models() # see the models you've pulled/downloaded
delete("all-minilm:latest") # returns a httr2 response object
```

### Generate a completion
### Generate completion

Generate a response for a given prompt (see [API doc](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion)).
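
A minimal sketch of the call, assuming the `llama3.1` model has already been pulled; the `output = "text"` shortcut mirrors the values used elsewhere in this README:

```{r eval=FALSE}
resp <- generate("llama3.1", "Tell me a 5-word story")  # returns an httr2_response object by default
generate("llama3.1", "Tell me a 5-word story", output = "text")  # assumed: returns parsed text instead of the raw response
```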

@@ -185,7 +185,7 @@ messages <- create_message("What is in the image?", images = "image.png")
chat("benzie/llava-phi-3", messages, output = "text")
```

#### Streaming responses
#### Stream responses

```{r eval=FALSE}
messages <- create_message("Tell me a 1-paragraph story.")
@@ -195,8 +195,7 @@ chat("llama3.1", messages, output = "text", stream = TRUE)
# chat(model = "llama3.1", messages = messages, output = "text", stream = TRUE) # same as above
```


#### Format and prepare messages for the `chat()` function
#### Format messages for chat

Internally, messages are represented as a `list` of many distinct `list` messages. Each list/message object has two elements: `role` (can be `"user"` or `"assistant"` or `"system"`) and `content` (the message text). The example below shows how the messages/lists are presented.
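
A minimal sketch of that structure, with the list built by hand and with the `create_message()` helper used elsewhere in this README (the default `role = "user"` is an assumption):

```{r eval=FALSE}
# built by hand: a list of message lists, each with `role` and `content`
messages <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user", content = "What is the capital of Australia?")
)

# helper shown elsewhere in this README; role is assumed to default to "user"
msg <- create_message("What is the capital of Australia?")
```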

@@ -301,7 +300,7 @@ e3 <- embed("llama3.1", "Hello, how are you?", normalize = FALSE)
e4 <- embed("llama3.1", "Hi, how are you?", normalize = FALSE)
```

### Parsing `httr2_response` objects with `resp_process()`
### Parse `httr2_response` objects with `resp_process()`

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this library return an `httr2_response` object by default.
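
A minimal sketch of the pattern; `"df"` appears in the batch example later in this README, while `"text"` is an assumed output format:

```{r eval=FALSE}
resp <- generate("llama3.1", "Tell me a 5-word story")  # httr2_response object
resp_process(resp, "df")    # parse into a data frame
resp_process(resp, "text")  # assumed: parse into a character string
```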

@@ -428,9 +427,3 @@ bind_rows(lapply(resps, resp_process, "df")) # get responses as dataframes
# 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z
```

## Community guidelines

Contribute: Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code. Refer to [R packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for R package development guidelines.

Report issues or seek support: Open a [Github issue](https://github.com/hauselin/ollama-r/issues) with a concise description of the problem, including steps to reproduce and your environment. Check existing/closed issues before posting.
26 changes: 7 additions & 19 deletions README.md
@@ -55,7 +55,7 @@ entry:
}
```

## Ollama R versus Ollama Python/JavaScript
## Ollama R vs Ollama Python/JS

This library has been inspired by the official [Ollama
Python](https://github.com/ollama/ollama-python) and [Ollama
@@ -67,7 +67,7 @@ libraries as well.

## Installation

1. Download and install [Ollama](https://ollama.com).
1. Download and install the [Ollama](https://ollama.com) app.

- [macOS](https://ollama.com/download/Ollama-darwin.zip)
- [Windows preview](https://ollama.com/download/OllamaSetup.exe)
@@ -150,7 +150,7 @@ pull("llama3.1") # download a model (the equivalent bash code: ollama run llama
list_models() # verify you've pulled/downloaded the model
```

### Delete a model
### Delete model

Delete a model and its data (see [API
doc](https://github.com/ollama/ollama/blob/main/docs/api.md#delete-a-model)).
@@ -162,7 +162,7 @@ list_models() # see the models you've pulled/downloaded
delete("all-minilm:latest") # returns a httr2 response object
```

### Generate a completion
### Generate completion

Generate a response for a given prompt (see [API
doc](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion)).
@@ -216,7 +216,7 @@ messages <- create_message("What is in the image?", images = "image.png")
chat("benzie/llava-phi-3", messages, output = "text")
```

#### Streaming responses
#### Stream responses

``` r
messages <- create_message("Tell me a 1-paragraph story.")
@@ -226,7 +226,7 @@ chat("llama3.1", messages, output = "text", stream = TRUE)
# chat(model = "llama3.1", messages = messages, output = "text", stream = TRUE) # same as above
```

#### Format and prepare messages for the `chat()` function
#### Format messages for chat

Internally, messages are represented as a `list` of many distinct `list`
messages. Each list/message object has two elements: `role` (can be
@@ -348,7 +348,7 @@ e3 <- embed("llama3.1", "Hello, how are you?", normalize = FALSE)
e4 <- embed("llama3.1", "Hi, how are you?", normalize = FALSE)
```

### Parsing `httr2_response` objects with `resp_process()`
### Parse `httr2_response` objects with `resp_process()`

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html)
to make HTTP requests to the Ollama server, so many functions in this
@@ -480,15 +480,3 @@ bind_rows(lapply(resps, resp_process, "df")) # get responses as dataframes
# 2 llama3.1 assistant negative 2024-08-05T17:54:27.657525Z
# 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z
```

## Community guidelines

Contribute: Fork the repository, create a branch for your changes, and
submit a pull request with documented and tested code. Refer to [R
packages](https://r-pkgs.org/) by Hadley Wickham and Jennifer Bryan for
R package development guidelines.

Report issues or seek support: Open a [Github
issue](https://github.com/hauselin/ollama-r/issues) with a concise
description of the problem, including steps to reproduce and your
environment. Check existing/closed issues before posting.
4 changes: 4 additions & 0 deletions _pkgdown.yml
@@ -3,6 +3,10 @@ url: https://hauselin.github.io/ollama-r
home:
  title: Ollama R Library
  description: Run Ollama language models in R.
  sidebar:
    structure: [links, license, community, citation, dev, toc]
toc:
  depth: 4

template:
bootstrap: 5
