Update readme to include ollama version
hauselin committed Sep 10, 2024
1 parent 9080c7e commit 4fca9c0
Showing 2 changed files with 7 additions and 4 deletions.
2 changes: 1 addition & 1 deletion README.Rmd
@@ -27,7 +27,7 @@ The library also makes it easy to work with data structures (e.g., conversationa

To use this R library, ensure the [Ollama](https://ollama.com) app is installed. Ollama can use GPUs for accelerating LLM inference. See [Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.

-See [Ollama's Github page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation for details)](https://github.com/ollama/ollama/blob/main/docs/api.md).
+See [Ollama's Github page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation for details)](https://github.com/ollama/ollama/blob/main/docs/api.md) and has been tested on Ollama v0.1.30 and above. It was last tested on Ollama v0.3.10.

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
9 changes: 6 additions & 3 deletions README.md
@@ -13,7 +13,8 @@ status](https://www.r-pkg.org/badges/version/ollamar)](https://CRAN.R-project.or

The [Ollama R library](https://hauselin.github.io/ollama-r/) is the
easiest way to integrate R with [Ollama](https://ollama.com/), which
-lets you run language models locally on your own machine.
+lets you run language models locally on your own machine. Main site:
+<https://hauselin.github.io/ollama-r/>

The library also makes it easy to work with data structures (e.g.,
conversational/chat histories) that are standard for different LLMs
@@ -30,7 +31,9 @@ for more information.

See [Ollama’s Github page](https://github.com/ollama/ollama) for more
information. This library uses the [Ollama REST API (see documentation
-for details)](https://github.com/ollama/ollama/blob/main/docs/api.md).
+for details)](https://github.com/ollama/ollama/blob/main/docs/api.md)
+and has been tested on Ollama v0.1.30 and above. It was last tested on
+Ollama v0.3.10.

> Note: You should have at least 8 GB of RAM available to run the 7B
> models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
@@ -82,7 +85,7 @@ remotes::install_github("hauselin/ollamar")

Below is a basic demonstration of how to use the library. For details,
see the [getting started
-vignette](https://hauselin.github.io/ollama-r/articles/ollamar.html) on our [main page](https://hauselin.github.io/ollama-r/).
+vignette](https://hauselin.github.io/ollama-r/articles/ollamar.html).

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html)
to make HTTP requests to the Ollama server, so many functions in this
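The README's actual usage demonstration is elided from this diff. As a rough illustration only, here is a minimal R sketch of the workflow the README describes (connect to a local Ollama server, send a prompt, process the httr2 response). The function names `test_connection()`, `generate()`, and `resp_process()`, and the model name `llama3.1`, are assumptions based on ollamar's documentation and should be checked against the current vignette.

```r
# Hedged sketch, not the package's official demo. Requires a running
# Ollama server; function and model names are assumed from the docs.
library(ollamar)

# Confirm the local Ollama server is reachable
test_connection()

# generate() sends a request to Ollama's REST API via httr2 and
# returns an httr2 response object
resp <- generate("llama3.1", "Tell me a one-sentence fact about R.")

# resp_process() converts the httr2 response into a usable format,
# here a plain text string
resp_process(resp, "text")
```

Because ollamar returns httr2 response objects, the same response can be processed into other formats (e.g., a data frame) depending on the argument passed to `resp_process()`.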
