Add copy #13
hauselin committed Aug 4, 2024
1 parent 9c92008 commit 277aaf9
Showing 7 changed files with 141 additions and 19 deletions.
1 change: 1 addition & 0 deletions NAMESPACE
@@ -4,6 +4,7 @@ export(append_message)
export(chat)
export(check_option_valid)
export(check_options)
export(copy)
export(create_message)
export(create_request)
export(delete)
44 changes: 44 additions & 0 deletions R/ollama.R
@@ -354,6 +354,50 @@ show <- function(name, verbose = FALSE, output = c("jsonlist", "resp", "raw"), e



#' Copy a model
#'
#' @param source The name of the model to copy.
#' @param destination The name for the new model.
#' @param endpoint The endpoint to copy the model. Default is "/api/copy".
#' @param host The base URL to use. Default is NULL, which uses Ollama's default base URL.
#'
#' @references
#' [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md#copy-a-model)
#'
#' @return An httr2 response object.
#' @export
#'
#' @examplesIf test_connection()$status_code == 200
#' copy("llama3", "llama3_copy")
#' delete("llama3_copy") # delete the model that was just copied
copy <- function(source, destination, endpoint = "/api/copy", host = NULL) {

if (!model_avail(source)) {
return(invisible())
}

body_json <- list(
source = source,
destination = destination
)
req <- create_request(endpoint, host)
req <- httr2::req_method(req, "POST")
tryCatch(
{
req <- httr2::req_body_json(req, body_json)
resp <- httr2::req_perform(req)
return(resp)
},
error = function(e) {
stop(e)
}
)
}
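Under the hood, `copy()` POSTs a JSON body naming the source and destination models to Ollama's `/api/copy` endpoint (see the linked API documentation). A minimal Python sketch of the payload this request carries; the model names are placeholders, and the sketch only builds and inspects the body rather than contacting a server:

```python
import json

# JSON payload expected by Ollama's /api/copy endpoint:
# the model to duplicate and the name for the copy.
payload = {"source": "llama3", "destination": "llama3-copy"}
body = json.dumps(payload)

# copy() POSTs this body to http://127.0.0.1:11434/api/copy;
# a 200 OK response indicates success.
print(body)  # {"source": "llama3", "destination": "llama3-copy"}
```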








36 changes: 27 additions & 9 deletions README.Rmd
@@ -53,20 +53,38 @@ If it doesn't work or you don't have `devtools` installed, please run `install.p

## Usage

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this library return an `httr2_response` object by default. If the response object says `Status: 200 OK`, then the request was successful. See [Notes section](#notes) below for more information.

```{r eval=FALSE}
library(ollamar)
test_connection() # test connection to Ollama server
# if you see "Ollama local server running", it's working
# generate a response/text based on a prompt; returns an httr2 response by default
resp <- generate("llama3", "tell me a 5-word story")
resp
#' interpret httr2 response object
#' <httr2_response>
#' POST http://127.0.0.1:11434/api/generate # endpoint
#' Status: 200 OK # if successful, status code should be 200 OK
#' Content-Type: application/json
#' Body: In memory (414 bytes)
# get just the text from the response object
resp_process(resp, "text")
# get the text as a tibble dataframe
resp_process(resp, "df")
# alternatively, specify the output type when calling the function initially
txt <- generate("llama3", "tell me a 5-word story", output = "text")
# list available models (models you've pulled/downloaded)
list_models()
name size parameter_size quantization_level modified
1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10
2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33
```
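`resp_process()` pulls these fields out of the JSON body carried by the `httr2_response`. For readers working outside R, a minimal Python sketch of the same extraction; the literal JSON below is an assumed, abridged example of the `/api/generate` response shape, in which the generated text sits in the `response` field:

```python
import json

# Assumed, abridged /api/generate response body; the generated
# text is carried in the "response" field.
raw = '{"model": "llama3", "response": "Lost dog finds way home.", "done": true}'

data = json.loads(raw)
text = data["response"]
print(text)  # Lost dog finds way home.
```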

### Pull/download model
39 changes: 29 additions & 10 deletions README.md
@@ -59,21 +59,40 @@ If it doesn’t work or you don’t have `devtools` installed, please run

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html)
to make HTTP requests to the Ollama server, so many functions in this
library return an `httr2_response` object by default. If the response
object says `Status: 200 OK`, then the request was successful. See
[Notes section](#notes) below for more information.

``` r
library(ollamar)

test_connection() # test connection to Ollama server
# if you see "Ollama local server running", it's working

# generate a response/text based on a prompt; returns an httr2 response by default
resp <- generate("llama3", "tell me a 5-word story")
resp

#' interpret httr2 response object
#' <httr2_response>
#' POST http://127.0.0.1:11434/api/generate # endpoint
#' Status: 200 OK # if successful, status code should be 200 OK
#' Content-Type: application/json
#' Body: In memory (414 bytes)

# get just the text from the response object
resp_process(resp, "text")
# get the text as a tibble dataframe
resp_process(resp, "df")

# alternatively, specify the output type when calling the function initially
txt <- generate("llama3", "tell me a 5-word story", output = "text")

# list available models (models you've pulled/downloaded)
list_models()
name size parameter_size quantization_level modified
1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10
2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33
```

### Pull/download model
1 change: 1 addition & 0 deletions _pkgdown.yml
@@ -11,6 +11,7 @@ reference:
- chat
- list_models
- show
- copy
- delete
- pull
- embed
32 changes: 32 additions & 0 deletions man/copy.Rd

Some generated files are not rendered by default.

7 changes: 7 additions & 0 deletions tests/testthat/test-copy.R
@@ -3,4 +3,11 @@ library(ollamar)

test_that("copy function works with basic input", {
skip_if_not(test_connection()$status_code == 200, "Ollama server not available")

copy("llama3", "llama3-BACKUP")
expect_true(model_avail("llama3-BACKUP"))
delete("llama3-BACKUP")

expect_invisible(copy("wrong_model", "wrong_model_backup"))
expect_false(model_avail("wrong_model_backup"))
})
