From 0e2e5dd76ba8e3bdd87409c60650ac179bbe0a02 Mon Sep 17 00:00:00 2001 From: hauselin Date: Tue, 10 Sep 2024 16:31:22 +0000 Subject: [PATCH] Deploying to gh-pages from @ hauselin/ollama-r@892ef6aae54a5c3538177ea7f2e82130f2755416 🚀 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- CODE_OF_CONDUCT.html | 4 ++-- CONTRIBUTING.html | 6 +++--- pkgdown.yml | 2 +- search.json | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/CODE_OF_CONDUCT.html b/CODE_OF_CONDUCT.html index 6a5d5bb..68e2800 100644 --- a/CODE_OF_CONDUCT.html +++ b/CODE_OF_CONDUCT.html @@ -1,5 +1,5 @@ -Contributor Code of Conduct • ollamar +Contributor code of conduct • ollamar Skip to contents @@ -35,7 +35,7 @@
diff --git a/CONTRIBUTING.html b/CONTRIBUTING.html index ac9755a..9191af4 100644 --- a/CONTRIBUTING.html +++ b/CONTRIBUTING.html @@ -48,13 +48,13 @@

Report issues or seek support

Contribute to ollamar

Before you make a substantial pull request, you should always file an issue and make sure someone from the team agrees that it’s a problem.

-

Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code.Refer to R packages by Hadley Wickham and Jennifer Bryan for R package development guidelines.

+

Fork the repository, create a branch for your changes, and submit a pull request with documented and tested code. Refer to R packages by Hadley Wickham and Jennifer Bryan for R package development guidelines.

  • We use roxygen2, with Markdown syntax, for documentation.
  • We use testthat for testing. Contributions with test cases included are easier to accept; a minimal sketch is shown below.
  • We use continuous integration and deployment through GitHub and GitHub Actions.
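For contributors new to testthat, a minimal test sketch (illustrative only; this test is not taken from this repository's test suite, and it assumes a local Ollama server with the llama3.1 model):

library(testthat)
library(ollamar)

test_that("generate returns a character vector when output = 'text'", {
  skip_if(test_connection()$status_code != 200) # skip when no local Ollama server is running
  txt <- generate("llama3.1", "tell me a 5-word story", output = "text")
  expect_type(txt, "character")
})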

-

Code of Conduct

-

Please note that the ollamar project is released with a Contributor Code of Conduct. By contributing to this project you agree to abide by its terms.

+

Code of conduct

+

Please note that the ollamar project is released with a contributor code of conduct. By contributing to this project you agree to abide by its terms.

diff --git a/pkgdown.yml b/pkgdown.yml index 06651eb..9b7969f 100644 --- a/pkgdown.yml +++ b/pkgdown.yml @@ -3,7 +3,7 @@ pkgdown: 2.1.0 pkgdown_sha: ~ articles: ollamar: ollamar.html -last_built: 2024-09-10T15:49Z +last_built: 2024-09-10T16:31Z urls: reference: https://hauselin.github.io/ollama-r/reference article: https://hauselin.github.io/ollama-r/articles diff --git a/search.json b/search.json index fedccb5..a380f48 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://hauselin.github.io/ollama-r/CODE_OF_CONDUCT.html","id":null,"dir":"","previous_headings":"","what":"Contributor Code of Conduct","title":"Contributor Code of Conduct","text":"contributors maintainers project, pledge respect people contribute reporting issues, posting feature requests, updating documentation, submitting pull requests patches, activities. committed making participation project harassment-free experience everyone, regardless level experience, gender, gender identity expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion. Examples unacceptable behavior participants include use sexual language imagery, derogatory comments personal attacks, trolling, public private harassment, insults, unprofessional conduct. Project maintainers right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct. Project maintainers follow Code Conduct may removed project team. Instances abusive, harassing, otherwise unacceptable behavior may reported opening issue contacting one project maintainers. Code Conduct adapted Contributor Covenant (http://contributor-covenant.org), version 1.0.0, available http://contributor-covenant.org/version/1/0/0/","code":""},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"report-issues-or-seek-support","dir":"","previous_headings":"","what":"Report issues or seek support","title":"Community guidelines and contributing","text":"Open GitHub issue concise description problem, including steps reproduce environment. Check existing/closed issues posting.","code":""},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"contribute-to-ollamar","dir":"","previous_headings":"","what":"Contribute to ollamar","title":"Community guidelines and contributing","text":"make substantial pull request, always file issue make sure someone team agrees ’s problem. Fork repository, create branch changes, submit pull request documented tested code.Refer R packages Hadley Wickham Jennifer Bryan R package development guidelines. use roxygen2, Markdown syntax, documentation. use testthat testing. Contributions test cases included easier accept. - use continuous integration deployment GitHub GitHub actions.","code":""},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Community guidelines and contributing","text":"Please note ollamar project released Contributor Code Conduct. 
contributing project agree abide terms.","code":""},{"path":"https://hauselin.github.io/ollama-r/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2024 ollamar authors Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"installation","dir":"Articles","previous_headings":"","what":"Installation","title":"Using ollamar","text":"Download install Ollama app. macOS Windows preview Linux: curl -fsSL https://ollama.com/install.sh | sh Docker image Open/launch Ollama app start local server. Install either stable latest/development version ollamar. Stable version: latest/development version features/bug fixes (see latest changes ), can install GitHub using install_github function remotes library. doesn’t work don’t remotes library, please run install.packages(\"remotes\") R RStudio running code .","code":"install.packages(\"ollamar\") # install.packages(\"remotes\") # run this line if you don't have the remotes library remotes::install_github(\"hauselin/ollamar\")"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"usage","dir":"Articles","previous_headings":"","what":"Usage","title":"Using ollamar","text":"ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. response object says Status: 200 OK, request successful.","code":"library(ollamar) test_connection() # test connection to Ollama server # if you see \"Ollama local server not running or wrong server,\" Ollama app/server isn't running # generate a response/text based on a prompt; returns an httr2 response by default resp <- generate(\"llama3.1\", \"tell me a 5-word story\") resp #' interpret httr2 response object #' #' POST http://127.0.0.1:11434/api/generate # endpoint #' Status: 200 OK # if successful, status code should be 200 OK #' Content-Type: application/json #' Body: In memory (414 bytes) # get just the text from the response object resp_process(resp, \"text\") # get the text as a tibble dataframe resp_process(resp, \"df\") # alternatively, specify the output type when calling the function initially txt <- generate(\"llama3.1\", \"tell me a 5-word story\", output = \"text\") # list available models (models you've pulled/downloaded) list_models() name size parameter_size quantization_level modified 1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10 2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"pulldownload-model","dir":"Articles","previous_headings":"Usage","what":"Pull/download model","title":"Using ollamar","text":"Download model ollama library (see API doc). 
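A small sketch of downloading a model and then verifying it is available locally (the model name is illustrative; model_avail() is documented later in this file):

pull(\"all-minilm:latest\") # download the model
model_avail(\"all-minilm:latest\") # TRUE once the download has finished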
list models can pull/download, see Ollama library.","code":"pull(\"llama3.1\") # download a model (equivalent bash code: ollama run llama3.1) list_models() # verify you've pulled/downloaded the model"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"delete-model","dir":"Articles","previous_headings":"Usage","what":"Delete model","title":"Using ollamar","text":"Delete model data (see API doc). can see models ’ve downloaded list_models(). download model, specify name model.","code":"list_models() # see the models you've pulled/downloaded delete(\"all-minilm:latest\") # returns a httr2 response object"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"generate-completion","dir":"Articles","previous_headings":"Usage","what":"Generate completion","title":"Using ollamar","text":"Generate response given prompt (see API doc).","code":"resp <- generate(\"llama3.1\", \"Tomorrow is a...\") # return httr2 response object by default resp resp_process(resp, \"text\") # process the response to return text/vector output generate(\"llama3.1\", \"Tomorrow is a...\", output = \"text\") # directly return text/vector output generate(\"llama3.1\", \"Tomorrow is a...\", stream = TRUE) # return httr2 response object and stream output generate(\"llama3.1\", \"Tomorrow is a...\", output = \"df\", stream = TRUE) # image prompt # use a vision/multi-modal model generate(\"benzie/llava-phi-3\", \"What is in the image?\", images = \"image.png\", output = 'text')"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"chat","dir":"Articles","previous_headings":"Usage","what":"Chat","title":"Using ollamar","text":"Generate next message chat/conversation.","code":"messages <- create_message(\"what is the capital of australia\") # default role is user resp <- chat(\"llama3.1\", messages) # default returns httr2 response object resp # resp_process(resp, \"text\") # process the response to return text/vector output # specify output type when calling the function chat(\"llama3.1\", messages, output = \"text\") # text vector chat(\"llama3.1\", messages, output = \"df\") # data frame/tibble chat(\"llama3.1\", messages, output = \"jsonlist\") # list chat(\"llama3.1\", messages, output = \"raw\") # raw string chat(\"llama3.1\", messages, stream = TRUE) # stream output and return httr2 response object # create chat history messages <- create_messages( create_message(\"end all your sentences with !!!\", role = \"system\"), create_message(\"Hello!\"), # default role is user create_message(\"Hi, how can I help you?!!!\", role = \"assistant\"), create_message(\"What is the capital of Australia?\"), create_message(\"Canberra!!!\", role = \"assistant\"), create_message(\"what is your name?\") ) cat(chat(\"llama3.1\", messages, output = \"text\")) # print the formatted output # image prompt messages <- create_message(\"What is in the image?\", images = \"image.png\") # use a vision/multi-modal model chat(\"benzie/llava-phi-3\", messages, output = \"text\")"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"stream-responses","dir":"Articles","previous_headings":"Usage > Chat","what":"Stream responses","title":"Using ollamar","text":"","code":"messages <- create_message(\"Tell me a 1-paragraph story.\") # use \"llama3.1\" model, provide list of messages, return text/vector output, and stream the output chat(\"llama3.1\", messages, output = \"text\", stream = TRUE) # chat(model = \"llama3.1\", messages = messages, output = \"text\", stream = TRUE) 
# same as above"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"format-messages-for-chat","dir":"Articles","previous_headings":"Usage > Chat","what":"Format messages for chat","title":"Using ollamar","text":"Internally, messages represented list many distinct list messages. list/message object two elements: role (can \"user\" \"assistant\" \"system\") content (message text). example shows messages/lists presented. simplify process creating managing messages, ollamar provides functions format prepare messages chat() function. functions also work APIs LLM providers like OpenAI Anthropic. create_messages(): create messages build chat history create_message() creates chat history single message append_message() adds new message end existing messages prepend_message() adds new message beginning existing messages default, inserts message -1 (final) position positive negative indices/positions supported 5 messages, positions 1 (-5), 2 (-4), 3 (-3), 4 (-2), 5 (-1) can convert data.frame, tibble data.table objects list() messages vice versa functions base R popular libraries.","code":"list( # main list containing all the messages list(role = \"user\", content = \"Hello!\"), # first message as a list list(role = \"assistant\", content = \"Hi! How are you?\") # second message as a list ) # create a chat history with one message messages <- create_message(content = \"Hi! How are you? (1ST MESSAGE)\", role = \"assistant\") # or simply, messages <- create_message(\"Hi! How are you?\", \"assistant\") messages[[1]] # get 1st message # append (add to the end) a new message to the existing messages messages <- append_message(\"I'm good. How are you? (2ND MESSAGE)\", \"user\", messages) messages[[1]] # get 1st message messages[[2]] # get 2nd message (newly added message) # prepend (add to the beginning) a new message to the existing messages messages <- prepend_message(\"I'm good. How are you? (0TH MESSAGE)\", \"user\", messages) messages[[1]] # get 0th message (newly added message) messages[[2]] # get 1st message messages[[3]] # get 2nd message # insert a new message at a specific index/position (2nd position in the example below) # by default, the message is inserted at the end of the existing messages (position -1 is the end/default) messages <- insert_message(\"I'm good. How are you? 
(BETWEEN 0 and 1 MESSAGE)\", \"user\", messages, 2) messages[[1]] # get 0th message messages[[2]] # get between 0 and 1 message (newly added message) messages[[3]] # get 1st message messages[[4]] # get 2nd message # delete a message at a specific index/position (2nd position in the example below) messages <- delete_message(messages, 2) # create a chat history with multiple messages messages <- create_messages( create_message(\"You're a knowledgeable tour guide.\", role = \"system\"), create_message(\"What is the capital of Australia?\") # default role is user ) # create a list of messages messages <- create_messages( create_message(\"You're a knowledgeable tour guide.\", role = \"system\"), create_message(\"What is the capital of Australia?\") ) # convert to dataframe df <- dplyr::bind_rows(messages) # with dplyr library df <- data.table::rbindlist(messages) # with data.table library # convert dataframe to list with apply, purrr functions apply(df, 1, as.list) # convert each row to a list with base R apply purrr::transpose(df) # with purrr library"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"embeddings","dir":"Articles","previous_headings":"Usage","what":"Embeddings","title":"Using ollamar","text":"Get vector embedding prompt/text (see API doc). default, embeddings normalized length 1, means following: cosine similarity can computed slightly faster using just dot product cosine similarity Euclidean distance result identical rankings","code":"embed(\"llama3.1\", \"Hello, how are you?\") # don't normalize embeddings embed(\"llama3.1\", \"Hello, how are you?\", normalize = FALSE) # get embeddings for similar prompts e1 <- embed(\"llama3.1\", \"Hello, how are you?\") e2 <- embed(\"llama3.1\", \"Hi, how are you?\") # compute cosine similarity sum(e1 * e2) # not equal to 1 sum(e1 * e1) # 1 (identical vectors/embeddings) # non-normalized embeddings e3 <- embed(\"llama3.1\", \"Hello, how are you?\", normalize = FALSE) e4 <- embed(\"llama3.1\", \"Hi, how are you?\", normalize = FALSE)"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"parse-httr2_response-objects-with-resp_process","dir":"Articles","previous_headings":"Usage","what":"Parse httr2_response objects with resp_process()","title":"Using ollamar","text":"ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. can either parse output resp_process() use output parameter function specify output format. 
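A compact sketch of the two equivalent routes (it previews the fuller example in the code below and assumes a running local Ollama server):

resp <- list_models(output = \"resp\") # route 1: keep the raw response, parse it later
resp_process(resp, \"df\")
list_models(output = \"df\") # route 2: ask for the parsed format up front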
Generally, output parameter can one \"df\", \"jsonlist\", \"raw\", \"resp\", \"text\".","code":"resp <- list_models(output = \"resp\") # returns a httr2 response object # <httr2_response> # GET http://127.0.0.1:11434/api/tags # Status: 200 OK # Content-Type: application/json # Body: In memory (5401 bytes) # process the httr2 response object with the resp_process() function resp_process(resp, \"df\") # or list_models(output = \"df\") resp_process(resp, \"jsonlist\") # list # or list_models(output = \"jsonlist\") resp_process(resp, \"raw\") # raw string # or list_models(output = \"raw\") resp_process(resp, \"resp\") # returns the input httr2 response object # or list_models() or list_models(\"resp\") resp_process(resp, \"text\") # text vector # or list_models(\"text\")"},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"parallel-requests","dir":"Articles","previous_headings":"Advanced usage","what":"Parallel requests","title":"Using ollamar","text":"generate() chat() endpoints/functions, can specify output = 'req' function functions return httr2_request objects instead httr2_response objects. multiple httr2_request objects list, can make parallel requests req_perform_parallel function httr2 library. See httr2 documentation details. Example sentiment analysis parallel requests generate() function Example sentiment analysis parallel requests chat() function","code":"prompt <- \"Tell me a 10-word story\" req <- generate(\"llama3.1\", prompt, output = \"req\") # returns a httr2_request object # <httr2_request> # POST http://127.0.0.1:11434/api/generate # Headers: # • content_type: 'application/json' # • accept: 'application/json' # • user_agent: 'ollama-r/1.1.1 (aarch64-apple-darwin20) R/4.4.0' # Body: json encoded data library(httr2) prompt <- \"Tell me a 5-word story\" # create 5 httr2_request objects that generate a response to the same prompt reqs <- lapply(1:5, function(r) generate(\"llama3.1\", prompt, output = \"req\")) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses sapply(resps, resp_process, \"text\") # get responses as text # [1] \"She found him in Paris.\" \"She found the key upstairs.\" # [3] \"She found her long-lost sister.\" \"She found love on Mars.\" # [5] \"She found the diamond ring.\" library(httr2) library(glue) library(dplyr) # text to classify texts <- c('I love this product', 'I hate this product', 'I am neutral about this product') # create httr2_request objects for each text, using the same system prompt reqs <- lapply(texts, function(text) { prompt <- glue(\"Your only task/role is to evaluate the sentiment of product reviews, and your response should be one of the following: 'positive', 'negative', or 'other'. Product review: {text}\") generate(\"llama3.1\", prompt, output = \"req\") }) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses sapply(resps, resp_process, \"text\") # get responses as text # [1] \"Positive\" \"Negative.\" # [3] \"'neutral' translates to... 'other'.\"
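# a hedged aside, not from the original vignette: req_perform_parallel() can
# also keep going when individual requests fail (assumption: the on_error
# argument of httr2::req_perform_parallel; check the httr2 docs for your version)
# resps <- req_perform_parallel(reqs, on_error = \"continue\")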
library(httr2) library(dplyr) # text to classify texts <- c('I love this product', 'I hate this product', 'I am neutral about this product') # create system prompt chat_history <- create_message(\"Your only task/role is to evaluate the sentiment of product reviews provided by the user. Your response should simply be 'positive', 'negative', or 'other'.\", \"system\") # create httr2_request objects for each text, using the same system prompt reqs <- lapply(texts, function(text) { messages <- append_message(text, \"user\", chat_history) chat(\"llama3.1\", messages, output = \"req\") }) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses bind_rows(lapply(resps, resp_process, \"df\")) # get responses as dataframes # # A tibble: 3 × 4 # model role content created_at # <chr> <chr> <chr> <chr> # 1 llama3.1 assistant Positive 2024-08-05T17:54:27.758618Z # 2 llama3.1 assistant negative 2024-08-05T17:54:27.657525Z # 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z"},{"path":"https://hauselin.github.io/ollama-r/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Hause Lin. Author, maintainer, copyright holder. Tawab Safi. Author, contributor.","code":""},{"path":"https://hauselin.github.io/ollama-r/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Lin, H., & Safi, T. (2024). ollamar: R package running large language models. PsyArXiv. https://doi.org/10.31234/osf.io/zsrg5","code":"@Article{, title = {ollamar: An R package for running large language models}, author = {Hause Lin and Tawab Safi}, journal = {PsyArXiv}, year = {2024}, month = {aug}, publisher = {OSF}, doi = {10.31234/osf.io/zsrg5}, url = {https://doi.org/10.31234/osf.io/zsrg5}, }"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"ollama-r-library","dir":"","previous_headings":"","what":"Ollama R Library","title":"Ollama R Library","text":"Ollama R library easiest way integrate R Ollama, lets run language models locally machine. Main site: https://hauselin.github.io/ollama-r/ library also makes easy work data structures (e.g., conversational/chat histories) standard different LLMs (provided OpenAI Anthropic). also lets specify different output formats (e.g., dataframes, text/vector, lists) best suit need, allowing easy integration libraries/tools parallelization via httr2 library. use R library, ensure Ollama app installed. Ollama can use GPUs accelerating LLM inference. See Ollama GPU documentation information. See Ollama’s GitHub page information. library uses Ollama REST API (see documentation details). Note: least 8 GB RAM available run 7B models, 16 GB run 13B models, 32 GB run 33B models.","code":""},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"ollama-r-vs-ollama-pythonjs","dir":"","previous_headings":"","what":"Ollama R vs Ollama Python/JS","title":"Ollama R Library","text":"library inspired official Ollama Python Ollama JavaScript libraries. ’re coming Python JavaScript, feel right home. Alternatively, plan use Ollama Python JavaScript, using R library help understand Python/JavaScript libraries well.","code":""},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Ollama R Library","text":"Download install Ollama app. macOS Windows preview Linux: curl -fsSL https://ollama.com/install.sh | sh Docker image Open/launch Ollama app start local server. Install either stable latest/development version ollamar. Stable version: latest/development version features/bug fixes (see latest changes ), can install GitHub using install_github function remotes library. 
doesn’t work don’t remotes library, please run install.packages(\"remotes\") R RStudio running code .","code":"install.packages(\"ollamar\") # install.packages(\"remotes\") # run this line if you don't have the remotes library remotes::install_github(\"hauselin/ollamar\")"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"example-usage","dir":"","previous_headings":"","what":"Example usage","title":"Ollama R Library","text":"basic demonstration use library. details, see getting started vignette. ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. response object says Status: 200 OK, request successful.","code":"library(ollamar) test_connection() # test connection to Ollama server # if you see \"Ollama local server not running or wrong server,\" Ollama app/server isn't running # download a model pull(\"llama3.1\") # download a model (equivalent bash code: ollama run llama3.1) # generate a response/text based on a prompt; returns an httr2 response by default resp <- generate(\"llama3.1\", \"tell me a 5-word story\") resp #' interpret httr2 response object #' #' POST http://127.0.0.1:11434/api/generate # endpoint #' Status: 200 OK # if successful, status code should be 200 OK #' Content-Type: application/json #' Body: In memory (414 bytes) # get just the text from the response object resp_process(resp, \"text\") # get the text as a tibble dataframe resp_process(resp, \"df\") # alternatively, specify the output type when calling the function initially txt <- generate(\"llama3.1\", \"tell me a 5-word story\", output = \"text\") # list available models (models you've pulled/downloaded) list_models() name size parameter_size quantization_level modified 1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10 2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"citing-ollamar","dir":"","previous_headings":"","what":"Citing ollamar","title":"Ollama R Library","text":"use library, please cite paper using following BibTeX entry:","code":"@article{Lin2024Aug, author = {Lin, Hause and Safi, Tawab}, title = {{ollamar: An R package for running large language models}}, journal = {PsyArXiv}, year = {2024}, month = aug, publisher = {OSF}, doi = {10.31234/osf.io/zsrg5}, url = {https://doi.org/10.31234/osf.io/zsrg5} }"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Append message to a list — append_message","title":"Append message to a list — append_message","text":"Appends message (add end list) list messages. role content converted list appended input list.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Append message to a list — append_message","text":"","code":"append_message(content, role = \"user\", x = NULL, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Append message to a list — append_message","text":"content content message. role role message. Can \"user\", \"system\", \"assistant\". Default \"user\". x list messages. Default NULL. ... 
Additional arguments images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Append message to a list — append_message","text":"list messages new message appended.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Append message to a list — append_message","text":"","code":"append_message(\"Hello\", \"user\") #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"Hello\" #> #> append_message(\"Always respond nicely\", \"system\") #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Always respond nicely\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a chat completion with message history — chat","title":"Generate a chat completion with message history — chat","text":"Generate chat completion message history","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a chat completion with message history — chat","text":"","code":"chat( model, messages, tools = list(), stream = FALSE, keep_alive = \"5m\", output = c(\"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\"), endpoint = \"/api/chat\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a chat completion with message history — chat","text":"model character string model name \"llama3\". messages list list messages model (see examples ). tools Tools model use supported. Requires stream = FALSE. Default empty list. stream Enable response streaming. Default FALSE. keep_alive duration keep connection alive. Default \"5m\". output output format. Default \"resp\". options \"jsonlist\", \"raw\", \"df\", \"text\", \"req\" (httr2_request object). endpoint endpoint chat model. Default \"/api/chat\". host base URL use. Default NULL, uses Ollama's default base URL. ... 
Additional options pass model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a chat completion with message history — chat","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate a chat completion with message history — chat","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a chat completion with message history — chat","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # one message messages <- list( list(role = \"user\", content = \"How are you doing?\") ) chat(\"llama3\", messages) # returns response by default chat(\"llama3\", messages, output = \"text\") # returns text/vector chat(\"llama3\", messages, temperature = 2.8) # additional options chat(\"llama3\", messages, stream = TRUE) # stream response chat(\"llama3\", messages, output = \"df\", stream = TRUE) # stream and return dataframe # multiple messages messages <- list( list(role = \"user\", content = \"Hello!\"), list(role = \"assistant\", content = \"Hi! How are you?\"), list(role = \"user\", content = \"Who is the prime minister of the uk?\"), list(role = \"assistant\", content = \"Rishi Sunak\"), list(role = \"user\", content = \"List all the previous messages.\") ) chat(\"llama3\", messages, stream = TRUE) # image image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") messages <- list( list(role = \"user\", content = \"What is in the image?\", images = image_path) ) chat(\"benzie/llava-phi-3\", messages, output = 'text') }"},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if an option is valid — check_option_valid","title":"Check if an option is valid — check_option_valid","text":"Check option valid","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if an option is valid — check_option_valid","text":"","code":"check_option_valid(opt)"},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if an option is valid — check_option_valid","text":"opt option (character) check.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if an option is valid — check_option_valid","text":"Returns TRUE option valid, FALSE otherwise.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if an option is valid — check_option_valid","text":"","code":"check_option_valid(\"mirostat\") #> [1] TRUE check_option_valid(\"invalid_option\") #> [1] FALSE"},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if a vector of options are valid — check_options","title":"Check if a vector of options 
are valid — check_options","text":"Check vector options valid","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if a vector of options are valid — check_options","text":"","code":"check_options(opts = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if a vector of options are valid — check_options","text":"opts vector options check.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if a vector of options are valid — check_options","text":"Returns list two elements: valid_options invalid_options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if a vector of options are valid — check_options","text":"","code":"check_options(c(\"mirostat\", \"invalid_option\")) #> $valid_options #> [1] \"mirostat\" #> #> $invalid_options #> [1] \"invalid_option\" #> check_options(c(\"mirostat\", \"num_predict\")) #> $valid_options #> [1] \"mirostat\" \"num_predict\" #> #> $invalid_options #> character(0) #>"},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":null,"dir":"Reference","previous_headings":"","what":"Copy a model — copy","title":"Copy a model — copy","text":"Creates model another name existing model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Copy a model — copy","text":"","code":"copy(source, destination, endpoint = \"/api/copy\", host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Copy a model — copy","text":"source name model copy. destination name new model. endpoint endpoint copy model. Default \"/api/copy\". host base URL use. 
Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Copy a model — copy","text":"httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Copy a model — copy","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Copy a model — copy","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 copy(\"llama3\", \"llama3_copy\") delete(\"llama3_copy\") # delete the model that was just copied }"},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a model from a Modelfile — create","title":"Create a model from a Modelfile — create","text":"recommended set modelfile content Modelfile rather just set path.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a model from a Modelfile — create","text":"","code":"create( name, modelfile = NULL, stream = FALSE, path = NULL, endpoint = \"/api/create\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a model from a Modelfile — create","text":"name Name model create. modelfile Contents Modelfile character string. Default NULL. stream Enable response streaming. Default FALSE. path path Modelfile. Default NULL. endpoint endpoint create model. Default \"/api/create\". host base URL use. 
Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a model from a Modelfile — create","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create a model from a Modelfile — create","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a model from a Modelfile — create","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 create(\"mario\", \"FROM llama3\\nSYSTEM You are mario from Super Mario Bros.\") generate(\"mario\", \"who are you?\", output = \"text\") # model should say it's Mario delete(\"mario\") # delete the model created above }"},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a message — create_message","title":"Create a message — create_message","text":"Create message","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a message — create_message","text":"","code":"create_message(content, role = \"user\", ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a message — create_message","text":"content content message. role role message. Can \"user\", \"system\", \"assistant\". Default \"user\". ... Additional arguments images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a message — create_message","text":"list messages.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a message — create_message","text":"","code":"create_message(\"Hello\", \"user\") #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"Hello\" #> #> create_message(\"Always respond nicely\", \"system\") #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Always respond nicely\" #> #> create_message(\"I am here to help\", \"assistant\") #> [[1]] #> [[1]]$role #> [1] \"assistant\" #> #> [[1]]$content #> [1] \"I am here to help\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a list of messages — create_messages","title":"Create a list of messages — create_messages","text":"Create messages chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a list of messages — create_messages","text":"","code":"create_messages(...)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a list of messages — create_messages","text":"... 
list messages, list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a list of messages — create_messages","text":"list messages, list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a list of messages — create_messages","text":"","code":"messages <- create_messages( create_message(\"be nice\", \"system\"), create_message(\"tell me a 3-word joke\") ) messages <- create_messages( list(role = \"system\", content = \"be nice\"), list(role = \"user\", content = \"tell me a 3-word joke\") )"},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a httr2 request object — create_request","title":"Create a httr2 request object — create_request","text":"Creates httr2 request object base URL, headers endpoint. Used functions package intended used directly.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a httr2 request object — create_request","text":"","code":"create_request(endpoint, host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a httr2 request object — create_request","text":"endpoint endpoint create request host base URL use. Default NULL, uses http://127.0.0.1:11434","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a httr2 request object — create_request","text":"httr2 request object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a httr2 request object — create_request","text":"","code":"create_request(\"/api/tags\") #> <httr2_request> #> GET http://127.0.0.1:11434/api/tags #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty create_request(\"/api/chat\") #> <httr2_request> #> GET http://127.0.0.1:11434/api/chat #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty create_request(\"/api/embeddings\") #> <httr2_request> #> GET http://127.0.0.1:11434/api/embeddings #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty"},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":null,"dir":"Reference","previous_headings":"","what":"Delete a model and its data — delete","title":"Delete a model and its data — delete","text":"Delete model local machine downloaded using pull() function. 
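A guarded sketch (the pattern and model name are illustrative):

models <- list_models(output = \"df\") # data frame with a name column
for (m in models$name[grepl(\"all-minilm\", models$name)]) delete(m) # delete only matching models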
see models available, use list_models() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Delete a model and its data — delete","text":"","code":"delete(name, endpoint = \"/api/delete\", host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Delete a model and its data — delete","text":"name character string model name \"llama3\". endpoint endpoint delete model. Default \"/api/delete\". host base URL use. Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Delete a model and its data — delete","text":"httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Delete a model and its data — delete","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Delete a model and its data — delete","text":"","code":"if (FALSE) { # \\dontrun{ delete(\"llama3\") } # }"},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Delete a message in a specified position from a list — delete_message","title":"Delete a message in a specified position from a list — delete_message","text":"Delete message using positive negative positions/indices. Negative positions/indices can used refer elements/messages end sequence.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Delete a message in a specified position from a list — delete_message","text":"","code":"delete_message(x, position = -1)"},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Delete a message in a specified position from a list — delete_message","text":"x list messages. 
position position message delete.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Delete a message in a specified position from a list — delete_message","text":"list messages message specified position removed.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Delete a message in a specified position from a list — delete_message","text":"","code":"messages <- list( list(role = \"system\", content = \"Be friendly\"), list(role = \"user\", content = \"How are you?\") ) delete_message(messages, 1) # delete first message #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"How are you?\" #> #> delete_message(messages, -2) # same as above (delete first message) #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"How are you?\" #> #> delete_message(messages, 2) # delete second message #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> delete_message(messages, -1) # same as above (delete second message) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate embedding for inputs — embed","title":"Generate embedding for inputs — embed","text":"Supersedes embeddings() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate embedding for inputs — embed","text":"","code":"embed( model, input, truncate = TRUE, normalize = TRUE, keep_alive = \"5m\", endpoint = \"/api/embed\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate embedding for inputs — embed","text":"model character string model name \"llama3\". input vector characters want get embeddings . truncate Truncates end input fit within context length. Returns error FALSE context length exceeded. Defaults TRUE. normalize Normalize vector length 1. Default TRUE. keep_alive time keep connection alive. Default \"5m\" (5 minutes). endpoint endpoint get vector embedding. Default \"/api/embed\". host base URL use. Default NULL, uses Ollama's default base URL. ... Additional options pass model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate embedding for inputs — embed","text":"numeric matrix embedding. column embedding one input.","code":""},
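A short sketch of using the matrix return value (model name taken from the examples below; assumes the model has been pulled):

emb <- embed(\"nomic-embed-text:latest\", c(\"Good bye\", \"Bye\"))
dim(emb) # rows are embedding dimensions; one column per input
sum(emb[, 1] * emb[, 2]) # cosine similarity via dot product, since embeddings are normalized by default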
{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate embedding for inputs — embed","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate embedding for inputs — embed","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 embed(\"nomic-embed-text:latest\", \"The quick brown fox jumps over the lazy dog.\") # pass multiple inputs embed(\"nomic-embed-text:latest\", c(\"Good bye\", \"Bye\", \"See you.\")) # pass model options to the model embed(\"nomic-embed-text:latest\", \"Hello!\", temperature = 0.1, num_predict = 3) }"},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"function deprecated time superseded embed(). See embed() details.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"","code":"embeddings( model, prompt, normalize = TRUE, keep_alive = \"5m\", endpoint = \"/api/embeddings\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"model character string model name \"llama3\". prompt character string prompt want get vector embedding . normalize Normalize vector length 1. Default TRUE. keep_alive time keep connection alive. Default \"5m\" (5 minutes). endpoint endpoint get vector embedding. Default \"/api/embeddings\". host base URL use. Default NULL, uses Ollama's default base URL. ... 
Additional options pass model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"numeric vector embedding.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 embeddings(\"nomic-embed-text:latest\", \"The quick brown fox jumps over the lazy dog.\") # pass model options to the model embeddings(\"nomic-embed-text:latest\", \"Hello!\", temperature = 0.1, num_predict = 3) }"},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Encode images in messages to base64 format — encode_images_in_messages","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"Encode images messages base64 format","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"","code":"encode_images_in_messages(messages)"},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"messages list messages, list class. 
Generally used chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"list messages images encoded base64 format.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"","code":"image <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") message <- create_message(content = \"what is in the image?\", images = image) message_updated <- encode_images_in_messages(message)"},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a response for a given prompt — generate","title":"Generate a response for a given prompt — generate","text":"Generate response given prompt","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a response for a given prompt — generate","text":"","code":"generate( model, prompt, suffix = \"\", images = \"\", system = \"\", template = \"\", context = list(), stream = FALSE, raw = FALSE, keep_alive = \"5m\", output = c(\"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\"), endpoint = \"/api/generate\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a response for a given prompt — generate","text":"model character string model name \"llama3\". prompt character string prompt like \"sky ...\" suffix character string model response. Default \"\". images path image file include prompt. Default \"\". system character string system prompt (overrides defined Modelfile). Default \"\". template character string prompt template (overrides defined Modelfile). Default \"\". context list context previous response include previous conversation prompt. Default empty list. stream Enable response streaming. Default FALSE. raw TRUE, formatting applied prompt. may choose use raw parameter specifying full templated prompt request API. Default FALSE. keep_alive time keep connection alive. Default \"5m\" (5 minutes). output character vector output format. Default \"resp\". Options \"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\" (httr2_request object). endpoint endpoint generate completion. Default \"/api/generate\". host base URL use. Default NULL, uses Ollama's default base URL. ... 
Additional options pass model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a response for a given prompt — generate","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate a response for a given prompt — generate","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a response for a given prompt — generate","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # text prompt generate(\"llama3\", \"The sky is...\", stream = FALSE, output = \"df\") # stream and increase temperature generate(\"llama3\", \"The sky is...\", stream = TRUE, output = \"text\", temperature = 2.0) # image prompt # something like \"image1.png\" image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") # use vision or multimodal model such as https://ollama.com/benzie/llava-phi-3 generate(\"benzie/llava-phi-3:latest\", \"What is in the image?\", images = image_path, output = \"text\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":null,"dir":"Reference","previous_headings":"","what":"Read image file and encode it to base64 — image_encode_base64","title":"Read image file and encode it to base64 — image_encode_base64","text":"Read image file encode base64","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Read image file and encode it to base64 — image_encode_base64","text":"","code":"image_encode_base64(image_path)"},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Read image file and encode it to base64 — image_encode_base64","text":"image_path path image file.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Read image file and encode it to base64 — image_encode_base64","text":"base64 encoded string.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Read image file and encode it to base64 — image_encode_base64","text":"","code":"image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") substr(image_encode_base64(image_path), 1, 5) # truncate output #> [1] \"iVBOR\""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Insert message into a list at a specified position — insert_message","title":"Insert message into a list at a specified position — insert_message","text":"Inserts message specified position list messages. 
role content converted list inserted input list given position.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Insert message into a list at a specified position — insert_message","text":"","code":"insert_message(content, role = \"user\", x = NULL, position = -1, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Insert message into a list at a specified position — insert_message","text":"content content message. role role message. Can \"user\", \"system\", \"assistant\". Default \"user\". x list messages. Default NULL. position position insert new message. Default -1 (end list). ... Additional arguments images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Insert message into a list at a specified position — insert_message","text":"list messages new message inserted specified position.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Insert message into a list at a specified position — insert_message","text":"","code":"messages <- list( list(role = \"system\", content = \"Be friendly\"), list(role = \"user\", content = \"How are you?\") ) insert_message(\"INSERT MESSAGE AT THE END\", \"user\", messages) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> #> [[2]] #> [[2]]$role #> [1] \"user\" #> #> [[2]]$content #> [1] \"How are you?\" #> #> #> [[3]] #> [[3]]$role #> [1] \"user\" #> #> [[3]]$content #> [1] \"INSERT MESSAGE AT THE END\" #> #> insert_message(\"INSERT MESSAGE AT THE BEGINNING\", \"user\", messages, 2) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> #> [[2]] #> [[2]]$role #> [1] \"user\" #> #> [[2]]$content #> [1] \"INSERT MESSAGE AT THE BEGINNING\" #> #> #> [[3]] #> [[3]]$role #> [1] \"user\" #> #> [[3]]$content #> [1] \"How are you?\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":null,"dir":"Reference","previous_headings":"","what":"List models that are available locally — list_models","title":"List models that are available locally — list_models","text":"List models available locally","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List models that are available locally — list_models","text":"","code":"list_models( output = c(\"df\", \"resp\", \"jsonlist\", \"raw\", \"text\"), endpoint = \"/api/tags\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List models that are available locally — list_models","text":"output output format. Default \"df\". options \"resp\", \"jsonlist\", \"raw\", \"text\". endpoint endpoint get models. Default \"/api/tags\". host base URL use. 
Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"List models that are available locally — list_models","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"List models that are available locally — list_models","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"List models that are available locally — list_models","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 list_models() # returns dataframe list_models(\"df\") # returns dataframe list_models(\"resp\") # httr2 response object list_models(\"jsonlist\") list_models(\"raw\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if model is available locally — model_avail","title":"Check if model is available locally — model_avail","text":"Check model available locally","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if model is available locally — model_avail","text":"","code":"model_avail(model)"},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if model is available locally — model_avail","text":"model character string model name \"llama3\".","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if model is available locally — model_avail","text":"logical value indicating model exists.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if model is available locally — model_avail","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 model_avail(\"codegemma:7b\") model_avail(\"abc\") model_avail(\"llama3\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Model options — model_options","title":"Model options — model_options","text":"Model options","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model options — model_options","text":"","code":"model_options"},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Model options — model_options","text":"object class list length 13.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":null,"dir":"Reference","previous_headings":"","what":"Chat with a model in real-time in R console — ohelp","title":"Chat with a model in real-time in R console — ohelp","text":"Chat model real-time R 
console","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Chat with a model in real-time in R console — ohelp","text":"","code":"ohelp(model = \"codegemma:7b\", ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Chat with a model in real-time in R console — ohelp","text":"model character string model name \"llama3\". Defaults \"codegemma:7b\" decent coding model 2024-07-27. ... Additional options. options currently available time.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Chat with a model in real-time in R console — ohelp","text":"return anything. prints conversation console.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Chat with a model in real-time in R console — ohelp","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 ohelp(first_prompt = \"quit\") # regular usage: ohelp() }"},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":null,"dir":"Reference","previous_headings":"","what":"Package configuration — package_config","title":"Package configuration — package_config","text":"Package configuration","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Package configuration — package_config","text":"","code":"package_config"},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Package configuration — package_config","text":"object class list length 3.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepend message to a list — prepend_message","title":"Prepend message to a list — prepend_message","text":"Prepends message (add beginning list) list messages. role content converted list prepended input list.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepend message to a list — prepend_message","text":"","code":"prepend_message(content, role = \"user\", x = NULL, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepend message to a list — prepend_message","text":"content content message. role role message. Can \"user\", \"system\", \"assistant\". x list messages. Default NULL. ... 
Additional arguments images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepend message to a list — prepend_message","text":"list messages new message prepended.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Prepend message to a list — prepend_message","text":"","code":"prepend_message(\"user\", \"Hello\") #> [[1]] #> [[1]]$role #> [1] \"Hello\" #> #> [[1]]$content #> [1] \"user\" #> #> prepend_message(\"system\", \"Always respond nicely\") #> [[1]] #> [[1]]$role #> [1] \"Always respond nicely\" #> #> [[1]]$content #> [1] \"system\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":null,"dir":"Reference","previous_headings":"","what":"List models that are currently loaded into memory — ps","title":"List models that are currently loaded into memory — ps","text":"List models currently loaded memory","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List models that are currently loaded into memory — ps","text":"","code":"ps( output = c(\"df\", \"resp\", \"jsonlist\", \"raw\", \"text\"), endpoint = \"/api/ps\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List models that are currently loaded into memory — ps","text":"output output format. Default \"df\". Supported formats \"df\", \"resp\", \"jsonlist\", \"raw\", \"text\". endpoint endpoint list running models. Default \"/api/ps\". host base URL use. Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"List models that are currently loaded into memory — ps","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"List models that are currently loaded into memory — ps","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"List models that are currently loaded into memory — ps","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 ps(\"text\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":null,"dir":"Reference","previous_headings":"","what":"Pull/download a model from the Ollama library — pull","title":"Pull/download a model from the Ollama library — pull","text":"See https://ollama.com/library list available models. Use list_models() function get list models already downloaded/installed machine. 
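A minimal sketch (an added illustration, assuming a running Ollama server) combining the documented model_avail() and pull() helpers so a model is only downloaded when it is not yet installed locally:

library(ollamar)
if (!model_avail("llama3")) {  # model_avail() returns TRUE/FALSE
  pull("llama3")               # returns an httr2 response object
}
list_models()                  # the model should now appear among the local models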
Cancelled pulls resumed left off, multiple calls share download progress.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pull/download a model from the Ollama library — pull","text":"","code":"pull( name, stream = FALSE, insecure = FALSE, endpoint = \"/api/pull\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pull/download a model from the Ollama library — pull","text":"name character string model name download/pull, \"llama3\". stream Enable response streaming. Default FALSE. insecure Allow insecure connections use pulling library development. Default FALSE. endpoint endpoint pull model. Default \"/api/pull\". host base URL use. Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pull/download a model from the Ollama library — pull","text":"httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Pull/download a model from the Ollama library — pull","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Pull/download a model from the Ollama library — pull","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 pull(\"llama3\") pull(\"all-minilm\", stream = FALSE) }"},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":null,"dir":"Reference","previous_headings":"","what":"Push or upload a model to a model library — push","title":"Push or upload a model to a model library — push","text":"Push upload model Ollama model library. Requires registering ollama.ai adding public key first.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Push or upload a model to a model library — push","text":"","code":"push( name, insecure = FALSE, stream = FALSE, output = c(\"resp\", \"jsonlist\", \"raw\", \"text\", \"df\"), endpoint = \"/api/push\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Push or upload a model to a model library — push","text":"name character string model name upload, form <namespace>/<model>:<tag>. insecure Allow insecure connections. use pushing library development. Default FALSE. stream Enable response streaming. Default FALSE. output output format. Default \"resp\". options \"jsonlist\", \"raw\", \"text\", \"df\". endpoint endpoint push model. Default \"/api/push\". host base URL use. 
Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Push or upload a model to a model library — push","text":"httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Push or upload a model to a model library — push","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Push or upload a model to a model library — push","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 push(\"mattw/pygmalion:latest\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":null,"dir":"Reference","previous_headings":"","what":"Process httr2 response object — resp_process","title":"Process httr2 response object — resp_process","text":"Process httr2 response object","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Process httr2 response object — resp_process","text":"","code":"resp_process(resp, output = c(\"df\", \"jsonlist\", \"raw\", \"resp\", \"text\"))"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Process httr2 response object — resp_process","text":"resp httr2 response object. output output format. Default \"df\". options \"jsonlist\", \"raw\", \"resp\" (httr2 response object), \"text\"","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Process httr2 response object — resp_process","text":"data frame, json list, raw httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Process httr2 response object — resp_process","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 resp <- list_models(\"resp\") resp_process(resp, \"df\") # parse response to dataframe/tibble resp_process(resp, \"jsonlist\") # parse response to list resp_process(resp, \"raw\") # parse response to raw string resp_process(resp, \"resp\") # return input response object resp_process(resp, \"text\") # return text/character vector }"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process_stream.html","id":null,"dir":"Reference","previous_headings":"","what":"Process httr2 response object for streaming — resp_process_stream","title":"Process httr2 response object for streaming — resp_process_stream","text":"Process httr2 response object streaming","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process_stream.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Process httr2 response object for streaming — resp_process_stream","text":"","code":"resp_process_stream(resp, output)"},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Search for options based on a query — search_options","title":"Search for options based on a query — 
search_options","text":"Search options based query","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Search for options based on a query — search_options","text":"","code":"search_options(query)"},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Search for options based on a query — search_options","text":"query query (character) search options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Search for options based on a query — search_options","text":"Returns list matching options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Search for options based on a query — search_options","text":"","code":"search_options(\"learning rate\") #> Matching options: mirostat_eta #> $mirostat_eta #> $mirostat_eta$description #> [1] \"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.\" #> #> $mirostat_eta$default_value #> [1] 0.1 #> #> search_options(\"tokens\") #> Matching options: tfs_z, num_predict #> $tfs_z #> $tfs_z$description #> [1] \"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.\" #> #> $tfs_z$default_value #> [1] 1 #> #> #> $num_predict #> $num_predict$description #> [1] \"Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context)\" #> #> $num_predict$default_value #> [1] 128 #> #> search_options(\"invalid query\") #> No matching options found #> #> list()"},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":null,"dir":"Reference","previous_headings":"","what":"Show model information — show","title":"Show model information — show","text":"Model information includes details, modelfile, template, parameters, license, system prompt.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Show model information — show","text":"","code":"show( name, verbose = FALSE, output = c(\"jsonlist\", \"resp\", \"raw\"), endpoint = \"/api/show\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Show model information — show","text":"name Name model show verbose Returns full data verbose response fields. Default FALSE. output output format. Default \"jsonlist\". options \"resp\", \"raw\". endpoint endpoint show model. Default \"/api/show\". host base URL use. 
Default NULL, uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Show model information — show","text":"response format specified output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Show model information — show","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Show model information — show","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # show(\"llama3\") # returns jsonlist show(\"llama3\", output = \"resp\") # returns response object }"},{"path":"https://hauselin.github.io/ollama-r/reference/stream_handler.html","id":null,"dir":"Reference","previous_headings":"","what":"Stream handler helper function — stream_handler","title":"Stream handler helper function — stream_handler","text":"Function handle streaming.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/stream_handler.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Stream handler helper function — stream_handler","text":"","code":"stream_handler(x, env, endpoint)"},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":null,"dir":"Reference","previous_headings":"","what":"Test connection to Ollama server — test_connection","title":"Test connection to Ollama server — test_connection","text":"test_connection() tests whether Ollama server running .","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test connection to Ollama server — test_connection","text":"","code":"test_connection(url = \"http://localhost:11434\")"},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test connection to Ollama server — test_connection","text":"url URL Ollama server. Default http://localhost:11434","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test connection to Ollama server — test_connection","text":"httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test connection to Ollama server — test_connection","text":"","code":"test_connection() #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. Visit https://ollama.com or https://github.com/ollama/ollama #> #> GET http://localhost:11434 #> Body: empty test_connection(\"http://localhost:11434\") # default url #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. Visit https://ollama.com or https://github.com/ollama/ollama #> #> GET http://localhost:11434 #> Body: empty test_connection(\"http://127.0.0.1:11434\") #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. 
Visit https://ollama.com or https://github.com/ollama/ollama #> #> GET http://127.0.0.1:11434 #> Body: empty"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a message — validate_message","title":"Validate a message — validate_message","text":"Validate message ensure required fields correct data types chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a message — validate_message","text":"","code":"validate_message(message)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a message — validate_message","text":"message list single message list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate a message — validate_message","text":"TRUE message valid, otherwise error thrown.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate a message — validate_message","text":"","code":"validate_message(create_message(\"Hello\")) #> [1] TRUE validate_message(list(role = \"user\", content = \"Hello\")) #> [1] TRUE"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a list of messages — validate_messages","title":"Validate a list of messages — validate_messages","text":"Validate list messages ensure required fields correct data types chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a list of messages — validate_messages","text":"","code":"validate_messages(messages)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a list of messages — validate_messages","text":"messages list messages, list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate a list of messages — validate_messages","text":"TRUE messages valid, otherwise warning messages printed FALSE returned.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate a list of messages — validate_messages","text":"","code":"validate_messages(create_messages( create_message(\"Be friendly\", \"system\"), create_message(\"Hello\") )) #> [1] TRUE"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate additional options or parameters provided to the API call — validate_options","title":"Validate additional options or parameters provided to the API call — validate_options","text":"Validate additional options parameters provided API 
call","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate additional options or parameters provided to the API call — validate_options","text":"","code":"validate_options(...)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate additional options or parameters provided to the API call — validate_options","text":"... Additional options parameters provided API call","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate additional options or parameters provided to the API call — validate_options","text":"TRUE additional options valid, FALSE otherwise","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate additional options or parameters provided to the API call — validate_options","text":"","code":"validate_options(mirostat = 1, mirostat_eta = 0.2, num_ctx = 1024) #> [1] TRUE validate_options(mirostat = 1, mirostat_eta = 0.2, invalid_opt = 1024) #> Valid options: mirostat, mirostat_eta #> Invalid options: invalid_opt #> See available options with check_options() or model_options. #> See also https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter #> [1] FALSE"},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-121","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.1","title":"ollamar 1.2.1","text":"CRAN release: 2024-08-25 generate() chat() accept multiple images prompts/messages. Add functions validate messages chat() function: validate_message(), validate_messages(). Add encode_images_in_messages() encode images messages chat() function. Add create_messages() create messages easily. Helper functions managing messages accept ... parameter pass additional options. Update README docs reflect changes.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-120","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.0","title":"ollamar 1.2.0","text":"CRAN release: 2024-08-17 functions calling API endpoints endpoint parameter. functions calling API endpoints ... parameter pass additional model options API. functions calling API endpoints host parameter specify host URL. Default NULL, uses default Ollama URL. Add req output format generate() chat(). Add new functions calling APIs: create(), show(), copy(), delete(), push(), embed() (supercedes embeddings()), ps(). Add helper functions manipulate chat/conversation history chat() function (APIs like OpenAI): create_message(), append_message(), prepend_message(), delete_message(), insert_message(). Add ohelp() function chat models real-time. 
Add helper functions: model_avail(), image_encode_base64(), check_option_valid(), check_options(), search_options(), validate_options()","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-111","dir":"Changelog","previous_headings":"","what":"ollamar 1.1.1","title":"ollamar 1.1.1","text":"CRAN release: 2024-05-02","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"bug-fixes-1-1-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"ollamar 1.1.1","text":"Fixed invalid URLs. Updated title description.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-100","dir":"Changelog","previous_headings":"","what":"ollamar 1.0.0","title":"ollamar 1.0.0","text":"Initial CRAN submission.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"new-features-1-0-0","dir":"Changelog","previous_headings":"","what":"New features","title":"ollamar 1.0.0","text":"Integrate R Ollama run language models locally machine. Include test_connection() function test connection Ollama server. Include list_models() function list available models. Include pull() function pull model Ollama server. Include delete() function delete model Ollama server. Include chat() function chat model. Include generate() function generate text model. Include embeddings() function get embeddings model. Include resp_process() function process httr2_response objects.","code":""}] +[{"path":"https://hauselin.github.io/ollama-r/CODE_OF_CONDUCT.html","id":null,"dir":"","previous_headings":"","what":"Contributor code of conduct","title":"Contributor code of conduct","text":"contributors maintainers project, pledge respect people contribute reporting issues, posting feature requests, updating documentation, submitting pull requests patches, activities. committed making participation project harassment-free experience everyone, regardless level experience, gender, gender identity expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion. Examples unacceptable behavior participants include use sexual language imagery, derogatory comments personal attacks, trolling, public private harassment, insults, unprofessional conduct. Project maintainers right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct. Project maintainers follow Code Conduct may removed project team. Instances abusive, harassing, otherwise unacceptable behavior may reported opening issue contacting one project maintainers. Code Conduct adapted Contributor Covenant (http://contributor-covenant.org), version 1.0.0, available http://contributor-covenant.org/version/1/0/0/","code":""},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"report-issues-or-seek-support","dir":"","previous_headings":"","what":"Report issues or seek support","title":"Community guidelines and contributing","text":"Open GitHub issue concise description problem, including steps reproduce environment. Check existing/closed issues posting.","code":""},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"contribute-to-ollamar","dir":"","previous_headings":"","what":"Contribute to ollamar","title":"Community guidelines and contributing","text":"make substantial pull request, always file issue make sure someone team agrees ’s problem. Fork repository, create branch changes, submit pull request documented tested code. 
Refer R packages Hadley Wickham Jennifer Bryan R package development guidelines. use roxygen2, Markdown syntax, documentation. use testthat testing. Contributions test cases included easier accept. - use continuous integration deployment GitHub GitHub actions.","code":""},{"path":"https://hauselin.github.io/ollama-r/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of conduct","title":"Community guidelines and contributing","text":"Please note ollamar project released contributor code conduct. contributing project agree abide terms.","code":""},{"path":"https://hauselin.github.io/ollama-r/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2024 ollamar authors Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"installation","dir":"Articles","previous_headings":"","what":"Installation","title":"Using ollamar","text":"Download install Ollama app. macOS Windows preview Linux: curl -fsSL https://ollama.com/install.sh | sh Docker image Open/launch Ollama app start local server. Install either stable latest/development version ollamar. Stable version: latest/development version features/bug fixes (see latest changes ), can install GitHub using install_github function remotes library. doesn’t work don’t remotes library, please run install.packages(\"remotes\") R RStudio running code .","code":"install.packages(\"ollamar\") # install.packages(\"remotes\") # run this line if you don't have the remotes library remotes::install_github(\"hauselin/ollamar\")"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"usage","dir":"Articles","previous_headings":"","what":"Usage","title":"Using ollamar","text":"ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. 
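The notes continuing below explain that Status: 200 OK marks a successful request; as a minimal sketch (assuming a running server and the llama3.1 model), the status can also be checked programmatically with httr2::resp_status() before processing:

library(ollamar)
resp <- generate("llama3.1", "tell me a 5-word story")  # httr2 response object
if (httr2::resp_status(resp) == 200) {  # 200 means the request succeeded
  txt <- resp_process(resp, "text")
}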
response object says Status: 200 OK, request successful.","code":"library(ollamar) test_connection() # test connection to Ollama server # if you see \"Ollama local server not running or wrong server,\" Ollama app/server isn't running # generate a response/text based on a prompt; returns an httr2 response by default resp <- generate(\"llama3.1\", \"tell me a 5-word story\") resp #' interpret httr2 response object #' #' POST http://127.0.0.1:11434/api/generate # endpoint #' Status: 200 OK # if successful, status code should be 200 OK #' Content-Type: application/json #' Body: In memory (414 bytes) # get just the text from the response object resp_process(resp, \"text\") # get the text as a tibble dataframe resp_process(resp, \"df\") # alternatively, specify the output type when calling the function initially txt <- generate(\"llama3.1\", \"tell me a 5-word story\", output = \"text\") # list available models (models you've pulled/downloaded) list_models() name size parameter_size quantization_level modified 1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10 2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"pulldownload-model","dir":"Articles","previous_headings":"Usage","what":"Pull/download model","title":"Using ollamar","text":"Download model ollama library (see API doc). list models can pull/download, see Ollama library.","code":"pull(\"llama3.1\") # download a model (equivalent bash code: ollama run llama3.1) list_models() # verify you've pulled/downloaded the model"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"delete-model","dir":"Articles","previous_headings":"Usage","what":"Delete model","title":"Using ollamar","text":"Delete model data (see API doc). can see models ’ve downloaded list_models(). 
delete model, specify name model.","code":"list_models() # see the models you've pulled/downloaded delete(\"all-minilm:latest\") # returns a httr2 response object"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"generate-completion","dir":"Articles","previous_headings":"Usage","what":"Generate completion","title":"Using ollamar","text":"Generate response given prompt (see API doc).","code":"resp <- generate(\"llama3.1\", \"Tomorrow is a...\") # return httr2 response object by default resp resp_process(resp, \"text\") # process the response to return text/vector output generate(\"llama3.1\", \"Tomorrow is a...\", output = \"text\") # directly return text/vector output generate(\"llama3.1\", \"Tomorrow is a...\", stream = TRUE) # return httr2 response object and stream output generate(\"llama3.1\", \"Tomorrow is a...\", output = \"df\", stream = TRUE) # image prompt # use a vision/multi-modal model generate(\"benzie/llava-phi-3\", \"What is in the image?\", images = \"image.png\", output = 'text')"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"chat","dir":"Articles","previous_headings":"Usage","what":"Chat","title":"Using ollamar","text":"Generate next message chat/conversation.","code":"messages <- create_message(\"what is the capital of australia\") # default role is user resp <- chat(\"llama3.1\", messages) # default returns httr2 response object resp # resp_process(resp, \"text\") # process the response to return text/vector output # specify output type when calling the function chat(\"llama3.1\", messages, output = \"text\") # text vector chat(\"llama3.1\", messages, output = \"df\") # data frame/tibble chat(\"llama3.1\", messages, output = \"jsonlist\") # list chat(\"llama3.1\", messages, output = \"raw\") # raw string chat(\"llama3.1\", messages, stream = TRUE) # stream output and return httr2 response object # create chat history messages <- create_messages( create_message(\"end all your sentences with !!!\", role = \"system\"), create_message(\"Hello!\"), # default role is user create_message(\"Hi, how can I help you?!!!\", role = \"assistant\"), create_message(\"What is the capital of Australia?\"), create_message(\"Canberra!!!\", role = \"assistant\"), create_message(\"what is your name?\") ) cat(chat(\"llama3.1\", messages, output = \"text\")) # print the formatted output # image prompt messages <- create_message(\"What is in the image?\", images = \"image.png\") # use a vision/multi-modal model chat(\"benzie/llava-phi-3\", messages, output = \"text\")"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"stream-responses","dir":"Articles","previous_headings":"Usage > Chat","what":"Stream responses","title":"Using ollamar","text":"","code":"messages <- create_message(\"Tell me a 1-paragraph story.\") # use \"llama3.1\" model, provide list of messages, return text/vector output, and stream the output chat(\"llama3.1\", messages, output = \"text\", stream = TRUE) # chat(model = \"llama3.1\", messages = messages, output = \"text\", stream = TRUE) # same as above"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"format-messages-for-chat","dir":"Articles","previous_headings":"Usage > Chat","what":"Format messages for chat","title":"Using ollamar","text":"Internally, messages represented list many distinct list messages. list/message object two elements: role (can \"user\" \"assistant\" \"system\") content (message text). example shows messages/lists presented. 
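As an added illustration of that structure, the list of messages maps directly onto the JSON the chat endpoint receives; jsonlite (an assumption here, not a dependency named in these docs) makes the mapping visible:

library(jsonlite)
messages <- list(
  list(role = "user", content = "Hello!"),
  list(role = "assistant", content = "Hi! How are you?")
)
toJSON(messages, auto_unbox = TRUE, pretty = TRUE)  # prints the JSON array of role/content objects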
simplify process creating managing messages, ollamar provides functions format prepare messages chat() function. functions also work APIs LLM providers like OpenAI Anthropic. create_messages(): create messages build chat history create_message() creates chat history single message append_message() adds new message end existing messages prepend_message() adds new message beginning existing messages default, inserts message -1 (final) position positive negative indices/positions supported 5 messages, positions 1 (-5), 2 (-4), 3 (-3), 4 (-2), 5 (-1) can convert data.frame, tibble data.table objects list() messages vice versa functions base R popular libraries.","code":"list( # main list containing all the messages list(role = \"user\", content = \"Hello!\"), # first message as a list list(role = \"assistant\", content = \"Hi! How are you?\") # second message as a list ) # create a chat history with one message messages <- create_message(content = \"Hi! How are you? (1ST MESSAGE)\", role = \"assistant\") # or simply, messages <- create_message(\"Hi! How are you?\", \"assistant\") messages[[1]] # get 1st message # append (add to the end) a new message to the existing messages messages <- append_message(\"I'm good. How are you? (2ND MESSAGE)\", \"user\", messages) messages[[1]] # get 1st message messages[[2]] # get 2nd message (newly added message) # prepend (add to the beginning) a new message to the existing messages messages <- prepend_message(\"I'm good. How are you? (0TH MESSAGE)\", \"user\", messages) messages[[1]] # get 0th message (newly added message) messages[[2]] # get 1st message messages[[3]] # get 2nd message # insert a new message at a specific index/position (2nd position in the example below) # by default, the message is inserted at the end of the existing messages (position -1 is the end/default) messages <- insert_message(\"I'm good. How are you? (BETWEEN 0 and 1 MESSAGE)\", \"user\", messages, 2) messages[[1]] # get 0th message messages[[2]] # get between 0 and 1 message (newly added message) messages[[3]] # get 1st message messages[[4]] # get 2nd message # delete a message at a specific index/position (2nd position in the example below) messages <- delete_message(messages, 2) # create a chat history with multiple messages messages <- create_messages( create_message(\"You're a knowledgeable tour guide.\", role = \"system\"), create_message(\"What is the capital of Australia?\") # default role is user ) # create a list of messages messages <- create_messages( create_message(\"You're a knowledgeable tour guide.\", role = \"system\"), create_message(\"What is the capital of Australia?\") ) # convert to dataframe df <- dplyr::bind_rows(messages) # with dplyr library df <- data.table::rbindlist(messages) # with data.table library # convert dataframe to list with apply, purrr functions apply(df, 1, as.list) # convert each row to a list with base R apply purrr::transpose(df) # with purrr library"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"embeddings","dir":"Articles","previous_headings":"Usage","what":"Embeddings","title":"Using ollamar","text":"Get vector embedding prompt/text (see API doc). 
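A sketch (assuming the llama3.1 model is available and a single input text, so each result can be treated as one numeric vector) showing why normalization matters for the similarity computations described below:

e1 <- embed("llama3.1", "Hello, how are you?")  # normalized to length 1 by default
e2 <- embed("llama3.1", "Hi, how are you?")
sum(e1 * e2)  # dot product of unit vectors = cosine similarity
e3 <- embed("llama3.1", "Hello, how are you?", normalize = FALSE)
e4 <- embed("llama3.1", "Hi, how are you?", normalize = FALSE)
sum(e3 * e4) / (sqrt(sum(e3^2)) * sqrt(sum(e4^2)))  # full cosine formula required here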
default, embeddings normalized length 1, means following: cosine similarity can computed slightly faster using just dot product cosine similarity Euclidean distance result identical rankings","code":"embed(\"llama3.1\", \"Hello, how are you?\") # don't normalize embeddings embed(\"llama3.1\", \"Hello, how are you?\", normalize = FALSE) # get embeddings for similar prompts e1 <- embed(\"llama3.1\", \"Hello, how are you?\") e2 <- embed(\"llama3.1\", \"Hi, how are you?\") # compute cosine similarity sum(e1 * e2) # not equals to 1 sum(e1 * e1) # 1 (identical vectors/embeddings) # non-normalized embeddings e3 <- embed(\"llama3.1\", \"Hello, how are you?\", normalize = FALSE) e4 <- embed(\"llama3.1\", \"Hi, how are you?\", normalize = FALSE)"},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"parse-httr2_response-objects-with-resp_process","dir":"Articles","previous_headings":"Usage","what":"Parse httr2_response objects with resp_process()","title":"Using ollamar","text":"ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. can either parse output resp_process() use output parameter function specify output format. Generally, output parameter can one \"df\", \"jsonlist\", \"raw\", \"resp\", \"text\".","code":"resp <- list_models(output = \"resp\") # returns a httr2 response object # # GET http://127.0.0.1:11434/api/tags # Status: 200 OK # Content-Type: application/json # Body: In memory (5401 bytes) # process the httr2 response object with the resp_process() function resp_process(resp, \"df\") # or list_models(output = \"df\") resp_process(resp, \"jsonlist\") # list # or list_models(output = \"jsonlist\") resp_process(resp, \"raw\") # raw string # or list_models(output = \"raw\") resp_process(resp, \"resp\") # returns the input httr2 response object # or list_models() or list_models(\"resp\") resp_process(resp, \"text\") # text vector # or list_models(\"text\")"},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/articles/ollamar.html","id":"parallel-requests","dir":"Articles","previous_headings":"Advanced usage","what":"Parallel requests","title":"Using ollamar","text":"generate() chat() endpoints/functions, can specify output = 'req' function functions return httr2_request objects instead httr2_response objects. multiple httr2_request objects list, can make parallel requests req_perform_parallel function httr2 library. See httr2 documentation details. 
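A compact sketch (assuming httr2 >= 1.0.0, which also provides req_perform_sequential() as the one-at-a-time counterpart) of the request-object workflow, before the worked examples:

library(httr2)
library(ollamar)
prompts <- c("Tell me a 3-word story", "Tell me a 4-word story")
reqs <- lapply(prompts, function(p) generate("llama3.1", p, output = "req"))  # httr2_request objects
resps <- req_perform_parallel(reqs)     # performed concurrently; returns httr2_response objects
# resps <- req_perform_sequential(reqs) # same result, one request at a time
sapply(resps, resp_process, "text")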
Example sentiment analysis parallel requests generate() function Example sentiment analysis parallel requests chat() function","code":"prompt <- \"Tell me a 10-word story\" req <- generate(\"llama3.1\", prompt, output = \"req\") # returns a httr2_request object # # POST http://127.0.0.1:11434/api/generate # Headers: # • content_type: 'application/json' # • accept: 'application/json' # • user_agent: 'ollama-r/1.1.1 (aarch64-apple-darwin20) R/4.4.0' # Body: json encoded data library(httr2) prompt <- \"Tell me a 5-word story\" # create 5 httr2_request objects that generate a response to the same prompt reqs <- lapply(1:5, function(r) generate(\"llama3.1\", prompt, output = \"req\")) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses sapply(resps, resp_process, \"text\") # get responses as text # [1] \"She found him in Paris.\" \"She found the key upstairs.\" # [3] \"She found her long-lost sister.\" \"She found love on Mars.\" # [5] \"She found the diamond ring.\" library(httr2) library(glue) library(dplyr) # text to classify texts <- c('I love this product', 'I hate this product', 'I am neutral about this product') # create httr2_request objects for each text, using the same system prompt reqs <- lapply(texts, function(text) { prompt <- glue(\"Your only task/role is to evaluate the sentiment of product reviews, and your response should be one of the following:'positive', 'negative', or 'other'. Product review: {text}\") generate(\"llama3.1\", prompt, output = \"req\") }) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses sapply(resps, resp_process, \"text\") # get responses as text # [1] \"Positive\" \"Negative.\" # [3] \"'neutral' translates to... 'other'.\" library(httr2) library(dplyr) # text to classify texts <- c('I love this product', 'I hate this product', 'I am neutral about this product') # create system prompt chat_history <- create_message(\"Your only task/role is to evaluate the sentiment of product reviews provided by the user. Your response should simply be 'positive', 'negative', or 'other'.\", \"system\") # create httr2_request objects for each text, using the same system prompt reqs <- lapply(texts, function(text) { messages <- append_message(text, \"user\", chat_history) chat(\"llama3.1\", messages, output = \"req\") }) # make parallel requests and get response resps <- req_perform_parallel(reqs) # list of httr2_response objects # process the responses bind_rows(lapply(resps, resp_process, \"df\")) # get responses as dataframes # # A tibble: 3 × 4 # model role content created_at # # 1 llama3.1 assistant Positive 2024-08-05T17:54:27.758618Z # 2 llama3.1 assistant negative 2024-08-05T17:54:27.657525Z # 3 llama3.1 assistant other 2024-08-05T17:54:27.657067Z"},{"path":"https://hauselin.github.io/ollama-r/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Hause Lin. Author, maintainer, copyright holder. Tawab Safi. Author, contributor.","code":""},{"path":"https://hauselin.github.io/ollama-r/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Lin, H., & Safi, T. (2024). ollamar: R package running large language models. PsyArXiv. 
https://doi.org/10.31234/osf.io/zsrg5","code":"@Article{, title = {ollamar: An R package for running large language models}, author = {Hause Lin and Tawab Safi}, journal = {PsyArXiv}, year = {2024}, month = {aug}, publisher = {OSF}, doi = {10.31234/osf.io/zsrg5}, url = {https://doi.org/10.31234/osf.io/zsrg5}, }"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"ollama-r-library","dir":"","previous_headings":"","what":"Ollama R Library","title":"Ollama R Library","text":"Ollama R library easiest way integrate R Ollama, lets run language models locally machine. Main site: https://hauselin.github.io/ollama-r/ library also makes easy work data structures (e.g., conversational/chat histories) standard different LLMs (provided OpenAI Anthropic). also lets specify different output formats (e.g., dataframes, text/vector, lists) best suit need, allowing easy integration libraries/tools parallelization via httr2 library. use R library, ensure Ollama app installed. Ollama can use GPUs accelerating LLM inference. See Ollama GPU documentation information. See Ollama’s Github page information. library uses Ollama REST API (see documentation details). Note: least 8 GB RAM available run 7B models, 16 GB run 13B models, 32 GB run 33B models.","code":""},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"ollama-r-vs-ollama-pythonjs","dir":"","previous_headings":"","what":"Ollama R vs Ollama Python/JS","title":"Ollama R Library","text":"library inspired official Ollama Python Ollama JavaScript libraries. ’re coming Python JavaScript, feel right home. Alternatively, plan use Ollama Python JavaScript, using R library help understand Python/JavaScript libraries well.","code":""},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Ollama R Library","text":"Download install Ollama app. macOS Windows preview Linux: curl -fsSL https://ollama.com/install.sh | sh Docker image Open/launch Ollama app start local server. Install either stable latest/development version ollamar. Stable version: latest/development version features/bug fixes (see latest changes ), can install GitHub using install_github function remotes library. doesn’t work don’t remotes library, please run install.packages(\"remotes\") R RStudio running code .","code":"install.packages(\"ollamar\") # install.packages(\"remotes\") # run this line if you don't have the remotes library remotes::install_github(\"hauselin/ollamar\")"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"example-usage","dir":"","previous_headings":"","what":"Example usage","title":"Ollama R Library","text":"basic demonstration use library. details, see getting started vignette. ollamar uses httr2 library make HTTP requests Ollama server, many functions library returns httr2_response object default. 
response object says Status: 200 OK, request successful.","code":"library(ollamar) test_connection() # test connection to Ollama server # if you see \"Ollama local server not running or wrong server,\" Ollama app/server isn't running # download a model pull(\"llama3.1\") # download a model (equivalent bash code: ollama run llama3.1) # generate a response/text based on a prompt; returns an httr2 response by default resp <- generate(\"llama3.1\", \"tell me a 5-word story\") resp #' interpret httr2 response object #' #' POST http://127.0.0.1:11434/api/generate # endpoint #' Status: 200 OK # if successful, status code should be 200 OK #' Content-Type: application/json #' Body: In memory (414 bytes) # get just the text from the response object resp_process(resp, \"text\") # get the text as a tibble dataframe resp_process(resp, \"df\") # alternatively, specify the output type when calling the function initially txt <- generate(\"llama3.1\", \"tell me a 5-word story\", output = \"text\") # list available models (models you've pulled/downloaded) list_models() name size parameter_size quantization_level modified 1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10 2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"citing-ollamar","dir":"","previous_headings":"","what":"Citing ollamar","title":"Ollama R Library","text":"use library, please cite paper using following BibTeX entry:","code":"@article{Lin2024Aug, author = {Lin, Hause and Safi, Tawab}, title = {{ollamar: An R package for running large language models}}, journal = {PsyArXiv}, year = {2024}, month = aug, publisher = {OSF}, doi = {10.31234/osf.io/zsrg5}, url = {https://doi.org/10.31234/osf.io/zsrg5} }"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Append message to a list — append_message","title":"Append message to a list — append_message","text":"Appends message (add end list) list messages. role content converted list appended input list.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Append message to a list — append_message","text":"","code":"append_message(content, role = \"user\", x = NULL, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Append message to a list — append_message","text":"content content message. role role message. Can \"user\", \"system\", \"assistant\". Default \"user\". x list messages. Default NULL. ... 
"},{"path":"https://hauselin.github.io/ollama-r/index.html","id":"citing-ollamar","dir":"","previous_headings":"","what":"Citing ollamar","title":"Ollama R Library","text":"If you use this library, please cite this paper using the following BibTeX entry:","code":"@article{Lin2024Aug, author = {Lin, Hause and Safi, Tawab}, title = {{ollamar: An R package for running large language models}}, journal = {PsyArXiv}, year = {2024}, month = aug, publisher = {OSF}, doi = {10.31234/osf.io/zsrg5}, url = {https://doi.org/10.31234/osf.io/zsrg5} }"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Append message to a list — append_message","title":"Append message to a list — append_message","text":"Appends a message (i.e., adds it to the end of a list) to a list of messages. The role and content will be converted to a list and appended to the input list.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Append message to a list — append_message","text":"","code":"append_message(content, role = \"user\", x = NULL, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Append message to a list — append_message","text":"content The content of the message. role The role of the message. Can be \"user\", \"system\", or \"assistant\". Default is \"user\". x A list of messages. Default is NULL. ... Additional arguments such as images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Append message to a list — append_message","text":"A list of messages with the new message appended.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/append_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Append message to a list — append_message","text":"","code":"append_message(\"Hello\", \"user\") #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"Hello\" #> #> append_message(\"Always respond nicely\", \"system\") #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Always respond nicely\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a chat completion with message history — chat","title":"Generate a chat completion with message history — chat","text":"Generate a chat completion with message history","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a chat completion with message history — chat","text":"","code":"chat( model, messages, tools = list(), stream = FALSE, keep_alive = \"5m\", output = c(\"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\"), endpoint = \"/api/chat\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a chat completion with message history — chat","text":"model A character string of the model name such as \"llama3\". messages A list of lists of messages for the model (see examples below). tools Tools for the model to use if supported. Requires stream = FALSE. Default is an empty list. stream Enable response streaming. Default is FALSE. keep_alive The duration to keep the connection alive. Default is \"5m\". output The output format. Default is \"resp\". Other options are \"jsonlist\", \"raw\", \"df\", \"text\", \"req\" (httr2_request object). endpoint The endpoint to chat with the model. Default is \"/api/chat\". host The base URL to use. Default is NULL, which uses Ollama's default base URL. ... 
Additional options to pass to the model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a chat completion with message history — chat","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate a chat completion with message history — chat","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/chat.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a chat completion with message history — chat","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # one message messages <- list( list(role = \"user\", content = \"How are you doing?\") ) chat(\"llama3\", messages) # returns response by default chat(\"llama3\", messages, output = \"text\") # returns text/vector chat(\"llama3\", messages, temperature = 2.8) # additional options chat(\"llama3\", messages, stream = TRUE) # stream response chat(\"llama3\", messages, output = \"df\", stream = TRUE) # stream and return dataframe # multiple messages messages <- list( list(role = \"user\", content = \"Hello!\"), list(role = \"assistant\", content = \"Hi! How are you?\"), list(role = \"user\", content = \"Who is the prime minister of the uk?\"), list(role = \"assistant\", content = \"Rishi Sunak\"), list(role = \"user\", content = \"List all the previous messages.\") ) chat(\"llama3\", messages, stream = TRUE) # image image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") messages <- list( list(role = \"user\", content = \"What is in the image?\", images = image_path) ) chat(\"benzie/llava-phi-3\", messages, output = 'text')
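 # illustrative extra example (not in the original docs): build the message list with create_messages() instead of nested list() calls messages <- create_messages( create_message(\"Be concise\", \"system\"), create_message(\"What is R?\") ) chat(\"llama3\", messages, output = \"text\")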
 }"},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if an option is valid — check_option_valid","title":"Check if an option is valid — check_option_valid","text":"Check if an option is valid","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if an option is valid — check_option_valid","text":"","code":"check_option_valid(opt)"},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if an option is valid — check_option_valid","text":"opt An option (character) to check.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if an option is valid — check_option_valid","text":"Returns TRUE if the option is valid, FALSE otherwise.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_option_valid.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if an option is valid — check_option_valid","text":"","code":"check_option_valid(\"mirostat\") #> [1] TRUE check_option_valid(\"invalid_option\") #> [1] FALSE"},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if a vector of options are valid — check_options","title":"Check if a vector of options are valid — check_options","text":"Check if a vector of options are valid","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if a vector of options are valid — check_options","text":"","code":"check_options(opts = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if a vector of options are valid — check_options","text":"opts A vector of options to check.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if a vector of options are valid — check_options","text":"Returns a list with two elements: valid_options and invalid_options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/check_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if a vector of options are valid — check_options","text":"","code":"check_options(c(\"mirostat\", \"invalid_option\")) #> $valid_options #> [1] \"mirostat\" #> #> $invalid_options #> [1] \"invalid_option\" #> check_options(c(\"mirostat\", \"num_predict\")) #> $valid_options #> [1] \"mirostat\" \"num_predict\" #> #> $invalid_options #> character(0) #>"},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":null,"dir":"Reference","previous_headings":"","what":"Copy a model — copy","title":"Copy a model — copy","text":"Creates a model with another name from an existing model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Copy a model — copy","text":"","code":"copy(source, destination, endpoint = \"/api/copy\", host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Copy a model — copy","text":"source The name of the model to copy. destination The name for the new model. endpoint The endpoint to copy the model. Default is \"/api/copy\". host The base URL to use. 
Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Copy a model — copy","text":"An httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Copy a model — copy","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/copy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Copy a model — copy","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 copy(\"llama3\", \"llama3_copy\") delete(\"llama3_copy\") # delete the model that was just copied }"},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a model from a Modelfile — create","title":"Create a model from a Modelfile — create","text":"It is recommended to set modelfile to the content of the Modelfile rather than just setting the path.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a model from a Modelfile — create","text":"","code":"create( name, modelfile = NULL, stream = FALSE, path = NULL, endpoint = \"/api/create\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a model from a Modelfile — create","text":"name Name of the model to create. modelfile Contents of the Modelfile as a character string. Default is NULL. stream Enable response streaming. Default is FALSE. path The path to the Modelfile. Default is NULL. endpoint The endpoint to create the model. Default is \"/api/create\". host The base URL to use. Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a model from a Modelfile — create","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create a model from a Modelfile — create","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a model from a Modelfile — create","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 create(\"mario\", \"FROM llama3\\nSYSTEM You are mario from Super Mario Bros.\") generate(\"mario\", \"who are you?\", output = \"text\") # model should say it's Mario delete(\"mario\") # delete the model created above
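 # illustrative extra example (not in the original docs; assumes Ollama's Modelfile PARAMETER syntax to set a model option) create(\"sketch\", \"FROM llama3\\nPARAMETER temperature 0.3\") delete(\"sketch\") # clean up the illustrative model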
 }"},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a message — create_message","title":"Create a message — create_message","text":"Create a message","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a message — create_message","text":"","code":"create_message(content, role = \"user\", ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a message — create_message","text":"content The content of the message. role The role of the message. Can be \"user\", \"system\", or \"assistant\". Default is \"user\". ... Additional arguments such as images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a message — create_message","text":"A list of messages.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a message — create_message","text":"","code":"create_message(\"Hello\", \"user\") #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"Hello\" #> #> create_message(\"Always respond nicely\", \"system\") #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Always respond nicely\" #> #> create_message(\"I am here to help\", \"assistant\") #> [[1]] #> [[1]]$role #> [1] \"assistant\" #> #> [[1]]$content #> [1] \"I am here to help\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a list of messages — create_messages","title":"Create a list of messages — create_messages","text":"Create messages for the chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a list of messages — create_messages","text":"","code":"create_messages(...)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a list of messages — create_messages","text":"... 
A list of messages, each of list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a list of messages — create_messages","text":"A list of messages, with list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a list of messages — create_messages","text":"","code":"messages <- create_messages( create_message(\"be nice\", \"system\"), create_message(\"tell me a 3-word joke\") ) messages <- create_messages( list(role = \"system\", content = \"be nice\"), list(role = \"user\", content = \"tell me a 3-word joke\") )"},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a httr2 request object — create_request","title":"Create a httr2 request object — create_request","text":"Creates an httr2 request object with the base URL, headers, and endpoint. Used by other functions in the package and not intended to be used directly.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a httr2 request object — create_request","text":"","code":"create_request(endpoint, host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a httr2 request object — create_request","text":"endpoint The endpoint to create the request for. host The base URL to use. Default is NULL, which uses http://127.0.0.1:11434","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a httr2 request object — create_request","text":"An httr2 request object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/create_request.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a httr2 request object — create_request","text":"","code":"create_request(\"/api/tags\") #> #> GET http://127.0.0.1:11434/api/tags #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty create_request(\"/api/chat\") #> #> GET http://127.0.0.1:11434/api/chat #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty create_request(\"/api/embeddings\") #> #> GET http://127.0.0.1:11434/api/embeddings #> Headers: #> • content_type: 'application/json' #> • accept: 'application/json' #> • user_agent: 'ollama-r/1.2.1.9000 (x86_64-pc-linux-gnu) R/4.4.1' #> Body: empty"},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":null,"dir":"Reference","previous_headings":"","what":"Delete a model and its data — delete","title":"Delete a model and its data — delete","text":"Delete a model from your local machine that you downloaded using the pull() function. 
To see which models are available, use the list_models() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Delete a model and its data — delete","text":"","code":"delete(name, endpoint = \"/api/delete\", host = NULL)"},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Delete a model and its data — delete","text":"name A character string of the model name such as \"llama3\". endpoint The endpoint to delete the model. Default is \"/api/delete\". host The base URL to use. Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Delete a model and its data — delete","text":"An httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Delete a model and its data — delete","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Delete a model and its data — delete","text":"","code":"if (FALSE) { # \\dontrun{ delete(\"llama3\") } # }"},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Delete a message in a specified position from a list — delete_message","title":"Delete a message in a specified position from a list — delete_message","text":"Delete a message using positive or negative positions/indices. Negative positions/indices can be used to refer to elements/messages from the end of the sequence.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Delete a message in a specified position from a list — delete_message","text":"","code":"delete_message(x, position = -1)"},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Delete a message in a specified position from a list — delete_message","text":"x A list of messages. 
position The position of the message to delete.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Delete a message in a specified position from a list — delete_message","text":"A list of messages with the message at the specified position removed.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/delete_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Delete a message in a specified position from a list — delete_message","text":"","code":"messages <- list( list(role = \"system\", content = \"Be friendly\"), list(role = \"user\", content = \"How are you?\") ) delete_message(messages, 1) # delete first message #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"How are you?\" #> #> delete_message(messages, -2) # same as above (delete first message) #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"How are you?\" #> #> delete_message(messages, 2) # delete second message #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> delete_message(messages, -1) # same as above (delete second message) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate embedding for inputs — embed","title":"Generate embedding for inputs — embed","text":"Supersedes the embeddings() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate embedding for inputs — embed","text":"","code":"embed( model, input, truncate = TRUE, normalize = TRUE, keep_alive = \"5m\", endpoint = \"/api/embed\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate embedding for inputs — embed","text":"model A character string of the model name such as \"llama3\". input A vector of characters that you want to get the embeddings for. truncate Truncates the end of each input to fit within context length. Returns an error if FALSE and the context length is exceeded. Defaults to TRUE. normalize Normalize the vector to length 1. Default is TRUE. keep_alive The time to keep the connection alive. Default is \"5m\" (5 minutes). endpoint The endpoint to get the vector embedding. Default is \"/api/embed\". host The base URL to use. Default is NULL, which uses Ollama's default base URL. ... Additional options to pass to the model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate embedding for inputs — embed","text":"A numeric matrix of the embedding. 
Each column is the embedding of one input.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate embedding for inputs — embed","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embed.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate embedding for inputs — embed","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 embed(\"nomic-embed-text:latest\", \"The quick brown fox jumps over the lazy dog.\") # pass multiple inputs embed(\"nomic-embed-text:latest\", c(\"Good bye\", \"Bye\", \"See you.\")) # pass model options to the model embed(\"nomic-embed-text:latest\", \"Hello!\", temperature = 0.1, num_predict = 3)
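 # illustrative extra example (not in the original docs): because vectors are normalized to length 1 by default, cosine similarity is just the dot product of two columns m <- embed(\"nomic-embed-text:latest\", c(\"Hello\", \"Hi\")) sum(m[, 1] * m[, 2]) # cosine similarity between the two inputs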
 }"},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"This function has been deprecated for some time and is superseded by embed(). See embed() for more details.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"","code":"embeddings( model, prompt, normalize = TRUE, keep_alive = \"5m\", endpoint = \"/api/embeddings\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"model A character string of the model name such as \"llama3\". prompt A character string of the prompt that you want to get the vector embedding for. normalize Normalize the vector to length 1. Default is TRUE. keep_alive The time to keep the connection alive. Default is \"5m\" (5 minutes). endpoint The endpoint to get the vector embedding. Default is \"/api/embeddings\". host The base URL to use. Default is NULL, which uses Ollama's default base URL. ... Additional options to pass to the model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"A numeric vector of the embedding.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/embeddings.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate embeddings for a single prompt - deprecated in favor of embed() — embeddings","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 embeddings(\"nomic-embed-text:latest\", \"The quick brown fox jumps over the lazy dog.\") # pass model options to the model embeddings(\"nomic-embed-text:latest\", \"Hello!\", temperature = 0.1, num_predict = 3) }"},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Encode images in messages to base64 format — encode_images_in_messages","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"Encode images in messages to base64 format","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"","code":"encode_images_in_messages(messages)"},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"messages A list of messages, with list class. 
It is generally used with the chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"A list of messages with images encoded in base64 format.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/encode_images_in_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Encode images in messages to base64 format — encode_images_in_messages","text":"","code":"image <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") message <- create_message(content = \"what is in the image?\", images = image) message_updated <- encode_images_in_messages(message)"},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a response for a given prompt — generate","title":"Generate a response for a given prompt — generate","text":"Generate a response for a given prompt","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a response for a given prompt — generate","text":"","code":"generate( model, prompt, suffix = \"\", images = \"\", system = \"\", template = \"\", context = list(), stream = FALSE, raw = FALSE, keep_alive = \"5m\", output = c(\"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\"), endpoint = \"/api/generate\", host = NULL, ... )"},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a response for a given prompt — generate","text":"model A character string of the model name such as \"llama3\". prompt A character string of the prompt, like \"The sky is ...\" suffix A character string after the model response. Default is \"\". images A path to an image file to include in the prompt. Default is \"\". system A character string of the system prompt (overrides what is defined in the Modelfile). Default is \"\". template A character string of the prompt template (overrides what is defined in the Modelfile). Default is \"\". context A list of context from a previous response, used to include previous conversation in the prompt. Default is an empty list. stream Enable response streaming. Default is FALSE. raw If TRUE, no formatting will be applied to the prompt. You may choose to use the raw parameter if you are specifying a full templated prompt in your request to the API. Default is FALSE. keep_alive The time to keep the connection alive. Default is \"5m\" (5 minutes). output A character vector of the output format. Default is \"resp\". Options are \"resp\", \"jsonlist\", \"raw\", \"df\", \"text\", \"req\" (httr2_request object). endpoint The endpoint to generate the completion. Default is \"/api/generate\". host The base URL to use. Default is NULL, which uses Ollama's default base URL. ... 
Additional options to pass to the model.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a response for a given prompt — generate","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate a response for a given prompt — generate","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/generate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a response for a given prompt — generate","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # text prompt generate(\"llama3\", \"The sky is...\", stream = FALSE, output = \"df\") # stream and increase temperature generate(\"llama3\", \"The sky is...\", stream = TRUE, output = \"text\", temperature = 2.0) # image prompt # something like \"image1.png\" image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") # use vision or multimodal model such as https://ollama.com/benzie/llava-phi-3 generate(\"benzie/llava-phi-3:latest\", \"What is in the image?\", images = image_path, output = \"text\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":null,"dir":"Reference","previous_headings":"","what":"Read image file and encode it to base64 — image_encode_base64","title":"Read image file and encode it to base64 — image_encode_base64","text":"Read an image file and encode it to base64","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Read image file and encode it to base64 — image_encode_base64","text":"","code":"image_encode_base64(image_path)"},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Read image file and encode it to base64 — image_encode_base64","text":"image_path The path to the image file.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Read image file and encode it to base64 — image_encode_base64","text":"A base64 encoded string.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/image_encode_base64.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Read image file and encode it to base64 — image_encode_base64","text":"","code":"image_path <- file.path(system.file(\"extdata\", package = \"ollamar\"), \"image1.png\") substr(image_encode_base64(image_path), 1, 5) # truncate output #> [1] \"iVBOR\""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Insert message into a list at a specified position — insert_message","title":"Insert message into a list at a specified position — insert_message","text":"Inserts a message at a specified position into a list of messages. The 
role and content will be converted to a list and inserted into the input list at the given position.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Insert message into a list at a specified position — insert_message","text":"","code":"insert_message(content, role = \"user\", x = NULL, position = -1, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Insert message into a list at a specified position — insert_message","text":"content The content of the message. role The role of the message. Can be \"user\", \"system\", or \"assistant\". Default is \"user\". x A list of messages. Default is NULL. position The position at which to insert the new message. Default is -1 (end of list). ... Additional arguments such as images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Insert message into a list at a specified position — insert_message","text":"A list of messages with the new message inserted at the specified position.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/insert_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Insert message into a list at a specified position — insert_message","text":"","code":"messages <- list( list(role = \"system\", content = \"Be friendly\"), list(role = \"user\", content = \"How are you?\") ) insert_message(\"INSERT MESSAGE AT THE END\", \"user\", messages) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> #> [[2]] #> [[2]]$role #> [1] \"user\" #> #> [[2]]$content #> [1] \"How are you?\" #> #> #> [[3]] #> [[3]]$role #> [1] \"user\" #> #> [[3]]$content #> [1] \"INSERT MESSAGE AT THE END\" #> #> insert_message(\"INSERT MESSAGE AT THE BEGINNING\", \"user\", messages, 2) #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Be friendly\" #> #> #> [[2]] #> [[2]]$role #> [1] \"user\" #> #> [[2]]$content #> [1] \"INSERT MESSAGE AT THE BEGINNING\" #> #> #> [[3]] #> [[3]]$role #> [1] \"user\" #> #> [[3]]$content #> [1] \"How are you?\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":null,"dir":"Reference","previous_headings":"","what":"List models that are available locally — list_models","title":"List models that are available locally — list_models","text":"List models that are available locally","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List models that are available locally — list_models","text":"","code":"list_models( output = c(\"df\", \"resp\", \"jsonlist\", \"raw\", \"text\"), endpoint = \"/api/tags\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List models that are available locally — list_models","text":"output The output format. Default is \"df\". Other options are \"resp\", \"jsonlist\", \"raw\", \"text\". endpoint The endpoint to get the models. Default is \"/api/tags\". host The base URL to use. 
Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"List models that are available locally — list_models","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"List models that are available locally — list_models","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/list_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"List models that are available locally — list_models","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 list_models() # returns dataframe list_models(\"df\") # returns dataframe list_models(\"resp\") # httr2 response object list_models(\"jsonlist\") list_models(\"raw\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if model is available locally — model_avail","title":"Check if model is available locally — model_avail","text":"Check if a model is available locally","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if model is available locally — model_avail","text":"","code":"model_avail(model)"},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if model is available locally — model_avail","text":"model A character string of the model name such as \"llama3\".","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if model is available locally — model_avail","text":"A logical value indicating if the model exists.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_avail.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if model is available locally — model_avail","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 model_avail(\"codegemma:7b\") model_avail(\"abc\") model_avail(\"llama3\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Model options — model_options","title":"Model options — model_options","text":"Model options","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model options — model_options","text":"","code":"model_options"},{"path":"https://hauselin.github.io/ollama-r/reference/model_options.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Model options — model_options","text":"An object of class list of length 13.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":null,"dir":"Reference","previous_headings":"","what":"Chat with a model in real-time in R console — ohelp","title":"Chat with a model in real-time in R console — ohelp","text":"Chat with a model in real-time in the R 
console","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Chat with a model in real-time in R console — ohelp","text":"","code":"ohelp(model = \"codegemma:7b\", ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Chat with a model in real-time in R console — ohelp","text":"model A character string of the model name such as \"llama3\". Defaults to \"codegemma:7b\", which is a decent coding model as of 2024-07-27. ... Additional options. No other options are currently available at this time.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Chat with a model in real-time in R console — ohelp","text":"Does not return anything. It prints the conversation in the console.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ohelp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Chat with a model in real-time in R console — ohelp","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 ohelp(first_prompt = \"quit\") # regular usage: ohelp() }"},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":null,"dir":"Reference","previous_headings":"","what":"Package configuration — package_config","title":"Package configuration — package_config","text":"Package configuration","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Package configuration — package_config","text":"","code":"package_config"},{"path":"https://hauselin.github.io/ollama-r/reference/package_config.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Package configuration — package_config","text":"An object of class list of length 3.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepend message to a list — prepend_message","title":"Prepend message to a list — prepend_message","text":"Prepends a message (i.e., adds it to the beginning of a list) to a list of messages. The role and content will be converted to a list and prepended to the input list.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepend message to a list — prepend_message","text":"","code":"prepend_message(content, role = \"user\", x = NULL, ...)"},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepend message to a list — prepend_message","text":"content The content of the message. role The role of the message. Can be \"user\", \"system\", or \"assistant\". x A list of messages. Default is NULL. ... 
Additional arguments such as images.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepend message to a list — prepend_message","text":"A list of messages with the new message prepended.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/prepend_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Prepend message to a list — prepend_message","text":"","code":"prepend_message(\"Hello\", \"user\") #> [[1]] #> [[1]]$role #> [1] \"user\" #> #> [[1]]$content #> [1] \"Hello\" #> #> prepend_message(\"Always respond nicely\", \"system\") #> [[1]] #> [[1]]$role #> [1] \"system\" #> #> [[1]]$content #> [1] \"Always respond nicely\" #> #>"},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":null,"dir":"Reference","previous_headings":"","what":"List models that are currently loaded into memory — ps","title":"List models that are currently loaded into memory — ps","text":"List models that are currently loaded into memory","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List models that are currently loaded into memory — ps","text":"","code":"ps( output = c(\"df\", \"resp\", \"jsonlist\", \"raw\", \"text\"), endpoint = \"/api/ps\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List models that are currently loaded into memory — ps","text":"output The output format. Default is \"df\". Supported formats are \"df\", \"resp\", \"jsonlist\", \"raw\", \"text\". endpoint The endpoint to list the running models. Default is \"/api/ps\". host The base URL to use. Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"List models that are currently loaded into memory — ps","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"List models that are currently loaded into memory — ps","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/ps.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"List models that are currently loaded into memory — ps","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 ps(\"text\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":null,"dir":"Reference","previous_headings":"","what":"Pull/download a model from the Ollama library — pull","title":"Pull/download a model from the Ollama library — pull","text":"See https://ollama.com/library for a list of the available models. Use the list_models() function to get the list of models already downloaded/installed on your machine. 
Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pull/download a model from the Ollama library — pull","text":"","code":"pull( name, stream = FALSE, insecure = FALSE, endpoint = \"/api/pull\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pull/download a model from the Ollama library — pull","text":"name A character string of the model name to download/pull, such as \"llama3\". stream Enable response streaming. Default is FALSE. insecure Allow insecure connections; only use this if you are pulling from your own library during development. Default is FALSE. endpoint The endpoint to pull the model. Default is \"/api/pull\". host The base URL to use. Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pull/download a model from the Ollama library — pull","text":"An httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Pull/download a model from the Ollama library — pull","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/pull.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Pull/download a model from the Ollama library — pull","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 pull(\"llama3\") pull(\"all-minilm\", stream = FALSE)
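 # illustrative extra example (not in the original docs): only pull if the model is missing locally if (!model_avail(\"llama3\")) pull(\"llama3\")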
 }"},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":null,"dir":"Reference","previous_headings":"","what":"Push or upload a model to a model library — push","title":"Push or upload a model to a model library — push","text":"Push or upload a model to the Ollama model library. Requires registering for ollama.ai and adding a public key first.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Push or upload a model to a model library — push","text":"","code":"push( name, insecure = FALSE, stream = FALSE, output = c(\"resp\", \"jsonlist\", \"raw\", \"text\", \"df\"), endpoint = \"/api/push\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Push or upload a model to a model library — push","text":"name A character string of the model name to upload, in the form of namespace/model:tag (e.g., mattw/pygmalion:latest). insecure Allow insecure connections; only use this if you are pushing to your own library during development. Default is FALSE. stream Enable response streaming. Default is FALSE. output The output format. Default is \"resp\". Other options are \"jsonlist\", \"raw\", \"text\", \"df\". endpoint The endpoint to push the model. Default is \"/api/push\". host The base URL to use. Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Push or upload a model to a model library — push","text":"An httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Push or upload a model to a model library — push","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/push.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Push or upload a model to a model library — push","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 push(\"mattw/pygmalion:latest\") }"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":null,"dir":"Reference","previous_headings":"","what":"Process httr2 response object — resp_process","title":"Process httr2 response object — resp_process","text":"Process httr2 response object","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Process httr2 response object — resp_process","text":"","code":"resp_process(resp, output = c(\"df\", \"jsonlist\", \"raw\", \"resp\", \"text\"))"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Process httr2 response object — resp_process","text":"resp A httr2 response object. output The output format. Default is \"df\". Other options are \"jsonlist\", \"raw\", \"resp\" (httr2 response object), \"text\"","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Process httr2 response object — resp_process","text":"A data frame, json list, raw, or httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Process httr2 response object — resp_process","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 resp <- list_models(\"resp\") resp_process(resp, \"df\") # parse response to dataframe/tibble resp_process(resp, \"jsonlist\") # parse response to list resp_process(resp, \"raw\") # parse response to raw string resp_process(resp, \"resp\") # return input response object resp_process(resp, \"text\") # return text/character vector }"},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process_stream.html","id":null,"dir":"Reference","previous_headings":"","what":"Process httr2 response object for streaming — resp_process_stream","title":"Process httr2 response object for streaming — resp_process_stream","text":"Process httr2 response object for streaming","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/resp_process_stream.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Process httr2 response object for streaming — resp_process_stream","text":"","code":"resp_process_stream(resp, output)"},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":null,"dir":"Reference","previous_headings":"","what":"Search for options based on a query — search_options","title":"Search for options based on a query — 
search_options","text":"Search for options based on a query","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Search for options based on a query — search_options","text":"","code":"search_options(query)"},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Search for options based on a query — search_options","text":"query A query (character) to search for in the options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Search for options based on a query — search_options","text":"Returns a list of matching options.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/search_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Search for options based on a query — search_options","text":"","code":"search_options(\"learning rate\") #> Matching options: mirostat_eta #> $mirostat_eta #> $mirostat_eta$description #> [1] \"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.\" #> #> $mirostat_eta$default_value #> [1] 0.1 #> #> search_options(\"tokens\") #> Matching options: tfs_z, num_predict #> $tfs_z #> $tfs_z$description #> [1] \"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.\" #> #> $tfs_z$default_value #> [1] 1 #> #> #> $num_predict #> $num_predict$description #> [1] \"Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context)\" #> #> $num_predict$default_value #> [1] 128 #> #> search_options(\"invalid query\") #> No matching options found #> #> list()"},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":null,"dir":"Reference","previous_headings":"","what":"Show model information — show","title":"Show model information — show","text":"Model information includes details, modelfile, template, parameters, license, and system prompt.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Show model information — show","text":"","code":"show( name, verbose = FALSE, output = c(\"jsonlist\", \"resp\", \"raw\"), endpoint = \"/api/show\", host = NULL )"},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Show model information — show","text":"name Name of the model to show. verbose Returns full data for verbose response fields. Default is FALSE. output The output format. Default is \"jsonlist\". Other options are \"resp\", \"raw\". endpoint The endpoint to show the model. Default is \"/api/show\". host The base URL to use. 
Default is NULL, which uses Ollama's default base URL.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Show model information — show","text":"The response in the format specified by the output parameter.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Show model information — show","text":"API documentation","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/show.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Show model information — show","text":"","code":"if (FALSE) { # test_connection()$status_code == 200 # show(\"llama3\") # returns jsonlist show(\"llama3\", output = \"resp\") # returns response object }"},{"path":"https://hauselin.github.io/ollama-r/reference/stream_handler.html","id":null,"dir":"Reference","previous_headings":"","what":"Stream handler helper function — stream_handler","title":"Stream handler helper function — stream_handler","text":"Function to handle streaming.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/stream_handler.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Stream handler helper function — stream_handler","text":"","code":"stream_handler(x, env, endpoint)"},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":null,"dir":"Reference","previous_headings":"","what":"Test connection to Ollama server — test_connection","title":"Test connection to Ollama server — test_connection","text":"test_connection() tests whether the Ollama server is running or not.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test connection to Ollama server — test_connection","text":"","code":"test_connection(url = \"http://localhost:11434\")"},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test connection to Ollama server — test_connection","text":"url The URL of the Ollama server. Default is http://localhost:11434","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test connection to Ollama server — test_connection","text":"An httr2 response object.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/test_connection.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test connection to Ollama server — test_connection","text":"","code":"test_connection() #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. Visit https://ollama.com or https://github.com/ollama/ollama #> #> GET http://localhost:11434 #> Body: empty test_connection(\"http://localhost:11434\") # default url #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. Visit https://ollama.com or https://github.com/ollama/ollama #> #> GET http://localhost:11434 #> Body: empty test_connection(\"http://127.0.0.1:11434\") #> Ollama local server not running or wrong server. #> Download and launch Ollama app to run the server. 
{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a message — validate_message","title":"Validate a message — validate_message","text":"Validate a message to ensure it has the required fields and the correct data types for the chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a message — validate_message","text":"","code":"validate_message(message)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a message — validate_message","text":"message A list with a single message of list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate a message — validate_message","text":"TRUE if the message is valid; otherwise an error is thrown.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_message.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate a message — validate_message","text":"","code":"validate_message(create_message(\"Hello\")) #> [1] TRUE validate_message(list(role = \"user\", content = \"Hello\")) #> [1] TRUE"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a list of messages — validate_messages","title":"Validate a list of messages — validate_messages","text":"Validate a list of messages to ensure they have the required fields and the correct data types for the chat() function.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a list of messages — validate_messages","text":"","code":"validate_messages(messages)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a list of messages — validate_messages","text":"messages A list of messages, each of list class.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate a list of messages — validate_messages","text":"TRUE if all messages are valid; otherwise warning messages are printed and FALSE is returned.","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_messages.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate a list of messages — validate_messages","text":"","code":"validate_messages(create_messages( create_message(\"Be friendly\", \"system\"), create_message(\"Hello\") )) #> [1] TRUE"},
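The two validators above are meant to run on conversation histories built with the create_message()/create_messages() helpers. A minimal sketch composed from the documented example calls:

```r
library(ollamar)

# Build a conversation with the documented helpers and validate it
# before handing it to chat().
msgs <- create_messages(
  create_message("Be friendly", "system"),
  create_message("Hello")
)
validate_messages(msgs)  # TRUE when every message has valid fields
```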
call","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate additional options or parameters provided to the API call — validate_options","text":"","code":"validate_options(...)"},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate additional options or parameters provided to the API call — validate_options","text":"... Additional options parameters provided API call","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate additional options or parameters provided to the API call — validate_options","text":"TRUE additional options valid, FALSE otherwise","code":""},{"path":"https://hauselin.github.io/ollama-r/reference/validate_options.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Validate additional options or parameters provided to the API call — validate_options","text":"","code":"validate_options(mirostat = 1, mirostat_eta = 0.2, num_ctx = 1024) #> [1] TRUE validate_options(mirostat = 1, mirostat_eta = 0.2, invalid_opt = 1024) #> Valid options: mirostat, mirostat_eta #> Invalid options: invalid_opt #> See available options with check_options() or model_options. #> See also https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter #> [1] FALSE"},{"path":[]},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-121","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.1","title":"ollamar 1.2.1","text":"CRAN release: 2024-08-25 generate() chat() accept multiple images prompts/messages. Add functions validate messages chat() function: validate_message(), validate_messages(). Add encode_images_in_messages() encode images messages chat() function. Add create_messages() create messages easily. Helper functions managing messages accept ... parameter pass additional options. Update README docs reflect changes.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-120","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.0","title":"ollamar 1.2.0","text":"CRAN release: 2024-08-17 functions calling API endpoints endpoint parameter. functions calling API endpoints ... parameter pass additional model options API. functions calling API endpoints host parameter specify host URL. Default NULL, uses default Ollama URL. Add req output format generate() chat(). Add new functions calling APIs: create(), show(), copy(), delete(), push(), embed() (supercedes embeddings()), ps(). Add helper functions manipulate chat/conversation history chat() function (APIs like OpenAI): create_message(), append_message(), prepend_message(), delete_message(), insert_message(). Add ohelp() function chat models real-time. 
{"path":[]},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-121","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.1","title":"ollamar 1.2.1","text":"CRAN release: 2024-08-25. generate() and chat() accept multiple images in prompts/messages. Add functions to validate messages for the chat() function: validate_message(), validate_messages(). Add encode_images_in_messages() to encode images in messages for the chat() function. Add create_messages() to create messages easily. Helper functions for managing messages accept a ... parameter to pass additional options. Update README and docs to reflect the changes.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-120","dir":"Changelog","previous_headings":"","what":"ollamar 1.2.0","title":"ollamar 1.2.0","text":"CRAN release: 2024-08-17. All functions calling API endpoints have an endpoint parameter. All functions calling API endpoints have a ... parameter to pass additional model options to the API. All functions calling API endpoints have a host parameter to specify the host URL; the default is NULL, which uses the default Ollama URL. Add req output format to generate() and chat(). Add new functions for calling APIs: create(), show(), copy(), delete(), push(), embed() (supersedes embeddings()), ps(). Add helper functions to manipulate the chat/conversation history for the chat() function (or APIs like OpenAI's): create_message(), append_message(), prepend_message(), delete_message(), insert_message(). Add ohelp() function to chat with models in real time. Add helper functions: model_avail(), image_encode_base64(), check_option_valid(), check_options(), search_options(), validate_options().","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-111","dir":"Changelog","previous_headings":"","what":"ollamar 1.1.1","title":"ollamar 1.1.1","text":"CRAN release: 2024-05-02","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"bug-fixes-1-1-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"ollamar 1.1.1","text":"Fixed invalid URLs. Updated title and description.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"ollamar-100","dir":"Changelog","previous_headings":"","what":"ollamar 1.0.0","title":"ollamar 1.0.0","text":"Initial CRAN submission.","code":""},{"path":"https://hauselin.github.io/ollama-r/news/index.html","id":"new-features-1-0-0","dir":"Changelog","previous_headings":"","what":"New features","title":"ollamar 1.0.0","text":"Integrate R with Ollama to run language models locally on your machine. Include test_connection() function to test the connection to the Ollama server. Include list_models() function to list available models. Include pull() function to pull a model from the Ollama server. Include delete() function to delete a model from the Ollama server. Include chat() function to chat with a model. Include generate() function to generate text from a model. Include embeddings() function to get embeddings from a model. Include resp_process() function to process httr2_response objects.","code":""}]