- added support for structured output
- added make_query() function to simplify annotation (see the usage sketch below)
- added more output formats to query()/chat()
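A minimal sketch of how these additions might be used together. The schema-style `format` argument, the `make_query()` arguments shown, and the `output` value are assumptions about the interface, not a definitive reference.

```r
library(rollama)

# structured output: constrain the answer to a JSON schema
# (assuming a schema can be passed as a list via the format argument)
schema <- list(
  type = "object",
  properties = list(sentiment = list(type = "string")),
  required = list("sentiment")
)
query("Classify the sentiment of: 'The battery died after one day.'",
      format = schema)

# make_query() builds a batch of annotation prompts from texts and a
# shared question (argument names here are illustrative)
queries <- make_query(
  text = c("The food was cold.", "Great service!"),
  prompt = "Is the sentiment positive or negative?"
)

# hand the batch to query() and choose an output format
query(queries, output = "text")
```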
- improved performance of embed_text()
- improved performance of query() for multiple queries
- changed default model to llama3.1
- added option to use multiple servers (see the example below)
- pull_model() gained a verbose option
- improved annotation vignette
- added vignette on how to use Hugging Face Hub models
- some bug fixes
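A brief sketch of the server- and model-related changes. It assumes that several server addresses can be set as a vector via the rollama_server option and that Hugging Face Hub models are pulled via an hf.co/ address; the model names are illustrative.

```r
library(rollama)

# the default model is now llama3.1; verbose = TRUE reports progress
pull_model("llama3.1", verbose = TRUE)

# spread requests over several Ollama servers
# (assuming a vector of addresses is accepted by the rollama_server option)
options(rollama_server = c("http://localhost:11434",
                           "http://192.168.2.45:11434"))

# pull a GGUF model directly from Hugging Face Hub (address illustrative)
pull_model("hf.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF")
```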
- added check_model_installed() function
- changed default model to llama3
- added option to query several models at once (see the sketch below)
- dedicated embedding models are available now (see vignette("text-embedding", "rollama"))
- error handling and bug fixes
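A short sketch of these features. It assumes that several models are requested by passing a vector to the model argument; the embedding model name is only an example.

```r
library(rollama)

# check whether a model is installed locally before using it
check_model_installed("llama3")

# query several models at once by passing a vector of model names
query("Why is the sky blue?", model = c("llama3", "mistral"))

# embed text with a dedicated embedding model
embed_text("rollama wraps the Ollama API", model = "nomic-embed-text")
```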
- Initial CRAN submission.