Version 1.0.0 - Enhanced Security & Multi-LLM Capabilities
chatAI4R, Chat-based Interactive Artificial Intelligence for R, is an R package designed to integrate the OpenAI API and other APIs for artificial intelligence (AI) applications. The package leverages large language model (LLM)-based AI techniques, enabling efficient knowledge discovery and data analysis. chatAI4R provides basic R functions for using LLMs, together with a set of R functions that support prompt creation. LLMs allow us to extend the world of R. Additionally, I strongly believe that LLMs are becoming so generalized that "Are you searching Google?" is likely to evolve into "Are you LLMing?".
chatAI4R is an experimental project aimed at developing and implementing various LLM applications in R. Furthermore, the package is under continuous development with a focus on extending its capabilities for bioinformatics analysis.
Multi-API AI Integration with R (v1.0.0 Features)
- OpenAI API (ChatGPT, GPT-4, text embeddings, vision) with Enhanced Security
- Google Gemini API (Gemini models, search grounding) with Enhanced Security
- Replicate API (Llama, other open-source models) with Enhanced Security
- Dify API (Workflow-based AI applications) with Enhanced Security
- NEW: io.net API (Multi-LLM parallel execution, 23+ models)
- DeepL API (Professional translation)
Security Enhancements (v1.0.0)
- HTTP Status Validation: All API functions now validate HTTP response codes
- Safe Data Access: Null-safe nested JSON parsing across all functions
- Enhanced Error Handling: Comprehensive error messages without credential exposure
- Input Validation: Standardized parameter validation using assertthat patterns (see the sketch after this list)
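As an illustration only, the sketch below shows the kind of pattern these points describe: input validation with assertthat, HTTP status checking, and null-safe access to the parsed JSON. It is a generic, hypothetical example built on the httr and assertthat packages, not the actual internal code of chatAI4R.
# Hypothetical sketch of the validation pattern (not chatAI4R internals)
library(httr)
library(assertthat)
safe_chat_request <- function(prompt, api_key = Sys.getenv("OPENAI_API_KEY")) {
  # Input validation: non-empty prompt and API key
  assert_that(is.string(prompt), nchar(prompt) > 0)
  assert_that(is.string(api_key), nchar(api_key) > 0)
  res <- POST(
    url = "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", api_key)),
    content_type_json(),
    body = list(model = "gpt-4o-mini",
                messages = list(list(role = "user", content = prompt))),
    encode = "json"
  )
  # HTTP status validation: stop on failure without echoing the API key
  if (status_code(res) != 200) {
    stop("API request failed with HTTP status ", status_code(res))
  }
  # Null-safe access to the nested JSON response
  parsed <- content(res, as = "parsed")
  reply <- tryCatch(parsed$choices[[1]]$message$content, error = function(e) NULL)
  if (is.null(reply)) stop("Unexpected response structure from the API.")
  reply
}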
4-Layer Architecture for Progressive AI Capabilities
- Core Layer: Direct API access and basic utilities
- Usage/Task Layer: Conversation management, text processing, proofreading
- Workflow Layer: Multi-bot systems, R package development automation
- Expertise Layer: Advanced data mining, pattern recognition, expert analysis
LLM-assisted R Package Development
- Complete package design and architecture planning
- Automated R function creation with documentation (Updated: autocreateFunction4R now uses the modern chat4R API)
- Code optimization and error analysis
- Professional text proofreading and enhancement
Advanced Data Analysis and Knowledge Discovery
- Multi-domain statistical analysis interpretation (13+ domains)
- Scientific literature processing and knowledge extraction
- Web data mining and intelligent summarization
- Expert-level discussion simulation and peer review processes
The functionality for interlanguage translation using DeepL has been separated into the 'deepRstudio' package. Functions related to text-to-image generation have been separated into the 'stableDiffusion4R' package.
# CRAN-version installation
install.packages("chatAI4R")
library(chatAI4R)
# Dev-version installation
devtools::install_github("kumeS/chatAI4R")
library(chatAI4R)
# Release v0.2.3
devtools::install_github("kumeS/chatAI4R", ref = "v0.2.3")
library(chatAI4R)
# For macOS, installation from source
system("wget https://github.com/kumeS/chatAI4R/archive/refs/tags/v0.2.3.tar.gz")
#or system("wget https://github.com/kumeS/chatAI4R/archive/refs/tags/v0.2.3.tar.gz --no-check-certificate")
system("R CMD INSTALL v0.2.3.tar.gz")
chatAI4R supports multiple AI APIs. Configure the APIs you want to use:
Register on the OpenAI website and obtain your API key.
# Set your OpenAI API key (required)
Sys.setenv(OPENAI_API_KEY = "sk-your-openai-api-key")
# Google Gemini API (for gemini4R, geminiGrounding4R)
Sys.setenv(GoogleGemini_API_KEY = "your-gemini-api-key")
# Replicate API (for replicatellmAPI4R)
Sys.setenv(Replicate_API_KEY = "your-replicate-api-key")
# Dify API (for DifyChat4R)
Sys.setenv(DIFY_API_KEY = "your-dify-api-key")
# DeepL API (for discussion_flow functions with translation)
Sys.setenv(DeepL_API_KEY = "your-deepl-api-key")
# io.net API (for multiLLMviaionet functions)
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")
Create an .Rprofile file in your home directory and add your API keys:
# Create a file
file.create("~/.Rprofile")
# Add all your API keys to the file
cat('
# chatAI4R API Keys Configuration
Sys.setenv(OPENAI_API_KEY = "sk-your-openai-api-key")
Sys.setenv(GoogleGemini_API_KEY = "your-gemini-api-key")
Sys.setenv(Replicate_API_KEY = "your-replicate-api-key")
Sys.setenv(DIFY_API_KEY = "your-dify-api-key")
Sys.setenv(DeepL_API_KEY = "your-deepl-api-key")
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")
', file = "~/.Rprofile", append = TRUE)
# [macOS] Open the file and edit it
system("open ~/.Rprofile")
Note: Please be aware of newline character inconsistencies across different operating systems.
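After restarting R, a quick sanity check (a minimal sketch using only base R) confirms that the keys were picked up from ~/.Rprofile without printing their values:
# Check which API keys are set, without printing the key values
keys <- c("OPENAI_API_KEY", "GoogleGemini_API_KEY", "Replicate_API_KEY",
          "DIFY_API_KEY", "DeepL_API_KEY", "IONET_API_KEY")
sapply(keys, function(k) nzchar(Sys.getenv(k)))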
Multi-LLM Usage Examples (see function documentation)
- AI-based chatting loaded with highly technical documents (RIKEN press release text)
File | Description | Prompt |
---|---|---|
create_flowcharts | A prompt to create a flowchart | Prompt |
create_roxygen2_v01 | A prompt to create a roxygen2 description | Prompt |
create_roxygen2_v02 | A prompt to create a roxygen2 description | Prompt |
edit_DESCRIPTION | A prompt to edit DESCRIPTION | Prompt |
Img2txt_prompt_v01 | A prompt to create an i2i prompt | Prompt |
Img2txt_prompt_v02 | A prompt to create an i2i prompt | Prompt |
The chatAI4R package is structured as four layers of functions that provide increasingly sophisticated AI capabilities, from basic API access to expert-level data mining and analysis.
Access to LLM API / Multi-APIs
Core functions provide direct access to multiple AI APIs, enabling basic AI operations.
Function | Description | API Service | Script | Flowchart |
---|---|---|---|---|
chat4R | Chat with GPT models using OpenAI API (One-shot) | OpenAI | Script | Flowchart |
chat4R_history | Use chat history for OpenAI's GPT model | OpenAI | Script | Flowchart |
chat4R_streaming | Chat with GPT models using streaming response | OpenAI | Script | |
chat4Rv2 | Enhanced chat interface with system prompt support | OpenAI | Script | |
textEmbedding | Text Embedding from OpenAI Embeddings API (1536-dimensional) | OpenAI | Script | Flowchart |
vision4R | Advanced image analysis and interpretation | OpenAI | Script | |
gemini4R | Chat with Google Gemini AI models | Google Gemini | Script | |
replicatellmAPI4R | Access various LLM models through Replicate platform | Replicate | Script | |
DifyChat4R | Chat and completion endpoints through Dify platform | Dify | Script | |
multiLLMviaionet | Execute multiple LLM models simultaneously via io.net API | io.net | Script | |
list_ionet_models | List available LLM models on io.net platform | io.net | Script | |
multiLLM_random10 | Quick execution of 10 randomly selected models via io.net | io.net | Script | |
multiLLM_random5 | Quick execution of 5 randomly selected models via io.net | io.net | Script | |
completions4R | Text generation via the OpenAI Completions API (deprecated) | OpenAI | Script | Flowchart |
Utility Functions (Non-API)
Function | Description | Script |
---|---|---|
slow_print_v2 | Slowly print text with typewriter effect | Script |
ngsub | Remove extra spaces and newline characters | Script |
removeQuotations | Remove all types of quotations from text | Script |
speakInEN | Text-to-speech functionality for English | Script |
speakInJA | Text-to-speech functionality for Japanese | Script |
speakInJA_v2 | Enhanced text-to-speech functionality for Japanese | Script |
Execution of simple LLM tasks: Chat memory, translation, proofreading, etc.
These functions combine core APIs to perform specific tasks and maintain conversation context.
Function | Description | Script | Flowchart |
---|---|---|---|
conversation4R | Manage conversation with persistent history | Script | Flowchart |
TextSummary | Summarize long texts with intelligent chunking | Script | |
TextSummaryAsBullet | Summarize selected text into bullet points | Script | |
revisedText | Revision for scientific text with AI assistance | Script | |
proofreadEnglishText | Proofread English text via RStudio API | Script | |
proofreadText | Proofread text with grammar and style correction | Script | |
enrichTextContent | Enrich text content with additional information | Script | |
convertBullet2Sentence | Convert bullet points to sentences | Script | |
LLM Workflow, LLM Bots, R Packaging Supports
Advanced workflow functions that orchestrate multiple AI operations and support complex development tasks.
Function | Description | Script |
---|---|---|
discussion_flow_v1 | Multi-agent expert system simulation (3 roles) | Script |
discussion_flow_v2 | Enhanced multi-bot conversation system | Script |
createSpecifications4R | Create detailed specifications for R functions | Script |
createRfunction | Create R functions from selected text or clipboard | Script |
createRcode | Generate R code from clipboard content | Script |
convertRscript2Function | Convert R script to structured R function | Script |
addRoxygenDescription | Add Roxygen documentation to R functions | Script |
OptimizeRcode | Optimize and complete R code | Script |
RcodeImprovements | Suggest improvements for R code from clipboard | Script |
designPackage | Design complete R packages | Script |
addCommentCode | Add intelligent comments to R code (supports OpenAI & Gemini) | Script |
checkErrorDet | Analyze and explain R error messages | Script |
checkErrorDet_JP | Analyze and explain R error messages (Japanese) | Script |
autocreateFunction4R | UPDATED: Generate and improve R functions (now uses chat4R) | Script |
supportIdeaGeneration | Support idea generation from text input | Script |
createEBAYdes | Create professional eBay product descriptions | Script |
createImagePrompt_v1 | Create image generation prompts | Script |
createImagePrompt_v2 | Enhanced image generation prompts | Script |
Data mining & Advanced Analysis
Expert-level functions that provide sophisticated data analysis, pattern recognition, and knowledge extraction capabilities.
Function | Description | Script |
---|---|---|
interpretResult | Interpret analysis results across 13 analytical domains | Script |
extractKeywords | Extract key concepts and terms from complex text | Script |
convertScientificLiterature | Convert text to scientific literature format | Script |
summaryWebScrapingText | Web scraping with intelligent summarization | Script |
geminiGrounding4R | Advanced AI with Google Search grounding | Script |
chatAI4pdf | Intelligent PDF document analysis and summarization | Script |
textFileInput4ai | Large-scale text file analysis with chunking | Script |
searchFunction | Expert-level R function discovery and recommendation | Script |
- get_riken_pressrelease_urls: Get URLs of RIKEN Press Releases
- riken_pressrelease_text_jpn: Extract text from RIKEN press-release (Japanese)
- riken_pressrelease_textEmbedding: Extract text and perform text embedding from RIKEN press-release
All runs using the chat4R function are one-shot chats: conversation history is not carried over to the next call.
#API: "https://api.openai.com/v1/chat/completions"
chat4R("Hello")
# DEPRECATED: OpenAI completions API (scheduled for removal)
# completions4R("Hello") # Use chat4R() instead
Executions using the conversation4R function will keep a history of conversations. The number of previous messages to keep in memory defaults to 2.
#First shot
conversation4R("Hello")
#Second shot
conversation4R("Hello")
textEmbedding converts input text to a numeric vector. The text-embedding-ada-002 model returns a vector of 1536 floats.
#Embedding
textEmbedding("Hello, world!")
Execute multiple LLM models simultaneously for comprehensive AI responses across 23+ cutting-edge models.
# Set io.net API key
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")
# Basic multi-LLM execution with latest 2025 models
result <- multiLLMviaionet(
prompt = "Explain quantum computing",
models = c("deepseek-ai/DeepSeek-R1-0528", # Latest reasoning model
"meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8", # Llama 4 multimodal
"Qwen/Qwen3-235B-A22B-FP8", # Latest Qwen3 MoE
"mistralai/Magistral-Small-2506", # Advanced multilingual
"microsoft/phi-4") # Compact powerhouse
)
# Quick random 10-model comparison (balanced across families)
result <- multiLLM_random10("What is artificial intelligence?")
# Quick random 5-model comparison (for faster testing)
result <- multiLLM_random5("Write a Python function")
# Explore available models (23+ total as of 2025)
all_models <- list_ionet_models()
print(paste("Total models available:", length(all_models)))
# Browse by category
llama_models <- list_ionet_models("llama") # Meta Llama series (3 models)
deepseek_models <- list_ionet_models("deepseek") # DeepSeek reasoning (4 models)
qwen_models <- list_ionet_models("qwen") # Alibaba Qwen series (2 models)
mistral_models <- list_ionet_models("mistral") # Mistral AI series (4 models)
compact_models <- list_ionet_models("compact") # Efficient models (4 models)
reasoning_models <- list_ionet_models("reasoning") # Math/logic specialists (2 models)
# Detailed model information
detailed_info <- list_ionet_models(detailed = TRUE)
View(detailed_info)
# Advanced usage with custom parameters
result <- multiLLMviaionet(
prompt = "Design a machine learning pipeline for time series forecasting",
max_models = 8,
random_selection = TRUE,
temperature = 0.3, # More deterministic for technical tasks
max_tokens = 2000, # Longer responses
streaming = FALSE, # Wait for complete responses
parallel = TRUE, # True async execution
verbose = TRUE # Monitor progress
)
# Access comprehensive results
print(result$summary) # Execution statistics
lapply(result$results, function(x) { # Individual model responses
if(x$success) cat(x$model, ":", substr(x$response, 1, 200), "...\n\n")
})
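Building on the result structure shown above (result$results, where each element carries $model, $success, and $response), here is a minimal sketch for collecting the successful responses into a data frame:
# Collect the successful model responses into a data frame for inspection
ok <- Filter(function(x) isTRUE(x$success), result$results)
responses <- data.frame(
  model    = vapply(ok, function(x) x$model, character(1)),
  response = vapply(ok, function(x) x$response, character(1)),
  stringsAsFactors = FALSE
)
head(responses)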
Featured Models (2025):
- DeepSeek-R1-0528: Latest reasoning model with o1-like capabilities
- Llama-4-Maverick: Multimodal model with 128 experts architecture
- Qwen3-235B: Advanced MoE with 235B parameters
- Magistral-Small-2506: European AI with multilingual support
- Phi-4: Microsoft's efficient 14B parameter model
- Critical Security Fixes: Resolved HTTP status validation issues across all API functions
- Safe Data Access: Implemented null-safe nested JSON parsing to prevent runtime crashes
- Enhanced Error Handling: Comprehensive error messages without API key exposure
- Input Validation: Standardized parameter validation using assertthat patterns
- Security Score: Improved from C+ (60/100) to A- (85/100)
- io.net Integration: Execute 23+ models simultaneously via io.net API
- Advanced Model Selection: Random balanced selection across model families
- True Async Processing: Parallel execution using future package
- Comprehensive Testing: Enhanced test suite with 40+ functions tested
- 54 Functions: Complete AI toolkit for R with enhanced coverage
- Enhanced Documentation: Comprehensive examples and usage patterns
- CRAN Ready: Production-quality codebase with consistent patterns
- 25 RStudio Addins: Integrated development workflow
- Core Layer: 19 functions for direct API access and utilities
- Usage/Task Layer: 8 functions for conversation management and text processing
- Workflow Layer: 18 functions for R package development and content creation
- Expertise Layer: 8 functions for advanced data analysis and knowledge mining
Copyright (c) 2025 Satoshi Kume. Released under the Artistic License 2.0.
Kume S. (2025) chatAI4R: Chat-based Interactive Artificial Intelligence for R. Version 1.0.0.
#BibTeX
@misc{Kume2025chatAI4R,
title={chatAI4R: Chat-Based Interactive Artificial Intelligence for R},
author={Kume, Satoshi},
year={2025},
version={1.0.0},
publisher={GitHub},
note={R Package with Multi-LLM Capabilities},
howpublished={\url{https://github.com/kumeS/chatAI4R}},
}
- Satoshi Kume