Merge pull request #120 from rainoffallingstar/main
Add support for OpenAI-compatible models from 01.AI, Moonshot, Qwen, and GLM
edgararuiz authored Sep 2, 2024
2 parents 1b2fc30 + a2774d5 commit d25262c
Showing 4 changed files with 132 additions and 0 deletions.
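
All four files reuse chattr's existing "OpenAI - Chat Completions" provider, so they differ only in endpoint, label, and model name. A minimal R sketch of how one of them could be activated, assuming chattr_use() resolves the new configs by their file names ("glm4", "moonshot8k", "qwen", "yi") and that the provider reads the API key from the OPENAI_API_KEY environment variable:

library(chattr)

# Assumption: these OpenAI-compatible endpoints accept the key supplied via
# OPENAI_API_KEY; "your-api-key" is a hypothetical placeholder.
Sys.setenv(OPENAI_API_KEY = "your-api-key")

# Pick one of the new configs by its file name in inst/configs/
chattr_use("glm4")    # or "moonshot8k", "qwen", "yi"

# Send a prompt through the selected model
chattr("Write an R function that summarises every numeric column of a data frame")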
33 changes: 33 additions & 0 deletions inst/configs/glm4.yml
@@ -0,0 +1,33 @@
default:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
  provider: OpenAI - Chat Completions
  path: https://open.bigmodel.cn/api/paas/v4/chat/completions
  label: glm-4-0520 (GLM)
  model: glm-4-0520
  max_data_files: 0
  max_data_frames: 0
  include_doc_contents: FALSE
  include_history: TRUE
  system_msg: You are a helpful coding assistant
  model_arguments:
    temperature: 0.03
    max_tokens: 1000
    stream: TRUE
chat:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For code output, use RMarkdown code chunks
    Avoid all code chunk options
console:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
script:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
33 changes: 33 additions & 0 deletions inst/configs/moonshot8k.yml
@@ -0,0 +1,33 @@
default:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
  provider: OpenAI - Chat Completions
  path: https://api.moonshot.cn/v1/chat/completions
  label: moonshot-v1-8k (Moonshot AI)
  model: moonshot-v1-8k
  max_data_files: 0
  max_data_frames: 0
  include_doc_contents: FALSE
  include_history: TRUE
  system_msg: You are a helpful coding assistant
  model_arguments:
    temperature: 0.03
    max_tokens: 1000
    stream: TRUE
chat:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For code output, use RMarkdown code chunks
    Avoid all code chunk options
console:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
script:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
33 changes: 33 additions & 0 deletions inst/configs/qwen.yml
@@ -0,0 +1,33 @@
default:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
  provider: OpenAI - Chat Completions
  path: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
  label: qwen-turbo (alibaba)
  model: qwen-turbo
  max_data_files: 0
  max_data_frames: 0
  include_doc_contents: FALSE
  include_history: TRUE
  system_msg: You are a helpful coding assistant
  model_arguments:
    temperature: 0.03
    max_tokens: 1000
    stream: TRUE
chat:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For code output, use RMarkdown code chunks
    Avoid all code chunk options
console:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
script:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
33 changes: 33 additions & 0 deletions inst/configs/yi.yml
@@ -0,0 +1,33 @@
default:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
  provider: OpenAI - Chat Completions
  path: https://api.lingyiwanwu.com/v1/chat/completions
  label: yi-34b-chat-0205 (01.AI)
  model: yi-34b-chat-0205
  max_data_files: 0
  max_data_frames: 0
  include_doc_contents: FALSE
  include_history: TRUE
  system_msg: You are a helpful coding assistant
  model_arguments:
    temperature: 0.03
    max_tokens: 1000
    stream: TRUE
chat:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For code output, use RMarkdown code chunks
    Avoid all code chunk options
console:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
script:
  prompt: |
    {readLines(system.file('prompt/base.txt', package = 'chattr'))}
    For any line that is not code, prefix with a: #
    Keep each line of explanations to no more than 80 characters
    DO NOT use Markdown for the code
