Merged
Binary file modified _assets/plugin_preview.png
Binary file removed _assets/qiniu_ai.png
2 changes: 1 addition & 1 deletion manifest.yaml
@@ -35,4 +35,4 @@ resource:
tool:
enabled: false
type: plugin
-version: 0.0.2
+version: 0.0.3
71 changes: 71 additions & 0 deletions models/llm/glm45-air.yaml
@@ -0,0 +1,71 @@
model: glm-4.5-air
label:
  zh_Hans: glm-4.5-air
  en_US: glm-4.5-air
model_type: llm
features:
  - agent-thought
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 8192
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: Specifies the format that the model must output.
    required: false
    options:
      - text
      - json_object
  - name: enable_thinking
    required: false
    type: boolean
    default: true
    label:
      zh_Hans: 思考模式
      en_US: Thinking mode
    help:
      zh_Hans: 是否开启思考模式。
      en_US: Whether to enable thinking mode.
  - name: thinking_budget
    required: false
    type: int
    default: 512
    min: 1
    max: 8192
    label:
      zh_Hans: 思考长度限制
      en_US: Thinking budget
    help:
      zh_Hans: 思考过程的最大长度,只在思考模式为 true 时生效。
      en_US: The maximum length of the thinking process; only effective when thinking mode is true.
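The `parameter_rules` above declare, for each tunable parameter, a type plus optional `default`, `min`, and `max` bounds. A minimal sketch of how a host application might apply such rules — filling defaults and clamping integer values into range — is shown below; `resolve_params` and the inlined rule dicts are illustrative assumptions, not part of the actual `dify_plugin` API.

```python
# Illustrative only: resolve user-supplied parameters against rules like the
# ones declared in glm45-air.yaml (defaults, min/max clamping). The rule
# dicts mirror a subset of the YAML; `resolve_params` is a hypothetical
# helper, not taken from dify_plugin.

RULES = [
    {"name": "max_tokens", "type": "int", "default": 512, "min": 1, "max": 8192},
    {"name": "enable_thinking", "type": "boolean", "default": True},
    {"name": "thinking_budget", "type": "int", "default": 512, "min": 1, "max": 8192},
]

def resolve_params(rules, user_params):
    """Fill in defaults and clamp int values to each rule's [min, max]."""
    resolved = {}
    for rule in rules:
        name = rule["name"]
        value = user_params.get(name, rule.get("default"))
        if value is None:
            continue  # optional parameter: no user value, no default
        if rule.get("type") == "int":
            value = max(rule.get("min", value), min(rule.get("max", value), value))
        resolved[name] = value
    return resolved

print(resolve_params(RULES, {"max_tokens": 999999, "thinking_budget": 0}))
# → {'max_tokens': 8192, 'enable_thinking': True, 'thinking_budget': 1}
```

Note how out-of-range values are clamped rather than rejected; a real host might instead raise a validation error.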
71 changes: 71 additions & 0 deletions models/llm/glm45.yaml
@@ -0,0 +1,71 @@
model: glm-4.5
label:
  zh_Hans: glm-4.5
  en_US: glm-4.5
model_type: llm
features:
  - agent-thought
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 8192
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: Specifies the format that the model must output.
    required: false
    options:
      - text
      - json_object
  - name: enable_thinking
    required: false
    type: boolean
    default: true
    label:
      zh_Hans: 思考模式
      en_US: Thinking mode
    help:
      zh_Hans: 是否开启思考模式。
      en_US: Whether to enable thinking mode.
  - name: thinking_budget
    required: false
    type: int
    default: 512
    min: 1
    max: 8192
    label:
      zh_Hans: 思考长度限制
      en_US: Thinking budget
    help:
      zh_Hans: 思考过程的最大长度,只在思考模式为 true 时生效。
      en_US: The maximum length of the thinking process; only effective when thinking mode is true.
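The `enable_thinking` and `thinking_budget` parameters declared for glm-4.5 would ultimately be forwarded to the inference endpoint. A sketch of how resolved parameters might be mapped onto an OpenAI-style chat-completions payload follows; the exact field names Qiniu's endpoint expects for thinking mode are an assumption here, not taken from this diff.

```python
# Sketch: map the glm-4.5 parameters above onto an OpenAI-style
# chat-completions request body. The `enable_thinking` / `thinking_budget`
# top-level fields are hypothetical vendor extensions chosen to mirror the
# YAML parameter names; a real plugin would use whatever the provider API
# actually defines.

def build_payload(prompt, model="glm-4.5", max_tokens=512,
                  enable_thinking=True, thinking_budget=512):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    if enable_thinking:
        # Only sent when thinking mode is on, matching the YAML note that
        # thinking_budget is "only effective when thinking mode is true".
        payload["enable_thinking"] = True
        payload["thinking_budget"] = thinking_budget
    return payload

payload = build_payload("Explain YAML anchors.", thinking_budget=1024)
```

The conditional mirrors the YAML help text: the budget is omitted entirely when thinking mode is disabled, rather than sent with a dead value.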
4 changes: 2 additions & 2 deletions provider/qiniu.yaml
@@ -3,8 +3,8 @@ label:
en_US: Qiniu Cloud
zh_Hans: 七牛云
description:
-en_US: Official Qiniu Dify plugin providing AI inference services, supporting models such as deepseek-r1, deepseek-v3, and more.
-zh_Hans: 七牛云官方 Dify 插件,提供 AI 推理服务,支持例如 deepseek-r1、deepseek-v3 等模型。
+en_US: Official Qiniu Dify plugin providing AI inference services, supporting models such as glm-4.5, deepseek-r1, deepseek-v3, and more.
+zh_Hans: 七牛云官方 Dify 插件,提供 AI 推理服务,支持例如 glm-4.5、deepseek-r1、deepseek-v3 等模型。
icon_large:
en_US: icon_l_en.svg
icon_small:
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1 +1 @@
-dify_plugin>=0.3.0,<0.4.0
+dify_plugin>=0.3.0,<0.5.0