feat: Add new LLM model configurations #14
- Add DeepSeek V3.1 with 128k context support
- Add Claude series models (3.5 Sonnet, 3.7 Sonnet, 4.0 Sonnet, 4.0 Opus, 4.1 Opus) with 200k context
- Add Qwen3 series models (235B-A22B with 128k, Max Preview with 256k context)
- Add Grok Code Fast 1 with 256k context for coding scenarios
- Update model position order in _position.yaml
- All models support agent-thought, tool-call, multi-tool-call, and stream-tool-call features
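Given the feature and context-size list above, each new model presumably gets a YAML file following the shape visible in this PR's diff hunks (`features`, `model_properties`, `mode`, `context_size`). The snippet below is a sketch only; the model identifier, `label`, and `model_type` fields are assumptions, not taken from the PR:

```yaml
# Sketch of one new model configuration — illustrative, not the actual file.
# Only features/model_properties/mode/context_size appear in the PR's hunks;
# the remaining field names are assumed from typical Dify model YAMLs.
model: deepseek-v3.1          # hypothetical identifier
label:
  en_US: DeepSeek V3.1
model_type: llm
features:
  - agent-thought
  - tool-call
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000        # per the PR description for DeepSeek V3.1
```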
- Add repository information section with GitHub URL and installation guidance
- Update AI model list with newly added models (Claude 4.x, DeepSeek V3.1, Qwen3, Grok Code)
- Clarify marketplace limitations and multi-package support
- Enhance installation instructions with clear package type explanations
- Add project architecture and CI/CD workflow documentation
- Improve configuration sections with usage context for different installation methods
- Add prominent repository URL references so users can find the complete functionality
PR Review Complete ✅

📋 Review Summary

This PR adds support for 9 new LLM models and significantly improves the README documentation. Overall, the changes are well-structured and consistent, but several areas need attention.

🔍 Detailed Findings

✅ Positive Aspects
Summary of Changes
Hello @yinxulai, I'm Gemini Code Assist^1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request extends the Dify plugin by introducing a set of new large language models to strengthen its AI inference capabilities. It also improves the documentation, giving users clearer installation guidance and package-type explanations so they can choose and install the fully featured plugin that matches their needs.

Highlights

- New LLM configurations: Adds support for several of the latest large language models (LLMs), including the Claude 4.x series, DeepSeek V3.1, the Qwen3 series, and Grok Code.
- README updates: Updates the README in detail to reflect the newly supported models and clarifies the plugin's installation options and the functional differences between package types.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
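For orientation, such a repository-level configuration might look like the sketch below. The file name follows the `.gemini/` convention mentioned above, but the specific keys are assumptions; consult the linked Gemini Code Assist documentation for the actual schema:

```yaml
# .gemini/config.yaml — illustrative only; key names are assumptions,
# not confirmed against the official schema.
have_fun: false                          # assumed toggle for playful summaries
code_review:
  comment_severity_threshold: MEDIUM     # assumed minimum severity to report
```

A style guide would go alongside it (e.g. a `.gemini/styleguide.md` describing the project's review conventions).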
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces new LLM model configurations and updates the README to reflect these changes, including installation instructions and supported models. The addition of new model configurations in YAML format and the corresponding updates in the README provide users with more options and clear instructions for using the plugin.
Eight inline review comments were anchored on the same `model_properties` hunk of the new model YAML files; the hunks differ only in `context_size` (200000 for the five Claude models; 128000 and 256000 for the remaining models, per the PR description):

```yaml
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 200000   # 128000 / 256000 in the other model files
```
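The PR description also mentions updating the model order in `_position.yaml`. In Dify plugins this file is typically a plain YAML list of model identifiers; the ordering and names below are a sketch, not the actual file contents:

```yaml
# _position.yaml — illustrative ordering sketch; the real identifiers
# and order come from the plugin's model directory, not this PR text.
- deepseek-v3.1
- claude-3.5-sonnet
- claude-4.1-opus
- qwen3-235b-a22b
- qwen3-max-preview
- grok-code-fast-1
```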
The README description hunk (translated from Chinese):

```diff
 # Qiniu Cloud Dify Plugin

-The official Qiniu Cloud Dify plugin, providing comprehensive AI inference and cloud storage management capabilities for the Dify platform.
+The official Qiniu Cloud Dify plugin, providing comprehensive AI inference and cloud storage management capabilities for the Dify platform. Supports the latest AI models, including the Claude 4.x series, DeepSeek V3.1, the Qwen3 series, and Grok Code, and offers flexible package-type choices.
```
The README model-list hunk (translated from Chinese):

```diff
 - **OpenAI open-source series**: GPT-OSS-120b, GPT-OSS-20b
-- **DeepSeek series**: deepseek-r1, deepseek-v3
+- **DeepSeek series**: DeepSeek-R1, DeepSeek-V3, DeepSeek-V3.1 (128k context)
```