
feat: [#5714] Support GLM #5741

Merged: 4 commits, Nov 1, 2024

Conversation

DDMeaqua
Contributor

@DDMeaqua DDMeaqua commented Oct 30, 2024

💻 Change Type

  • feat
  • fix
  • refactor
  • perf
  • style
  • test
  • docs
  • ci
  • chore
  • build

🔀 Description of Change

Support the GLM model #5714

📝 Additional Information


Summary by CodeRabbit


Release Notes

  • New Features

    • Introduced support for the ChatGLM model provider, enabling users to configure and interact with the ChatGLM API.
    • Added new settings in the application for ChatGLM API Key and Endpoint.
    • Enhanced API client to support ChatGLM model interactions, including new methods for chat and request handling.
    • Integrated ChatGLM into the application's access store for better management of API keys and URLs.
  • Documentation

    • Updated localization files to include ChatGLM settings in both English and Chinese.
  • Bug Fixes

    • Improved error handling and validation for ChatGLM API interactions.

These updates enhance the application's capabilities, providing users with new options for utilizing the ChatGLM model effectively.


vercel bot commented Oct 30, 2024

@DDMeaqua is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.

@DDMeaqua DDMeaqua requested review from lloydzhou and Dogtiti October 30, 2024 11:27
Contributor

coderabbitai bot commented Oct 30, 2024

Walkthrough

This pull request introduces support for the ChatGLM model provider across multiple components of the application. Key changes include the addition of a handler for ChatGLM requests in the API routes, updates to the authentication logic to accommodate the ChatGLM API key, and the creation of a dedicated ChatGLM API handler. Additionally, modifications to the settings component allow users to configure ChatGLM-specific parameters, and localization files are updated to include ChatGLM settings. The changes ensure that the application can effectively interact with the ChatGLM model alongside existing providers.

Changes

File Change Summary
app/api/[provider]/[...path]/route.ts Added handler for ChatGLM provider in handle function, including a new case for ApiPath.ChatGLM.
app/api/auth.ts Modified auth function to include a case for ModelProvider.ChatGLM, assigning systemApiKey to serverConfig.chatglmApiKey.
app/api/glm.ts Introduced new file for ChatGLM API handling, including request processing, authentication, error handling, and response formatting.
app/client/api.ts Integrated ChatGLMApi class into ClientApi, adding support for ModelProvider.ChatGLM and updating header retrieval logic.
app/client/platforms/glm.ts Created ChatGLMApi class implementing LLMApi, with methods for path construction, message extraction, and chat processing.
app/components/settings.tsx Added configuration section for ChatGLM in settings, including input fields for endpoint and API key.
app/config/server.ts Updated getServerSideConfig to include new environment variables for ChatGLM and return related configuration settings.
app/constant.ts Added constants and enumerations for ChatGLM, including CHATGLM_BASE_URL, ApiPath.ChatGLM, and updates to ServiceProvider and ModelProvider enums.
app/locales/cn.ts Introduced ChatGLM settings in the Chinese locale file, adding properties for API key and endpoint.
app/locales/en.ts Added ChatGLM settings in the English locale file, including properties for API key and endpoint.
app/store/access.ts Updated access store to include ChatGLM URL and API key, with a new method for validating the ChatGLM API key.
app/utils.ts Modified showPlugins function to include a check for ServiceProvider.ChatGLM.
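The route-level change in the first row of the table is a switch-based dispatch on the provider path. The sketch below is illustrative only: the real route.ts uses the project's ApiPath enum and imported handler modules, so the enum values and return strings here are assumptions, not the actual code.

```typescript
// Hypothetical, simplified version of the provider dispatch described above.
enum ApiPath {
  OpenAI = "/api/openai",
  ChatGLM = "/api/chatglm", // new path added by this PR
}

function dispatch(path: ApiPath): string {
  switch (path) {
    case ApiPath.ChatGLM:
      return "glm-handler"; // in the real code, this invokes the GLM handler with (req, params)
    case ApiPath.OpenAI:
      return "openai-handler";
    default:
      return "default-handler";
  }
}
```

Each new provider adds one case to this switch, which is why several review comments below suggest replacing it with a map or registry.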

Possibly related PRs

  • feat: add getClientApi method #4935: The changes in this PR involve adding a new method to the ClientApi class, which may relate to the handling of different model providers, including ChatGLM.
  • feat: support baidu model #4936: This PR modifies the auth function to support a new model provider, ChatGLM, which is relevant as the main PR also introduces functionality for handling requests related to the ChatGLM provider.
  • feat: qwen #4942: The changes in this PR involve adding a new model provider, which may relate to the main PR's addition of a handler for the ChatGLM provider.
  • [Feature] Stable Diffusion #4983: This PR introduces a new service provider, which may be relevant as it expands the functionality of the application in a manner similar to the main PR's addition of the ChatGLM provider.
  • add moonshot api #5156: The changes in this PR enhance the auth function to support a new model provider, which aligns with the main PR's focus on adding support for the ChatGLM provider.
  • Feature/tencent #5157: This PR introduces a new service provider, which is relevant as it expands the functionality of the application in a manner similar to the main PR's addition of the ChatGLM provider.
  • xAi support  #5704: This PR introduces support for a new provider, XAI, which is relevant as it expands the functionality of the application in a manner similar to the main PR's addition of the ChatGLM provider.
  • feat: add code-level configuration for Alibaba-family models #5740: This PR adds a new proxy configuration for the Alibaba Cloud API, which may relate to the main PR's focus on enhancing API handling capabilities, similar to the addition of the ChatGLM provider.

Suggested labels

enhancement, planned

Suggested reviewers

  • Dogtiti

Poem

🐰 In the meadow, new paths we weave,
For ChatGLM's magic, we now believe.
With keys and endpoints, we take our flight,
In the world of models, we shine so bright!
Hopping along, with joy we play,
Embracing the changes, come what may! 🌼


Contributor

Your build has completed!

Preview deployment

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (14)
app/api/auth.ts (1)

98-100: Consider refactoring provider configuration management.

As the number of model providers grows, the switch statement becomes increasingly complex. Consider extracting provider-specific configurations into a separate module or using a configuration map for better maintainability.

Example approach:

// providers/config.ts
const PROVIDER_CONFIGS = {
  [ModelProvider.GLM]: {
    getApiKey: (config) => config.glmApiKey,
  },
  // ... other providers
} as const;

// Then in auth.ts
systemApiKey = PROVIDER_CONFIGS[modelProvider]?.getApiKey(serverConfig);

This would:

  1. Improve maintainability by isolating provider-specific logic
  2. Make it easier to add new providers
  3. Reduce the complexity of the auth function
app/store/access.ts (1)

Line range hint 251-266: Consider documenting GLM fields in migration logic.

While no migration is needed for the new GLM fields (they'll default to empty strings), it might be helpful to document this in the migration logic for future reference.

Add a comment in the migration logic:

  migrate(persistedState, version) {
    if (version < 2) {
      const state = persistedState as {
        token: string;
        openaiApiKey: string;
        azureApiVersion: string;
        googleApiKey: string;
      };
      state.openaiApiKey = state.token;
      state.azureApiVersion = "2023-08-01-preview";
    }
+   // Version 2 added GLM support with optional glmUrl and glmApiKey fields
 
    return persistedState as any;
  },
app/config/server.ts (1)

Line range hint 78-228: Consider refactoring provider configurations for better maintainability.

As the number of supported providers grows, this configuration file is becoming quite large. Consider splitting provider-specific configurations into separate modules that follow a common interface, making it easier to maintain and add new providers.

Example structure:

// types/provider-config.ts
interface ProviderConfig {
  isEnabled: boolean;
  getConfig(): Record<string, unknown>;
}

// config/providers/glm.ts
class GLMConfig implements ProviderConfig {
  isEnabled = !!process.env.GLM_API_KEY;
  getConfig() {
    return {
      glmUrl: process.env.GLM_URL,
      glmApiKey: getApiKey(process.env.GLM_API_KEY),
    };
  }
}
app/client/api.ts (2)

349-350: Consider refactoring provider mapping

While the implementation is correct, consider refactoring the provider mapping to reduce duplication:

Consider creating a mapping object:

const PROVIDER_MODEL_MAP = {
  [ServiceProvider.Google]: ModelProvider.GeminiPro,
  [ServiceProvider.GLM]: ModelProvider.GLM,
  // ... other mappings
} as const;

export function getClientApi(provider: ServiceProvider): ClientApi {
  return new ClientApi(PROVIDER_MODEL_MAP[provider] ?? ModelProvider.GPT);
}

This would:

  1. Reduce switch statement complexity
  2. Make provider-to-model mapping more maintainable
  3. Provide better type safety

Line range hint 24-350: Consider enhancing provider abstraction

While the GLM integration is solid, consider these architectural improvements:

  1. Create an abstract factory for provider instantiation
  2. Define a common interface for provider configurations
  3. Implement a provider registry pattern

This would make adding new providers like GLM more streamlined and reduce the need for switch statements throughout the codebase.
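The registry pattern suggested above can be sketched as follows. All names here (ProviderFactory, registerProvider, createProvider) are illustrative and do not exist in the codebase; this is a minimal shape, not the proposed implementation.

```typescript
// Minimal provider-registry sketch: providers register a factory under an id,
// and call sites look the provider up instead of switching on an enum.
type ProviderFactory = () => { name: string };

const registry = new Map<string, ProviderFactory>();

function registerProvider(id: string, factory: ProviderFactory): void {
  registry.set(id, factory); // later registrations override earlier ones
}

function createProvider(id: string): { name: string } {
  const factory = registry.get(id);
  if (!factory) throw new Error(`Unknown provider: ${id}`);
  return factory();
}

// Registering a hypothetical GLM provider alongside an existing one:
registerProvider("glm", () => ({ name: "ChatGLM" }));
registerProvider("openai", () => ({ name: "OpenAI" }));
```

With this shape, adding a provider touches only its own registration call, rather than every switch statement in the codebase.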

app/constant.ts (2)

233-236: LGTM: GLM configuration is properly structured.

The GLM configuration follows the established pattern for provider configurations.

Consider adding knowledge cutoff dates for GLM models in the KnowledgeCutOffDate record to maintain consistency with other providers:

export const KnowledgeCutOffDate: Record<string, string> = {
  default: "2021-09",
+ "glm-4": "2024-03",
+ "glm-4-plus": "2024-03",
  // ... existing entries
};

389-398: Consider sorting GLM models by capability/version.

While the models list is comprehensive, consider sorting them in a logical order (e.g., by capability or version) to improve maintainability and readability.

Suggested order:

 const glmModels = [
   "glm-4-plus",
-  "glm-4-0520",
   "glm-4",
+  "glm-4-0520",
   "glm-4-air",
   "glm-4-airx",
   "glm-4-long",
-  "glm-4-flashx",
   "glm-4-flash",
+  "glm-4-flashx",
 ];
app/locales/cn.ts (1)

482-485: Consider adding an example endpoint URL.

The SubTitle for the endpoint configuration only shows "样例:" without providing an actual example URL. This differs from other providers that show concrete examples to guide users.

Consider updating the SubTitle to include an example endpoint URL:

-          SubTitle: "样例:",
+          SubTitle: "样例:https://api.zhipuai.cn/v1",
app/locales/en.ts (1)

467-468: Add an example endpoint URL in the SubTitle.

The SubTitle for the Endpoint configuration only shows "Example: " without providing an actual example URL. This differs from other providers that include example endpoints to guide users.

Consider adding a representative endpoint URL, similar to how it's done for other providers. For example:

 Endpoint: {
   Title: "Endpoint Address",
-  SubTitle: "Example: ",
+  SubTitle: "Example: https://api.zhipuai.cn/v1",
 },
app/components/settings.tsx (1)

1238-1275: LGTM! Consider adding input validation.

The GLM configuration component follows the established patterns and best practices, including proper state management, accessibility, and security considerations for sensitive data.

Consider adding input validation for the endpoint URL to ensure it's a valid URL format before saving:

 onChange={(e) => {
+  const url = e.currentTarget.value;
+  if (url && !url.startsWith("http")) {
+    showToast(Locale.Settings.Access.GLM.Endpoint.Invalid);
+    return;
+  }
   accessStore.update(
-    (access) => (access.glmUrl = e.currentTarget.value),
+    (access) => (access.glmUrl = url),
   )
-}
+}}
🧰 Tools
🪛 Biome

[error] 1253-1253: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


[error] 1269-1269: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

app/api/glm.ts (2)

19-19: Avoid Logging Potentially Sensitive Information

Logging request parameters and paths can lead to exposure of sensitive information. Consider removing or sanitizing these log statements to prevent any unintended data leakage.

Apply this diff to remove or modify the log statements:

-  console.log("[GLM Route] params ", params);
...
-  console.log("[Proxy] ", path);
-  console.log("[Base Url]", baseUrl);
+  console.log("[GLM Route] Request received");
+  // Consider logging only non-sensitive information or using debug levels

Also applies to: 57-58


77-78: Avoid Using @ts-ignore; Resolve TypeScript Errors Properly

Using @ts-ignore suppresses TypeScript's type checking, which can hide potential bugs. The duplex property is not standard in the RequestInit interface. Ensure that this property is necessary and consider updating TypeScript definitions or handling the request differently.

If duplex is required for streaming, you might need to adjust your TypeScript configuration or cast fetchOptions to any, though casting should be used cautiously.

-   // @ts-ignore
    duplex: "half",
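One typed alternative to @ts-ignore is a local option type that declares the non-standard field. The interface below is a standalone sketch (in the real code it could extend RequestInit); the assumption, which should be verified against the target runtime, is that Node's fetch accepts duplex: "half" for streaming request bodies even though the DOM typings omit it.

```typescript
// Local option type covering the non-standard `duplex` field, so the
// checker validates it instead of being suppressed with @ts-ignore.
interface StreamingRequestInit {
  method?: string;
  body?: string;
  redirect?: "follow" | "manual" | "error";
  duplex?: "half"; // non-standard; required by some runtimes for streaming bodies
}

const fetchOptions: StreamingRequestInit = {
  method: "POST",
  redirect: "manual",
  duplex: "half", // typed here; no @ts-ignore needed
};
```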
app/client/platforms/glm.ts (2)

57-59: Handle the unimplemented speech method appropriately.

The speech method currently throws an error and is unimplemented. If the GLM API does not support speech synthesis, consider removing this method to avoid potential runtime errors.

You can remove the unimplemented method:

-  speech(options: SpeechOptions): Promise<ArrayBuffer> {
-    throw new Error("Method not implemented.");
-  }

87-87: Replace console.log statements with a proper logging mechanism.

Using console.log is not recommended in production code. Consider utilizing a dedicated logging library or mechanism that can be configured for different environments, ensuring better control over log levels and outputs.

Also applies to: 178-178
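A minimal leveled-logger sketch of the "proper logging mechanism" suggested above; in practice a library such as pino or winston would likely be used instead, and the names and threshold handling here are illustrative assumptions.

```typescript
// Tiny leveled logger: messages below the configured threshold are dropped,
// giving environment-dependent control that bare console.log lacks.
type Level = "debug" | "info" | "error";
const LEVELS: Record<Level, number> = { debug: 0, info: 1, error: 2 };

function makeLogger(threshold: Level) {
  return (level: Level, ...args: unknown[]): boolean => {
    if (LEVELS[level] < LEVELS[threshold]) return false; // filtered out
    const sink = level === "error" ? console.error : console.log;
    sink(`[${level}]`, ...args);
    return true; // message was emitted
  };
}

// e.g. threshold could be driven by process.env.LOG_LEVEL
const log = makeLogger("info");
```

Calls like `log("debug", "[ChatGLM] payload", payload)` would then be silent in production while remaining available in development.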

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between d0bd1bf and d357b45.

📒 Files selected for processing (11)
  • app/api/[provider]/[...path]/route.ts (2 hunks)
  • app/api/auth.ts (1 hunks)
  • app/api/glm.ts (1 hunks)
  • app/client/api.ts (6 hunks)
  • app/client/platforms/glm.ts (1 hunks)
  • app/components/settings.tsx (3 hunks)
  • app/config/server.ts (3 hunks)
  • app/constant.ts (7 hunks)
  • app/locales/cn.ts (1 hunks)
  • app/locales/en.ts (1 hunks)
  • app/store/access.ts (5 hunks)

🔇 Additional comments (26)
app/api/[provider]/[...path]/route.ts (2)

45-46: LGTM! GLM case implementation is consistent.

The GLM case follows the established pattern of other providers in the switch statement, maintaining consistency in:

  • Using the ApiPath constant
  • Handler invocation with req and params
  • Error handling (inherited from the handler)

14-14: Verify GLM handler implementation.

The import statement follows the consistent pattern, but let's verify the GLM handler implementation.

✅ Verification successful

Let me gather more information about the GLM handler implementation since the previous results were incomplete.


GLM handler implementation is properly structured and complete

The GLM handler implementation in app/api/glm.ts follows the expected patterns and includes:

  • Proper error handling with try-catch blocks
  • Authentication checks using the auth middleware
  • Consistent request handling with timeout controls
  • Model availability verification
  • Appropriate header management

The import statement in the route file correctly references this implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify GLM handler implementation and structure

# Test 1: Check if glm.ts exists and contains handle export
echo "Checking GLM handler implementation..."
fd --type f "glm.ts" app/api -x cat {} | grep -A 10 "export const handle"

# Test 2: Verify ApiPath.GLM constant exists
echo "Verifying GLM API path constant..."
rg "GLM\s*=" app/constant.ts

Length of output: 343


Script:

#!/bin/bash
# Let's try a different approach to find and verify the GLM handler

# Check for glm.ts file existence and location
echo "Locating glm.ts file..."
fd "glm.ts" app/api

# Get full content of glm.ts if it exists
echo -e "\nChecking glm.ts content..."
fd "glm.ts" app/api -x cat {}

# Search for any GLM-related handler implementations
echo -e "\nSearching for GLM handler patterns..."
ast-grep --pattern 'export const handle = async $_'

Length of output: 3534

app/store/access.ts (4)

17-17: LGTM: GLM URL configuration follows established patterns.

The GLM URL configuration follows the same pattern as other providers, correctly handling both app and non-app build modes.

Also applies to: 51-51


114-117: LGTM: GLM state properties are well-structured.

The GLM-related state properties are properly organized and follow the established pattern of other providers.


210-210: LGTM: Authorization check properly includes GLM validation.

The GLM validation is correctly integrated into the authorization chain.


190-192: Verify if GLM requires additional validation fields.

While the implementation is consistent with the pattern, some other providers require multiple fields for validation. Please verify if GLM authentication requires only the API key.

app/config/server.ts (3)

78-81: LGTM! Environment variable declarations are well-structured.

The GLM environment variable declarations follow the established pattern and are properly typed as optional strings.


158-158: LGTM! GLM provider flag is correctly implemented.

The isGLM flag follows the established pattern of checking for the presence of the API key.


225-228: Verify if GLM requires additional security parameters.

The implementation follows the established pattern, but please verify if GLM requires additional security parameters (like a secret key) similar to other providers such as Baidu.

app/client/api.ts (5)

24-24: LGTM: Import follows established pattern

The GLM import follows the same structure as other provider imports.


284-284: LGTM: Config export follows pattern

The GLM flag export maintains consistency with other provider configurations.


267-268: Add type safety for GLM API key

While the API key handling is correctly placed, consider adding type safety:

  1. Verify glmApiKey exists in AccessStore
  2. Consider adding runtime validation for the API key format
#!/bin/bash
# Verify glmApiKey is defined in AccessStore
rg "glmApiKey.*:" app/store/access.ts

251-251: Verify ServiceProvider.GLM enum existence

The GLM provider check follows the established pattern.

#!/bin/bash
# Verify ServiceProvider.GLM is defined
rg -A 1 "ServiceProvider = {" 

160-162: Verify ModelProvider.GLM enum existence

The GLM case implementation looks correct, following the same pattern as other providers.

✅ Verification successful

ModelProvider.GLM enum exists and is properly defined

The GLM case implementation is correct and matches the codebase. The ModelProvider.GLM enum is properly defined in app/constant.ts along with other model providers.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify ModelProvider.GLM is defined in constants
rg -A 1 "ModelProvider = {" 

Length of output: 234


Script:

#!/bin/bash
# Search for ModelProvider enum definition with different patterns
rg -l "enum ModelProvider"

# Also try searching for object literal definition
rg "ModelProvider\s*=\s*[{]" --type ts

# Search for any mentions of ModelProvider and GLM together
rg "ModelProvider.*GLM|GLM.*ModelProvider" --type ts

Length of output: 365


Script:

#!/bin/bash
# Check the ModelProvider enum definition in constant.ts
cat app/constant.ts

Length of output: 13757

app/constant.ts (3)

33-34: LGTM: Base URL and API path are correctly defined.

The GLM base URL and API path definitions follow the established pattern and correctly point to the official GLM API endpoint.

Also applies to: 67-67


121-121: LGTM: Provider enums are properly updated.

The GLM provider is correctly added to both ServiceProvider and ModelProvider enums, maintaining consistency with other providers.

Also applies to: 145-145


523-533: LGTM: DEFAULT_MODELS extension is properly implemented.

The GLM models are correctly integrated into the DEFAULT_MODELS array, maintaining consistency with other providers and following the established sorting pattern.

app/locales/cn.ts (1)

476-486: LGTM! The GLM localization entries are well-structured.

The implementation follows the established pattern for model provider configurations and maintains consistency with other providers in the codebase.

app/locales/en.ts (1)

460-470: LGTM! Structure and format are consistent with other providers.

The GLM configuration section follows the established pattern and maintains consistency with other provider configurations.

app/components/settings.tsx (2)

1736-1736: LGTM! GLM component integration is correct.

The GLM configuration component is properly integrated into the provider configuration section, maintaining consistency with other providers.


1253-1253: Static analysis warnings can be ignored.

The assignments in expressions are part of React's state management pattern and are commonly used throughout the codebase.

Also applies to: 1269-1269


app/api/glm.ts (1)

116-119: Preserve Important Response Headers

When modifying response headers, ensure that essential headers like CORS headers are preserved. Deleting or not including headers like Access-Control-Allow-Origin may prevent clients from accessing the resource due to CORS restrictions.

Please verify if CORS headers need to be added or preserved in the response. You can check for the existence of CORS headers in the original response and include them as needed.

If the headers are missing, consider adding them to newHeaders:

    // to prevent browser prompt for credentials
    const newHeaders = new Headers(res.headers);
    newHeaders.delete("www-authenticate");
+   newHeaders.set('Access-Control-Allow-Origin', '*'); // Adjust as necessary
app/client/platforms/glm.ts (4)

81-85: Confirm usage of presence_penalty and frequency_penalty parameters.

The presence_penalty and frequency_penalty parameters are included in the request payload. Verify whether the GLM API supports these parameters, as they might be specific to the OpenAI API. Including unsupported parameters could lead to API errors.

If these parameters are not supported by GLM, consider removing them:

         model: modelConfig.model,
         temperature: modelConfig.temperature,
-        presence_penalty: modelConfig.presence_penalty,
-        frequency_penalty: modelConfig.frequency_penalty,
         top_p: modelConfig.top_p,

94-100: Ensure correct headers are included in the API request.

When making requests to the GLM API, verify that all necessary headers are provided, such as authentication tokens, content types, and any other required metadata. This ensures successful communication with the API.

If authentication is needed, include it in the headers:

     const chatPayload = {
       method: "POST",
       body: JSON.stringify(requestPayload),
       signal: controller.signal,
-      headers: getHeaders(),
+      headers: {
+        ...getHeaders(),
+        Authorization: `Bearer ${accessStore.glmApiKey}`,
+      },
     };

29-51: Review base URL construction logic for accuracy.

The logic for constructing the baseUrl may not handle all possible user configurations correctly. Ensure that the conditions cover scenarios where the user provides custom URLs, and that the resulting baseUrl is accurate and properly formatted.

Consider adding unit tests to validate different configurations of baseUrl.
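As a starting point, the URL logic could be extracted into a pure helper so it is testable in isolation. The function below is purely illustrative: it captures the kind of normalization the comment refers to (custom URL fallback, trailing-slash and scheme handling), but the real path() method in app/client/platforms/glm.ts may differ in its exact rules.

```typescript
// Hypothetical pure helper for base-URL construction, extracted for testing.
function buildBaseUrl(customUrl: string, defaultUrl: string): string {
  // Fall back to the provider default when no custom URL is configured.
  let base = customUrl.trim().length > 0 ? customUrl.trim() : defaultUrl;
  // Normalize a trailing slash so path joining is predictable.
  if (base.endsWith("/")) base = base.slice(0, -1);
  // Assume https when the user omits the scheme.
  if (!base.startsWith("http")) base = "https://" + base;
  return base;
}
```

Each branch of the real logic would then get its own assertion (empty custom URL, trailing slash, missing scheme) rather than being exercised only indirectly through chat requests.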


20-20: ⚠️ Potential issue

Verify the compatibility of RequestPayload with the GLM API.

The RequestPayload type is imported from ./openai, which may be specific to the OpenAI API. Ensure that this payload structure aligns with the GLM API requirements to prevent potential data mismatches or API errors.

Consider defining a new RequestPayload type specific to GLM or adapting the existing one to meet GLM's specifications.

Also applies to: 77-85

app/api/auth.ts Outdated
Comment on lines 98 to 100
case ModelProvider.GLM:
systemApiKey = serverConfig.glmApiKey;
break;
Contributor


💡 Codebase verification

GLM configuration is properly implemented but needs documentation

The implementation of GLM API key is correctly structured in the codebase:

  • Proper environment variable handling in server.ts
  • Type definitions and validation through isGLM check
  • Consistent integration with the authentication system

However, the GLM configuration is missing from the documentation. The developer should:

  • Add GLM_API_KEY to the environment variables section in README.md
  • Include GLM_URL configuration instructions
  • Document the GLM provider setup process
🔗 Analysis chain

LGTM! Verify server configuration for GLM.

The implementation follows the established pattern for model providers and correctly assigns the GLM API key.

Let's verify the server configuration includes the GLM API key definition:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify GLM API key configuration in server config
# Expected: Find the definition of glmApiKey in server configuration

# Search for GLM API key configuration
rg -l 'glmApiKey'

# Check the type definition to ensure GLM API key is properly typed
ast-grep --pattern 'interface $_ {
  $$$
  glmApiKey?: $_
  $$$
}'

Length of output: 189


Script:

#!/bin/bash
# Let's check the server config implementation and type definitions
rg -A 5 -B 5 'glmApiKey' app/config/server.ts

# Also check the interface/type definitions that might contain glmApiKey
ast-grep --pattern 'type $_ = {
  $$$
  glmApiKey?: $_
  $$$
}'

# Check if there are any validation or requirement checks for glmApiKey
rg -A 3 'glmApiKey.*required|required.*glmApiKey|validate.*glmApiKey'

Length of output: 615


Script:

#!/bin/bash
# Let's check the isGLM configuration and usage
rg -A 2 -B 2 'isGLM'

# Check if there's any validation or initialization of GLM_API_KEY env var
rg -A 2 'GLM_API_KEY'

# Look for any GLM-related environment variable documentation
rg -l '\.env|environment|config' --type md

Length of output: 2251


Script:

#!/bin/bash
# Check README and documentation for GLM configuration instructions
rg -A 5 -B 5 'GLM|Environment' README.md

# Check environment variable declarations in any example env files
fd -e example -e sample -e template | grep -i env | xargs cat

# Look for any GLM-related configuration in the docs
rg -A 3 'GLM' docs/

Length of output: 368

Comment on lines +75 to +76
body: req.body,
redirect: "manual",
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Ensure Request Body Consistency When Reading Streams

The request body (req.body) is a stream and can only be read once. Assigning it directly to fetchOptions.body and then reading it again with await req.text() can cause the stream to be empty or lead to runtime errors.

To fix this, read the body once and reuse it:

-   body: req.body,
+   const clonedBody = await req.text();
+   body: clonedBody,
...
    if (serverConfig.customModels && clonedBody) {
      try {
-       const clonedBody = await req.text();
-       fetchOptions.body = clonedBody;
+       fetchOptions.body = clonedBody;

Also applies to: 85-86
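A minimal, self-contained sketch of the read-once pattern described above. The helper name `buildFetchOptions` and the `BodyOnce` stand-in interface are illustrative, not part of the PR; the point is that the body text is captured once and then reused everywhere as a plain string.

```typescript
// Minimal stand-in for the incoming request: like a real Request,
// its body may only be consumed once.
interface BodyOnce {
  text(): Promise<string>;
}

// Read the body exactly once, then reuse the string for both the
// upstream fetch options and any model filtering.
async function buildFetchOptions(req: BodyOnce) {
  const bodyText = await req.text(); // the stream is consumed here, once
  const parsed = bodyText ? JSON.parse(bodyText) : undefined;
  return {
    body: bodyText, // safe to forward: it is a plain string now
    model: parsed?.model as string | undefined,
  };
}
```

Calling `text()` (or `json()`) a second time on a real request would fail, which is why the suggestion above captures the body into a variable before any other code touches it.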

Comment on lines +36 to +38
console.error("[GLM] ", e);
return NextResponse.json(prettyObject(e));
}

⚠️ Potential issue

Avoid Exposing Internal Error Details to Clients

Returning the full error object in the API response may reveal sensitive internal information. It's better to return a generic error message to the client and log the detailed error on the server side.

Apply this diff to return a generic error response:

    console.error("[GLM] ", e);
-   return NextResponse.json(prettyObject(e));
+   return NextResponse.json({ error: "Internal Server Error" }, { status: 500 });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
console.error("[GLM] ", e);
return NextResponse.json(prettyObject(e));
}
console.error("[GLM] ", e);
return NextResponse.json({ error: "Internal Server Error" }, { status: 500 });
}

Comment on lines +21 to +23
if (req.method === "OPTIONS") {
return NextResponse.json({ body: "OK" }, { status: 200 });
}

⚠️ Potential issue

Set CORS Headers for Preflight Requests

When handling OPTIONS requests for CORS preflight, it's important to include the appropriate headers to allow cross-origin requests. Currently, the response lacks CORS headers, which may cause client requests to fail due to CORS policy restrictions.

Apply this diff to include the necessary CORS headers:

    return NextResponse.json({ body: "OK" }, {
      status: 200,
+     headers: {
+       'Access-Control-Allow-Origin': '*', // Adjust as needed
+       'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
+       'Access-Control-Allow-Headers': 'Content-Type, Authorization',
+     },
    });
📝 Committable suggestion


Suggested change
if (req.method === "OPTIONS") {
return NextResponse.json({ body: "OK" }, { status: 200 });
}
if (req.method === "OPTIONS") {
return NextResponse.json({ body: "OK" }, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*', // Adjust as needed
'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
},
});
}

Comment on lines +182 to +191
async usage() {
return {
used: 0,
total: 0,
};
}

async models(): Promise<LLMModel[]> {
return [];
}

🛠️ Refactor suggestion

Implement the usage and models methods or handle them appropriately.

The usage and models methods currently return default values or empty arrays. If the GLM API provides usage statistics or model listings, consider implementing these methods to reflect accurate information. Otherwise, handle these methods appropriately to avoid confusion.


Suggestion: rename GLM to ChatGLM (this name is shown in parentheses in the model selection list; a bare "GLM" may not be recognizable).


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
app/utils.ts (1)

281-282: LGTM! Consider renaming GLM to ChatGLM.

The implementation correctly adds GLM support for plugins, following the established pattern. However, based on the PR comments, consider using "ChatGLM" instead of "GLM" in the model selection list for better user recognition.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d3f0a77 and 7a8d557.

📒 Files selected for processing (1)
  • app/utils.ts (1 hunks)

app/constant.ts Outdated
@@ -118,7 +118,7 @@ export enum ServiceProvider {
Stability = "Stability",
Iflytek = "Iflytek",
XAI = "XAI",
GLM = "GLM",
GLM = "ChatGLM",

Shouldn't this be changed here as well?

@Dogtiti Dogtiti merged commit 36bfa2e into ChatGPTNextWeb:main Nov 1, 2024
1 check failed

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (3)
app/client/platforms/glm.ts (3)

53-53: Consider using debug logging.

Replace console.log with a proper logging utility that can be disabled in production.

-console.log("[Proxy Endpoint] ", baseUrl, path);
+logger.debug("[Proxy Endpoint]", { baseUrl, path });

62-64: Enhance error message for unsupported feature.

The error message could be more informative about the unsupported feature.

-throw new Error("Method not implemented.");
+throw new Error("Speech synthesis is not supported by the ChatGLM API.");

92-92: Consider using debug logging.

Replace console.log with a proper logging utility.

-console.log("[Request] glm payload: ", requestPayload);
+logger.debug("[Request] glm payload:", requestPayload);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 7a8d557 and afe12c2.

📒 Files selected for processing (12)
  • app/api/[provider]/[...path]/route.ts (2 hunks)
  • app/api/auth.ts (1 hunks)
  • app/api/glm.ts (1 hunks)
  • app/client/api.ts (6 hunks)
  • app/client/platforms/glm.ts (1 hunks)
  • app/components/settings.tsx (3 hunks)
  • app/config/server.ts (3 hunks)
  • app/constant.ts (7 hunks)
  • app/locales/cn.ts (1 hunks)
  • app/locales/en.ts (1 hunks)
  • app/store/access.ts (5 hunks)
  • app/utils.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • app/api/[provider]/[...path]/route.ts
  • app/api/auth.ts
  • app/config/server.ts
  • app/locales/cn.ts
  • app/locales/en.ts
  • app/utils.ts
🧰 Additional context used
🪛 Biome
app/components/settings.tsx

[error] 1255-1255: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


[error] 1271-1271: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

🔇 Additional comments (20)
app/api/glm.ts (2)

1-13: LGTM: Imports and configuration are well-structured.

The necessary dependencies, configurations, and types are properly imported and initialized.


60-65: Consider reducing the timeout duration.

A 10-minute timeout is quite long and could lead to resource exhaustion under load. Consider reducing it to a more reasonable duration (e.g., 2-3 minutes) unless there's a specific requirement for long-running requests.

Let's check if there are any comments or discussions about the timeout duration:
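One way to make a shorter budget explicit is a generic timeout wrapper. This is a sketch only — `withTimeout` is an illustrative name, not code from the PR, and the route itself would presumably wire the timer through an `AbortSignal`; this shows the same idea at the promise level.

```typescript
// Race a promise against a timer so a hung upstream call fails fast
// instead of holding the connection for the full 10 minutes.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

Whatever mechanism is used, the timer must be cleared on success so a completed request does not leave a pending timeout behind.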

app/client/platforms/glm.ts (1)

58-60: LGTM!

The message extraction is implemented correctly with proper null handling.

app/store/access.ts (4)

17-17: LGTM: URL configuration follows established patterns

The ChatGLM URL configuration is properly implemented, following the same pattern as other providers with appropriate conditional logic based on the build mode.

Also applies to: 51-51


114-117: LGTM: Default state configuration is consistent

The ChatGLM configuration in DEFAULT_ACCESS_STATE follows the established pattern with proper initialization of URL and API key properties.


210-210: LGTM: Authorization check properly updated

The ChatGLM validation is correctly integrated into the authorization chain, maintaining consistency with other providers.


114-117: Verify property name consistency across components

Let's ensure that the property names (chatglmUrl and chatglmApiKey) are consistently used across all related components.

Also applies to: 190-192

✅ Verification successful

Property names are consistently used across the codebase

The verification shows that chatglmUrl and chatglmApiKey are used consistently across all relevant components:

  • Configuration (app/config/server.ts)
  • Store (app/store/access.ts)
  • API clients (app/api/glm.ts, app/client/platforms/glm.ts)
  • UI components (app/components/settings.tsx)
  • Authentication (app/api/auth.ts)

No inconsistencies or variations in property naming were found.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify consistent usage of ChatGLM property names across the codebase

# Test 1: Check for chatglmUrl usage
echo "Checking chatglmUrl usage:"
rg "chatglmUrl" --type ts

# Test 2: Check for chatglmApiKey usage
echo "Checking chatglmApiKey usage:"
rg "chatglmApiKey" --type ts

# Test 3: Check for any variations that might indicate inconsistency
echo "Checking for potential inconsistent naming:"
rg -i "glm.*(url|key|token)" --type ts

Length of output: 3428

app/client/api.ts (4)

24-24: LGTM: Import follows established patterns

The ChatGLMApi import follows the existing pattern and is properly organized with other platform-specific imports.


160-162: LGTM: ChatGLM case properly integrated

The ChatGLM case is correctly implemented in the ClientApi constructor, following the established pattern of other providers.


349-350: LGTM: Proper provider mapping

The ChatGLM case in getClientApi correctly maps the service provider to the model provider, maintaining consistency with other implementations.


24-24: Verify ChatGLMApi implementation

Let's verify the ChatGLMApi implementation to ensure it properly implements the LLMApi interface.

✅ Verification successful

ChatGLMApi implementation verified successfully

The ChatGLMApi class correctly implements the LLMApi interface with all required methods:

  • chat(): Implements chat functionality with proper streaming and tool support
  • speech(): Implements with "not implemented" error as expected
  • usage(): Returns basic usage stats
  • models(): Returns empty model list (matches disableListModels = true)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify ChatGLMApi implementation exists and implements required methods

# Check if ChatGLMApi file exists and contains required method implementations
ast-grep --pattern 'class ChatGLMApi {
  $$$
  chat($$$) {
    $$$
  }
  $$$
  speech($$$) {
    $$$
  }
  $$$
  usage($$$) {
    $$$
  }
  $$$
  models($$$) {
    $$$
  }
  $$$
}'

Length of output: 182


Script:

#!/bin/bash
# Let's first locate the ChatGLMApi file
fd "glm.ts"

# Then examine its content
rg "class ChatGLMApi" -A 5

# Also check for any interface implementations
rg "implements.*Api" -A 2

# Look for the base interface definition
rg "interface.*Api" -A 10

Length of output: 2698


Script:

#!/bin/bash
# Let's examine the ChatGLMApi implementation in detail
cat app/client/platforms/glm.ts

# Also look for the LLMApi interface definition
rg "interface LLMApi" -A 15

Length of output: 5501


Script:

#!/bin/bash
# Let's find the LLMApi interface definition in the api.ts file
cat app/client/api.ts

Length of output: 9501

app/constant.ts (5)

33-33: LGTM: Base URL constant follows conventions

The CHATGLM_BASE_URL constant is correctly defined and follows the same pattern as other provider base URLs in the file.


67-67: LGTM: API path follows consistent pattern

The ChatGLM API path follows the same naming convention as other providers in the enum.


121-121: LGTM: Provider enums use the recommended "ChatGLM" name

The enums correctly use "ChatGLM" instead of "GLM" as suggested in the PR comments, maintaining consistency with the provider's recognized name.

Also applies to: 145-145


233-236: LGTM: ChatGLM configuration follows standard structure

The configuration object correctly defines the endpoint and chat path according to ChatGLM's API specifications.


523-533: LGTM: Model configuration follows standard pattern

The ChatGLM models are correctly mapped to the standard configuration structure, with appropriate provider details and sorting.

app/components/settings.tsx (4)

75-75: LGTM: Import follows established pattern.

The ChatGLM import is correctly placed with other provider imports.


1238-1277: LGTM: ChatGLM configuration follows established patterns.

The implementation:

  • Maintains consistency with other provider configurations
  • Properly handles sensitive information using PasswordInput
  • Includes appropriate accessibility attributes
  • Uses "ChatGLM" branding as suggested in PR comments
🧰 Tools
🪛 Biome

[error] 1255-1255: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


[error] 1271-1271: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)


1738-1738: LGTM: ChatGLM component properly integrated.

The ChatGLM configuration is correctly integrated into the settings UI alongside other providers.


75-75: Verify complete ChatGLM integration.

The settings UI implementation looks good, but let's verify the complete ChatGLM integration:

Also applies to: 1238-1277, 1738-1738

✅ Verification successful

ChatGLM integration appears complete and well-structured

The verification shows a complete ChatGLM integration with all necessary components in place:

  • Constants and endpoints defined in app/constant.ts
  • API implementation in app/client/platforms/glm.ts
  • Store configuration in app/store/access.ts with URL handling and validation
  • UI integration in settings component
  • Proper API client initialization in app/client/api.ts

The implementation follows the same pattern as other model integrations in the codebase, with all required components properly connected.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify ChatGLM integration across the codebase

# Test 1: Check for ChatGLM constant definition
echo "Checking ChatGLM constant..."
rg -p "ChatGLM.*ExampleEndpoint"

# Test 2: Check for ChatGLM API handler
echo "Checking ChatGLM API handler..."
rg -p "ChatGLMApi"

# Test 3: Check for ChatGLM localization
echo "Checking ChatGLM localization..."
rg -p "ChatGLM.*Title|ChatGLM.*SubTitle" "app/locales/"

Length of output: 628


Script:

#!/bin/bash
# Let's verify additional ChatGLM integration aspects

# Check for ChatGLM model configuration and constants
echo "Checking ChatGLM model configuration..."
rg -p "export const ChatGLM" -A 5

# Check for ChatGLM implementation details
echo "Checking ChatGLM implementation..."
rg -p "class ChatGLMApi" -A 10

# Check for ChatGLM in language files
echo "Checking ChatGLM translations..."
fd -e json . app/locales/ -x grep -l "ChatGLM"

# Check for any ChatGLM-related types
echo "Checking ChatGLM types..."
rg -p "type.*ChatGLM|interface.*ChatGLM"

# Check for ChatGLM in store/config
echo "Checking ChatGLM in store/config..."
rg -p "ChatGLM" "app/store/"

Length of output: 1295

Comment on lines +108 to +111
} catch (e) {
console.error(`[GLM] filter`, e);
}
}

⚠️ Potential issue

Improve error handling in model validation.

The catch block silently logs errors from JSON parsing or model validation. These should be properly handled as they indicate invalid requests.

Apply this diff:

    } catch (e) {
      console.error(`[GLM] filter`, e);
+     return NextResponse.json(
+       {
+         error: true,
+         message: "Invalid request format",
+       },
+       {
+         status: 400,
+       }
+     );
    }
📝 Committable suggestion


Suggested change
} catch (e) {
console.error(`[GLM] filter`, e);
}
}
} catch (e) {
console.error(`[GLM] filter`, e);
return NextResponse.json(
{
error: true,
message: "Invalid request format",
},
{
status: 400,
}
);
}
}

Comment on lines +32 to +35
try {
const response = await request(req);
return response;
} catch (e) {

⚠️ Potential issue

Add error cleanup in catch block.

The try-catch block should clean up any resources (like response streams) in case of an error to prevent memory leaks.

Apply this diff:

  try {
    const response = await request(req);
    return response;
  } catch (e) {
+   if (response?.body) {
+     try {
+       await response.body.cancel();
+     } catch {} // Ignore cleanup errors
+   }
    console.error("[GLM] ", e);
    return NextResponse.json(prettyObject(e));
  }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +47 to +51
let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;

if (!baseUrl.startsWith("http")) {
baseUrl = `https://${baseUrl}`;
}

⚠️ Potential issue

Add URL validation for security.

The base URL construction should validate the URL to prevent potential SSRF attacks.

Apply this diff:

+ function isValidUrl(url: string): boolean {
+   try {
+     const parsedUrl = new URL(url);
+     return ['http:', 'https:'].includes(parsedUrl.protocol);
+   } catch {
+     return false;
+   }
+ }

  let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;

  if (!baseUrl.startsWith("http")) {
    baseUrl = `https://${baseUrl}`;
  }
+ if (!isValidUrl(baseUrl)) {
+   throw new Error("Invalid base URL configuration");
+ }
📝 Committable suggestion


Suggested change
let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;
if (!baseUrl.startsWith("http")) {
baseUrl = `https://${baseUrl}`;
}
function isValidUrl(url: string): boolean {
try {
const parsedUrl = new URL(url);
return ['http:', 'https:'].includes(parsedUrl.protocol);
} catch {
return false;
}
}
let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;
if (!baseUrl.startsWith("http")) {
baseUrl = `https://${baseUrl}`;
}
if (!isValidUrl(baseUrl)) {
throw new Error("Invalid base URL configuration");
}
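For reference, the behavior of the guard suggested above, runnable on its own (`isValidUrl` is copied from the suggestion; the sample URLs are arbitrary):

```typescript
// Accept only absolute http/https URLs; anything else is rejected,
// which blocks schemes like file:, ftp:, or javascript: in the base URL.
function isValidUrl(url: string): boolean {
  try {
    const parsedUrl = new URL(url);
    return ["http:", "https:"].includes(parsedUrl.protocol);
  } catch {
    return false;
  }
}

console.log(isValidUrl("https://example.com")); // true
console.log(isValidUrl("ftp://example.com"));   // false
console.log(isValidUrl("not-a-url"));           // false
```

Relative strings and bare hostnames throw in the `URL` constructor, so they fall into the `catch` branch and are rejected until the `https://` prefix has been added.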

Comment on lines +91 to +107
if (
isModelAvailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.ChatGLM as string,
)
) {
return NextResponse.json(
{
error: true,
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
},
);
}

⚠️ Potential issue

Fix inverted logic in model availability check.

The condition for returning a 403 response is incorrect. It returns 403 when the model IS available, which is the opposite of what's intended.

Apply this diff:

  if (
-   isModelAvailableInServer(
+   !isModelAvailableInServer(
      serverConfig.customModels,
      jsonBody?.model as string,
      ServiceProvider.ChatGLM as string,
    )
  ) {
📝 Committable suggestion


Suggested change
if (
isModelAvailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.ChatGLM as string,
)
) {
return NextResponse.json(
{
error: true,
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
},
);
}
if (
!isModelAvailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.ChatGLM as string,
)
) {
return NextResponse.json(
{
error: true,
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
},
);
}

Comment on lines +113 to +156
if (shouldStream) {
const [tools, funcs] = usePluginStore
.getState()
.getAsTools(
useChatStore.getState().currentSession().mask?.plugin || [],
);
return stream(
chatPath,
requestPayload,
getHeaders(),
tools as any,
funcs,
controller,
// parseSSE
(text: string, runTools: ChatMessageTool[]) => {
// console.log("parseSSE", text, runTools);
const json = JSON.parse(text);
const choices = json.choices as Array<{
delta: {
content: string;
tool_calls: ChatMessageTool[];
};
}>;
const tool_calls = choices[0]?.delta?.tool_calls;
if (tool_calls?.length > 0) {
const index = tool_calls[0]?.index;
const id = tool_calls[0]?.id;
const args = tool_calls[0]?.function?.arguments;
if (id) {
runTools.push({
id,
type: tool_calls[0]?.type,
function: {
name: tool_calls[0]?.function?.name as string,
arguments: args,
},
});
} else {
// @ts-ignore
runTools[index]["function"]["arguments"] += args;
}
}
return choices[0]?.delta?.content;
},

🛠️ Refactor suggestion

Consider extracting stream handling logic.

The stream handling logic is complex and could be extracted into a separate method for better maintainability.

Consider creating a separate method like handleStreamResponse to encapsulate this logic.
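A sketch of what such an extracted helper could look like. The name `parseDelta` and the simplified types are hypothetical; the logic mirrors the inline callback above (push a new tool call when an `id` arrives, append argument fragments otherwise).

```typescript
interface ToolCallDelta {
  index?: number;
  id?: string;
  type?: string;
  function?: { name?: string; arguments?: string };
}

interface RunningTool {
  id?: string;
  type?: string;
  function: { name?: string; arguments?: string };
}

// Parse one SSE JSON chunk: record tool-call deltas into runTools and
// return the text delta for the UI, if any.
function parseDelta(text: string, runTools: RunningTool[]): string | undefined {
  const json = JSON.parse(text);
  const delta = json.choices?.[0]?.delta;
  const toolCalls: ToolCallDelta[] = delta?.tool_calls ?? [];
  if (toolCalls.length > 0) {
    const tc = toolCalls[0];
    if (tc.id) {
      // First chunk of a tool call: start a new entry.
      runTools.push({
        id: tc.id,
        type: tc.type,
        function: { name: tc.function?.name, arguments: tc.function?.arguments },
      });
    } else if (tc.index !== undefined) {
      // Continuation chunk: append the argument fragment.
      runTools[tc.index].function.arguments =
        (runTools[tc.index].function.arguments ?? "") +
        (tc.function?.arguments ?? "");
    }
  }
  return delta?.content;
}
```

Pulling this out makes the accumulation logic unit-testable without standing up a real SSE stream.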

Comment on lines +49 to +51
if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.ChatGLM)) {
baseUrl = "https://" + baseUrl;
}

⚠️ Potential issue

Enhance URL validation.

The current URL validation is basic. Consider using a URL validation utility to handle edge cases and ensure security.

-if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.ChatGLM)) {
-  baseUrl = "https://" + baseUrl;
-}
+try {
+  const url = new URL(baseUrl);
+  if (!['http:', 'https:'].includes(url.protocol)) {
+    baseUrl = `https://${baseUrl}`;
+  }
+} catch {
+  if (!baseUrl.startsWith(ApiPath.ChatGLM)) {
+    baseUrl = `https://${baseUrl}`;
+  }
+}
📝 Committable suggestion


Suggested change
if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.ChatGLM)) {
baseUrl = "https://" + baseUrl;
}
try {
const url = new URL(baseUrl);
if (!['http:', 'https:'].includes(url.protocol)) {
baseUrl = `https://${baseUrl}`;
}
} catch {
if (!baseUrl.startsWith(ApiPath.ChatGLM)) {
baseUrl = `https://${baseUrl}`;
}
}

Comment on lines +41 to +43
const isApp = !!getClientConfig()?.isApp;
const apiPath = ApiPath.ChatGLM;
baseUrl = isApp ? CHATGLM_BASE_URL : apiPath;

🛠️ Refactor suggestion

Consider extracting app detection logic.

The app detection logic could be moved to a utility function for better reusability and testability.

-const isApp = !!getClientConfig()?.isApp;
-const apiPath = ApiPath.ChatGLM;
-baseUrl = isApp ? CHATGLM_BASE_URL : apiPath;
+const isApp = isAppEnvironment(); // New utility function
+baseUrl = isApp ? CHATGLM_BASE_URL : ApiPath.ChatGLM;

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +190 to +192
isValidChatGLM() {
return ensure(get(), ["chatglmApiKey"]);
},

🛠️ Refactor suggestion

Consider validating the ChatGLM URL

While the current implementation follows the pattern of other providers, consider whether the URL should also be validated to ensure proper configuration, especially in custom deployment scenarios.

 isValidChatGLM() {
-  return ensure(get(), ["chatglmApiKey"]);
+  return ensure(get(), ["chatglmApiKey", "chatglmUrl"]);
 }
📝 Committable suggestion


Suggested change
isValidChatGLM() {
return ensure(get(), ["chatglmApiKey"]);
},
isValidChatGLM() {
return ensure(get(), ["chatglmApiKey", "chatglmUrl"]);
},

@@ -244,6 +248,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
const isMoonshot = modelConfig.providerName === ServiceProvider.Moonshot;
const isIflytek = modelConfig.providerName === ServiceProvider.Iflytek;
const isXAI = modelConfig.providerName === ServiceProvider.XAI;
const isChatGLM = modelConfig.providerName === ServiceProvider.ChatGLM;

🛠️ Refactor suggestion

Consider refactoring the nested ternary operators

While the ChatGLM integration is functionally correct, the nested ternary operators for API key selection are becoming increasingly complex and harder to maintain. Consider refactoring this logic to improve readability.

Here's a suggested refactor using a more maintainable approach:

function getApiKeyForProvider(
  modelConfig: ModelConfig,
  accessStore: AccessStore
): string {
  const providerMap = {
    [ServiceProvider.Google]: accessStore.googleApiKey,
    [ServiceProvider.Azure]: accessStore.azureApiKey,
    [ServiceProvider.Anthropic]: accessStore.anthropicApiKey,
    [ServiceProvider.ByteDance]: accessStore.bytedanceApiKey,
    [ServiceProvider.Alibaba]: accessStore.alibabaApiKey,
    [ServiceProvider.Moonshot]: accessStore.moonshotApiKey,
    [ServiceProvider.XAI]: accessStore.xaiApiKey,
    [ServiceProvider.ChatGLM]: accessStore.chatglmApiKey,
    [ServiceProvider.Iflytek]: accessStore.iflytekApiKey && accessStore.iflytekApiSecret
      ? `${accessStore.iflytekApiKey}:${accessStore.iflytekApiSecret}`
      : "",
    // Default to OpenAI
    default: accessStore.openaiApiKey,
  };

  return providerMap[modelConfig.providerName] ?? providerMap.default;
}

This approach:

  • Improves readability and maintainability
  • Makes it easier to add new providers
  • Reduces the complexity of nested ternaries
  • Makes the provider-key mapping explicit

Also applies to: 267-268, 284-284
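A self-contained, simplified version of that lookup-table approach (the provider names and key fields here are stand-ins for the real `ServiceProvider` enum and access store):

```typescript
type AccessKeys = {
  openaiApiKey: string;
  chatglmApiKey?: string;
  googleApiKey?: string;
};

// Map each provider to its key once; providers without a dedicated
// entry (or with an unset key) fall back to the OpenAI key.
function getApiKeyForProvider(provider: string, keys: AccessKeys): string {
  const providerMap: Record<string, string | undefined> = {
    Google: keys.googleApiKey,
    ChatGLM: keys.chatglmApiKey,
  };
  return providerMap[provider] ?? keys.openaiApiKey;
}
```

With this shape, adding a provider such as ChatGLM is a one-line change to the map instead of another level of ternary nesting.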

Comment on lines +389 to +398
const chatglmModels = [
"glm-4-plus",
"glm-4-0520",
"glm-4",
"glm-4-air",
"glm-4-airx",
"glm-4-long",
"glm-4-flashx",
"glm-4-flash",
];
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Add knowledge cutoff dates for ChatGLM models

The ChatGLM models are missing from the KnowledgeCutOffDate record. This information is needed to display the correct knowledge cutoff date in the chat interface.

Add the following entries to the KnowledgeCutOffDate record:

 export const KnowledgeCutOffDate: Record<string, string> = {
   default: "2021-09",
+  "glm-4": "2023-12",
+  "glm-4-plus": "2023-12",
+  "glm-4-0520": "2023-12",
+  "glm-4-air": "2023-12",
+  "glm-4-airx": "2023-12",
+  "glm-4-long": "2023-12",
+  "glm-4-flashx": "2023-12",
+  "glm-4-flash": "2023-12",
   // ... existing entries
 };

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot mentioned this pull request Nov 20, 2024
10 tasks
@coderabbitai coderabbitai bot mentioned this pull request Nov 29, 2024
10 tasks
eric19960304 added a commit to eric19960304/crossgpt that referenced this pull request Feb 21, 2025
Date:   Mon Feb 10 11:03:49 2025 +0800

    Model listing of SiliconFlow

commit 18fa2cc30d96fbb452efd9226db7ca6021cacb3e
Author: Shenghang Tsai <jackalcooper@gmail.com>
Date:   Sun Feb 9 18:49:26 2025 +0800

    fix model icon on siliconflow

commit 0bfc6480855640032ec3593960b434fc5e1c1de5
Author: Shenghang Tsai <jackalcooper@gmail.com>
Date:   Sun Feb 9 18:47:57 2025 +0800

    fix model icon on siliconflow

commit 9f91c2d05c21c7fea604a88a0974679a07293c81
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Feb 9 16:52:46 2025 +0800

    fix avatar for export message preview and saved image

commit a029b4330b89f8f2d1258e46fa68ba87c998a745
Merge: c2edfec1 2842b264
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Feb 9 11:05:43 2025 +0800

    Merge pull request #6188 from ChatGPTNextWeb/Leizhenpeng-patch-4

    Update LICENSE

commit 2842b264e06b08de9cfdcb84982ee6571fa45881
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Feb 9 11:05:32 2025 +0800

    Update LICENSE

commit c2edfec16fc97446b58149312706f1539e60fc58
Merge: 6406ac99 97a4aafc
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Feb 9 11:03:44 2025 +0800

    Merge pull request #6172 from bestsanmao/bug_fix

    fix several bugs

commit 6406ac99a30659abd7a93e47a5faa2def9464f46
Merge: c6199dbf d8f533e1
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Feb 9 11:02:13 2025 +0800

    Merge pull request #6175 from itsevin/main

    Add other Xai model

commit 97a4aafc9276760443c2044bad5c454c7d9435a4
Merge: 1ae5fdbf c6199dbf
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Feb 9 09:46:07 2025 +0800

    Merge remote-tracking branch 'remotes/origin/main' into bug_fix

commit d8f533e1f35b75905ef822cbc50e62c3f228b148
Merge: fdbaddde c6199dbf
Author: GH Action - Upstream Sync <action@github.com>
Date:   Sun Feb 9 01:22:47 2025 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit c6199dbf9f1a9f729eebff28944b59d3bcc2c6a4
Merge: 4273aa08 acf75ce6
Author: RiverRay <laolei@forkway.cn>
Date:   Sat Feb 8 23:40:39 2025 +0800

    Merge pull request #6186 from siliconflow/fix-truc-of-reasoning-model

    Fix formatting of reasoning model on SiliconFlow

commit 4273aa0803b5b7267730b77995c55fd7f674b0d4
Merge: bf265d33 2a3996e0
Author: RiverRay <laolei@forkway.cn>
Date:   Sat Feb 8 23:39:49 2025 +0800

    Merge pull request #6185 from siliconflow/larger_timeout_for_siliconflow

    Larger timeout for SiliconFlow

commit acf75ce68f7152972fe5924b4880b3ae06c0ca65
Author: Shenghang Tsai <jackalcooper@gmail.com>
Date:   Sat Feb 8 16:34:17 2025 +0800

    Remove unnecessary trimming

commit 1ae5fdbf013349a2c32e6083b41500cbf2c4000d
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sat Feb 8 16:15:10 2025 +0800

    mini optimizations

commit 2a3996e0d66e41a99bfd4373c2bd9dec4d78652a
Author: Shenghang Tsai <jackalcooper@gmail.com>
Date:   Sat Feb 8 14:38:12 2025 +0800

    Update siliconflow.ts

commit fdbaddde37165b293ba7d246108ea1a5610b92da
Merge: c4e9cb03 bf265d33
Author: GH Action - Upstream Sync <action@github.com>
Date:   Sat Feb 8 01:16:56 2025 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit d74f79e9c5ee36fa9c3a59f8390a444857417b1e
Merge: a5a97682 bf265d33
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sat Feb 8 08:29:34 2025 +0800

    Merge remote-tracking branch 'remotes/origin/HEAD' into bug_fix

commit c4e9cb03a92751b37ec0b9615ef5ec056fa20bde
Author: itsevin <2720269770@qq.com>
Date:   Fri Feb 7 20:29:21 2025 +0800

    Add Xai model

commit bf265d33759dfada0d91248cbcb5ae22326e745f
Merge: 17f391d9 51384ddc
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Feb 7 20:25:20 2025 +0800

    Merge pull request #6164 from ZhangYichi-ZYc/main

    Fix: Set consistent fill color for OpenAI/MoonShot/Grok SVG to prevent color inversion in dark mode

commit 17f391d929a0281f67e22d87a95f06ed9c811cf8
Merge: 78186c27 e5e5fde9
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Feb 7 20:23:47 2025 +0800

    Merge pull request #6158 from dupl/main

    update the latest Gemini models

commit 78186c27fb70fdeeafb8a9ac6ab7a3b43bb9c8e0
Merge: add9ca20 a780b39c
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Feb 7 20:23:01 2025 +0800

    Merge pull request #6168 from xiexin12138/fix-env

    Fix: add the SiliconFlow environment variables to env; add 2 paid models supported by SiliconFlow

commit a5a976824591a7e2c228dbb257616b98fd7a53ed
Author: suruiqiang <bestsanmao@qq.com>
Date:   Fri Feb 7 16:34:14 2025 +0800

    change request timeout for thinking mode

commit 3fe55b4f7ff1791cf6e8c5d9da02b69a240e98a8
Author: suruiqiang <bestsanmao@qq.com>
Date:   Fri Feb 7 16:20:07 2025 +0800

    fix bug that gemini has multiple candidates part

commit f156430cc5f9451618b13e6432148d1d0dd35c5c
Author: suruiqiang <bestsanmao@qq.com>
Date:   Fri Feb 7 16:18:15 2025 +0800

    fix emoji issue for doubao and glm's cogview & cogvideox

commit f30c6a4348fb25fead1d1ba4f4ff6717a45496fb
Author: suruiqiang <bestsanmao@qq.com>
Date:   Fri Feb 7 16:14:19 2025 +0800

    fix doubao and grok not uploading images

commit a780b39c17a271eb44421ac2f027fcf91c3b77cf
Author: xiexin12138 <xiexin344@163.com>
Date:   Fri Feb 7 15:43:50 2025 +0800

    fix: add the paid DeepSeek models supported by SiliconFlow

commit 1010db834ce52f6a832bf50d3645527f3b42697e
Author: xiexin12138 <xiexin344@163.com>
Date:   Fri Feb 7 15:41:40 2025 +0800

    fix: add the SiliconFlow env environment variables

commit 51384ddc5feff6ca31028c77cf6b17b751a0ab24
Author: ZhangYichi <admin@zhangyichi.cn>
Date:   Fri Feb 7 11:13:22 2025 +0800

    Fix: Set consistent fill color for OpenAI/MoonShot/Grok SVG to prevent color inversion in dark mode

commit e5e5fde924a7598a6c447c079cce7337294b9d81
Author: dupl <67990457+dupl@users.noreply.github.com>
Date:   Fri Feb 7 06:50:31 2025 +0800

    update the latest Gemini models

commit add9ca200cf91e467c2d6b770e4a8f140aff18e8
Merge: 28cbe56c 5225a6e1
Author: RiverRay <laolei@forkway.cn>
Date:   Thu Feb 6 18:08:08 2025 +0800

    Merge pull request #6144 from Eric-2369/add-more-llm-icons

    feat: add more llm icons

commit 5225a6e1921d170803ab11aa8ba09957cf0b678b
Author: Eric-2369 <admin@eri.cx>
Date:   Wed Feb 5 12:34:00 2025 +0800

    feat: add more llm icons

commit 28cbe56cec390c2309a3995b300c8d5aa1387c0a
Merge: bb4832e6 ad9ab9d4
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 21:29:02 2025 +0800

    Merge pull request #6141 from siliconflow/provider_silicon

    New provider SiliconFlow and Its Latest DeepSeek Models

commit ad9ab9d45afa384718a59bce23d9b70e3e8ed08a
Author: Shenghang Tsai <jackalcooper@gmail.com>
Date:   Tue Feb 4 15:02:18 2025 +0800

    New provider SiliconFlow and Its Latest DeepSeek Models

    Update README.md

    Update constant.ts

    Update README_CN.md

commit bb4832e6e71626184c339cce86eab94dbc3f8a59
Merge: 32b60909 39b3487e
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:38:04 2025 +0800

    Merge pull request #6129 from MonadMonAmi/update_knowledge_cutoff_date

    chore: add knowledge cut off dates for o1 and o3

commit 39b3487ea0f62065ffbe5f743c298af8247e9834
Merge: 2e7cac32 32b60909
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:37:55 2025 +0800

    Merge branch 'main' into update_knowledge_cutoff_date

commit 32b60909ae37948aca97a0fbec218372086e938f
Merge: 5db6775c 4c4d44e2
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:35:43 2025 +0800

    Merge pull request #6132 from RetiredQQ/main

    temporary fix for o3-mini

commit 5db6775cb85917c99d171905c38efa4a368bb873
Merge: b6881c77 92f57fb1
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:34:43 2025 +0800

    Merge pull request #6134 from zcong1993/main

    fix: fix isModelNotavailableInServer logic for bytedance models

commit b6881c77978b50debfd3907a863f2481e51ff299
Merge: 9943a522 60fa3580
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:33:13 2025 +0800

    Merge pull request #6127 from dupl/main

    add gemini-2.0-flash-thinking-exp, gemini-2.0-flash-thinking-exp-01-21

commit 9943a52295e36b0c296110f31643090f5fe0bb35
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:31:16 2025 +0800

    Update README.md

commit 1db4d25370d5754576c2bddc29ee75c6869b2696
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Feb 4 09:29:56 2025 +0800

    Update README.md

commit 92f57fb18fe40e73a425842747d4b5654493f275
Author: zcong1993 <zhangcong1992@gmail.com>
Date:   Mon Feb 3 16:58:42 2025 +0800

    fix: fix isModelNotavailableInServer logic for bytedance models

commit 4c4d44e2f831ec3296bbfd9f3c9e8b201e6bf18d
Author: Sky <retiredqq@gmail.com>
Date:   Sun Feb 2 21:45:30 2025 +0000

    fix

commit 8f12beb8f0b65d9b3de009348b0a8b2397e5574c
Author: Sky <retiredqq@gmail.com>
Date:   Sun Feb 2 21:43:30 2025 +0000

    support o3-mini

commit 2e7cac32185e52f648d30a76a61474951295c0be
Author: AndrewS <andrewdevandroid@gmail.com>
Date:   Sun Feb 2 19:44:53 2025 +0100

    chore: add knowledge cut off dates for o1 and o3

commit 60fa358010125894dc85f19618081040eccce15c
Author: dupl <67990457+dupl@users.noreply.github.com>
Date:   Sun Feb 2 23:27:45 2025 +0800

    typo: OpanAI -> OpenAI

commit 034b7d4655c55ecd5a8e6abd5a130356e4f6b38a
Author: dupl <67990457+dupl@users.noreply.github.com>
Date:   Sun Feb 2 23:11:07 2025 +0800

    add gemini-2.0-flash-thinking-exp, gemini-2.0-flash-thinking-exp-01-21

commit 1e20b64048769b4a11ec4691670a1a34711d962e
Merge: 3ef59930 4f28fca5
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Feb 2 20:57:21 2025 +0800

    Merge pull request #6121 from ChatGPTNextWeb/feat/support-openai-o3-mini

    feat(model): add support for OpenAI o3-mini model

commit 4f28fca506980306c7d5810d4b6c9365503dce7f
Author: Kadxy <2230318258@qq.com>
Date:   Sat Feb 1 15:02:06 2025 +0800

    feat: Support OpenAI o3-mini

commit 3ef599308512b41134a67a2087111e729b25d35c
Merge: 31e52cb4 09ad7c18
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Jan 31 08:18:47 2025 +0800

    Merge pull request #6119 from ChatGPTNextWeb/Leizhenpeng-patch-3

    Update README.md

commit 09ad7c187556da8af88fa1adf42bc475e22e50eb
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Jan 31 08:18:13 2025 +0800

    Update README.md

commit 31e52cb47e61c4aa39e3f66f94ed0018968f5bd9
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Jan 31 06:53:39 2025 +0800

    Update README.md

commit 9a69c5bd7c6bd496e3920f9d87bc8899737c25f2
Merge: c41e86fa be645aab
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Jan 31 06:48:00 2025 +0800

    Merge pull request #6118 from ChatGPTNextWeb/feat/issue-6104-deepseek-reasoning-content

commit be645aab37121ed488ce0c0e41c6d41a0e32c39d
Author: Kadxy <2230318258@qq.com>
Date:   Fri Jan 31 00:59:03 2025 +0800

    fix: revert unintended changes

commit c41e86faa6aea948811770d1871f4ca034c6fefd
Merge: 553b8c9f 143be69a
Author: RiverRay <laolei@forkway.cn>
Date:   Fri Jan 31 00:52:18 2025 +0800

    Merge pull request #6116 from ChatGPTNextWeb/feat/issue-6104-deepseek-reasoning-content

    Support DeepSeek API streaming reasoning content

commit 143be69a7ffc68f5498328152c6eb6aaa67294fe
Author: river <laolei@forkway.cn>
Date:   Fri Jan 31 00:50:03 2025 +0800

    chore: remove log

commit 63b7626656d2d8a221aa9edb491bf6912460e449
Author: river <laolei@forkway.cn>
Date:   Fri Jan 31 00:49:09 2025 +0800

    chore: change md

commit dabb7c70d5e4438b3f6f8cd67ed729c966d3da37
Author: Kadxy <2230318258@qq.com>
Date:   Fri Jan 31 00:30:08 2025 +0800

    feat: Remove reasoning_content for DeepSeek API messages

commit c449737127dcdde4547a8b0caa6070ab1c17b45c
Author: Kadxy <2230318258@qq.com>
Date:   Fri Jan 31 00:07:52 2025 +0800

    feat: Support DeepSeek API streaming with thinking mode

commit 553b8c9f284bff6ec059b4d69f3f91c10105fbc0
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Jan 27 13:05:17 2025 +0800

    Update .env.template

commit 19314793b8d379cb9936b6ba1205bd701f48c320
Merge: 2f9cb5a6 86801829
Author: river <laolei@forkway.cn>
Date:   Mon Jan 27 12:55:31 2025 +0800

    Merge branch 'bestsanmao-bug_fix'

commit 86801829215e8efd5f935eb8221b104a6456c177
Author: river <laolei@forkway.cn>
Date:   Mon Jan 27 12:48:59 2025 +0800

    feat: Add DeepSeek API key and fix MCP environment variable parsing

commit 2173c82bb55e1cd7c7bf994dcb6e0d7484c71daf
Author: suruiqiang <bestsanmao@qq.com>
Date:   Thu Jan 23 18:47:22 2025 +0800

    add deepseek-reasoner, and change deepseek's summary model to deepseek-chat

commit 0d5e66a9aeca9dd454df46fbdd1f12d69ba9b5a2
Author: suruiqiang <bestsanmao@qq.com>
Date:   Thu Jan 23 18:24:38 2025 +0800

    not insert mcpSystemPrompt if not ENABLE_MCP

commit 2f9cb5a68f59c512508a341f76b30f807d1e17fe
Merge: 6a862372 55cacfb7
Author: RiverRay <laolei@forkway.cn>
Date:   Wed Jan 22 21:40:37 2025 +0800

    Merge pull request #6084 from ChatGPTNextWeb/temp-fix

    fix: missing mcp_config.json files required for building

commit 55cacfb7e22034f15bb3541813ad6f3f621f2fcc
Author: Kadxy <2230318258@qq.com>
Date:   Wed Jan 22 21:28:29 2025 +0800

    fix: missing files required for building

commit 6a862372f7e4b1d8b188fd97d016081dc371cbdb
Merge: b2b6fd81 81bd83eb
Author: RiverRay <laolei@forkway.cn>
Date:   Wed Jan 22 13:11:11 2025 +0800

    Merge pull request #6082 from ChatGPTNextWeb/Leizhenpeng-patch-2

    Update README_CN.md

commit 81bd83eb444ed5d0ebd6f7169de5944549bda32d
Author: RiverRay <laolei@forkway.cn>
Date:   Wed Jan 22 13:08:33 2025 +0800

    Update README_CN.md

commit b2b6fd81bedeb55043a13ec41ee6775df48ef5b3
Merge: 8111acff f22cfd7b
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Jan 20 10:44:46 2025 +0800

    Merge pull request #6075 from Kadxy/main

commit f22cfd7b33a81c8f245001ccd772c94a6162a54b
Author: Kadxy <2230318258@qq.com>
Date:   Mon Jan 20 10:10:52 2025 +0800

    Update chat.tsx

commit 8111acff34189ab980baca279c4fa811f63aac8b
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Jan 20 00:17:47 2025 +0800

    Update README.md

commit 4cad55379d8b32d6b46c2f6c7bd42a8f3dd17877
Merge: 93652db6 a3d3ce3f
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Jan 20 00:07:41 2025 +0800

    Merge pull request #5974 from ChatGPTNextWeb/feat-mcp

    Support MCP( WIP)

commit a3d3ce3f4cb95837811deb366459f40447f7af6d
Merge: 611e97e6 93652db6
Author: Kadxy <2230318258@qq.com>
Date:   Sun Jan 19 23:28:12 2025 +0800

    Merge branch 'main' into feat-mcp

commit 611e97e641d9d8b6c80e36da29fa21a2705f972d
Author: Kadxy <2230318258@qq.com>
Date:   Sun Jan 19 23:20:58 2025 +0800

    docs: update README.md

commit bfeea4ed4996c103d5ee36a908d6726e82472300
Author: Kadxy <2230318258@qq.com>
Date:   Sun Jan 19 01:02:01 2025 +0800

    fix: prevent MCP operations from blocking chat interface

commit bc71ae247bd1110658aef933eaf301b344181122
Author: Kadxy <2230318258@qq.com>
Date:   Sat Jan 18 21:19:01 2025 +0800

    feat: add ENABLE_MCP env var to toggle MCP feature globally and in Docker

commit 0112b54bc7b0d929b6f127daf00cfb0f2e05d1bc
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 22:35:26 2025 +0800

    fix: missing en translation

commit 65810d918bb599716e35c8ea515a265da909cf2f
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 21:30:15 2025 +0800

    feat: improve async operations and UI feedback

commit 4d535b1cd0c641d573a97e03fb5d9cb84a9f5ce5
Author: river <laolei@forkway.cn>
Date:   Thu Jan 16 20:54:24 2025 +0800

    chore: enhance mcp prompt

commit 588d81e8f19047110a87196259df9fc2e8dbc0ce
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 09:17:08 2025 +0800

    feat: remove unused files

commit d4f499ee41c8ab1c044fb690b980dc3d903d4e25
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 09:11:53 2025 +0800

    feat: adjust form style

commit 4d63d73b2e8b7b382a4cc1f60fdd20cb8c5f953a
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 09:00:57 2025 +0800

    feat: load MCP preset data from server

commit 07c63497dcbacee489d24db890281f84c2793e78
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 16 08:52:54 2025 +0800

    feat: support stop/start MCP servers

commit e440ff56c89d11b29cdbb303eb8a9a71cddc2553
Author: Kadxy <2230318258@qq.com>
Date:   Wed Jan 15 18:47:05 2025 +0800

    fix: env not work

commit c89e4883b29142cfcb9254b7ff9815a5fe0b8d67
Author: river <laolei@forkway.cn>
Date:   Wed Jan 15 17:31:18 2025 +0800

    chore: update icon

commit ac3d940de8d949e40a91d903d17901384d55e79c
Merge: a70e9a3c be59de56
Author: river <laolei@forkway.cn>
Date:   Wed Jan 15 17:29:43 2025 +0800

    Merge branch 'feat-mcp' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web into feat-mcp

commit be59de56f0074c4fde7358465f844d09b48ab273
Author: Kadxy <2230318258@qq.com>
Date:   Wed Jan 15 17:24:04 2025 +0800

    feat: Display the number of clients instead of the number of available tools.

commit a70e9a3c01dccb887fc41c3d60f2c101d0b1cf2e
Author: river <laolei@forkway.cn>
Date:   Wed Jan 15 17:23:10 2025 +0800

    chore:update mcp icon

commit 8aa9a500fdee762abe5fd8e0bba00065be1725f4
Author: Kadxy <2230318258@qq.com>
Date:   Wed Jan 15 16:52:54 2025 +0800

    feat: Optimize MCP configuration logic

commit 93652db688d2697abc7a6d4bdbe672fb8b509e33
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Jan 13 16:57:50 2025 +0800

    Update README.md

commit 8421c483e880d39405404ba1697a2169becee9f3
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Jan 12 12:56:13 2025 +0800

    Update README.md

commit 4ac27fdd4d98e6d339976b676b5be709973014cf
Merge: b6b2c501 840c151a
Author: Dogtiti <499960698@qq.com>
Date:   Sat Jan 11 16:19:02 2025 +0800

    Merge pull request #6033 from lvguanjun/fix_fork_session

    fix: prevent message sync between forked sessions by generating unique IDs

commit b6b2c501fd58c8b533ff7890c00ce8eee887617d
Merge: 0af04e0f c56587c4
Author: Dogtiti <499960698@qq.com>
Date:   Sat Jan 11 16:17:32 2025 +0800

    Merge pull request #6034 from dupl/main

    Correct the typos in user-manual-cn.md

commit ce13cf61a74f7b0682c230efed2742db91c7d1b7
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 20:15:47 2025 +0800

    feat: ignore mcp_config.json

commit a3af563e894286654bf1e7cf1f66190d9c467a79
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 20:13:16 2025 +0800

    feat: Reset mcp_config.json to empty

commit e95c94d7be72490668d8e022fd126cfe637b5f2a
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 20:10:10 2025 +0800

    fix: inaccurate content

commit 125a71feade05ad5f5a75dc8f979c1efc946cdab
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 20:07:24 2025 +0800

    fix: unnecessary initialization

commit b410ec399cefc78b7313ff387537edbe87ef4235
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 20:02:27 2025 +0800

    feat: auto scroll to bottom when MCP response

commit 7d51bfd42e0f60a328abed353ab1ef717b6f3ba8
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 19:51:01 2025 +0800

    feat: MCP market

commit 0c14ce6417821d512d04dec5a5755bf35deed51d
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 13:41:17 2025 +0800

    fix: MCP execution content matching failed.

commit f2a2b40d2c07172db28cdd685fa8c9098c995acc
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 10:20:56 2025 +0800

    feat: carry mcp primitives content as a system prompt

commit 77be190d763189915c520d431fc4aa889ca96c7e
Author: Kadxy <2230318258@qq.com>
Date:   Thu Jan 9 10:09:46 2025 +0800

    feat: carry mcp primitives content as a system prompt

commit c56587c438611e55251d930d038878e660145ad1
Author: dupl <67990457+dupl@users.noreply.github.com>
Date:   Sun Jan 5 20:34:18 2025 +0800

    Correct the typos in user-manual-cn.md

commit 840c151ab9ea7e384be37b774ea339264b5c0dc6
Author: lvguanjun <lvguanjun135@foxmail.com>
Date:   Sun Jan 5 11:22:53 2025 +0800

    fix: prevent message sync between forked sessions by generating unique IDs

commit 0af04e0f2f5af2c39cdd771b2ebb496d9ca47f28
Merge: 63c5baaa d184eb64
Author: RiverRay <laolei@forkway.cn>
Date:   Tue Dec 31 16:23:10 2024 +0800

    Merge pull request #5468 from DDMeaqua/feat-shortcutkey

    feat: #5422 keyboard shortcut to clear context

commit d184eb64585562de7f75e1ff7d291eb242b2f076
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Dec 31 14:50:54 2024 +0800

    chore: cmd + shift+ backspace

commit c5d9b1131ec932e53cd0394c283e24549f6426cb
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Dec 31 14:38:58 2024 +0800

    fix: merge bug

commit e13408dd2480c1726d0333d8ede3a937187f7991
Merge: aba4baf3 63c5baaa
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Dec 31 14:30:09 2024 +0800

    Merge branch 'main' into feat-shortcutkey

commit aba4baf38403dd717ee04f5555ba81749d9ee6c8
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Dec 31 14:25:43 2024 +0800

    chore: update

commit 6d84f9d3ae62da0c5d1617645f961d9f9e1a1a27
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Dec 31 13:27:15 2024 +0800

    chore: update

commit 63c5baaa80878cb4c1a883ed658e8115b8006dce
Merge: defefba9 266e9efd
Author: Dogtiti <499960698@qq.com>
Date:   Tue Dec 31 09:56:46 2024 +0800

    Merge pull request #6010 from code-october/fix-visionModels

    Fix the issue where VISION_MODELS does not take effect when running in Docker

commit defefba925ef20b639d04cbab1ca3fdad3919eb6
Merge: d56566cd 90c531c2
Author: Dogtiti <499960698@qq.com>
Date:   Mon Dec 30 19:27:20 2024 +0800

    Merge pull request #6016 from bestsanmao/add_deepseek

    fix issue #6009  add setting items for deepseek

commit 90c531c2249c1e2070e4f605d25a8e31c315ebdb
Author: suruiqiang <bestsanmao@qq.com>
Date:   Mon Dec 30 18:23:18 2024 +0800

    fix issue #6009  add setting items for deepseek

commit 266e9efd2e004664d73f0aa7f93a8684c0e5c55e
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Mon Dec 30 09:13:12 2024 +0000

    rename the function

commit 57c88c0717bf21f29395642f32a306dc2388018d
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Mon Dec 30 08:58:41 2024 +0000

    Fix the issue where VISION_MODELS does not take effect when running in Docker

commit 5b5dea1c59605f26b382d780b5a558169d1a1021
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Mon Dec 30 12:11:50 2024 +0800

    chore: change the keyboard shortcut

commit d56566cd73b1936a66f3ad2e8a30dd351aa96464
Merge: f9e9129d b5d104c9
Author: Dogtiti <499960698@qq.com>
Date:   Mon Dec 30 09:42:22 2024 +0800

    Merge pull request #6001 from bestsanmao/add_deepseek

    docs: add DEEPSEEK_API_KEY and DEEPSEEK_URL in README

commit b5d104c908fb9442ddadb9bd4e544b7fa974c9fd
Merge: 2a8a1839 f9e9129d
Author: suruiqiang <bestsanmao@qq.com>
Date:   Mon Dec 30 09:04:40 2024 +0800

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web into add_deepseek

commit f9e9129d527a644d8baad97e12ece04601035b2c
Author: RiverRay <laolei@forkway.cn>
Date:   Sun Dec 29 19:57:27 2024 +0800

    Update README.md

commit 2a8a18391ebc563a9a552dfdac8a0a66d833e0d7
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Dec 29 15:31:50 2024 +0800

    docs: add DEEPSEEK_API_KEY and DEEPSEEK_URL in README

commit e1cb8e36fab99faa1e60fecc6d7fd57607177cd2
Merge: c0062ff2 67338ff9
Author: Dogtiti <499960698@qq.com>
Date:   Sun Dec 29 12:35:21 2024 +0800

    Merge pull request #5989 from bestsanmao/add_deepseek

    since #5984, add DeepSeek as a new ModelProvider (with deepseek-chat & deepseek-coder models), so that users can use OpenAI and DeepSeek at the same time with different API URLs & keys

commit b948d6bf86ba4410c854a3c73df275c42be89baa
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Dec 29 11:24:57 2024 +0800

    bug fix

commit fe67f79050c7f4b8971f9b9aabc22c5fd23bac07
Author: Kadxy <2230318258@qq.com>
Date:   Sun Dec 29 09:24:52 2024 +0800

    feat: MCP message type

commit 67338ff9b73eebe5f8fcc317f0f3d93d32bff836
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Dec 29 08:58:45 2024 +0800

    add KnowledgeCutOffDate for deepseek

commit 7380c8a2c10bfd314f5a1afd1e5b1b401733e485
Merge: 081daf93 c0062ff2
Author: suruiqiang <bestsanmao@qq.com>
Date:   Sun Dec 29 08:43:25 2024 +0800

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web into add_deepseek

commit e1ba8f1b0f122a73194b2f3716fdb78173647e05
Author: Kadxy <2230318258@qq.com>
Date:   Sun Dec 29 08:29:02 2024 +0800

    feat: Send MCP response as a user

commit c0062ff2802a6d921895517aa4ab17b1c897fe72
Merge: f8b10ad8 39e593da
Author: Dogtiti <499960698@qq.com>
Date:   Sun Dec 29 00:22:13 2024 +0800

    Merge pull request #5998 from dupl/main

    Use regular expressions to make the code more concise.

commit 39e593da48cf63df840e9133e9ee4ad5f8dbc986
Author: dupl <dupl@mail.ustc.edu.cn>
Date:   Sat Dec 28 23:49:28 2024 +0800

    Use regular expressions to make the code more concise.

commit f8b10ad8b173927861c6d1df12e07470465c9a1e
Merge: 5f96804f 8a22c9d6
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 23:34:44 2024 +0800

    Merge pull request #5997 from ChatGPTNextWeb/feature/glm-4v

    feature: support glm-4v

commit 8a22c9d6dbe2d1e041c9f9daed5768a8bdd0f7a9
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 23:29:39 2024 +0800

    feature: support glm-4v

commit 5f96804f3b1155fc870547a79588c0856283b66a
Merge: 13430ea3 93c5320b
Author: RiverRay <laolei@forkway.cn>
Date:   Sat Dec 28 22:05:37 2024 +0800

    Merge pull request #5920 from fishshi/i18n

    Use i18n for DISCOVERY

commit 13430ea3e261290881d1fbf4c65b52624cba9d0b
Merge: 9df24e56 87b5e3bf
Author: RiverRay <laolei@forkway.cn>
Date:   Sat Dec 28 22:02:02 2024 +0800

    Merge pull request #5965 from zmhuanf/temp

    Fix issue #5964: Prevents character loss in gemini-2.0-flash-thinking-exp-1219 responses

commit 664879b9df8c431664b06346962cff0319a3e85e
Author: Kadxy <2230318258@qq.com>
Date:   Sat Dec 28 21:06:26 2024 +0800

    feat: Create all MCP Servers at startup

commit 9df24e568b5ec280aa05cb8da7dd29c64f06c0b4
Merge: e467ce02 bc322be4
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 20:25:25 2024 +0800

    Merge pull request #5996 from ChatGPTNextWeb/feature/cogview

    Feature/cogview

commit bc322be448136a0dcb3f8adf93faae698b28b5d3
Author: Dogtiti <499960698@qq.com>
Date:   Fri Dec 27 22:35:40 2024 +0800

    fix: type error

commit a867adaf046395b7a6ee88b402bc1c3c477696f2
Author: Dogtiti <499960698@qq.com>
Date:   Fri Dec 27 21:57:23 2024 +0800

    fix: size

commit 0cb186846a03b95dfc4dd0d3b1f25dac48ac1026
Author: Dogtiti <499960698@qq.com>
Date:   Fri Dec 27 21:52:22 2024 +0800

    feature: support glm Cogview

commit e467ce028d61f717b86bebf71b9ded1ac41cb3d5
Merge: d91af7f9 cdfe907f
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 17:55:29 2024 +0800

    Merge pull request #5994 from ConnectAI-E/fix/failed-test

    fix: failed unit test

commit cdfe907fb506c467324a5a53e4b33f883a30eba3
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 17:54:21 2024 +0800

    fix: failed unit test

commit d91af7f9831a44a4bcafc9aef8b38dfef5739880
Merge: 0c3d4462 cc5e16b0
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 28 14:47:35 2024 +0800

    Merge pull request #5883 from code-october/fix/model-leak

    fix model leak issue

commit c3108ad333419ecb0d16a031d4f4603f0f781832
Author: Kadxy <2230318258@qq.com>
Date:   Sat Dec 28 14:31:43 2024 +0800

    feat: simple MCP example

commit 081daf937e4c18eb787662ca1a0fad561f54b9c6
Author: suruiqiang <bestsanmao@qq.com>
Date:   Fri Dec 27 16:46:44 2024 +0800

    since #5984, add DeepSeek as a new ModelProvider (with deepseek-chat & deepseek-coder models), so that users can use OpenAI and DeepSeek at the same time with different API URLs & keys

commit 0c3d4462caa2abe5187856a2619b871ce8c33488
Merge: 1d156667 3c859fc2
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Dec 23 22:47:59 2024 +0800

    Merge pull request #5976 from ChatGPTNextWeb/Leizhenpeng-patch-1

    Update README.md

commit 3c859fc29fc11ac9c229ed024d2d25366b8d2d99
Author: RiverRay <laolei@forkway.cn>
Date:   Mon Dec 23 22:47:16 2024 +0800

    Update README.md

commit e1c7c54dfaf82c37450d0ed3a124f8598bc0249b
Author: river <laolei@forkway.cn>
Date:   Mon Dec 23 22:32:36 2024 +0800

    chore: change md

commit 87b5e3bf6252be247b32385a19d9897bede5cdf0
Author: zmhuanf <zmhuanf@163.com>
Date:   Sun Dec 22 15:44:47 2024 +0800

    Fix bug

commit 1d15666713c27f89912cf913387176a76fc00f66
Merge: acc2e97a a127ae1f
Author: Dogtiti <499960698@qq.com>
Date:   Sun Dec 22 10:37:57 2024 +0800

    Merge pull request #5919 from Yiming3/feature/flexible-visual-model

    feat: runtime configuration of vision-capable models

commit a127ae1fb45d641b9f138057e56a10ece96b2964
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Sat Dec 21 13:12:41 2024 -0500

    docs: add VISION_MODELS section to README files

commit ea1329f73e516546dab7193425e1e7dfdd232eb6
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Sat Dec 21 04:07:58 2024 -0500

    fix: add optional chaining to prevent errors when accessing visionModels

commit 149d732cb78287c51edcd67d1fd5b16a5f334813
Merge: 210b29bf acc2e97a
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Sat Dec 21 03:53:05 2024 -0500

    Merge remote-tracking branch 'upstream/main' into feature/flexible-visual-model

commit 210b29bfbecaebc53c4f37ed23c5df28d28d41fb
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Sat Dec 21 03:51:54 2024 -0500

    refactor: remove NEXT_PUBLIC_ prefix from VISION_MODELS env var

commit acc2e97aab7437030db157605796482686a96f90
Merge: eceec092 93ac0e50
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 21 16:30:09 2024 +0800

    Merge pull request #5959 from dupl/gemini

    add gemini-exp-1206, gemini-2.0-flash-thinking-exp-1219

commit 93ac0e501737f8e01d046a367d0aeb0055c15633
Author: dupl <67990457+dupl@users.noreply.github.com>
Date:   Sat Dec 21 15:26:33 2024 +0800

    Reorganized the Gemini model

commit ed8c3580c8fce9c12c42e2a8ac086ea2f8887953
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Fri Dec 20 19:07:00 2024 -0500

    test: add unit tests for isVisionModel utility function

commit 0a056a7c5c0be993321174706d8b679e7ffde038
Author: dupl <dupl@mail.ustc.edu.cn>
Date:   Sat Dec 21 08:00:37 2024 +0800

    add gemini-exp-1206, gemini-2.0-flash-thinking-exp-1219

commit 74c4711cdd2c38c8136490fb27eb878cc1c8b001
Merge: a433d160 eceec092
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Fri Dec 20 18:34:07 2024 -0500

    Merge remote-tracking branch 'upstream/main' into feature/flexible-visual-model

commit eceec092cfc57f4024a8aab3869e39f6eebfef30
Merge: 42743410 46a0b100
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 21 00:43:02 2024 +0800

    Merge pull request #5932 from fengzai6/update-google-models

    Update google models to add gemini-2.0

commit 42743410a8ce5028cc359863c97baddc16ad8d64
Merge: 0f04756d acdded81
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 21 00:41:45 2024 +0800

    Merge pull request #5940 from ChatGPTNextWeb/dependabot/npm_and_yarn/testing-library/react-16.1.0

    chore(deps-dev): bump @testing-library/react from 16.0.1 to 16.1.0

commit 0f04756d4cda9b2bfd0308c72652268b3ec8b5b3
Merge: 83cea3a9 e939ce5a
Author: Dogtiti <499960698@qq.com>
Date:   Sat Dec 21 00:40:45 2024 +0800

    Merge pull request #5936 from InitialXKO/main

    Rename the "Search Image by Text" mask to "AI Text-to-Image"; fine-tune the prompt for more stable, watermark-free image generation

commit acdded8161860def9fe0f3806798bcdc57754644
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Dec 16 10:57:34 2024 +0000

    chore(deps-dev): bump @testing-library/react from 16.0.1 to 16.1.0

    Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 16.0.1 to 16.1.0.
    - [Release notes](https://github.com/testing-library/react-testing-library/releases)
    - [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/testing-library/react-testing-library/compare/v16.0.1...v16.1.0)

    ---
    updated-dependencies:
    - dependency-name: "@testing-library/react"
      dependency-type: direct:development
      update-type: version-update:semver-minor
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit e939ce5a027150d2481508691e1c4f818a4f9130
Author: InitialXKO <45725592+InitialXKO@users.noreply.github.com>
Date:   Fri Dec 13 22:29:14 2024 +0800

    Rename the "Search Image by Text" mask to "AI Text-to-Image"; fine-tune the prompt for more stable, watermark-free image generation

commit 46a0b100f73058d651b884341b43e126e2c04a00
Author: Nacho.L <112751823+fengzai6@users.noreply.github.com>
Date:   Fri Dec 13 08:29:43 2024 +0800

    Update versionKeywords

commit e27e8fb0e16ee61865d11606682f2c4cbd075e73
Author: Nacho.L <112751823+fengzai6@users.noreply.github.com>
Date:   Fri Dec 13 07:22:16 2024 +0800

    Update google models

commit 93c5320bf29a8da64e12d3870ea932631ad51b2a
Author: fishshi <2855691008@qq.com>
Date:   Tue Dec 10 15:56:04 2024 +0800

    Use i18n for DISCOVERY

commit a433d1606cc9f24cec7f7cc0a947e416373a5d7b
Author: Yiming Zhang <yimingg.zhang@mail.utoronto.ca>
Date:   Tue Dec 10 00:22:45 2024 -0500

    feat: use regex patterns for vision models and allow adding capabilities to models through env var NEXT_PUBLIC_VISION_MODELS.

commit cc5e16b0454481fab48b1115eda9b8fb11ce0054
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Sat Nov 30 07:30:52 2024 +0000

    update unit test

commit 54f6feb2d74b9ac81fa5f826f24f73929c7cb238
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Sat Nov 30 07:28:38 2024 +0000

    update unit test

commit e1ac0538b8143f93074c1c248a5739358b3ddfd1
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Sat Nov 30 07:22:24 2024 +0000

    add unit test

commit 1a678cb4d832fe47f5d04e614bb267907bbf2677
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Fri Nov 29 15:47:28 2024 +0000

    fix model leak issue

commit 83cea3a90d115f9564d96f6a6af400f467f2d3b4
Merge: 2623a927 759a09a7
Author: Dogtiti <499960698@qq.com>
Date:   Thu Nov 28 12:02:42 2024 +0800

    Merge pull request #5879 from frostime/textline-custom-model

    🎨 style(setting): Place custom-model's input a separated row.

commit 759a09a76c8c6cd97cd4546da022f38f426618f2
Author: frostime <frostime@foxmail.com>
Date:   Wed Nov 27 13:11:18 2024 +0800

    🎨 style(setting): Place custom-model's input a separated row.

commit 2623a9276388668d65f34e1ad2206bec55acde79
Merge: 3932c594 ef24d3e6
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 25 12:31:36 2024 +0800

    Merge pull request #5850 from code-october/fix-o1

    Fix o1

commit 3932c594c71a4d52968c5bd19ab75cba2197ea76
Merge: a2adfbbd b7acb890
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 22 20:59:30 2024 +0800

    Merge pull request #5861 from code-october/update-model

    update new model for gpt-4o and gemini-exp

commit b7acb890969fe5af9135bb49533fad610ac52e69
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Fri Nov 22 09:48:50 2024 +0000

    update new model for gpt-4o and gemini-exp

commit ef24d3e63360eea8868334f884d83f747cfd8f73
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Thu Nov 21 03:46:10 2024 +0000

    use stream when request o1

commit 23350c842b1b102ebdbb17ccbb2b5cfa7b25b893
Author: code-october <148516338+code-october@users.noreply.github.com>
Date:   Thu Nov 21 03:45:07 2024 +0000

    fix o1 in disableGPT4

commit a2adfbbd3242cb38c685c48a1a8d2cba2c2b28cd
Merge: f22cec1e 19facc7c
Author: Dogtiti <499960698@qq.com>
Date:   Sat Nov 16 15:24:46 2024 +0800

    Merge pull request #5821 from Sherlocksuper/scroll

    feat: support more user-friendly scrolling

commit f22cec1eb4a92fc9c388595ba3ca23d2820ba56e
Merge: b08ce563 e5621654
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Fri Nov 15 16:03:27 2024 +0800

    Merge pull request #5827 from ConnectAI-E/fix/markdown-embed-codeblock

    fix: rendering error when a code block embeds a smaller code block

commit e56216549efe58c1b734f5094eb77bfaa6654c69
Author: opchips <opchips@gmail.com>
Date:   Fri Nov 15 11:56:26 2024 +0800

    fix: rendering error when a code block embeds a smaller code block

commit 19facc7c85a0e509b5d4ca1eaa98782f29477c9a
Author: Sherlock <1075773551@qq.com>
Date:   Thu Nov 14 21:31:45 2024 +0800

    feat: support more user-friendly scrolling

commit b08ce5630c5c24491459cca2e3bcc9c8b6a68114
Merge: a392daab b41c012d
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 13 15:17:44 2024 +0800

    Merge pull request #5819 from ConnectAI-E/fix-gemini-summary

    Fix gemini summary

commit b41c012d27d5495bec12f6aa6f9537ebb6873083
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Wed Nov 13 15:12:46 2024 +0800

    chore: shouldStream

commit a392daab716d66a7a9fb95e30e2574bf87716bb4
Merge: 819d249a 9a86c42c
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 13 14:58:33 2024 +0800

    Merge pull request #5816 from ConnectAI-E/feature/artifacts-svg

    artifacts support svg

commit 0628ddfc6f36479650d50281e3fa0ba1a847f777
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Wed Nov 13 14:27:41 2024 +0800

    chore: update

commit 7eda14f13882be635c9e6e5b8077617df8c5339b
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Wed Nov 13 14:24:44 2024 +0800

    fix: [#5308] Gemini conversation summary

commit 9a86c42c95be5b2bb85f44c0bdeb7714dc526a49
Author: opchips <opchips@gmail.com>
Date:   Tue Nov 12 16:33:55 2024 +0800

    update

commit 819d249a099ba84017afc5b5c20ae8309889b060
Merge: 7cf89b53 8d66fedb
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Tue Nov 12 15:04:11 2024 +0800

    Merge pull request #5815 from LovelyGuYiMeng/main

    Update vision model matching keywords

commit 8d66fedb1f5093d6e29ac06a839316edb535512d
Author: LovelyGuYiMeng <76251800+LovelyGuYiMeng@users.noreply.github.com>
Date:   Tue Nov 12 14:28:11 2024 +0800

    Update visionKeywords

commit 7cf89b53ce129ec3c51181571b34b658894673b2
Merge: 459c373f 1d14a991
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Tue Nov 12 13:49:51 2024 +0800

    Merge pull request #5812 from ConnectAI-E/fix/rerender-chat

    fix: use current session id to trigger rerender

commit 459c373f130d04cec80761a81cbb651afc485135
Merge: 38fa3056 05ef5adf
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 11 20:59:56 2024 +0800

    Merge pull request #5807 from ChatGPTNextWeb/dependabot/npm_and_yarn/testing-library/jest-dom-6.6.3

    chore(deps-dev): bump @testing-library/jest-dom from 6.6.2 to 6.6.3

commit 1d14a991eedb17a492d6e840de71567c8a6884a7
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 11 20:30:59 2024 +0800

    fix: use current session id to trigger rerender

commit 05ef5adfa72d2a519a07c36aca4e6c1a965da7d2
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Nov 11 10:53:00 2024 +0000

    chore(deps-dev): bump @testing-library/jest-dom from 6.6.2 to 6.6.3

    Bumps [@testing-library/jest-dom](https://github.com/testing-library/jest-dom) from 6.6.2 to 6.6.3.
    - [Release notes](https://github.com/testing-library/jest-dom/releases)
    - [Changelog](https://github.com/testing-library/jest-dom/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/testing-library/jest-dom/compare/v6.6.2...v6.6.3)

    ---
    updated-dependencies:
    - dependency-name: "@testing-library/jest-dom"
      dependency-type: direct:development
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit 38fa3056df5fd9db915a9d4d1c5e11477de22ff6
Author: lloydzhou <lloydzhou@qq.com>
Date:   Mon Nov 11 13:26:08 2024 +0800

    update version

commit 289aeec8af234f2ff6092722f2ea955c5768f342
Merge: f8f69541 7d71da93
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Mon Nov 11 13:19:26 2024 +0800

    Merge pull request #5786 from ConnectAI-E/feature/realtime-chat

    Feature/realtime chat

commit 7d71da938fab927758f6c80921c4eee371deec70
Author: lloydzhou <lloydzhou@qq.com>
Date:   Mon Nov 11 13:15:09 2024 +0800

    remove close-24 svg

commit f8f6954115a5fd0caad88470a530ff7c2378c43a
Merge: 6e03f328 64aa760e
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Mon Nov 11 13:13:09 2024 +0800

    Merge pull request #5779 from ConnectAI-E/feature/model/claude35haiku

    add claude-3.5-haiku (no vision support)

commit 6e03f3287123bb0bffc2aafd0f0a3c72566540a2
Merge: 108069a0 18a65718
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Mon Nov 11 13:10:00 2024 +0800

    Merge pull request #5795 from JingSyue/main

    fix: built-in plugin dalle3 error #5787

commit 18a657188339ac96905f0eaadd0583fdd6001f65
Author: JingSyue <144884872+JingSyue@users.noreply.github.com>
Date:   Mon Nov 11 12:59:29 2024 +0800

    Update proxy.ts

commit 14f444e1f0ca4253b65534140f9239d0504e1af6
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 11 11:47:41 2024 +0800

    doc: realtime chat

commit 2b0f2e5f9d064bc8107414b0c2e7efe61c03cdef
Author: JingSyue <acs108801@gm.ntcu.edu.tw>
Date:   Sun Nov 10 10:28:25 2024 +0800

    fix: built-in plugin dalle3 error #5787

commit 4629b39c297f51cf1a7e1685c62043578f653d21
Author: Dogtiti <499960698@qq.com>
Date:   Sat Nov 9 16:22:01 2024 +0800

    chore: comment context history

commit d33e772fa592c24e4adc03f127c887c9e4727913
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 8 22:39:17 2024 +0800

    feat: voice print

commit 89136fba32dbf731e4aaed03508684cfeb54614b
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 8 22:18:39 2024 +0800

    feat: voice print

commit 8b4ca133fda68ed7034ee5bbae8d622d66bf81f9
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 8 22:02:31 2024 +0800

    feat: voice print

commit a4c9eaf6cd5f889fc77877885a52d45866579841
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Nov 8 13:43:13 2024 +0800

    do not save empty audio file

commit 50e63109a35b6aff7056c129e30d745d74835226
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Nov 8 13:21:40 2024 +0800

    merge code and get analyser data

commit 48a1e8a58469eceb92dfa41559638ad2583fca70
Author: Dogtiti <499960698@qq.com>
Date:   Thu Nov 7 21:32:47 2024 +0800

    chore: i18n

commit e44ebe3f0eda9ab6f08dc6a58601e333dd46101b
Author: Dogtiti <499960698@qq.com>
Date:   Thu Nov 7 21:28:23 2024 +0800

    feat: realtime config

commit 108069a0c6d1689c089ed2257a8e085c754a25b9
Merge: fbb9385f d5bda290
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Thu Nov 7 20:06:30 2024 +0800

    Merge pull request #5788 from ConnectAI-E/fix-o1-maxtokens

    chore: use max_completion_tokens for the o1 model

commit d5bda2904dcb7279d4addff8bbc93cb206fdc7d6
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Thu Nov 7 19:45:27 2024 +0800

    chore: use max_completion_tokens for the o1 model

commit 283caba8ce0e874e1876e6ef83a8d380e83f985f
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 18:57:57 2024 +0800

    stop streaming playback after receiving input audio.

commit b78e5db81759ada7bb2813ae17d2d94bfae47d68
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 17:55:51 2024 +0800

    add temperature config

commit 46c469b2d7b95d21c8dc2c4c239c0646f3b43665
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 17:47:55 2024 +0800

    add voice config

commit c00ebbea4f774e15358c0a9410f807177931cff0
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 17:40:03 2024 +0800

    update

commit c526ff80b50ae2e284230093f0c51dc415b5cb46
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 17:23:20 2024 +0800

    update

commit 0037b0c94414c10b78ec2a152b36adc1d35bda2f
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 17:03:04 2024 +0800

    ts error

commit 6f81bb3b8a18f370cf3225c3c26fcaf7522e1317
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 16:56:15 2024 +0800

    add context after connected

commit 7bdc45ed3eb62833fee131ce93165823c1e4459f
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 16:41:24 2024 +0800

    connect realtime model when open panel

commit 88cd3ac122cfe0a93f3c87e441d9e1e59c8bfb33
Author: Dogtiti <499960698@qq.com>
Date:   Thu Nov 7 12:16:11 2024 +0800

    fix: ts error

commit 4988d2ee26f5cd65b128dae8924942c54a9da3ee
Author: Dogtiti <499960698@qq.com>
Date:   Thu Nov 7 11:56:58 2024 +0800

    fix: ts error

commit 8deb7a92ee7a9f15f749a93216abc789ac502718
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 11:53:01 2024 +0800

    hotfix for update target session

commit db060d732abeab3724f61eeae9ed6f5b4c93c9be
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 11:45:38 2024 +0800

    upload and save recorded wav file

commit 522627820a1aa641ee5746930626e8a4153cd611
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 09:36:22 2024 +0800

    upload save wav file logic

commit cf46d5ad63bc13dd4e7f938689fdb3c65a09e929
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Nov 7 01:12:08 2024 +0800

    upload response audio, and update audio_url to session message

commit a4941521d0973943bbd0abba86dc7295b444f2b5
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 22:30:02 2024 +0800

    feat: audio to message

commit f6e1f8398b261b8d0a65c971e62ac0fa5178e743
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 22:07:33 2024 +0800

    wip

commit d544eead3818f69413de20c25c5f3578439b7a4d
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 21:14:45 2024 +0800

    feat: realtime chat ui

commit fbb9385f23246e86147df4c4efb29b2efad893b0
Merge: 6ded4e96 18144c3d
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 6 20:33:51 2024 +0800

    Merge pull request #5782 from ConnectAI-E/style/classname

    style: improve classname by clsx

commit 18144c3d9c91e8047b5d56ace69fb9e9829f7a78
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 20:16:38 2024 +0800

    chore: clsx

commit 64aa760e58f31ab45ce720b988c859dd53c491ca
Author: opchips <opchips@gmail.com>
Date:   Wed Nov 6 19:18:05 2024 +0800

    update claude rank

commit e0bbb8bb68429d160c50af512eaa5181b50dc2c3
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 16:58:26 2024 +0800

    style: improve classname by clsx

commit 6667ee1c7fe655cd24bd56ed4e8611bfca5978cd
Merge: 3086a2fa 6ded4e96
Author: opchips <opchips@gmail.com>
Date:   Wed Nov 6 15:08:18 2024 +0800

    merge main

commit 6ded4e96e76fd4a9896d49a729520faabb47d112
Merge: f4c9410c 85cdcab8
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 6 15:04:46 2024 +0800

    Merge pull request #5778 from ConnectAI-E/fix/5436

    fix: botMessage reply date

commit 85cdcab850cadbbd346d38b34603e3eb00e3e715
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 14:53:08 2024 +0800

    fix: botMessage reply date

commit f4c9410c29ac5b8d1979566c269ced96b74b2f57
Merge: f526d6f5 adf7d820
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 6 14:02:20 2024 +0800

    Merge pull request #5776 from ConnectAI-E/feat-glm

    fix: glm chatpath

commit adf7d8200b63ba9e389c3df2b801b82a272a85bf
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Wed Nov 6 13:55:57 2024 +0800

    fix: glm chatpath

commit 3086a2fa77d2815af05236bae4a13a4da387730b
Author: opchips <opchips@gmail.com>
Date:   Wed Nov 6 12:56:24 2024 +0800

    add claude-3.5-haiku (no vision support)

commit f526d6f56094a5dc3e24b6a608681b2b0842f1d3
Merge: f3603e59 106461a1
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Nov 6 11:16:33 2024 +0800

    Merge pull request #5774 from ConnectAI-E/feature/update-target-session

    fix: updateCurrentSession => updateTargetSession

commit 106461a1e72b3c62395903945fac27fe165e9e4b
Merge: c4e19dbc f3603e59
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 11:08:41 2024 +0800

    Merge branch 'main' of https://github.com/ConnectAI-E/ChatGPT-Next-Web into feature/update-target-session

commit c4e19dbc59e59b71c81cf33600f7a2be235b0ccc
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 11:06:18 2024 +0800

    fix: updateCurrentSession => updateTargetSession

commit f3603e59faa182313a408f58012b0c83eb2b8956
Merge: 00d6cb27 8e2484fc
Author: Dogtiti <499960698@qq.com>
Date:   Wed Nov 6 10:49:28 2024 +0800

    Merge pull request #5769 from ryanhex53/fix-model-multi@

    Custom model names can include the `@` symbol by itself.

commit 8e2484fcdf476a1248ae91541d6d491e5881b49b
Author: ryanhex53 <ouyang.em@gmail.com>
Date:   Tue Nov 5 13:52:54 2024 +0000

    Refactor: Replace all provider split occurrences with getModelProvider utility method

commit 00d6cb27f719caffd24518db3dd656b7380a9062
Author: lloydzhou <lloydzhou@qq.com>
Date:   Tue Nov 5 17:42:55 2024 +0800

    update version

commit b844045d231658b9e40fa0582936c6746e7a7ef4
Author: ryanhex53 <ouyang.em@gmail.com>
Date:   Tue Nov 5 07:44:12 2024 +0000

    Custom model names can include the `@` symbol by itself.

    To specify the model's provider, append it after the model name using `@` as before.

    This format supports cases like `google vertex ai` with a model name like `claude-3-5-sonnet@20240620`.

    For instance, `claude-3-5-sonnet@20240620@vertex-ai` will be split by `split(/@(?!.*@)/)` into:

    `[ 'claude-3-5-sonnet@20240620', 'vertex-ai' ]`, where the former is the model name and the latter is the custom provider.
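    The split described in this commit message can be sketched as follows (the function name `splitModelProvider` is illustrative, not the repo's actual helper; the real utility is the `getModelProvider` method referenced in a later commit). The negative lookahead `(?!.*@)` only matches an `@` with no further `@` after it, i.e. the last one, so `@` characters inside the model name survive:

    ```javascript
    // Split "model@provider" on the LAST "@" only, so model names that
    // themselves contain "@" (e.g. vertex-ai version suffixes) stay intact.
    function splitModelProvider(name) {
      const [model, provider] = name.split(/@(?!.*@)/);
      return { model, provider };
    }

    console.log(splitModelProvider("claude-3-5-sonnet@20240620@vertex-ai"));
    // model: "claude-3-5-sonnet@20240620", provider: "vertex-ai"
    console.log(splitModelProvider("gpt-4o"));
    // model: "gpt-4o", provider: undefined (no "@" present)
    ```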

commit e49fe976d9bd00106c163766aca915b291cb4956
Merge: 14f75196 e49466fa
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Tue Nov 5 15:07:52 2024 +0800

    Merge pull request #5765 from ConnectAI-E/feature/onfinish

    feat: update real 'currentSession'

commit 14f751965f7a55bb9896f1a02c066591e8b22057
Merge: 820ab54e 0ec42338
Author: Dogtiti <499960698@qq.com>
Date:   Tue Nov 5 11:07:52 2024 +0800

    Merge pull request #5767 from ConnectAI-E/feat-glm

    chore: update readme

commit 0ec423389fa08e4e4b046db5ad147194622b6218
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Tue Nov 5 11:06:20 2024 +0800

    chore: update readme

commit 820ab54e2d8c0ce09c0b88d479b364f998b8d244
Merge: 0dc4071c a6c1eb27
Author: Dogtiti <499960698@qq.com>
Date:   Tue Nov 5 10:54:52 2024 +0800

    Merge pull request #5766 from ConnectAI-E/feature/add-claude-haiku3.5

    Feature/add claude haiku3.5

commit a6c1eb27a82f4f18b043a0c382d5f694f8bd63aa
Merge: 801dc412 0dc4071c
Author: lloydzhou <lloydzhou@qq.com>
Date:   Tue Nov 5 10:23:15 2024 +0800

    add claude 3.5 haiku

commit 0dc4071ccce53fed990598da494316a6b199a3ee
Merge: aef535f1 4d394971
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Tue Nov 5 01:10:06 2024 +0800

    Merge pull request #5464 from endless-learner/main

    Added 1-click deployment link for Alibaba Cloud.

commit 4d3949718a979ff7db1c10d30b7ab66793c95892
Author: Lloyd Zhou <lloydzhou@qq.com>
Date:   Tue Nov 5 01:09:27 2024 +0800

    merge main

commit aef535f1a7af3196a90cfc7f1887cfb432ea3d06
Merge: 686a80e7 fbb7a1e8
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 4 21:41:11 2024 +0800

    Merge pull request #5753 from ChatGPTNextWeb/feat-bt-doc

    Feat bt doc

commit 686a80e727f13bd828d37bb865742f801611d1e3
Merge: 5733e3c5 4b933708
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 4 21:37:34 2024 +0800

    Merge pull request #5764 from ChatGPTNextWeb/dependabot/npm_and_yarn/testing-library/react-16.0.1

    chore(deps-dev): bump @testing-library/react from 16.0.0 to 16.0.1

commit e49466fa054c702898780967812abe2dabd4ba6b
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 4 21:25:56 2024 +0800

    feat: update real 'currentSession'

commit 4b93370814b41e256de7cddc6264705883265d56
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Nov 4 10:24:30 2024 +0000

    chore(deps-dev): bump @testing-library/react from 16.0.0 to 16.0.1

    Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 16.0.0 to 16.0.1.
    - [Release notes](https://github.com/testing-library/react-testing-library/releases)
    - [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/testing-library/react-testing-library/compare/v16.0.0...v16.0.1)

    ---
    updated-dependencies:
    - dependency-name: "@testing-library/react"
      dependency-type: direct:development
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit 5733e3c588f7467e20cc7782b2327f149c7663e8
Merge: d66bfc63 44fc5b5c
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 4 17:16:44 2024 +0800

    Merge pull request #5759 from ConnectAI-E/feature/onfinish

    Feature/onfinish

commit 44fc5b5cbf44b7a362a916fbc3b1c3a34cc8e7cb
Author: Dogtiti <499960698@qq.com>
Date:   Mon Nov 4 17:00:45 2024 +0800

    fix: onfinish responseRes

commit 2d3f7c922f5a3e52da30f45b67a74f0df908e147
Author: Dogtiti <499960698@qq.com>
Date:   Wed Oct 16 15:17:08 2024 +0800

    fix: vision model dalle3

commit fe8cca3730ba261548dfc06ea20b6a0824a7b5da
Merge: adf97c6d d66bfc63
Author: GH Action - Upstream Sync <action@github.com>
Date:   Sat Nov 2 01:12:09 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit fbb7a1e853334a9de54034aa41a18119e4e86028
Author: weige <772752726@qq.com>
Date:   Fri Nov 1 18:20:16 2024 +0800

    fix

commit fb2c15567dc218edc1ce4f6bfac746a25607fd9d
Author: weige <772752726@qq.com>
Date:   Fri Nov 1 17:45:50 2024 +0800

    fix

commit c2c52a1f605f2eeeac1865a7342968a7cfc36cf6
Author: weige <772752726@qq.com>
Date:   Fri Nov 1 17:35:34 2024 +0800

    fix

commit 106ddc17cd3267201d4c620c50bfdce641e04ff0
Author: weige <772752726@qq.com>
Date:   Fri Nov 1 17:35:09 2024 +0800

    fix

commit 17d5209738a114db34484838c18786924430cc5c
Author: weige <772752726@qq.com>
Date:   Fri Nov 1 17:28:20 2024 +0800

    add bt install doc

commit d66bfc6352fd2aea34eaaeeb15e109206549b75e
Merge: 36bfa2ef 4d75b23e
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 1 14:16:50 2024 +0800

    Merge pull request #5752 from ConnectAI-E/feat-glm

    fix: ts error

commit 4d75b23ed1b41a042e28805e46ad2b5c8111cc3d
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Fri Nov 1 14:15:12 2024 +0800

    fix: ts error

commit 36bfa2ef7c4638c55fe12579f7751310db1f422c
Merge: 0581e372 afe12c21
Author: Dogtiti <499960698@qq.com>
Date:   Fri Nov 1 13:57:30 2024 +0800

    Merge pull request #5741 from ConnectAI-E/feat-glm

    feat: [#5714] support GLM

commit afe12c212e51bd2d27c5db5700f881c32a0bd3ba
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Fri Nov 1 13:53:43 2024 +0800

    chore: update

commit adf97c6d8bbce8c705437fb8251b87eea574e755
Merge: 7c466c9b 0581e372
Author: GH Action - Upstream Sync <action@github.com>
Date:   Fri Nov 1 01:18:59 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit 7a8d557ea37e9b02fc26d8416fc631f4b7adda56
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Thu Oct 31 11:37:19 2024 +0800

    chore: enable plugins

commit d3f0a77830073684dd8da25e34d5d8eb0a94ecdb
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Thu Oct 31 11:23:06 2024 +0800

    chore: update Provider

commit 0581e37236a11b7d7925be25fe44320445914f7e
Merge: a0fa4d7e 44383a8b
Author: Dogtiti <499960698@qq.com>
Date:   Thu Oct 31 11:19:34 2024 +0800

    Merge pull request #5744 from mrcore/main

    add claude-3-5-sonnet-latest and claude-3-opus-latest

commit 44383a8b331ed283f06213c5176bf60fe98bbcc0
Author: Core <mrcore@qq.com>
Date:   Thu Oct 31 11:00:45 2024 +0800

    add claude-3-5-sonnet-latest and claude-3-opus-latest

commit 7c466c9b9c29a1823a19c05aeba4ba2f348e8455
Merge: b0d28eb7 a0fa4d7e
Author: GH Action - Upstream Sync <action@github.com>
Date:   Thu Oct 31 01:14:28 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit a0fa4d7e7251c66e048825d770f8b107095191e7
Merge: d0bd1bf8 736cbdbd
Author: Dogtiti <499960698@qq.com>
Date:   Thu Oct 31 00:13:16 2024 +0800

    Merge pull request #5737 from hyiip/claude3.5

    add constant to claude 3.5 sonnet 20241022

commit d357b45e84eb773c2e0c142d0d849c4f20be2975
Author: DDMeaqua <lzlloveyou@vip.qq.com>
Date:   Wed Oct 30 19:24:03 2024 +0800

    feat: [#5714] support GLM

commit d0bd1bf8fd1a87ede56859b3d5b6a64e8ce461d2
Merge: 613d67ea 86ffa1e6
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Wed Oct 30 16:56:53 2024 +0800

    Merge pull request #5740 from yuxuan-ctrl/main

    feat: add code configuration for Alibaba-family models

commit 86ffa1e6430b0a34893665bb284130c1f144e399
Author: yuxuan-ctrl <714180720@qq.com>
Date:   Wed Oct 30 16:30:01 2024 +0800

    feat: add code configuration for Alibaba-family models

commit b0d28eb77ebec3a58f57605fb79fdf38e282e22a
Merge: 064e964d 613d67ea
Author: endless-learner <35006844+endless-learner@users.noreply.github.com>
Date:   Tue Oct 29 14:38:49 2024 -0700

    Merge branch 'main' into main

commit 736cbdbdd12d340e9b08b69724f9a1321befd645
Author: hyiip <hyiip@link.cuhk.edu.hk>
Date:   Wed Oct 30 02:18:41 2024 +0800

    add constant to claude 3.5 sonnet 20241022

commit 613d67eada996176335142cc2486bfe09be66700
Merge: 56bc77d2 89cea189
Author: Dogtiti <499960698@qq.com>
Date:   Tue Oct 29 19:39:59 2024 +0800

    Merge pull request #5729 from ConnectAI-E/feature/jest

    chore: improve jest

commit 89cea18955a6d3ac73d053014f9f0d70338623e8
Merge: 49d42bb4 56bc77d2
Author: Dogtiti <499960698@qq.com>
Date:   Tue Oct 29 19:26:52 2024 +0800

    Merge branch 'main' of https://github.com/ConnectAI-E/ChatGPT-Next-Web into feature/jest

commit 56bc77d20bc102af8166c48c8406f11f82cd4cae
Merge: 6d93d379 a4d7a2c6
Author: Dogtiti <499960698@qq.com>
Date:   Mon Oct 28 21:52:08 2024 +0800

    Merge pull request #5731 from ChatGPTNextWeb/dependabot/npm_and_yarn/testing-library/jest-dom-6.6.2

    Bump @testing-library/jest-dom from 6.4.8 to 6.6.2

commit 6d93d37963faf087bc21dea147fe2e9b0e96cee7
Merge: 4f496263 24df85cf
Author: Dogtiti <499960698@qq.com>
Date:   Mon Oct 28 21:51:59 2024 +0800

    Merge pull request #5732 from ChatGPTNextWeb/dependabot/npm_and_yarn/types/jest-29.5.14

    Bump @types/jest from 29.5.13 to 29.5.14

commit 24df85cf9d3ab2a307baa1539922c9463949ffa9
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Oct 28 10:31:34 2024 +0000

    Bump @types/jest from 29.5.13 to 29.5.14

    Bumps [@types/jest](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/jest) from 29.5.13 to 29.5.14.
    - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
    - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/jest)

    ---
    updated-dependencies:
    - dependency-name: "@types/jest"
      dependency-type: direct:development
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit a4d7a2c6e3ef4d325a8039b5dd5bb9445d496c02
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Oct 28 10:31:27 2024 +0000

    Bump @testing-library/jest-dom from 6.4.8 to 6.6.2

    Bumps [@testing-library/jest-dom](https://github.com/testing-library/jest-dom) from 6.4.8 to 6.6.2.
    - [Release notes](https://github.com/testing-library/jest-dom/releases)
    - [Changelog](https://github.com/testing-library/jest-dom/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/testing-library/jest-dom/compare/v6.4.8...v6.6.2)

    ---
    updated-dependencies:
    - dependency-name: "@testing-library/jest-dom"
      dependency-type: direct:development
      update-type: version-update:semver-minor
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit 49d42bb45d50141c6f7321ea650b0c5d58697591
Author: Dogtiti <499960698@qq.com>
Date:   Mon Oct 28 16:47:05 2024 +0800

    chore: improve jest

commit 4f49626303127cac6f66ab9386d5b5076ed4d1d1
Merge: 82994843 45db20c1
Author: Lloyd Zhou <lloydzhou@qq.com>
Date:   Sat Oct 26 12:09:09 2024 +0800

    Merge pull request #5722 from ElricLiu/main

    Update README.md

commit 45db20c1c37279ebfe610d75a80dc09a21a14c54
Author: ElricLiu <20209191+ElricLiu@users.noreply.github.com>
Date:   Sat Oct 26 11:16:43 2024 +0800

    Update README.md

commit 82994843f5f9923d8c1c7e4b9e4f6ab79fb17110
Merge: 1110a087 90ced928
Author: Lloyd Zhou <linzhou@huntcoder.com>
Date:   Fri Oct 25 20:34:15 2024 +0800

    Merge pull request #5719 from ConnectAI-E/hotfix/status_text_error

    hotfix for non-ISO-8859-1 statusText #5717

commit 1110a087a0268a7b8a523c547698b961dbd478b6
Merge: 4745706c f0b3e10a
Author: Dogtiti <499960698@qq.com>
Date:   Fri Oct 25 18:25:46 2024 +0800

    Merge pull request #5720 from ConnectAI-E/hotfix/gemini_invald_argument

    hotfix for gemini invalid argument #5715

commit f0b3e10a6caf55bf91325183b5ad84de2a05db04
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Oct 25 18:19:22 2024 +0800

    hotfix for gemini invalid argument #5715

commit f89872b833d27c48b33281e60157640037e17a99
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Oct 25 18:12:09 2024 +0800

    hotfix for gemini invalid argument #5715

commit 90ced9287626492898f2eb9bfd3b079171faf6ea
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Oct 25 18:05:29 2024 +0800

    update

commit 2c745590101b5201c677243f151616cb7023186e
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Oct 25 18:02:51 2024 +0800

    hotfix

commit e3ca7e8b4433bea43376035b9417fe233fe5f6f0
Author: lloydzhou <lloydzhou@qq.com>
Date:   Fri Oct 25 17:52:08 2024 +0800

    hotfix for non-ISO-8859-1 statusText #5717

commit 4745706c42a390117e5e0f700af3d5f06e18f312
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Oct 24 15:32:27 2024 +0800

    update version to v2.15.6

commit 801dc412f99937dfd64a895309d9304429d94cac
Author: lloydzhou <lloydzhou@qq.com>
Date:   Thu Oct 24 15:28:05 2024 +0800

    add claude-3.5-haiku

commit c7c2c0211a8b8906dc8b2de695bb74c6b22da92e
Merge: 06f897f3 65bb962f
Author: Dogtiti <499960698@qq.com>
Date:   Wed Oct 23 14:13:17 2024 +0800

    Merge pull request #5704 from ConnectAI-E/feature/xai

    xAI support

commit 65bb962fc0b6eaa0cb1e15451d954df216b1956f
Author: lloydzhou <lloydzhou@qq.com>
Date:   Wed Oct 23 12:00:59 2024 +0800

    hotfix

commit e791cd441d544a18126ddb825651d0e627402…