Conversation

@Aaaaash Aaaaash commented Nov 4, 2025

Types

  • 🎉 New Features
  • 🐛 Bug Fixes
  • 📚 Documentation Changes
  • 💄 Code Style Changes
  • 🪚 Refactors
  • 🚀 Performance Improvements
  • 🏗️ Build System
  • ⏱ Tests
  • 🧹 Chores
  • Other Changes

Background or solution

Changelog

Summary by CodeRabbit

Release Notes

  • New Features

    • Editor performance optimizations for large files (>10MB): some rendering features are automatically disabled to keep editing smooth
    • A warning is now shown when opening files larger than 50MB
  • Improvements

    • Improved the WebSocket data-transfer mechanism with asynchronous buffering and chunked sending for better throughput and stability
    • Enhanced the frame decoder with stronger asynchronous processing and error handling

Aaaaash commented Nov 4, 2025

/next

opensumi bot commented Nov 4, 2025

🎉 PR Next publish successful!

3.9.1-next-1762258086.0

coderabbitai bot commented Nov 4, 2025

Walkthrough

This change refactors the connection-layer architecture, introducing asynchronous buffered decoding, a Promise-based chunked send queue, editor optimizations for large files, and the corresponding buffer-read optimizations and test expansions.

Changes

Cohort / File(s) | Change summary
Buffer and read optimizations
packages/connection/src/common/buffers/buffers.ts
Adds a shared buffer4Capacity buffer plus Buffers.slice4() and Cursor.read4() methods to optimize 4-byte reads and avoid repeated allocations (see the sketch after this table).
Asynchronous frame-decoder refactor
packages/connection/src/common/connection/drivers/frame-decoder.ts
Replaces synchronous frame processing with asynchronous, throttled processing; introduces an onData() callback API in place of the event-emitter pattern; adds a MAX_ITERATIONS guard with async yielding; improves error handling.
WebSocket connection queueing
packages/connection/src/common/connection/drivers/ws-websocket.ts
Adds the WSWebSocketConnection class with a Promise-based send(), a backpressure-aware send queue (MAX_QUEUE_SIZE, MAX_PENDING_SIZE), chunked sending, and frame-decoder integration for message handling.
Connection driver adjustments
packages/connection/src/common/connection/drivers/reconnecting-websocket.ts, packages/connection/src/common/connection/drivers/stream.ts
Reworks the WebSocket handling flow to integrate the decoder and queued sending; adjusts the stream driver's frame-construction lifecycle and adds error logging and resource cleanup.
Editor large-file optimization
packages/editor/src/browser/doc-model/large-file-optimizer.ts
Adds a large-file optimization module providing the ILargeFileOptimizationOptions interface, the shouldOptimizeForLargeFile() detection function, and the getLargeFileOptimizedEditorOptions() options generator.
Editor integration
packages/editor/src/browser/base-editor-wrapper.ts
Integrates the large-file optimization logic, automatically disabling semantic highlighting, word wrap, the minimap, CodeLens, and quick suggestions based on file size.
File scheme handling
packages/file-scheme/src/browser/file-scheme.contribution.ts
Adds a 50MB warning threshold for large-file detection while keeping the configurable maximum size limit (4GB by default).
Constants and utilities
packages/connection/src/common/constants.ts, packages/connection/src/common/fury-extends/one-of.ts, packages/connection/src/node/common-channel-handler.ts
Adds a chunkSize constant (256KB), strengthens OneOf.deserialize error handling, and simplifies the WSWebSocketConnection construction call.
Test coverage expansion
packages/connection/__test__/common/buffers.test.ts, packages/connection/__test__/common/frame-decoder.test.ts
Greatly expands buffer test coverage (boundaries, splicing, performance benchmarks) and adds asynchronous frame-decoding test scenarios (multi-frame streams, chunked input, concurrency, memory stability).
Integration test updates
packages/connection/__test__/browser/index.test.ts, packages/connection/__test__/node/index.test.ts, packages/connection/__test__/node/ws-channel.test.ts, packages/core-browser/__tests__/bootstrap/connection.test.ts
Updates tests to use the new connection API (onMessage callback, Promise-based send, WSWebSocketConnection wrapper) and adjusts timeout configuration.
Code cleanup
packages/components/src/recycle-tree/tree/TreeNode.ts
spliceArray now returns the original array when no modification is made; comment quoting adjusted.
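
To make the buffers change concrete, here is a minimal TypeScript sketch of the shared 4-byte scratch-buffer idea behind buffer4Capacity / slice4() / read4(). The helper name copyFourBytes and the chunk-list shape are assumptions for illustration, not the actual buffers.ts implementation.

// Sketch: reuse one shared 4-byte scratch buffer for length-field reads
// instead of allocating a new Uint8Array(4) on every call.
const buffer4Capacity = new Uint8Array(4);

// copyFourBytes is a hypothetical helper; the real Buffers.slice4()/Cursor.read4()
// operate on the library's internal chunk list and cursor position.
function copyFourBytes(chunks: Uint8Array[], offset: number): Uint8Array {
  let written = 0;
  for (const chunk of chunks) {
    if (offset >= chunk.length) {
      offset -= chunk.length;
      continue;
    }
    const take = Math.min(4 - written, chunk.length - offset);
    buffer4Capacity.set(chunk.subarray(offset, offset + take), written);
    written += take;
    offset = 0;
    if (written === 4) {
      break;
    }
  }
  return buffer4Capacity;
}

Because the scratch buffer is shared, the returned view is only valid until the next 4-byte read, which is presumably why the real API keeps it scoped to the cursor rather than handing it to callers long-term.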

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant WSConn as WSWebSocketConnection
    participant Queue as SendQueue
    participant Decoder as FrameDecoder
    participant Socket as WebSocket

    Note over Client,Socket: Send flow (new)
    Client->>WSConn: send(data)
    activate WSConn
    WSConn->>Queue: enqueue(data, resolve, reject)
    WSConn->>WSConn: processSendQueue()
    deactivate WSConn

    loop process queued data
        activate WSConn
        WSConn->>Decoder: construct(chunk)
        Decoder-->>WSConn: frame
        WSConn->>Socket: send(frame)
        WSConn->>WSConn: update pendingSize
        Note over WSConn: check queue/memory limits
        WSConn-->>Client: resolve(Promise)
        deactivate WSConn
    end

    Note over Client,Socket: Receive flow (new)
    Socket->>WSConn: message(ArrayBuffer)
    activate WSConn
    WSConn->>Decoder: push(data)
    deactivate WSConn

    activate Decoder
    loop processBuffers() async processing
        Decoder->>Decoder: readFrame()
        Decoder->>Decoder: onData(listener)
        Decoder-->>Client: frame data callback
    end
    deactivate Decoder
sequenceDiagram
    participant Editor
    participant Wrapper as EditorWrapper
    participant Optimizer as LargeFileOptimizer
    participant Monaco as MonacoEditor

    Note over Editor,Monaco: Large-file optimization flow (new)
    Editor->>Wrapper: initialize editor options
    activate Wrapper
    Wrapper->>Optimizer: shouldOptimizeForLargeFile(size)
    alt file > 10MB or line count > 50k
        Optimizer-->>Wrapper: true
        Wrapper->>Optimizer: getLargeFileOptimizedEditorOptions()
        Optimizer-->>Wrapper: optimized options
        Note over Wrapper: disable semantic highlighting, word wrap, minimap, etc.
    else normal file
        Optimizer-->>Wrapper: false
    end
    Wrapper->>Monaco: apply final options
    deactivate Wrapper

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Key review points
    • Correctness of the processingPromise concurrency control in the asynchronous frame-decoder refactor, including the MAX_ITERATIONS and hasMoreData loop logic (a minimal sketch of this pattern follows the list)
    • The backpressure mechanism for queued sending in the WebSocket connection (MAX_QUEUE_SIZE, MAX_PENDING_SIZE) and the completeness of its error-handling paths
    • Scope and compatibility impact of changing the return type of LengthFieldBasedFrameDecoder.construct() from Uint8Array to BinaryWriter
    • Impact on existing consumers of changing the connection constructor visibility from public to protected
    • Resource cleanup and memory-leak risk in the asynchronous processing flow (in particular Promise chains and listener registration)
    • Validation that the large-file optimization thresholds (10MB / 50k lines) are reasonable
    • Stability and environment dependence of the performance benchmarks in the expanded test cases
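
As a reference point for the first bullet above, the throttled-processing pattern the decoder is described as using looks roughly like the following. This is only a sketch: the class name, readFrame, and the listener handling are invented stand-ins for the real frame-decoder internals.

// Sketch: process at most MAX_ITERATIONS frames per turn, then yield to the
// event loop so a large backlog cannot block it; only one loop runs at a time.
const MAX_ITERATIONS = 1000;

class ThrottledFrameProcessor {
  private processing = false;
  private listeners: Array<(frame: Uint8Array) => void> = [];

  constructor(private readFrame: () => Uint8Array | undefined) {}

  onData(listener: (frame: Uint8Array) => void): void {
    this.listeners.push(listener);
  }

  async process(): Promise<void> {
    if (this.processing) {
      return; // a loop is already draining the buffer
    }
    this.processing = true;
    try {
      let iterations = 0;
      let frame = this.readFrame();
      while (frame !== undefined) {
        for (const listener of this.listeners) {
          listener(frame);
        }
        iterations++;
        if (iterations >= MAX_ITERATIONS) {
          // Yield before continuing with the remaining frames.
          await new Promise<void>((resolve) => setTimeout(resolve, 0));
          iterations = 0;
        }
        frame = this.readFrame();
      }
    } finally {
      this.processing = false;
    }
  }
}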

Possibly related PRs

  • PR #3988: this PR and the retrieved PR make overlapping, related code-level changes; both modify the same connection stack (buffers slice4/read4, asynchronous onData handling in LengthFieldBasedFrameDecoder, the Promise-based chunked send queue in WSWebSocketConnection, and the chunkSize constant).
  • PR #4477: directly related; it reverts the chunked sending, decoder, and queue changes introduced by this PR, along with the related buffer4/lengthField/read4 edits and the WSWebSocketConnection send signature.

Suggested labels

🎨 feature

Suggested reviewers

  • erha19
  • Ricbet

Pre-merge checks and finishing touches

✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title "Feat/chunk ws payload" concisely summarizes the main change: adding a chunked transfer mechanism for WebSocket messages. The title is closely related to the commit content and covers the core change.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/chunk-ws-payload

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/connection/src/common/fury-extends/one-of.ts (1)

80-115: Consider adding similar error handling to the serialize function.

The serialize function lacks error handling for invalid kind values. If v.kind is not present in kindToIndex (line 81), index will be undefined, which leads to:

  • writer.uint8(undefined) possibly producing unexpected behavior
  • the switch statement failing silently and writing no serialized data
  • potential data corruption or hard-to-debug issues

For consistency with the error handling in deserialize, add validation.

Apply this diff to add the error handling:

 const serialize = (v: Writable) => {
   const index = kindToIndex[v.kind];
+
+  if (index === undefined) {
+    throw new Error('unknown kind: ' + v.kind);
+  }

   writer.reset();

   writer.uint8(index);
packages/file-scheme/src/browser/file-scheme.contribution.ts (1)

151-173: Fix the unreachable conditional branch

The current conditional contains a logic error. Because warningSize (50MB) is smaller than maxSize (4GB by default), the first condition captures every file larger than 50MB, so the else if branch on lines 161-166 can never execute.

For example:

  • A 60MB file triggers the first condition (60MB > 50MB) → the warning is shown
  • A 5GB file still triggers the first condition (5GB > 50MB) → the warning is shown, and the second condition is never evaluated

Apply this fix, which reverses the order of the conditions so maxSize is checked first:

 const warningSize = 50 * 1024 * 1024; // 50MB 警告阈值

-// 如果文件超过警告阈值,且没有明确要求跳过警告,显示警告页面
-if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
+// 如果文件超过最大限制,且没有明确要求跳过警告,显示警告页面
+if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
-// 如果文件超过最大限制,且没有明确要求跳过警告,显示警告页面
-else if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
+// 如果文件超过警告阈值,且没有明确要求跳过警告,显示警告页面
+else if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
 // 其他情况正常打开
 else {
   results.push({
     type: EditorOpenType.code,
     title: localize('editorOpenType.code'),
   });
 }

Alternatively, if the 50MB warning and the maximum limit should share one threshold, merge the conditions:

-const warningSize = 50 * 1024 * 1024; // 50MB 警告阈值
-
-// 如果文件超过警告阈值,且没有明确要求跳过警告,显示警告页面
-if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
-  results.push({
-    type: EditorOpenType.component,
-    componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
-  });
-}
-// 如果文件超过最大限制,且没有明确要求跳过警告,显示警告页面
-else if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
+// 如果文件超过大小限制,且没有明确要求跳过警告,显示警告页面
+if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
-// 其他情况正常打开
 else {
   results.push({
     type: EditorOpenType.code,
     title: localize('editorOpenType.code'),
   });
 }
🧹 Nitpick comments (2)
packages/core-browser/__tests__/bootstrap/connection.test.ts (1)

38-40: Consider whether the test delay duration is necessary

Replacing the inline implementation with the sleep utility is a good refactor. However, a 4-second delay is relatively long for a unit test and may slow down the overall test suite.

Consider verifying whether this delay can be shortened, or add a comment explaining why such a long wait is needed.

packages/editor/src/browser/doc-model/large-file-optimizer.ts (1)

50-56: Preserve existing properties on nested objects.

Assigning minimap and hover directly overwrites any other properties that may already exist in baseOptions. Use object spread to preserve the existing configuration.

Apply the following change to preserve existing properties:

   if (optimizations.disableMinimap) {
-    optimizedOptions.minimap = { enabled: false };
+    optimizedOptions.minimap = { ...optimizedOptions.minimap, enabled: false };
   }

   if (optimizations.disableHover) {
-    optimizedOptions.hover = { enabled: false };
+    optimizedOptions.hover = { ...optimizedOptions.hover, enabled: false };
   }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c72ac45 and 383a193.

📒 Files selected for processing (18)
  • packages/components/src/recycle-tree/tree/TreeNode.ts (2 hunks)
  • packages/connection/__test__/browser/index.test.ts (2 hunks)
  • packages/connection/__test__/common/buffers.test.ts (2 hunks)
  • packages/connection/__test__/common/frame-decoder.test.ts (4 hunks)
  • packages/connection/__test__/node/index.test.ts (2 hunks)
  • packages/connection/__test__/node/ws-channel.test.ts (3 hunks)
  • packages/connection/src/common/buffers/buffers.ts (3 hunks)
  • packages/connection/src/common/connection/drivers/frame-decoder.ts (6 hunks)
  • packages/connection/src/common/connection/drivers/reconnecting-websocket.ts (3 hunks)
  • packages/connection/src/common/connection/drivers/stream.ts (2 hunks)
  • packages/connection/src/common/connection/drivers/ws-websocket.ts (2 hunks)
  • packages/connection/src/common/constants.ts (1 hunks)
  • packages/connection/src/common/fury-extends/one-of.ts (1 hunks)
  • packages/connection/src/node/common-channel-handler.ts (1 hunks)
  • packages/core-browser/__tests__/bootstrap/connection.test.ts (2 hunks)
  • packages/editor/src/browser/base-editor-wrapper.ts (2 hunks)
  • packages/editor/src/browser/doc-model/large-file-optimizer.ts (1 hunks)
  • packages/file-scheme/src/browser/file-scheme.contribution.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (12)
packages/connection/__test__/browser/index.test.ts (1)
packages/connection/src/common/connection/drivers/ws-websocket.ts (1)
  • WSWebSocketConnection (17-131)
packages/editor/src/browser/base-editor-wrapper.ts (1)
packages/editor/src/browser/doc-model/large-file-optimizer.ts (2)
  • shouldOptimizeForLargeFile (73-86)
  • getLargeFileOptimizedEditorOptions (16-68)
packages/connection/__test__/node/index.test.ts (4)
packages/connection/src/common/serializer/index.ts (1)
  • wrapSerializer (9-24)
packages/connection/src/common/serializer/fury.ts (1)
  • furySerializer (67-67)
packages/connection/src/common/ws-channel.ts (1)
  • WSChannel (96-322)
packages/connection/src/common/channel/types.ts (1)
  • ChannelMessage (1-9)
packages/editor/src/browser/doc-model/large-file-optimizer.ts (1)
packages/monaco/src/browser/monaco-api/editor.ts (1)
  • IEditorOptions (132-132)
packages/connection/src/node/common-channel-handler.ts (1)
packages/connection/src/common/connection/drivers/ws-websocket.ts (1)
  • WSWebSocketConnection (17-131)
packages/connection/src/common/connection/drivers/ws-websocket.ts (2)
packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  • LengthFieldBasedFrameDecoder (26-230)
packages/connection/src/common/constants.ts (1)
  • chunkSize (6-6)
packages/connection/src/common/connection/drivers/reconnecting-websocket.ts (2)
packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  • LengthFieldBasedFrameDecoder (26-230)
packages/connection/src/common/constants.ts (1)
  • chunkSize (6-6)
packages/connection/__test__/common/buffers.test.ts (1)
packages/connection/src/common/buffers/buffers.ts (3)
  • Buffers (20-230)
  • slice (38-74)
  • cursor (222-224)
packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
packages/connection/src/common/buffers/buffers.ts (1)
  • Buffers (20-230)
packages/connection/src/common/connection/drivers/stream.ts (1)
packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  • LengthFieldBasedFrameDecoder (26-230)
packages/core-browser/__tests__/bootstrap/connection.test.ts (1)
packages/utils/src/async.ts (1)
  • sleep (603-605)
packages/connection/__test__/common/frame-decoder.test.ts (1)
packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  • LengthFieldBasedFrameDecoder (26-230)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: 🚀🚀🚀 Next Version for pull request
  • GitHub Check: build (macos-latest, 20.x)
  • GitHub Check: build (ubuntu-latest, 20.x)
  • GitHub Check: ubuntu-latest, Node.js 20.x
  • GitHub Check: unittest (ubuntu-latest, 18.x, node)
  • GitHub Check: unittest (ubuntu-latest, 18.x, jsdom)
  • GitHub Check: unittest (macos-latest, 18.x, jsdom)
  • GitHub Check: unittest (macos-latest, 18.x, node)
🔇 Additional comments (7)
packages/core-browser/__tests__/bootstrap/connection.test.ts (1)

3-3: Good refactor: uses the standardized sleep utility

Introducing the sleep utility in place of the inline Promise + setTimeout implementation improves code consistency and readability.
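
For context, the adopted helper presumably has roughly the following shape; this is a sketch, and the actual implementation lives in packages/utils/src/async.ts. The test name is hypothetical.

// Sketch of a Promise-based sleep helper and how a test might use it.
const sleep = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical usage replacing an inline `await new Promise((r) => setTimeout(r, 4000))`.
it('reconnects after the server restarts', async () => {
  await sleep(4000);
  // ...assertions on the reconnected channel...
});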

packages/connection/src/common/fury-extends/one-of.ts (1)

71-73: The improved error handling looks good.

This default branch correctly guards against undefined behavior from unknown indices. Without the check, v.kind = kinds[idx] on line 76 would assign undefined whenever the index falls outside 0-7, leading to silent data errors.
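
A minimal sketch of the pattern being praised, with an illustrative kinds list (the real values live in one-of.ts and are not reproduced here):

// Sketch: guard the index-to-kind lookup instead of silently assigning undefined.
const kinds = ['ping', 'pong', 'open', 'data'] as const; // illustrative only

function kindFromIndex(idx: number): (typeof kinds)[number] {
  if (idx < 0 || idx >= kinds.length) {
    // Without this guard, kinds[idx] would be undefined and the error would
    // only surface much later as corrupted channel messages.
    throw new Error('unknown index: ' + idx);
  }
  return kinds[idx];
}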

packages/components/src/recycle-tree/tree/TreeNode.ts (1)

41-51: The optimization is correct, with no compatibility risk.

Script verification shows spliceArray has 7 call sites:

  • 5 in TreeNode.ts (lines 1194, 1238, 1325, 1357): all pass the result to setFlattenedBranch()
  • 2 in RecycleTree.tsx (lines 866, 870): both assign directly to this.filterFlattenBranch

All callers consume the return value immediately; there are no reference-equality checks or reference-aliasing concerns. The early-return optimization, which returns the original array reference when nothing is modified, is semantically equivalent and safe.

In particular, at line 866, even when collapesArray.length is 0, the early return yields the unchanged array, which is the correct no-op behavior.
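
For context, the early-return behavior being verified amounts to something like the following sketch; the signature and copy semantics are assumptions rather than the exact TreeNode.ts code.

// Sketch: return the original array untouched when the splice is a no-op,
// otherwise splice on a copy so callers always get a fully updated array.
function spliceArray<T>(arr: T[], start: number, deleteCount = 0, items?: T[]): T[] {
  if (deleteCount === 0 && (!items || items.length === 0)) {
    return arr; // no modification: hand back the same reference
  }
  const copy = arr.slice();
  copy.splice(start, deleteCount, ...(items ?? []));
  return copy;
}

Since every caller consumes the result immediately (for example by passing it to setFlattenedBranch()), returning the original reference in the no-op case changes nothing observable.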

packages/editor/src/browser/doc-model/large-file-optimizer.ts (2)

3-11: The interface definition is reasonable.

The optimization options interface is cleanly designed; all properties are optional booleans, in line with best practices for configuration interfaces.
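
Based on the flags used later in base-editor-wrapper.ts, the interface presumably looks roughly like this sketch (not the exact declaration from large-file-optimizer.ts):

// Sketch of ILargeFileOptimizationOptions: every flag is an optional boolean.
export interface ILargeFileOptimizationOptions {
  optimizedMode?: boolean;
  disableSemanticHighlighting?: boolean;
  disableWordWrap?: boolean;
  disableMinimap?: boolean;
  disableHover?: boolean;
  disableCodeLens?: boolean;
  disableQuickSuggestions?: boolean;
}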


88-101: Implementation is correct.

The metadata extraction logic is straightforward and correctly maps metadata onto the optimization options interface.

packages/editor/src/browser/base-editor-wrapper.ts (1)

14-14: The import statement is correct.

The utilities required for large-file optimization are correctly imported from the new module.

packages/connection/src/common/connection/drivers/stream.ts (1)

24-34: Defer releasing the frame buffer handle to avoid data corruption
The handle is currently disposed immediately after write is called, but the underlying Writable may still use this memory asynchronously; once the handle is returned to the pool, later writes will reuse and overwrite the same buffer, corrupting data that is still being sent or triggering hard-to-diagnose protocol errors. This is a serious defect caused by mixing synchronous and asynchronous lifetimes: release the handle only after the write callback fires, and also release it as a fallback when a synchronous exception is thrown.

   send(data: Uint8Array): void {
-    const handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
-    try {
-      this.writable.write(handle.get(), (error) => {
-        if (error) {
-          console.error('Failed to write data:', error);
-        }
-      });
-    } finally {
-      handle.dispose();
-    }
+    const handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
+    const payload = handle.get();
+    const release = () => {
+      handle.dispose();
+    };
+    const onWrite = (error?: Error | null) => {
+      if (error) {
+        console.error('Failed to write data:', error);
+      }
+      release();
+    };
+    try {
+      this.writable.write(payload, onWrite);
+    } catch (error) {
+      release();
+      throw error;
+    }
   }

Comment on lines +384 to +406
it('should handle slicing 1MB data under 50ms', () => {
const start = performance.now();
const slice = largeBuffer.slice(0, 1024 * 1024);
const duration = performance.now() - start;

expect(duration).toBeLessThan(50);
expect(slice.byteLength).toBe(1024 * 1024);
});

it('should handle 1000 splices under 1s', () => {
const buf = createEnhanced(
new Array(10000).fill(0).map((_, i) => i),
[100, 900, 9000],
);

const start = performance.now();
for (let i = 0; i < 1000; i++) {
buf.splice(i % 100, 5, new Uint8Array([i]));
}
const duration = performance.now() - start;

expect(duration).toBeLessThan(1000);
});

⚠️ Potential issue | 🟠 Major

Remove the brittle performance assertions

Tying the unit tests to very low latencies with expect(duration).toBeLessThan(50) and expect(duration).toBeLessThan(1000) means they will fail randomly as soon as CI or a local machine is under moderate load. Performance benchmarks like these belong in a benchmark suite or in monitoring, not in functional unit tests. Either assert only on correctness of the results, or move the performance checks out of the unit-test flow.

-      expect(duration).toBeLessThan(50);
       expect(slice.byteLength).toBe(1024 * 1024);
-      expect(duration).toBeLessThan(1000);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
it('should handle slicing 1MB data under 50ms', () => {
const start = performance.now();
const slice = largeBuffer.slice(0, 1024 * 1024);
const duration = performance.now() - start;
expect(duration).toBeLessThan(50);
expect(slice.byteLength).toBe(1024 * 1024);
});
it('should handle 1000 splices under 1s', () => {
const buf = createEnhanced(
new Array(10000).fill(0).map((_, i) => i),
[100, 900, 9000],
);
const start = performance.now();
for (let i = 0; i < 1000; i++) {
buf.splice(i % 100, 5, new Uint8Array([i]));
}
const duration = performance.now() - start;
expect(duration).toBeLessThan(1000);
});
it('should handle slicing 1MB data under 50ms', () => {
const start = performance.now();
const slice = largeBuffer.slice(0, 1024 * 1024);
const duration = performance.now() - start;
expect(slice.byteLength).toBe(1024 * 1024);
});
it('should handle 1000 splices under 1s', () => {
const buf = createEnhanced(
new Array(10000).fill(0).map((_, i) => i),
[100, 900, 9000],
);
const start = performance.now();
for (let i = 0; i < 1000; i++) {
buf.splice(i % 100, 5, new Uint8Array([i]));
}
const duration = performance.now() - start;
});
🤖 Prompt for AI Agents
packages/connection/__test__/common/buffers.test.ts lines 384-406: the tests
include brittle timing assertions (expect(duration).toBeLessThan(...)) that
cause flaky CI; remove the duration-based expect calls and keep only the
correctness assertions (byteLength and splice effects), or move the timing
checks into a separate benchmark test (or mark them skipped) outside regular
unit tests; if you want to preserve metrics, record/log the duration without
asserting so functional tests stop failing under load.

Comment on lines 150 to 154
dispose(): void {
// do nothing
this.socket.removeEventListener('message', this.arrayBufferHandler);
this.sendQueue = [];
this.sending = false;
}

⚠️ Potential issue | 🟠 Major

Reject pending send requests in dispose

dispose currently drops sendQueue outright, leaving the Promises returned by callers of send() hanging forever; callers can neither detect the failure nor recover, which is a serious reliability problem. Reject each queued request before clearing the queue, and also reset pendingSize and release the decoder state.

   dispose(): void {
     this.socket.removeEventListener('message', this.arrayBufferHandler);
-    this.sendQueue = [];
-    this.sending = false;
+    for (const request of this.sendQueue) {
+      request.reject(new Error('Connection disposed before send completed'));
+    }
+    this.sendQueue = [];
+    this.pendingSize = 0;
+    this.sending = false;
+    this.decoder.dispose();
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
dispose(): void {
// do nothing
this.socket.removeEventListener('message', this.arrayBufferHandler);
this.sendQueue = [];
this.sending = false;
}
dispose(): void {
this.socket.removeEventListener('message', this.arrayBufferHandler);
for (const request of this.sendQueue) {
request.reject(new Error('Connection disposed before send completed'));
}
this.sendQueue = [];
this.pendingSize = 0;
this.sending = false;
this.decoder.dispose();
}
🤖 Prompt for AI Agents
In packages/connection/src/common/connection/drivers/reconnecting-websocket.ts
around lines 150 to 154, dispose currently just drops the sendQueue which leaves
callers' send() Promises hanging; before clearing the queue iterate the queued
entries and call their reject callback with a clear Error (e.g. "WebSocket
disposed"), then clear the array, set sending = false and reset pendingSize = 0,
remove the message listener as before, and also release/reset any decoder state
(call the decoder's release/reset method or null it) so no held resources
remain.

Comment on lines +39 to +78
while (this.sendQueue.length > 0) {
const { data, resolve, reject } = this.sendQueue[0];
let handle: { get: () => Uint8Array; dispose: () => void } | null = null;

try {
handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
const packet = handle.get();

for (let i = 0; i < packet.byteLength; i += chunkSize) {
if (!this.isOpen()) {
throw new Error('Connection closed while sending');
}

await new Promise<void>((resolve, reject) => {
const chunk = packet.subarray(i, Math.min(i + chunkSize, packet.byteLength));
this.socket.send(chunk, { binary: true }, (error?: Error) => {
if (error) {
reject(error);
} else {
resolve();
}
});
});
}

resolve();
} catch (error) {
reject(error instanceof Error ? error : new Error(String(error)));
} finally {
if (handle) {
try {
handle.dispose();
} catch (error) {
console.warn('[WSWebSocket] Error disposing handle:', error);
}
}
this.pendingSize -= this.sendQueue[0].data.byteLength;
this.sendQueue.shift();
}
}

⚠️ Potential issue | 🔴 Critical

Fix the race between dispose and the send queue

While processSendQueue is awaiting socket.send, an external call to dispose() clears the queue and resets pendingSize to 0. When control returns, reading this.sendQueue[0] here yields undefined, which throws a TypeError and crashes the entire send flow.

Cache the current queue item, and in the finally block check whether that item is still at the head of the queue before decrementing and removing it, while also clamping pendingSize at zero:

-      const { data, resolve, reject } = this.sendQueue[0];
+      const current = this.sendQueue[0];
+      if (!current) {
+        break;
+      }
+      const { data, resolve, reject } = current;
@@
-        this.pendingSize -= this.sendQueue[0].data.byteLength;
-        this.sendQueue.shift();
+        this.pendingSize = Math.max(0, this.pendingSize - current.data.byteLength);
+        if (this.sendQueue[0] === current) {
+          this.sendQueue.shift();
+        }

This way, even if dispose() clears the queue first, no dangling reference is accessed.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
while (this.sendQueue.length > 0) {
const { data, resolve, reject } = this.sendQueue[0];
let handle: { get: () => Uint8Array; dispose: () => void } | null = null;
try {
handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
const packet = handle.get();
for (let i = 0; i < packet.byteLength; i += chunkSize) {
if (!this.isOpen()) {
throw new Error('Connection closed while sending');
}
await new Promise<void>((resolve, reject) => {
const chunk = packet.subarray(i, Math.min(i + chunkSize, packet.byteLength));
this.socket.send(chunk, { binary: true }, (error?: Error) => {
if (error) {
reject(error);
} else {
resolve();
}
});
});
}
resolve();
} catch (error) {
reject(error instanceof Error ? error : new Error(String(error)));
} finally {
if (handle) {
try {
handle.dispose();
} catch (error) {
console.warn('[WSWebSocket] Error disposing handle:', error);
}
}
this.pendingSize -= this.sendQueue[0].data.byteLength;
this.sendQueue.shift();
}
}
while (this.sendQueue.length > 0) {
const current = this.sendQueue[0];
if (!current) {
break;
}
const { data, resolve, reject } = current;
let handle: { get: () => Uint8Array; dispose: () => void } | null = null;
try {
handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
const packet = handle.get();
for (let i = 0; i < packet.byteLength; i += chunkSize) {
if (!this.isOpen()) {
throw new Error('Connection closed while sending');
}
await new Promise<void>((resolve, reject) => {
const chunk = packet.subarray(i, Math.min(i + chunkSize, packet.byteLength));
this.socket.send(chunk, { binary: true }, (error?: Error) => {
if (error) {
reject(error);
} else {
resolve();
}
});
});
}
resolve();
} catch (error) {
reject(error instanceof Error ? error : new Error(String(error)));
} finally {
if (handle) {
try {
handle.dispose();
} catch (error) {
console.warn('[WSWebSocket] Error disposing handle:', error);
}
}
this.pendingSize = Math.max(0, this.pendingSize - current.data.byteLength);
if (this.sendQueue[0] === current) {
this.sendQueue.shift();
}
}
}
🤖 Prompt for AI Agents
In packages/connection/src/common/connection/drivers/ws-websocket.ts around
lines 39 to 78, cache the current queue item at the top of the loop (const item
= this.sendQueue[0]) and use that cached reference for resolve/reject and
pendingSize logic so we never read this.sendQueue[0] after an external dispose()
may have cleared the queue; in the finally block check whether the queue still
contains that exact item (e.g. this.sendQueue[0] === item) before decrementing
pendingSize and shifting the queue, and when decrementing ensure pendingSize
never goes below zero (pendingSize = Math.max(0, pendingSize -
item.data.byteLength)); this prevents TypeError when dispose() clears the queue
while awaiting socket.send.

Comment on lines +183 to +207
// 大文件优化逻辑
if (this.currentDocumentModel && this.currentUri) {
const content = this.currentDocumentModel.getText();
const contentLength = content.length;

// 检查是否需要优化
const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);

// 检查是否有明确的优化配置(从资源元数据)
// 注意:这里无法直接访问resource metadata,所以主要依赖自动检测

if (needsOptimization) {
const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
optimizedMode: true,
disableSemanticHighlighting: true,
disableWordWrap: true,
disableMinimap: true,
disableHover: contentLength > 20 * 1024 * 1024, // 20MB以上禁用hover
disableCodeLens: true,
disableQuickSuggestions: contentLength > 30 * 1024 * 1024, // 30MB以上禁用建议
});

editorOptions = optimizedEditorOptions;
}
}

⚠️ Potential issue | 🟠 Major

Character count and byte size are semantically mismatched.

contentLength on line 186 is a character count (content.length), but when it is passed to shouldOptimizeForLargeFile on line 189, the function signature expects fileSizeBytes (a size in bytes). In addition, lines 200 and 202 compare the character count against byte thresholds (20MB, 30MB), which is semantically inconsistent.

Impact analysis (see the short demonstration below):

  • For ASCII text, character count ≈ byte count, so the impact is small
  • For Unicode text containing multi-byte characters (e.g. Chinese), the actual byte size is much larger than the character count
  • This can make the large-file optimization trigger inaccurately
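
A quick demonstration of the mismatch, using TextEncoder (a standard Web/Node API); the strings are arbitrary examples:

// Character count and UTF-8 byte size diverge for multi-byte text.
const ascii = 'a'.repeat(1024);
const chinese = '汉'.repeat(1024);

console.log(ascii.length, new TextEncoder().encode(ascii).length);     // 1024 1024
console.log(chinese.length, new TextEncoder().encode(chinese).length); // 1024 3072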

Suggested approaches:

Option 1 (recommended): use the actual byte size

-    const content = this.currentDocumentModel.getText();
-    const contentLength = content.length;
+    const content = this.currentDocumentModel.getText();
+    const contentSizeBytes = new Blob([content]).size; // 或使用 new TextEncoder().encode(content).length

     // 检查是否需要优化
-    const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);
+    const needsOptimization = shouldOptimizeForLargeFile(contentSizeBytes, content);

     if (needsOptimization) {
       const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
         optimizedMode: true,
         disableSemanticHighlighting: true,
         disableWordWrap: true,
         disableMinimap: true,
-        disableHover: contentLength > 20 * 1024 * 1024,
+        disableHover: contentSizeBytes > 20 * 1024 * 1024,
         disableCodeLens: true,
-        disableQuickSuggestions: contentLength > 30 * 1024 * 1024,
+        disableQuickSuggestions: contentSizeBytes > 30 * 1024 * 1024,
       });

Option 2: explicitly use character length and adjust the thresholds

-    const contentLength = content.length;
+    const contentLengthChars = content.length;

-    const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);
+    // 注意:这里传入的是字符数,不是字节数
+    const needsOptimization = shouldOptimizeForLargeFile(contentLengthChars, content);

     if (needsOptimization) {
       const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
         optimizedMode: true,
         disableSemanticHighlighting: true,
         disableWordWrap: true,
         disableMinimap: true,
-        disableHover: contentLength > 20 * 1024 * 1024, // 20MB
+        disableHover: contentLengthChars > 10 * 1024 * 1024, // 约10M字符
         disableCodeLens: true,
-        disableQuickSuggestions: contentLength > 30 * 1024 * 1024, // 30MB
+        disableQuickSuggestions: contentLengthChars > 15 * 1024 * 1024, // 约15M字符
       });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// 大文件优化逻辑
if (this.currentDocumentModel && this.currentUri) {
const content = this.currentDocumentModel.getText();
const contentLength = content.length;
// 检查是否需要优化
const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);
// 检查是否有明确的优化配置(从资源元数据)
// 注意:这里无法直接访问resource metadata,所以主要依赖自动检测
if (needsOptimization) {
const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
optimizedMode: true,
disableSemanticHighlighting: true,
disableWordWrap: true,
disableMinimap: true,
disableHover: contentLength > 20 * 1024 * 1024, // 20MB以上禁用hover
disableCodeLens: true,
disableQuickSuggestions: contentLength > 30 * 1024 * 1024, // 30MB以上禁用建议
});
editorOptions = optimizedEditorOptions;
}
}
// 大文件优化逻辑
if (this.currentDocumentModel && this.currentUri) {
const content = this.currentDocumentModel.getText();
const contentSizeBytes = new Blob([content]).size; // 或使用 new TextEncoder().encode(content).length
// 检查是否需要优化
const needsOptimization = shouldOptimizeForLargeFile(contentSizeBytes, content);
// 检查是否有明确的优化配置(从资源元数据)
// 注意:这里无法直接访问resource metadata,所以主要依赖自动检测
if (needsOptimization) {
const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
optimizedMode: true,
disableSemanticHighlighting: true,
disableWordWrap: true,
disableMinimap: true,
disableHover: contentSizeBytes > 20 * 1024 * 1024, // 20MB以上禁用hover
disableCodeLens: true,
disableQuickSuggestions: contentSizeBytes > 30 * 1024 * 1024, // 30MB以上禁用建议
});
editorOptions = optimizedEditorOptions;
}
}
🤖 Prompt for AI Agents
In packages/editor/src/browser/base-editor-wrapper.ts around lines 183 to 207,
content.length (character count) is being used where byte sizes are intended;
change to compute the actual byte size (e.g. const contentByteLength = typeof
TextEncoder !== "undefined" ? new TextEncoder().encode(content).length :
Buffer.byteLength(content, "utf8")), use contentByteLength when calling
shouldOptimizeForLargeFile and when comparing against the 20MB/30MB thresholds
(20 * 1024 * 1024 and 30 * 1024 * 1024), and pass the updated byte-based values
into disableHover/disableQuickSuggestions and other size-dependent flags so all
size checks consistently use bytes.

return true;
}

if (content && content.split('\n').length > LINE_THRESHOLD) {

🛠️ Refactor suggestion | 🟠 Major

Optimize line-counting performance.

content.split('\n').length allocates a full array for large files, which is unnecessary overhead inside a performance-oriented function. Use a more efficient counting method.

Apply the following change to improve performance:

-  if (content && content.split('\n').length > LINE_THRESHOLD) {
+  if (content && (content.match(/\n/g) || []).length + 1 > LINE_THRESHOLD) {
     return true;
   }
Or count with a loop and an early exit (possibly faster for very large files):

if (content) {
  let lineCount = 1;
  for (let i = 0; i < content.length; i++) {
    if (content[i] === '\n') {
      lineCount++;
      if (lineCount > LINE_THRESHOLD) {
        return true;
      }
    }
  }
}
🤖 Prompt for AI Agents
In packages/editor/src/browser/doc-model/large-file-optimizer.ts around line 81,
the current check uses content.split('\n').length which allocates a full array
for very large files; change it to an efficient line-count loop with early exit
(either iterate searching for '\n' using indexOf in a while loop or a
char-by-char loop that increments a counter and returns true as soon as
lineCount > LINE_THRESHOLD) so no full-array allocation occurs and the function
short-circuits as soon as the threshold is exceeded.
