Feat/chunk ws payload #4660
base: main
Conversation
…dispatching in WSChannel
…in WebSocket connections
/next

🎉 PR Next publish successful! 3.9.1-next-1762258086.0
Walkthrough
This change refactors the connection layer: it introduces asynchronous buffered decoding, a Promise-based chunked send queue, editor optimizations for large files, and the corresponding buffer-read optimizations and expanded tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant WSConn as WSWebSocketConnection
    participant Queue as SendQueue
    participant Decoder as FrameDecoder
    participant Socket as WebSocket
    Note over Client,Socket: Send flow (new)
    Client->>WSConn: send(data)
    activate WSConn
    WSConn->>Queue: enqueue(data, resolve, reject)
    WSConn->>WSConn: processSendQueue()
    deactivate WSConn
    loop Process queued data
        activate WSConn
        WSConn->>Decoder: construct(chunk)
        Decoder-->>WSConn: frame
        WSConn->>Socket: send(frame)
        WSConn->>WSConn: update pendingSize
        Note over WSConn: check queue/memory limits
        WSConn-->>Client: resolve(Promise)
        deactivate WSConn
    end
    Note over Client,Socket: Receive flow (new)
    Socket->>WSConn: message(ArrayBuffer)
    activate WSConn
    WSConn->>Decoder: push(data)
    deactivate WSConn
    activate Decoder
    loop processBuffers() async processing
        Decoder->>Decoder: readFrame()
        Decoder->>Decoder: onData(listener)
        Decoder-->>Client: frame data callback
    end
    deactivate Decoder
```
```mermaid
sequenceDiagram
    participant Editor
    participant Wrapper as EditorWrapper
    participant Optimizer as LargeFileOptimizer
    participant Monaco as MonacoEditor
    Note over Editor,Monaco: Large-file optimization flow (new)
    Editor->>Wrapper: initialize editor options
    activate Wrapper
    Wrapper->>Optimizer: shouldOptimizeForLargeFile(size)
    alt file > 10MB or > 50k lines
        Optimizer-->>Wrapper: true
        Wrapper->>Optimizer: getLargeFileOptimizedEditorOptions()
        Optimizer-->>Wrapper: optimized options
        Note over Wrapper: disable semantic highlighting, word wrap, minimap, etc.
    else normal file
        Optimizer-->>Wrapper: false
    end
    Wrapper->>Monaco: apply final options
    deactivate Wrapper
```
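The send flow above can be sketched as a standalone queue: each send() enqueues a deferred Promise, and a single drain loop slices each payload into fixed-size chunks before settling the caller's Promise. This is an illustrative sketch, not the PR's actual WSWebSocketConnection code; ChunkedSendQueue, SendRequest, CHUNK_SIZE, and the injected sendChunk callback are all assumed names.

```typescript
// Tiny chunk size for demonstration only; the real constant is far larger.
const CHUNK_SIZE = 4;

interface SendRequest {
  data: Uint8Array;
  resolve: () => void;
  reject: (err: Error) => void;
}

class ChunkedSendQueue {
  private queue: SendRequest[] = [];
  private sending = false;

  constructor(private sendChunk: (chunk: Uint8Array) => Promise<void>) {}

  // Enqueue a payload; the returned Promise settles when every chunk is sent.
  send(data: Uint8Array): Promise<void> {
    return new Promise<void>((resolve, reject) => {
      this.queue.push({ data, resolve, reject });
      void this.process();
    });
  }

  private async process(): Promise<void> {
    if (this.sending) {
      return; // a drain loop is already running
    }
    this.sending = true;
    while (this.queue.length > 0) {
      const request = this.queue[0];
      try {
        // Slice the payload into fixed-size chunks and send them in order.
        for (let i = 0; i < request.data.byteLength; i += CHUNK_SIZE) {
          await this.sendChunk(request.data.subarray(i, i + CHUNK_SIZE));
        }
        request.resolve();
      } catch (err) {
        request.reject(err instanceof Error ? err : new Error(String(err)));
      } finally {
        this.queue.shift();
      }
    }
    this.sending = false;
  }
}
```

Because the drain loop awaits each chunk, payloads are serialized in enqueue order and backpressure from the transport naturally throttles the queue.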
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/connection/src/common/fury-extends/one-of.ts (1)
80-115: Consider adding similar error handling in the serialize function.
The serialize function lacks error handling for an invalid kind value. If v.kind is not present in kindToIndex (line 81), index will be undefined, which causes:

- writer.uint8(undefined) may produce unexpected behavior
- the switch statement fails silently and writes no serialized data
- potential data corruption and hard-to-debug issues

For consistency with the error handling in deserialize, add validation.
Apply this diff to add the error handling:
```diff
 const serialize = (v: Writable) => {
   const index = kindToIndex[v.kind];
+
+  if (index === undefined) {
+    throw new Error('unknown kind: ' + v.kind);
+  }
   writer.reset();
   writer.uint8(index);
```

packages/file-scheme/src/browser/file-scheme.contribution.ts (1)
151-173: Fix the unreachable conditional logic
The current conditional contains a serious logic error. Because warningSize (50MB) is smaller than maxSize (default 4GB), the first condition captures every file larger than 50MB, so the else if branch on lines 161-166 can never execute. For example:

- a 60MB file triggers the first condition (60MB > 50MB) → the warning is shown
- a 5GB file still triggers the first condition (5GB > 50MB) → the warning is shown, and the second condition is never evaluated

Apply this fix, which reverses the order of the conditions so maxSize is checked first:
```diff
 const warningSize = 50 * 1024 * 1024; // 50MB warning threshold

-// If the file exceeds the warning threshold and skipping is not explicitly requested, show the warning page
-if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
+// If the file exceeds the maximum limit and skipping is not explicitly requested, show the warning page
+if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
-// If the file exceeds the maximum limit and skipping is not explicitly requested, show the warning page
-else if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
+// If the file exceeds the warning threshold and skipping is not explicitly requested, show the warning page
+else if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
 // Otherwise open normally
 else {
   results.push({
     type: EditorOpenType.code,
     title: localize('editorOpenType.code'),
   });
 }
```

Alternatively, if 50MB and the maximum limit should share a single threshold, merge the conditions:
```diff
-const warningSize = 50 * 1024 * 1024; // 50MB warning threshold
-
-// If the file exceeds the warning threshold and skipping is not explicitly requested, show the warning page
-if (stat && (stat.size || 0) > warningSize && !metadata.skipPreventTooLarge) {
-  results.push({
-    type: EditorOpenType.component,
-    componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
-  });
-}
-// If the file exceeds the maximum limit and skipping is not explicitly requested, show the warning page
-else if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
+// If the file exceeds the size limit and skipping is not explicitly requested, show the warning page
+if (stat && (stat.size || 0) > maxSize && !metadata.skipPreventTooLarge) {
   results.push({
     type: EditorOpenType.component,
     componentId: LARGE_FILE_PREVENT_COMPONENT_ID,
   });
 }
-// Otherwise open normally
 else {
   results.push({
     type: EditorOpenType.code,
     title: localize('editorOpenType.code'),
   });
 }
```
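The ordering bug above generalizes to any if/else chain over nested size thresholds: the largest threshold must be tested first, or the smaller one shadows it. A minimal sketch, with decideOpenType and the threshold constants as hypothetical stand-ins for the contribution's logic:

```typescript
type OpenDecision = 'prevent' | 'warn' | 'code';

const WARNING_SIZE = 50 * 1024 * 1024; // 50MB soft threshold
const MAX_SIZE = 4 * 1024 * 1024 * 1024; // 4GB hard limit

function decideOpenType(sizeBytes: number): OpenDecision {
  if (sizeBytes > MAX_SIZE) {
    return 'prevent'; // hard limit checked first, so this branch is reachable
  }
  if (sizeBytes > WARNING_SIZE) {
    return 'warn'; // only reached for sizes between the two thresholds
  }
  return 'code'; // normal open
}
```

Swapping the two checks reproduces the reported bug: every file over 50MB would take the first branch and the hard limit would never fire.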
🧹 Nitpick comments (2)
packages/core-browser/__tests__/bootstrap/connection.test.ts (1)
38-40: Consider verifying whether the test delay is necessary
Replacing the inline implementation with the sleep utility is a good refactor. However, a 4-second delay is fairly long for a unit test and may hurt the overall execution time of the suite. Verify whether this delay can be shortened, or add a comment explaining why such a long wait is required.
packages/editor/src/browser/doc-model/large-file-optimizer.ts (1)
50-56: Consider preserving existing properties of nested objects.
Directly assigning minimap and hover completely overwrites any other properties that may exist in baseOptions. Use object spread to preserve the existing configuration.

Apply this change to keep the existing properties:

```diff
 if (optimizations.disableMinimap) {
-  optimizedOptions.minimap = { enabled: false };
+  optimizedOptions.minimap = { ...optimizedOptions.minimap, enabled: false };
 }
 if (optimizations.disableHover) {
-  optimizedOptions.hover = { enabled: false };
+  optimizedOptions.hover = { ...optimizedOptions.hover, enabled: false };
 }
```
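The difference between overwriting and spreading a nested options object can be seen in isolation. A small sketch; the MinimapOptions shape and the maxColumn property are illustrative, not the exact Monaco types:

```typescript
interface MinimapOptions {
  enabled?: boolean;
  maxColumn?: number;
}
interface EditorOpts {
  minimap?: MinimapOptions;
}

const baseOptions: EditorOpts = { minimap: { enabled: true, maxColumn: 120 } };

// Overwrite: sibling keys of the nested object are silently dropped.
const overwritten: EditorOpts = { ...baseOptions, minimap: { enabled: false } };

// Spread: only `enabled` changes; `maxColumn` survives.
const merged: EditorOpts = {
  ...baseOptions,
  minimap: { ...baseOptions.minimap, enabled: false },
};
```

The top-level spread copies only the outer keys; nested objects are shared by reference, so each nested level that needs a partial update must be spread itself.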
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- packages/components/src/recycle-tree/tree/TreeNode.ts (2 hunks)
- packages/connection/__test__/browser/index.test.ts (2 hunks)
- packages/connection/__test__/common/buffers.test.ts (2 hunks)
- packages/connection/__test__/common/frame-decoder.test.ts (4 hunks)
- packages/connection/__test__/node/index.test.ts (2 hunks)
- packages/connection/__test__/node/ws-channel.test.ts (3 hunks)
- packages/connection/src/common/buffers/buffers.ts (3 hunks)
- packages/connection/src/common/connection/drivers/frame-decoder.ts (6 hunks)
- packages/connection/src/common/connection/drivers/reconnecting-websocket.ts (3 hunks)
- packages/connection/src/common/connection/drivers/stream.ts (2 hunks)
- packages/connection/src/common/connection/drivers/ws-websocket.ts (2 hunks)
- packages/connection/src/common/constants.ts (1 hunk)
- packages/connection/src/common/fury-extends/one-of.ts (1 hunk)
- packages/connection/src/node/common-channel-handler.ts (1 hunk)
- packages/core-browser/__tests__/bootstrap/connection.test.ts (2 hunks)
- packages/editor/src/browser/base-editor-wrapper.ts (2 hunks)
- packages/editor/src/browser/doc-model/large-file-optimizer.ts (1 hunk)
- packages/file-scheme/src/browser/file-scheme.contribution.ts (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (12)
packages/connection/__test__/browser/index.test.ts (1)
- packages/connection/src/common/connection/drivers/ws-websocket.ts (1)
  - WSWebSocketConnection (17-131)

packages/editor/src/browser/base-editor-wrapper.ts (1)
- packages/editor/src/browser/doc-model/large-file-optimizer.ts (2)
  - shouldOptimizeForLargeFile (73-86)
  - getLargeFileOptimizedEditorOptions (16-68)

packages/connection/__test__/node/index.test.ts (4)
- packages/connection/src/common/serializer/index.ts (1)
  - wrapSerializer (9-24)
- packages/connection/src/common/serializer/fury.ts (1)
  - furySerializer (67-67)
- packages/connection/src/common/ws-channel.ts (1)
  - WSChannel (96-322)
- packages/connection/src/common/channel/types.ts (1)
  - ChannelMessage (1-9)

packages/editor/src/browser/doc-model/large-file-optimizer.ts (1)
- packages/monaco/src/browser/monaco-api/editor.ts (1)
  - IEditorOptions (132-132)

packages/connection/src/node/common-channel-handler.ts (1)
- packages/connection/src/common/connection/drivers/ws-websocket.ts (1)
  - WSWebSocketConnection (17-131)

packages/connection/src/common/connection/drivers/ws-websocket.ts (2)
- packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  - LengthFieldBasedFrameDecoder (26-230)
- packages/connection/src/common/constants.ts (1)
  - chunkSize (6-6)

packages/connection/src/common/connection/drivers/reconnecting-websocket.ts (2)
- packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  - LengthFieldBasedFrameDecoder (26-230)
- packages/connection/src/common/constants.ts (1)
  - chunkSize (6-6)

packages/connection/__test__/common/buffers.test.ts (1)
- packages/connection/src/common/buffers/buffers.ts (3)
  - Buffers (20-230)
  - slice (38-74)
  - cursor (222-224)

packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
- packages/connection/src/common/buffers/buffers.ts (1)
  - Buffers (20-230)

packages/connection/src/common/connection/drivers/stream.ts (1)
- packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  - LengthFieldBasedFrameDecoder (26-230)

packages/core-browser/__tests__/bootstrap/connection.test.ts (1)
- packages/utils/src/async.ts (1)
  - sleep (603-605)

packages/connection/__test__/common/frame-decoder.test.ts (1)
- packages/connection/src/common/connection/drivers/frame-decoder.ts (1)
  - LengthFieldBasedFrameDecoder (26-230)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: 🚀🚀🚀 Next Version for pull request
- GitHub Check: build (macos-latest, 20.x)
- GitHub Check: build (ubuntu-latest, 20.x)
- GitHub Check: ubuntu-latest, Node.js 20.x
- GitHub Check: unittest (ubuntu-latest, 18.x, node)
- GitHub Check: unittest (ubuntu-latest, 18.x, jsdom)
- GitHub Check: unittest (macos-latest, 18.x, jsdom)
- GitHub Check: unittest (macos-latest, 18.x, node)
🔇 Additional comments (7)
packages/core-browser/__tests__/bootstrap/connection.test.ts (1)

3-3: Good refactor: use the standardized sleep utility
Introducing the sleep utility in place of the inline Promise + setTimeout implementation improves code consistency and readability.

packages/connection/src/common/fury-extends/one-of.ts (1)

71-73: The improved error handling looks good.
This default branch correctly guards against undefined behavior from unknown indices. Without this check, v.kind = kinds[idx] on line 76 would assign undefined whenever the index falls outside the 0-7 range, causing silent data errors.

packages/components/src/recycle-tree/tree/TreeNode.ts (1)

41-51: The optimization is implemented correctly, with no compatibility risk.
Script execution verified that spliceArray has 7 call sites in total:

- 5 in TreeNode.ts (lines 1194, 1238, 1325, 1357): all pass the result to setFlattenedBranch()
- 2 in RecycleTree.tsx (lines 866, 870): both assign directly to the this.filterFlattenBranch property

All callers consume the return value immediately; there are no reference-equality checks or reference-aliasing concerns. The early-return optimization, which returns the original array reference when no modification occurs, is semantically equivalent and safe.
In particular, for line 866, even when collapesArray.length is 0 the early return yields the unchanged array, which is the correct no-op behavior.

packages/editor/src/browser/doc-model/large-file-optimizer.ts (2)

3-11: The interface definition is reasonable.
The optimization options interface is clearly designed; all properties are optional booleans, in line with best practices for configuration interfaces.

88-101: The implementation is correct.
The metadata extraction logic is straightforward and correctly maps metadata onto the optimization options interface.

packages/editor/src/browser/base-editor-wrapper.ts (1)

14-14: The import statement is correct.
The utility functions required for large-file optimization are correctly imported from the new module.
packages/connection/src/common/connection/drivers/stream.ts (1)
24-34: Defer releasing the frame buffer handle to avoid data corruption
The handle is currently disposed immediately after calling write, but the underlying Writable may still use this memory asynchronously. Once the handle is returned to the pool, subsequent writes reuse and overwrite the same buffer, corrupting in-flight data and potentially triggering hard-to-diagnose protocol errors. This is a serious defect caused by mixing synchronous and asynchronous lifetimes: the handle must be released only after the write callback fires, with a fallback release on synchronous errors.

```diff
 send(data: Uint8Array): void {
-  const handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
-  try {
-    this.writable.write(handle.get(), (error) => {
-      if (error) {
-        console.error('Failed to write data:', error);
-      }
-    });
-  } finally {
-    handle.dispose();
-  }
+  const handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
+  const payload = handle.get();
+  const release = () => {
+    handle.dispose();
+  };
+  const onWrite = (error?: Error | null) => {
+    if (error) {
+      console.error('Failed to write data:', error);
+    }
+    release();
+  };
+  try {
+    this.writable.write(payload, onWrite);
+  } catch (error) {
+    release();
+    throw error;
+  }
 }
```
```ts
it('should handle slicing 1MB data under 50ms', () => {
  const start = performance.now();
  const slice = largeBuffer.slice(0, 1024 * 1024);
  const duration = performance.now() - start;

  expect(duration).toBeLessThan(50);
  expect(slice.byteLength).toBe(1024 * 1024);
});

it('should handle 1000 splices under 1s', () => {
  const buf = createEnhanced(
    new Array(10000).fill(0).map((_, i) => i),
    [100, 900, 9000],
  );

  const start = performance.now();
  for (let i = 0; i < 1000; i++) {
    buf.splice(i % 100, 5, new Uint8Array([i]));
  }
  const duration = performance.now() - start;

  expect(duration).toBeLessThan(1000);
});
```
Remove the brittle performance assertions
Binding these unit tests to very low latency with expect(duration).toBeLessThan(50) and expect(duration).toBeLessThan(1000) makes them fail randomly whenever CI or a local machine is under load. Performance baselines like these belong in benchmarks or monitoring, not in functional unit tests. Either assert only on correctness, or move the performance checks out of the unit test flow.
```diff
-  expect(duration).toBeLessThan(50);
   expect(slice.byteLength).toBe(1024 * 1024);
```

```diff
-  expect(duration).toBeLessThan(1000);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
it('should handle slicing 1MB data under 50ms', () => {
  const start = performance.now();
  const slice = largeBuffer.slice(0, 1024 * 1024);
  const duration = performance.now() - start;
  expect(slice.byteLength).toBe(1024 * 1024);
});

it('should handle 1000 splices under 1s', () => {
  const buf = createEnhanced(
    new Array(10000).fill(0).map((_, i) => i),
    [100, 900, 9000],
  );
  const start = performance.now();
  for (let i = 0; i < 1000; i++) {
    buf.splice(i % 100, 5, new Uint8Array([i]));
  }
  const duration = performance.now() - start;
});
```
🤖 Prompt for AI Agents
packages/connection/__test__/common/buffers.test.ts lines 384-406: the tests
include brittle timing assertions (expect(duration).toBeLessThan(...)) that
cause flaky CI; remove the duration-based expect calls and keep only the
correctness assertions (byteLength and splice effects), or move the timing
checks into a separate benchmark test (or mark them skipped) outside regular
unit tests; if you want to preserve metrics, record/log the duration without
asserting so functional tests stop failing under load.
```ts
dispose(): void {
  // do nothing
  this.socket.removeEventListener('message', this.arrayBufferHandler);
  this.sendQueue = [];
  this.sending = false;
}
```
Reject pending send requests in dispose
dispose currently just discards sendQueue, which leaves the Promises returned by external send() calls hanging forever; callers can neither learn about the failure nor recover. This is a serious reliability issue. Reject each queued entry before clearing the queue, and also reset pendingSize and release the decoder state.
```diff
 dispose(): void {
   this.socket.removeEventListener('message', this.arrayBufferHandler);
-  this.sendQueue = [];
-  this.sending = false;
+  for (const request of this.sendQueue) {
+    request.reject(new Error('Connection disposed before send completed'));
+  }
+  this.sendQueue = [];
+  this.pendingSize = 0;
+  this.sending = false;
+  this.decoder.dispose();
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
dispose(): void {
  this.socket.removeEventListener('message', this.arrayBufferHandler);
  for (const request of this.sendQueue) {
    request.reject(new Error('Connection disposed before send completed'));
  }
  this.sendQueue = [];
  this.pendingSize = 0;
  this.sending = false;
  this.decoder.dispose();
}
```
🤖 Prompt for AI Agents
In packages/connection/src/common/connection/drivers/reconnecting-websocket.ts
around lines 150 to 154, dispose currently just drops the sendQueue which leaves
callers' send() Promises hanging; before clearing the queue iterate the queued
entries and call their reject callback with a clear Error (e.g. "WebSocket
disposed"), then clear the array, set sending = false and reset pendingSize = 0,
remove the message listener as before, and also release/reset any decoder state
(call the decoder's release/reset method or null it) so no held resources
remain.
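The settle-on-dispose contract the review asks for can be shown in a minimal standalone form: every queued request keeps its reject callback, and dispose() rejects each one before clearing the queue so no caller's Promise hangs. The DisposableQueue class and error message below are illustrative, not the PR's exact code:

```typescript
interface Pending {
  resolve: () => void;
  reject: (err: Error) => void;
}

class DisposableQueue {
  private pending: Pending[] = [];

  // Returns a Promise that the real drain loop would resolve; here only
  // the rejection path (dispose) is exercised.
  enqueue(): Promise<void> {
    return new Promise<void>((resolve, reject) => {
      this.pending.push({ resolve, reject });
    });
  }

  dispose(): void {
    // Reject before clearing so no caller's Promise is left hanging.
    for (const request of this.pending) {
      request.reject(new Error('Connection disposed before send completed'));
    }
    this.pending = [];
  }
}
```

Callers can then attach a .catch to every send and treat disposal like any other transport failure, instead of waiting on a Promise that never settles.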
```ts
while (this.sendQueue.length > 0) {
  const { data, resolve, reject } = this.sendQueue[0];
  let handle: { get: () => Uint8Array; dispose: () => void } | null = null;

  try {
    handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
    const packet = handle.get();

    for (let i = 0; i < packet.byteLength; i += chunkSize) {
      if (!this.isOpen()) {
        throw new Error('Connection closed while sending');
      }

      await new Promise<void>((resolve, reject) => {
        const chunk = packet.subarray(i, Math.min(i + chunkSize, packet.byteLength));
        this.socket.send(chunk, { binary: true }, (error?: Error) => {
          if (error) {
            reject(error);
          } else {
            resolve();
          }
        });
      });
    }

    resolve();
  } catch (error) {
    reject(error instanceof Error ? error : new Error(String(error)));
  } finally {
    if (handle) {
      try {
        handle.dispose();
      } catch (error) {
        console.warn('[WSWebSocket] Error disposing handle:', error);
      }
    }
    this.pendingSize -= this.sendQueue[0].data.byteLength;
    this.sendQueue.shift();
  }
}
```
Fix the race between dispose and the send queue
If dispose() is called while processSendQueue is awaiting socket.send, it clears the queue and resets pendingSize to 0. When the await returns, this code reads this.sendQueue[0], gets undefined, and throws a TypeError that crashes the whole send pipeline.
Cache the current queue item, and in the finally block verify that the item is still at the head of the queue before decrementing/removing; also clamp pendingSize at zero:
```diff
-      const { data, resolve, reject } = this.sendQueue[0];
+      const current = this.sendQueue[0];
+      if (!current) {
+        break;
+      }
+      const { data, resolve, reject } = current;
 @@
-        this.pendingSize -= this.sendQueue[0].data.byteLength;
-        this.sendQueue.shift();
+        this.pendingSize = Math.max(0, this.pendingSize - current.data.byteLength);
+        if (this.sendQueue[0] === current) {
+          this.sendQueue.shift();
+        }
```

This way, even if dispose() clears the queue first, the loop never dereferences an empty slot.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
while (this.sendQueue.length > 0) {
  const current = this.sendQueue[0];
  if (!current) {
    break;
  }
  const { data, resolve, reject } = current;
  let handle: { get: () => Uint8Array; dispose: () => void } | null = null;
  try {
    handle = LengthFieldBasedFrameDecoder.construct(data).dumpAndOwn();
    const packet = handle.get();
    for (let i = 0; i < packet.byteLength; i += chunkSize) {
      if (!this.isOpen()) {
        throw new Error('Connection closed while sending');
      }
      await new Promise<void>((resolve, reject) => {
        const chunk = packet.subarray(i, Math.min(i + chunkSize, packet.byteLength));
        this.socket.send(chunk, { binary: true }, (error?: Error) => {
          if (error) {
            reject(error);
          } else {
            resolve();
          }
        });
      });
    }
    resolve();
  } catch (error) {
    reject(error instanceof Error ? error : new Error(String(error)));
  } finally {
    if (handle) {
      try {
        handle.dispose();
      } catch (error) {
        console.warn('[WSWebSocket] Error disposing handle:', error);
      }
    }
    this.pendingSize = Math.max(0, this.pendingSize - current.data.byteLength);
    if (this.sendQueue[0] === current) {
      this.sendQueue.shift();
    }
  }
}
```
🤖 Prompt for AI Agents
In packages/connection/src/common/connection/drivers/ws-websocket.ts around
lines 39 to 78, cache the current queue item at the top of the loop (const item
= this.sendQueue[0]) and use that cached reference for resolve/reject and
pendingSize logic so we never read this.sendQueue[0] after an external dispose()
may have cleared the queue; in the finally block check whether the queue still
contains that exact item (e.g. this.sendQueue[0] === item) before decrementing
pendingSize and shifting the queue, and when decrementing ensure pendingSize
never goes below zero (pendingSize = Math.max(0, pendingSize -
item.data.byteLength)); this prevents TypeError when dispose() clears the queue
while awaiting socket.send.
```ts
// Large-file optimization logic
if (this.currentDocumentModel && this.currentUri) {
  const content = this.currentDocumentModel.getText();
  const contentLength = content.length;

  // Check whether optimization is needed
  const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);

  // Check for an explicit optimization configuration (from resource metadata)
  // Note: resource metadata is not directly accessible here, so we rely mainly on auto-detection

  if (needsOptimization) {
    const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
      optimizedMode: true,
      disableSemanticHighlighting: true,
      disableWordWrap: true,
      disableMinimap: true,
      disableHover: contentLength > 20 * 1024 * 1024, // disable hover above 20MB
      disableCodeLens: true,
      disableQuickSuggestions: contentLength > 30 * 1024 * 1024, // disable suggestions above 30MB
    });

    editorOptions = optimizedEditorOptions;
  }
}
```
Semantic mismatch between character length and byte size.
contentLength on line 186 is a character count (content.length), but when it is passed to shouldOptimizeForLargeFile on line 189, the function signature expects fileSizeBytes (a byte size). Furthermore, lines 200 and 202 compare the character count against byte thresholds (20MB, 30MB), which is semantically inconsistent.
Impact analysis:

- For ASCII text, character count ≈ byte count, so the impact is small
- For Unicode text with multi-byte characters (e.g. Chinese), the actual byte size is much larger than the character count
- This can make the large-file optimization trigger inaccurately

Suggested approaches:
Approach 1 (recommended): use the actual byte size
```diff
-      const content = this.currentDocumentModel.getText();
-      const contentLength = content.length;
+      const content = this.currentDocumentModel.getText();
+      const contentSizeBytes = new Blob([content]).size; // or new TextEncoder().encode(content).length

       // Check whether optimization is needed
-      const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);
+      const needsOptimization = shouldOptimizeForLargeFile(contentSizeBytes, content);

       if (needsOptimization) {
         const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
           optimizedMode: true,
           disableSemanticHighlighting: true,
           disableWordWrap: true,
           disableMinimap: true,
-          disableHover: contentLength > 20 * 1024 * 1024,
+          disableHover: contentSizeBytes > 20 * 1024 * 1024,
           disableCodeLens: true,
-          disableQuickSuggestions: contentLength > 30 * 1024 * 1024,
+          disableQuickSuggestions: contentSizeBytes > 30 * 1024 * 1024,
         });
```
Approach 2: explicitly use the character length and adjust the thresholds
```diff
-      const contentLength = content.length;
+      const contentLengthChars = content.length;

-      const needsOptimization = shouldOptimizeForLargeFile(contentLength, content);
+      // Note: this passes a character count, not a byte count
+      const needsOptimization = shouldOptimizeForLargeFile(contentLengthChars, content);

       if (needsOptimization) {
         const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
           optimizedMode: true,
           disableSemanticHighlighting: true,
           disableWordWrap: true,
           disableMinimap: true,
-          disableHover: contentLength > 20 * 1024 * 1024, // 20MB
+          disableHover: contentLengthChars > 10 * 1024 * 1024, // ~10M characters
           disableCodeLens: true,
-          disableQuickSuggestions: contentLength > 30 * 1024 * 1024, // 30MB
+          disableQuickSuggestions: contentLengthChars > 15 * 1024 * 1024, // ~15M characters
         });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Large-file optimization logic
if (this.currentDocumentModel && this.currentUri) {
  const content = this.currentDocumentModel.getText();
  const contentSizeBytes = new Blob([content]).size; // or new TextEncoder().encode(content).length

  // Check whether optimization is needed
  const needsOptimization = shouldOptimizeForLargeFile(contentSizeBytes, content);

  // Check for an explicit optimization configuration (from resource metadata)
  // Note: resource metadata is not directly accessible here, so we rely mainly on auto-detection

  if (needsOptimization) {
    const optimizedEditorOptions = getLargeFileOptimizedEditorOptions(editorOptions, {
      optimizedMode: true,
      disableSemanticHighlighting: true,
      disableWordWrap: true,
      disableMinimap: true,
      disableHover: contentSizeBytes > 20 * 1024 * 1024, // disable hover above 20MB
      disableCodeLens: true,
      disableQuickSuggestions: contentSizeBytes > 30 * 1024 * 1024, // disable suggestions above 30MB
    });

    editorOptions = optimizedEditorOptions;
  }
}
```
🤖 Prompt for AI Agents
In packages/editor/src/browser/base-editor-wrapper.ts around lines 183 to 207,
content.length (character count) is being used where byte sizes are intended;
change to compute the actual byte size (e.g. const contentByteLength = typeof
TextEncoder !== "undefined" ? new TextEncoder().encode(content).length :
Buffer.byteLength(content, "utf8")), use contentByteLength when calling
shouldOptimizeForLargeFile and when comparing against the 20MB/30MB thresholds
(20 * 1024 * 1024 and 30 * 1024 * 1024), and pass the updated byte-based values
into disableHover/disableQuickSuggestions and other size-dependent flags so all
size checks consistently use bytes.
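The gap between character count and UTF-8 byte size is easy to demonstrate with TextEncoder, which is available in modern browsers and Node:

```typescript
// Ten ASCII characters encode to ten UTF-8 bytes.
const ascii = 'a'.repeat(10);
// Ten CJK characters encode to thirty UTF-8 bytes (3 bytes each).
const chinese = '汉'.repeat(10);

const asciiBytes = new TextEncoder().encode(ascii).length;
const chineseBytes = new TextEncoder().encode(chinese).length;
```

For a mostly-CJK document, a threshold expressed in "characters" therefore fires roughly 3x later than the same threshold expressed in bytes, which is exactly the inaccuracy the review warns about.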
```ts
    return true;
  }

  if (content && content.split('\n').length > LINE_THRESHOLD) {
```
🛠️ Refactor suggestion | 🟠 Major
Optimize the performance of line counting.
content.split('\n').length allocates a complete array for large files, which is unnecessary overhead inside a performance-optimization function. Use a more efficient counting method.
Apply this change to improve performance:
```diff
-  if (content && content.split('\n').length > LINE_THRESHOLD) {
+  if (content && (content.match(/\n/g) || []).length + 1 > LINE_THRESHOLD) {
     return true;
   }
```

Or count with a loop, which exits early and may be faster for very large files:
```ts
if (content) {
  let lineCount = 1;
  for (let i = 0; i < content.length; i++) {
    if (content[i] === '\n') {
      lineCount++;
      if (lineCount > LINE_THRESHOLD) {
        return true;
      }
    }
  }
}
```
}🤖 Prompt for AI Agents
In packages/editor/src/browser/doc-model/large-file-optimizer.ts around line 81,
the current check uses content.split('\n').length which allocates a full array
for very large files; change it to an efficient line-count loop with early exit
(either iterate searching for '\n' using indexOf in a while loop or a
char-by-char loop that increments a counter and returns true as soon as
lineCount > LINE_THRESHOLD) so no full-array allocation occurs and the function
short-circuits as soon as the threshold is exceeded.
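An indexOf-based variant of the early-exit counter described in the prompt might look like this; exceedsLineThreshold and its threshold parameter are illustrative names, not the module's actual API:

```typescript
function exceedsLineThreshold(content: string, threshold: number): boolean {
  let lineCount = 1; // a non-empty string with no '\n' is one line
  let pos = content.indexOf('\n');
  while (pos !== -1) {
    lineCount++;
    if (lineCount > threshold) {
      return true; // short-circuit: no full array allocation, stop as soon as we know
    }
    pos = content.indexOf('\n', pos + 1);
  }
  return false;
}
```

indexOf scans natively per call, so this avoids both the array allocation of split('\n') and per-character JS loop overhead, while still stopping at the threshold instead of counting every line.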
Types
Background or solution
Changelog
Summary by CodeRabbit
Release notes
New features
Improvements