Feat/migrate to electron #214

Conversation
…roxy. This was a significant architectural change where I moved core functionality to the Rust backend of the Tauri application. Here are the key changes:

- Process Management: The Rust backend now launches and manages an external `text-generation-webui` process. You can configure the path to this executable via a new `settings.json` file. The backend ensures the process is terminated gracefully when the application exits.
- API Proxy: I added a new `proxy_request` command in Rust. This command forwards API calls from the frontend to the `text-generation-webui` service, centralizing communication and hardening the application.
- Frontend Refactoring: The KoboldAI chat feature in the frontend has been updated to use the new Rust proxy instead of making direct `fetch` calls.
- Configuration: A `settings.json` file has been added to the root of the project to allow you to specify the path to your `text-generation-webui` executable.
- Documentation: I also added a `DEPLOYMENT.md` guide to explain how you can set up and run the new version of the application.
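For context, the frontend side of such a proxy call looks roughly like this (a hedged sketch: the `proxy_request` command name comes from the commit above, while the payload fields and endpoint path are illustrative):

```ts
import { invoke } from "@tauri-apps/api/tauri";

// Forward a request through the Rust proxy instead of calling fetch() directly.
// The payload shape here is an assumption; only the `proxy_request` command
// name is taken from the commit message.
async function generate(prompt: string): Promise<string> {
  return invoke<string>("proxy_request", {
    path: "api/v1/generate",
    body: JSON.stringify({ prompt }),
  });
}
```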
Update DEPLOYMENT.md to clarify that settings.json is always read from the current working directory (CWD). This note explains that this behavior applies to both development and packaged runs and that there is no OS-specific config path, preventing potential user confusion.
This commit refactors the application's startup and configuration logic to be more robust and easier for you to use. Key changes include:

- **Configuration Loading:** The application now loads `settings.json` from the standard OS-specific application config directory, falling back to a bundled default if the file is not present. This replaces the previous CWD-based approach.
- **Path Validation:** Added validation to ensure that the `text_generation_webui_path` provided in the settings exists and is a valid file before attempting to spawn the process.
- **Graceful Error Handling:** Replaced all `panic!` and `.expect()` calls in the startup sequence with native error dialogs. This prevents the application from crashing and provides you with clear, easy-to-understand feedback in case of configuration errors.
- **Documentation:** The `DEPLOYMENT.md` guide has been updated to reflect the new configuration behavior, instructing you on where to place your `settings.json` file.
These changes resolve previously identified issues, including a functional regression, and add dependency optimizations. Key changes include:

- **Streaming Proxy:** The Rust backend now properly handles streaming API responses. A new `proxy_request_streaming` command uses Tauri events to send data chunks to the frontend in real time, restoring the original streaming functionality.
- **Frontend Streaming:** The `getExtra` function in `koboldAiChat.ts` has been refactored to use the new event-based streaming mechanism, correctly reconstructing the `ReadableStream`.
- **Dependency Optimization:** Removed the redundant `tokio` dependency from `Cargo.toml`, as Tauri provides its own async runtime. Configured `reqwest` to use `rustls-tls` instead of the native TLS backend, improving portability and reducing binary size.
- **Configuration Cleanup:** Removed the unnecessary `http` allowlist from `tauri.conf.json`.
Reflect Windows, macOS, and Linux build tools/dependencies in DEPLOYMENT.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This commit hardens the Rust proxy commands to prevent a potential Server-Side Request Forgery (SSRF) vulnerability. It also includes the final fixes from previous code reviews, including dependency optimization and restoring streaming functionality. Key changes:

- **Security (SSRF Fix):** A new `validate_and_sanitize_path` function has been added. It is called by both proxy commands to ensure that the API path provided by the frontend is on a strict allowlist and does not contain malicious patterns. This prevents the proxy from being used to make unintended requests.
- **Streaming Proxy:** The Rust backend now properly handles streaming API responses. A new `proxy_request_streaming` command uses Tauri events to send data chunks to the frontend in real time.
- **Frontend Streaming:** The `getExtra` function in `koboldAiChat.ts` has been refactored to use the new event-based streaming mechanism, restoring the original functionality.
- **Dependency Optimization:** Removed the redundant `tokio` dependency and configured `reqwest` to use `rustls-tls` for a smaller, more portable binary.
- **Configuration Cleanup:** Removed the unnecessary `http` allowlist from `tauri.conf.json`.
This commit refactors the sidecar process shutdown sequence to be idempotent and free of race conditions. Key changes:

- **Idempotent Shutdown:** An `Arc<AtomicBool>` guard has been added to the `AppState` to ensure that the termination logic for the sidecar process can only be executed once, even if multiple shutdown signals are received concurrently (e.g., from closing the window and quitting from the system tray at the same time).
- **Graceful Termination:** The shutdown logic now uses `try_wait()` to check if the process has already exited before attempting to kill it, preventing panics. It then uses `kill()` followed by `wait()` to ensure the process is terminated and fully reaped by the operating system.
- **Centralized Logic:** The new robust shutdown logic has been moved into a single `shutdown_sidecar` helper function, which is now called from all three application exit paths (system tray, window close, and app exit event). This removes duplicated code and ensures consistent behavior.
This commit introduces a more user-friendly and robust application shutdown flow. It also includes all prior enhancements, such as the Rust-based API proxy, sidecar process management, and dependency optimizations. Key changes in this final update:

- **Confirm on Quit:** When you attempt to close the window or quit from the system tray, the application now prevents an immediate exit. It sends an event to the frontend, which displays a confirmation dialog to you.
- **Graceful Shutdown Command:** A new `quit_app` command has been added to the Rust backend. This command is invoked by the frontend only after you confirm your intent to quit. It performs the full graceful shutdown of the sidecar process before exiting the application.
- **Race-Free Termination:** The shutdown logic is now idempotent and free of race conditions. It uses an `Arc<AtomicBool>` to ensure the termination sequence for the sidecar process runs only once, and it gracefully handles cases where the process may have already exited.
Add a note to DEPLOYMENT.md to clarify that this project uses Tauri v1 and requires `libwebkit2gtk-4.0-dev`. The note also informs users that Tauri v2+ projects require the newer `libwebkit2gtk-4.1-dev` package, providing helpful context for future development or other projects.
This commit corrects the example path for the application configuration directory on Windows. The previous version incorrectly included a `\config` subfolder, which is not used by Tauri's `app_config_dir()` API. The path has been corrected to `%APPDATA%\com.heyamica.dev`.
Formatting changes Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Formatting updates Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This commit adds a `cancel` handler to the `ReadableStream` implementation in the `getExtra` function. If a consumer of the stream aborts it (e.g., via AbortController or navigating away), the `cancel` handler will now be called. This handler cleans up the Tauri event listeners (`stream-chunk`, `stream-error`, `stream-end`), preventing resource and memory leaks.
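The pattern looks roughly like this (a minimal sketch: the event names come from the commit above, while the stream wiring around them is assumed):

```ts
import { listen } from "@tauri-apps/api/event";

// Collect unlisten functions so both normal completion and consumer-side
// cancellation tear down the same listeners.
const unlistens: Array<() => void> = [];

const stream = new ReadableStream<string>({
  async start(controller) {
    unlistens.push(await listen<string>("stream-chunk", (e) => controller.enqueue(e.payload)));
    unlistens.push(await listen("stream-end", () => controller.close()));
    unlistens.push(await listen<string>("stream-error", (e) => controller.error(e.payload)));
  },
  cancel() {
    // Called when the consumer aborts (AbortController, navigation, etc.).
    unlistens.forEach((fn) => fn());
  },
});
```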
This commit replaces `.unwrap()` calls with proper error handling when emitting events from the streaming proxy task in the Rust backend. If the frontend disconnects while the stream is active, `handle.emit_all()` can fail. Previously, this would cause the spawned Rust task to panic. Now, it handles the error gracefully by logging it and terminating the stream, preventing the backend from crashing.
This commit replaces a `.unwrap()` call with proper error handling when forwarding the sidecar process's stdout to the frontend. If the frontend disconnects while the sidecar is running, the `emit_all` call for `sidecar-output` could fail. Previously, this would cause the monitoring task to panic. Now, it handles the error gracefully by logging it and terminating the forwarding loop, preventing the backend task from crashing.
This commit updates the `DEPLOYMENT.md` file to be more specific about the streaming option in the UI. It now uses the exact UI label "Use Extra (enables streaming)" and clarifies that the option is disabled by default, so no action is required for standard non-streaming behavior.
This commit corrects the documentation for the location of the final build artifacts produced by `tauri build`. The previous path was incorrect. The documentation now correctly points to the platform-specific subdirectories inside `src-tauri/target/release/bundle/`, providing clearer and more accurate instructions for users.
…ible API, as you requested. I replaced the previous, incorrect implementation that targeted the KoboldAI API with one that now targets the API provided by `text-generation-webui`. This final version includes all previous enhancements:

- **Rust Backend:** Manages the `text-generation-webui` sidecar process and acts as a secure API proxy.
- **OpenAI API Proxy:** The proxy now forwards requests to the `/v1/chat/completions` endpoint and handles the `Authorization` header.
- **Frontend Refactoring:** I refactored the `openAiChat.ts` file to use the Rust proxy for both streaming and blocking requests and reverted the incorrect changes to `koboldAiChat.ts`.
- **Robustness and Security:** The implementation includes graceful shutdown logic requiring your confirmation, robust error handling with dialogs, and a secure API path allowlist.
- **Documentation:** I updated `DEPLOYMENT.md` to reflect the new setup. The instructions now guide you to select "ChatGPT" and to launch the sidecar with the `--api` flag.
…ndling

This commit refactors the frontend TypeScript code in `openAiChat.ts` to make the calls to the Rust proxy more robust and type-safe. Key changes:

- **Strong Typing:** Replaced `any` with a new `OpenAIResponse` interface when calling the blocking proxy. The `invoke` generic (`invoke<OpenAIResponse>`) is now used to ensure the response is correctly typed.
- **Error Handling:** Wrapped all `invoke` calls (for both streaming and non-streaming requests) in `try/catch` blocks. This ensures that any errors from the Rust backend (e.g., validation failures, network errors) are caught and handled gracefully instead of causing unhandled promise rejections.
- **Safe Access:** Added more thorough checks for the response structure before accessing nested properties to prevent runtime errors.
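In outline, the typed and guarded call looks something like this (a sketch: `OpenAIResponse` is named in the commit, but its exact fields and the command name are assumptions):

```ts
import { invoke } from "@tauri-apps/api/tauri";

interface OpenAIResponse {
  choices: Array<{ message: { content: string } }>;
}

async function askBlocking(body: string): Promise<string> {
  try {
    const res = await invoke<OpenAIResponse>("proxy_request", {
      path: "v1/chat/completions",
      body,
    });
    const content = res?.choices?.[0]?.message?.content;
    if (!content) throw new Error("Invalid response structure");
    return content;
  } catch (e) {
    // Validation or network errors from the Rust backend surface here
    // instead of becoming unhandled promise rejections.
    console.error("proxy_request failed:", e);
    throw e;
  }
}
```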
This commit refactors the `setup` hook to follow Tauri best practices for handling State in spawned async tasks. Instead of capturing the `State<'_, AppState>` guard (which can lead to lifetime compilation errors), the `AppHandle` is captured, and the state is reacquired from the handle within the async task just before it's needed. This makes the code more robust and less prone to lifetime issues.
This commit refactors the error handling for when the sidecar process fails to spawn. Instead of calling a helper that uses `std::process::exit`, the logic now directly uses `dialog::message` to show the error to you and then calls `handle.exit(1)`. This ensures the application terminates gracefully via Tauri's event loop, rather than an abrupt process exit from a background thread.
This commit refactors the event listener cleanup mechanism in `getResponseStream` to be more robust and prevent race conditions. Instead of assigning unlisten functions to separate variables, they are now pushed into an array immediately after creation. The `cleanup` function now iterates over this array. This ensures that if an error or end event arrives before all listeners have been registered, the cleanup function will still correctly unregister all listeners that have been successfully created up to that point, preventing memory leaks.
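Concretely, the registration pattern is roughly (a sketch: `listen` is the Tauri v1 event API; the event and handler names are taken from the surrounding commits):

```ts
import { listen, Event } from "@tauri-apps/api/event";

async function registerStreamListeners(
  onChunk: (e: Event<string>) => void,
  onEnd: () => void,
  onError: (e: Event<string>) => void,
): Promise<() => void> {
  // Push each unlisten function the moment it exists, so cleanup() can run
  // safely even if stream-error or stream-end fires before all three
  // registrations have completed.
  const unlistens: Array<() => void> = [];
  const cleanup = () => unlistens.forEach((fn) => fn());

  unlistens.push(await listen("stream-chunk", onChunk));
  unlistens.push(await listen("stream-error", (e: Event<string>) => { onError(e); cleanup(); }));
  unlistens.push(await listen("stream-end", () => { onEnd(); cleanup(); }));

  return cleanup;
}
```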
This commit migrates the project from Tauri v1 to v2 to resolve a build failure caused by using a legacy v1 feature flag (`shell-open`) with a v2 dependency. Key changes include:

- **Rust Dependencies:** Updated the `tauri` dependency in `Cargo.toml` to a v2 version and removed the `shell-open` feature.
- **Rust API Usage:** Refactored `main.rs` to replace the deprecated `tauri::api::shell::open` with the new `app.shell().open()` API from Tauri v2.
- **Frontend Dependencies:** Updated the `@tauri-apps/api` and `@tauri-apps/cli` packages in `package.json` to versions compatible with Tauri v2.
- **Configuration:** Verified that the `shell` allowlist in `tauri.conf.json` is correctly configured for the v2 API.
…s resolves the build failure you were seeing, which was caused by an outdated configuration. Here is a summary of the changes I made:

- Renamed `build.devPath` to `build.devUrl`.
- Renamed `build.distDir` to `build.frontendDist`.
- Removed the top-level `package` and `tauri` objects.
- Created a new top-level `app` object and moved `windows`, `security`, `systemTray`, and package information into it.
- Moved the `bundle` object to be top-level.
- Moved the `allowlist` into `app.security`.
@google-labs-jules[bot] is attempting to deploy a commit to the heyamica Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Replaces Tauri with an Electron + Rust (N-API) architecture. Adds a Rust native module for HTTP proxying and sidecar management, wires Electron main/preload IPC to it, and updates frontend chat flows to use IPC proxies. Introduces deployment docs, updates package scripts/tooling, adds ignore rules, and removes all Tauri config/code.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as Renderer (Next.js)
    participant Preload as Electron Preload (electronAPI)
    participant Main as Electron Main
    participant Rust as Rust N-API (amica-rust-lib)
    UI->>Preload: electronAPI.proxyRequestBlocking(payload)
    Preload->>Main: ipc.invoke('proxy_request_blocking', payload)
    Main->>Rust: proxy_request_blocking(payload)
    Rust-->>Main: String (response body)
    Main-->>Preload: result
    Preload-->>UI: Promise<String>
```
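A minimal sketch of the preload wiring this diagram implies (the channel and method names come from the diagram; everything else is an assumption):

```ts
// preload.ts — expose the blocking proxy to the renderer (sketch)
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("electronAPI", {
  proxyRequestBlocking: (payload: unknown): Promise<string> =>
    ipcRenderer.invoke("proxy_request_blocking", payload),
});
```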
```mermaid
sequenceDiagram
    autonumber
    participant UI as Renderer (Next.js)
    participant Preload as Electron Preload
    participant Main as Electron Main
    participant Rust as Rust N-API
    UI->>Preload: proxyRequestStreaming(payload, onChunk,onEnd,onError)
    Preload->>Main: ipc.send('proxy_request_streaming', payload)
    Main->>Rust: proxy_request_streaming(payload, onChunk,onEnd,onError)
    Rust-->>Main: stream callbacks
    Main-->>Preload: 'stream-chunk'/'stream-end'/'stream-error'
    Preload-->>UI: onChunk/onEnd/onError events
```
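The streaming counterpart, sketched under the same assumptions (channel names from the diagram; listener lifetimes simplified — see the cancellation discussion later in the review for why these listeners should also be removed on end/error):

```ts
// preload.ts — streaming proxy (sketch)
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("electronAPI", {
  proxyRequestStreaming: (
    payload: unknown,
    onChunk: (chunk: string) => void,
    onEnd: () => void,
    onError: (err: string) => void,
  ) => {
    ipcRenderer.on("stream-chunk", (_event, chunk: string) => onChunk(chunk));
    ipcRenderer.once("stream-end", () => onEnd());
    ipcRenderer.once("stream-error", (_event, err: string) => onError(err));
    ipcRenderer.send("proxy_request_streaming", payload);
  },
});
```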
```mermaid
sequenceDiagram
    autonumber
    participant UI as Renderer
    participant Preload as Electron Preload
    participant Main as Electron Main
    participant Rust as Rust N-API (Sidecar)
    UI->>Preload: startSidecar({ path }, onOutput)
    Preload->>Main: ipc.invoke('start_sidecar', payload)
    Main->>Rust: start_sidecar(payload, onOutput)
    Rust-->>Main: output events
    Main-->>Preload: 'sidecar-output'
    Preload-->>UI: onOutput(data)
    UI->>Preload: stopSidecar()
    Preload->>Main: ipc.invoke('stop_sidecar')
    Main->>Rust: stop_sidecar()
```
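Renderer-side usage would then look roughly like this (hedged: the method names come from the diagram, while the payload shape and path are illustrative):

```ts
// Start the text-generation-webui sidecar and watch its output (sketch).
window.electronAPI.startSidecar(
  { path: "/opt/text-generation-webui/start.sh" }, // illustrative path
  (line: string) => console.log("[sidecar]", line),
);

// Later, e.g. before the window closes:
window.electronAPI.stopSidecar();
```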
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 24
🔭 Outside diff range comments (2)

electron/preload.js (1)

1-18: This appears to be generated code that shouldn't be in version control.

Like `electron/main.js`, this file appears to be transpiled from `electron/preload.ts`. Having both source and compiled files in the repository causes maintenance issues.

Remove this file from version control and add it to `.gitignore`. The build process should generate it from the TypeScript source.

src/features/chat/openAiChat.ts (1)

91-96: Fix: Remove dependency on openai_url config (now unused) to avoid runtime crash

config("openai_url") will throw if unset, but the proxy owns the URL now. This breaks chat on setups without that config.

Apply this diff to drop the unused config and parameter:

```diff
-export async function getOpenAiChatResponseStream(messages: Message[]) {
-  const apiKey = getApiKey("openai_apikey");
-  const url = config("openai_url");
-  const model = config("openai_model");
-  return getResponseStream(messages, url, model, apiKey);
-}
+export async function getOpenAiChatResponseStream(messages: Message[]) {
+  const apiKey = getApiKey("openai_apikey");
+  const model = config("openai_model");
+  return getResponseStream(messages, model, apiKey);
+}
```

And adjust the getResponseStream signature below.
🧹 Nitpick comments (12)

.gitignore (1)

45-48: Revisit ignoring Cargo.lock and package-lock.json for reproducible builds

- Keeping rust-lib/target ignored is correct.
- Consider committing rust-lib/Cargo.lock to ensure reproducible builds across CI and developer machines. While libraries published to crates.io generally don't commit Cargo.lock, in-app vendored crates inside an application repo typically do. The same rationale applies to rust-lib/package-lock.json if you run npm scripts under rust-lib (e.g., @napi-rs/cli): committing lockfiles avoids supply-chain drift.

Suggested tweak:

```diff
 # Rust
 rust-lib/target
-rust-lib/Cargo.lock
-rust-lib/package-lock.json
```

If you intentionally want to regenerate dependencies on every build, keep as-is, but please confirm CI stability.
rust-lib/package.json (1)

1-8: Augment package manifest for Node/Electron consumption and prebuilds

To improve DX and packaging stability:

- Add main/types so Node/TS resolution is explicit.
- Declare engines for Node version compatibility.
- Configure @napi-rs/cli "triples" to ensure prebuilds for target platforms (especially macOS arm64).
- Optionally add basic build scripts.

Proposed update:

```diff
 {
   "name": "amica-rust-lib",
   "version": "0.1.0",
   "private": true,
-  "napi": {
-    "name": "amica-rust-lib"
-  }
+  "main": "index.js",
+  "types": "index.d.ts",
+  "engines": {
+    "node": ">=18.18.0"
+  },
+  "napi": {
+    "name": "amica-rust-lib",
+    "triples": [
+      "x86_64-unknown-linux-gnu",
+      "aarch64-unknown-linux-gnu",
+      "x86_64-apple-darwin",
+      "aarch64-apple-darwin",
+      "x86_64-pc-windows-msvc",
+      "aarch64-pc-windows-msvc"
+    ]
+  },
+  "scripts": {
+    "build": "napi build -r",
+    "build:debug": "napi build",
+    "prepublishOnly": "napi prepublish -t"
+  }
 }
```

Adjust triples to match your supported platforms and CI matrix.
rust-lib/Cargo.toml (2)

4-9: Fill in package metadata and consider marking as non-publishable

Metadata is placeholder/missing. For internal addons, it's helpful to set license/repository and optionally prevent accidental publish.

Suggested edits:

```diff
 description = "Rust library for Amica"
-authors = ["you"]
-license = ""
-repository = ""
+authors = ["Amica Team"]
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/semperai/amica"
 edition = "2021"
 rust-version = "1.80.0"
+publish = false
```

15-21: Confirm required reqwest features for streaming and HTTP2

You're using reqwest with default-features = false and features = ["json", "rustls-tls"]. If your Rust N-API exposes streaming (e.g., SSE-like token streams) via bytes_stream, ensure the necessary features are enabled for your code paths (in some setups, "stream" and/or "http2" are required). If everything compiles and streams correctly in CI/dev, ignore this.

If needed:

```toml
reqwest = { version = "0.12.5", default-features = false, features = ["json", "rustls-tls", "http2", "stream"] }
```

DEPLOYMENT.md (3)

24-41: Fix heading level increment (markdownlint MD001)

Headings jump from H2 to H4 ("#### Step 1/2/3"). Use H3 for steps under an H2 section.

Suggested change:

```diff
-#### Step 1: Clone the Amica Repository
+### Step 1: Clone the Amica Repository
 ...
-#### Step 2: Install JavaScript Dependencies
+### Step 2: Install JavaScript Dependencies
 ...
-#### Step 3: Configure the `text-generation-webui` Path
+### Step 3: Configure the `text-generation-webui` Path
```

55-62: Minor wording and clarity improvements for config directories

- The Windows line uses a double backslash in inline code that's already escaped; consider simplifying the sentence and clarifying when the directory appears.

Suggested edits:

```diff
-  * **Windows:** `%APPDATA%\\com.heyamica.dev` (you can paste this into the Explorer address bar)
+  * **Windows:** `%APPDATA%/com.heyamica.dev` (paste into Explorer's address bar; the actual filesystem path uses backslashes)
 ...
-  *(Note: The `com.heyamica.dev` directory might not exist until you run Amica at least once.)*
+  *(Note: The `com.heyamica.dev` directory is created after first run; you can also create it manually.)*
```

109-113: Electron flow note: "dummy API key" guidance may no longer apply

If requests are fully proxied via the Rust addon and Electron IPC now, confirm whether the UI still requires a placeholder API key for ChatGPT settings. If not, remove this step to prevent confusion.
rust-lib/index.d.ts (2)

6-10: Clarify Authorization header semantics in the contract

ProxyRequestPayload.authorization is a plain string. Please confirm whether callers should pass the full "Authorization" header value (e.g., "Bearer sk-...") or just the raw token. OpenAI-style backends typically require the "Bearer " prefix; aligning on this avoids subtle 401s. A small caller-side normalizer, sketched below, sidesteps the ambiguity either way.
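For illustration, a hedged sketch of such a normalizer (the function name is hypothetical and not part of the current rust-lib contract; only the header format itself is standard):

```ts
// Accept either a raw token or a full header value and always emit the full form.
function toAuthorizationHeader(tokenOrHeader: string): string {
  return tokenOrHeader.startsWith("Bearer ") ? tokenOrHeader : `Bearer ${tokenOrHeader}`;
}
```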
16-17: Sidecar API has no error/exit channel

startSidecar/stopSidecar are void; there's no callback for spawn errors or process exit. Add onError/onExit hooks or return a handle that exposes PID, onExit, and dispose() to improve observability and control. One possible shape is sketched below.
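A hedged sketch of a richer contract (names are suggestions, not the current API; `StartSidecarPayload` is the existing type from index.d.ts):

```ts
// Hypothetical handle-based contract for rust-lib/index.d.ts
export interface SidecarHandle {
  pid: number;                                      // OS process id of the sidecar
  onExit(cb: (code: number | null) => void): void;  // fires when the process exits
  onError(cb: (err: string) => void): void;         // spawn/runtime errors
  dispose(): Promise<void>;                         // stop and reap the process
}

export function startSidecar(payload: StartSidecarPayload): SidecarHandle;
```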
package.json (1)

16-17: Heavy native build during postinstall may cause install failures on CI/consumers

Running napi build in postinstall can break installs on machines lacking a Rust toolchain. Consider gating with an env var (e.g., SKIP_RUST_BUILD) or attempting to load a prebuild first, falling back to a local build only when needed.

Example alternative:

- postinstall: "node scripts/postinstall-rust.js" (sketched below) that:
  - tries to resolve a prebuilt .node
  - builds locally only when missing and SKIP_RUST_BUILD is not set
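A minimal sketch of what such a script could look like (the file name and env var come from the suggestion above; everything else — paths, the napi CLI invocation — is an assumption):

```js
// scripts/postinstall-rust.js — hedged sketch, not the project's actual script
const { existsSync } = require("fs");
const { execSync } = require("child_process");
const path = require("path");

const prebuilt = path.join(__dirname, "..", "rust-lib", "amica-rust-lib.node");

if (existsSync(prebuilt)) {
  console.log("Using prebuilt native module:", prebuilt);
} else if (process.env.SKIP_RUST_BUILD) {
  console.warn("No prebuilt native module and SKIP_RUST_BUILD is set; skipping build.");
} else {
  // Requires a local Rust toolchain and @napi-rs/cli.
  execSync("npx napi build --release", {
    cwd: path.join(__dirname, "..", "rust-lib"),
    stdio: "inherit",
  });
}
```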
src/features/chat/openAiChat.ts (2)

14-20: Nit: redundant API key falsy check

config() already throws when the key is missing, so the if (!apiKey) branch is unreachable unless empty strings are stored intentionally. If that's not a use case, you can simplify.

122-127: Robustness: guard with optional chaining and include fallback context in error

Minor safety/readability tweak.

Apply this diff:

```diff
-    if (json.choices && json.choices.length > 0 && json.choices[0].message && json.choices[0].message.content) {
-      return json.choices[0].message.content;
-    }
-    throw new Error("Invalid response structure from OpenAI-compatible API");
+    const content = json?.choices?.[0]?.message?.content;
+    if (content) return content;
+    throw new Error("Invalid response structure from OpenAI-compatible API (missing choices[0].message.content)");
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (17)

- `package-lock.json` is excluded by `!**/package-lock.json`
- `src-tauri/Cargo.lock` is excluded by `!**/*.lock`
- `src-tauri/icons/128x128.png` is excluded by `!**/*.png`
- `src-tauri/icons/128x128@2x.png` is excluded by `!**/*.png`
- `src-tauri/icons/32x32.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square107x107Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square142x142Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square150x150Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square284x284Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square30x30Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square310x310Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square44x44Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square71x71Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/Square89x89Logo.png` is excluded by `!**/*.png`
- `src-tauri/icons/StoreLogo.png` is excluded by `!**/*.png`
- `src-tauri/icons/icon.ico` is excluded by `!**/*.ico`
- `src-tauri/icons/icon.png` is excluded by `!**/*.png`
📒 Files selected for processing (22)

- `.gitignore` (1 hunks)
- `DEPLOYMENT.md` (1 hunks)
- `electron/main.js` (1 hunks)
- `electron/main.ts` (1 hunks)
- `electron/preload.js` (1 hunks)
- `electron/preload.ts` (1 hunks)
- `package.json` (2 hunks)
- `rust-lib/Cargo.toml` (1 hunks)
- `rust-lib/index.d.ts` (1 hunks)
- `rust-lib/index.js` (1 hunks)
- `rust-lib/package.json` (1 hunks)
- `rust-lib/src/lib.rs` (1 hunks)
- `src-tauri/.gitignore` (0 hunks)
- `src-tauri/Cargo.toml` (0 hunks)
- `src-tauri/Release.entitlements` (0 hunks)
- `src-tauri/build.rs` (0 hunks)
- `src-tauri/info.plist` (0 hunks)
- `src-tauri/src/main.rs` (0 hunks)
- `src-tauri/tauri.conf.json` (0 hunks)
- `src/components/vrmViewer.tsx` (0 hunks)
- `src/features/chat/openAiChat.ts` (3 hunks)
- `src/pages/_app.tsx` (1 hunks)
💤 Files with no reviewable changes (8)
- src/components/vrmViewer.tsx
- src-tauri/build.rs
- src-tauri/info.plist
- src-tauri/Release.entitlements
- src-tauri/tauri.conf.json
- src-tauri/.gitignore
- src-tauri/src/main.rs
- src-tauri/Cargo.toml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-13T07:01:57.927Z
Learnt from: AlphaEcho11
PR: semperai/amica#213
File: src/features/chat/koboldAiChat.ts:0-0
Timestamp: 2025-08-13T07:01:57.927Z
Learning: AlphaEcho11 prefers strong typing and comprehensive error handling for Tauri invoke calls in frontend TypeScript code. They implemented an OpenAIResponse interface and wrapped invoke calls in try/catch blocks in openAiChat.ts, and expect similar patterns to be applied consistently across similar proxy call implementations.
Applied to files:
src/features/chat/openAiChat.ts
🧬 Code Graph Analysis (4)

electron/main.ts (1)
- rust-lib/src/lib.rs (4): proxy_request_blocking (53-90), proxy_request_streaming (93-135), start_sidecar (153-197), stop_sidecar (200-213)

electron/main.js (2)
- rust-lib/index.js (4): require (7-7), require (8-8), process (10-10), process (26-26)
- rust-lib/src/lib.rs (4): proxy_request_blocking (53-90), proxy_request_streaming (93-135), start_sidecar (153-197), stop_sidecar (200-213)

rust-lib/src/lib.rs (1)
- rust-lib/index.d.ts (2): ProxyRequestPayload (6-10), StartSidecarPayload (13-15)

src/features/chat/openAiChat.ts (2)
- src/features/chat/messages.ts (1): Message (4-7)
- src/utils/config.ts (1): config (139-154)
🪛 LanguageTool

DEPLOYMENT.md

[grammar] ~55-~55: There might be a mistake here.
Context: ...tion directory. The paths are typically: * **Windows:** `%APPDATA%\\com.heyamica.dev` ...
(QB_NEW_EN)

[grammar] ~56-~56: There might be a mistake here.
Context: ...aste this into the Explorer address bar) * **macOS:** `~/Library/Application Support/...`
(QB_NEW_EN)

[grammar] ~57-~57: There might be a mistake here.
Context: ...xplorer address bar) * **macOS:** `~/Library/Application Support/com.heyamica.dev` * **Linux:** `~/.config/com.heyamica.dev` ...
(QB_NEW_EN)

[grammar] ~102-~102: There might be a mistake here.
Context: ...package under the `deb/` subdirectory. * **macOS:** The `.app` file is under the `m...
(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)

DEPLOYMENT.md

24-24: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🔇 Additional comments (6)

src/pages/_app.tsx (1)

6-6: No-impact formatting change looks good

Whitespace-only change after imports. No functional impact.

DEPLOYMENT.md (2)

13-19: Outdated Tauri build dependencies conflict with Electron migration

This section lists Tauri v1 Linux dependencies and mentions WebKitGTK. The PR migrates to Electron + Rust (N-API). Electron packaging and runtime have different prerequisites (e.g., system packages like WebKitGTK are generally not needed). Please update this section to reflect Electron requirements and remove Tauri-specific notes.

Action items:

- Replace Tauri/Linux deps with Electron-specific guidance (e.g., Node, a system toolchain for native modules, optional system libs for AppImage/deb creation if you use electron-builder).
- If you use electron-builder, note required system tools (appimagetool/fpm) per your CI.

I can draft an Electron-focused prerequisites section once you confirm the packaging tool (electron-builder vs electron-forge).

⛔ Skipped due to learnings

Learnt from: AlphaEcho11
PR: semperai/amica#213
File: DEPLOYMENT.md:13-17
Timestamp: 2025-08-13T07:01:50.344Z
Learning: The Amica project uses Tauri v1, not Tauri v2. The DEPLOYMENT.md should reference libwebkit2gtk-4.0-dev for Linux build dependencies, which is correct for Tauri v1.
96-104: Update Deployment Documentation for Electron Packaging

The current instructions still reference Tauri. This project uses electron-builder (see package.json), so please update DEPLOYMENT.md (lines 96–104) accordingly:

• Replace the Tauri build command:

```diff
-npm run tauri build
+npm run build  # or npm run build:rust && npm run build:next && npm run build:electron
```

• Update the output paths to point at the Electron dist folder:

```diff
-src-tauri/target/release/bundle/
+dist/
```

• Example directory structure under dist/:

- Windows: installer and unpacked app under `dist/win-unpacked/` (or `.exe` in `dist/`)
- Linux: AppImage under `dist/linux-unpacked/`
- macOS: `.dmg` installer in `dist/`

Adjust these paths if you've overridden electron-builder's default output directory in your config.

⛔ Skipped due to learnings

Learnt from: AlphaEcho11
PR: semperai/amica#213
File: DEPLOYMENT.md:13-17
Timestamp: 2025-08-13T07:01:50.344Z
Learning: The Amica project uses Tauri v1, not Tauri v2. The DEPLOYMENT.md should reference libwebkit2gtk-4.0-dev for Linux build dependencies, which is correct for Tauri v1.
package.json (2)

12-13: Confirm Next.js static export settings for Electron file:// loading

"homepage" is ignored by Next.js. For next export to work reliably under Electron (file://), ensure next.config.js uses output: 'export' and correct assetPrefix/basePath settings, or set trailingSlash: true to avoid 404s on deep links.

I can draft a minimal next.config.js snippet if helpful; a hedged sketch follows.
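For illustration only (this assumes the project adopts Next's static export; the option values are suggestions, not the repo's current config):

```js
// next.config.js — minimal sketch for static export under Electron
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',    // write a fully static site (replaces `next export`)
  assetPrefix: './',   // relative asset URLs so pages load over file://
  trailingSlash: true, // emit folder/index.html so deep links resolve from disk
};
module.exports = nextConfig;
```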
6-6: Verify module format of electron/main.js matches Electron expectations

Ensure tsc compiles electron/main.ts to a CommonJS entry (or set package.json "type": "module" with an ESM-compatible main). A mismatch can cause runtime import/require errors when Electron launches.

If you want, I can produce a dedicated tsconfig.electron.json aligned to Electron v31 (Node 20).
rust-lib/index.js (1)

313-318: LGTM: exports align with N-API surface

The exported functions match the native binding and the TS declarations. Loader logic is thorough across OS/arch.
"use strict"; | ||
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { | ||
function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } | ||
return new (P || (P = Promise))(function (resolve, reject) { | ||
function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } | ||
function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } | ||
function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } | ||
step((generator = generator.apply(thisArg, _arguments || [])).next()); | ||
}); | ||
}; | ||
var __generator = (this && this.__generator) || function (thisArg, body) { | ||
var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === "function" ? Iterator : Object).prototype); | ||
return g.next = verb(0), g["throw"] = verb(1), g["return"] = verb(2), typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; | ||
function verb(n) { return function (v) { return step([n, v]); }; } | ||
function step(op) { | ||
if (f) throw new TypeError("Generator is already executing."); | ||
while (g && (g = 0, op[0] && (_ = 0)), _) try { | ||
if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; | ||
if (y = 0, t) op = [op[0] & 2, t.value]; | ||
switch (op[0]) { | ||
case 0: case 1: t = op; break; | ||
case 4: _.label++; return { value: op[1], done: false }; | ||
case 5: _.label++; y = op[1]; op = [0]; continue; | ||
case 7: op = _.ops.pop(); _.trys.pop(); continue; | ||
default: | ||
if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } | ||
if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } | ||
if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } | ||
if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } | ||
if (t[2]) _.ops.pop(); | ||
_.trys.pop(); continue; | ||
} | ||
op = body.call(thisArg, _); | ||
} catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } | ||
if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; | ||
} | ||
}; |
🛠️ Refactor suggestion

Remove TypeScript transpilation artifacts from version control.

This file appears to be the transpiled JavaScript output of `electron/main.ts`. Having both the TypeScript source and its compiled JavaScript output in version control can lead to synchronization issues and merge conflicts.

Consider:

- Adding `electron/*.js` to `.gitignore`
- Setting up a build process that transpiles TypeScript files before running Electron
- Only committing the TypeScript source files

```diff
 # In .gitignore
+electron/*.js
+electron/*.js.map
```
🤖 Prompt for AI Agents
In electron/main.js around lines 1 to 37: this file is a transpiled JavaScript
artifact from electron/main.ts and should not be committed; remove
electron/main.js from version control, add an entry to .gitignore to exclude
electron/*.js (or specifically electron/main.js), update the repository by
deleting the tracked file (git rm --cached) and committing that change, and
ensure the project build/CI scripts transpile TypeScript before running Electron
so only the TypeScript source (electron/main.ts) is committed going forward.
```js
var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
var _a = require(nativeModulePath), proxy_request_blocking = _a.proxy_request_blocking, proxy_request_streaming = _a.proxy_request_streaming, start_sidecar = _a.start_sidecar, stop_sidecar = _a.stop_sidecar;
```
Native module loading path is fragile and may fail in production.

The hardcoded relative path to the native module (line 43) could break in production builds where the directory structure might differ. Additionally, there's no error handling for the case where the module fails to load.

Add error handling and consider using a more robust module resolution strategy:
```diff
-var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
-var _a = require(nativeModulePath), proxy_request_blocking = _a.proxy_request_blocking, proxy_request_streaming = _a.proxy_request_streaming, start_sidecar = _a.start_sidecar, stop_sidecar = _a.stop_sidecar;
+var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
+var nativeModule;
+try {
+    nativeModule = require(nativeModulePath);
+} catch (error) {
+    console.error("Failed to load native module:", error);
+    electron_1.app.quit();
+    throw error;
+}
+var proxy_request_blocking = nativeModule.proxy_request_blocking;
+var proxy_request_streaming = nativeModule.proxy_request_streaming;
+var start_sidecar = nativeModule.start_sidecar;
+var stop_sidecar = nativeModule.stop_sidecar;
```
🤖 Prompt for AI Agents
In electron/main.js around lines 43-44, the native module is required with a
fragile hardcoded relative path and lacks error handling; wrap the require in a
try/catch, resolve the node binary path using a robust strategy (e.g. use
path.resolve with process.resourcesPath or app.getAppPath and handle
app.isPackaged to choose production vs dev locations), attempt fallbacks
(alternative expected build locations), and on failure log the full error and
exit or disable dependent features; ensure the resolved path is validated
(fs.existsSync) before requiring to avoid unhandled exceptions.
```js
electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
    var onChunk, onEnd, onError;
    return __generator(this, function (_a) {
        switch (_a.label) {
            case 0:
                onChunk = function (chunk) {
                    mainWindow.webContents.send('stream-chunk', chunk);
                };
                onEnd = function () {
                    mainWindow.webContents.send('stream-end');
                };
                onError = function (error) {
                    mainWindow.webContents.send('stream-error', error);
                };
                return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)];
            case 1:
                _a.sent();
                return [2 /*return*/];
        }
    });
}); });
```
Missing error handling for IPC streaming callbacks.

The `proxy_request_streaming` handler doesn't include any error handling. If the native streaming function throws an error, it won't be caught or communicated to the renderer.

Add try-catch error handling:
```diff
 electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
     var onChunk, onEnd, onError;
     return __generator(this, function (_a) {
         switch (_a.label) {
             case 0:
                 onChunk = function (chunk) {
                     mainWindow.webContents.send('stream-chunk', chunk);
                 };
                 onEnd = function () {
                     mainWindow.webContents.send('stream-end');
                 };
                 onError = function (error) {
                     mainWindow.webContents.send('stream-error', error);
                 };
-                return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)];
+                return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)
+                    .catch(function (error) {
+                        onError(error.message || 'Unknown streaming error');
+                    })];
             case 1:
                 _a.sent();
                 return [2 /*return*/];
```
🤖 Prompt for AI Agents
In electron/main.js around lines 72 to 92, the IPC handler for
'proxy_request_streaming' lacks a try/catch around the await
proxy_request_streaming call so thrown errors are uncaught and not sent to the
renderer; wrap the await call in a try block and in the catch send a
'stream-error' IPC message with the caught error (and guard that mainWindow and
mainWindow.webContents exist before sending), ensuring any exception from
proxy_request_streaming is forwarded to the renderer and does not crash the main
process.
```ts
// This is a temporary solution to get the native module path.
// In a real application, this would be handled by the build process.
const nativeModulePath = path.join(__dirname, `../rust-lib/amica-rust-lib.node`);
const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = require(nativeModulePath);
```
Native module path resolution needs improvement for production.

The comment acknowledges this is temporary, but the hardcoded relative path will likely fail in production builds. Also, there's no error handling if the module fails to load.

Implement a more robust module resolution strategy:
```diff
-// This is a temporary solution to get the native module path.
-// In a real application, this would be handled by the build process.
-const nativeModulePath = path.join(__dirname, `../rust-lib/amica-rust-lib.node`);
-const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = require(nativeModulePath);
+function loadNativeModule() {
+  const possiblePaths = [
+    path.join(__dirname, '../rust-lib/amica-rust-lib.node'),
+    path.join(process.resourcesPath, 'amica-rust-lib.node'),
+    path.join(app.getAppPath(), 'rust-lib/amica-rust-lib.node'),
+  ];
+
+  for (const modulePath of possiblePaths) {
+    try {
+      if (require('fs').existsSync(modulePath)) {
+        return require(modulePath);
+      }
+    } catch (error) {
+      console.warn(`Failed to load native module from ${modulePath}:`, error);
+    }
+  }
+
+  throw new Error('Failed to load native module from any expected location');
+}
+
+const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = loadNativeModule();
```
🤖 Prompt for AI Agents
In electron/main.ts around lines 4 to 7, the current hardcoded relative path to
the native module is fragile for production and lacks error handling; replace it
with a resolution strategy that picks between dev and packaged locations (e.g.
when app.isPackaged or using process.resourcesPath look in app.asar.unpacked or
a platform/arch-specific subfolder) and validate the resolved path with
fs.existsSync before requiring; wrap the require in a try/catch, log a clear
error (including the resolved path and the caught exception) and exit the
process cleanly if loading fails so the app doesn't continue in a broken state.
```ts
const mainWindow = new BrowserWindow({
  width: 1200,
  height: 800,
  webPreferences: {
    preload: path.join(__dirname, 'preload.js'),
    nodeIntegration: false,
    contextIsolation: true,
  },
});
```
🛠️ Refactor suggestion

Missing security configurations for BrowserWindow.

While `contextIsolation: true` and `nodeIntegration: false` are good security practices, additional security measures should be implemented for production use.

Add additional security configurations:
```diff
 const mainWindow = new BrowserWindow({
   width: 1200,
   height: 800,
   webPreferences: {
     preload: path.join(__dirname, 'preload.js'),
     nodeIntegration: false,
     contextIsolation: true,
+    sandbox: true,
+    webSecurity: true,
+    allowRunningInsecureContent: false,
+    experimentalFeatures: false,
   },
+  // Additional window security
+  titleBarStyle: 'hiddenInset',
+  frame: process.platform !== 'win32',
 });
+
+// Prevent navigation to external URLs
+mainWindow.webContents.on('will-navigate', (event, url) => {
+  if (!url.startsWith('file://') && !url.startsWith('http://localhost')) {
+    event.preventDefault();
+  }
+});
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In electron/main.ts around lines 10 to 18, the BrowserWindow lacks several
production security settings; update the BrowserWindow creation to harden the
renderer by adding secure webPreferences (enable sandbox: true, ensure
enableRemoteModule: false, keep nodeIntegration: false and contextIsolation:
true, set webSecurity: true and allowRunningInsecureContent: false), disable
devTools in production (only allow in dev), and enable content protection if
needed (setContentProtection(true)) — make these changes conditionally based on
NODE_ENV so dev ergonomics remain while production builds are locked down.
```diff
 function getResponseStream(
   messages: Message[],
-  url: string,
+  _url: string, // url is now handled by the proxy
   model: string,
   apiKey: string,
 ) {
```
🛠️ Refactor suggestion

Align getResponseStream signature with proxy-based flow (drop unused _url param)

The URL is handled by the proxy. Keeping _url invites accidental config dependency elsewhere.

Apply this diff:
```diff
-function getResponseStream(
-  messages: Message[],
-  _url: string, // url is now handled by the proxy
-  model: string,
-  apiKey: string,
-) {
+function getResponseStream(
+  messages: Message[],
+  model: string,
+  apiKey: string,
+) {
```
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 22 to 27, the getResponseStream
function still declares an unused _url parameter (the proxy now handles the
URL); remove the _url parameter from the function signature and any internal
references, update its TypeScript signature and JSDoc/comments accordingly, and
then search-and-replace/update all call sites to stop passing a URL (or adjust
callers to only pass messages, model and apiKey) so parameter ordering and types
remain consistent.
```ts
const unlistens: Array<() => void> = [];
cleanup = () => unlistens.forEach(fn => fn());

const onChunk = (chunk: string) => {
  const lines = chunk.split('\n').filter((line: string) => line.startsWith('data: '));
  for (const line of lines) {
    const data = line.substring(6);
    if (data.trim() === '[DONE]') {
      return;
    }
    try {
      const json = JSON.parse(data);
      const messagePiece = json.choices[0].delta.content;
      if (messagePiece) {
        controller.enqueue(messagePiece);
      }
    } catch (error) {
      console.error("Failed to parse stream chunk:", error, "in chunk:", data);
    }
  }
// (trailing lines below are diff context from the replaced reader-based implementation)
} catch (error) {
  console.error(error);
  controller.error(error);
} finally {
  reader.releaseLock();
};
```
🛠️ Refactor suggestion

Make SSE parsing robust across chunk boundaries and tolerate "data:" with or without a trailing space

Current parsing assumes each chunk contains whole lines and only considers lines starting with "data: ". Partial JSON at chunk boundaries will be dropped; some servers omit the space. Keep a rolling buffer and strip the prefix flexibly.

Apply this diff:
Apply this diff:
- const unlistens: Array<() => void> = [];
- cleanup = () => unlistens.forEach(fn => fn());
+ const unlistens: Array<() => void> = [];
+ cleanup = () => unlistens.forEach(fn => fn());
+ let buffer = "";
- const onChunk = (chunk: string) => {
- const lines = chunk.split('\n').filter((line: string) => line.startsWith('data: '));
- for (const line of lines) {
- const data = line.substring(6);
- if (data.trim() === '[DONE]') {
- return;
- }
- try {
- const json = JSON.parse(data);
- const messagePiece = json.choices[0].delta.content;
- if (messagePiece) {
- controller.enqueue(messagePiece);
- }
- } catch (error) {
- console.error("Failed to parse stream chunk:", error, "in chunk:", data);
- }
- }
- };
+ const onChunk = (chunk: string) => {
+ buffer += chunk;
+ const lines = buffer.split('\n');
+ buffer = lines.pop() ?? "";
+ for (const raw of lines) {
+ if (!raw.startsWith('data:')) continue;
+ const data = raw.replace(/^data:\s*/, '');
+ if (data.trim() === '[DONE]') return;
+ try {
+ const json = JSON.parse(data);
+ const piece = json?.choices?.[0]?.delta?.content;
+ if (piece) controller.enqueue(piece);
+ } catch (error) {
+ console.error("Failed to parse stream chunk:", error, "in chunk:", data);
+ }
+ }
+ };
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 32 to 53, the SSE chunk parsing
assumes each chunk contains full lines and only looks for lines starting with
"data: " which drops partial JSON crossing chunk boundaries and misses "data:"
without space; fix it by keeping a rolling buffer string outside onChunk, on
each chunk append to the buffer, split buffer by '\n' into lines, keep the last
line back in the buffer if it does not end with '\n' (it's a partial line), for
each complete line trim leading whitespace and check for prefixes "data:" or
"data: " (tolerant), strip that prefix, skip empty entries, treat "[DONE]" as
the stop signal, then JSON.parse the remaining data and enqueue content if
present, and ensure parse errors are caught and logged without dropping
subsequent chunks.
```ts
try {
  window.electronAPI.proxyRequestStreaming({
    path: "v1/chat/completions",
    authorization: apiKey,
    body: JSON.stringify({
      model,
      messages,
      stream: true,
      max_tokens: 200,
    }),
  }, onChunk, onEnd, onError);
} catch (e) {
```
💡 Verification agent
🧩 Analysis chain
Pass a fully-formed Authorization header value ("Bearer <key>")
If the backend expects a standard Authorization header, sending the raw key will cause 401s. Prefix the key with "Bearer ".
Apply this diff:
- window.electronAPI.proxyRequestStreaming({
+ window.electronAPI.proxyRequestStreaming({
path: "v1/chat/completions",
- authorization: apiKey,
+ authorization: `Bearer ${apiKey}`,
body: JSON.stringify({
model,
messages,
stream: true,
max_tokens: 200,
}),
}, onChunk, onEnd, onError);
🏁 Script executed:
#!/bin/bash
# Locate the definition of proxyRequestStreaming to inspect header handling
rg -n "proxyRequestStreaming" -C3
Length of output: 2831
#!/bin/bash
# Search for IPC handlers related to proxy requests
rg -n "ipcMain.handle" -C3
rg -n "proxy_request_blocking" -C3
rg -n "proxy_request_streaming" -C3
Prefix the API key with “Bearer ” in the Authorization header
OpenAI’s API requires a header of the form
Authorization: Bearer <YOUR_TOKEN>
You’re currently sending just the raw key, which will lead to 401 Unauthorized errors. Please update the call in src/features/chat/openAiChat.ts:
window.electronAPI.proxyRequestStreaming({
path: "v1/chat/completions",
- authorization: apiKey,
+ authorization: `Bearer ${apiKey}`,
body: JSON.stringify({
model,
messages,
stream: true,
max_tokens: 200,
}),
}, onChunk, onEnd, onError);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  window.electronAPI.proxyRequestStreaming({
    path: "v1/chat/completions",
    authorization: `Bearer ${apiKey}`,
    body: JSON.stringify({
      model,
      messages,
      stream: true,
      max_tokens: 200,
    }),
  }, onChunk, onEnd, onError);
} catch (e) {
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 65 to 76, the Authorization
header is being sent as the raw API key which will cause 401 errors; update the
payload to prefix the API key with "Bearer " (e.g. Authorization: `Bearer
${apiKey}`) before passing it into window.electronAPI.proxyRequestStreaming so
the header matches OpenAI's required format.
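As a small defensive variant (a sketch only; `asBearer` is a hypothetical helper, not part of this PR), the prefix can be applied only when missing, so a key pasted with the scheme already attached isn't double-prefixed:

function asBearer(apiKey: string): string {
  // Add the scheme only if the stored key doesn't already carry it.
  return /^Bearer\s/i.test(apiKey) ? apiKey : `Bearer ${apiKey}`;
}

Usage would then be `authorization: asBearer(apiKey)` at both call sites.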
cancel(reason) {
  console.log("Stream cancelled:", reason);
  cleanup();
},
Stream cancellation is a no-op; the underlying request will continue
cleanup() only runs local unlisteners (none are registered here) and doesn’t signal the backend to abort. This can leak connections and CPU.
Follow-up options:
- Make proxyRequestStreaming return a cancel function and store it; call it in cancel() and onEnd/onError.
- Support AbortSignal: pass a signal and abort in main/Rust when signaled.
I can help wire this end-to-end (preload/main/native).
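A minimal renderer-side sketch of the first option, assuming a hypothetical preload contract in which proxyRequestStreaming returns a cancel function (the signature, `StreamRequest`, and `openChatStream` below are assumptions for illustration, not this PR's actual API):

// Sketch: the function returned by proxyRequestStreaming is assumed to
// abort the in-flight backend request when called.
type StreamRequest = { path: string; authorization: string; body: string };

declare global {
  interface Window {
    electronAPI: {
      proxyRequestStreaming(
        req: StreamRequest,
        onChunk: (chunk: string) => void,
        onEnd: () => void,
        onError: (err: unknown) => void,
      ): () => void; // returned function cancels the request (assumed)
    };
  }
}

function openChatStream(req: StreamRequest): ReadableStream<string> {
  let cancelStream: (() => void) | null = null;
  return new ReadableStream<string>({
    start(controller) {
      cancelStream = window.electronAPI.proxyRequestStreaming(
        req,
        (chunk) => controller.enqueue(chunk),
        () => { cancelStream = null; controller.close(); },
        (err) => { cancelStream = null; controller.error(err); },
      );
    },
    cancel(reason) {
      console.log("Stream cancelled:", reason);
      cancelStream?.(); // actually signals main/Rust to abort, instead of a no-op
      cancelStream = null;
    },
  });
}

export {};

With an AbortSignal-based design the shape is the same: the signal is forwarded through preload to main, which aborts the underlying HTTP request when it fires.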
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 82 to 85, the cancel handler
only calls cleanup() which doesn't stop the underlying streaming request and can
leak connections/CPU; change the streaming invocation so proxyRequestStreaming
returns a cancel/abort function (or accept an AbortSignal) and store that
function when starting the stream, then call that cancel function inside
cancel(), and also invoke it onEnd and onError before calling cleanup(); ensure
the cancel function is null-checked and that errors from cancellation are
handled/logged.
export async function getOpenAiVisionChatResponse(messages: Message[]): Promise<string> {
  const apiKey = getApiKey("vision_openai_apikey");
  const url = config("vision_openai_url");
  const model = config("vision_openai_model");

  const stream = await getResponseStream(messages, url, model, apiKey);
  const sreader = await stream.getReader();
  let json: OpenAIResponse;
  try {
    // This is a non-streaming request.
    const res = await window.electronAPI.proxyRequestBlocking({
      path: "v1/chat/completions",
      authorization: apiKey,
      body: JSON.stringify({
        model,
        messages,
        stream: false,
        max_tokens: 200,
      }),
    });
    json = JSON.parse(res);
  } catch (e) {
    const msg = e instanceof Error ? e.message : String(e);
    throw new Error(`OpenAI proxy request failed: ${msg}`);
  }
🛠️ Refactor suggestion
Blocking path: also send "Bearer <key>" and surface backend error payloads
- Authorization formatting as above.
- If the backend returns { error: { message } }, surface that for better DX.
Apply this diff:
const res = await window.electronAPI.proxyRequestBlocking({
path: "v1/chat/completions",
- authorization: apiKey,
+ authorization: `Bearer ${apiKey}`,
body: JSON.stringify({
model,
messages,
stream: false,
max_tokens: 200,
}),
});
json = JSON.parse(res);
+ const errMsg = (json as any)?.error?.message;
+ if (errMsg) throw new Error(errMsg);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export async function getOpenAiVisionChatResponse(messages: Message[]): Promise<string> {
  const apiKey = getApiKey("vision_openai_apikey");
  const model = config("vision_openai_model");

  let json: OpenAIResponse;
  try {
    // This is a non-streaming request.
    const res = await window.electronAPI.proxyRequestBlocking({
      path: "v1/chat/completions",
      authorization: `Bearer ${apiKey}`,
      body: JSON.stringify({
        model,
        messages,
        stream: false,
        max_tokens: 200,
      }),
    });
    json = JSON.parse(res);
    const errMsg = (json as any)?.error?.message;
    if (errMsg) throw new Error(errMsg);
  } catch (e) {
    const msg = e instanceof Error ? e.message : String(e);
    throw new Error(`OpenAI proxy request failed: ${msg}`);
  }
}
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 98 to 119, the proxy request is
sending the raw API key and not surfacing backend error payloads; update the
request to set authorization to "Bearer <apiKey>" (i.e. prefix the token with
"Bearer "), then parse the response JSON and if it contains an error object
(e.g. json.error?.message) throw or surface that message instead of a generic
success path; additionally, in the catch path include any backend error payload
if available by inspecting the caught value and throwing a new Error with that
payload or its message to improve developer feedback.
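A typed variant of that check (a sketch; `OpenAIErrorPayload` and `throwIfApiError` are illustrative names, not from this PR) avoids the `as any` cast:

type OpenAIErrorPayload = { error?: { message?: string } };

function throwIfApiError(json: unknown): void {
  // Surface an OpenAI-style error payload instead of treating it as success.
  const msg = (json as OpenAIErrorPayload)?.error?.message;
  if (msg) throw new Error(msg);
}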
Wow that's really something. Will review. Thanks!
Testing Electron (Rust) build
Summary by CodeRabbit
New Features
Documentation
Refactor
Chores