Changes from all commits (25 commits)
b382be7
I've integrated the Rust backend for process management and the API p…
google-labs-jules[bot] Aug 13, 2025
4da3fe0
docs: Clarify how settings.json is loaded
google-labs-jules[bot] Aug 13, 2025
0cdd5a6
refactor: Improve config loading and error handling
google-labs-jules[bot] Aug 13, 2025
78a0911
refactor: Implement streaming proxy and optimize dependencies
google-labs-jules[bot] Aug 13, 2025
84ce290
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
d4e59b6
fix: Add SSRF protection and improve proxy logic
google-labs-jules[bot] Aug 13, 2025
0ac7c01
fix: Implement robust, race-free shutdown logic
google-labs-jules[bot] Aug 13, 2025
8938ea8
feat: Add confirmation on quit and robust shutdown logic
google-labs-jules[bot] Aug 13, 2025
8297e10
docs: Clarify Linux dependency for Tauri v1 vs v2
google-labs-jules[bot] Aug 13, 2025
1f7cb2f
docs: Correct Windows config path in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
9399c2a
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
bf3598e
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
009e03e
fix(frontend): Add cancel handler to ReadableStream
google-labs-jules[bot] Aug 13, 2025
8f22f3f
fix(backend): Add error handling for event emission
google-labs-jules[bot] Aug 13, 2025
1d156d3
fix(backend): Add error handling for stdout forwarding
google-labs-jules[bot] Aug 13, 2025
09621db
docs: Clarify streaming option in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
8881db0
docs: Correct build artifact paths in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
e9939b6
I've finished refactoring the implementation to use the OpenAI-compat…
google-labs-jules[bot] Aug 13, 2025
a7afd93
refactor(frontend): Harden proxy invoke calls with types and error ha…
google-labs-jules[bot] Aug 13, 2025
ea0364a
refactor(backend): Reacquire Tauri State within async task
google-labs-jules[bot] Aug 13, 2025
a9f9b55
fix(backend): Use graceful exit when sidecar spawn fails
google-labs-jules[bot] Aug 13, 2025
c33b59d
fix(frontend): Implement robust event listener cleanup
google-labs-jules[bot] Aug 13, 2025
9c92a01
fix: Migrate project to Tauri v2
google-labs-jules[bot] Aug 14, 2025
cf98e74
I've migrated your `tauri.conf.json` file to the Tauri v2 format. Thi…
google-labs-jules[bot] Aug 14, 2025
736d1a4
Feat: Migrate from Tauri to Electron with a native Rust module
google-labs-jules[bot] Aug 15, 2025
5 changes: 5 additions & 0 deletions .gitignore
@@ -41,3 +41,8 @@ working/*

# Sentry Config File
.env.sentry-build-plugin

# Rust
rust-lib/target
rust-lib/Cargo.lock
rust-lib/package-lock.json
113 changes: 113 additions & 0 deletions DEPLOYMENT.md
@@ -0,0 +1,113 @@
# Deployment Guide: Running Amica Locally

This guide provides step-by-step instructions for setting up and running the Rust-powered version of Amica on your local machine.

## 1. Prerequisites

Before you begin, you need to have the following software installed on your system:

* **Node.js:** Amica's user interface is built with Node.js. You will need version `18.18.0` or newer. You can download it from the [official Node.js website](https://nodejs.org/).
* **Rust:** The new backend is written in Rust. The easiest way to install Rust is by using `rustup`. You can find instructions at the [official Rust website](https://www.rust-lang.org/tools/install).
* **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui).
> **Important:** When you launch `text-generation-webui`, you must enable the API with the `--api` flag. For example: `./start_linux.sh --api`.
* **(Linux Only) Build Dependencies:** On Linux, you will need to install a few extra packages for Tauri to build correctly. You can install them with the following command:
```bash
sudo apt-get update
sudo apt-get install -y libwebkit2gtk-4.0-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
```
> **Note:** This project uses Tauri v1, which requires `libwebkit2gtk-4.0-dev`. If you are working on a project with Tauri v2 or newer, you will need to use `libwebkit2gtk-4.1-dev` instead.

## 2. Installation and Configuration

Follow these steps to get the Amica project set up.

#### Step 1: Clone the Amica Repository

Open your terminal, navigate to where you want to store the project, and run the following command:

```bash
git clone https://github.com/semperai/amica
cd amica
```

#### Step 2: Install JavaScript Dependencies

Once you are in the `amica` directory, run this command to install all the necessary frontend packages:

```bash
npm install
```

#### Step 3: Configure the `text-generation-webui` Path

Amica needs to know where to find your `text-generation-webui` executable. This is configured in a `settings.json` file.

##### How Configuration Works

Amica uses a default, bundled configuration file to start. To customize the settings, you must create your own `settings.json` file and place it in the correct application configuration directory for your operating system.

When Amica starts, it looks for `settings.json` in this order:
1. **Your Custom `settings.json`:** It checks for the file in your OS's standard application config directory.
2. **Default `settings.json`:** If no custom file is found, it falls back to the default settings bundled inside the application. The default has an empty path, so you **must** create a custom file.

##### Creating Your Custom `settings.json`

1. First, you need to find your application's configuration directory. The paths are typically:
* **Windows:** `%APPDATA%\\com.heyamica.dev` (you can paste this into the Explorer address bar)
* **macOS:** `~/Library/Application Support/com.heyamica.dev`
* **Linux:** `~/.config/com.heyamica.dev`

*(Note: The `com.heyamica.dev` directory might not exist until you run Amica at least once.)*

2. Create a new file named `settings.json` inside that directory.

3. Copy and paste the following content into your new `settings.json` file:
```json
{
"text_generation_webui_path": ""
}
```

4. Add the **full path** to your `text-generation-webui` executable inside the quotes.

* **Windows Example:**
```json
{
"text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
}
```
*(Note the double backslashes `\\`)*

* **Linux/macOS Example:**
```json
{
"text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
}
```

If Amica ever has trouble starting, it will show a dialog box explaining the configuration error. This usually means there's a typo in your `settings.json` file or the path to the executable is incorrect.

## 3. Building the Application

Now that everything is configured, you can build the final, standalone executable.

Run the following command in your terminal. This process will compile the Rust backend and package it with the frontend into a single application. It may take several minutes.

```bash
npm run tauri build
```

Once the build is complete, you will find the final application inside the `src-tauri/target/release/bundle/` directory, organized by platform and package type:
* **Windows:** The installer can be found under the `msi/` subdirectory, and the portable `.exe` under the `nsis/` subdirectory (or a similar name).
* **Linux:** The `.AppImage` can be found under the `appimage/` subdirectory, and the `.deb` package under the `deb/` subdirectory.
* **macOS:** The `.app` file is under the `macos/` subdirectory, and the `.dmg` installer is under the `dmg/` subdirectory.

## 4. Running Amica

You can now run this executable file directly! There is no need for any further commands.

On the first run, be sure to open the in-app settings and configure the following:
* **Chatbot Backend:** Select **ChatGPT**.
* In the ChatGPT settings, you may need to enter a dummy API key (e.g., "123") for the UI to proceed, but the key itself is not used by the proxy.

That's it! Your self-contained, Rust-powered Amica application is now ready to use.
127 changes: 127 additions & 0 deletions electron/main.js
@@ -0,0 +1,127 @@
"use strict";
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
return new (P || (P = Promise))(function (resolve, reject) {
function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
step((generator = generator.apply(thisArg, _arguments || [])).next());
});
};
var __generator = (this && this.__generator) || function (thisArg, body) {
var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === "function" ? Iterator : Object).prototype);
return g.next = verb(0), g["throw"] = verb(1), g["return"] = verb(2), typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g;
function verb(n) { return function (v) { return step([n, v]); }; }
function step(op) {
if (f) throw new TypeError("Generator is already executing.");
while (g && (g = 0, op[0] && (_ = 0)), _) try {
if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;
if (y = 0, t) op = [op[0] & 2, t.value];
switch (op[0]) {
case 0: case 1: t = op; break;
case 4: _.label++; return { value: op[1], done: false };
case 5: _.label++; y = op[1]; op = [0]; continue;
case 7: op = _.ops.pop(); _.trys.pop(); continue;
default:
if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }
if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }
if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }
if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }
if (t[2]) _.ops.pop();
_.trys.pop(); continue;
}
op = body.call(thisArg, _);
} catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }
if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };
}
};
Comment on lines +1 to +37
🛠️ Refactor suggestion

Remove TypeScript transpilation artifacts from version control.

This file appears to be the transpiled JavaScript output of electron/main.ts. Having both the TypeScript source and its compiled JavaScript output in version control can lead to synchronization issues and merge conflicts.

Consider:

  1. Adding electron/*.js to .gitignore
  2. Setting up a build process that transpiles TypeScript files before running Electron
  3. Only committing the TypeScript source files
# In .gitignore
+electron/*.js
+electron/*.js.map
🤖 Prompt for AI Agents
In electron/main.js around lines 1 to 37: this file is a transpiled JavaScript
artifact from electron/main.ts and should not be committed; remove
electron/main.js from version control, add an entry to .gitignore to exclude
electron/*.js (or specifically electron/main.js), update the repository by
deleting the tracked file (git rm --cached) and committing that change, and
ensure the project build/CI scripts transpile TypeScript before running Electron
so only the TypeScript source (electron/main.ts) is committed going forward.

Object.defineProperty(exports, "__esModule", { value: true });
var electron_1 = require("electron");
var path_1 = require("path");
// This is a temporary solution to get the native module path.
// In a real application, this would be handled by the build process.
var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
var _a = require(nativeModulePath), proxy_request_blocking = _a.proxy_request_blocking, proxy_request_streaming = _a.proxy_request_streaming, start_sidecar = _a.start_sidecar, stop_sidecar = _a.stop_sidecar;
Comment on lines +43 to +44
⚠️ Potential issue

Native module loading path is fragile and may fail in production.

The hardcoded relative path to the native module (line 43) could break in production builds where the directory structure might differ. Additionally, there's no error handling for the case where the module fails to load.

Add error handling and consider using a more robust module resolution strategy:

-var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
-var _a = require(nativeModulePath), proxy_request_blocking = _a.proxy_request_blocking, proxy_request_streaming = _a.proxy_request_streaming, start_sidecar = _a.start_sidecar, stop_sidecar = _a.stop_sidecar;
+var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
+var nativeModule;
+try {
+    nativeModule = require(nativeModulePath);
+} catch (error) {
+    console.error("Failed to load native module:", error);
+    electron_1.app.quit();
+    throw error;
+}
+var proxy_request_blocking = nativeModule.proxy_request_blocking;
+var proxy_request_streaming = nativeModule.proxy_request_streaming;
+var start_sidecar = nativeModule.start_sidecar;
+var stop_sidecar = nativeModule.stop_sidecar;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
var _a = require(nativeModulePath), proxy_request_blocking = _a.proxy_request_blocking, proxy_request_streaming = _a.proxy_request_streaming, start_sidecar = _a.start_sidecar, stop_sidecar = _a.stop_sidecar;
var nativeModulePath = path_1.default.join(__dirname, "../rust-lib/amica-rust-lib.node");
var nativeModule;
try {
nativeModule = require(nativeModulePath);
} catch (error) {
console.error("Failed to load native module:", error);
electron_1.app.quit();
throw error;
}
var proxy_request_blocking = nativeModule.proxy_request_blocking;
var proxy_request_streaming = nativeModule.proxy_request_streaming;
var start_sidecar = nativeModule.start_sidecar;
var stop_sidecar = nativeModule.stop_sidecar;
🤖 Prompt for AI Agents
In electron/main.js around lines 43-44, the native module is required with a
fragile hardcoded relative path and lacks error handling; wrap the require in a
try/catch, resolve the node binary path using a robust strategy (e.g. use
path.resolve with process.resourcesPath or app.getAppPath and handle
app.isPackaged to choose production vs dev locations), attempt fallbacks
(alternative expected build locations), and on failure log the full error and
exit or disable dependent features; ensure the resolved path is validated
(fs.existsSync) before requiring to avoid unhandled exceptions.

function createWindow() {
var _this = this;
var mainWindow = new electron_1.BrowserWindow({
width: 1200,
height: 800,
webPreferences: {
preload: path_1.default.join(__dirname, 'preload.js'),
nodeIntegration: false,
contextIsolation: true,
},
});
var startUrl = process.env.ELECTRON_START_URL || path_1.default.join(__dirname, '../out/index.html');
if (process.env.ELECTRON_START_URL) {
mainWindow.loadURL(startUrl);
mainWindow.webContents.openDevTools();
}
else {
mainWindow.loadFile(startUrl);
}
electron_1.ipcMain.handle('proxy_request_blocking', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
return __generator(this, function (_a) {
switch (_a.label) {
case 0: return [4 /*yield*/, proxy_request_blocking(payload)];
case 1: return [2 /*return*/, _a.sent()];
}
});
}); });
electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
var onChunk, onEnd, onError;
return __generator(this, function (_a) {
switch (_a.label) {
case 0:
onChunk = function (chunk) {
mainWindow.webContents.send('stream-chunk', chunk);
};
onEnd = function () {
mainWindow.webContents.send('stream-end');
};
onError = function (error) {
mainWindow.webContents.send('stream-error', error);
};
return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)];
case 1:
_a.sent();
return [2 /*return*/];
}
});
}); });
Comment on lines +72 to +92
⚠️ Potential issue

Missing error handling for IPC streaming callbacks.

The proxy_request_streaming handler doesn't include any error handling. If the native streaming function throws an error, it won't be caught or communicated to the renderer.

Add try-catch error handling:

     electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
         var onChunk, onEnd, onError;
         return __generator(this, function (_a) {
             switch (_a.label) {
                 case 0:
                     onChunk = function (chunk) {
                         mainWindow.webContents.send('stream-chunk', chunk);
                     };
                     onEnd = function () {
                         mainWindow.webContents.send('stream-end');
                     };
                     onError = function (error) {
                         mainWindow.webContents.send('stream-error', error);
                     };
-                    return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)];
+                    return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)
+                        .catch(function(error) {
+                            onError(error.message || 'Unknown streaming error');
+                        })];
                 case 1:
                     _a.sent();
                     return [2 /*return*/];
📝 Committable suggestion


Suggested change
electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
var onChunk, onEnd, onError;
return __generator(this, function (_a) {
switch (_a.label) {
case 0:
onChunk = function (chunk) {
mainWindow.webContents.send('stream-chunk', chunk);
};
onEnd = function () {
mainWindow.webContents.send('stream-end');
};
onError = function (error) {
mainWindow.webContents.send('stream-error', error);
};
return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)];
case 1:
_a.sent();
return [2 /*return*/];
}
});
}); });
electron_1.ipcMain.on('proxy_request_streaming', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
var onChunk, onEnd, onError;
return __generator(this, function (_a) {
switch (_a.label) {
case 0:
onChunk = function (chunk) {
mainWindow.webContents.send('stream-chunk', chunk);
};
onEnd = function () {
mainWindow.webContents.send('stream-end');
};
onError = function (error) {
mainWindow.webContents.send('stream-error', error);
};
return [4 /*yield*/, proxy_request_streaming(payload, onChunk, onEnd, onError)
.catch(function (error) {
onError(error.message || 'Unknown streaming error');
})];
case 1:
_a.sent();
return [2 /*return*/];
}
});
}); });
🤖 Prompt for AI Agents
In electron/main.js around lines 72 to 92, the IPC handler for
'proxy_request_streaming' lacks a try/catch around the await
proxy_request_streaming call so thrown errors are uncaught and not sent to the
renderer; wrap the await call in a try block and in the catch send a
'stream-error' IPC message with the caught error (and guard that mainWindow and
mainWindow.webContents exist before sending), ensuring any exception from
proxy_request_streaming is forwarded to the renderer and does not crash the main
process.

electron_1.ipcMain.handle('start_sidecar', function (event, payload) { return __awaiter(_this, void 0, void 0, function () {
var onOutput;
return __generator(this, function (_a) {
switch (_a.label) {
case 0:
onOutput = function (output) {
mainWindow.webContents.send('sidecar-output', output);
};
return [4 /*yield*/, start_sidecar(payload, onOutput)];
case 1: return [2 /*return*/, _a.sent()];
}
});
}); });
electron_1.ipcMain.handle('stop_sidecar', function () { return __awaiter(_this, void 0, void 0, function () {
return __generator(this, function (_a) {
switch (_a.label) {
case 0: return [4 /*yield*/, stop_sidecar()];
case 1: return [2 /*return*/, _a.sent()];
}
});
}); });
}
electron_1.app.whenReady().then(function () {
createWindow();
electron_1.app.on('activate', function () {
if (electron_1.BrowserWindow.getAllWindows().length === 0) {
createWindow();
}
});
});
electron_1.app.on('window-all-closed', function () {
if (process.platform !== 'darwin') {
electron_1.app.quit();
}
});
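
On the renderer side, the stream-chunk/stream-end/stream-error events sent above are typically wrapped in a `ReadableStream`, as the `fix(frontend): Add cancel handler to ReadableStream` commit suggests. A minimal, Electron-free sketch of that pattern, with a `subscribe` function standing in for `ipcRenderer` listener registration (names and shape are assumptions, not Amica's actual frontend code):

```javascript
// Wrap callback-style chunk/end/error delivery in a ReadableStream.
// `subscribe` registers the three handlers and returns an unsubscribe
// function (standing in for ipcRenderer.on/removeListener pairs).
function makeEventStream(subscribe) {
  let unsubscribe = null;
  return new ReadableStream({
    start(controller) {
      unsubscribe = subscribe({
        onChunk: (chunk) => controller.enqueue(chunk),
        onEnd: () => {
          if (unsubscribe) unsubscribe();
          controller.close();
        },
        onError: (err) => {
          if (unsubscribe) unsubscribe();
          controller.error(new Error(String(err)));
        },
      });
    },
    cancel() {
      // Mirrors the PR's fix: clean up listeners when the consumer cancels.
      if (unsubscribe) unsubscribe();
    },
  });
}
```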
72 changes: 72 additions & 0 deletions electron/main.ts
@@ -0,0 +1,72 @@
import { app, BrowserWindow, ipcMain } from 'electron';
import path from 'path';

// This is a temporary solution to get the native module path.
// In a real application, this would be handled by the build process.
const nativeModulePath = path.join(__dirname, `../rust-lib/amica-rust-lib.node`);
const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = require(nativeModulePath);
Comment on lines +4 to +7
⚠️ Potential issue

Native module path resolution needs improvement for production.

The comment acknowledges this is temporary, but the hardcoded relative path will likely fail in production builds. Also, there's no error handling if the module fails to load.

Implement a more robust module resolution strategy:

-// This is a temporary solution to get the native module path.
-// In a real application, this would be handled by the build process.
-const nativeModulePath = path.join(__dirname, `../rust-lib/amica-rust-lib.node`);
-const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = require(nativeModulePath);
+function loadNativeModule() {
+  const possiblePaths = [
+    path.join(__dirname, '../rust-lib/amica-rust-lib.node'),
+    path.join(process.resourcesPath, 'amica-rust-lib.node'),
+    path.join(app.getAppPath(), 'rust-lib/amica-rust-lib.node'),
+  ];
+  
+  for (const modulePath of possiblePaths) {
+    try {
+      if (require('fs').existsSync(modulePath)) {
+        return require(modulePath);
+      }
+    } catch (error) {
+      console.warn(`Failed to load native module from ${modulePath}:`, error);
+    }
+  }
+  
+  throw new Error('Failed to load native module from any expected location');
+}
+
+const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = loadNativeModule();
📝 Committable suggestion


Suggested change
// This is a temporary solution to get the native module path.
// In a real application, this would be handled by the build process.
const nativeModulePath = path.join(__dirname, `../rust-lib/amica-rust-lib.node`);
const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = require(nativeModulePath);
function loadNativeModule() {
const possiblePaths = [
path.join(__dirname, '../rust-lib/amica-rust-lib.node'),
path.join(process.resourcesPath, 'amica-rust-lib.node'),
path.join(app.getAppPath(), 'rust-lib/amica-rust-lib.node'),
];
for (const modulePath of possiblePaths) {
try {
if (require('fs').existsSync(modulePath)) {
return require(modulePath);
}
} catch (error) {
console.warn(`Failed to load native module from ${modulePath}:`, error);
}
}
throw new Error('Failed to load native module from any expected location');
}
const { proxy_request_blocking, proxy_request_streaming, start_sidecar, stop_sidecar } = loadNativeModule();
🤖 Prompt for AI Agents
In electron/main.ts around lines 4 to 7, the current hardcoded relative path to
the native module is fragile for production and lacks error handling; replace it
with a resolution strategy that picks between dev and packaged locations (e.g.
when app.isPackaged or using process.resourcesPath look in app.asar.unpacked or
a platform/arch-specific subfolder) and validate the resolved path with
fs.existsSync before requiring; wrap the require in a try/catch, log a clear
error (including the resolved path and the caught exception) and exit the
process cleanly if loading fails so the app doesn't continue in a broken state.


function createWindow() {
  const mainWindow = new BrowserWindow({
    width: 1200,
    height: 800,
    webPreferences: {
      preload: path.join(__dirname, 'preload.js'),
      nodeIntegration: false,
      contextIsolation: true,
    },
  });
Comment on lines +10 to +18
🛠️ Refactor suggestion

Missing security configurations for BrowserWindow.

While contextIsolation: true and nodeIntegration: false are good security practices, additional security measures should be implemented for production use.

Add additional security configurations:

   const mainWindow = new BrowserWindow({
     width: 1200,
     height: 800,
     webPreferences: {
       preload: path.join(__dirname, 'preload.js'),
       nodeIntegration: false,
       contextIsolation: true,
+      sandbox: true,
+      webSecurity: true,
+      allowRunningInsecureContent: false,
+      experimentalFeatures: false,
     },
+    // Additional window security
+    webPreferences: {
+      preload: path.join(__dirname, 'preload.js'),
+      nodeIntegration: false,
+      contextIsolation: true,
+      sandbox: true,
+    },
+    titleBarStyle: 'hiddenInset',
+    frame: process.platform !== 'win32',
   });
+  
+  // Prevent navigation to external URLs
+  mainWindow.webContents.on('will-navigate', (event, url) => {
+    if (!url.startsWith('file://') && !url.startsWith('http://localhost')) {
+      event.preventDefault();
+    }
+  });

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In electron/main.ts around lines 10 to 18, the BrowserWindow lacks several
production security settings; update the BrowserWindow creation to harden the
renderer by adding secure webPreferences (enable sandbox: true, ensure
enableRemoteModule: false, keep nodeIntegration: false and contextIsolation:
true, set webSecurity: true and allowRunningInsecureContent: false), disable
devTools in production (only allow in dev), and enable content protection if
needed (setContentProtection(true)) — make these changes conditionally based on
NODE_ENV so dev ergonomics remain while production builds are locked down.


  const startUrl = process.env.ELECTRON_START_URL || path.join(__dirname, '../out/index.html');

  if (process.env.ELECTRON_START_URL) {
    mainWindow.loadURL(startUrl);
    mainWindow.webContents.openDevTools();
  } else {
    mainWindow.loadFile(startUrl);
  }

  ipcMain.handle('proxy_request_blocking', async (event, payload) => {
    return await proxy_request_blocking(payload);
  });

  ipcMain.on('proxy_request_streaming', async (event, payload) => {
    const onChunk = (chunk) => {
      mainWindow.webContents.send('stream-chunk', chunk);
    };
    const onEnd = () => {
      mainWindow.webContents.send('stream-end');
    };
    const onError = (error) => {
      mainWindow.webContents.send('stream-error', error);
    };
    await proxy_request_streaming(payload, onChunk, onEnd, onError);
  });
Comment on lines +33 to +44
⚠️ Potential issue

IPC handler uses incorrect method for streaming.

The proxy_request_streaming handler uses ipcMain.on (line 33), which doesn't support async/await properly and can't send responses back to the renderer. This should use ipcMain.handle for bidirectional communication or properly handle the async nature.

Either use ipcMain.handle for proper async handling or wrap in try-catch:

-  ipcMain.on('proxy_request_streaming', async (event, payload) => {
+  ipcMain.handle('proxy_request_streaming', async (event, payload) => {
     const onChunk = (chunk) => {
-      mainWindow.webContents.send('stream-chunk', chunk);
+      event.sender.send('stream-chunk', chunk);
     };
     const onEnd = () => {
-      mainWindow.webContents.send('stream-end');
+      event.sender.send('stream-end');
     };
     const onError = (error) => {
-      mainWindow.webContents.send('stream-error', error);
+      event.sender.send('stream-error', error);
     };
-    await proxy_request_streaming(payload, onChunk, onEnd, onError);
+    try {
+      await proxy_request_streaming(payload, onChunk, onEnd, onError);
+      return { success: true };
+    } catch (error) {
+      onError(error.message || 'Streaming failed');
+      throw error;
+    }
   });
📝 Committable suggestion


Suggested change
ipcMain.on('proxy_request_streaming', async (event, payload) => {
const onChunk = (chunk) => {
mainWindow.webContents.send('stream-chunk', chunk);
};
const onEnd = () => {
mainWindow.webContents.send('stream-end');
};
const onError = (error) => {
mainWindow.webContents.send('stream-error', error);
};
await proxy_request_streaming(payload, onChunk, onEnd, onError);
});
ipcMain.handle('proxy_request_streaming', async (event, payload) => {
const onChunk = (chunk) => {
event.sender.send('stream-chunk', chunk);
};
const onEnd = () => {
event.sender.send('stream-end');
};
const onError = (error) => {
event.sender.send('stream-error', error);
};
try {
await proxy_request_streaming(payload, onChunk, onEnd, onError);
return { success: true };
} catch (error) {
onError(error.message || 'Streaming failed');
throw error;
}
});
🤖 Prompt for AI Agents
In electron/main.ts around lines 33 to 44, the IPC handler uses ipcMain.on which
doesn't support proper async/await/invoke semantics for bidirectional RPC;
change this to ipcMain.handle so the renderer can call ipcRenderer.invoke and
receive a Promise, make the handler async, wrap the proxy_request_streaming call
in try/catch, keep using event.sender or mainWindow.webContents to send
stream-chunk/stream-end/stream-error events, and return a final result or
rethrow an error so the invoke caller receives success/failure.


  ipcMain.handle('start_sidecar', async (event, payload) => {
    const onOutput = (output) => {
      mainWindow.webContents.send('sidecar-output', output);
    };
    return await start_sidecar(payload, onOutput);
  });
Comment on lines +46 to +51

🛠️ Refactor suggestion

Potential race condition with window reference in async handlers.

The IPC handlers capture mainWindow in their closures, but if the window is closed while an async operation is in progress, calling mainWindow.webContents.send() could throw an error.

Add window validity checks:

   ipcMain.handle('start_sidecar', async (event, payload) => {
     const onOutput = (output) => {
-      mainWindow.webContents.send('sidecar-output', output);
+      if (!mainWindow.isDestroyed()) {
+        mainWindow.webContents.send('sidecar-output', output);
+      }
     };
     return await start_sidecar(payload, onOutput);
   });
📝 Committable suggestion


Suggested change
ipcMain.handle('start_sidecar', async (event, payload) => {
  const onOutput = (output) => {
    mainWindow.webContents.send('sidecar-output', output);
  };
  return await start_sidecar(payload, onOutput);
});
ipcMain.handle('start_sidecar', async (event, payload) => {
  const onOutput = (output) => {
    if (!mainWindow.isDestroyed()) {
      mainWindow.webContents.send('sidecar-output', output);
    }
  };
  return await start_sidecar(payload, onOutput);
});
🤖 Prompt for AI Agents
In electron/main.ts around lines 46 to 51, the IPC handler captures mainWindow
and may call mainWindow.webContents.send after the window is closed; update the
onOutput callback to first verify the window and its webContents are still valid
(e.g. mainWindow && !mainWindow.isDestroyed() and mainWindow.webContents &&
!mainWindow.webContents.isDestroyed()) before calling send, and wrap the send in
a try/catch to swallow or log errors; optionally ensure any long-lived async
operations remove or noop their callbacks when the window is closed to avoid the
race.
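
A minimal sketch of such a guard, exercised with plain mock objects rather than real `BrowserWindow` instances (the `safeSend` helper name is illustrative):

```javascript
// Guarded send: checks window and webContents validity, and still catches,
// since the window can be destroyed between the check and the send.
function safeSend(win, channel, ...args) {
  if (!win || win.isDestroyed() || !win.webContents || win.webContents.isDestroyed()) {
    return false; // window is gone; drop the event instead of throwing
  }
  try {
    win.webContents.send(channel, ...args);
    return true;
  } catch (err) {
    return false;
  }
}

// Mock windows standing in for BrowserWindow instances.
const sent = [];
const liveWindow = {
  isDestroyed: () => false,
  webContents: { isDestroyed: () => false, send: (ch, out) => sent.push([ch, out]) },
};
const closedWindow = { isDestroyed: () => true };
```

With a helper like this, every IPC handler can route its sends through one place instead of repeating the destroyed-window check in each closure.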


ipcMain.handle('stop_sidecar', async () => {
  return await stop_sidecar();
});
}

app.whenReady().then(() => {
  createWindow();

  app.on('activate', () => {
    if (BrowserWindow.getAllWindows().length === 0) {
      createWindow();
    }
  });
});

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit();
  }
});
17 changes: 17 additions & 0 deletions electron/preload.js
@@ -0,0 +1,17 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var electron_1 = require("electron");
electron_1.contextBridge.exposeInMainWorld('electronAPI', {
proxyRequestBlocking: function (payload) { return electron_1.ipcRenderer.invoke('proxy_request_blocking', payload); },
proxyRequestStreaming: function (payload, onChunk, onEnd, onError) {
electron_1.ipcRenderer.on('stream-chunk', function (event, chunk) { return onChunk(chunk); });
electron_1.ipcRenderer.on('stream-end', function (event) { return onEnd(); });
electron_1.ipcRenderer.on('stream-error', function (event, error) { return onError(error); });
electron_1.ipcRenderer.send('proxy_request_streaming', payload);
},
Comment on lines +6 to +11

⚠️ Potential issue

Memory leak: Event listeners accumulate without cleanup.

The same memory leak issues present in the TypeScript version exist here - event listeners are registered but never removed.

This issue needs to be fixed in the TypeScript source file (electron/preload.ts), and this JavaScript file should be regenerated from the corrected source.

🤖 Prompt for AI Agents
In electron/preload.js around lines 6 to 11, the ipcRenderer event listeners for
'stream-chunk', 'stream-end', and 'stream-error' are registered but never
removed causing a memory leak; update the TypeScript source
(electron/preload.ts) to register those listeners with named handler functions
(or once/removeListener patterns) and ensure they are cleaned up after the
stream completes or errors (call removeListener or use ipcRenderer.once where
appropriate), then regenerate this JavaScript file from the fixed TypeScript so
the runtime no longer accumulates listeners.

    startSidecar: function (payload, onOutput) {
        electron_1.ipcRenderer.on('sidecar-output', function (event, output) { return onOutput(output); });
        return electron_1.ipcRenderer.invoke('start_sidecar', payload);
    },
    stopSidecar: function () { return electron_1.ipcRenderer.invoke('stop_sidecar'); },
});
16 changes: 16 additions & 0 deletions electron/preload.ts
@@ -0,0 +1,16 @@
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('electronAPI', {
  proxyRequestBlocking: (payload) => ipcRenderer.invoke('proxy_request_blocking', payload),
  proxyRequestStreaming: (payload, onChunk, onEnd, onError) => {
    ipcRenderer.on('stream-chunk', (event, chunk) => onChunk(chunk));
    ipcRenderer.on('stream-end', (event) => onEnd());
    ipcRenderer.on('stream-error', (event, error) => onError(error));
    ipcRenderer.send('proxy_request_streaming', payload);
  },
Comment on lines +5 to +10

⚠️ Potential issue

Memory leak: IPC event listeners are never removed.

The proxyRequestStreaming function registers event listeners (lines 6-8) but never removes them. This will cause a memory leak as listeners accumulate with each streaming request.

Store and remove listeners after use:

   proxyRequestStreaming: (payload, onChunk, onEnd, onError) => {
-    ipcRenderer.on('stream-chunk', (event, chunk) => onChunk(chunk));
-    ipcRenderer.on('stream-end', (event) => onEnd());
-    ipcRenderer.on('stream-error', (event, error) => onError(error));
+    const chunkHandler = (event, chunk) => onChunk(chunk);
+    const endHandler = (event) => {
+      ipcRenderer.removeListener('stream-chunk', chunkHandler);
+      ipcRenderer.removeListener('stream-end', endHandler);
+      ipcRenderer.removeListener('stream-error', errorHandler);
+      onEnd();
+    };
+    const errorHandler = (event, error) => {
+      ipcRenderer.removeListener('stream-chunk', chunkHandler);
+      ipcRenderer.removeListener('stream-end', endHandler);
+      ipcRenderer.removeListener('stream-error', errorHandler);
+      onError(error);
+    };
+    
+    ipcRenderer.on('stream-chunk', chunkHandler);
+    ipcRenderer.on('stream-end', endHandler);
+    ipcRenderer.on('stream-error', errorHandler);
     ipcRenderer.send('proxy_request_streaming', payload);
   },
📝 Committable suggestion


Suggested change
proxyRequestStreaming: (payload, onChunk, onEnd, onError) => {
  ipcRenderer.on('stream-chunk', (event, chunk) => onChunk(chunk));
  ipcRenderer.on('stream-end', (event) => onEnd());
  ipcRenderer.on('stream-error', (event, error) => onError(error));
  ipcRenderer.send('proxy_request_streaming', payload);
},
proxyRequestStreaming: (payload, onChunk, onEnd, onError) => {
  const chunkHandler = (event, chunk) => onChunk(chunk);
  const endHandler = (event) => {
    ipcRenderer.removeListener('stream-chunk', chunkHandler);
    ipcRenderer.removeListener('stream-end', endHandler);
    ipcRenderer.removeListener('stream-error', errorHandler);
    onEnd();
  };
  const errorHandler = (event, error) => {
    ipcRenderer.removeListener('stream-chunk', chunkHandler);
    ipcRenderer.removeListener('stream-end', endHandler);
    ipcRenderer.removeListener('stream-error', errorHandler);
    onError(error);
  };
  ipcRenderer.on('stream-chunk', chunkHandler);
  ipcRenderer.on('stream-end', endHandler);
  ipcRenderer.on('stream-error', errorHandler);
  ipcRenderer.send('proxy_request_streaming', payload);
},
🤖 Prompt for AI Agents
In electron/preload.ts around lines 5 to 10, the proxyRequestStreaming
implementation attaches ipcRenderer listeners for 'stream-chunk', 'stream-end',
and 'stream-error' but never removes them, causing a memory leak; fix by
creating named handler functions (or use once where appropriate), register them,
and ensure you remove all three listeners after stream completion or error (call
ipcRenderer.removeListener for each handler in the onEnd and onError flows) and
also remove them if a cancellation path exists so listeners do not accumulate
across requests.

  startSidecar: (payload, onOutput) => {
    ipcRenderer.on('sidecar-output', (event, output) => onOutput(output));
    return ipcRenderer.invoke('start_sidecar', payload);
  },
Comment on lines +11 to +14

⚠️ Potential issue

Memory leak: sidecar-output listener is never removed.

Similar to the streaming function, the startSidecar function registers an event listener (line 12) that is never removed, causing a memory leak.

Return a cleanup function or handle to allow the caller to stop listening:

   startSidecar: (payload, onOutput) => {
-    ipcRenderer.on('sidecar-output', (event, output) => onOutput(output));
-    return ipcRenderer.invoke('start_sidecar', payload);
+    const outputHandler = (event, output) => onOutput(output);
+    ipcRenderer.on('sidecar-output', outputHandler);
+    return ipcRenderer.invoke('start_sidecar', payload).then((result) => {
+      // Return result with a cleanup function
+      return {
+        ...result,
+        cleanup: () => ipcRenderer.removeListener('sidecar-output', outputHandler)
+      };
+    });
   },

Alternatively, remove the listener when the sidecar is stopped:

   stopSidecar: () => ipcRenderer.invoke('stop_sidecar'),
+  stopSidecar: () => {
+    ipcRenderer.removeAllListeners('sidecar-output');
+    return ipcRenderer.invoke('stop_sidecar');
+  },

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In electron/preload.ts around lines 11 to 14, the
ipcRenderer.on('sidecar-output', ...) listener added in startSidecar is never
removed which leaks memory; change startSidecar to register a named listener
(const handler = (event, output) => onOutput(output)) and either return a
cleanup function/handle that calls ipcRenderer.removeListener('sidecar-output',
handler) (or ipcRenderer.off) so callers can stop listening, or ensure the
listener is removed internally when the sidecar stops by removing the same named
handler after ipcRenderer.invoke('start_sidecar', payload) resolves or when a
corresponding stop event occurs. Ensure the listener removal uses the exact same
handler reference so it actually unregisters.

  stopSidecar: () => ipcRenderer.invoke('stop_sidecar'),
});