Conversation

@AlphaEcho11 commented Aug 13, 2025

This was a significant architectural change where I moved core functionality to the Rust backend of the Tauri application.

Here are the key changes:

  • Process Management: The Rust backend now launches and manages an external text-generation-webui process. You can configure the path to this executable via a new settings.json file. The backend ensures the process is terminated gracefully when the application exits.
  • API Proxy: I added a new proxy_request command in Rust. This command forwards API calls from the frontend to the text-generation-webui service, centralizing communication in the backend and hardening the application (a minimal sketch follows this list).
  • Frontend Refactoring: The KoboldAI chat feature in the frontend has been updated to use the new Rust proxy instead of making direct fetch calls.
  • Configuration: A settings.json file has been added to the root of the project to allow you to specify the path to your text-generation-webui executable.
  • Documentation: I also added a DEPLOYMENT.md guide to explain how you can set up and run the new version of the application.
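
A minimal sketch of the proxy command shape described above, modeled on the `proxy_request_blocking` implementation reviewed later in this thread; the Tauri v1-style command attribute and the hardcoded local port 5000 are assumptions carried over from that code:

```rust
#[derive(serde::Deserialize)]
struct ProxyRequestPayload {
    path: String,
    body: serde_json::Value,
}

// Forward a frontend request to the local text-generation-webui service so
// the webview never talks to it directly.
#[tauri::command]
async fn proxy_request(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
    let url = format!("http://127.0.0.1:5000/{}", payload.path);
    let res = reqwest::Client::new()
        .post(&url)
        .json(&payload.body)
        .send()
        .await
        .map_err(|e| e.to_string())?;
    res.json::<serde_json::Value>().await.map_err(|e| e.to_string())
}
```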

Summary by CodeRabbit

  • New Features

    • Local desktop proxy for streamed chat responses and a blocking fallback for vision replies.
    • Quit confirmation flow for safer app exit.
  • Documentation

    • Added Deployment Guide for running the Rust-powered backend locally (setup, config, build, run).
  • Chores

    • Tauri/tooling upgraded and packaging/config keys reorganized; included app resources and tightened shell permissions.
  • Bug Fixes

    • Improved startup configuration validation, sidecar lifecycle management, path validation, and user-facing error dialogs.

vercel bot commented Aug 13, 2025

The latest updates on your projects:

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| amica | Failed | | | Aug 14, 2025 7:43am |

vercel bot commented Aug 13, 2025

@google-labs-jules[bot] is attempting to deploy a commit to the heyamica Team on Vercel.

A member of the Team first needs to authorize it.

coderabbitai bot commented Aug 13, 2025

Walkthrough

Adds a local Deployment guide and default settings; upgrades Tauri to 2.0.0-beta and adds reqwest/futures-util; rewrites the Tauri backend to manage a sidecar, validate/sanitize proxy paths, and add streaming/blocking proxy commands; updates tauri config and resources; frontend adds quit confirmation and proxies OpenAI streaming/blocking through Tauri.

Changes

| Cohort / File(s) | Summary of Changes |
| --- | --- |
| **Documentation**<br>`DEPLOYMENT.md` | New deployment guide for running Amica locally: prerequisites, install/config/build/run steps, OS config locations, `settings.json` format, artifacts, and runtime notes. |
| **Tauri Cargo / Dependencies**<br>`src-tauri/Cargo.toml` | Upgraded tauri to 2.0.0-beta.21 (removed `shell-open` feature); added `reqwest = { version = "0.12.5", default-features = false, features = ["json","rustls-tls"] }` and `futures-util = "0.3.30"`. |
| **Tauri Config & Resources**<br>`src-tauri/tauri.conf.json`, `src-tauri/resources/settings.json` | Restructured tauri config (keys renamed/moved: `devPath`→`devUrl`, `distDir`→`frontendDist`, `tauri`/`tauri.bundle` → top-level `bundle`; added root-level `version`/`identifier`); added a `resources` entry and `src-tauri/resources/settings.json` with `{"text_generation_webui_path": ""}`. |
| **Tauri Backend Core**<br>`src-tauri/src/main.rs` | Major rewrite: adds `AppState` (sidecar child + termination flag), a `Settings` struct with settings loaded from the OS config dir or bundled resource, executable path validation/sanitization, allowlisted proxy paths, spawning/forwarding of sidecar stdout, graceful shutdown, and new commands `proxy_request_streaming`, `proxy_request_blocking`, and `quit_app`; updates `close_splashscreen` to a tauri command. Implements streaming primitives that POST to localhost and emit `stream-chunk`/`stream-end`/`stream-error` events. |
| **Frontend App Lifecycle**<br>`src/pages/_app.tsx` | Adds a client-side listener for `confirm-close` that prompts the user and invokes `quit_app` on confirmation; the listener is cleaned up on unmount. |
| **Frontend Chat Proxy Refactor**<br>`src/features/chat/openAiChat.ts` | Replaces direct frontend OpenAI HTTP streaming with Tauri-backed proxy streaming via `proxy_request_streaming` and event listeners; blocking requests use `proxy_request_blocking`. Adds stream cleanup, API key validation, and updated types/signatures (vision chat now returns `Promise<string>`). |
| **Node Package Config**<br>`package.json` | Upgraded Tauri JS packages to pinned 2.0.0-beta.x devDependencies: `@tauri-apps/api` and `@tauri-apps/cli`. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant UI as Frontend (openAiChat)
  participant Tauri as Tauri Backend
  participant API as Local API (127.0.0.1:5000)

  UI->>Tauri: invoke proxy_request_streaming(path: v1/chat/completions, body, auth?)
  Tauri->>Tauri: validate & sanitize path (allowlist)
  Tauri->>API: POST /v1/chat/completions (stream: true)
  API-->>Tauri: SSE / streamed chunks
  loop For each chunk
    Tauri-->>UI: emit "stream-chunk" { chunk }
  end
  Tauri-->>UI: emit "stream-end"
  Note over UI: UI aggregates chunks into ReadableStream
```
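
A sketch of the backend half of this flow, assuming reqwest's `stream` feature (for `bytes_stream`) and the Tauri v1 `emit_all` API used elsewhere in this PR; error handling is trimmed to the essentials:

```rust
use futures_util::StreamExt;
use tauri::Manager;

#[derive(Clone, serde::Serialize)]
struct StreamChunkPayload {
    chunk: String,
}

// POST to the local service, then forward each response-body chunk to the
// webview as a "stream-chunk" event, closing with "stream-end".
async fn stream_to_frontend(
    handle: tauri::AppHandle,
    url: String,
    body: serde_json::Value,
) -> Result<(), String> {
    let res = reqwest::Client::new()
        .post(&url)
        .json(&body)
        .send()
        .await
        .map_err(|e| e.to_string())?;
    let mut stream = res.bytes_stream();
    while let Some(item) = stream.next().await {
        match item {
            Ok(chunk) => {
                let s = String::from_utf8_lossy(&chunk).to_string();
                let _ = handle.emit_all("stream-chunk", StreamChunkPayload { chunk: s });
            }
            Err(e) => {
                let _ = handle.emit_all("stream-error", format!("Error reading stream: {}", e));
                break;
            }
        }
    }
    let _ = handle.emit_all("stream-end", ());
    Ok(())
}
```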
```mermaid
sequenceDiagram
  participant App as Frontend (_app.tsx)
  participant Tauri as Tauri Backend
  participant Sidecar as Sidecar Process

  Tauri->>Tauri: On setup: load settings.json, validate path
  Tauri->>Sidecar: spawn sidecar (text-generation-webui)
  Sidecar-->>Tauri: stdout lines
  Tauri-->>App: emit "sidecar-output"

  App->>Tauri: user triggers close -> receives "confirm-close"
  App->>Tauri: invokes quit_app (on confirm)
  Tauri->>Sidecar: graceful shutdown
  Tauri->>Tauri: exit application
```
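
And a sketch of the sidecar half of this diagram, using Tauri v1's `Command` API as the reviewed code does; storing the child handle in `AppState` for later shutdown is elided here:

```rust
use tauri::api::process::{Command, CommandEvent};
use tauri::Manager;

#[derive(Clone, serde::Serialize)]
struct Payload {
    message: String,
}

// Spawn the external text-generation-webui process and forward its stdout
// lines to the frontend as "sidecar-output" events.
fn spawn_sidecar(handle: tauri::AppHandle, exe_path: String) -> Result<(), String> {
    let (mut rx, _child) = Command::new(exe_path).spawn().map_err(|e| e.to_string())?;
    tauri::async_runtime::spawn(async move {
        while let Some(event) = rx.recv().await {
            if let CommandEvent::Stdout(line) = event {
                let _ = handle.emit_all("sidecar-output", Payload { message: line });
            }
        }
    });
    Ok(())
}
```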

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~35 minutes

Poem

I nibble bytes beneath the moon,
A sidecar hums a steady tune.
Chunks hop out in tidy streams,
Settings snug in config dreams.
Build and run — the rabbit beams. 🥕✨


coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Nitpick comments (3)
src-tauri/tauri.conf.json (1)

20-28: Add IPv6 loopback to HTTP scope for completeness

Some environments resolve localhost to ::1 or services bind to IPv6 loopback. Consider including [::1] to avoid surprises.

Apply this diff to extend the scope:

       "http": {
         "all": false,
         "request": true,
         "scope": [
           "http://127.0.0.1:*/*",
           "https://127.0.0.1:*/*",
           "http://localhost:*/*",
-          "https://localhost:*/*"
+          "https://localhost:*/*",
+          "http://[::1]:*/*",
+          "https://[::1]:*/*"
         ]
       }
DEPLOYMENT.md (1)

22-39: Fix heading increment levels (MD001) for steps

Headings jump from H2 to H4. Make Steps 1–3 H3 to satisfy markdownlint and improve readability.

Apply this diff:

-#### Step 1: Clone the Amica Repository
+### Step 1: Clone the Amica Repository
@@
-#### Step 2: Install JavaScript Dependencies
+### Step 2: Install JavaScript Dependencies
@@
-#### Step 3: Configure the `text-generation-webui` Path
+### Step 3: Configure the `text-generation-webui` Path
src-tauri/src/main.rs (1)

22-24: Add documentation for the shared state structure

Consider adding documentation to explain the purpose and thread-safety considerations of AppState.

+/// Application state shared across the Tauri app
+/// Contains a reference to the spawned sidecar process for lifecycle management
 struct AppState {
     child_process: Mutex<Option<tauri::api::process::Child>>,
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ca2415c and b382be7.

📒 Files selected for processing (6)
  • DEPLOYMENT.md (1 hunks)
  • settings.json (1 hunks)
  • src-tauri/Cargo.toml (1 hunks)
  • src-tauri/src/main.rs (1 hunks)
  • src-tauri/tauri.conf.json (1 hunks)
  • src/features/chat/koboldAiChat.ts (2 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
DEPLOYMENT.md

22-22: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (2)
src-tauri/tauri.conf.json (1)

17-19: Least-privilege shell allowlist looks good

Explicitly disabling "all" while only allowing "open" is a safe default.

src/features/chat/koboldAiChat.ts (1)

4-4: LGTM: Using Tauri’s invoke to route through the backend proxy

This aligns the frontend with the new Rust proxy flow.

Comment on lines 81 to 82
let settings_path = "settings.json";
let settings_str = fs::read_to_string(settings_path)
coderabbitai bot commented Aug 13, 2025

⚠️ Potential issue

Settings file path should be relative to app directory

The settings.json path is hardcoded as a relative path, which could cause issues depending on where the application is launched from.

Use the app's resource directory for the settings file:

             // Read settings
-            let settings_path = "settings.json";
+            let settings_path = app.path_resolver()
+                .resolve_resource("settings.json")
+                .ok_or("Failed to resolve settings.json path")?;
             let settings_str = fs::read_to_string(settings_path)
🤖 Prompt for AI Agents
In src-tauri/src/main.rs around lines 81 to 82, the code uses a hardcoded
relative path "settings.json" which breaks if the app is launched from a
different CWD; replace this with the application's resource directory path
(e.g., obtain the app resource dir via tauri::api::path::resource_dir() or the
app handle/path resolver, join "settings.json" to that PathBuf, and read that
full path with fs::read_to_string); also handle the Option/Result (return an
error or fallback) so the app fails gracefully if the resource dir or file is
missing.

@AlphaEcho11 (Author) replied

I've taken a close look at the current implementation in main.rs to ensure it follows this best practice. The code currently uses handle.path_resolver().resolve_resource("resources/settings.json").

Good news - this is Tauri's recommended API for exactly this purpose! It correctly resolves the path to the bundled resource file regardless of where the application is launched from. So, it seems the code is already robust against the CWD issue you've described.

coderabbitai bot replied

@AlphaEcho11 You're absolutely right! I apologize for the confusion. Looking at the current implementation in the file, I can see that the code already properly handles the settings file path resolution:

  1. It first attempts to load from the user's config directory: config_dir.join("settings.json")
  2. If that doesn't exist, it falls back to the bundled resource: handle.path_resolver().resolve_resource("resources/settings.json")

This is indeed the correct and robust approach using Tauri's recommended APIs. The diff I commented on appears to have shown an intermediate state during development. The current implementation properly handles both user configuration overrides and bundled defaults, with appropriate error handling for missing files or directories.

Thank you for taking the time to clarify this - the current code is well-implemented and follows Tauri best practices perfectly!
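
For reference, a minimal sketch of that two-step lookup using Tauri v1's path APIs; error handling is collapsed into `Option` for brevity:

```rust
use std::path::PathBuf;

// Prefer a user-provided settings.json in the OS config directory, falling
// back to the default bundled under resources/.
fn resolve_settings_path(handle: &tauri::AppHandle) -> Option<PathBuf> {
    if let Some(config_dir) = handle.path_resolver().app_config_dir() {
        let user_path = config_dir.join("settings.json");
        if user_path.is_file() {
            return Some(user_path);
        }
    }
    handle.path_resolver().resolve_resource("resources/settings.json")
}
```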

Update DEPLOYMENT.md to clarify that settings.json is always read from the current working directory (CWD).

This note explains that this behavior applies to both development and packaged runs and that there is no OS-specific config path, preventing potential user confusion.
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
DEPLOYMENT.md (5)

22-39: Fix heading level increments (MD001): use h3 for the “Step” subsections

Current “Step” headings jump from h2 (##) to h4 (####). Switch them to h3 (###) to satisfy markdownlint and keep a proper outline.

-#### Step 1: Clone the Amica Repository
+### Step 1: Clone the Amica Repository
@@
-#### Step 2: Install JavaScript Dependencies
+### Step 2: Install JavaScript Dependencies
@@
-#### Step 3: Configure the `text-generation-webui` Path
+### Step 3: Configure the `text-generation-webui` Path

85-88: Explain why streaming must be disabled (ties to current proxy limitations)

Make the reason explicit so users understand the constraint and don’t assume a misconfiguration.

-*   **Streaming/Extra Option:** If you see an option for streaming, make sure it is **disabled**.
+*   **Streaming/Extra Option:** If you see an option for streaming, make sure it is **disabled** (the current Rust `proxy_request` does not support streaming responses).

11-11: Avoid “pre-compiled” wording for text-generation-webui

text-generation-webui is typically a Python application started via scripts; “pre-compiled” can mislead users. Suggest phrasing as “installed/working.”

-*   **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui). Make sure you can run it successfully on its own before integrating it with Amica.
+*   **`text-generation-webui`:** You must have a working installation of `text-generation-webui`. You can find setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui). Make sure you can run it successfully on its own (e.g., via its provided start script) before integrating it with Amica.

53-59: Add a Windows path tip to avoid escaping issues

This helps users whose paths contain spaces or special characters.

         *(Note the double backslashes `\\`)*
+        JSON only permits double-quoted strings, so keep the double backslashes, or use forward slashes, which Windows also accepts:
+        {
+          "text_generation_webui_path": "C:/Users/Your User/Desktop/text-generation-webui/start.bat"
+        }

89-90: Add a short Troubleshooting section

Common pitfalls (permissions, CWD, missing libs) will save users time.

 That's it! Your self-contained, Rust-powered Amica application is now ready to use.
+
+## 5. Troubleshooting
+
+- The app can’t find `settings.json`:
+  - Ensure you launch Amica from the directory that contains `settings.json` (Amica reads it from the current working directory).
+- The sidecar (`text-generation-webui`) fails to start:
+  - Verify the path points to the correct start script (`.bat` on Windows, `.sh` on Linux/macOS).
+  - On Linux/macOS, make sure the script is executable: `chmod +x start.sh`.
+- Linux build issues:
+  - Double-check that all required system packages are installed (see the Linux dependencies section).
+- Networking issues:
+  - If you use a firewall/AV, allow local loopback connections for Amica and the sidecar process.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b382be7 and 4da3fe0.

📒 Files selected for processing (1)
  • DEPLOYMENT.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
DEPLOYMENT.md

22-22: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (2)
DEPLOYMENT.md (2)

79-80: Correct Tauri build artifact paths (bundle directory)

Tauri bundles into the bundle/ subdirectory (e.g., bundle/macos, bundle/dmg, bundle/appimage, bundle/msi/nsis), not directly under target/release. Adjust the instructions to reduce confusion after build.

-Once the build is complete, you will find the final application inside the `src-tauri/target/release/` directory. It will be a `.exe` file on Windows, a `.AppImage` on Linux, or a `.app` file inside a `.dmg` on macOS.
+Once the build is complete, artifacts are placed under `src-tauri/target/release/bundle/`:
+- Windows: `bundle/nsis` (installer `.exe`) or `bundle/msi` (MSI), depending on your config
+- Linux: `bundle/appimage` (`.AppImage`) and/or other formats if enabled
+- macOS: `bundle/macos` (`.app`) and `bundle/dmg` (`.dmg`)

If the project intentionally overrides output paths, please confirm and keep the docs aligned with your tauri.conf.json. I can update this section after you confirm the bundler config.


67-67: Nice: settings.json CWD lookup is now explicitly documented

This resolves prior ambiguity for packaged runs and dev runs alike. Good addition.

This commit refactors the application's startup and configuration logic to be more robust and easier for you to use.

Key changes include:
- **Configuration Loading:** The application now loads `settings.json` from the standard OS-specific application config directory, falling back to a bundled default if the file is not present. This replaces the previous CWD-based approach.
- **Path Validation:** Added validation to ensure that the `text_generation_webui_path` provided in the settings exists and is a valid file before attempting to spawn the process (see the sketch after this list).
- **Graceful Error Handling:** Replaced all `panic!` and `.expect()` calls in the startup sequence with native error dialogs. This prevents the application from crashing and provides you with clear, easy-to-understand feedback in case of configuration errors.
- **Documentation:** The `DEPLOYMENT.md` guide has been updated to reflect the new configuration behavior, instructing you on where to place your `settings.json` file.
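
A minimal sketch of the pre-spawn validation described above; the exact messages are illustrative, and the caller is expected to surface the error in a native dialog (the PR uses a `show_error_and_exit` helper for that):

```rust
use std::path::Path;

// Validate the configured executable path before attempting to spawn it,
// returning a human-readable error instead of panicking.
fn validate_webui_path(path: &str) -> Result<(), String> {
    if path.trim().is_empty() {
        return Err("text_generation_webui_path is empty in settings.json.".into());
    }
    if !Path::new(path).is_file() {
        return Err(format!(
            "text_generation_webui_path '{}' does not point to an existing file.",
            path
        ));
    }
    Ok(())
}
```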
coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
DEPLOYMENT.md (1)

15-15: Use libwebkit2gtk-4.1-dev for Tauri v2 on Debian/Ubuntu

Your Linux prerequisites reference libwebkit2gtk-4.0-dev, which is for legacy Tauri v1. Tauri v2 requires libwebkit2gtk-4.1-dev. Update the package list and add a brief note for v1 users.

-    sudo apt-get install -y libwebkit2gtk-4.0-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
+    sudo apt-get install -y libwebkit2gtk-4.1-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev

Optional note after the code block:

+    Note: If you're on Tauri v1, continue using libwebkit2gtk-4.0-dev. Tauri v2+ requires libwebkit2gtk-4.1-dev.
🧹 Nitpick comments (4)
DEPLOYMENT.md (4)

22-22: Fix heading levels (MD001) by using H3 under H2

Headings under “## 2. Installation and Configuration” should be “###”, not “####”, to satisfy markdownlint MD001 and maintain a consistent hierarchy.

-#### Step 1: Clone the Amica Repository
+### Step 1: Clone the Amica Repository
-#### Step 2: Install JavaScript Dependencies
+### Step 2: Install JavaScript Dependencies
-#### Step 3: Configure the `text-generation-webui` Path
+### Step 3: Configure the `text-generation-webui` Path

Also applies to: 31-31, 39-39


69-85: Add an execution-permissions note for Unix-like systems

Many users hit “Permission denied” when pointing to a shell script. Add a short note to make the script executable.

         {
           "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
         }
       ```
+
+      On Linux/macOS, make sure the target script is executable:
+      ```bash
+      chmod +x /home/youruser/text-generation-webui/start.sh
+      ```

86-86: Document expected host/port and keep the backend local for security

The Rust proxy forwards to a local service (commonly 127.0.0.1:5000). Add a note to keep text-generation-webui bound to localhost and clarify the expected port, to avoid accidentally exposing the model server on the network.

 If Amica ever has trouble starting, it will show a dialog box explaining the configuration error. This usually means there's a typo in your `settings.json` file or the path to the executable is incorrect.
+Security note: Ensure your `text-generation-webui` listens on localhost (127.0.0.1) and the expected port (commonly 5000) so it isn’t exposed on your network. If you use a different port, update your configuration accordingly.

95-96: Ensure Tauri CLI availability for the npm script

If @tauri-apps/cli isn’t a devDependency or globally installed, npm run tauri build will fail. Consider adding a note.

 ```bash
 npm run tauri build
 ```

+Note: Ensure the Tauri CLI is available (either installed as a dev dependency or globally). If missing, install it with `npm i -D @tauri-apps/cli`.


📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4da3fe0a0c9af1b2e8c9dd304d6b8e612670cb39 and 0cdd5a61ac3c6b6978fa70a5b06389bdfdcb8fe5.

📒 Files selected for processing (4)
  • DEPLOYMENT.md (1 hunks)
  • src-tauri/resources/settings.json (1 hunks)
  • src-tauri/src/main.rs (1 hunks)
  • src-tauri/tauri.conf.json (2 hunks)

✅ Files skipped from review due to trivial changes (1)
  • src-tauri/resources/settings.json

🚧 Files skipped from review as they are similar to previous changes (2)
  • src-tauri/tauri.conf.json
  • src-tauri/src/main.rs

🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
DEPLOYMENT.md

22-22: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

These changes resolve some previously identified issues, including a functional regression and dependency optimization.

Key changes include:
- **Streaming Proxy:** The Rust backend now properly handles streaming API responses. A new `proxy_request_streaming` command uses Tauri events to send data chunks to the frontend in real-time, restoring the original streaming functionality.
- **Frontend Streaming:** The `getExtra` function in `koboldAiChat.ts` has been refactored to use the new event-based streaming mechanism, correctly reconstructing the `ReadableStream`.
- **Dependency Optimization:** Removed the redundant `tokio` dependency from `Cargo.toml` as Tauri provides its own async runtime. Configured `reqwest` to use `rustls-tls` instead of the native TLS backend, improving portability and reducing binary size.
- **Configuration Cleanup:** Removed the unnecessary `http` allowlist from `tauri.conf.json`.
coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (6)
src/features/chat/koboldAiChat.ts (2)

81-86: Guard against malformed responses before accessing results.

Avoid reading length on undefined and ensure expected shape.

-  if (json.results.length === 0) {
-    throw new Error(`KoboldAi result length 0`);
-  }
+  if (!json?.results || !Array.isArray(json.results) || json.results.length === 0) {
+    throw new Error("KoboldAI result is empty or malformed");
+  }
 
-  const text = json.results.map((row: {text: string}) => row.text).join('');
+  const text = json.results.map((row) => row?.text ?? "").join("");

74-79: Type the invoke response and add error handling.

Use invoke generics and handle backend errors cleanly.

-  const json: any = await invoke("proxy_request_blocking", {
-    payload: {
-      path: "api/v1/generate",
-      body: body,
-    },
-  });
+  let json: KoboldResponse;
+  try {
+    json = await invoke<KoboldResponse>("proxy_request_blocking", {
+      payload: {
+        path: "api/v1/generate",
+        body,
+      },
+    });
+  } catch (e: unknown) {
+    const msg = e instanceof Error ? e.message : String(e);
+    throw new Error(`KoboldAI proxy_request_blocking failed: ${msg}`);
+  }

Add these near the top of the file:

type KoboldResult = { text: string };
type KoboldResponse = { results: KoboldResult[] };
src-tauri/src/main.rs (4)

60-67: Validate and sanitize proxy path to prevent SSRF and path traversal.

payload.path is concatenated directly into the URL. Enforce an allowlist (and basic sanitization) in both streaming and blocking proxies.

Apply these diffs:

 async fn proxy_request_streaming(
     handle: tauri::AppHandle,
     payload: ProxyRequestPayload,
 ) -> Result<(), String> {
-    let client = reqwest::Client::new();
-    let url = format!("http://127.0.0.1:5000/{}", payload.path);
+    // Basic sanitization and allowlist
+    if payload.path.contains("://") || payload.path.contains("..") || payload.path.contains('\0') {
+        return Err(format!("Forbidden path: '{}'", payload.path));
+    }
+    const ALLOWED_PATHS: &[&str] = &[
+        "api/extra/generate/stream",
+        "api/v1/generate",
+        "api/v1/model",
+        "api/v1/config",
+    ];
+    if !ALLOWED_PATHS.iter().any(|p| payload.path == *p || payload.path.starts_with(&format!("{}/", p))) {
+        return Err(format!("Forbidden path: '{}'", payload.path));
+    }
+    let client = reqwest::Client::new();
+    let url = format!("http://127.0.0.1:5000/{}", payload.path);
 async fn proxy_request_blocking(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
-    let client = reqwest::Client::new();
-    // This port should be configurable in the future.
-    let url = format!("http://127.0.0.1:5000/{}", payload.path);
+    if payload.path.contains("://") || payload.path.contains("..") || payload.path.contains('\0') {
+        return Err(format!("Forbidden path: '{}'", payload.path));
+    }
+    const ALLOWED_PATHS: &[&str] = &[
+        "api/v1/generate",
+        "api/v1/model",
+        "api/v1/config",
+    ];
+    if !ALLOWED_PATHS.iter().any(|p| payload.path == *p || payload.path.starts_with(&format!("{}/", p))) {
+        return Err(format!("Forbidden path: '{}'", payload.path));
+    }
+    let client = reqwest::Client::new();
+    // This port should be configurable in the future.
+    let url = format!("http://127.0.0.1:5000/{}", payload.path);

Also applies to: 106-111


201-211: Don’t call process exit from a spawned task; the comment is incorrect and this will terminate the app immediately.

show_error_and_exit invokes std::process::exit(1), which terminates the process from any thread. Instead, emit a frontend-visible error and return, allowing the app to continue or shut down gracefully via the main thread.

-                let (mut rx, child) = match Command::new(&settings.text_generation_webui_path).spawn() {
+                let (mut rx, child) = match Command::new(&settings.text_generation_webui_path).spawn() {
                     Ok(c) => c,
                     Err(e) => {
-                        let msg = format!("Failed to spawn the external process at '{}': {}", settings.text_generation_webui_path, e);
-                        show_error_and_exit(&handle, "Process Error", &msg);
-                        // This exit is in a spawned thread, so it won't kill the main app directly
-                        // The main app will continue, but the child process won't be running.
-                        // The dialog is the most important part.
-                        return;
+                        let msg = format!("Failed to start text generation service: {}", e);
+                        let _ = handle.emit_all("sidecar-error", Payload { message: msg });
+                        return;
                     }
                 };

Frontend can listen for "sidecar-error" to notify the user.


239-247: Harden sidecar termination to avoid double-kill and races; deduplicate logic.

Multiple paths kill the child with expect(). If the process already exited, this can panic. Introduce a synchronized, idempotent termination helper and reuse it.

Apply these replacements:

-                    let app_handle = app.app_handle();
-                    let app_state = app_handle.state::<AppState>();
-                    if let Some(child) = app_state.child_process.lock().unwrap().take() {
-                        child.kill().expect("Failed to kill sidecar");
-                    }
-                    app_handle.exit(0);
+                    let app_handle = app.app_handle();
+                    let app_state = app_handle.state::<AppState>();
+                    app_state.terminate_child_process();
+                    app_handle.exit(0);
-                let app_handle = event.window().app_handle();
-                let app_state = app_handle.state::<AppState>();
-                if let Some(child) = app_state.child_process.lock().unwrap().take() {
-                    child.kill().expect("Failed to kill sidecar");
-                }
-                app_handle.exit(0);
+                let app_handle = event.window().app_handle();
+                let app_state = app_handle.state::<AppState>();
+                app_state.terminate_child_process();
+                app_handle.exit(0);
-                let app_state = app_handle.state::<AppState>();
-                if let Some(child) = app_state.child_process.lock().unwrap().take() {
-                    child.kill().expect("Failed to kill sidecar");
-                }
+                let app_state = app_handle.state::<AppState>();
+                app_state.terminate_child_process();

Add this helper near AppState:

impl AppState {
    fn terminate_child_process(&self) {
        if let Ok(mut guard) = self.child_process.lock() {
            if let Some(mut child) = guard.take() {
                match child.try_wait() {
                    Ok(Some(_status)) => {
                        // Already exited
                    }
                    Ok(None) => {
                        let _ = child.kill(); // ignore error if already gone
                    }
                    Err(_e) => {
                        let _ = child.kill();
                    }
                }
            }
        }
    }
}

Also applies to: 269-276, 286-291


109-110: Make the proxy base URL/port configurable (not hardcoded 5000).

Follow the pattern used for settings.json: add a configurable base_url or port to Settings and use it to build the URL in both proxy functions. This enables different deployments and reduces surprises.

I can provide a patch to extend Settings and wire it through AppState if you’d like.

🧹 Nitpick comments (4)
src/features/chat/koboldAiChat.ts (2)

45-57: Add request correlation to avoid cross-talk between concurrent streams.

Events "stream-chunk"/"stream-error"/"stream-end" are global. Two overlapping requests will interleave. Pass a requestId in payload and namespace event names (or include requestId in payload and filter client-side). Requires backend support.

Example approach:

  • Generate const requestId = crypto.randomUUID()
  • Listen on stream-chunk:<requestId>, stream-error:<requestId>, stream-end:<requestId>
  • Include requestId in invoke payload; backend emits with that suffix

If you want, I can provide a patch for both TS and Rust sides.


51-51: Filter empty stop_sequence entries.

An empty koboldai_stop_sequence results in [""] which can cause premature termination. Filter falsy entries.

-            stop_sequence: [`${config("name")}:`, ...`${config("koboldai_stop_sequence")}`.split("||")]
+            stop_sequence: [`${config("name")}:`, ...`${config("koboldai_stop_sequence")}`.split("||")].filter(s => s && s.trim().length > 0)
-  const stop_sequence: string[] = [`${config("name")}:`, ...`${config("koboldai_stop_sequence")}`.split("||")];
+  const stop_sequence: string[] = [`${config("name")}:`, ...`${config("koboldai_stop_sequence")}`.split("||")].filter(s => s && s.trim().length > 0);

Also applies to: 67-67

src-tauri/src/main.rs (2)

65-73: Add HTTP timeouts to avoid hanging requests.

Default reqwest client has no request timeout. Configure reasonable connect/read timeouts.

-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .connect_timeout(std::time::Duration::from_secs(5))
+        .timeout(std::time::Duration::from_secs(60))
+        .build()
+        .map_err(|e| e.to_string())?;

Apply in both functions.

Also applies to: 112-118


91-96: Avoid unwrap() on event emission to prevent panics.

Emitting events can fail (e.g., no listeners, closed windows). Handle errors gracefully.

-                    handle.emit_all("stream-chunk", StreamChunkPayload { chunk: s }).unwrap();
+                    if let Err(e) = handle.emit_all("stream-chunk", StreamChunkPayload { chunk: s }) {
+                        eprintln!("emit stream-chunk failed: {}", e);
+                        // optionally break if emitting is essential
+                    }
-                    handle.emit_all("stream-error", StreamErrorPayload { error: error_message }).unwrap();
+                    if let Err(e) = handle.emit_all("stream-error", StreamErrorPayload { error: error_message }) {
+                        eprintln!("emit stream-error failed: {}", e);
+                    }
-        handle.emit_all("stream-end", ()).unwrap();
+        if let Err(e) = handle.emit_all("stream-end", ()) {
+            eprintln!("emit stream-end failed: {}", e);
+        }

Also applies to: 100-101

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0cdd5a6 and 78a0911.

📒 Files selected for processing (4)
  • src-tauri/Cargo.toml (1 hunks)
  • src-tauri/src/main.rs (1 hunks)
  • src-tauri/tauri.conf.json (2 hunks)
  • src/features/chat/koboldAiChat.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src-tauri/Cargo.toml
  • src-tauri/tauri.conf.json
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/features/chat/koboldAiChat.ts (3)
src/features/chat/messages.ts (1)
  • Message (4-7)
src/utils/buildPrompt.ts (1)
  • buildPrompt (4-21)
src/utils/config.ts (1)
  • config (139-154)
🔇 Additional comments (1)
src-tauri/src/main.rs (1)

145-180: Robust settings discovery with config-dir and resource fallback looks good.

Good job resolving the settings.json from the app’s config directory with a bundled resource fallback and user-friendly error dialogs. This significantly improves deployability.

AlphaEcho11 and others added 2 commits August 12, 2025 23:35
Windows, macOS, and Linux build tools/dependencies reflected in DEPLOYMENT.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This commit hardens the Rust proxy commands to prevent a potential Server-Side Request Forgery (SSRF) vulnerability. It also includes the final fixes from previous code reviews, including dependency optimization and restoring streaming functionality.

Key changes:
- **Security (SSRF Fix):** A new `validate_and_sanitize_path` function has been added. It is called by both proxy commands to ensure that the API path provided by the frontend is on a strict allowlist and does not contain malicious patterns, preventing the proxy from being used to make unintended requests (see the sketch after this list).
- **Streaming Proxy:** The Rust backend now properly handles streaming API responses. A new `proxy_request_streaming` command uses Tauri events to send data chunks to the frontend in real-time.
- **Frontend Streaming:** The `getExtra` function in `koboldAiChat.ts` has been refactored to use the new event-based streaming mechanism, restoring the original functionality.
- **Dependency Optimization:** Removed the redundant `tokio` dependency and configured `reqwest` to use `rustls-tls` for a smaller, more portable binary.
- **Configuration Cleanup:** Removed the unnecessary `http` allowlist from `tauri.conf.json`.
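
A minimal sketch of the `validate_and_sanitize_path` idea described above; the allowlist holds the two endpoints this PR actually proxies (per the later allowlist verification comment), and the rejected patterns follow the review diffs:

```rust
// Reject anything that could redirect the proxy elsewhere, then require an
// exact match against the known API endpoints.
fn validate_and_sanitize_path(path: &str) -> Result<String, String> {
    if path.contains("://") || path.contains("..") || path.contains('\0') {
        return Err(format!("Forbidden path: '{}'", path));
    }
    const ALLOWED_PATHS: &[&str] = &["api/v1/generate", "api/extra/generate/stream"];
    if ALLOWED_PATHS.contains(&path) {
        Ok(path.to_string())
    } else {
        Err(format!("Forbidden path: '{}'", path))
    }
}
```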
coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (3)
src-tauri/src/main.rs (3)

131-157: Proxy port should be configurable as previously discussed.

The port 5000 is still hardcoded despite previous discussions about making it configurable through the settings file. This limits deployment flexibility.


269-271: Race condition still exists in process cleanup.

The same race condition issue identified in previous reviews remains unaddressed. Multiple cleanup paths can still attempt to kill the same process simultaneously.

Also applies to: 298-300, 314-316


294-302: Window close event still prevents cancellation.

The window close handler immediately terminates the process without allowing users to cancel the operation, which could result in data loss.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 78a0911 and d4e59b6.

📒 Files selected for processing (1)
  • src-tauri/src/main.rs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src-tauri/src/main.rs (2)
scripts/generate_paths.js (2)
  • fs (1-1)
  • path (3-3)
src-tauri/build.rs (1)
  • main (1-3)
🔇 Additional comments (3)
src-tauri/src/main.rs (3)

36-56: LGTM! Security validation is well-implemented.

The path validation function properly addresses the SSRF concerns from previous reviews with comprehensive sanitization and allowlisting.


170-206: LGTM! Settings loading logic is robust.

The implementation properly handles fallback from config directory to bundled resources with appropriate error handling and user-friendly error dialogs.


304-308: LGTM! Command handlers are properly registered.

The invoke handler correctly exposes all the necessary commands for frontend integration.

This commit refactors the sidecar process shutdown sequence to be idempotent and free of race conditions.

Key changes:
- **Idempotent Shutdown:** An `Arc<AtomicBool>` guard has been added to the `AppState` to ensure that the termination logic for the sidecar process can only be executed once, even if multiple shutdown signals are received concurrently (e.g., from closing the window and quitting from the system tray at the same time).
- **Graceful Termination:** The shutdown logic now uses `try_wait()` to check if the process has already exited before attempting to kill it, preventing panics. It then uses `kill()` followed by `wait()` to ensure the process is terminated and fully reaped by the operating system.
- **Centralized Logic:** The new robust shutdown logic has been moved into a single `shutdown_sidecar` helper function, which is now called from all three application exit paths (system tray, window close, and app exit event). This removes duplicated code and ensures consistent behavior (a sketch follows).
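
A minimal sketch of that guard, shown with `std::process::Child` for self-containment (the PR holds Tauri's child handle in `AppState`); the atomic swap makes the kill/wait sequence run at most once:

```rust
use std::process::Child;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

struct AppState {
    child_process: Mutex<Option<Child>>,
    is_terminating: AtomicBool,
}

fn shutdown_sidecar(state: &AppState) {
    // swap returns the previous value, so only the first caller proceeds.
    if state.is_terminating.swap(true, Ordering::SeqCst) {
        return;
    }
    if let Ok(mut guard) = state.child_process.lock() {
        if let Some(mut child) = guard.take() {
            match child.try_wait() {
                Ok(Some(_status)) => {} // already exited; nothing to kill
                _ => {
                    let _ = child.kill(); // request termination
                    let _ = child.wait(); // reap so the OS releases the process entry
                }
            }
        }
    }
}
```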
coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
src-tauri/src/main.rs (4)

8-9: Remove duplicate import.


152-152: Handle event emission failures gracefully.

Also applies to: 156-156, 161-161


171-172: Make proxy port configurable.


282-284: Add error handling for stdout event emission.

🧹 Nitpick comments (2)
src-tauri/src/main.rs (2)

35-38: Consider non-blocking error dialog.

The current implementation uses a blocking dialog which may not work well in all environments. Consider using a non-blocking notification or logging mechanism for better user experience.

 fn show_error_and_exit(handle: &tauri::AppHandle, title: &str, message: &str) {
-    dialog::message(handle.get_window("main").as_ref(), title, message);
+    // Use non-blocking dialog or emit an error event to the frontend
+    let _ = handle.emit_all("fatal-error", serde_json::json!({
+        "title": title,
+        "message": message
+    }));
+    // Give time for the event to be processed
+    std::thread::sleep(std::time::Duration::from_millis(100));
     std::process::exit(1);
 }

327-332: Consider allowing close cancellation for better UX.

The current implementation immediately exits on any window close request. Consider implementing a confirmation dialog to allow users to cancel the close operation if they have unsaved work.

         .on_window_event(|event| {
-            if let tauri::WindowEvent::CloseRequested { .. } = event.event() {
-                shutdown_sidecar(&event.window().app_handle());
-                event.window().app_handle().exit(0);
+            if let tauri::WindowEvent::CloseRequested { api, .. } = event.event() {
+                // Prevent immediate close to allow confirmation
+                api.prevent_close();
+                
+                // Emit event to frontend for confirmation
+                let _ = event.window().emit("confirm-close", ());
+                
+                // The frontend should call a command to actually close after confirmation
             }
         })

You would also need to add a command that the frontend can call after user confirmation:

#[tauri::command]
async fn confirmed_close(app_handle: tauri::AppHandle) {
    shutdown_sidecar(&app_handle);
    app_handle.exit(0);
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4e59b6 and 0ac7c01.

📒 Files selected for processing (1)
  • src-tauri/src/main.rs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src-tauri/src/main.rs (2)
src/components/settings.tsx (1)
  • Settings (78-915)
src-tauri/build.rs (1)
  • main (1-3)
🔇 Additional comments (8)
src-tauri/src/main.rs (8)

19-22: LGTM! Clean settings structure.

The Settings struct is well-defined for configuration management with proper serde support.


30-33: Good use of atomic flag for race condition prevention.

The AppState struct properly combines mutex-protected child process with an atomic termination flag to prevent race conditions during shutdown.


40-70: Excellent process cleanup implementation.

The shutdown_sidecar function properly implements atomic shutdown protection and graceful process termination with comprehensive error handling.


72-92: Robust path validation with strong security measures.

The validation function effectively prevents SSRF attacks through multiple security layers: pattern rejection, path normalization, and strict allowlisting. The implementation is comprehensive and secure.


208-243: Excellent configuration loading with fallback strategy.

The implementation properly handles both user-specific and bundled default configurations with comprehensive error handling and clear error messages.


254-260: Good path validation before process spawn.

The validation ensures the executable exists before attempting to spawn, preventing runtime errors.


333-337: LGTM! Proper command registration.

The invoke handler correctly registers all three commands for frontend communication.


340-344: Consistent shutdown handling across exit scenarios.

The ExitRequested handler properly calls the same shutdown logic, ensuring consistent cleanup regardless of how the application exits.

This commit introduces a more user-friendly and robust application shutdown flow. It also includes all prior enhancements, such as the Rust-based API proxy, sidecar process management, and dependency optimizations.

Key changes in this final update:
- **Confirm on Quit:** When you attempt to close the window or quit from the system tray, the application now prevents an immediate exit. It sends an event to the frontend, which displays a confirmation dialog to you.
- **Graceful Shutdown Command:** A new `quit_app` command has been added to the Rust backend. This command is invoked by the frontend only after you confirm your intent to quit. It performs the full graceful shutdown of the sidecar process before exiting the application.
- **Race-Free Termination:** The shutdown logic is now idempotent and free of race conditions. It uses an `Arc<AtomicBool>` to ensure the termination sequence for the sidecar process runs only once, and it gracefully handles cases where the process may have already exited.
coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
src-tauri/src/main.rs (2)

125-170: Avoid panics in streaming event emission; handle emission failures

unwrap() on emit_all will panic if emission fails (e.g., window closed). Replace with fallible handling and break out gracefully.

-                    let s = String::from_utf8_lossy(&chunk).to_string();
-                    handle.emit_all("stream-chunk", StreamChunkPayload { chunk: s }).unwrap();
+                    let s = String::from_utf8_lossy(&chunk).to_string();
+                    if let Err(e) = handle.emit_all("stream-chunk", StreamChunkPayload { chunk: s }) {
+                        eprintln!("Failed to emit stream chunk: {}", e);
+                        break;
+                    }
                 }
                 Err(e) => {
                     let error_message = format!("Error reading stream: {}", e);
-                    handle.emit_all("stream-error", StreamErrorPayload { error: error_message }).unwrap();
+                    let _ = handle.emit_all("stream-error", StreamErrorPayload { error: error_message });
                     break;
                 }
             }
         }
-        handle.emit_all("stream-end", ()).unwrap();
+        let _ = handle.emit_all("stream-end", ());

283-291: Handle stderr and termination events; avoid panic on emit_all unwrap

Only stdout is forwarded and unwrap() can panic. Forward stderr and termination, and make emission fallible.

-                while let Some(event) = rx.recv().await {
-                    if let CommandEvent::Stdout(line) = event {
-                        // Here you can log the output from the sidecar
-                        // Or send it to the frontend
-                        handle
-                            .emit_all("sidecar-output", Payload { message: line.into() })
-                            .unwrap();
-                    }
-                }
+                while let Some(event) = rx.recv().await {
+                    match event {
+                        CommandEvent::Stdout(line) => {
+                            if let Err(e) = handle.emit_all("sidecar-output", Payload { message: line.into() }) {
+                                eprintln!("Failed to emit sidecar stdout: {}", e);
+                                break;
+                            }
+                        }
+                        CommandEvent::Stderr(line) => {
+                            let _ = handle.emit_all("sidecar-error", Payload { message: line.into() });
+                        }
+                        CommandEvent::Terminated(payload) => {
+                            let _ = handle.emit_all("sidecar-terminated", Payload {
+                                message: format!("Process terminated with code: {:?}", payload.code)
+                            });
+                            break;
+                        }
+                        _ => {}
+                    }
+                }
🧹 Nitpick comments (5)
src-tauri/src/main.rs (4)

269-279: Misleading comment: process::exit(1) terminates the entire app, even from async tasks

show_error_and_exit calls std::process::exit(1), which exits the process immediately. The comment implies otherwise and should be removed/updated to avoid confusion.

-                        // This exit is in a spawned thread, so it won't kill the main app directly
-                        // The main app will continue, but the child process won't be running.
-                        // The dialog is the most important part.

102-107: Avoid unwraps on window ops; handle splash/main window visibility errors

These unwraps can panic if the windows are missing or already closed. Handle errors gracefully.

-  if let Some(splashscreen) = window.get_window("splashscreen") {
-    splashscreen.close().unwrap();
-  }
+  if let Some(splashscreen) = window.get_window("splashscreen") {
+    let _ = splashscreen.close();
+  }
   // Show main window
-  window.get_window("main").unwrap().show().unwrap();
+  if let Some(main) = window.get_window("main") {
+    let _ = main.show();
+  }

306-330: Don’t unwrap on UI event emission or shell open; log or ignore failures

unwrap()/expect() here can crash the app due to transient UI or OS errors.

-                "quit" => {
-                    app.app_handle().emit_all("confirm-close", ()).unwrap();
-                }
+                "quit" => {
+                    let _ = app.app_handle().emit_all("confirm-close", ());
+                }
                 "checkforupdates" => {
                     tauri::api::shell::open(
                         &app.shell_scope(),
                         "https://github.com/semperai/amica/releases/latest",
                         None,
                     )
-                    .expect("failed to open url");
+                    .unwrap_or_else(|e| eprintln!("failed to open updates url: {}", e));
                 }
                 "help" => {
                     tauri::api::shell::open(
                         &app.shell_scope(),
                         "https://docs.heyamica.com",
                         None,
                     )
-                    .expect("failed to open url");
+                    .unwrap_or_else(|e| eprintln!("failed to open help url: {}", e));
                 }

331-336: Emit confirm-close without unwrap; let UI decide next steps

Avoid panicking on event emission when preventing close. This path should be robust.

-                api.prevent_close();
-                event.window().emit("confirm-close", ()).unwrap();
+                api.prevent_close();
+                let _ = event.window().emit("confirm-close", ());
src/pages/_app.tsx (1)

6-9: Guard Tauri imports in web/SSR and reduce bundle by dynamic-importing APIs

Top-level imports from @tauri-apps/api can fail or bloat the web build. Dynamically import inside useEffect and only when running under Tauri.

-import { useEffect } from 'react';
-import { listen } from '@tauri-apps/api/event';
-import { invoke } from '@tauri-apps/api/tauri';
+import { useEffect } from 'react';

Then, update the effect below to lazy-load and guard on Tauri runtime:

   useEffect(() => {
-    let unlisten: (() => void) | undefined;
-
-    const setupListener = async () => {
-      unlisten = await listen('confirm-close', () => {
-        if (window.confirm('Are you sure you want to quit?')) {
-          invoke('quit_app');
-        }
-      });
-    };
-
-    setupListener();
+    let unlisten: (() => void) | undefined;
+    const setupListener = async () => {
+      const isTauri = typeof window !== 'undefined' && '__TAURI_IPC__' in window;
+      if (!isTauri) return;
+      const [{ listen }, { invoke }] = await Promise.all([
+        import('@tauri-apps/api/event'),
+        import('@tauri-apps/api/tauri'),
+      ]);
+      unlisten = await listen('confirm-close', async () => {
+        if (window.confirm('Are you sure you want to quit?')) {
+          try {
+            await invoke('quit_app');
+          } catch (e) {
+            console.error('Failed to quit app:', e);
+          }
+        }
+      });
+    };
+    void setupListener();
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ac7c01 and 8938ea8.

📒 Files selected for processing (2)
  • src-tauri/src/main.rs (1 hunks)
  • src/pages/_app.tsx (1 hunks)
🔇 Additional comments (3)
src-tauri/src/main.rs (2)

39-69: Good: Idempotent, synchronized sidecar shutdown

The AtomicBool + Mutex pattern ensures only one shutdown path runs and avoids double-kill races. try_wait + kill + wait ordering is correct.
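For illustration, a minimal sketch of the pattern being praised, using `std::process::Child` in place of the Tauri sidecar child type (names and types here are simplified, not the PR's exact code):

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Mutex,
};

struct Sidecar {
    child: Mutex<Option<std::process::Child>>,
    is_terminating: AtomicBool,
}

impl Sidecar {
    fn shutdown(&self) {
        // swap() makes shutdown idempotent: only the first caller proceeds,
        // so concurrent shutdown paths cannot double-kill the child.
        if self.is_terminating.swap(true, Ordering::SeqCst) {
            return;
        }
        if let Some(mut child) = self.child.lock().unwrap().take() {
            match child.try_wait() {
                Ok(Some(_)) => {} // already exited, nothing to do
                _ => {
                    let _ = child.kill();
                    let _ = child.wait(); // reap so no zombie is left behind
                }
            }
        }
    }
}
```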


71-91: Allowlist verification complete – no changes needed
All proxy_request_blocking and proxy_request_streaming calls in the repo use only the two paths "api/v1/generate" and "api/extra/generate/stream", which exactly match the current allowlist in src-tauri/src/main.rs. No additional endpoints are proxied via Tauri, so the exact-match allowlist is sufficient.

src/pages/_app.tsx (1)

11-29: LGTM: Clean, user-friendly close-confirm flow

Listener lifecycle is handled and the UX hook aligns with the backend quit_app command.

Comment on lines 172 to 198
#[tauri::command]
async fn proxy_request_blocking(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
    let sanitized_path = validate_and_sanitize_path(&payload.path)?;
    let client = reqwest::Client::new();
    // This port should be configurable in the future.
    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);

    let res = client
        .post(&url)
        .json(&payload.body)
        .send()
        .await
        .map_err(|e| e.to_string())?;

    if res.status().is_success() {
        res.json::<serde_json::Value>()
            .await
            .map_err(|e| e.to_string())
    } else {
        let status = res.status();
        let text = res.text().await.map_err(|e| e.to_string())?;
        Err(format!(
            "API request to {} failed with status {}: {}",
            url, status, text
        ))
    }
}

🛠️ Refactor suggestion

Make proxy port configurable via settings and app state; remove hardcoded 5000

The port is hardcoded and contradicts the PR objective to centralize configuration. Wire it from settings so deployments can change it without code edits.

Apply these diffs:

  1. Extend Settings with a defaulted proxy_port and introduce a SettingsState:
 #[derive(serde::Deserialize, Clone)]
 struct Settings {
     text_generation_webui_path: String,
+    #[serde(default = "default_proxy_port")]
+    proxy_port: u16,
 }
+
+#[derive(Clone)]
+struct SettingsState(pub Settings);
+
+fn default_proxy_port() -> u16 {
+    5000
+}
  2. After parsing settings in setup, manage SettingsState so commands can access it:
             let settings: Settings = match serde_json::from_str(&settings_str) {
                 Ok(s) => s,
                 Err(e) => {
                     let msg = format!("Failed to parse settings.json: {}. Please check for syntax errors.", e);
                     show_error_and_exit(&handle, "Configuration Error", &msg);
                     return Ok(());
                 }
             };
+
+            // Expose settings to commands
+            handle.manage(SettingsState(settings.clone()));
  3. Use the configured port in streaming proxy:
 async fn proxy_request_streaming(
     handle: tauri::AppHandle,
     payload: ProxyRequestPayload,
 ) -> Result<(), String> {
     let sanitized_path = validate_and_sanitize_path(&payload.path)?;
     let client = reqwest::Client::new();
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);
  4. Use the configured port in blocking proxy (add handle param; Tauri will inject it automatically):
-#[tauri::command]
-async fn proxy_request_blocking(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
+#[tauri::command]
+async fn proxy_request_blocking(
+    handle: tauri::AppHandle,
+    payload: ProxyRequestPayload
+) -> Result<serde_json::Value, String> {
     let sanitized_path = validate_and_sanitize_path(&payload.path)?;
     let client = reqwest::Client::new();
-    // This port should be configurable in the future.
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);

Note: This change is backward-compatible with existing frontend calls; the extra AppHandle parameter is supplied by Tauri, not the caller.

Add a note to DEPLOYMENT.md to clarify that this project uses Tauri v1 and requires `libwebkit2gtk-4.0-dev`.

The note also informs users that Tauri v2+ projects require the newer `libwebkit2gtk-4.1-dev` package, providing helpful context for future development or other projects.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (4)
DEPLOYMENT.md (4)

12-18: Linux WebKitGTK version likely wrong for Tauri v2 — update package and note

The apt package line still uses libwebkit2gtk-4.0-dev and the note claims this project uses Tauri v1. If the app is on Tauri v2 (as common now), this will cause build failures. If it’s truly v1, keep 4.0; otherwise switch to 4.1 and adjust the note.

Proposed update (for Tauri v2+):

-    sudo apt-get install -y libwebkit2gtk-4.0-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
+    sudo apt-get install -y libwebkit2gtk-4.1-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
-    > **Note:** This project uses Tauri v1, which requires `libwebkit2gtk-4.0-dev`. If you are working on a project with Tauri v2 or newer, you will need to use `libwebkit2gtk-4.1-dev` instead.
+    > **Note:** Tauri v2+ requires `libwebkit2gtk-4.1-dev`. If you are on legacy Tauri v1, use `libwebkit2gtk-4.0-dev`.

If you confirm the project version, I can adjust the exact wording accordingly.


7-16: Add Windows/macOS build tool prerequisites to prevent build failures

Docs list Linux deps but omit OS-specific toolchains that are required for Tauri builds on Windows/macOS.

Apply after the Rust bullet:

 *   **Rust:** The new backend is written in Rust. The easiest way to install Rust is by using `rustup`. You can find instructions at the [official Rust website](https://www.rust-lang.org/tools/install).
+*   **(Windows Only) Build Tools:** Install the MSVC toolchain (e.g., Visual Studio 2022 “Desktop development with C++” or the standalone Build Tools) and the Windows 10/11 SDK. Ensure `rustup` uses the `msvc` toolchain.
+*   **(macOS Only) Build Tools:** Install Xcode Command Line Tools (`xcode-select --install`). Homebrew is recommended for native libraries as needed.
 *   **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui). Make sure you can run it successfully on its own before integrating it with Amica.

99-100: Correct Tauri bundle artifact locations

Artifacts are placed under src-tauri/target/release/bundle/{platform}, not directly under release/.

-Once the build is complete, you will find the final application inside the `src-tauri/target/release/` directory. It will be a `.exe` file on Windows, a `.AppImage` on Linux, or a `.app` file inside a `.dmg` on macOS.
+Once the build is complete, you will find the final application inside the `src-tauri/target/release/bundle/` directory:
+* Windows: `src-tauri/target/release/bundle/msi/` (MSI) and/or `.../bundle/windows/` (`.exe`)
+* Linux: `src-tauri/target/release/bundle/appimage/` (`.AppImage`) or `.../deb/` (`.deb`)
+* macOS: `src-tauri/target/release/bundle/macos/` (`.app`) and `.../bundle/dmg/` (`.dmg`)

55-57: Windows config path: remove the trailing “\config” folder

The backend writes/reads settings.json directly from app_config_dir. The extra “\config” in the path is misleading.

-    *   **Windows:** `%APPDATA%\\com.heyamica.dev\\config` (you can paste this into the Explorer address bar)
+    *   **Windows:** `%APPDATA%\\com.heyamica.dev` (you can paste this into the Explorer address bar)
🧹 Nitpick comments (1)
DEPLOYMENT.md (1)

80-86: Add a note about executable permissions on Linux/macOS

Many users will place a shell script path; without +x, the sidecar won’t launch.

         {
           "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
         }
       ```
+      Note: Ensure the script is executable (`chmod +x /home/youruser/text-generation-webui/start.sh`).
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8938ea8 and 8297e10.

📒 Files selected for processing (1)
  • DEPLOYMENT.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
DEPLOYMENT.md

23-23: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (1)
DEPLOYMENT.md (1)

105-108: Confirm whether “Streaming” should be enabled or disabled in UI

The backend implements a streaming proxy command. The instruction to “disable streaming” may be outdated or mode-dependent.

Please confirm current frontend behavior:

  • If the KoboldAI integration now uses proxy_request_streaming, advise enabling streaming.
  • If the integration uses only the blocking proxy, keep the instruction to disable streaming.
    I can update this line to match the actual code path in the frontend.

This commit corrects the example path for the application configuration directory on Windows.

The previous version incorrectly included a `\config` subfolder, which is not used by Tauri's `app_config_dir()` API. The path has been corrected to `%APPDATA%\com.heyamica.dev`.
This commit adds a `cancel` handler to the `ReadableStream` implementation in the `getExtra` function.

If a consumer of the stream aborts it (e.g., via AbortController or navigating away), the `cancel` handler will now be called. This handler cleans up the Tauri event listeners (`stream-chunk`, `stream-error`, `stream-end`), preventing resource and memory leaks.
This commit replaces `.unwrap()` calls with proper error handling when emitting events from the streaming proxy task in the Rust backend.

If the frontend disconnects while the stream is active, `handle.emit_all()` can fail. Previously, this would cause the spawned Rust task to panic. Now, it handles the error gracefully by logging it and terminating the stream, preventing the backend from crashing.
This commit replaces a `.unwrap()` call with proper error handling when forwarding the sidecar process's stdout to the frontend.

If the frontend disconnects while the sidecar is running, the `emit_all` call for `sidecar-output` could fail. Previously, this would cause the monitoring task to panic. Now, it handles the error gracefully by logging it and terminating the forwarding loop, preventing the backend task from crashing.
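A minimal sketch of the pattern both of these commits describe, assuming Tauri v1's `emit_all` and the `CommandEvent` type from `tauri::api::process` (the event name is illustrative):

```rust
use tauri::Manager;
use tauri::api::process::CommandEvent;

// Forward sidecar stdout to the frontend; if emission fails (e.g. every
// window is closed), log and stop forwarding instead of panicking.
fn forward_stdout(
    handle: tauri::AppHandle,
    mut rx: tauri::async_runtime::Receiver<CommandEvent>,
) {
    tauri::async_runtime::spawn(async move {
        while let Some(event) = rx.recv().await {
            if let CommandEvent::Stdout(line) = event {
                if let Err(e) = handle.emit_all("sidecar-output", line) {
                    eprintln!("emit failed, stopping stdout forwarding: {}", e);
                    break;
                }
            }
        }
    });
}
```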
This commit updates the `DEPLOYMENT.md` file to be more specific about the streaming option in the UI.

It now uses the exact UI label "Use Extra (enables streaming)" and clarifies that the option is disabled by default, so no action is required for standard non-streaming behavior.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (4)
src-tauri/src/main.rs (4)

131-133: Hardcoded proxy port; wire it from settings (configurable).

Both proxy functions still use 127.0.0.1:5000, contradicting the PR objective and previous review to make the port configurable. Read the port from settings via app state.

Apply this diff:

-    let client = reqwest::Client::new();
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let client = reqwest::Client::new();
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);

Additionally, see the companion diffs in the Settings struct and setup to expose SettingsState.


175-181: Make blocking proxy use configurable port and accept AppHandle.

To access the settings, accept AppHandle and use the configured proxy_port. This change is backward-compatible with Tauri invoke (handle is injected automatically).

-#[tauri::command]
-async fn proxy_request_blocking(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
+#[tauri::command]
+async fn proxy_request_blocking(
+    handle: tauri::AppHandle,
+    payload: ProxyRequestPayload,
+) -> Result<serde_json::Value, String> {
     let sanitized_path = validate_and_sanitize_path(&payload.path)?;
     let client = reqwest::Client::new();
-    // This port should be configurable in the future.
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);

18-21: Extend Settings with proxy_port and expose SettingsState.

Add proxy_port with a default, and manage a SettingsState so commands can read it. This aligns with the PR objective and prior review thread.

 #[derive(serde::Deserialize, Clone)]
 struct Settings {
     text_generation_webui_path: String,
+    #[serde(default = "default_proxy_port")]
+    proxy_port: u16,
 }

Add these supporting items outside this hunk:

#[derive(Clone)]
struct SettingsState(pub Settings);

fn default_proxy_port() -> u16 {
    5000
}

And manage SettingsState after parsing settings (see diff on lines 253-261).


253-261: Manage SettingsState so commands can access configuration.

Store the parsed settings in state for proxy functions.

             let settings: Settings = match serde_json::from_str(&settings_str) {
                 Ok(s) => s,
                 Err(e) => {
                     let msg = format!("Failed to parse settings.json: {}. Please check for syntax errors.", e);
                     show_error_and_exit(&handle, "Configuration Error", &msg);
                     return Ok(());
                 }
             };
+
+            // Expose settings to commands
+            handle.manage(SettingsState(settings.clone()));
🧹 Nitpick comments (6)
src-tauri/src/main.rs (6)

71-91: Avoid rebuilding the allowlist on every call; move it to a static.

Small perf and clarity improvement: define a static allowlist and reference it. Current HashSet cost is negligible but unnecessary.

-    // Enforce an allowlist of known good endpoints
-    let allowlist: HashSet<&str> = [
-        "api/v1/generate",
-        "api/extra/generate/stream",
-    ].iter().cloned().collect();
-
-    if !allowlist.contains(sanitized_path.as_str()) {
+    // Enforce an allowlist of known good endpoints
+    if !ALLOWED_PATHS.iter().any(|&p| p == sanitized_path) {
         return Err(format!("Invalid path '{}': not in allowlist.", path));
     }

Add this constant near the top of the file (outside this hunk):

const ALLOWED_PATHS: &[&str] = &[
    "api/v1/generate",
    "api/extra/generate/stream",
];

102-107: Avoid unwraps on window operations; handle failures gracefully.

Unwraps here can panic if the window is missing or already closed.

-  if let Some(splashscreen) = window.get_window("splashscreen") {
-    splashscreen.close().unwrap();
-  }
+  if let Some(splashscreen) = window.get_window("splashscreen") {
+    let _ = splashscreen.close();
+  }
   // Show main window
-  window.get_window("main").unwrap().show().unwrap();
+  if let Some(main) = window.get_window("main") {
+    let _ = main.show();
+  }

311-312: Do not unwrap event emits from system tray.

Event emission can fail if there are no windows/listeners. Avoid panics.

-                    app.app_handle().emit_all("confirm-close", ()).unwrap();
+                    let _ = app.app_handle().emit_all("confirm-close", ());

334-337: Avoid unwrap on window emit in CloseRequested.

Keep the app resilient if event dispatch fails.

-                api.prevent_close();
-                event.window().emit("confirm-close", ()).unwrap();
+                api.prevent_close();
+                let _ = event.window().emit("confirm-close", ());

150-170: Optional: Add request timeout to streaming to avoid hanging forever.

A small timeout helps avoid stuck connections.

Example:

-    let mut stream = res.bytes_stream();
+    let mut stream = res.bytes_stream();
     tauri::async_runtime::spawn(async move {
         while let Some(chunk_result) = stream.next().await {

And build the client with a timeout (adjust earlier lines where client is created):

-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .timeout(std::time::Duration::from_secs(120))
+        .build()
+        .map_err(|e| e.to_string())?;

125-133: Optional: Reuse a single reqwest::Client via state for better perf.

Reusing a single Client enables connection pooling and reduces TLS handshakes.

Outside these ranges, add a shared client:

struct AppState {
    child_process: Mutex<Option<tauri::api::process::Child>>,
    is_terminating: Arc<AtomicBool>,
    http: reqwest::Client,
}

Initialize it in main():

let app_state = AppState {
    child_process: Mutex::new(None),
    is_terminating: Arc::new(AtomicBool::new(false)),
    http: reqwest::Client::builder()
        .timeout(std::time::Duration::from_secs(120))
        .build()
        .expect("failed to construct reqwest client"),
};

Then in proxies:

let client = handle.state::<AppState>().http.clone();

Also applies to: 175-181

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 009e03e and 1d156d3.

📒 Files selected for processing (1)
  • src-tauri/src/main.rs (1 hunks)
🔇 Additional comments (1)
src-tauri/src/main.rs (1)

39-69: Double-check Child API: try_wait vs try_status; adjust accordingly.

Tauri’s Command child type often uses try_status() rather than try_wait(), and may not expose wait(). Ensure these APIs exist; otherwise, switch to try_status() and drop the wait() call.

If needed, change as follows:

-            match child.try_wait() {
+            match child.try_status() {
                 Ok(Some(_)) => {
                     // Process already exited, nothing to do.
                 }
                 Ok(None) => {
                     // Process is still running, so kill it and wait for it to be reaped.
                     if let Err(e) = child.kill() {
                         eprintln!("Failed to kill sidecar process: {}", e);
                     }
-                    if let Err(e) = child.wait() {
-                        eprintln!("Failed to wait for sidecar process to exit: {}", e);
-                    }
                 }

Please confirm the available methods in your tauri version; adjust the code to compile cleanly across platforms.

This commit corrects the documentation for the location of the final build artifacts produced by `tauri build`.

The previous path was incorrect. The documentation now correctly points to the platform-specific subdirectories inside `src-tauri/target/release/bundle/`, providing clearer and more accurate instructions for users.
…ible API, as you requested. I replaced the previous, incorrect implementation that targeted the KoboldAI API with one that now targets the API provided by `text-generation-webui`.

This final version includes all previous enhancements:
- **Rust Backend:** Manages the `text-generation-webui` sidecar process and acts as a secure API proxy.
- **OpenAI API Proxy:** The proxy now forwards requests to the `/v1/chat/completions` endpoint and handles the `Authorization` header.
- **Frontend Refactoring:** I refactored the `openAiChat.ts` file to use the Rust proxy for both streaming and blocking requests and reverted the incorrect changes to `koboldAiChat.ts`.
- **Robustness and Security:** The implementation includes graceful shutdown logic requiring your confirmation, robust error handling with dialogs, and a secure API path allowlist.
- **Documentation:** I updated `DEPLOYMENT.md` to reflect the new setup. The instructions now guide you to select "ChatGPT" and to launch the sidecar with the `--api` flag.
…ndling

This commit refactors the frontend TypeScript code in `openAiChat.ts` to make the calls to the Rust proxy more robust and type-safe.

Key changes:
- **Strong Typing:** Replaced `any` with a new `OpenAIResponse` interface when calling the blocking proxy. The `invoke` generic (`invoke<OpenAIResponse>`) is now used to ensure the response is correctly typed.
- **Error Handling:** Wrapped all `invoke` calls (for both streaming and non-streaming requests) in `try/catch` blocks. This ensures that any errors from the Rust backend (e.g., validation failures, network errors) are caught and handled gracefully instead of causing unhandled promise rejections.
- **Safe Access:** Added more thorough checks for the response structure before accessing nested properties to prevent runtime errors.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (5)
DEPLOYMENT.md (1)

24-24: Fix heading level progression

The heading jumps from h2 (##) to h4 (####). Use h3 for proper hierarchy.

-#### Step 1: Clone the Amica Repository
+### Step 1: Clone the Amica Repository

Also apply this change to the subsequent Step headings (lines 33 and 41):

-#### Step 2: Install JavaScript Dependencies
+### Step 2: Install JavaScript Dependencies
-#### Step 3: Configure the `text-generation-webui` Path
+### Step 3: Configure the `text-generation-webui` Path
src-tauri/src/main.rs (4)

18-21: Add proxy_port field to Settings struct

The Settings struct needs to include the configurable proxy port field as discussed in the previous review.

 #[derive(serde::Deserialize, Clone)]
 struct Settings {
     text_generation_webui_path: String,
+    #[serde(default = "default_proxy_port")]
+    proxy_port: u16,
 }
+
+fn default_proxy_port() -> u16 {
+    5000
+}

132-132: Use configurable proxy port from settings

Both proxy functions should use the port from settings instead of hardcoding it.

For the streaming proxy (line 132), you'll need to pass the settings to the command:

+    // Add SettingsState to manage function after parsing settings
+    #[derive(Clone)]
+    struct SettingsState(Settings);
+    
+    // In main() after parsing settings:
+    handle.manage(SettingsState(settings.clone()));
+
     // In proxy_request_streaming:
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);

For the blocking proxy (line 184), add the handle parameter:

 #[tauri::command]
-async fn proxy_request_blocking(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
+async fn proxy_request_blocking(
+    handle: tauri::AppHandle,
+    payload: ProxyRequestPayload
+) -> Result<serde_json::Value, String> {
     let sanitized_path = validate_and_sanitize_path(&payload.path)?;
     let client = reqwest::Client::new();
-    // This port should be configurable in the future.
-    let url = format!("http://127.0.0.1:5000/{}", sanitized_path);
+    let port = handle.state::<SettingsState>().0.proxy_port;
+    let url = format!("http://127.0.0.1:{}/{}", port, sanitized_path);

Also applies to: 184-184


219-222: Fix state acquisition across async boundary

The app_state captured at line 221 cannot be used inside the async task at line 292 due to lifetime constraints.

         .setup(|app| {
             let handle = app.handle().clone();
-            let app_state = handle.state::<AppState>();

             // ... settings loading code ...

             // Launch the external process
             tauri::async_runtime::spawn(async move {
                 let (mut rx, child) = match Command::new(&settings.text_generation_webui_path).spawn() {
                     Ok(c) => c,
                     Err(e) => {
                         let msg = format!("Failed to spawn the external process at '{}': {}", settings.text_generation_webui_path, e);
                         show_error_and_exit(&handle, "Process Error", &msg);
                         // This exit is in a spawned thread, so it won't kill the main app directly
                         // The main app will continue, but the child process won't be running.
                         // The dialog is the most important part.
                         return;
                     }
                 };

+                // Reacquire state inside the async task
+                let app_state = handle.state::<AppState>();
                 *app_state.child_process.lock().unwrap() = Some(child);

Also applies to: 292-292


268-268: Store settings in app state for proxy commands

After validating the settings, store them in the app state so the proxy commands can access the configured port.

Add this after line 268:

             };
+
+            // Define SettingsState type and manage it
+            #[derive(Clone)]
+            struct SettingsState(Settings);
+            app.manage(SettingsState(settings.clone()));

Note: The SettingsState struct definition should ideally be placed at the module level (around lines 29-32) rather than inside the function:

 struct AppState {
     child_process: Mutex<Option<tauri::api::process::Child>>,
     is_terminating: Arc<AtomicBool>,
 }
+
+#[derive(Clone)]
+struct SettingsState(Settings);
🧹 Nitpick comments (5)
DEPLOYMENT.md (1)

9-11: Add OS-specific build prerequisites for Windows and macOS

For completeness, include build requirements for Windows and macOS alongside the Linux-specific dependencies.

 *   **Node.js:** Amica's user interface is built with Node.js. You will need version `18.18.0` or newer. You can download it from the [official Node.js website](https://nodejs.org/).
 *   **Rust:** The new backend is written in Rust. The easiest way to install Rust is by using `rustup`. You can find instructions at the [official Rust website](https://www.rust-lang.org/tools/install).
+*   **(Windows Only) Build Tools:** Install the MSVC toolchain via Visual Studio 2022 or newer with "Desktop development with C++" workload, or the standalone Build Tools with Windows 10/11 SDK.
+*   **(macOS Only) Build Tools:** Install Xcode Command Line Tools by running `xcode-select --install` in Terminal.
 *   **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui).
src-tauri/src/main.rs (1)

283-289: Consider graceful error handling instead of process exit

Calling show_error_and_exit from within a spawned task will terminate the entire application. Consider emitting an error event instead.

                     Err(e) => {
                         let msg = format!("Failed to spawn the external process at '{}': {}", settings.text_generation_webui_path, e);
-                        show_error_and_exit(&handle, "Process Error", &msg);
-                        // This exit is in a spawned thread, so it won't kill the main app directly
-                        // The main app will continue, but the child process won't be running.
-                        // The dialog is the most important part.
+                        // Show dialog and emit error event
+                        dialog::message(handle.get_window("main").as_ref(), "Process Error", &msg);
+                        let _ = handle.emit_all("sidecar-error", Payload { message: msg });
+                        // Optionally exit the app gracefully
+                        handle.exit(1);
                         return;
                     }
src/features/chat/openAiChat.ts (3)

24-44: Add error handling for malformed stream chunks

The stream parsing silently continues on JSON parse errors, which could hide backend issues.

Consider collecting and reporting parse errors:

         const lines = chunk.split('\n').filter((line: string) => line.startsWith('data: '));
+        const parseErrors: string[] = [];
         for (const line of lines) {
           const data = line.substring(6);
           if (data.trim() === '[DONE]') {
             return;
           }
           try {
             const json = JSON.parse(data);
             const messagePiece = json.choices[0].delta.content;
             if (messagePiece) {
               controller.enqueue(messagePiece);
             }
           } catch (error) {
-            console.error("Failed to parse stream chunk:", error, "in chunk:", data);
+            parseErrors.push(data);
+            console.error("Failed to parse stream chunk:", error, "in chunk:", data);
           }
         }
+        if (parseErrors.length > 0) {
+          // Optionally emit a warning event or accumulate for debugging
+          console.warn(`Failed to parse ${parseErrors.length} chunks in this batch`);
+        }

114-119: Consider more specific error messages for vision API responses

The error thrown doesn't indicate whether the issue was missing choices, empty array, or missing message property.

   if (json.choices && json.choices.length > 0 && json.choices[0].message) {
     return json.choices[0].message.content;
   }

-  throw new Error("Invalid response structure from OpenAI compatible API");
+  const errorDetails = !json.choices ? "missing 'choices' field" :
+                       json.choices.length === 0 ? "empty 'choices' array" :
+                       "missing 'message' in choice";
+  throw new Error(`Invalid response structure from OpenAI compatible API: ${errorDetails}`);

3-4: No existing web-build or Tauri checks detected
Our search didn’t uncover any browser-only or web build scripts in package.json, vite.config, or webpack.config, nor any window.__TAURI__ or process.env.TAURI guards in the codebase. If this repository is exclusively a Tauri desktop app, invoking Tauri APIs directly is safe. If you plan to ship a web version (or reuse this module in a browser), please wrap your calls in an environment check, for example:

const isTauri =
  typeof window !== 'undefined' && (window as any).__TAURI__ !== undefined;

if (!isTauri) {
  throw new Error('This feature requires the Tauri desktop application');
}

// now it’s safe to call invoke() or listen()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1d156d3 and e9939b6.

📒 Files selected for processing (3)
  • DEPLOYMENT.md (1 hunks)
  • src-tauri/src/main.rs (1 hunks)
  • src/features/chat/openAiChat.ts (3 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
DEPLOYMENT.md

24-24: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)


Amica needs to know where to find your `text-generation-webui` executable. This is configured in a `settings.json` file.

##### How Configuration Works

⚠️ Potential issue

Align subheading levels under Step 3

Subheadings should be one level below the Step heading.

-##### How Configuration Works
+#### How Configuration Works
-##### Creating Your Custom `settings.json`
+#### Creating Your Custom `settings.json`

Also applies to: 53-53

🤖 Prompt for AI Agents
In DEPLOYMENT.md around lines 45 and 53, the "How Configuration Works" (and
other subheadings) are using the same heading level as the "Step 3" heading;
change these subheadings to be one level lower than the Step heading (e.g., if
Step 3 is "### Step 3", make the subheadings "#### ...") so they are nested
correctly under Step 3; update both occurrences (lines 45 and 53) to the
appropriate heading level and verify TOC/rendering reflects the hierarchy.

Comment on lines +66 to +69
{
"text_generation_webui_path": ""
}
```

🛠️ Refactor suggestion

Document the configurable proxy port setting

The backend now supports a configurable proxy port, but the documentation only shows the path configuration.

     ```json
     {
-      "text_generation_webui_path": ""
+      "text_generation_webui_path": "",
+      "proxy_port": 5000
     }
     ```

Add a note explaining the proxy_port field:

 4.  Add the **full path** to your `text-generation-webui` executable inside the quotes.
+    The `proxy_port` field (default: 5000) specifies which port the text-generation-webui API server is listening on.
🤖 Prompt for AI Agents
In DEPLOYMENT.md around lines 66 to 69, the example JSON only shows
text_generation_webui_path but the backend supports a configurable proxy_port;
update the example to include "proxy_port": 5000 and add a short note below the
code block explaining that proxy_port sets the HTTP port the proxy listens on
(default value if any) and that it can be changed to avoid port conflicts or to
match deployment requirements.

Comment on lines +75 to +78
{
"text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
}
```

🛠️ Refactor suggestion

Update examples to include proxy_port

The examples should reflect the complete settings structure.

         ```json
         {
-          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
+          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat",
+          "proxy_port": 5000
         }
         ```
         ```json
         {
-          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
+          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh",
+          "proxy_port": 5000
         }
         ```

Also applies to: 83-85

🤖 Prompt for AI Agents
In DEPLOYMENT.md around lines 75-78 (and likewise update lines 83-85), the
example JSON snippets are missing the proxy_port entry and thus do not show the
complete settings structure; update each example object to include a
"proxy_port": 5000 property (add the trailing comma on the preceding line where
needed) so the examples show both "text_generation_webui_path" and "proxy_port"
with valid JSON formatting.

This commit refactors the `setup` hook to follow Tauri best practices for handling State in spawned async tasks.

Instead of capturing the `State<'_, AppState>` guard (which can lead to lifetime compilation errors), the `AppHandle` is captured, and the state is reacquired from the handle within the async task just before it's needed. This makes the code more robust and less prone to lifetime issues.
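A minimal sketch of that pattern (Tauri v1 style; `AppState` is reduced here to the one field the task touches, and error handling is illustrative):

```rust
use std::sync::Mutex;
use tauri::Manager;
use tauri::api::process::{Command, CommandChild};

struct AppState {
    child_process: Mutex<Option<CommandChild>>,
}

// Assumes app.manage(AppState { .. }) was called during setup.
fn spawn_sidecar(app: &tauri::App, program: String) {
    // Capture the 'static AppHandle, not a State<'_, AppState> guard.
    let handle = app.handle();
    tauri::async_runtime::spawn(async move {
        let (_rx, child) = match Command::new(program).spawn() {
            Ok(pair) => pair,
            Err(e) => {
                eprintln!("failed to spawn sidecar: {}", e);
                return;
            }
        };
        // Reacquire the state from the handle inside the task.
        let state = handle.state::<AppState>();
        *state.child_process.lock().unwrap() = Some(child);
    });
}
```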
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
src/features/chat/openAiChat.ts (1)

56-60: Add type safety for error event payload.

The error event handler should have proper typing for the payload structure.

-      const onError = await listen("stream-error", (event: Event<any>) => {
+      const onError = await listen("stream-error", (event: Event<{error: string}>) => {
         console.error("Stream error from backend:", event.payload.error);
         controller.error(new Error(event.payload.error));
         cleanup();
       });
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e9939b6 and a7afd93.

📒 Files selected for processing (1)
  • src/features/chat/openAiChat.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/features/chat/openAiChat.ts (2)
src/features/chat/messages.ts (1)
  • Message (4-7)
src/utils/config.ts (1)
  • config (139-154)
🔇 Additional comments (6)
src/features/chat/openAiChat.ts (6)

3-4: LGTM! Tauri imports are correctly added.

The Tauri API imports are properly structured for the new proxy-based architecture.


6-14: LGTM! TypeScript interfaces properly define OpenAI response structure.

The interfaces correctly model the expected OpenAI API response format for the blocking proxy calls.


24-29: Parameter rename improves clarity.

The _url parameter correctly indicates that URL handling is now delegated to the Rust proxy, improving code readability.


75-91: LGTM! Proxy request invocation is well-structured.

The streaming request invocation properly handles errors and includes appropriate payload structure with authentication and request body.


93-96: LGTM! Stream cancellation properly triggers cleanup.

The cancel handler correctly calls cleanup to prevent resource leaks when the stream is cancelled.


102-107: Verify Rust backend proxy handles OpenAI configuration keys

I didn’t find any occurrences of openai_apikey, openai_url, openai_model, vision_openai_apikey, or vision_openai_model in the src-tauri/ directory. Please confirm that the Rust proxy:

  • Loads these keys (e.g. from tauri.conf.json or environment variables)
  • Exposes them via the corresponding Tauri commands
  • Passes them correctly to the OpenAI endpoints

Comment on lines 34 to 54
const onChunk = await listen("stream-chunk", (event: Event<any>) => {
// The OpenAI stream sends data like `data: {"id":...,"choices":[{"delta":{"content":"..."}}]}\n\n`
// We need to parse this and extract the content.
const chunk = event.payload.chunk;
const lines = chunk.split('\n').filter((line: string) => line.startsWith('data: '));
for (const line of lines) {
const data = line.substring(6);
if (data.trim() === '[DONE]') {
return;
}
try {
const json = JSON.parse(data);
const messagePiece = json.choices[0].delta.content;
if (messagePiece) {
controller.enqueue(messagePiece);
}
} catch (error) {
console.error("Failed to parse stream chunk:", error, "in chunk:", data);
}
}
} catch (error) {
console.error(error);
controller.error(error);
} finally {
reader.releaseLock();
});

🛠️ Refactor suggestion

Improve error handling and data validation in stream parsing.

The stream chunk parsing has several potential issues:

  1. No validation that event.payload.chunk exists
  2. Assumes json.choices[0].delta.content structure without null checks
  3. Could benefit from more robust error handling
-      const onChunk = await listen("stream-chunk", (event: Event<any>) => {
+      const onChunk = await listen("stream-chunk", (event: Event<{chunk: string}>) => {
         // The OpenAI stream sends data like `data: {"id":...,"choices":[{"delta":{"content":"..."}}]}\n\n`
         // We need to parse this and extract the content.
-        const chunk = event.payload.chunk;
+        const chunk = event.payload?.chunk;
+        if (!chunk) {
+          console.warn("Received stream chunk event without chunk data");
+          return;
+        }
         const lines = chunk.split('\n').filter((line: string) => line.startsWith('data: '));
         for (const line of lines) {
           const data = line.substring(6);
           if (data.trim() === '[DONE]') {
             return;
           }
           try {
             const json = JSON.parse(data);
-            const messagePiece = json.choices[0].delta.content;
+            const messagePiece = json.choices?.[0]?.delta?.content;
             if (messagePiece) {
               controller.enqueue(messagePiece);
             }
           } catch (error) {
             console.error("Failed to parse stream chunk:", error, "in chunk:", data);
           }
         }
       });
🤖 Prompt for AI Agents
In src/features/chat/openAiChat.ts around lines 34-54, the stream parsing
assumes event.payload.chunk and a specific JSON shape; update the handler to
first guard that event.payload and event.payload.chunk exist and skip if
missing/empty, split lines by '\n' and filter only those starting with 'data: ',
treat a line whose data trimmed equals '[DONE]' as an end-of-stream marker (stop
processing this chunk but do not return from the whole handler), wrap JSON.parse
in try/catch and on success validate that parsed.choices is an array with at
least one element, that parsed.choices[0].delta exists and has a string content
before calling controller.enqueue; on any validation or parse failure log a
clear error including the raw chunk/line and continue processing remaining lines
(do not throw).

This commit refactors the error handling for when the sidecar process fails to spawn.

Instead of calling a helper that uses `std::process::exit`, the logic now directly uses `dialog::message` to show the error to you and then calls `handle.exit(1)`. This ensures the application terminates gracefully via Tauri's event loop, rather than an abrupt process exit from a background thread.
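A minimal sketch of that flow, mirroring the `dialog::message` call suggested earlier in this thread (v1 APIs; treat as illustrative):

```rust
use tauri::Manager;
use tauri::api::dialog;

// Surface the failure to the user, then exit through Tauri's event loop
// rather than calling std::process::exit from a background task.
// Note: dialog::message is non-blocking, so a blocking dialog variant may
// be preferable if the dialog must be read before the app exits.
fn report_spawn_failure(handle: &tauri::AppHandle, msg: &str) {
    dialog::message(handle.get_window("main").as_ref(), "Process Error", msg);
    handle.exit(1);
}
```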
This commit refactors the event listener cleanup mechanism in `getResponseStream` to be more robust and prevent race conditions.

Instead of assigning unlisten functions to separate variables, they are now pushed into an array immediately after creation. The `cleanup` function now iterates over this array. This ensures that if an error or end event arrives before all listeners have been registered, the cleanup function will still correctly unregister all listeners that have been successfully created up to that point, preventing memory leaks.
@AlphaEcho11 AlphaEcho11 left a comment

Confirming

Open your terminal, navigate to where you want to store the project, and run the following command:

```bash
git clone https://github.com/semperai/amica
```

Need to look at the temp/final git location - for now, should be the current project at: https://github.com/AlphaEcho11/amica

Comment on lines 81 to 82
let settings_path = "settings.json";
let settings_str = fs::read_to_string(settings_path)

I've taken a close look at the current implementation in main.rs to ensure it follows this best practice. The code currently uses handle.path_resolver().resolve_resource("resources/settings.json").

Good news - this is Tauri's recommended API for exactly this purpose! It correctly resolves the path to the bundled resource file regardless of where the application is launched from. So, it seems the code is already robust against the CWD issue you've described.
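For reference, a minimal sketch of that approach (error handling illustrative):

```rust
// Resolve the bundled resource via Tauri's path resolver instead of
// relying on the process's current working directory.
fn load_settings(handle: &tauri::AppHandle) -> Result<String, String> {
    let path = handle
        .path_resolver()
        .resolve_resource("resources/settings.json")
        .ok_or("settings.json not found in bundled resources")?;
    std::fs::read_to_string(&path).map_err(|e| e.to_string())
}
```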

This commit migrates the project from Tauri v1 to v2 to resolve a build failure caused by using a legacy v1 feature flag (`shell-open`) with a v2 dependency.

Key changes include:
- **Rust Dependencies:** Updated the `tauri` dependency in `Cargo.toml` to a v2 version and removed the `shell-open` feature.
- **Rust API Usage:** Refactored `main.rs` to replace the deprecated `tauri::api::shell::open` with the new `app.shell().open()` API from Tauri v2 (see the sketch after this list).
- **Frontend Dependencies:** Updated the `@tauri-apps/api` and `@tauri-apps/cli` packages in `package.json` to versions compatible with Tauri v2.
- **Configuration:** Verified that the `shell` allowlist in `tauri.conf.json` is correctly configured for the v2 API.
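A minimal sketch of the new call shape, assuming `tauri-plugin-shell` is added to `Cargo.toml` and registered with the app builder (signature per the v2 plugin docs; treat as illustrative):

```rust
use tauri_plugin_shell::ShellExt;

// Open an external URL via the v2 shell plugin, logging instead of
// panicking if the OS handler fails.
fn open_release_page(app: &tauri::AppHandle) {
    if let Err(e) = app
        .shell()
        .open("https://github.com/semperai/amica/releases/latest", None)
    {
        eprintln!("failed to open url: {}", e);
    }
}
```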
…s resolves the build failure you were seeing, which was caused by an outdated configuration.

Here is a summary of the changes I made:
- Renamed `build.devPath` to `build.devUrl`.
- Renamed `build.distDir` to `build.frontendDist`.
- Removed the top-level `package` and `tauri` objects.
- Created a new top-level `app` object and moved `windows`, `security`, `systemTray`, and package information into it.
- Moved the `bundle` object to be top-level.
- Moved the `allowlist` into `app.security`.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
src-tauri/tauri.conf.json (3)

16-19: Limit shell.open usage and validate URLs at call sites

Allowing shell.open is fine for external links, but ensure the frontend strictly validates/sanitizes URLs (no file://, no custom schemes). Consider centralizing external-open calls in one helper for reviewability.

If usage is limited to http(s) links, you may also add runtime guards and a simple allowlist (e.g., only certain domains) in the call path.


70-71: macOS private API enabled: confirm distribution intent

usePrivateApi: true blocks Mac App Store distribution and may trigger review flags. If you don’t rely on private API features, set it to false for broader distribution.

-      "usePrivateApi": true
+      "usePrivateApi": false

If you intentionally need vibrancy/transparent effects requiring private APIs, keep as-is.


76-76: Bundle targets: use an explicit list to reduce CI time/artifacts

"targets": "all" builds everything and can slow CI. If you don’t need all formats, set a specific array.

Examples:

  • Cross-platform dev CI: "targets": ["app"]
  • macOS release: "targets": ["dmg"]
  • Windows release: "targets": ["nsis"]
-    "targets": "all",
+    "targets": ["app"],
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c92a01 and cf98e74.

📒 Files selected for processing (1)
  • src-tauri/tauri.conf.json (2 hunks)
🔇 Additional comments (1)
src-tauri/tauri.conf.json (1)

72-74: Resources bundling looks correct

Including "resources" ensures settings.json and other assets are shipped. Ensure your Rust code reads from tauri asset paths or uses tauri::path::BaseDirectory::Resource as intended.

If you need a quick sanity check in code:

  • On Rust side, resolve via app.path().resolve("settings.json", BaseDirectory::Resource)?.
  • Avoid hardcoding relative paths.

Comment on lines 3 to 8
"build": {
"beforeBuildCommand": "export NEXT_OUTPUT=\"export\" && npm run build && npm run splash",
"beforeDevCommand": "npm run dev",
"devPath": "http://localhost:3000",
"distDir": "../out"
"devUrl": "http://localhost:3000",
"frontendDist": "../out"
},

💡 Verification agent

🧩 Analysis chain

Cross-platform build env var: replace POSIX-only export with a portable approach

beforeBuildCommand uses export NEXT_OUTPUT="export" which will fail on Windows runners. Prefer cross-env (or move this into an npm script that sets the var portably).

Proposed change (keeps logic, makes it portable):

-    "beforeBuildCommand": "export NEXT_OUTPUT=\"export\" && npm run build && npm run splash",
+    "beforeBuildCommand": "cross-env NEXT_OUTPUT=export npm run build && npm run splash",

If you don’t use cross-env yet, add it as a devDependency and ensure CI runners have it available.

Quick repo check:


🏁 Script executed:

#!/bin/bash
# 1) Flag POSIX-only env var usage in Tauri build hooks
rg -nA1 $'"beforeBuildCommand": "export ' src-tauri/tauri.conf.json

# 2) Verify cross-env is available
fd package.json | xargs -I{} jq -r '.devDependencies["cross-env"] // empty' {} | sed '/^$/d' || echo "cross-env not found in devDependencies"

Length of output: 304


Use cross-env for portable environment variables in Tauri build

The beforeBuildCommand in src-tauri/tauri.conf.json currently uses a POSIX-only export invocation, which will fail on Windows:

-  "beforeBuildCommand": "export NEXT_OUTPUT=\"export\" && npm run build && npm run splash",
+  "beforeBuildCommand": "cross-env NEXT_OUTPUT=export npm run build && npm run splash",

cross-env isn’t listed in your devDependencies, so CI on Windows won’t have it. Please:

  • Install cross-env in your project root
    npm install --save-dev cross-env
    
  • Update src-tauri/tauri.conf.json as shown above

This ensures the environment variable is set consistently across all platforms.

🤖 Prompt for AI Agents
In src-tauri/tauri.conf.json around lines 3 to 8, the beforeBuildCommand uses a
POSIX-only export which fails on Windows; install cross-env as a devDependency
at the project root (npm install --save-dev cross-env) and update the
beforeBuildCommand to set NEXT_OUTPUT via cross-env so the env var is set
portably across platforms; ensure package.json's devDependencies includes
cross-env and adjust any CI scripts that run the tauri build to use the updated
config.

Comment on lines +11 to +12
"version": "0.2.1",
"identifier": "com.heyamica.dev",

💡 Verification agent

🧩 Analysis chain

Tauri v2 schema alignment: version/identifier likely belong at the root, not under app

The AI summary claims these were moved to the root in this PR; the current config still nests them under app. If targeting Tauri v2, move them to top-level to satisfy the v2 schema.

Proposed change:

 {
   "$schema": "../node_modules/@tauri-apps/cli/schema.json",
+  "version": "0.2.1",
+  "identifier": "com.heyamica.dev",
   "build": {
@@
-  "app": {
-    "productName": "amica",
-    "version": "0.2.1",
-    "identifier": "com.heyamica.dev",
+  "app": {
+    "productName": "amica",

If you’ve already validated this with the CLI and it works as-is, ignore; otherwise, please confirm against the v2 schema.

Optional web check for clarity:


🌐 Web query:

In Tauri 2, should "version" and "identifier" be placed at the root of tauri.conf.json or within the "app" block?

💡 Result:

Root / top-level. In Tauri 2 the app metadata keys (e.g. "version" and "identifier") belong at the top level of tauri.conf.json (not inside the "app" object). The v2 config restructure moved productName/version/identifier to the top-level and renamed the old "tauri" object to "app". (tauri.app, v2.tauri.app)

Example (minimal):
{
"productName": "my-app",
"version": "0.1.0",
"identifier": "com.example.myapp",
"app": { /* runtime/window/security config */ }
}



Update tauri.conf.json to Tauri v2 schema: move version & identifier to top level

Confirmed by the Tauri v2 docs that version and identifier belong at the root of your tauri.conf.json, not under app.

In src-tauri/tauri.conf.json:

  • Remove "version" and "identifier" from inside the app block.
  • Insert them at the top level, immediately after the "$schema" property.

Suggested diff:

 {
   "$schema": "../node_modules/@tauri-apps/cli/schema.json",
+  "version": "0.2.1",
+  "identifier": "com.heyamica.dev",
   "build": {
     /* … */
   },
-  "app": {
-    "productName": "amica",
-    "version": "0.2.1",
-    "identifier": "com.heyamica.dev",
+  "app": {
+    "productName": "amica",
     /* runtime/window/security config */
   }
 }
🤖 Prompt for AI Agents
In src-tauri/tauri.conf.json around lines 11-12, the "version" and "identifier"
properties are currently inside the "app" block but must be top-level for Tauri
v2; remove "version" and "identifier" from the app object and insert them as
top-level properties immediately after the "$schema" key (ensuring proper JSON
commas), so the file has "$schema", then "version" and "identifier" at the root,
and the app block no longer contains those keys.

Comment on lines +14 to +19
"csp": null,
"allowlist": {
"shell": {
"open": true
}
}

🛠️ Refactor suggestion

⚠️ Potential issue

CSP is disabled; add a minimally safe CSP and use devCsp for development

"csp": null disables CSP entirely, which is risky given the new proxy endpoints and any webview content. Use a restrictive production CSP and a flexible devCsp to avoid DX friction.

Suggested security block:

     "security": {
-      "csp": null,
+      "csp": "default-src 'self'; img-src 'self' data: blob:; style-src 'self' 'unsafe-inline'; script-src 'self'; connect-src 'self' http://127.0.0.1:* https://api.openai.com;",
+      "devCsp": "default-src 'self' 'unsafe-eval' 'unsafe-inline' data: blob:; connect-src * ws: wss: http: https:",
       "allowlist": {
         "shell": {
           "open": true
         }
       }
     }
  • Tighten connect-src to the exact hosts/ports your proxy needs (e.g., http://127.0.0.1:5000).
  • If streaming over WebSockets, include ws://127.0.0.1:* or the exact port.

I can generate a tailored CSP once you confirm the exact local proxy ports and any third-party hosts.

🤖 Prompt for AI Agents
In src-tauri/tauri.conf.json around lines 14–19, the CSP is currently disabled
("csp": null); set a restrictive production CSP string (not null) and add a
separate "devCsp" entry for development. Tighten directives—at minimum set
default-src 'self'; script-src 'self' 'unsafe-eval' only if required by dev
tooling; style-src 'self' 'unsafe-inline' only if necessary; and set connect-src
to the explicit local proxy host(s)/port(s) (e.g., http://127.0.0.1:5000) and
any exact third‑party hosts; include ws:// or wss:// origins for WebSocket
streaming with exact ports if used. Replace placeholder hosts/ports with your
actual proxy and third‑party endpoints, and ensure the production "csp" does not
include permissive wildcards or broad origins while "devCsp" remains more
permissive for local development.

@slowsynapse
Collaborator

Cool, Thanks. Will review.
