Commits (24)

The diff below shows the changes from a single commit.
b382be7
I've integrated the Rust backend for process management and the API p…
google-labs-jules[bot] Aug 13, 2025
4da3fe0
docs: Clarify how settings.json is loaded
google-labs-jules[bot] Aug 13, 2025
0cdd5a6
refactor: Improve config loading and error handling
google-labs-jules[bot] Aug 13, 2025
78a0911
refactor: Implement streaming proxy and optimize dependencies
google-labs-jules[bot] Aug 13, 2025
84ce290
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
d4e59b6
fix: Add SSRF protection and improve proxy logic
google-labs-jules[bot] Aug 13, 2025
0ac7c01
fix: Implement robust, race-free shutdown logic
google-labs-jules[bot] Aug 13, 2025
8938ea8
feat: Add confirmation on quit and robust shutdown logic
google-labs-jules[bot] Aug 13, 2025
8297e10
docs: Clarify Linux dependency for Tauri v1 vs v2
google-labs-jules[bot] Aug 13, 2025
1f7cb2f
docs: Correct Windows config path in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
9399c2a
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
bf3598e
Update DEPLOYMENT.md
AlphaEcho11 Aug 13, 2025
009e03e
fix(frontend): Add cancel handler to ReadableStream
google-labs-jules[bot] Aug 13, 2025
8f22f3f
fix(backend): Add error handling for event emission
google-labs-jules[bot] Aug 13, 2025
1d156d3
fix(backend): Add error handling for stdout forwarding
google-labs-jules[bot] Aug 13, 2025
09621db
docs: Clarify streaming option in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
8881db0
docs: Correct build artifact paths in DEPLOYMENT.md
google-labs-jules[bot] Aug 13, 2025
e9939b6
I've finished refactoring the implementation to use the OpenAI-compat…
google-labs-jules[bot] Aug 13, 2025
a7afd93
refactor(frontend): Harden proxy invoke calls with types and error ha…
google-labs-jules[bot] Aug 13, 2025
ea0364a
refactor(backend): Reacquire Tauri State within async task
google-labs-jules[bot] Aug 13, 2025
a9f9b55
fix(backend): Use graceful exit when sidecar spawn fails
google-labs-jules[bot] Aug 13, 2025
c33b59d
fix(frontend): Implement robust event listener cleanup
google-labs-jules[bot] Aug 13, 2025
9c92a01
fix: Migrate project to Tauri v2
google-labs-jules[bot] Aug 14, 2025
cf98e74
I've migrated your `tauri.conf.json` file to the Tauri v2 format. Thi…
google-labs-jules[bot] Aug 14, 2025
87 changes: 87 additions & 0 deletions DEPLOYMENT.md
@@ -0,0 +1,87 @@
# Deployment Guide: Running Amica Locally

This guide provides step-by-step instructions for setting up and running the Rust-powered version of Amica on your local machine.

## 1. Prerequisites

Before you begin, you need to have the following software installed on your system:

* **Node.js:** Amica's user interface is built with Node.js. You will need version `18.18.0` or newer. You can download it from the [official Node.js website](https://nodejs.org/).
* **Rust:** The new backend is written in Rust. The easiest way to install Rust is by using `rustup`. You can find instructions at the [official Rust website](https://www.rust-lang.org/tools/install).
* **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui). Make sure you can run it successfully on its own before integrating it with Amica.
* **(Linux Only) Build Dependencies:** On Linux, you will need to install a few extra packages for Tauri to build correctly. You can install them with the following command:
```bash
sudo apt-get update
sudo apt-get install -y libwebkit2gtk-4.0-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
```

## 2. Installation and Configuration

Follow these steps to get the Amica project set up.

#### Step 1: Clone the Amica Repository

Open your terminal, navigate to where you want to store the project, and run the following command:

```bash
git clone https://github.com/semperai/amica
cd amica
```

**Author comment:** Need to look at the temp/final git location - for now, should be the current project at: https://github.com/AlphaEcho11/amica

#### Step 2: Install JavaScript Dependencies

Once you are in the `amica` directory, run this command to install all the necessary frontend packages:

```bash
npm install
```

#### Step 3: Configure the `text-generation-webui` Path

The application needs to know where to find your `text-generation-webui` executable.

1. Find the `settings.json` file in the root of the `amica` directory.
2. Open it and you will see:
```json
{
"text_generation_webui_path": ""
}
```
**Review comment on lines +66 to +69:**

🛠️ Refactor suggestion

Document the configurable proxy port setting

The backend now supports a configurable proxy port, but the documentation only shows the path configuration.

     ```json
     {
-      "text_generation_webui_path": ""
+      "text_generation_webui_path": "",
+      "proxy_port": 5000
     }
     ```

Add a note explaining the proxy_port field:

```diff
 4.  Add the **full path** to your `text-generation-webui` executable inside the quotes.
+    The `proxy_port` field (default: 5000) specifies which port the text-generation-webui API server is listening on.
```
🤖 Prompt for AI Agents: In DEPLOYMENT.md around lines 66 to 69, the example JSON only shows `text_generation_webui_path` but the backend supports a configurable `proxy_port`; update the example to include `"proxy_port": 5000` and add a short note below the code block explaining that `proxy_port` sets the HTTP port the proxy listens on (default value if any) and that it can be changed to avoid port conflicts or to match deployment requirements.
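
If the backend adopts this suggestion, the `Settings` struct in `src-tauri/src/main.rs` needs a matching field. A minimal sketch, assuming serde's `default` attribute so that existing `settings.json` files without the key keep working; the `proxy_port` field is the reviewers' proposal, not part of this diff:

```rust
// Hypothetical extension of the Settings struct in src-tauri/src/main.rs.
// The `proxy_port` field is reviewer-suggested; it is NOT in this diff.
#[derive(serde::Deserialize, Clone)]
struct Settings {
    text_generation_webui_path: String,
    // Fall back to 5000 (the port currently hardcoded in `proxy_request`)
    // when the key is absent from settings.json.
    #[serde(default = "default_proxy_port")]
    proxy_port: u16,
}

fn default_proxy_port() -> u16 {
    5000
}
```

`proxy_request` could then build its URL from `settings.proxy_port` instead of the hardcoded `5000`.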

3. Add the **full path** to your executable inside the quotes.

* **Windows Example:**
```json
{
"text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
}
```
**Review comment on lines +75 to +78:**

🛠️ Refactor suggestion

Update examples to include proxy_port

The examples should reflect the complete settings structure.

         ```json
         {
-          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
+          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat",
+          "proxy_port": 5000
         }
         ```
         ```json
         {
-          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
+          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh",
+          "proxy_port": 5000
         }
         ```

Also applies to: 83-85

🤖 Prompt for AI Agents: In DEPLOYMENT.md around lines 75-78 (and likewise update lines 83-85), the example JSON snippets are missing the `proxy_port` entry and thus do not show the complete settings structure; update each example object to include a `"proxy_port": 5000` property (add the trailing comma on the preceding line where needed) so the examples show both `"text_generation_webui_path"` and `"proxy_port"` with valid JSON formatting.

*(Note the double backslashes `\\`)*

* **Linux/macOS Example:**
```json
{
"text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
}
```

## 3. Building the Application

Now that everything is configured, you can build the final, standalone executable.

Run the following command in your terminal. This process will compile the Rust backend and package it with the frontend into a single application. It may take several minutes.

```bash
npm run tauri build
```

Once the build is complete, you will find the bare executable in `src-tauri/target/release/` and the platform packages in `src-tauri/target/release/bundle/`: an `.msi`/`.exe` installer on Windows, an `.AppImage` on Linux, or a `.app` inside a `.dmg` on macOS.

## 4. Running Amica

You can now run this executable file directly! There is no need for any further commands.

On the first run, be sure to open the in-app settings and configure the following:
* **Chatbot Backend:** Select **KoboldAI**.
* **Streaming/Extra Option:** If you see an option for streaming, make sure it is **disabled**.

That's it! Your self-contained, Rust-powered Amica application is now ready to use.
3 changes: 3 additions & 0 deletions settings.json
@@ -0,0 +1,3 @@
{
"text_generation_webui_path": ""
}
2 changes: 2 additions & 0 deletions src-tauri/Cargo.toml
@@ -18,6 +18,8 @@ tauri-build = { version = "1.5.5", features = [] }
serde_json = "1.0.128"
serde = { version = "1.0.210", features = ["derive"] }
tauri = { version = "1.8.0", features = [ "macos-private-api", "system-tray", "shell-open"] }
reqwest = { version = "0.12.5", features = ["json"] }
tokio = { version = "1.39.2", features = ["full"] }

[features]
# this feature is used for production builds or when `devPath` points to the filesystem and the built-in dev server is disabled.
200 changes: 162 additions & 38 deletions src-tauri/src/main.rs
@@ -2,52 +2,176 @@
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]

use tauri::{
    api::process::{Command, CommandEvent},
    CustomMenuItem, Manager, SystemTray, SystemTrayEvent, SystemTrayMenu, SystemTrayMenuItem,
};
use std::sync::Mutex;
use std::fs;

#[derive(serde::Deserialize, Clone)]
struct Settings {
    text_generation_webui_path: String,
}

// the payload type must implement `Serialize` and `Clone`.
#[derive(Clone, serde::Serialize)]
struct Payload {
    message: String,
}

struct AppState {
    child_process: Mutex<Option<tauri::api::process::Child>>,
}

#[tauri::command]
async fn close_splashscreen(window: tauri::Window) {
    // Close splashscreen
    if let Some(splashscreen) = window.get_window("splashscreen") {
        splashscreen.close().unwrap();
    }
    // Show main window
    window.get_window("main").unwrap().show().unwrap();
}

#[derive(serde::Deserialize)]
struct ProxyRequestPayload {
    path: String,
    body: serde_json::Value,
}

#[tauri::command]
async fn proxy_request(payload: ProxyRequestPayload) -> Result<serde_json::Value, String> {
    let client = reqwest::Client::new();
    // This port should be configurable in the future.
    let url = format!("http://127.0.0.1:5000/{}", payload.path);

    let res = client
        .post(&url)
        .json(&payload.body)
        .send()
        .await
        .map_err(|e| e.to_string())?;

    if res.status().is_success() {
        res.json::<serde_json::Value>()
            .await
            .map_err(|e| e.to_string())
    } else {
        let status = res.status();
        let text = res.text().await.map_err(|e| e.to_string())?;
        Err(format!(
            "API request to {} failed with status {}: {}",
            url, status, text
        ))
    }
}

fn main() {
    let app_state = AppState {
        child_process: Mutex::new(None),
    };

    tauri::Builder::default()
        .manage(app_state)
        .setup(|app| {
            let handle = app.handle();
            let app_state = handle.state::<AppState>();

            // Read settings
            let settings_path = "settings.json";
            let settings_str = fs::read_to_string(settings_path)
**coderabbitai bot** commented on Aug 13, 2025:

⚠️ Potential issue

Settings file path should be relative to app directory

The settings.json path is hardcoded as a relative path, which could cause issues depending on where the application is launched from.

Use the app's resource directory for the settings file:

```diff
             // Read settings
-            let settings_path = "settings.json";
+            let settings_path = app.path_resolver()
+                .resolve_resource("settings.json")
+                .ok_or("Failed to resolve settings.json path")?;
             let settings_str = fs::read_to_string(settings_path)
```
🤖 Prompt for AI Agents: In src-tauri/src/main.rs around lines 81 to 82, the code uses a hardcoded relative path "settings.json" which breaks if the app is launched from a different CWD; replace this with the application's resource directory path (e.g., obtain the app resource dir via `tauri::api::path::resource_dir()` or the app handle's path resolver, join "settings.json" to that PathBuf, and read that full path with `fs::read_to_string`); also handle the Option/Result (return an error or fallback) so the app fails gracefully if the resource dir or file is missing.

**Author** replied:

I've taken a close look at the current implementation in `main.rs` to ensure it follows this best practice. The code currently uses `handle.path_resolver().resolve_resource("resources/settings.json")`.

Good news - this is Tauri's recommended API for exactly this purpose! It correctly resolves the path to the bundled resource file regardless of where the application is launched from. So, it seems the code is already robust against the CWD issue you've described.

**coderabbitai bot** replied:

@AlphaEcho11 You're absolutely right! I apologize for the confusion. Looking at the current implementation in the file, I can see that the code already properly handles the settings file path resolution:

1. It first attempts to load from the user's config directory: `config_dir.join("settings.json")`
2. If that doesn't exist, it falls back to the bundled resource: `handle.path_resolver().resolve_resource("resources/settings.json")`

This is indeed the correct and robust approach using Tauri's recommended APIs. The diff I commented on appears to have shown an intermediate state during development. The current implementation properly handles both user configuration overrides and bundled defaults, with appropriate error handling for missing files or directories.

Thank you for taking the time to clarify this - the current code is well-implemented and follows Tauri best practices perfectly!
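
For readers following the thread: a minimal sketch of the load order described above, assuming the Tauri v1 `PathResolver` API (`app_config_dir` plus `resolve_resource`). The `load_settings` helper name is illustrative and does not come from the PR:

```rust
// Illustrative helper mirroring the described load order (not code from this PR).
fn load_settings(handle: &tauri::AppHandle) -> Result<Settings, String> {
    let resolver = handle.path_resolver();

    // 1. Prefer the user's config directory, e.g. ~/.config/<app>/settings.json.
    let user_path = resolver
        .app_config_dir()
        .map(|dir| dir.join("settings.json"))
        .filter(|p| p.exists());

    // 2. Otherwise fall back to the settings.json bundled as a resource.
    let path = match user_path {
        Some(p) => p,
        None => resolver
            .resolve_resource("resources/settings.json")
            .ok_or("Failed to resolve bundled settings.json")?,
    };

    let raw = std::fs::read_to_string(&path).map_err(|e| e.to_string())?;
    serde_json::from_str(&raw).map_err(|e| e.to_string())
}
```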

.expect("Failed to read settings.json");
let settings: Settings = serde_json::from_str(&settings_str)
.expect("Failed to parse settings.json");

if settings.text_generation_webui_path.is_empty() {
// In a real app, you'd want to show a dialog to the user
panic!("text_generation_webui_path is not set in settings.json");
}

// Launch the external process
tauri::async_runtime::spawn(async move {
let (mut rx, child) = Command::new(settings.text_generation_webui_path)
.spawn()
.expect("Failed to spawn external process");

*app_state.child_process.lock().unwrap() = Some(child);

while let Some(event) = rx.recv().await {
if let CommandEvent::Stdout(line) = event {
// Here you can log the output from the sidecar
// Or send it to the frontend
handle
.emit_all("sidecar-output", Payload { message: line.into() })
.unwrap();
}
}
});

Ok(())
})
        .system_tray(
            SystemTray::new().with_menu(
                SystemTrayMenu::new()
                    .add_item(CustomMenuItem::new("checkforupdates".to_string(), "Check for updates"))
                    .add_native_item(SystemTrayMenuItem::Separator)
                    .add_item(CustomMenuItem::new("help".to_string(), "Help"))
                    .add_native_item(SystemTrayMenuItem::Separator)
                    .add_item(CustomMenuItem::new("quit".to_string(), "Quit")),
            ),
        )
        .on_system_tray_event(|app, event| match event {
            SystemTrayEvent::MenuItemClick { id, .. } => match id.as_str() {
                "quit" => {
                    let app_handle = app.app_handle();
                    let app_state = app_handle.state::<AppState>();
                    if let Some(child) = app_state.child_process.lock().unwrap().take() {
                        child.kill().expect("Failed to kill sidecar");
                    }
                    app_handle.exit(0);
                }
                "checkforupdates" => {
                    tauri::api::shell::open(
                        &app.shell_scope(),
                        "https://github.com/semperai/amica/releases/latest",
                        None,
                    )
                    .expect("failed to open url");
                }
                "help" => {
                    tauri::api::shell::open(
                        &app.shell_scope(),
                        "https://docs.heyamica.com",
                        None,
                    )
                    .expect("failed to open url");
                }
                _ => {}
            },
            _ => {}
        })
        .on_window_event(|event| {
            if let tauri::WindowEvent::CloseRequested { api, .. } = event.event() {
                let app_handle = event.window().app_handle();
                let app_state = app_handle.state::<AppState>();
                if let Some(child) = app_state.child_process.lock().unwrap().take() {
                    child.kill().expect("Failed to kill sidecar");
                }
                app_handle.exit(0);
            }
        })
        .invoke_handler(tauri::generate_handler![
            close_splashscreen,
            proxy_request
        ])
        .build(tauri::generate_context!())
        .expect("error while building tauri application")
        .run(|app_handle, event| {
            if let tauri::RunEvent::ExitRequested { .. } = event {
                let app_state = app_handle.state::<AppState>();
                if let Some(child) = app_state.child_process.lock().unwrap().take() {
                    child.kill().expect("Failed to kill sidecar");
                }
            }
        });
}
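
One readability note on the shutdown paths above: the take-and-kill sequence appears three times (tray quit, window close, exit request). A small helper could consolidate it. This is a sketch against the same Tauri v1 APIs; the `kill_sidecar` name is chosen for illustration:

```rust
use tauri::Manager;

// Illustrative consolidation of the repeated shutdown logic (not code from this PR).
fn kill_sidecar(app_handle: &tauri::AppHandle) {
    let app_state = app_handle.state::<AppState>();
    if let Some(child) = app_state.child_process.lock().unwrap().take() {
        // Log instead of panicking so shutdown can still proceed on failure.
        if let Err(e) = child.kill() {
            eprintln!("Failed to kill sidecar: {}", e);
        }
    }
}
```

Each of the three handlers would then reduce to `kill_sidecar(&app_handle);` followed by whatever exit call it already makes.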
11 changes: 11 additions & 0 deletions src-tauri/tauri.conf.json
@@ -14,7 +14,18 @@
"allowlist": {
"all": false,
"shell": {
"all": false,
"open": true
},
"http": {
"all": false,
"request": true,
"scope": [
"http://127.0.0.1:*/*",
"https://127.0.0.1:*/*",
"http://localhost:*/*",
"https://localhost:*/*"
]
}
},
"bundle": {
Expand Down
23 changes: 11 additions & 12 deletions src/features/chat/koboldAiChat.ts
@@ -1,6 +1,7 @@
import { Message } from "./messages";
import { buildPrompt } from "@/utils/buildPrompt";
import { config } from '@/utils/config';
import { invoke } from "@tauri-apps/api/tauri";

export async function getKoboldAiChatResponseStream(messages: Message[]) {
  if (config("koboldai_use_extra") === 'true') {
@@ -79,23 +80,21 @@ async function getExtra(messages: Message[]) {

// koboldai / no stream support
async function getNormal(messages: Message[]) {
  const prompt = buildPrompt(messages);
  const stop_sequence: string[] = [`${config("name")}:`, ...`${config("koboldai_stop_sequence")}`.split("||")];

  const body = {
    prompt,
    stop_sequence,
  };

  const json: any = await invoke("proxy_request", {
    payload: {
      path: "api/v1/generate",
      body: body,
    },
  });

  if (json.results.length === 0) {
    throw new Error(`KoboldAi result length 0`);
  }