Changes from 9 commits

Commits (24)

- `b382be7` I've integrated the Rust backend for process management and the API p… (google-labs-jules[bot], Aug 13, 2025)
- `4da3fe0` docs: Clarify how settings.json is loaded (google-labs-jules[bot], Aug 13, 2025)
- `0cdd5a6` refactor: Improve config loading and error handling (google-labs-jules[bot], Aug 13, 2025)
- `78a0911` refactor: Implement streaming proxy and optimize dependencies (google-labs-jules[bot], Aug 13, 2025)
- `84ce290` Update DEPLOYMENT.md (AlphaEcho11, Aug 13, 2025)
- `d4e59b6` fix: Add SSRF protection and improve proxy logic (google-labs-jules[bot], Aug 13, 2025)
- `0ac7c01` fix: Implement robust, race-free shutdown logic (google-labs-jules[bot], Aug 13, 2025)
- `8938ea8` feat: Add confirmation on quit and robust shutdown logic (google-labs-jules[bot], Aug 13, 2025)
- `8297e10` docs: Clarify Linux dependency for Tauri v1 vs v2 (google-labs-jules[bot], Aug 13, 2025)
- `1f7cb2f` docs: Correct Windows config path in DEPLOYMENT.md (google-labs-jules[bot], Aug 13, 2025)
- `9399c2a` Update DEPLOYMENT.md (AlphaEcho11, Aug 13, 2025)
- `bf3598e` Update DEPLOYMENT.md (AlphaEcho11, Aug 13, 2025)
- `009e03e` fix(frontend): Add cancel handler to ReadableStream (google-labs-jules[bot], Aug 13, 2025)
- `8f22f3f` fix(backend): Add error handling for event emission (google-labs-jules[bot], Aug 13, 2025)
- `1d156d3` fix(backend): Add error handling for stdout forwarding (google-labs-jules[bot], Aug 13, 2025)
- `09621db` docs: Clarify streaming option in DEPLOYMENT.md (google-labs-jules[bot], Aug 13, 2025)
- `8881db0` docs: Correct build artifact paths in DEPLOYMENT.md (google-labs-jules[bot], Aug 13, 2025)
- `e9939b6` I've finished refactoring the implementation to use the OpenAI-compat… (google-labs-jules[bot], Aug 13, 2025)
- `a7afd93` refactor(frontend): Harden proxy invoke calls with types and error ha… (google-labs-jules[bot], Aug 13, 2025)
- `ea0364a` refactor(backend): Reacquire Tauri State within async task (google-labs-jules[bot], Aug 13, 2025)
- `a9f9b55` fix(backend): Use graceful exit when sidecar spawn fails (google-labs-jules[bot], Aug 13, 2025)
- `c33b59d` fix(frontend): Implement robust event listener cleanup (google-labs-jules[bot], Aug 13, 2025)
- `9c92a01` fix: Migrate project to Tauri v2 (google-labs-jules[bot], Aug 14, 2025)
- `cf98e74` I've migrated your `tauri.conf.json` file to the Tauri v2 format. Thi… (google-labs-jules[bot], Aug 14, 2025)
109 changes: 109 additions & 0 deletions DEPLOYMENT.md
@@ -0,0 +1,109 @@
# Deployment Guide: Running Amica Locally

This guide provides step-by-step instructions for setting up and running the Rust-powered version of Amica on your local machine.

## 1. Prerequisites

Before you begin, you need to have the following software installed on your system:

* **Node.js:** Amica's user interface is built with Node.js. You will need version `18.18.0` or newer. You can download it from the [official Node.js website](https://nodejs.org/).
* **Rust:** The new backend is written in Rust. The easiest way to install Rust is by using `rustup`. You can find instructions at the [official Rust website](https://www.rust-lang.org/tools/install).
* **`text-generation-webui`:** You must have a working, pre-compiled version of `text-generation-webui`. You can find releases and setup instructions on its [GitHub repository](https://github.com/oobabooga/text-generation-webui). Make sure you can run it successfully on its own before integrating it with Amica.
* **(Linux Only) Build Dependencies:** On Linux, you will need to install a few extra packages for Tauri to build correctly. You can install them with the following command:
```bash
sudo apt-get update
sudo apt-get install -y libwebkit2gtk-4.0-dev build-essential curl wget libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev
```
> **Note:** This project uses Tauri v1, which requires `libwebkit2gtk-4.0-dev`. If you are working on a project with Tauri v2 or newer, you will need to use `libwebkit2gtk-4.1-dev` instead.
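
As a quick, optional sanity check, you can confirm the prerequisites above from a terminal. The `curl` line assumes you have started `text-generation-webui` with its API enabled on its default port (5000); skip or adjust it if your setup differs:

```bash
node --version    # should report v18.18.0 or newer
rustc --version   # confirms the Rust toolchain installed via rustup
cargo --version
# Optional: with text-generation-webui running and its API enabled, this
# should return a JSON list of available models (port 5000 is an assumption):
curl -s http://127.0.0.1:5000/v1/models
```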

## 2. Installation and Configuration

Follow these steps to get the Amica project set up.

#### Step 1: Clone the Amica Repository

Open your terminal, navigate to where you want to store the project, and run the following command:

```bash
git clone https://github.com/semperai/amica
cd amica
```

**Author's review comment:** Need to look at the temp/final git location - for now, should be the current project at: https://github.com/AlphaEcho11/amica

#### Step 2: Install JavaScript Dependencies

Once you are in the `amica` directory, run this command to install all the necessary frontend packages:

```bash
npm install
```

#### Step 3: Configure the `text-generation-webui` Path

Amica needs to know where to find your `text-generation-webui` executable. This is configured in a `settings.json` file.

##### How Configuration Works

**Review comment (⚠️ Potential issue): Align subheading levels under Step 3**

Subheadings should be one level below the Step heading.

```diff
-##### How Configuration Works
+#### How Configuration Works
-##### Creating Your Custom `settings.json`
+#### Creating Your Custom `settings.json`
```

Also applies to: 53-53

🤖 Prompt for AI Agents: In DEPLOYMENT.md around lines 45 and 53, the "How Configuration Works" (and other subheadings) are using the same heading level as the "Step 3" heading; change these subheadings to be one level lower than the Step heading (e.g., if Step 3 is "### Step 3", make the subheadings "#### ...") so they are nested correctly under Step 3; update both occurrences (lines 45 and 53) to the appropriate heading level and verify TOC/rendering reflects the hierarchy.


Amica uses a default, bundled configuration file to start. To customize the settings, you must create your own `settings.json` file and place it in the correct application configuration directory for your operating system.

When Amica starts, it looks for `settings.json` in this order:
1. **Your Custom `settings.json`:** It checks for the file in your OS's standard application config directory.
2. **Default `settings.json`:** If no custom file is found, it falls back to the default settings bundled inside the application. The default has an empty path, so you **must** create a custom file.
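
The sketch below illustrates this lookup order in Rust. It is a minimal, illustrative sketch, not Amica's actual source: the `Settings` struct and `load_settings` function are assumed names, while the Tauri v1 helpers (`tauri::api::path::app_config_dir`, `resolve_resource`) are real APIs.

```rust
// Minimal sketch of the settings lookup order described above.
use std::{fs, path::PathBuf};
use tauri::Manager;

#[derive(serde::Deserialize)]
struct Settings {
    text_generation_webui_path: String,
}

fn load_settings(app: &tauri::AppHandle) -> Result<Settings, String> {
    // 1. Prefer a user-provided settings.json in the OS config directory,
    //    e.g. ~/.config/com.heyamica.dev/settings.json on Linux.
    let custom: Option<PathBuf> = tauri::api::path::app_config_dir(&app.config())
        .map(|dir| dir.join("settings.json"))
        .filter(|path| path.exists());

    // 2. Otherwise fall back to the settings.json bundled with the application.
    let path = match custom {
        Some(path) => path,
        None => app
            .path_resolver()
            .resolve_resource("resources/settings.json")
            .ok_or_else(|| "bundled settings.json not found".to_string())?,
    };

    let raw = fs::read_to_string(&path).map_err(|e| e.to_string())?;
    serde_json::from_str(&raw).map_err(|e| e.to_string())
}
```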

##### Creating Your Custom `settings.json`

1. First, you need to find your application's configuration directory. The paths are typically:
* **Windows:** `%APPDATA%\com.heyamica.dev\config` (you can paste this into the Explorer address bar)
* **macOS:** `~/Library/Application Support/com.heyamica.dev`
* **Linux:** `~/.config/com.heyamica.dev`

*(Note: The `com.heyamica.dev` directory might not exist until you run Amica at least once.)*

2. Create a new file named `settings.json` inside that directory.

3. Copy and paste the following content into your new `settings.json` file:
```json
{
"text_generation_webui_path": ""
}
```
**Review comment on lines +66 to +69 (🛠️ Refactor suggestion): Document the configurable proxy port setting**

The backend now supports a configurable proxy port, but the documentation only shows the path configuration.

````diff
     ```json
     {
-      "text_generation_webui_path": ""
+      "text_generation_webui_path": "",
+      "proxy_port": 5000
     }
     ```
````

Add a note explaining the `proxy_port` field:

```diff
 4.  Add the **full path** to your `text-generation-webui` executable inside the quotes.
+    The `proxy_port` field (default: 5000) specifies which port the text-generation-webui API server is listening on.
```

🤖 Prompt for AI Agents: In DEPLOYMENT.md around lines 66 to 69, the example JSON only shows text_generation_webui_path but the backend supports a configurable proxy_port; update the example to include "proxy_port": 5000 and add a short note below the code block explaining that proxy_port sets the HTTP port the proxy listens on (default value if any) and that it can be changed to avoid port conflicts or to match deployment requirements.


4. Add the **full path** to your `text-generation-webui` executable inside the quotes.

* **Windows Example:**
```json
{
"text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
}
```
**Review comment on lines +75 to +78 (🛠️ Refactor suggestion): Update examples to include proxy_port**

The examples should reflect the complete settings structure.

````diff
         ```json
         {
-          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat"
+          "text_generation_webui_path": "C:\\Users\\YourUser\\Desktop\\text-generation-webui\\start.bat",
+          "proxy_port": 5000
         }
         ```
         ```json
         {
-          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
+          "text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh",
+          "proxy_port": 5000
         }
         ```
````

Also applies to: 83-85

🤖 Prompt for AI Agents: In DEPLOYMENT.md around lines 75-78 (and likewise update lines 83-85), the example JSON snippets are missing the proxy_port entry and thus do not show the complete settings structure; update each example object to include a "proxy_port": 5000 property (add the trailing comma on the preceding line where needed) so the examples show both "text_generation_webui_path" and "proxy_port" with valid JSON formatting.

*(Note the double backslashes `\\`)*

* **Linux/macOS Example:**
```json
{
"text_generation_webui_path": "/home/youruser/text-generation-webui/start.sh"
}
```

If Amica ever has trouble starting, it will show a dialog box explaining the configuration error. This usually means there's a typo in your `settings.json` file or the path to the executable is incorrect.

## 3. Building the Application

Now that everything is configured, you can build the final, standalone executable.

Run the following command in your terminal. This process will compile the Rust backend and package it with the frontend into a single application. It may take several minutes.

```bash
npm run tauri build
```

Once the build is complete, you will find the final application inside the `src-tauri/target/release/` directory. It will be a `.exe` file on Windows, a `.AppImage` on Linux, or a `.app` file inside a `.dmg` on macOS.
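
If you are unsure where to look, these are the typical Tauri output locations; the exact paths can vary with Tauri version and bundler configuration, so treat this as a rough guide:

```bash
ls src-tauri/target/release/                  # raw executable (e.g. amica.exe on Windows)
ls src-tauri/target/release/bundle/appimage/  # Linux .AppImage
ls src-tauri/target/release/bundle/dmg/       # macOS .dmg containing the .app
ls src-tauri/target/release/bundle/msi/       # Windows installer, if one is generated
```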

## 4. Running Amica

You can now run this executable file directly! There is no need for any further commands.

On the first run, be sure to open the in-app settings and configure the following:
* **Chatbot Backend:** Select **KoboldAI**.
* **Streaming/Extra Option:** If you see an option for streaming, make sure it is **disabled**.

That's it! Your self-contained, Rust-powered Amica application is now ready to use.
2 changes: 2 additions & 0 deletions src-tauri/Cargo.toml
```diff
@@ -18,6 +18,8 @@ tauri-build = { version = "1.5.5", features = [] }
 serde_json = "1.0.128"
 serde = { version = "1.0.210", features = ["derive"] }
 tauri = { version = "1.8.0", features = [ "macos-private-api", "system-tray", "shell-open"] }
+reqwest = { version = "0.12.5", default-features = false, features = ["json", "rustls-tls"] }
+futures-util = "0.3.30"

 [features]
 # this feature is used for production builds or when `devPath` points to the filesystem and the built-in dev server is disabled.
```
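
The new `reqwest` dependency is what a local proxy would use for its HTTP calls, with `futures-util` available for stream handling. The following is a rough, illustrative sketch of how a Tauri v1 command could forward a request to the local `text-generation-webui` API and relay response chunks to the frontend as events; the command name (`proxy_stream`), event name (`llm-chunk`), and endpoint URL are assumptions for illustration, not Amica's actual implementation.

```rust
// Illustrative sketch only: names, URL, and event payloads are assumptions.
use tauri::Manager;

#[tauri::command]
async fn proxy_stream(
    app: tauri::AppHandle,
    body: serde_json::Value,
    port: u16,
) -> Result<(), String> {
    // Only talk to the local API server, never an arbitrary host.
    let url = format!("http://127.0.0.1:{port}/v1/chat/completions");

    let mut response = reqwest::Client::new()
        .post(url)
        .json(&body)
        .send()
        .await
        .map_err(|e| e.to_string())?;

    // Forward each chunk of the upstream response to the frontend as an event.
    while let Some(chunk) = response.chunk().await.map_err(|e| e.to_string())? {
        let text = String::from_utf8_lossy(&chunk).to_string();
        if let Err(err) = app.emit_all("llm-chunk", text) {
            eprintln!("failed to emit llm-chunk event: {err}");
            break;
        }
    }
    Ok(())
}
```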
3 changes: 3 additions & 0 deletions src-tauri/resources/settings.json
```diff
@@ -0,0 +1,3 @@
+{
+  "text_generation_webui_path": ""
+}
```