A script that automatically installs everything required to run selected AI apps on the AMD Radeon 7900XTX. It should also work on 7900XT cards. For other cards, change the HSA_OVERRIDE_GFX_VERSION and GFX values at the beginning of the script (not tested).
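For other GPUs, the overrides at the top of the script might look roughly like this. This is a sketch, not the script's exact contents: the variable names come from the description above, the values shown are the standard ROCm targets for RDNA3 (7900XTX/XT) and RDNA2 cards.

```shell
# Sketch of the GPU target overrides at the top of the script.
# HSA_OVERRIDE_GFX_VERSION makes ROCm treat the card as a supported target;
# GFX selects the gfx architecture used for builds.
export HSA_OVERRIDE_GFX_VERSION=11.0.0   # RDNA3: 7900XTX / 7900XT
export GFX=gfx1100

# For an RDNA2 card (e.g. RX 6000 series) the equivalents would be:
# export HSA_OVERRIDE_GFX_VERSION=10.3.0
# export GFX=gfx1030
```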
Note
Debian 13.1 with GNOME and Bash is recommended. Version 8.x has not been tested on older systems.
On other distros, most of the Python-based apps should work, but ROCm will have to be installed manually.
Important
All apps and models are tested on a card with 24GB VRAM.
Some apps or models may not work on cards with less VRAM.
| Name | Info |
|---|---|
| CPU | AMD Ryzen 9950X3D |
| GPU | AMD Radeon 7900XTX |
| RAM | 64GB DDR5 6600MHz |
| Motherboard | ASRock B650E PG Riptide WiFi (BIOS 3.30) |
| OS | Debian 13.1 |
| Kernel | 6.12.43+deb13-amd64 |
| ROCm | 6.4.3 |
| Name | Links | Additional information |
|---|---|---|
| KoboldCPP | https://github.com/YellowRoseCx/koboldcpp-rocm | Supports GGML and GGUF models. |
| Text generation web UI | https://github.com/oobabooga/text-generation-webui https://github.com/ROCm/bitsandbytes.git https://github.com/turboderp/exllamav2 | 1. Supports ExLlamaV2 and Transformers using ROCm, and llama.cpp using Vulkan. 2. If you are using Transformers, the sdpa option is recommended instead of flash_attention_2. |
| SillyTavern | https://github.com/SillyTavern/SillyTavern | |
| llama.cpp | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder. 2. In the run.sh file, adjust the GPU offload layers and context size values to match your model. |
| Ollama | https://github.com/ollama/ollama | You can use standard Ollama commands in the terminal or run a GGUF model. 1. Put model.gguf into the Ollama folder. 2. In the run.sh file, adjust the GPU offload layers and context size values to match your model. 3. In the run.sh file, customize the model parameters. |
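As a sketch of step 3 for Ollama, a local GGUF model can be registered with a Modelfile. The `FROM` and `PARAMETER` directives are standard Ollama Modelfile syntax; the model name, file name, and parameter values here are placeholders to adjust per model.

```shell
# Sketch: registering a local GGUF with Ollama (values are placeholders).
cat > Modelfile <<'EOF'
FROM ./model.gguf
PARAMETER num_ctx 8192
PARAMETER num_gpu 99
EOF

# Create the model from the Modelfile, then run it.
ollama create mymodel -f Modelfile
ollama run mymodel
```

For llama.cpp, the same two knobs map to the `-ngl` (GPU offload layers) and `-c` (context size) flags of its server/CLI binaries.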
| Name | Link | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui | Install and run WhisperSpeech web UI first. |
| Name | Links | Additional information |
|---|---|---|
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | Workflows templates are in the workflows folder. |
| Artist | https://github.com/songrise/Artist/ | |
| Cinemo | https://huggingface.co/spaces/maxin-cn/Cinemo https://github.com/maxin-cn/Cinemo | |
| Ovis-U1-3B | https://huggingface.co/spaces/AIDC-AI/Ovis-U1-3B https://github.com/AIDC-AI/Ovis-U1 | |
Important
For GGUF Flux and Flux-based models:
1. Accept the conditions to access the files and content on the Hugging Face website:
https://huggingface.co/black-forest-labs/FLUX.1-schnell
2. A Hugging Face token is required during installation.
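The installer asks for the token itself, but you can also authenticate beforehand. As a sketch: `HF_TOKEN` is the standard environment variable read by the `huggingface_hub` library, and `huggingface-cli login` is its official CLI; the token value shown is a placeholder.

```shell
# Sketch: authenticating with Hugging Face before installation.
# The token value is a placeholder -- create one at huggingface.co/settings/tokens.
export HF_TOKEN=hf_your_token_here

# Alternatively, log in interactively with the official CLI:
# huggingface-cli login
```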
| Name | Links | Additional information |
|---|---|---|
| ACE-Step | https://github.com/ace-step/ACE-Step | |
| YuE-UI | https://github.com/joeljuvel/YuE-UI https://huggingface.co/m-a-p/xcodec_mini_infer https://huggingface.co/Doctor-Shotgun/YuE-s1-7B-anneal-en-cot-exl2 https://huggingface.co/Doctor-Shotgun/YuE-s2-1B-general-exl2 | Interface uses PyTorch 2.6.0. YuE-s1-7B-anneal-en-cot-exl2 quant: 4.25bpw-h6. YuE-s2-1B-general-exl2 quant: 8.0bpw-h8. |
| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui https://github.com/collabora/WhisperSpeech | |
| F5-TTS | https://github.com/SWivid/F5-TTS | Remember to select voice. |
| Matcha-TTS | https://github.com/shivammehta25/Matcha-TTS | |
| Dia | https://github.com/nari-labs/dia https://github.com/tralamazza/dia/tree/optional-rocm-cuda | The script uses the optional-rocm-cuda fork by tralamazza. |
| IMS-Toucan | https://github.com/DigitalPhonetics/IMS-Toucan.git | Interface uses PyTorch 2.4.0. |
| Chatterbox Multilingual | https://github.com/resemble-ai/chatterbox | Only Polish and English have been tested. May not read non-English characters. Polish is fixed: resemble-ai/chatterbox#256. For other languages, you will need to add the changes manually in the multilingual_app.py file. For better results in Polish, use lowercase letters for the entire text. |
| KaniTTS | https://github.com/nineninesix-ai/kani-tts | |
| Name | Links | Additional information |
|---|---|---|
| TripoSG | https://github.com/VAST-AI-Research/TripoSG | Added a custom simple UI. Uses a modified version of PyTorch Cluster for ROCm: https://github.com/Mateusz-Dera/pytorch_cluster_rocm. The preview sometimes has problems, but the model should still be available for download. |
| PartCrafter | https://github.com/wgsxm/PartCrafter | Added a custom simple UI. Uses a modified version of PyTorch Cluster for ROCm: https://github.com/Mateusz-Dera/pytorch_cluster_rocm. |
| Name | Links | Additional information |
|---|---|---|
| Fastfetch | https://github.com/fastfetch-cli/fastfetch | Custom Fastfetch configuration with GPU memory info. Also supports NVIDIA graphics cards (nvidia-smi required). If you want your own logo, place an asci.txt file in the ~/.config/fastfetch directory. |
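Setting up the custom logo described above might look like this. The asci.txt file name and the ~/.config/fastfetch directory come from the description; the source logo file name is a placeholder.

```shell
# Sketch: installing a custom ASCII logo for the bundled Fastfetch config.
mkdir -p ~/.config/fastfetch
cp my-logo.txt ~/.config/fastfetch/asci.txt   # my-logo.txt is a placeholder
```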
Note
First startup after installation of the selected app may take longer.
Important
If an app does not download any default models, download your own.
Caution
If you update, back up your settings and models. Reinstallation deletes the previous directories.
1. If you installed uv by any means other than pipx, uninstall uv first.
2. Clone the repository:
git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
3. Run the installer:
bash ./install.sh
4. Select the installation path.
5. Select ROCm installation if you are upgrading or running the script for the first time.
6. If you are installing the script for the first time, restart system after this step.
7. Install selected app.
8. Go to the installation path with the selected app and run:
./run.sh