
Commit 923ebbb

feat(qwen-tts): add Qwen-tts backend (#8163)
* feat(qwen-tts): add Qwen-tts backend
* Update intel deps
* Drop flash-attn for cuda13

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
1 parent ea51567 commit 923ebbb

38 files changed: +996 -84 lines

.github/workflows/backend.yml

Lines changed: 91 additions & 0 deletions
@@ -105,6 +105,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2404'
+        - build-type: 'cublas'
+          cuda-major-version: "12"
+          cuda-minor-version: "9"
+          platforms: 'linux/amd64'
+          tag-latest: 'auto'
+          tag-suffix: '-gpu-nvidia-cuda-12-qwen-tts'
+          runs-on: 'ubuntu-latest'
+          base-image: "ubuntu:24.04"
+          skip-drivers: 'false'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2404'
         - build-type: 'cublas'
           cuda-major-version: "12"
           cuda-minor-version: "9"
@@ -353,6 +366,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2404'
+        - build-type: 'cublas'
+          cuda-major-version: "13"
+          cuda-minor-version: "0"
+          platforms: 'linux/amd64'
+          tag-latest: 'auto'
+          tag-suffix: '-gpu-nvidia-cuda-13-qwen-tts'
+          runs-on: 'ubuntu-latest'
+          base-image: "ubuntu:24.04"
+          skip-drivers: 'false'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2404'
         - build-type: 'cublas'
           cuda-major-version: "13"
           cuda-minor-version: "0"
@@ -431,6 +457,19 @@ jobs:
           backend: "vibevoice"
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
+        - build-type: 'l4t'
+          cuda-major-version: "13"
+          cuda-minor-version: "0"
+          platforms: 'linux/arm64'
+          tag-latest: 'auto'
+          tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen-tts'
+          runs-on: 'ubuntu-24.04-arm'
+          base-image: "ubuntu:24.04"
+          skip-drivers: 'false'
+          ubuntu-version: '2404'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
         - build-type: 'l4t'
           cuda-major-version: "13"
           cuda-minor-version: "0"
@@ -680,6 +719,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2404'
+        - build-type: 'hipblas'
+          cuda-major-version: ""
+          cuda-minor-version: ""
+          platforms: 'linux/amd64'
+          tag-latest: 'auto'
+          tag-suffix: '-gpu-rocm-hipblas-qwen-tts'
+          runs-on: 'arc-runner-set'
+          base-image: "rocm/dev-ubuntu-24.04:6.4.4"
+          skip-drivers: 'false'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2404'
         - build-type: 'hipblas'
           cuda-major-version: ""
           cuda-minor-version: ""
@@ -824,6 +876,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2204'
+        - build-type: 'l4t'
+          cuda-major-version: "12"
+          cuda-minor-version: "0"
+          platforms: 'linux/arm64'
+          tag-latest: 'auto'
+          tag-suffix: '-nvidia-l4t-qwen-tts'
+          runs-on: 'ubuntu-24.04-arm'
+          base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
+          skip-drivers: 'true'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2204'
         - build-type: 'l4t'
           cuda-major-version: "12"
           cuda-minor-version: "0"
@@ -890,6 +955,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2404'
+        - build-type: 'intel'
+          cuda-major-version: ""
+          cuda-minor-version: ""
+          platforms: 'linux/amd64'
+          tag-latest: 'auto'
+          tag-suffix: '-gpu-intel-qwen-tts'
+          runs-on: 'arc-runner-set'
+          base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
+          skip-drivers: 'false'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2404'
         - build-type: 'intel'
           cuda-major-version: ""
           cuda-minor-version: ""
@@ -1343,6 +1421,19 @@ jobs:
           dockerfile: "./backend/Dockerfile.python"
           context: "./"
           ubuntu-version: '2404'
+        - build-type: ''
+          cuda-major-version: ""
+          cuda-minor-version: ""
+          platforms: 'linux/amd64,linux/arm64'
+          tag-latest: 'auto'
+          tag-suffix: '-cpu-qwen-tts'
+          runs-on: 'ubuntu-latest'
+          base-image: "ubuntu:24.04"
+          skip-drivers: 'false'
+          backend: "qwen-tts"
+          dockerfile: "./backend/Dockerfile.python"
+          context: "./"
+          ubuntu-version: '2404'
         - build-type: ''
           cuda-major-version: ""
           cuda-minor-version: ""

.github/workflows/test-extra.yml

Lines changed: 20 additions & 1 deletion
@@ -284,4 +284,23 @@ jobs:
     - name: Test pocket-tts
       run: |
         make --jobs=5 --output-sync=target -C backend/python/pocket-tts
-        make --jobs=5 --output-sync=target -C backend/python/pocket-tts test
+        make --jobs=5 --output-sync=target -C backend/python/pocket-tts test
+  tests-qwen-tts:
+    runs-on: ubuntu-latest
+    steps:
+    - name: Clone
+      uses: actions/checkout@v6
+      with:
+        submodules: true
+    - name: Dependencies
+      run: |
+        sudo apt-get update
+        sudo apt-get install build-essential ffmpeg
+        sudo apt-get install -y ca-certificates cmake curl patch python3-pip
+        # Install UV
+        curl -LsSf https://astral.sh/uv/install.sh | sh
+        pip install --user --no-cache-dir grpcio-tools==1.64.1
+    - name: Test qwen-tts
+      run: |
+        make --jobs=5 --output-sync=target -C backend/python/qwen-tts
+        make --jobs=5 --output-sync=target -C backend/python/qwen-tts test

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ENV DEBIAN_FRONTEND=noninteractive
 RUN apt-get update && \
     apt-get install -y --no-install-recommends \
     ca-certificates curl wget espeak-ng libgomp1 \
-    ffmpeg libopenblas0 libopenblas-dev && \
+    ffmpeg libopenblas0 libopenblas-dev sox && \
     apt-get clean && \
     rm -rf /var/lib/apt/lists/*

Makefile

Lines changed: 6 additions & 2 deletions
@@ -1,5 +1,5 @@
 # Disable parallel execution for backend builds
-.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/moonshine backends/pocket-tts
+.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/moonshine backends/pocket-tts backends/qwen-tts
 
 GOCMD=go
 GOTEST=$(GOCMD) test
@@ -317,6 +317,7 @@ prepare-test-extra: protogen-python
 	$(MAKE) -C backend/python/vibevoice
 	$(MAKE) -C backend/python/moonshine
 	$(MAKE) -C backend/python/pocket-tts
+	$(MAKE) -C backend/python/qwen-tts
 
 test-extra: prepare-test-extra
 	$(MAKE) -C backend/python/transformers test
@@ -326,6 +327,7 @@ test-extra: prepare-test-extra
 	$(MAKE) -C backend/python/vibevoice test
 	$(MAKE) -C backend/python/moonshine test
 	$(MAKE) -C backend/python/pocket-tts test
+	$(MAKE) -C backend/python/qwen-tts test
 
 DOCKER_IMAGE?=local-ai
 DOCKER_AIO_IMAGE?=local-ai-aio
@@ -459,6 +461,7 @@ BACKEND_CHATTERBOX = chatterbox|python|.|false|true
 BACKEND_VIBEVOICE = vibevoice|python|.|--progress=plain|true
 BACKEND_MOONSHINE = moonshine|python|.|false|true
 BACKEND_POCKET_TTS = pocket-tts|python|.|false|true
+BACKEND_QWEN_TTS = qwen-tts|python|.|false|true
 
 # Helper function to build docker image for a backend
 # Usage: $(call docker-build-backend,BACKEND_NAME,DOCKERFILE_TYPE,BUILD_CONTEXT,PROGRESS_FLAG,NEEDS_BACKEND_ARG)
@@ -505,12 +508,13 @@ $(eval $(call generate-docker-build-target,$(BACKEND_CHATTERBOX)))
 $(eval $(call generate-docker-build-target,$(BACKEND_VIBEVOICE)))
 $(eval $(call generate-docker-build-target,$(BACKEND_MOONSHINE)))
 $(eval $(call generate-docker-build-target,$(BACKEND_POCKET_TTS)))
+$(eval $(call generate-docker-build-target,$(BACKEND_QWEN_TTS)))
 
 # Pattern rule for docker-save targets
 docker-save-%: backend-images
 	docker save local-ai-backend:$* -o backend-images/$*.tar
 
-docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2 docker-build-moonshine docker-build-pocket-tts
+docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2 docker-build-moonshine docker-build-pocket-tts docker-build-qwen-tts
 
 ########################################################
 ### END Backends
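Each BACKEND_* variable above is a pipe-separated spec that the docker-build-backend helper unpacks. As a hypothetical Python illustration (not repository code; the field names come from the usage comment above the helper):

```python
# Field names per the docker-build-backend usage comment:
# BACKEND_NAME,DOCKERFILE_TYPE,BUILD_CONTEXT,PROGRESS_FLAG,NEEDS_BACKEND_ARG
FIELDS = ["backend_name", "dockerfile_type", "build_context",
          "progress_flag", "needs_backend_arg"]

def parse_backend_spec(spec: str) -> dict:
    """Split a Makefile backend spec like 'qwen-tts|python|.|false|true'."""
    return dict(zip(FIELDS, spec.split("|")))

print(parse_backend_spec("qwen-tts|python|.|false|true"))
```

The new BACKEND_QWEN_TTS entry reuses the common python/Dockerfile.python shape; only vibevoice differs, carrying a `--progress=plain` flag in the fourth field.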

README.md

Lines changed: 3 additions & 2 deletions
@@ -298,6 +298,7 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
 | **neutts** | Text-to-speech with voice cloning | CUDA 12/13, ROCm, CPU |
 | **vibevoice** | Real-time TTS with voice cloning | CUDA 12/13, ROCm, Intel, CPU |
 | **pocket-tts** | Lightweight CPU-based TTS | CUDA 12/13, ROCm, Intel, CPU |
+| **qwen-tts** | High-quality TTS with custom voice, voice design, and voice cloning | CUDA 12/13, ROCm, Intel, CPU |
 
 ### Image & Video Generation
 | Backend | Description | Acceleration Support |
@@ -319,8 +320,8 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
 |-------------------|-------------------|------------------|
 | **NVIDIA CUDA 12** | All CUDA-compatible backends | Nvidia hardware |
 | **NVIDIA CUDA 13** | All CUDA-compatible backends | Nvidia hardware |
-| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice, pocket-tts | AMD Graphics |
-| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice, pocket-tts | Intel Arc, Intel iGPUs |
+| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice, pocket-tts, qwen-tts | AMD Graphics |
+| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice, pocket-tts, qwen-tts | Intel Arc, Intel iGPUs |
 | **Apple Metal** | llama.cpp, whisper, diffusers, MLX, MLX-VLM, bark-cpp | Apple M1/M2/M3+ |
 | **Vulkan** | llama.cpp, whisper, stablediffusion | Cross-platform GPUs |
 | **NVIDIA Jetson (CUDA 12)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (AGX Orin, etc.) |

backend/index.yaml

Lines changed: 105 additions & 0 deletions
@@ -428,6 +428,28 @@
     nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
     nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
   icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
+- &qwen-tts
+  urls:
+  - https://github.com/QwenLM/Qwen3-TTS
+  description: |
+    Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.
+  tags:
+  - text-to-speech
+  - TTS
+  license: apache-2.0
+  name: "qwen-tts"
+  alias: "qwen-tts"
+  capabilities:
+    nvidia: "cuda12-qwen-tts"
+    intel: "intel-qwen-tts"
+    amd: "rocm-qwen-tts"
+    nvidia-l4t: "nvidia-l4t-qwen-tts"
+    default: "cpu-qwen-tts"
+    nvidia-cuda-13: "cuda13-qwen-tts"
+    nvidia-cuda-12: "cuda12-qwen-tts"
+    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts"
+    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts"
+  icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
 - &pocket-tts
   urls:
   - https://github.com/kyutai-labs/pocket-tts
@@ -1613,6 +1635,89 @@
     uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
     mirrors:
     - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice
+## qwen-tts
+- !!merge <<: *qwen-tts
+  name: "qwen-tts-development"
+  capabilities:
+    nvidia: "cuda12-qwen-tts-development"
+    intel: "intel-qwen-tts-development"
+    amd: "rocm-qwen-tts-development"
+    nvidia-l4t: "nvidia-l4t-qwen-tts-development"
+    default: "cpu-qwen-tts-development"
+    nvidia-cuda-13: "cuda13-qwen-tts-development"
+    nvidia-cuda-12: "cuda12-qwen-tts-development"
+    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts-development"
+    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
+- !!merge <<: *qwen-tts
+  name: "cpu-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-cpu-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cpu-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-cpu-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda12-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda12-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda13-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda13-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "intel-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-gpu-intel-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "intel-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-gpu-intel-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "rocm-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-gpu-rocm-hipblas-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "rocm-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-gpu-rocm-hipblas-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "nvidia-l4t-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-nvidia-l4t-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "nvidia-l4t-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-nvidia-l4t-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda13-nvidia-l4t-arm64-qwen-tts"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts"
+  mirrors:
+  - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts
+- !!merge <<: *qwen-tts
+  name: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts"
+  mirrors:
+  - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts
 ## pocket-tts
 - !!merge <<: *pocket-tts
   name: "pocket-tts-development"
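The index relies on YAML anchors (`&qwen-tts`) and merge keys (`!!merge <<: *qwen-tts`), so each per-image entry inherits the base metadata and overrides only name, uri, and mirrors. A minimal sketch of how a loader resolves this (assumes PyYAML is installed; illustrative, not repository code):

```python
import yaml  # PyYAML resolves anchors and merge keys during load

doc = """
- &qwen-tts
  name: "qwen-tts"
  license: apache-2.0
  tags: [text-to-speech, TTS]
- !!merge <<: *qwen-tts
  name: "cpu-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-tts"
"""

entries = yaml.safe_load(doc)
# The merged entry keeps the inherited license and tags, overrides name,
# and adds its own uri.
print(entries[1]["license"])  # apache-2.0 (inherited from the anchor)
print(entries[1]["name"])     # cpu-qwen-tts (local override)
```

Keys set locally always win over keys pulled in through `<<`, which is why every variant entry only needs to restate what differs from the anchor.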

backend/python/bark/requirements-intel.txt

Lines changed: 3 additions & 5 deletions
@@ -1,8 +1,6 @@
---extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-intel-extension-for-pytorch==2.8.10+xpu
-torch==2.3.1+cxx11.abi
-torchaudio==2.3.1+cxx11.abi
-oneccl_bind_pt==2.3.100+xpu
+--extra-index-url https://download.pytorch.org/whl/xpu
+torch
+torchaudio
 optimum[openvino]
 setuptools
 transformers

backend/python/chatterbox/requirements-intel.txt

Lines changed: 3 additions & 4 deletions
@@ -1,7 +1,6 @@
---extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-intel-extension-for-pytorch==2.3.110+xpu
-torch==2.3.1+cxx11.abi
-torchaudio==2.3.1+cxx11.abi
+--extra-index-url https://download.pytorch.org/whl/xpu
+torch
+torchaudio
 transformers
 numpy>=1.24.0,<1.26.0
 # https://github.com/mudler/LocalAI/pull/6240#issuecomment-3329518289

backend/python/common/libbackend.sh

Lines changed: 1 addition & 1 deletion
@@ -398,7 +398,7 @@ function runProtogen() {
 # NOTE: for BUILD_PROFILE==intel, this function does NOT automatically use the Intel python package index.
 # you may want to add the following line to a requirements-intel.txt if you use one:
 #
-# --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+# --index-url https://download.pytorch.org/whl/xpu
 #
 # If you need to add extra flags into the pip install command you can do so by setting the variable EXTRA_PIP_INSTALL_FLAGS
 # before calling installRequirements. For example:
Lines changed: 1 addition & 2 deletions
@@ -1,5 +1,4 @@
---extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-intel-extension-for-pytorch==2.8.10+xpu
+--extra-index-url https://download.pytorch.org/whl/xpu
 torch==2.8.0
 oneccl_bind_pt==2.8.0+xpu
 optimum[openvino]
