Merge branch 'xorbitsai:main' into main

liuzhenghua authored Nov 22, 2024
2 parents 9a58215 + c456ef9 commit e2a33f5
Showing 696 changed files with 172,147 additions and 25,758 deletions.
26 changes: 0 additions & 26 deletions .github/ISSUE_TEMPLATE/bug_report.md

This file was deleted.

77 changes: 77 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.yaml
@@ -0,0 +1,77 @@
name: "Bug Report"
description: Submit a bug report to help us improve Xinference. Please provide as much useful information as possible rather than simply describing what happened; otherwise the report may not be processed. / 提交一个问题报告来帮助我们改进 Xinference。你必须提供有用的信息而不只是描述发生的现象,否则将不予处理。
body:
- type: textarea
id: system-info
attributes:
label: System Info / 系統信息
description: Your operating environment / 您的运行环境信息
placeholder: Include the CUDA version, transformers / llama-cpp-python / vllm versions, Python version, operating system, etc. / 包括Cuda版本,transformers / llama-cpp-python / vllm版本,Python版本,操作系统等。
validations:
required: true

- type: checkboxes
id: information-scripts-examples
attributes:
label: Running Xinference with Docker? / 是否使用 Docker 运行 Xinference?
description: 'How are you using Xinference? / 以何种方式使用 Xinference?'
options:
- label: docker / docker
- label: pip install / 通过 pip install 安装
- label: installation from source / 从源码安装

- type: textarea
id: start-way
attributes:
label: Version info / 版本信息
description: The version of Xinference you are running / Xinference 版本
validations:
required: true

- type: textarea
id: commandline
attributes:
label: The command used to start Xinference / 用以启动 xinference 的命令
description: |
Please provide the command used to start Xinference.
If it is a distributed scenario, the commands for starting the supervisor and worker need to be listed separately.
If it is a Docker scenario, please provide the complete command for starting Xinference through Docker.
If it is another method, please describe it specifically.
请提供启动 xinference 的命令。
如果是分布式场景,启动 supervisor 和 worker 的命令需要分别列出。
如果是docker场景,请提供通过 docker 启动 xinference 的完整命令。
如果是其他方式,请具体描述。
validations:
required: true

- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction / 复现过程
description: |
Please provide a code example that reproduces the problem you encountered, preferably with a minimal reproduction unit.
If you have code snippets, error messages, stack traces, or relevant command-line operations, please provide them here as well.
Please format your code correctly using code tags. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are difficult to read and (more importantly) do not allow others to copy and paste your code.
请提供能重现您遇到的问题的代码示例,最好是最小复现单元。
如果您有代码片段、错误信息、堆栈跟踪、涉及的命令行操作等也请在此提供。
请使用代码标签正确格式化您的代码。请参见 https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
请勿使用截图,因为截图难以阅读,而且(更重要的是)不允许他人复制粘贴您的代码。
placeholder: |
Steps to reproduce the behavior/复现Bug的步骤:
1.
2.
3.
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: Expected behavior / 期待表现
description: "A clear and concise description of what you would expect to happen. / 简单描述您期望发生的事情。"
20 changes: 0 additions & 20 deletions .github/ISSUE_TEMPLATE/feature_request.md

This file was deleted.

34 changes: 34 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.yaml
@@ -0,0 +1,34 @@
name: "Feature request"
description: Submit a request for a new Xinference feature / 提交一个新的 Xinference 的功能建议
labels: [ "feature" ]
body:
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request / 功能建议
description: |
A brief description of the proposed feature.
对功能建议的简述。
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation / 动机
description: |
Your motivation for making the suggestion. If that motivation is related to another GitHub issue, link to it here.
您提出建议的动机。如果该动机与另一个 GitHub 问题有关,请在此处提供对应的链接。
- type: textarea
id: contribution
validations:
required: true
attributes:
label: Your contribution / 您的贡献
description: |
Your PR link, or any other link showing how you can help.
您的PR链接或者其他您能提供帮助的链接。
10 changes: 0 additions & 10 deletions .github/ISSUE_TEMPLATE/other.md

This file was deleted.

9 changes: 0 additions & 9 deletions .github/ISSUE_TEMPLATE/question.md

This file was deleted.

5 changes: 3 additions & 2 deletions .github/workflows/docker-cd.yaml
@@ -14,6 +14,7 @@ concurrency:

jobs:
build:
timeout-minutes: 120
runs-on: self-hosted
strategy:
matrix:
@@ -61,19 +62,19 @@ jobs:
docker push "$DOCKER_ORG/xinference:${IMAGE_TAG}"
docker build -t "$DOCKER_ORG/xinference:${IMAGE_TAG}-cpu" --progress=plain -f xinference/deploy/docker/cpu.Dockerfile .
docker push "$DOCKER_ORG/xinference:${IMAGE_TAG}-cpu"
echo "XINFERENCE_IMAGE_TAG=${IMAGE_TAG}" >> $GITHUB_ENV
done
if [[ -n "$GIT_TAG" ]]; then
docker tag "$DOCKER_ORG/xinference:${GIT_TAG}" "$DOCKER_ORG/xinference:latest"
docker push "$DOCKER_ORG/xinference:latest"
docker tag "$DOCKER_ORG/xinference:${GIT_TAG}-cpu" "$DOCKER_ORG/xinference:latest-cpu"
docker push "$DOCKER_ORG/xinference:latest-cpu"
echo "XINFERENCE_GIT_TAG=${GIT_TAG}" >> $GITHUB_ENV
fi
- name: Clean docker image cache
shell: bash
if: ${{ github.repository == 'xorbitsai/inference' }}
env:
DOCKER_ORG: ${{ secrets.DOCKERHUB_USERNAME }}
run: |
docker system prune -f -a
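The release logic in the hunk above follows a common tag-and-push pattern: build and push one image per tag, then re-tag the git-tagged image as latest without rebuilding. A standalone sketch of that pattern, assuming DOCKER_ORG and GIT_TAG are set in the environment as in the workflow (the GPU Dockerfile path is assumed; only the CPU build line is visible in this hunk):

IMAGE_TAG="v0.16.0"   # hypothetical version tag, for illustration only
docker build -t "$DOCKER_ORG/xinference:${IMAGE_TAG}" --progress=plain -f xinference/deploy/docker/Dockerfile .
docker push "$DOCKER_ORG/xinference:${IMAGE_TAG}"
if [[ -n "$GIT_TAG" ]]; then
  # promote the tagged build to latest without rebuilding
  docker tag "$DOCKER_ORG/xinference:${GIT_TAG}" "$DOCKER_ORG/xinference:latest"
  docker push "$DOCKER_ORG/xinference:latest"
fi

The echo "...=${IMAGE_TAG}" >> $GITHUB_ENV lines use GitHub Actions' standard mechanism for exporting a variable so that subsequent steps in the same job can read it.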
24 changes: 24 additions & 0 deletions .github/workflows/issue.yaml
@@ -0,0 +1,24 @@
name: Close inactive issues
on:
schedule:
- cron: "0 19 * * *"
workflow_dispatch:

jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:
days-before-issue-stale: 7
days-before-issue-close: 5
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 7 days with no activity."
close-issue-message: "This issue was closed because it has been inactive for 5 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
operations-per-run: 500
repo-token: ${{ secrets.GITHUB_TOKEN }}
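With these settings, an issue with no activity is labeled stale after 7 days and closed 5 days later (roughly 12 days of total inactivity), while pull requests are exempt via the -1 values; the cron expression "0 19 * * *" runs the job daily at 19:00 UTC. To review what the job has flagged, a maintainer could list labeled issues with the GitHub CLI (an illustrative convenience, not part of the workflow):

gh issue list --repo xorbitsai/inference --label stale --state open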
82 changes: 66 additions & 16 deletions .github/workflows/python.yaml
@@ -66,22 +66,24 @@ jobs:
env:
CONDA_ENV: test
SELF_HOST_PYTHON: /root/miniconda3/envs/inference_test/bin/python
SELF_HOST_CONDA: /root/miniconda3/condabin/conda
defaults:
run:
shell: bash -l {0}
strategy:
fail-fast: false
matrix:
os: [ "ubuntu-latest", "macos-12", "windows-latest" ]
python-version: [ "3.8", "3.9", "3.10", "3.11" ]
python-version: [ "3.9", "3.10", "3.11", "3.12" ]
module: [ "xinference" ]
exclude:
- { os: macos-12, python-version: 3.9 }
- { os: macos-12, python-version: 3.10 }
- { os: windows-latest, python-version: 3.9 }
- { os: macos-12, python-version: 3.11 }
- { os: windows-latest, python-version: 3.10 }
- { os: windows-latest, python-version: 3.11 }
include:
- { os: self-hosted, module: gpu, python-version: 3.9}
- { os: macos-latest, module: metal, python-version: "3.10" }

steps:
- name: Check out code
@@ -97,6 +99,12 @@ jobs:
python-version: ${{ matrix.python-version }}
activate-environment: ${{ env.CONDA_ENV }}

# Important for python == 3.12
- name: Update pip and setuptools
if: ${{ matrix.python-version == '3.12' }}
run: |
python -m pip install -U pip setuptools
- name: Install dependencies
env:
MODULE: ${{ matrix.module }}
Expand All @@ -109,25 +117,31 @@ jobs:
sudo rm -rf "/usr/local/share/boost"
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
fi
pip install "llama-cpp-python>=0.2.23,!=0.2.58"
if [ "$MODULE" == "metal" ]; then
pip install mlx-lm
fi
pip install "llama-cpp-python==0.2.77" --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
pip install transformers
pip install attrdict
pip install "timm>=0.9.16"
pip install torch
pip install torchvision
pip install torch torchvision
pip install accelerate
pip install sentencepiece
pip install transformers_stream_generator
pip install bitsandbytes
pip install "sentence-transformers>=2.3.1"
pip install s3fs
pip install modelscope
pip install diffusers
pip install protobuf
pip install FlagEmbedding
pip install "tenacity>=8.2.0,<8.4.0"
pip install -e ".[dev]"
pip install "jinja2==3.1.2"
pip install tensorizer
pip install jj-pytorchvideo
pip install qwen-vl-utils
pip install datamodel_code_generator
pip install jsonschema
working-directory: .

- name: Test with pytest
@@ -144,29 +158,65 @@
${{ env.SELF_HOST_PYTHON }} -m pip install -U "aioprometheus[starlette]"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "pynvml"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "transformers"
${{ env.SELF_HOST_PYTHON }} -m conda install -c conda-forge pynini=2.1.5
${{ env.SELF_HOST_PYTHON }} -m pip install -U nemo_text_processing
${{ env.SELF_HOST_CONDA }} install -c conda-forge pynini=2.1.5
${{ env.SELF_HOST_CONDA }} install -c conda-forge "ffmpeg<7"
${{ env.SELF_HOST_PYTHON }} -m pip install -U funasr
${{ env.SELF_HOST_PYTHON }} -m pip install -U "nemo_text_processing<1.1.0"
${{ env.SELF_HOST_PYTHON }} -m pip install -U omegaconf~=2.3.0
${{ env.SELF_HOST_PYTHON }} -m pip install -U vector_quantize_pytorch
${{ env.SELF_HOST_PYTHON }} -m pip install -U vocos
${{ env.SELF_HOST_PYTHON }} -m pip install -U WeTextProcessing
${{ env.SELF_HOST_PYTHON }} -m pip install -U "WeTextProcessing<1.0.4"
${{ env.SELF_HOST_PYTHON }} -m pip install -U librosa
${{ env.SELF_HOST_PYTHON }} -m pip install -U xxhash
${{ env.SELF_HOST_PYTHON }} -m pip install -U "ChatTTS>=0.2.1"
${{ env.SELF_HOST_PYTHON }} -m pip install -U HyperPyYAML
${{ env.SELF_HOST_PYTHON }} -m pip uninstall -y matcha-tts
${{ env.SELF_HOST_PYTHON }} -m pip install -U "onnxruntime-gpu==1.16.0; sys_platform == 'linux'"
${{ env.SELF_HOST_PYTHON }} -m pip install -U openai-whisper
${{ env.SELF_HOST_PYTHON }} -m pip install -U "torch==2.3.1" "torchaudio==2.3.1"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "loguru"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "natsort"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "loralib"
${{ env.SELF_HOST_PYTHON }} -m pip install -U "ormsgpack"
${{ env.SELF_HOST_PYTHON }} -m pip uninstall -y opencc
${{ env.SELF_HOST_PYTHON }} -m pip uninstall -y "faster_whisper"
${{ env.SELF_HOST_PYTHON }} -m pip install -U accelerate
${{ env.SELF_HOST_PYTHON }} -m pip install -U verovio
${{ env.SELF_HOST_PYTHON }} -m pip install -U cachetools
${{ env.SELF_HOST_PYTHON }} -m pip install -U silero-vad
${{ env.SELF_HOST_PYTHON }} -m pip install -U pydantic
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
--disable-warnings \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/core/tests/test_continuous_batching.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/image/tests/test_stable_diffusion.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/image/tests/test_got_ocr2.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_whisper.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/image/tests/test_stable_diffusion.py
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_funasr.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_whisper.py
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_chattts.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_chattts.py
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_cosyvoice.py && \
${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/audio/tests/test_fish_speech.py
elif [ "$MODULE" == "metal" ]; then
pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/llm/mlx/tests/test_mlx.py
else
pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/client/tests/test_client.py
pytest --timeout=1500 \
-W ignore::PendingDeprecationWarning \
--cov-config=setup.cfg --cov-report=xml --cov=xinference --ignore xinference/client/tests/test_client.py --ignore xinference/model/image/tests/test_stable_diffusion.py --ignore xinference/model/audio/tests/test_whisper.py --ignore xinference/model/audio/tests/test_chattts.py xinference
--cov-config=setup.cfg --cov-report=xml --cov=xinference --ignore xinference/core/tests/test_continuous_batching.py --ignore xinference/client/tests/test_client.py --ignore xinference/model/image/tests/test_stable_diffusion.py --ignore xinference/model/image/tests/test_got_ocr2.py --ignore xinference/model/audio/tests xinference
fi
working-directory: .
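To approximate the default "xinference" leg of this matrix locally, the steps above condense to roughly the following; this is a sketch of the workflow's own commands, not an officially documented recipe:

python -m pip install -U pip setuptools   # the workflow does this for Python 3.12
pip install "llama-cpp-python==0.2.77" --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
pip install -e ".[dev]"
pytest --timeout=1500 \
  -W ignore::PendingDeprecationWarning \
  --cov-config=setup.cfg --cov-report=xml --cov=xinference \
  xinference/client/tests/test_client.py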