
Commit 0bc48c1

update readme

1 parent 62d0826 commit 0bc48c1

File tree: 6 files changed, +54 −19 lines changed

examples/grpo/cosyvoice2/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 FROM verlai/verl:app-verl0.4-vllm0.8.5-mcore0.12.2-te2.2
-COPY requirements-cosyvoice.txt /myworkspace/requirements.txt
+COPY requirements.txt /myworkspace/requirements.txt
 RUN pip install -r /myworkspace/requirements.txt
 RUN pip install -U nvidia-pytriton
 RUN git clone https://github.com/yuekaizhang/verl.git /myworkspace/verl -b thread && cd /myworkspace/verl && pip install --no-deps -e .

examples/grpo/cosyvoice2/README.md

Lines changed: 13 additions & 9 deletions
@@ -1,6 +1,6 @@
 # CosyVoice2 LLM Reinforcement Learning Recipe
 
-This recipe demonstrates how to fine-tune the **CosyVoice2** large language model with reinforcement learning algorithms—specifically **GRPO**—using the [veRL](https://github.com/volcengine/verl) framework. Our experiments show that applying GRPO reduces the character error rate (CER) on the CosyVoice3 `zero_shot_zh` set from 4.08 % to 3.36 %.
+This recipe demonstrates how to fine-tune the **CosyVoice2** large language model with reinforcement learning algorithms—specifically **GRPO**—using the [veRL](https://github.com/volcengine/verl) framework. Our experiments show that applying GRPO reduces the character error rate (CER) on the CosyVoice3 `zero_shot_zh` set from 4.08% to 3.36%.
 
 ## Table of Contents
 
@@ -18,6 +18,7 @@ We recommend using the pre-built Docker image below. Alternatively, you can manu
 ```bash
 docker pull soar97/verl:app-verl0.4-vllm0.8.5-mcore0.12.2-te2.2
 ```
+If Docker is not available, you can refer to `run.sh` `stage -2` to install the dependencies locally.
 
 ## Data Preparation
 
@@ -43,16 +44,16 @@ data/parquet_tiny/train.parquet
 data/parquet_tiny/test.parquet
 ```
 
-Each sample is automatically wrapped into a cosyvoice2-style prompt so that the LLM learns to output CosyVoice2 speech tokens.
+Each sample is automatically wrapped into a CosyVoice2-style prompt so that the LLM learns to output CosyVoice2 speech tokens.
 
 
 ## Reward Function & ASR Server
 
-To compute rewards we run a lightweight server that:
+To compute rewards, we run a lightweight server that:
 
 1. Converts generated speech tokens back to a 16 kHz waveform with the **CosyVoice2** pretrained U-Net model.
 2. Transcribes the waveform with **SenseVoice** ASR.
-3. Calculates the pinyin-level error rate relative to the ground-truth text and maps it to a score in the range \[0-1\].
+3. Calculates the pinyin-level error rate relative to the ground-truth text and maps it to a score between 0 and 1.
 
 Start the server (stage `1`) in a dedicated terminal or on a separate GPU:
 
@@ -61,7 +62,7 @@ bash run.sh 1 1
 # Triton server listens on ports 8000/8001/8002
 ```
 
-The custom reward implementation lives in [`reward_tts.py`](./reward_tts.py) and calls the server to obtain the reward score.
+The custom reward implementation is located in [`reward_tts.py`](./reward_tts.py) and calls the server to obtain the reward score.
 
 ## Training
 
@@ -78,10 +79,12 @@ Key CLI arguments passed to `verl.trainer.main_ppo`:
 * `custom_reward_function.path=reward_tts.py` – custom reward function described above.
 
 Adjust `CUDA_VISIBLE_DEVICES`, batch sizes, and other hyperparameters to match your hardware.
+> [!TIP]
+> Note: the lm_head bias is disabled during training to make the model compatible with vLLM and Transformers' Qwen model.
 
 ## Evaluation
 
-After training completes, collect the sharded FSDP weights and export a Hugging Face-style checkpoint (stage `3`):
+After training is complete, collect the sharded FSDP weights and export a Hugging Face-style checkpoint (stage `3`):
 
 ```bash
 bash run.sh 3 3  # merges weights into $llm_path/merged_hf_model
@@ -107,15 +110,16 @@ bash run.sh 5 5
 ```
 
 The script converts the Hugging Face checkpoint back into the format expected by the CosyVoice repository.
+> [!TIP]
+> However, we observed a slight accuracy drop when using the RL-trained model after conversion, compared with the Hugging Face format.
 
 ## Results
 
 | Model | Seed-TTS `test_zh` CER | CosyVoice3 `zero_shot_zh` CER | Comment |
 |-------|------------------------|------------------------------|---------|
-| CosyVoice2 LLM (official) | 1.45 % | 4.08 % | See the [paper](https://arxiv.org/abs/2412.10117) |
-| CosyVoice2 LLM + GRPO | 1.37 % | **3.36 %** | See the [decoding results](yuekai/official-cosyvoice-llm-grpo-aishell3) |
+| CosyVoice2 LLM (official) | 1.45% | 4.08% | See the [paper](https://arxiv.org/abs/2412.10117) |
+| CosyVoice2 LLM + GRPO | 1.37% | **3.36%** | See the [decoding results](yuekai/official-cosyvoice-llm-grpo-aishell3), Hugging Face-format model |
 
 ## Acknowledgement
 
 This work was inspired by the implementation in [ch-tts-llasa-rl-grpo](https://github.com/channel-io/ch-tts-llasa-rl-grpo).
-
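The reward described in the README hunk above (pinyin-level error rate mapped to a score between 0 and 1) can be sketched as follows. The linear `1 - error_rate` mapping and the function names here are illustrative assumptions, not the actual `reward_tts.py` implementation:

```python
def edit_distance(ref, hyp):
    # classic Levenshtein distance over token lists, single-row DP
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def reward_score(ref_pinyin, hyp_pinyin):
    # map the pinyin-level error rate to [0, 1] (assumed linear mapping)
    if not ref_pinyin:
        return 0.0
    err = edit_distance(ref_pinyin, hyp_pinyin) / len(ref_pinyin)
    return max(0.0, 1.0 - err)

print(reward_score(["ni3", "hao3"], ["ni3", "hao3"]))  # 1.0
print(reward_score(["ni3", "hao3"], ["ni3", "hao4"]))  # 0.5
```

Working at the pinyin level rather than on raw characters makes the reward robust to homophone substitutions in the ASR transcript.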

examples/grpo/cosyvoice2/pretrained_to_huggingface.py

Lines changed: 13 additions & 4 deletions
@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 
 # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 # SPDX-License-Identifier: Apache-2.0
@@ -94,7 +95,8 @@ def get_args():
 with torch.no_grad():
     # set the weight and bias of the new lm_head to 0
     new_lm_head.weight.data.zero_()
-    new_lm_head.bias.data.zero_()
+    # make bias value -inf
+    new_lm_head.bias.data.fill_(-float('inf'))
     new_lm_head.weight[original_tokenizer_vocab_size:original_tokenizer_vocab_size + cosyvoice2_token_size + 3] = llm_decoder.weight
     new_lm_head.bias[original_tokenizer_vocab_size:original_tokenizer_vocab_size + cosyvoice2_token_size + 3] = llm_decoder.bias
@@ -107,8 +109,7 @@ def get_args():
 
 eos_token_ids = [original_tokenizer_vocab_size + cosyvoice2_token_size,
                  original_tokenizer_vocab_size + cosyvoice2_token_size + 1,
-                 original_tokenizer_vocab_size + cosyvoice2_token_size + 2,
-                 original_tokenizer_vocab_size + cosyvoice2_token_size + 3]
+                 original_tokenizer_vocab_size + cosyvoice2_token_size + 2]
 llm.generation_config.eos_token_id = eos_token_ids
 llm.generation_config.temperature = 1.0
 llm.generation_config.top_p = 0.8
@@ -121,6 +122,14 @@ def get_args():
 llm.to(torch.bfloat16)
 llm.save_pretrained(args.save_path)
 
-TEMPLATE = "{%- for message in messages %}{%- if message['role'] == 'user' %}{{- '<|sos|>' + message['content'] + '<|task_id|>' }}{%- elif message['role'] == 'assistant' %}{{- message['content']}}{%- endif %}{%- endfor %}"
+TEMPLATE = (
+    "{%- for message in messages %}"
+    "{%- if message['role'] == 'user' %}"
+    "{{- '<|sos|>' + message['content'] + '<|task_id|>' }}"
+    "{%- elif message['role'] == 'assistant' %}"
+    "{{- message['content']}}"
+    "{%- endif %}"
+    "{%- endfor %}"
+)
 tokenizer.chat_template = TEMPLATE
 tokenizer.save_pretrained(args.save_path)
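The `-inf` bias in the hunk above keeps the enlarged `lm_head` from ever emitting tokens outside the copied speech-token slice: after softmax, a logit of `-inf` contributes zero probability. A minimal pure-Python sketch of that effect (hypothetical vocabulary size and slice, not the real model dimensions):

```python
import math

def softmax(logits):
    # subtract the max for stability; -inf entries contribute exp(-inf) = 0
    m = max(l for l in logits if l != -math.inf)
    exps = [math.exp(l - m) if l != -math.inf else 0.0 for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

vocab_size = 8        # hypothetical enlarged vocabulary
active = range(3, 6)  # hypothetical slice that receives llm_decoder's weights

# bias starts at -inf everywhere; only the active slice gets real values
bias = [-math.inf] * vocab_size
for i in active:
    bias[i] = 0.0

hidden_logits = [0.5] * vocab_size  # stand-in for weight @ hidden_state
logits = [h + b for h, b in zip(hidden_logits, bias)]
probs = softmax(logits)

# only the active slice carries non-zero probability
print([round(p, 3) for p in probs])
```

Zeroing the bias instead (the old behavior) would leave all positions reachable, so masked tokens could still be sampled; `-inf` removes them from the distribution entirely.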

examples/grpo/cosyvoice2/run.sh

Lines changed: 26 additions & 5 deletions
@@ -3,7 +3,7 @@
 set -eou pipefail
 
 stage=-1
-stop_stage=5
+stop_stage=4
 
 log() {
   # This function is from espnet
@@ -15,6 +15,22 @@ export PYTHONPATH=/workspace/CosyVoice
 model_scope_model_path=./CosyVoice2-0.5B
 sft_model_path=./transformers_cosyvoice2_llm
 
+if [ $stage -le -2 ] && [ $stop_stage -ge -2 ]; then
+  log "stage -2: install dependencies locally if pre-built docker image is not available"
+  conda create -n cosyvoice2 python=3.10 -y
+  conda activate cosyvoice2
+  # install verl
+  git clone https://github.com/yuekaizhang/verl.git -b thread
+  cd verl
+  USE_MEGATRON=0 bash scripts/install_vllm_sglang_mcore.sh
+  pip install --no-deps -e .
+  cd -
+  # install requirements
+  pip install -r requirements.txt
+  pip install -U nvidia-pytriton
+  git clone https://github.com/yuekaizhang/PytritonSenseVoice.git && cd PytritonSenseVoice && pip install -e .
+fi
+
 if [ $stage -le -1 ] && [ $stop_stage -ge -1 ]; then
   log "stage -1: download official CosyVoice2-0.5B LLM model and convert to huggingface compatible checkpoint"
   modelscope download --model iic/CosyVoice2-0.5B --local_dir $model_scope_model_path
@@ -24,13 +40,15 @@ if [ $stage -le -1 ] && [ $stop_stage -ge -1 ]; then
 
   # Or, you could use the following command to download the huggingface compatible checkpoint
   # huggingface-cli download --local-dir $sft_model_path yuekai/cosyvoice2_llm
+
+  # Note: we remove the lm_head's bias to make it compatible with the Qwen2.5-0.5B model in Transformers.
 fi
 
 data_dir=data/parquet_aishell3
 if [ $stage -le 0 ] && [ $stop_stage -ge 0 ]; then
   log "stage 0: prepare data into verl format"
   mkdir -p $data_dir
-  wget https://huggingface.co/datasets/SparkAudio/voxbox/resolve/main/metadata/aishell-3.jsonl -O data/aishell-3.jsonl
+  wget -O data/aishell-3.jsonl https://huggingface.co/datasets/SparkAudio/voxbox/resolve/main/metadata/aishell-3.jsonl
   # total 88035 samples
   head -n 80000 data/aishell-3.jsonl > data/train.jsonl
   tail -n 100 data/aishell-3.jsonl > data/test.jsonl
@@ -98,7 +116,8 @@ if [ $stage -le 2 ] && [ $stop_stage -ge 2 ]; then
   trainer.val_before_train=False
 fi
 
-step=400
+steps=(100 200 300 400 500)
+for step in ${steps[@]}; do
 llm_path=./checkpoints/cosyvoice2_grpo/$exp_name/global_step_${step}
 if [ $stage -le 3 ] && [ $stop_stage -ge 3 ]; then
   log "stage 3: merge the model"
@@ -111,7 +130,7 @@ fi
 if [ $stage -le 4 ] && [ $stop_stage -ge 4 ]; then
   log "stage 4: Test the model"
   dataset=zero_shot_zh
-  # dataset=test_zh
+  # dataset=test_zh seed_tts test_zh
   output_dir=./outputs_${exp_name}_${step}_${dataset}
 
   token2wav_path=/workspace/CosyVoice2-0.5B
@@ -127,12 +146,14 @@ if [ $stage -le 4 ] && [ $stop_stage -ge 4 ]; then
 
   bash scripts/compute_wer.sh $output_dir ${dataset}
 fi
+done
 
 if [ $stage -le 5 ] && [ $stop_stage -ge 5 ]; then
   log "stage 5: Convert the RL trained model to CosyVoice repo format"
   python3 huggingface_to_pretrained.py \
     --hf-cosyvoice2-llm-path $llm_path/merged_hf_model \
-    --pretrained-cosyvoice2-path /workspace/CosyVoice2-0.5B \
     --output-path /workspace/CosyVoice2-0.5B/llm-new.pt
   # You need to manually move the llm-new.pt to overwrite /workspace/CosyVoice2-0.5B/llm.pt
+  # However, we found that the RL trained model accuracy would slightly drop after this conversion.
+  # Please be careful or use the huggingface format inference code.
 fi
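The `stage`/`stop_stage` gating used throughout `run.sh` runs exactly the stages `n` satisfying `stage <= n <= stop_stage` (the espnet convention). A small Python sketch of the same predicate:

```python
def should_run(n, stage, stop_stage):
    # mirrors the shell guard: [ $stage -le n ] && [ $stop_stage -ge n ]
    return stage <= n <= stop_stage

# with the script defaults stage=-1, stop_stage=4, the local-install
# stage (-2) and the conversion stage (5) are both skipped
ran = [n for n in range(-2, 6) if should_run(n, stage=-1, stop_stage=4)]
print(ran)  # [-1, 0, 1, 2, 3, 4]
```

This is why the commit's change of `stop_stage` from 5 to 4 makes the lossy CosyVoice-format conversion opt-in rather than part of the default run.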

examples/grpo/cosyvoice2/scripts/compute_wer.sh

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ model_path=models/sherpa-onnx-paraformer-zh-2023-09-14
 if [ ! -d $model_path ]; then
   pip install sherpa-onnx
   wget -nc https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2023-09-14.tar.bz2
+  mkdir models
   tar xvf sherpa-onnx-paraformer-zh-2023-09-14.tar.bz2 -C models
 fi
 
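The guard added in this hunk is the usual download-if-missing pattern: fetch and extract the ASR model only when its directory does not exist yet. A small Python sketch of the same idea (the directory name and `fetch` callback are illustrative):

```python
import tempfile
from pathlib import Path

def ensure_model(model_dir, fetch):
    # mirrors the guard in compute_wer.sh: fetch only when the
    # model directory is missing
    path = Path(model_dir)
    if not path.is_dir():
        path.mkdir(parents=True)
        fetch(path)  # the real script runs wget -nc + tar here
    return path

calls = []
with tempfile.TemporaryDirectory() as tmp:
    ensure_model(f"{tmp}/paraformer", calls.append)
    ensure_model(f"{tmp}/paraformer", calls.append)  # no-op: already present
print(len(calls))  # 1
```

Together with `wget -nc` (no-clobber), this makes the script safe to re-run without re-downloading the model archive.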
