
feat: extend the num head terms for fp8 calibration #972


Merged: 1 commit, merged Jul 22, 2025
11 changes: 8 additions & 3 deletions lightllm/common/offline_fp8_quant_mem_manager.py
@@ -25,7 +25,6 @@ def __init__(
         self.qmax = torch.finfo(torch.float8_e4m3fn).max
         self.qmin = torch.finfo(torch.float8_e4m3fn).min
-        self.layer_num = layer_num
         self.total_head_num = head_num * dist.get_world_size() if dist.is_initialized() else head_num

medium

Removing unused layer_num attribute.

         self.count = 0
         self.scales = None
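
The total_head_num assignment kept in the hunk above is what the calibration config is later validated against. A minimal standalone sketch of that computation, assuming a toy per-rank head count and that torch.distributed may or may not be initialized (values are illustrative, not taken from the PR):

```python
# Illustrative only: each tensor-parallel rank holds `head_num` local KV heads,
# so the global head count is the per-rank count scaled by the world size.
import torch.distributed as dist

head_num = 4  # hypothetical per-rank KV head count
total_head_num = head_num * dist.get_world_size() if dist.is_initialized() else head_num
print(total_head_num)  # 4 in a single-process run; head_num * tp_size under tensor parallelism
```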
@@ -45,7 +44,13 @@ def __init__(
             self.scales_list = cfg["scales"]
             self.scales = torch.tensor(self.scales_list, dtype=torch.float32, device="cuda").view(cfg["scales_shape"])
             if not get_env_start_args().enable_fa3:
-                self.scales = torch.repeat_interleave(self.scales, self.head_num, dim=-1)
+                self.scales = torch.repeat_interleave(self.scales, head_num, dim=-1)
+            elif cfg["num_head"] > self.total_head_num:
+                factor = cfg["num_head"] // self.total_head_num
+                self.scales = self.scales[..., ::factor].contiguous()
+            elif cfg["num_head"] < self.total_head_num:
+                factor = self.total_head_num // cfg["num_head"]
+                self.scales = torch.repeat_interleave(self.scales, factor, dim=-1).contiguous()
Comment on lines +48 to +53

critical

The current if/elif structure seems to conflate the enable_fa3 flag with the quantization type, which can lead to incorrect behavior. For instance, if enable_fa3 is False but the quantization type from the config is per_head, the code will incorrectly execute torch.repeat_interleave, which is likely intended only for per_tensor quantization. This would erroneously expand the per-head scales.

Suggested change

-            elif cfg["num_head"] > self.total_head_num:
-                factor = cfg["num_head"] // self.total_head_num
-                self.scales = self.scales[..., ::factor].contiguous()
-            elif cfg["num_head"] < self.total_head_num:
-                factor = self.total_head_num // cfg["num_head"]
-                self.scales = torch.repeat_interleave(self.scales, factor, dim=-1).contiguous()
+            quant_type = cfg.get("quant_type", "per_tensor")
+            if quant_type == "per_tensor":
+                if not get_env_start_args().enable_fa3:
+                    self.scales = torch.repeat_interleave(self.scales, head_num, dim=-1)
+            elif quant_type == "per_head":
+                if get_env_start_args().enable_fa3:
+                    if cfg["num_head"] > self.total_head_num:
+                        factor = cfg["num_head"] // self.total_head_num
+                        self.scales = self.scales[..., ::factor].contiguous()
+                    elif cfg["num_head"] < self.total_head_num:
+                        factor = self.total_head_num // cfg["num_head"]
+                        self.scales = torch.repeat_interleave(self.scales, factor, dim=-1).contiguous()
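
For clarity, here is a minimal, self-contained sketch of the head-count alignment this PR introduces; the helper name and toy tensor shapes are hypothetical, not part of the PR. When the calibration file was exported with more heads than the current model has, the scales are down-sampled by stride; when it was exported with fewer, each scale is repeated per head.

```python
# Standalone sketch of the scale alignment added in this PR; the function name
# and the toy shapes are illustrative only.
import torch


def align_scales_to_head_num(scales: torch.Tensor, cfg_num_head: int, total_head_num: int) -> torch.Tensor:
    if cfg_num_head > total_head_num:
        # Calibration has more heads than the model: keep every `factor`-th scale.
        factor = cfg_num_head // total_head_num
        return scales[..., ::factor].contiguous()
    if cfg_num_head < total_head_num:
        # Calibration has fewer heads: repeat each scale `factor` times.
        factor = total_head_num // cfg_num_head
        return torch.repeat_interleave(scales, factor, dim=-1).contiguous()
    return scales


# Toy example: scales of shape (num_layers=2, kv=2, cfg_num_head=8).
scales = torch.rand(2, 2, 8, dtype=torch.float32)
print(align_scales_to_head_num(scales, cfg_num_head=8, total_head_num=4).shape)   # torch.Size([2, 2, 4])
print(align_scales_to_head_num(scales, cfg_num_head=8, total_head_num=16).shape)  # torch.Size([2, 2, 16])
```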

             if get_env_start_args().enable_fa3 and dist.is_initialized() and dist.get_world_size() > 1:
                 half_head = self.total_head_num // 2
                 start_head = dist.get_rank() * head_num
@@ -77,7 +82,7 @@ def _load_and_check_config(self):
             raise ValueError(
                 f"num_layers {cfg['num_layers']} in config " f"not match current layer_num {self.layer_num}"
             )
-        if cfg["num_head"] != self.total_head_num:
+        if cfg["num_head"] % self.total_head_num != 0 and self.total_head_num % cfg["num_head"] != 0:
             raise ValueError(
                 f"num_head {cfg['num_head']} in config " f"not match current model head num {self.total_head_num}"
             )
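
The relaxed check above no longer requires an exact match, only that one head count divides the other, so the scales can be strided or repeated to fit. A small sketch of the same condition with toy values (the helper function is hypothetical):

```python
# Sketch of the relaxed validation: an exact match is no longer required,
# only that one head count divides the other.
def check_head_num(cfg_num_head: int, total_head_num: int) -> None:
    if cfg_num_head % total_head_num != 0 and total_head_num % cfg_num_head != 0:
        raise ValueError(
            f"num_head {cfg_num_head} in config not match current model head num {total_head_num}"
        )


check_head_num(8, 4)  # passes: calibration heads are a multiple of model heads (scales get strided)
check_head_num(4, 8)  # passes: model heads are a multiple of calibration heads (scales get repeated)
# check_head_num(6, 4) would raise ValueError: neither count divides the other
```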
1 change: 0 additions & 1 deletion lightllm/server/api_cli.py
@@ -175,7 +175,6 @@ def make_argument_parser() -> argparse.ArgumentParser:
 export_fp8kv_calibration record and export kv cache quant calibration results to a json file.
 It can be used for llama and qwen model.
 Calibration need to disable cudagraph and use fa3 or flashinfer backend.
-Tp size must no more than head num when calibration.
 ppl_int8kv mode use int8 to store kv cache, and use ppl fast kernel;

medium

Removing the constraint that TP size must be no more than the head num during calibration, as this constraint is no longer valid with the changes in this PR.

 ppl_fp16 mode use ppl fast fp16 decode attention kernel;
 you need to read source code to make sure the supported detail mode for all models""",