
[BugFix] add int8 cache dtype when using attention quantization #147

Triggered via pull request: February 21, 2025, 02:52
Status: Success
Total duration: 2m 54s

mypy.yaml

on: pull_request
Matrix: mypy
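
The run comes from a mypy type-checking workflow that is triggered on pull requests and fans out over a job matrix. A minimal sketch of what such a mypy.yaml could look like is below; the job layout, Python versions, and install/run steps are assumptions for illustration, not the repository's actual configuration.

```yaml
# Hypothetical sketch of .github/workflows/mypy.yaml.
# Job names, Python versions, and paths are assumptions, not the repo's actual config.
name: mypy

on: pull_request

jobs:
  mypy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # One job per Python version, matching the "Matrix: mypy" fan-out shown above.
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install mypy
        run: pip install mypy
      - name: Run mypy
        run: mypy .
```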