[BugFix]add int8 cache dtype when using attention quantization #161

Triggered via pull request: February 21, 2025 10:47
Status: Success
Total duration: 2m 53s

Workflow: mypy.yaml
on: pull_request
Matrix: mypy
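The run above comes from a workflow file named mypy.yaml that fires on pull_request and fans out a mypy job over a matrix. The actual file is not shown here, so the following is only a minimal sketch of what such a workflow typically looks like; the job name, Python versions, and step commands are assumptions, not taken from the repository.

```yaml
# Hypothetical sketch of a mypy.yaml workflow (details assumed).
name: mypy

on: pull_request

jobs:
  mypy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumed matrix axis; the real workflow may vary a
        # different dimension (e.g. OS or dependency set).
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run mypy
        run: |
          pip install mypy
          mypy .
```

With a matrix like this, each Python version runs as a separate job in parallel, and the overall check succeeds only if every matrix entry passes.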