I would like to fine-tune a pretrained efficientdet-lite0 model on my own dataset and then apply post-training quantization (full int8). What is the recommended way of doing this? I could not find any documentation on this in the repository, but since EfficientDet-Lite was designed to be robust to quantization, I assume there is an intended workflow.
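For reference, here is a minimal sketch of full-int8 post-training quantization with the standard TFLite converter, assuming the fine-tuned EfficientDet-Lite0 model has already been exported as a SavedModel. The paths, the 320px input size, and the calibration-image folder are placeholder assumptions, not anything documented by the repository:

```python
import tensorflow as tf

SAVED_MODEL_DIR = "efficientdet-lite0_finetuned_savedmodel"  # hypothetical export path
IMAGE_SIZE = 320  # assumed input resolution for EfficientDet-Lite0

def representative_dataset():
    """Yield a few hundred preprocessed training images for calibration."""
    for path in sorted(tf.io.gfile.glob("calib_images/*.jpg"))[:200]:  # hypothetical folder
        img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        img = tf.image.resize(img, (IMAGE_SIZE, IMAGE_SIZE))
        img = tf.cast(img, tf.float32)[tf.newaxis, ...]  # must match the model's own preprocessing
        yield [img]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op to int8; drop this line to allow float fallback for
# ops without an int8 kernel instead of failing the conversion.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("efficientdet-lite0_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Whether this matches the repository's recommended export path (and whether the SavedModel expects float or uint8 inputs) is exactly what I am hoping to have confirmed.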
rohansaw changed the title from "Recommended way for Quantization" to "Recommended way for EfficientDet-Lite Quantization" on Feb 7, 2024.