Updated Code Snippet for PyTorch tutorial for "example_pytorch_data_generation.ipynb" #1327
+488 −463
Pull Request Description:
Previous Code Snippet:
# Run post-training quantization on the model to get the quantized model output
quantized_model_generated_data, quantization_info = mct.ptq.pytorch_post_training_quantization(
in_module=float_model,
representative_data_gen=representative_data_gen,
target_platform_capabilities=target_platform_cap
)
This code snippet fails to detect the float_model and its parameters. Running the same snippets from "example_pytorch_data_generation.ipynb" (https://github.com/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_data_generation.ipynb) in Google Colab produced the following runtime error:
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment. If you were attempting to deepcopy a module, this may be because of a torch.nn.utils.weight_norm usage, see pytorch/pytorch#103001
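For context, this error comes from torch.Tensor's deepcopy hook, which only supports graph-leaf tensors; any tensor produced by an autograd operation is rejected. A minimal sketch of the root cause, using plain tensors (no MCT involved):

```python
import copy
import torch

# A tensor created directly by the user is a graph leaf: deepcopy works.
leaf = torch.ones(2, requires_grad=True)
copy.deepcopy(leaf)

# A tensor produced by an operation on a leaf is NOT a leaf, and
# torch.Tensor's __deepcopy__ rejects it with exactly this RuntimeError.
non_leaf = leaf * 2
try:
    copy.deepcopy(non_leaf)
except RuntimeError as err:
    print(type(err).__name__)
```

MCT deep-copies the input module during quantization, so a float_model whose parameters are no longer graph leaves (e.g. after a `torch.nn.utils.weight_norm` wrap) triggers the same failure.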
However, I have updated the code snippet to build a new float_model with the same architecture, load the trained weights into it, and use it for quantization. This works correctly.
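The rebuild step above can be sketched as follows. `FloatModel` here is a hypothetical stand-in for the notebook's actual architecture; the point is that a freshly instantiated module holds only leaf parameters, so MCT's internal deepcopy succeeds:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-in for the notebook's float_model architecture.
class FloatModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(x)

def rebuild_float_model(old_model: nn.Module) -> nn.Module:
    """Re-instantiate the architecture and copy over the trained weights,
    so every parameter is a graph leaf that deepcopy can handle."""
    fresh = FloatModel()
    fresh.load_state_dict(old_model.state_dict())
    fresh.eval()
    return fresh

float_model = rebuild_float_model(FloatModel())
copy.deepcopy(float_model)  # no longer raises RuntimeError

# The rebuilt model is then passed to MCT exactly as before, e.g.:
# quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
#     in_module=float_model,
#     representative_data_gen=representative_data_gen,
#     target_platform_capabilities=target_platform_cap,
# )
```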
Checklist before requesting a review: