\ No newline at end of file
diff --git a/docs/api/api_docs/methods/keras_kpi_data.html b/docs/api/api_docs/methods/keras_kpi_data.html
index 4671a47b3..8e5994884 100644
--- a/docs/api/api_docs/methods/keras_kpi_data.html
+++ b/docs/api/api_docs/methods/keras_kpi_data.html
@@ -5,9 +5,9 @@
-
+
- Get KPI information for Keras Models — MCT Documentation: ver 1.10.0
+ Get KPI information for Keras Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/api_docs/methods/keras_post_training_quantization.html b/docs/api/api_docs/methods/keras_post_training_quantization.html
index 9ddd68ca8..1ad0cc711 100644
--- a/docs/api/api_docs/methods/keras_post_training_quantization.html
+++ b/docs/api/api_docs/methods/keras_post_training_quantization.html
@@ -5,9 +5,9 @@
-
+
- Keras Post Training Quantization — MCT Documentation: ver 1.10.0
+ Keras Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/api_docs/methods/keras_post_training_quantization_mixed_precision.html b/docs/api/api_docs/methods/keras_post_training_quantization_mixed_precision.html
index 9a18aa84d..e539ca057 100644
--- a/docs/api/api_docs/methods/keras_post_training_quantization_mixed_precision.html
+++ b/docs/api/api_docs/methods/keras_post_training_quantization_mixed_precision.html
@@ -5,9 +5,9 @@
-
+
- Keras Post Training Mixed Precision Quantization — MCT Documentation: ver 1.10.0
+ Keras Post Training Mixed Precision Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/api_docs/methods/pytorch_kpi_data.html b/docs/api/api_docs/methods/pytorch_kpi_data.html
index 730c3af7c..43a6a7a4c 100644
--- a/docs/api/api_docs/methods/pytorch_kpi_data.html
+++ b/docs/api/api_docs/methods/pytorch_kpi_data.html
@@ -5,9 +5,9 @@
-
+
- Get KPI information for PyTorch Models — MCT Documentation: ver 1.10.0
+ Get KPI information for PyTorch Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/api_docs/methods/pytorch_post_training_quantization.html b/docs/api/api_docs/methods/pytorch_post_training_quantization.html
index 826879ec5..e16d7cf98 100644
--- a/docs/api/api_docs/methods/pytorch_post_training_quantization.html
+++ b/docs/api/api_docs/methods/pytorch_post_training_quantization.html
@@ -5,9 +5,9 @@
-
+
- Pytorch Post Training Quantization — MCT Documentation: ver 1.10.0
+ Pytorch Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/api_docs/methods/pytorch_post_training_quantization_mixed_precision.html b/docs/api/api_docs/methods/pytorch_post_training_quantization_mixed_precision.html
index 0759ca150..aa9b02bf6 100644
--- a/docs/api/api_docs/methods/pytorch_post_training_quantization_mixed_precision.html
+++ b/docs/api/api_docs/methods/pytorch_post_training_quantization_mixed_precision.html
@@ -5,9 +5,9 @@
-
+
- PyTorch Post Training Mixed Precision Quantization — MCT Documentation: ver 1.10.0
+ PyTorch Post Training Mixed Precision Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/classes/DefaultDict.html b/docs/api/experimental_api_docs/classes/DefaultDict.html
index ea4477b1c..f5bf3bbc8 100644
--- a/docs/api/experimental_api_docs/classes/DefaultDict.html
+++ b/docs/api/experimental_api_docs/classes/DefaultDict.html
@@ -5,9 +5,9 @@
-
+
- DefaultDict Class — MCT Documentation: ver 1.10.0
+ DefaultDict Class — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
Default dictionary. It wraps a dictionary given at initialization and returns its
values when requested. If the requested key is not present in the initial dictionary,
-it returns the returned value a default factory (that is passed at initialization) generates.
+it returns the default value that is passed at initialization.
Parameters:
known_dict – Dictionary to wrap.
-
default_factory – Callable to get default values when requested key is not in known_dict.
+
default_value – Default value to return when the requested key is not in known_dict.
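The wrap-and-fall-back behavior described above can be sketched in a few lines (a hypothetical re-implementation for illustration, not MCT's actual class):

```python
class DefaultDict:
    """Illustrative sketch: wrap a known dictionary and fall back to a
    fixed default value for keys it does not contain (hypothetical,
    not the MCT implementation)."""

    def __init__(self, known_dict, default_value=None):
        self.known_dict = known_dict
        self.default_value = default_value

    def get(self, key):
        # Return the wrapped dict's value if the key is known...
        if key in self.known_dict:
            return self.known_dict[key]
        # ...otherwise fall back to the default value.
        return self.default_value
```

A typical use is mapping layer types to some attribute (e.g. a bit width) with a safe fallback for unlisted types.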
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/classes/FrameworkInfo.html b/docs/api/experimental_api_docs/classes/FrameworkInfo.html
index 07621a13c..86c31cd4e 100644
--- a/docs/api/experimental_api_docs/classes/FrameworkInfo.html
+++ b/docs/api/experimental_api_docs/classes/FrameworkInfo.html
@@ -5,9 +5,9 @@
-
+
- FrameworkInfo Class — MCT Documentation: ver 1.10.0
+ FrameworkInfo Class — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/classes/GradientPTQConfig.html b/docs/api/experimental_api_docs/classes/GradientPTQConfig.html
index 3093e32d6..70cc97772 100644
--- a/docs/api/experimental_api_docs/classes/GradientPTQConfig.html
+++ b/docs/api/experimental_api_docs/classes/GradientPTQConfig.html
@@ -5,9 +5,9 @@
-
+
- GradientPTQConfigV2 Class — MCT Documentation: ver 1.10.0
+ GradientPTQConfigV2 Class — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
The following API can be used to create a GradientPTQConfigV2 instance which can be used for post training quantization using knowledge distillation from a teacher (float model) to a student (the quantized model). This is experimental and subject to future changes.
Configuration to use for quantization with GradientPTQV2 (experimental).
Initialize a GradientPTQConfigV2.
@@ -65,7 +65,7 @@
Navigation
optimizer_quantization_parameter (Any) – Optimizer to override the rest optimizer for quantizer parameters.
optimizer_bias (Any) – Optimizer to override the rest optimizer for bias.
regularization_factor (float) – A floating point number that defines the regularization factor.
-
hessian_weights_config (GPTQHessianWeightsConfig) – A configuration that include all necessary arguments to run a computation of Hessian weights for the GPTQ loss.
+
hessian_weights_config (GPTQHessianScoresConfig) – A configuration that includes all necessary arguments to run a computation of Hessian scores for the GPTQ loss.
gptq_quantizer_params_override (dict) – A dictionary of parameters to override in GPTQ quantizer instantiation. Defaults to None (no parameters).
The following API can be used to create a GPTQHessianWeightsConfig instance which can be used to define necessary parameters for computing Hessian weights for the GPTQ loss function.
The following API can be used to create a GPTQHessianScoresConfig instance which can be used to define the necessary parameters for computing Hessian scores for the GPTQ loss function.
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/index.html b/docs/api/experimental_api_docs/index.html
index 37825d0fe..b96916644 100644
--- a/docs/api/experimental_api_docs/index.html
+++ b/docs/api/experimental_api_docs/index.html
@@ -5,9 +5,9 @@
-
+
- API Docs — MCT Documentation: ver 1.10.0
+ API Docs — MCT Documentation: ver 1.11.0
@@ -35,7 +35,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/get_keras_data_generation_config.html b/docs/api/experimental_api_docs/methods/get_keras_data_generation_config.html
index 7f9d0ef9b..e6fa19dae 100644
--- a/docs/api/experimental_api_docs/methods/get_keras_data_generation_config.html
+++ b/docs/api/experimental_api_docs/methods/get_keras_data_generation_config.html
@@ -5,9 +5,9 @@
-
+
- Get DataGenerationConfig for Keras Models — MCT Documentation: ver 1.10.0
+ Get DataGenerationConfig for Keras Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/get_keras_gptq_config.html b/docs/api/experimental_api_docs/methods/get_keras_gptq_config.html
index 179e55b4f..09ff89fda 100644
--- a/docs/api/experimental_api_docs/methods/get_keras_gptq_config.html
+++ b/docs/api/experimental_api_docs/methods/get_keras_gptq_config.html
@@ -5,9 +5,9 @@
-
+
- Get GradientPTQConfig for Keras Models — MCT Documentation: ver 1.10.0
+ Get GradientPTQConfig for Keras Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
Create a GradientPTQConfigV2 instance for Keras models.
Parameters:
@@ -56,6 +56,7 @@
Navigation
loss (Callable) – Loss function to use during fine-tuning. It should accept 4 lists of tensors: the 1st is the quantized tensors, the 2nd is the float tensors, the 3rd is the quantized weights, and the 4th is the float weights.
log_function (Callable) – Function to log information about the GPTQ process.
use_hessian_based_weights (bool) – Whether to use Hessian-based weights for weighted average loss.
+
regularization_factor (float) – A floating point number that defines the regularization factor.
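The 4-list loss signature described above can be illustrated with a plain-Python stand-in (operating on lists of floats rather than framework tensors; the function and argument names are illustrative only):

```python
def example_gptq_loss(quant_tensors, float_tensors, quant_weights, float_weights):
    """Illustrative loss matching the documented 4-list signature:
    mean squared error between paired quantized/float activation tensors.
    The two weights lists are accepted but unused in this sketch."""
    per_tensor_mse = []
    for q, f in zip(quant_tensors, float_tensors):
        # MSE between one quantized tensor and its float counterpart.
        per_tensor_mse.append(sum((qi - fi) ** 2 for qi, fi in zip(q, f)) / len(q))
    # Average over all compared tensors.
    return sum(per_tensor_mse) / len(per_tensor_mse)
```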
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/get_pytorch_data_generation_config.html b/docs/api/experimental_api_docs/methods/get_pytorch_data_generation_config.html
index 66d5e1a47..2852b8509 100644
--- a/docs/api/experimental_api_docs/methods/get_pytorch_data_generation_config.html
+++ b/docs/api/experimental_api_docs/methods/get_pytorch_data_generation_config.html
@@ -5,9 +5,9 @@
-
+
- Get DataGenerationConfig for Pytorch Models — MCT Documentation: ver 1.10.0
+ Get DataGenerationConfig for Pytorch Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/get_pytroch_gptq_config.html b/docs/api/experimental_api_docs/methods/get_pytroch_gptq_config.html
index 636f58704..a9414270e 100644
--- a/docs/api/experimental_api_docs/methods/get_pytroch_gptq_config.html
+++ b/docs/api/experimental_api_docs/methods/get_pytroch_gptq_config.html
@@ -5,9 +5,9 @@
-
+
- Get GradientPTQConfig for Pytorch Models — MCT Documentation: ver 1.10.0
+ Get GradientPTQConfig for Pytorch Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
Create a GradientPTQConfigV2 instance for Pytorch models.
Parameters:
@@ -56,6 +56,7 @@
Navigation
loss (Callable) – Loss function to use during fine-tuning. It should accept 4 lists of tensors: the 1st is the quantized tensors, the 2nd is the float tensors, the 3rd is the quantized weights, and the 4th is the float weights.
log_function (Callable) – Function to log information about the GPTQ process.
use_hessian_based_weights (bool) – Whether to use Hessian-based weights for weighted average loss.
+
regularization_factor (float) – A floating point number that defines the regularization factor.
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/get_target_platform_capabilities.html b/docs/api/experimental_api_docs/methods/get_target_platform_capabilities.html
index 467821734..f486b41f8 100644
--- a/docs/api/experimental_api_docs/methods/get_target_platform_capabilities.html
+++ b/docs/api/experimental_api_docs/methods/get_target_platform_capabilities.html
@@ -5,9 +5,9 @@
-
+
- Get TargetPlatformCapabilities — MCT Documentation: ver 1.10.0
+ Get TargetPlatformCapabilities — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_data_generation_experimental.html b/docs/api/experimental_api_docs/methods/keras_data_generation_experimental.html
index 2b61f8828..e63b115a0 100644
--- a/docs/api/experimental_api_docs/methods/keras_data_generation_experimental.html
+++ b/docs/api/experimental_api_docs/methods/keras_data_generation_experimental.html
@@ -5,9 +5,9 @@
-
+
- Keras Data Generation — MCT Documentation: ver 1.10.0
+ Keras Data Generation — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_gradient_post_training_quantization_experimental.html b/docs/api/experimental_api_docs/methods/keras_gradient_post_training_quantization_experimental.html
index 378cb9645..4374a3d33 100644
--- a/docs/api/experimental_api_docs/methods/keras_gradient_post_training_quantization_experimental.html
+++ b/docs/api/experimental_api_docs/methods/keras_gradient_post_training_quantization_experimental.html
@@ -5,9 +5,9 @@
-
+
- Keras Gradient Based Post Training Quantization — MCT Documentation: ver 1.10.0
+ Keras Gradient Based Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_kpi_data_experimental.html b/docs/api/experimental_api_docs/methods/keras_kpi_data_experimental.html
index ec350e651..9fbb233e5 100644
--- a/docs/api/experimental_api_docs/methods/keras_kpi_data_experimental.html
+++ b/docs/api/experimental_api_docs/methods/keras_kpi_data_experimental.html
@@ -5,9 +5,9 @@
-
+
- Get KPI information for Keras Models — MCT Documentation: ver 1.10.0
+ Get KPI information for Keras Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_load_quantizad_model.html b/docs/api/experimental_api_docs/methods/keras_load_quantizad_model.html
index 15f9ff406..bf405bc93 100644
--- a/docs/api/experimental_api_docs/methods/keras_load_quantizad_model.html
+++ b/docs/api/experimental_api_docs/methods/keras_load_quantizad_model.html
@@ -5,9 +5,9 @@
-
+
- Load Quantized Keras Model — MCT Documentation: ver 1.10.0
+ Load Quantized Keras Model — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_post_training_quantization_experimental.html b/docs/api/experimental_api_docs/methods/keras_post_training_quantization_experimental.html
index 58ed1e922..605c9870a 100644
--- a/docs/api/experimental_api_docs/methods/keras_post_training_quantization_experimental.html
+++ b/docs/api/experimental_api_docs/methods/keras_post_training_quantization_experimental.html
@@ -5,9 +5,9 @@
-
+
- Keras Post Training Quantization — MCT Documentation: ver 1.10.0
+ Keras Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
Perform structured pruning on a Keras model to meet a specified target KPI.
+This function prunes the provided model according to the target KPI by grouping and pruning
+channels based on each layer’s SIMD configuration in the Target Platform Capabilities (TPC).
+By default, the importance of each channel group is determined using the Label-Free Hessian
+(LFH) method, assessing each channel’s sensitivity to the Hessian of the loss function.
+This pruning strategy considers groups of channels together for a more hardware-friendly
+architecture. The process involves analyzing the model with a representative dataset to
+identify groups of channels that can be removed with minimal impact on performance.
+
Notice that the pruned model must be retrained to recover the compressed model’s performance.
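The SIMD-grouped selection described above can be sketched as follows (a simplified stand-in: in MCT the importance scores come from the LFH method, not from a precomputed list):

```python
def select_channel_groups_to_prune(channel_scores, simd, num_groups):
    """Group channels into contiguous SIMD-sized groups, score each group by
    the sum of its channels' importance scores, and return the indices of the
    num_groups lowest-scoring groups (simplified illustration)."""
    # Partition channel scores into groups of `simd` channels each.
    groups = [channel_scores[i:i + simd] for i in range(0, len(channel_scores), simd)]
    group_scores = [sum(g) for g in groups]
    # Rank groups from least to most important and pick the least important ones.
    ranked = sorted(range(len(groups)), key=lambda i: group_scores[i])
    return sorted(ranked[:num_groups])
```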
+
+
Parameters:
+
+
model (Model) – The original Keras model to be pruned.
+
target_kpi (KPI) – The target Key Performance Indicators to be achieved through pruning.
+
representative_data_gen (Callable) – A function to generate representative data for pruning analysis.
+
pruning_config (PruningConfig) – Configuration settings for the pruning process. Defaults to standard config.
+
target_platform_capabilities (TargetPlatformCapabilities) – Platform-specific constraints and capabilities.
+Defaults to DEFAULT_KERAS_TPC.
+
+
+
Returns:
+
A tuple containing the pruned Keras model and associated pruning information.
Define a target KPI for pruning.
+Here, we aim to reduce the memory footprint of weights by 50%, assuming the model weights
+are represented in float32 data type (thus, each parameter is represented using 4 bytes):
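The arithmetic behind this 50% target can be sketched as follows (in the real flow, the resulting byte count would be passed to a KPI object as the weights-memory target; the helper name is illustrative):

```python
def weights_memory_budget(num_params, bytes_per_param=4, compression_ratio=0.5):
    """Target weights memory (in bytes) for the pruned model: the dense model's
    float32 footprint (4 bytes per parameter) scaled by the desired ratio."""
    dense_memory_bytes = num_params * bytes_per_param
    return dense_memory_bytes * compression_ratio
```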
Optionally, define a pruning configuration. num_score_approximations can be passed
+to configure the number of importance scores that will be calculated for each channel.
+A higher value for this parameter yields more precise score approximations but also
+extends the duration of the pruning process:
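As a configuration sketch (assuming the PruningConfig constructor exposes num_score_approximations as documented above; the import path and value are illustrative):

```python
import model_compression_toolkit as mct

# More approximations -> more precise importance scores, but a longer pruning run.
pruning_config = mct.pruning.PruningConfig(num_score_approximations=32)
```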
PruningInfo stores information about a pruned model, including the pruning masks
+and importance scores for each layer. This class acts as a container for accessing
+pruning-related metadata.
Stores the pruning masks for each layer.
+A pruning mask is an array where each element indicates whether the corresponding
+channel or neuron has been pruned (0) or kept (1).
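A mask in this form can be applied directly; for example (plain-Python sketch over channel labels rather than tensors):

```python
def apply_channel_mask(channels, mask):
    """Keep entries whose mask value is 1 (kept) and drop those marked 0 (pruned)."""
    if len(channels) != len(mask):
        raise ValueError("mask must match the number of channels")
    return [c for c, m in zip(channels, mask) if m == 1]
```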
+
+
+
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_finalize.html b/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_finalize.html
index ec6ef1a2b..612f5827c 100644
--- a/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_finalize.html
+++ b/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_finalize.html
@@ -5,9 +5,9 @@
-
+
- Keras Quantization Aware Training Model Finalize — MCT Documentation: ver 1.10.0
+ Keras Quantization Aware Training Model Finalize — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_init.html b/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_init.html
index 86613a01c..74e65d8f7 100644
--- a/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_init.html
+++ b/docs/api/experimental_api_docs/methods/keras_quantization_aware_training_init.html
@@ -5,9 +5,9 @@
-
+
- Keras Quantization Aware Training Model Init — MCT Documentation: ver 1.10.0
+ Keras Quantization Aware Training Model Init — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/pytorch_gradient_post_training_quantization_experimental.html b/docs/api/experimental_api_docs/methods/pytorch_gradient_post_training_quantization_experimental.html
index 5d4daa84d..c4cb82945 100644
--- a/docs/api/experimental_api_docs/methods/pytorch_gradient_post_training_quantization_experimental.html
+++ b/docs/api/experimental_api_docs/methods/pytorch_gradient_post_training_quantization_experimental.html
@@ -5,9 +5,9 @@
-
+
- Pytorch Gradient Based Post Training Quantization — MCT Documentation: ver 1.10.0
+ Pytorch Gradient Based Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/pytorch_kpi_data_experimental.html b/docs/api/experimental_api_docs/methods/pytorch_kpi_data_experimental.html
index 7d96d92f8..f2313ef5f 100644
--- a/docs/api/experimental_api_docs/methods/pytorch_kpi_data_experimental.html
+++ b/docs/api/experimental_api_docs/methods/pytorch_kpi_data_experimental.html
@@ -5,9 +5,9 @@
-
+
- Get KPI information for PyTorch Models — MCT Documentation: ver 1.10.0
+ Get KPI information for PyTorch Models — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/pytorch_post_training_quantization_experimental.html b/docs/api/experimental_api_docs/methods/pytorch_post_training_quantization_experimental.html
index 6e32cb9b0..9b796b324 100644
--- a/docs/api/experimental_api_docs/methods/pytorch_post_training_quantization_experimental.html
+++ b/docs/api/experimental_api_docs/methods/pytorch_post_training_quantization_experimental.html
@@ -5,9 +5,9 @@
-
+
- Pytorch Post Training Quantization — MCT Documentation: ver 1.10.0
+ Pytorch Post Training Quantization — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_finalize.html b/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_finalize.html
index 0e594a453..3dab25964 100644
--- a/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_finalize.html
+++ b/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_finalize.html
@@ -5,9 +5,9 @@
-
+
- PyTorch Quantization Aware Training Model Finalize — MCT Documentation: ver 1.10.0
+ PyTorch Quantization Aware Training Model Finalize — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
\ No newline at end of file
diff --git a/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_init.html b/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_init.html
index 610e0c879..ec1023c08 100644
--- a/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_init.html
+++ b/docs/api/experimental_api_docs/methods/pytorch_quantization_aware_training_init.html
@@ -5,9 +5,9 @@
-
+
- PyTorch Quantization Aware Training Model Init — MCT Documentation: ver 1.10.0
+ PyTorch Quantization Aware Training Model Init — MCT Documentation: ver 1.11.0
@@ -31,7 +31,7 @@
Allows to export a quantized model in the following serialization formats:
-
-
TensorFlow models can be exported as Tensorflow models (.h5 extension) and TFLite models (.tflite extension).
-
PyTorch models can be exported as torch script models and ONNX models (.onnx extension).
-
-
Also, allows to export quantized model in the following quantization formats:
-
-
Fake Quant (where weights and activations are float fakely-quantized values)
-
INT8 (where weights and activations are represented using 8bits integers)
-
-
For more details about the export formats and options, please refer to the project’s GitHub README file.
+
Allows exporting a quantized model in different serialization and quantization formats.
+For more details about the export formats and options, please refer to the project’s GitHub README file.
Note that this feature is experimental and subject to future changes. If you have any questions or issues, please open an issue in this GitHub repository.
Export a Keras quantized model to an h5 or tflite model.
The model will be saved to the path in save_model_path.
keras_export_model supports the combination of QuantizationFormat.FAKELY_QUANT (where weights
@@ -97,11 +88,10 @@
Export a PyTorch quantized model to a torchscript or onnx model.
The model will be saved to the path in save_model_path.
Currently, pytorch_export_model supports only QuantizationFormat.FAKELY_QUANT (where weights
@@ -153,12 +122,10 @@
Here is an example of how to export a quantized Pytorch model in an ONNX fakely-quantized format:
-
import tempfile
-
-from mct.target_platform_capabilities.tpc_models.default_tpc.latest import get_pytorch_tpc_latest
-from mct.exporter import PytorchExportSerializationFormat
-
-# Path of exported model
-_, onnx_file_path = tempfile.mkstemp('.onnx')
-
-# Get TPC
-pytorch_tpc = get_pytorch_tpc_latest()
-
-# Use mode PytorchExportSerializationFormat.ONNX and the default pytorch tpc for fakely-quantized weights
-# and activations
-mct.exporter.pytorch_export_model(model=quantized_exportable_model, save_model_path=onnx_file_path,
-                                  repr_dataset=representative_data_gen, target_platform_capabilities=pytorch_tpc,
-                                  serialization_format=PytorchExportSerializationFormat.ONNX)
-
Class with mixed precision parameters to quantize the input model.
Unlike QuantizationConfig, the number of bits for quantization is a list of possible bit widths, to
support mixed-precision model quantization.
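The effect of a per-layer bit-width assignment on weights memory can be sketched as follows (an illustrative helper, not part of the MCT API):

```python
def mixed_precision_weights_memory(params_per_layer, bits_per_layer):
    """Total weights memory in bytes when layer i stores each of its
    parameters at bits_per_layer[i] bits."""
    return sum(n * b / 8 for n, b in zip(params_per_layer, bits_per_layer))
```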
\ No newline at end of file
diff --git a/docs/genindex.html b/docs/genindex.html
index 7306c9ffa..349de2131 100644
--- a/docs/genindex.html
+++ b/docs/genindex.html
@@ -6,7 +6,7 @@
- Index — MCT Documentation: ver 1.10.0
+ Index — MCT Documentation: ver 1.11.0
@@ -30,7 +30,7 @@