Commit 36014bf: fix docsrc
ofirgo committed Jan 13, 2025
1 parent 9b0fbb9 commit 36014bf
Showing 4 changed files with 17 additions and 42 deletions.
4 changes: 2 additions & 2 deletions docsrc/source/api/api_docs/index.rst
@@ -107,10 +107,10 @@ keras_load_quantized_model


 target_platform_capabilities
-================
+==============================
 - :ref:`target_platform_capabilities<ug-target_platform_capabilities>`: Module to create and model hardware-related settings to optimize the model according to, by the hardware the optimized model will use during inference.
 - :ref:`get_target_platform_capabilities<ug-get_target_platform_capabilities>`: A function to get a target platform model for Tensorflow and Pytorch.
-- :ref:`DefaultDict<ug-DefaultDict>`: Util class for creating a FrameworkQuantizationCapabilities.
+- :ref:`DefaultDict<ug-DefaultDict>`: Util class for creating a TargetPlatformCapabilities.
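The `DefaultDict` utility in the list above maps known keys to values and falls back to a default for anything else; a minimal sketch of that idea (illustrative only; the class and parameter names are assumptions, not MCT's actual implementation):

```python
from typing import Any, Callable, Dict, Optional


class DefaultLookup:
    """Dict wrapper that returns a known per-key value, or a default
    produced by a factory for unknown keys. Purely illustrative sketch,
    not MCT's actual DefaultDict; all names here are assumptions."""

    def __init__(self,
                 known: Optional[Dict[Any, Any]] = None,
                 default_factory: Optional[Callable[[], Any]] = None):
        self.known = known or {}
        self.default_factory = default_factory

    def get(self, key: Any) -> Any:
        if key in self.known:
            return self.known[key]
        return self.default_factory() if self.default_factory else None


# Hypothetical usage: per-attribute bit-width with an 8-bit fallback.
attr_bits = DefaultLookup({"kernel": 4}, default_factory=lambda: 8)
```

Known keys resolve to their stored value; any other key gets the factory's default, which is the behavior a capabilities table needs when most operators share one quantization config.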


Indices and tables
@@ -3,9 +3,9 @@
.. _ug-target_platform_capabilities:


-=================================
+=====================================
 target_platform_capabilities Module
-=================================
+=====================================
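The widened underlines in this hunk reflect reStructuredText's rule that a section adornment must be at least as long as its title text (docutils otherwise emits a "Title underline too short" warning); a small illustrative check:

```python
def underline_ok(title: str, underline: str) -> bool:
    """True when an RST section underline is valid for the given title:
    at least as long as the title, and one repeated punctuation char."""
    return (len(underline) >= len(title)
            and len(set(underline)) == 1
            and not underline[0].isalnum())


title = "target_platform_capabilities Module"  # 35 characters
assert not underline_ok(title, "=" * 33)  # old underline: too short
assert underline_ok(title, "=" * 37)      # widened underline: valid
```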

MCT can be configured to quantize and optimize models for different hardware settings.
For example, when using qnnpack backend for Pytorch model inference, Pytorch `quantization
@@ -24,7 +24,7 @@ Models for IMX500, TFLite and qnnpack can be observed `here <https://github.com/

|
-The object MCT should get called FrameworkQuantizationCapabilities (or shortly TPC).
+The object MCT should get called TargetPlatformCapabilities (or shortly TPC).
This diagram demonstrates the main components:

.. image:: ../../../../images/tpc.jpg
@@ -42,62 +42,37 @@ QuantizationMethod

OpQuantizationConfig
======================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.OpQuantizationConfig
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.OpQuantizationConfig



AttributeQuantizationConfig
============================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.AttributeQuantizationConfig
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.AttributeQuantizationConfig


QuantizationConfigOptions
============================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.QuantizationConfigOptions
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.QuantizationConfigOptions


TargetPlatformCapabilities
-=======================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.TargetPlatformCapabilities
+============================
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.TargetPlatformCapabilities


OperatorsSet
================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.OperatorsSet
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.OperatorsSet



Fusing
==============
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.Fusing
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.Fusing



OperatorSetGroup
====================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.OperatorSetGroup
-
-
-OperationsToLayers
-=====================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.OperationsToLayers
-
-
-OperationsSetToLayers
-=========================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.OperationsSetToLayers
-
-
-LayerFilterParams
-=========================
-.. autoclass:: model_compression_toolkit.target_platform_capabilities.LayerFilterParams
-
-More filters and usage examples are detailed :ref:`here<ug-layer_filters>`.
-
-
-FrameworkQuantizationCapabilities
-=============================
-.. autoclass:: model_compression_toolkit.target_platform.FrameworkQuantizationCapabilities
-
-
-
+.. autoclass:: model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.OperatorSetGroup
@@ -55,7 +55,7 @@ For example, we can set a trainable weights quantizer with the following configu

.. code-block:: python
-from model_compression_toolkit.target_platform_capabilities import QuantizationMethod
+from model_compression_toolkit.target_platform_capabilities.target_platform_capabilities import QuantizationMethod
from model_compression_toolkit.constants import THRESHOLD, MIN_THRESHOLD
TrainableQuantizerWeightsConfig(weights_quantization_method=QuantizationMethod.SYMMETRIC,
@@ -79,7 +79,7 @@ For example, we can set a trainable activation quantizer with the following conf

.. code-block:: python
-from model_compression_toolkit.target_platform_capabilities import QuantizationMethod
+from model_compression_toolkit.target_platform_capabilities.target_platform_capabilities import QuantizationMethod
from model_compression_toolkit.constants import THRESHOLD, MIN_THRESHOLD
TrainableQuantizerActivationConfig(activation_quantization_method=QuantizationMethod.UNIFORM,
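Both hunks in this file move the `QuantizationMethod` import to a nested module path; downstream code that must run across MCT versions can try both paths from the diff. A hedged sketch (whether either path exists in a given installation is an assumption):

```python
# Try the newer nested path first, then the older flat path (both paths
# taken from the diff above); fall back to None when MCT is not installed.
try:
    from model_compression_toolkit.target_platform_capabilities.target_platform_capabilities import QuantizationMethod
except ImportError:
    try:
        from model_compression_toolkit.target_platform_capabilities import QuantizationMethod
    except ImportError:
        QuantizationMethod = None  # MCT unavailable in this environment
```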
4 changes: 2 additions & 2 deletions docsrc/source/api/api_docs/notes/tpc_note.rst
@@ -1,7 +1,7 @@

.. note::
-For now, some fields of :class:`model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.OpQuantizationConfig` are ignored during
+For now, some fields of :class:`~model_compression_toolkit.target_platform_capabilities.OpQuantizationConfig` are ignored during
the optimization process such as quantization_preserving, fixed_scale, and fixed_zero_point.

-- MCT will use more information from :class:`~model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.OpQuantizationConfig`, in the future.
+- MCT will use more information from :class:`~model_compression_toolkit.target_platform_capabilities.OpQuantizationConfig`, in the future.

0 comments on commit 36014bf