TensorFlow Model Support in Intel® Low Precision Optimization Tool
===========================================

Intel® Low Precision Optimization Tool supports a range of TensorFlow 1.x and 2.x model formats, summarized in the table below.


| TensorFlow model format | Supported? | Example | Comments |
| ------ | ------ | ------ | ------ |
| frozen pb | Yes | [examples/tensorflow/image_recognition](examples/tensorflow/image_recognition), [examples/tensorflow/oob_models](examples/tensorflow/oob_models) | |
| Graph object | Yes | [examples/helloworld/tf1.x](examples/helloworld/tf1.x), [examples/tensorflow/style_transfer](examples/tensorflow/style_transfer), [examples/tensorflow/recommendation/wide_deep_large_ds](examples/tensorflow/recommendation/wide_deep_large_ds) | |
| GraphDef object | Yes | | |
| tf1.x checkpoint | Yes | [examples/tensorflow/object_detection](examples/tensorflow/object_detection) | |
| keras.Model object | Yes | [examples/helloworld/tf2.x](examples/helloworld/tf2.x) | |
| keras saved model | Yes | [examples/helloworld/tf2.x](examples/helloworld/tf2.x) | |
| tf2.x saved model | TBD | | |
| tf2.x h5 format model | TBD | | |
| slim checkpoint | TBD | | |
| tf1.x saved model | No | | No plan to support it |
| tf2.x checkpoint | No | | A tf2.x checkpoint contains only weights and no description of the computation graph, so please use one of the other supported tf2.x formats for quantization |

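As the table shows, a Keras saved model can be supplied either as the saved directory path or as a loaded `keras.Model` object. Below is a minimal sketch using stock TensorFlow (the path is a placeholder) that loads such a model back into a `keras.Model` object:

```python
import tensorflow as tf

# Placeholder path; point it at your own Keras saved model directory or .h5 file.
saved_model_dir = '/path/to/keras_saved_model'

# load_model returns a keras.Model object, one of the supported input formats above.
model = tf.keras.models.load_model(saved_model_dir)
```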

# Usage

You can pass the model path, directory, or object directly to the quantizer, for example:
```python
from ilit import Quantization

# Create a quantizer from a YAML configuration file.
quantizer = Quantization('./conf.yaml')

# Wrap a user-defined dataset into a dataloader for calibration.
dataset = mnist_dataset(mnist.test.images, mnist.test.labels)
data_loader = quantizer.dataloader(dataset=dataset, batch_size=1)

# model can be a frozen pb path, a Graph/GraphDef object, a tf1.x checkpoint
# path, a keras.Model object, or a Keras saved model path; the path below is a placeholder.
model = '/path/to/frozen_model.pb'
# eval_func is a user-defined evaluation function used during tuning.
q_model = quantizer(model, q_dataloader=data_loader, eval_func=eval_func)
```
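
The table above also lists `GraphDef` objects as a supported input but gives no example. As a rough sketch using stock TensorFlow APIs (the helper name and path are placeholders, not part of the tool), a frozen pb file can be parsed into a `GraphDef` object and passed as `model` in the snippet above:

```python
import tensorflow as tf

def load_frozen_graphdef(pb_path):
    """Parse a frozen .pb file into a tf.compat.v1.GraphDef object."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return graph_def

# Placeholder path; substitute your own frozen model.
graph_def = load_frozen_graphdef('/path/to/frozen_model.pb')
```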