diff --git a/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb b/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb
index f576d7ee2..046e96882 100644
--- a/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb
+++ b/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "source": [
     "# Post-Training Quantization in Keras using the Model Compression Toolkit (MCT)\n",
-    "[Run this tutorial in Google Colab](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post_training_quantization.ipynb)\n",
+    "[Run this tutorial in Google Colab](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb)\n",
     "\n",
     "## Overview\n",
     "This quick-start guide explains how to use the **Model Compression Toolkit (MCT)** to quantize a Keras model. We will load a pre-trained model and quantize it using the MCT with **Post-Training Quatntization (PTQ)**. Finally, we will evaluate the quantized model and export it to a Keras or TFLite files.\n",
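
The overview cell touched by this hunk describes the flow the notebook walks through: load a pre-trained Keras model, quantize it with MCT's post-training quantization, then export the result. Below is a minimal sketch of that flow, assuming MCT's mct.ptq.keras_post_training_quantization and mct.exporter.keras_export_model APIs; the MobileNetV2 model, the random-data calibration generator, and the output path are illustrative placeholders, not taken from this diff.

# Sketch of the PTQ flow described in the notebook overview (assumptions noted above).
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2

# Pre-trained float model to quantize (placeholder choice of architecture).
float_model = MobileNetV2(weights="imagenet")

def representative_dataset_gen():
    # Yield a list of input batches per call; replace the random data with
    # real, preprocessed calibration images.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Post-training quantization: returns the quantized model and quantization info.
quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    float_model, representative_dataset_gen)

# Export the quantized model; MCT's exporter also supports TFLite targets.
mct.exporter.keras_export_model(model=quantized_model,
                                save_model_path="quantized_model.keras")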