Add yolov8n tutorial with integrated post processing (#855)
* Add a tutorial for the YOLOv8n object detection model with integrated post-processing.
* Align the Nanodet tutorial to use the same utility functions (removing duplication).
* Align both tutorials to run on Google Colab.
Idan-BenAmi authored Nov 12, 2023
1 parent f5aa9ae commit 6753bce
Showing 9 changed files with 1,428 additions and 44 deletions.
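For context on the commit title: "integrated post processing" means that box decoding is built into the model graph itself, so the exported Keras model emits final scores and boxes rather than raw detection-head tensors. The Nanodet diff below follows the same pattern by appending `nanodet_box_decoding` to the backbone outputs. A minimal sketch of the idea, with `decode` standing in for a model-specific decoder (names here are illustrative, not the tutorial's API):

```python
from keras.models import Model

def attach_decoding(backbone: Model, decode) -> Model:
    # Append the decoder so post-processing runs inside the graph:
    # the wrapped model maps images directly to (boxes, scores).
    scores, boxes = decode(backbone.output)
    return Model(inputs=backbone.input, outputs=[boxes, scores])
```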
114 changes: 90 additions & 24 deletions tutorials/notebooks/example_keras_nanodet_plus.ipynb
@@ -30,8 +30,7 @@
"cell_type": "markdown",
"source": [
"## Setup\n",
"Install the relevant packages.\n",
"We assume that the folder 'resources' is cloned from [https://github.com/sony/model_optimization/tree/main/tutorials](https://github.com/sony/model_optimization/tree/main/tutorials) into the user's current directory."
"Install the relevant packages."
],
"metadata": {
"collapsed": false
@@ -46,14 +45,64 @@
"!pip install -q torch\n",
"!pip install -q tensorflow\n",
"!pip install -q pycocotools\n",
"!pip install -q model-compression-toolkit\n",
"!git clone https://github.com/sony/model_optimization/tree/main/tutorials/resources"
"!pip install -q model-compression-toolkit"
],
"metadata": {
"collapsed": false
},
"id": "7c7fa04c9903736f"
},
{
"cell_type": "markdown",
"source": [
"Clone a copy of the MCT (Model Compression Toolkit) into your current directory. This step ensures that you have access to the tutorials resources folder which contains all the necessary utility functions for this tutorial"
],
"metadata": {
"collapsed": false
},
"id": "32eedce88a1e52bd"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!git clone https://github.com/sony/model_optimization.git local_mct\n",
"import sys\n",
"sys.path.insert(0,\"/content/local_mct\")"
],
"metadata": {
"collapsed": false
},
"id": "342eb1e5639e0cb7"
},
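If the clone and `sys.path` insertion above succeeded, the tutorial's utility modules should now resolve. A quick, optional check (the module paths below are exactly the ones imported later in this notebook):

```python
# These imports should succeed once /content/local_mct is on sys.path.
from tutorials.resources.utils.torch2keras_weights_translation import load_state_dict
from tutorials.resources.utils.coco_evaluation import coco_dataset_generator, CocoEval
print('Tutorial resources are importable')
```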
{
"cell_type": "markdown",
"source": [
"Finally, load COCO evaluation set"
],
"metadata": {
"collapsed": false
},
"id": "625cd9bfff9aa210"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!wget -nc http://images.cocodataset.org/annotations/annotations_trainval2017.zip\n",
"!unzip -q -o annotations_trainval2017.zip -d /content/coco\n",
"!echo Done loading annotations\n",
"!wget -nc http://images.cocodataset.org/zips/val2017.zip\n",
"!unzip -q -o val2017.zip -d /content/coco\n",
"!echo Done loading val2017 images"
],
"metadata": {
"collapsed": false
},
"id": "ab47e0b3bbfa4bd9"
},
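An optional sanity check of the download using pycocotools (installed above); if the files extracted correctly, the annotation file parses and val2017 contains 5,000 images:

```python
# Parse the validation annotations and count the images.
from pycocotools.coco import COCO

coco = COCO('/content/coco/annotations/instances_val2017.json')
print(f'val2017 images: {len(coco.getImgIds())}')  # expected: 5000
```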
{
"cell_type": "markdown",
"id": "084c2b8b-3175-4d46-a18a-7c4d8b6fcb38",
@@ -62,7 +111,7 @@
"## Floating Point Model\n",
"\n",
"### Load the pre-trained weights of Nanodet-Plus\n",
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet/tree/main) into the current directory, otherwise, specify the correct file path."
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet#model-zoo) and upload them into the '/content' folder on your drive, otherwise, specify the correct file path."
]
},
{
@@ -74,7 +123,7 @@
"source": [
"import torch\n",
"\n",
"PRETRAINED_WEIGHTS_FILE = 'nanodet-plus-m-1.5x_416.pth'\n",
"PRETRAINED_WEIGHTS_FILE = '/content/nanodet-plus-m-1.5x_416.pth'\n",
"pretrained_weights = torch.load(PRETRAINED_WEIGHTS_FILE, map_location=torch.device('cpu'))['state_dict']"
]
},
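Before translating the checkpoint to Keras, it can help to peek at the loaded state dict to confirm the file was read correctly (purely illustrative):

```python
# Print a few parameter names and shapes from the PyTorch state dict.
for name, tensor in list(pretrained_weights.items())[:5]:
    print(name, tuple(tensor.shape))
print(f'total tensors: {len(pretrained_weights)}')
```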
@@ -99,7 +148,8 @@
"source": [
"import tensorflow as tf\n",
"from keras.models import Model\n",
"from resources.nanodet_keras_model import nanodet_plus_m, nanodet_box_decoding, set_model_weights\n",
"from tutorials.resources.nanodet.nanodet_keras_model import nanodet_plus_m, nanodet_box_decoding\n",
"from tutorials.resources.utils.torch2keras_weights_translation import load_state_dict\n",
"\n",
"# Parameters of nanodet-plus-m-1.5x_416\n",
"INPUT_RESOLUTION = 416\n",
@@ -112,7 +162,7 @@
"model = nanodet_plus_m(INPUT_SHAPE, SCALE_FACTOR, BOTTLENECK_RATIO, FEATURE_CHANNELS)\n",
"\n",
"# Set the pre-trained weights\n",
"set_model_weights(model, pretrained_weights)\n",
"load_state_dict(model, state_dict_torch=pretrained_weights)\n",
"\n",
"# Add Nanodet Box decoding layer (decode the model outputs to bounding box coordinates)\n",
"scores, boxes = nanodet_box_decoding(model.output, res=INPUT_RESOLUTION)\n",
@@ -157,10 +207,11 @@
"outputs": [],
"source": [
"import cv2\n",
"from resources.coco_evaluation import coco_dataset_generator, CocoEval\n",
"from tutorials.resources.utils.coco_evaluation import coco_dataset_generator, CocoEval\n",
"\n",
"EVAL_DATASET_FOLDER = '/content/coco/val2017'\n",
"EVAL_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"\n",
"EVAL_DATASET_FOLDER = '/path/to/coco/training/images/val2017'\n",
"EVAL_DATASET_ANNOTATION_FILE = '/path/to/coco/annotations/instances_val2017.json'\n",
"BATCH_SIZE = 5\n",
"\n",
"def nanodet_preprocess(x):\n",
@@ -203,7 +254,7 @@
"\n",
"### Post training quantization using Model Compression Toolkit \n",
"\n",
"Now we are ready to use MCT's post training quantization! We will define a representative dataset based on the training dataset and preform the model quantization. We will use 100 representative images for calibration (20 iterations of \"batch_size\" images each).\n",
"Now we are ready to use MCT's post training quantization! We will define a representative dataset and proceed with the model quantization. Please note that, for the sake of demonstration, we'll use the evaluation dataset as our representative dataset (and skip the download of the training dataset). We will use 100 representative images for calibration (20 iterations of \"batch_size\" images each).\n",
"Same as the above section, please ensure that the dataset path has been set correctly."
]
},
@@ -215,30 +266,45 @@
"outputs": [],
"source": [
"import model_compression_toolkit as mct\n",
"from typing import Iterator, Tuple, List\n",
"\n",
"TRAIN_DATASET_FOLDER = '/path/to/coco/training/images/train2017'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/path/to/coco/annotations/instances_train2017.json'\n",
"REPRESENTATIVE_DATASET_FOLDER = '/content/coco/val2017'\n",
"REPRESENTATIVE_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"n_iters = 20\n",
"\n",
"# Load COCO train set\n",
"train_dataset = coco_dataset_generator(dataset_folder=TRAIN_DATASET_FOLDER,\n",
" annotation_file=TRAIN_DATASET_ANNOTATION_FILE,\n",
" preprocess=nanodet_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"# Load representative dataset\n",
"representative_dataset = coco_dataset_generator(dataset_folder=REPRESENTATIVE_DATASET_FOLDER,\n",
" annotation_file=REPRESENTATIVE_DATASET_ANNOTATION_FILE,\n",
" preprocess=nanodet_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"\n",
"# Define representative dataset generator\n",
"def get_representative_dataset(n_iter, train_loader):\n",
"\n",
" def representative_dataset():\n",
" ds_iter = iter(train_loader)\n",
"def get_representative_dataset(n_iter: int, dataset_loader: Iterator[Tuple]):\n",
" \"\"\"\n",
" This function creates a representative dataset generator.\n",
" \n",
" Args:\n",
" n_iter: number of iterations for MCT to calibrate on\n",
" Returns:\n",
" A representative dataset generator\n",
" \"\"\" \n",
" def representative_dataset() -> Iterator[List]:\n",
" \"\"\"\n",
" Creates a representative dataset generator from a PyTorch data loader, The generator yields numpy\n",
" arrays of batches of shape: [Batch, H, W ,C].\n",
" \n",
" Returns:\n",
" A representative dataset generator\n",
" \"\"\"\n",
" ds_iter = iter(dataset_loader)\n",
" for _ in range(n_iter):\n",
" yield [next(ds_iter)[0]]\n",
"\n",
" return representative_dataset\n",
"\n",
"# Preform post training quantization \n",
"quant_model, _ = mct.ptq.keras_post_training_quantization_experimental(model,\n",
" get_representative_dataset(n_iters, train_dataset))\n",
" get_representative_dataset(n_iters, representative_dataset))\n",
"\n",
"print('Quantized model is ready')"
]
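The quantized model can be exercised like any Keras model. A minimal sanity check, assuming `coco_dataset_generator` yields `(images, annotations)` batches as in the cells above (a fresh loader is built in case the previous one was exhausted during calibration):

```python
import tensorflow as tf

# Rebuild a small loader and push one batch through both models,
# comparing output structures as a quick consistency check.
sanity_loader = coco_dataset_generator(dataset_folder=REPRESENTATIVE_DATASET_FOLDER,
                                       annotation_file=REPRESENTATIVE_DATASET_ANNOTATION_FILE,
                                       preprocess=nanodet_preprocess,
                                       batch_size=BATCH_SIZE)
images = next(iter(sanity_loader))[0]
print(tf.nest.map_structure(lambda t: tuple(t.shape), model(images)))
print(tf.nest.map_structure(lambda t: tuple(t.shape), quant_model(images)))
```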