Add yolov8n tutorial with integrated post processing
Idan-BenAmi committed Nov 8, 2023
1 parent b39da7e commit daeeac5
Showing 2 changed files with 74 additions and 45 deletions.
67 changes: 59 additions & 8 deletions tutorials/notebooks/example_keras_nanodet_plus.ipynb
@@ -54,6 +54,57 @@
},
"id": "7c7fa04c9903736f"
},
{
"cell_type": "markdown",
"source": [
"Clone a copy of the MCT (Model Compression Toolkit) into your current directory. This step ensures that you have access to the tutorials resources folder which contains all the necessary utility functions for this tutorial"
],
"metadata": {
"collapsed": false
},
"id": "32eedce88a1e52bd"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!git clone https://github.com/sony/model_optimization.git local_mct\n",
"import sys\n",
"sys.path.insert(0,\"/content/local_mct\")"
],
"metadata": {
"collapsed": false
},
"id": "342eb1e5639e0cb7"
},
{
"cell_type": "markdown",
"source": [
"Finally, load COCO evaluation set"
],
"metadata": {
"collapsed": false
},
"id": "625cd9bfff9aa210"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!wget -nc http://images.cocodataset.org/annotations/annotations_trainval2017.zip\n",
"!unzip -q -o annotations_trainval2017.zip -d /content/coco\n",
"!echo Done loading annotations\n",
"!wget -nc http://images.cocodataset.org/zips/val2017.zip\n",
"!unzip -q -o val2017.zip -d /content/coco\n",
"!echo Done loading val2017 images"
],
"metadata": {
"collapsed": false
},
"id": "ab47e0b3bbfa4bd9"
},
{
"cell_type": "markdown",
"id": "084c2b8b-3175-4d46-a18a-7c4d8b6fcb38",
@@ -62,7 +113,7 @@
"## Floating Point Model\n",
"\n",
"### Load the pre-trained weights of Nanodet-Plus\n",
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet/tree/main) into the current directory, otherwise, specify the correct file path."
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet#model-zoo) into the current directory, otherwise, specify the correct file path."
]
},
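The `torch.load` call itself sits outside this hunk. Based on the description above, it presumably resembles the sketch below; the checkpoint filename and the `'state_dict'` key are assumptions based on how NanoDet model-zoo checkpoints are usually packaged, not code from this commit:

```python
import torch

# Hypothetical checkpoint path; point this at the model-zoo download.
PRETRAINED_WEIGHTS_FILE = './nanodet-plus-m-1.5x-416_checkpoint.ckpt'

# NanoDet releases Lightning-style .ckpt files; the weights are assumed
# to sit under a 'state_dict' key.
pretrained_weights = torch.load(PRETRAINED_WEIGHTS_FILE,
                                map_location=torch.device('cpu'))['state_dict']
```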
{
@@ -99,7 +150,7 @@
"source": [
"import tensorflow as tf\n",
"from keras.models import Model\n",
"from resources.nanodet_keras_model import nanodet_plus_m, nanodet_box_decoding, set_model_weights\n",
"from tutorials.resources.nanodet.nanodet_keras_model import nanodet_plus_m, nanodet_box_decoding, set_model_weights\n",
"\n",
"# Parameters of nanodet-plus-m-1.5x_416\n",
"INPUT_RESOLUTION = 416\n",
@@ -157,10 +208,10 @@
"outputs": [],
"source": [
"import cv2\n",
"from resources.coco_evaluation import coco_dataset_generator, CocoEval\n",
"from tutorials.resources.utils.coco_evaluation import coco_dataset_generator, CocoEval\n",
"\n",
"EVAL_DATASET_FOLDER = '/path/to/coco/training/images/val2017'\n",
"EVAL_DATASET_ANNOTATION_FILE = '/path/to/coco/annotations/instances_val2017.json'\n",
"EVAL_DATASET_FOLDER = '/content/coco/val2017'\n",
"EVAL_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"BATCH_SIZE = 5\n",
"\n",
"def nanodet_preprocess(x):\n",
@@ -203,7 +254,7 @@
"\n",
"### Post training quantization using Model Compression Toolkit \n",
"\n",
"Now we are ready to use MCT's post training quantization! We will define a representative dataset based on the training dataset and preform the model quantization. We will use 100 representative images for calibration (20 iterations of \"batch_size\" images each).\n",
"Now we are ready to use MCT's post training quantization! We will define a representative dataset and proceed with the model quantization. Please note that, for the sake of demonstration, we'll use the evaluation dataset as our representative dataset (and skip the download of the training dataset). We will use 100 representative images for calibration (20 iterations of \"batch_size\" images each).\n",
"Same as the above section, please ensure that the dataset path has been set correctly."
]
},
@@ -216,8 +267,8 @@
"source": [
"import model_compression_toolkit as mct\n",
"\n",
"TRAIN_DATASET_FOLDER = '/path/to/coco/training/images/train2017'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/path/to/coco/annotations/instances_train2017.json'\n",
"TRAIN_DATASET_FOLDER = '/content/coco/val2017'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"n_iters = 20\n",
"\n",
"# Load COCO train set\n",
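The quantization cell is truncated at this point in the diff. For context, a minimal sketch of how a representative dataset generator is typically wrapped for MCT; the `coco_dataset_generator` signature is assumed from the imports above, not confirmed by this commit:

```python
# Representative data loader over the evaluation set (as explained above).
representative_dataset = coco_dataset_generator(
    dataset_folder=TRAIN_DATASET_FOLDER,
    annotation_file=TRAIN_DATASET_ANNOTATION_FILE,
    preprocess=nanodet_preprocess,
    batch_size=BATCH_SIZE)

# MCT expects a callable that yields a list of input arrays per iteration.
def get_representative_dataset(n_iter):
    def representative_dataset_gen():
        ds_iter = iter(representative_dataset)
        for _ in range(n_iter):
            yield [next(ds_iter)[0]]
    return representative_dataset_gen
```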
52 changes: 15 additions & 37 deletions tutorials/notebooks/example_keras_yolov8n.ipynb
@@ -5,7 +5,7 @@
"id": "4c261298-309f-41e8-9338-a5e205f09b05",
"metadata": {},
"source": [
"# Post Training Quantization a Yolo8-nano Object Detection Model\n",
"# Post Training Quantization a YoloV8-nano Object Detection Model\n",
"\n",
"[Run this tutorial in Google Colab](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/example_keras_nanodet_plus.ipynb)\n",
"\n",
@@ -14,7 +14,7 @@
"\n",
"In this tutorial, we'll demonstrate the post-training quantization using MCT for a pre-trained object detection model in Keras. Specifically, we'll integrate post-processing, including the non-maximum suppression (NMS) layer, into the model. This integration aligns with the imx500 target platform capabilities.\n",
"\n",
"In this example we will use an existing pre-trained Yolo8-nano model taken from [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics). We will convert the model to a Tensorflow model that includes box decoding and NMS layer. Further, we will quantize the model using MCT post training quantization and evaluate the performance of the floating point model and the quantized model on COCO dataset.\n",
"In this example we will use an existing pre-trained YoloV8-nano model taken from [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics). We will convert the model to a Tensorflow model that includes box decoding and NMS layer. Further, we will quantize the model using MCT post training quantization and evaluate the performance of the floating point model and the quantized model on COCO dataset.\n",
"\n",
"\n",
"## Summary\n",
@@ -45,6 +45,7 @@
"!pip install -q torch\n",
"!pip install -q tensorflow\n",
"!pip install -q pycocotools\n",
"!pip install -q ultralytics\n",
"!pip install -q model-compression-toolkit"
],
"metadata": {
@@ -103,28 +104,6 @@
},
"id": "8bea492d71b4060f"
},
{
"cell_type": "markdown",
"source": [
"Lastly, download the pre-trained weights of `YOLOv8n` from [Ultralytics](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) "
],
"metadata": {
"collapsed": false
},
"id": "8061596ccedc6214"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!wget -nc https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt"
],
"metadata": {
"collapsed": false
},
"id": "e49dc92d2fe498bf"
},
{
"cell_type": "markdown",
"id": "084c2b8b-3175-4d46-a18a-7c4d8b6fcb38",
@@ -133,7 +112,7 @@
"## Floating Point Model\n",
"\n",
"### Load the pre-trained weights of Yolo8-nano\n",
"We begin by loading the pre-trained weights of `YOLOv8n` using `torch.load`, as the original model is in PyTorch format. Please make sure the pre-trained weights are located in the `content` directory or specify the correct path."
"We begin by loading the pre-trained weights of `YOLOv8n` from [Ultralytics](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) using `torch.load`, as the original model is in PyTorch format. Please make sure the pre-trained weights are located in the `content` directory or specify the correct path."
]
},
{
Expand All @@ -143,10 +122,11 @@
"metadata": {},
"outputs": [],
"source": [
"!wget -nc https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt\n",
"import torch\n",
"\n",
"PRETRAINED_WEIGHTS_FILE = '/content/yolov8n.pt'\n",
"pretrained_weights = torch.load(PRETRAINED_WEIGHTS_FILE, map_location=torch.device('cpu'))"
"pretrained_weights = torch.load(PRETRAINED_WEIGHTS_FILE)['model'].state_dict()"
]
},
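Note that the new cell drops the `map_location` argument the previous version passed to `torch.load`; on a CPU-only runtime, loading a GPU-saved checkpoint can fail without it. A defensive variant (same behaviour otherwise):

```python
import torch

# Load the Ultralytics checkpoint onto CPU regardless of where it was saved,
# then extract the underlying nn.Module's state dict.
ckpt = torch.load('/content/yolov8n.pt', map_location=torch.device('cpu'))
pretrained_weights = ckpt['model'].state_dict()
```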
{
Expand All @@ -173,23 +153,23 @@
"from tutorials.resources.utils.torch2keras_weights_translation import load_state_dict\n",
"from tutorials.resources.yolov8.yolov8_keras import yolov8_keras\n",
"\n",
"# Parameters of nanodet-plus-m-1.5x_416\n",
"# Parameter of Yolov8n\n",
"INPUT_RESOLUTION = 640\n",
"\n",
"# Generate Yolov8n model \n",
"model = yolov8_keras('/content/local_mct/model_optimization/tutorials/resources/yolov8/yolov8n.yaml', INPUT_RESOLUTION)\n",
"model = yolov8_keras('/content/local_mct/tutorials/resources/yolov8/yolov8n.yaml', INPUT_RESOLUTION)\n",
"\n",
"# Set the pre-trained weights\n",
"load_state_dict(model, state_dict_torch=pretrained_weights)\n",
"\n",
"# Add Tensorflow NMS layer\n",
"boxes, scores = model.output\n",
"boxes, scores = model.output\n",
"outputs = tf.image.combined_non_max_suppression(\n",
" boxes,\n",
" scores,\n",
" max_output_size_per_class=300,\n",
" max_total_size=300,\n",
" iou_threshold=0.65,\n",
" iou_threshold=0.7,\n",
" score_threshold=0.001,\n",
" pad_per_class=False,\n",
" clip_boxes=False\n",
@@ -211,7 +191,7 @@
"source": [
"#### Evaluate the floating point model\n",
"Next, we evaluate the floating point model by using `cocoeval` library alongside additional dataset utilities. We can verify the mAP accuracy aligns with that of the original model. \n",
"Note that we set the \"batch_size\" to 5 and the preprocessing according to [Nanodet](https://github.com/RangiLyu/nanodet/tree/main).\n",
"Note that we set the \"batch_size\" to 5 and the preprocessing according to [Ultralytics](https://github.com/ultralytics/ultralytics).\n",
"Please ensure that the dataset path has been set correctly before running this code cell."
]
},
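The evaluation cell below is truncated in this view. A minimal sketch of the typical flow with these utilities; the `CocoEval` method names here are assumptions for illustration, not the notebook's actual code:

```python
# Hypothetical evaluation loop; CocoEval's interface is assumed.
val_dataset = coco_dataset_generator(
    dataset_folder=EVAL_DATASET_FOLDER,
    annotation_file=EVAL_DATASET_ANNOTATION_FILE,
    preprocess=yolov8_preprocess,
    batch_size=BATCH_SIZE)

coco_metric = CocoEval(EVAL_DATASET_ANNOTATION_FILE)
for images, targets in val_dataset:
    outputs = model(images)  # NMS outputs: boxes, scores, classes, counts
    coco_metric.add_batch_detections(outputs, targets)
print('Floating point mAP:', coco_metric.result())
```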
@@ -220,12 +200,10 @@
"execution_count": null,
"outputs": [],
"source": [
"import cv2\n",
"import numpy as np\n",
"from tutorials.resources.coco_evaluation import coco_dataset_generator, CocoEval\n",
"from tutorials.resources.utils.coco_evaluation import coco_dataset_generator, CocoEval\n",
"from tutorials.resources.yolov8.yolov8_keras import yolov8_preprocess\n",
"\n",
"EVAL_DATASET_FOLDER = '/content/coco/images/val2017'\n",
"EVAL_DATASET_FOLDER = '/content/coco/val2017'\n",
"EVAL_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"BATCH_SIZE = 5\n",
"\n",
@@ -269,7 +247,7 @@
"\n",
"### Post training quantization using Model Compression Toolkit \n",
"\n",
"Now we are ready to use MCT's post training quantization! We will define a representative dataset based on the training dataset and preform the model quantization. We will use 100 representative images for calibration (20 iterations of \"batch_size\" images each).\n",
"Now, we're all set to use MCT's post-training quantization. To begin, we'll define a representative dataset and proceed with the model quantization. Please note that, for demonstration purposes, we'll use the evaluation dataset as our representative dataset. We'll calibrate the model using 100 representative images, divided into 20 iterations of 'batch_size' images each.\n",
"Same as the above section, please ensure that the dataset path has been set correctly."
]
},
@@ -282,7 +260,7 @@
"source": [
"import model_compression_toolkit as mct\n",
"\n",
"TRAIN_DATASET_FOLDER = '/content/coco/images/val2017/'\n",
"TRAIN_DATASET_FOLDER = '/content/coco/val2017/'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"n_iters = 20\n",
"\n",
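The diff ends before the quantization call itself. For reference, a minimal sketch of a Keras PTQ invocation with MCT, reusing a representative-dataset wrapper like the one sketched for the NanoDet notebook above; the exact entry-point name varies across MCT releases, so treat it as an assumption:

```python
import model_compression_toolkit as mct

# Post-training quantization: 20 iterations x 5 images = 100 calibration images.
quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    model,
    get_representative_dataset(n_iters))
```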
