
Cup class detection project

See on YouTube: Detection of damaged cup classes

Main idea

The main concept is to create a system for checking the quality of packaging of food products, such as yoghurts, on production lines. The project aims to automate the quality control process with an intelligent vision system based on image processing and convolutional neural networks. The main task of the network is to detect dirty, damaged, and otherwise unmarketable items. Placing a camera next to the conveyor belt would allow individual defective pieces to be rejected as they exit the machine, before they reach the carton.

The system is controlled by a Raspberry Pi single-board computer with an attached camera. A database of product photos will be created and used to train the neural network model. In the first phase, the project will be implemented in a simulated test environment, and later on the packing machine itself. The work focuses on finding optimal operating conditions for the system, such as the use of additional light sources, suitable neural network architectures, and deployment in an industrial environment.


Source Code

Open all notebooks in Google Colab.

First Attempt - object classification

See on YouTube: Classification of damaged products

TODO

Database

  • Create the first database (1,000 photos, 4 classes)
  • Use Roboflow to store the database
  • Annotate images with Pascal VOC labels (see the parsing sketch after this list)
  • Extend the photo database: cups with different labels, with and without additional lighting, on different backgrounds, and in various configurations
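
A minimal sketch of reading one Pascal VOC annotation from this database, assuming standard VOC XML files (e.g. as exported from Roboflow); the file name and class names below are placeholders, not files from this repository.

import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    # Return (class name, xmin, ymin, xmax, ymax) for every labelled object.
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text  # e.g. "damaged" or "clean" (assumed class names)
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(bb.find(tag).text))
                                  for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes

print(read_voc_annotation("cup_0001.xml"))  # placeholder file name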

Model training

  • Prepare the first CNN model
  • Use pretrained models and Google Colab
  • Prepare scripts in Jupyter Notebook and Google Colab for the TF2 Detection Model Zoo and YOLOv4 training (see the configuration sketch after this list)
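
The sketch below illustrates the fine-tuning setup for a model downloaded from the TF2 Detection Model Zoo: the pipeline.config shipped with the model is edited for the custom cup dataset. The paths, class count, and batch size are assumptions for illustration, not values from this project.

import tensorflow as tf
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

# Placeholder path to the config that ships with the downloaded model
config_path = "ssd_mobilenet_v2_fpnlite_320x320/pipeline.config"

pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(config_path, "r") as f:
    text_format.Merge(f.read(), pipeline)

pipeline.model.ssd.num_classes = 4                                # assumed number of classes
pipeline.train_config.batch_size = 16                             # assumed batch size
pipeline.train_config.fine_tune_checkpoint = "checkpoint/ckpt-0"  # placeholder checkpoint
pipeline.train_config.fine_tune_checkpoint_type = "detection"
pipeline.train_input_reader.label_map_path = "label_map.pbtxt"    # placeholder label map
pipeline.train_input_reader.tf_record_input_reader.input_path[:] = ["train.record"]
pipeline.eval_input_reader[0].label_map_path = "label_map.pbtxt"
pipeline.eval_input_reader[0].tf_record_input_reader.input_path[:] = ["val.record"]

with tf.io.gfile.GFile(config_path, "w") as f:
    f.write(text_format.MessageToString(pipeline))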

Script for camera

  • Prepare the camera script in an Anaconda environment
  • Run the TFLite model
  • Show the FPS value on the video stream (see the camera-loop sketch after this list)
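
A minimal sketch of the camera loop with an FPS overlay, assuming OpenCV and a TFLite detector exported from this project; the model file name and the webcam index are placeholders.

import time
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture(0)  # assumed camera index
prev = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    data = np.expand_dims(cv2.resize(frame, (width, height)), axis=0)
    if inp["dtype"] == np.float32:  # float models expect normalised input
        data = (np.float32(data) - 127.5) / 127.5
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()  # detection boxes could be read and drawn here as well

    now = time.time()
    fps = 1.0 / (now - prev)
    prev = now
    cv2.putText(frame, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Cup detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()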

Testing and maintaining script on Raspberry PI

  • Migrate the model to the Raspberry Pi (see the conversion sketch after this list)
  • Choose the most efficient solution: SSD MobileNet V2 FPNLite 320x320
  • Deploy an anti-crash solution and automatic script start-up after reboot
  • Try to run the script as a service and show the video stream in a sample API (separate project)
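
One way to migrate the trained detector to the Raspberry Pi is to export the checkpoint with the Object Detection API's TFLite export script and then convert the resulting SavedModel to a .tflite file, as sketched below; all paths are assumptions for illustration.

# Colab cell 1: export a TFLite-compatible SavedModel from the trained checkpoint
!python /content/models/research/object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path /content/training/pipeline.config \
    --trained_checkpoint_dir /content/training \
    --output_directory /content/exported_tflite

# Colab cell 2: convert the SavedModel to a .tflite file for the Raspberry Pi
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/content/exported_tflite/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimisation
with open("detect.tflite", "wb") as f:
    f.write(converter.convert())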

Finally

  • Earn a Master of Science degree :)

Model Training

Install Object Detection API


import os
import pathlib

# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
  while "models" in pathlib.Path.cwd().parts:
    os.chdir('..')
elif not pathlib.Path('models').exists():
  !git clone --depth 1 https://github.com/tensorflow/models


%%bash
# Install the Object Detection API (run this as its own Colab cell)
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .

# Run the model builder test in a separate cell to verify the installation
!python /content/models/research/object_detection/builders/model_builder_tf2_test.py
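
After the installation succeeds, fine-tuning can be started with the Object Detection API's standard training script; the sketch below assumes placeholder paths for the adjusted pipeline.config and the training directory.

# Start fine-tuning (placeholder paths; adjust to the actual dataset and model)
!python /content/models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path=/content/training/pipeline.config \
    --model_dir=/content/training \
    --alsologtostderr

# Evaluate the latest checkpoint with the same script in a separate cell
!python /content/models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path=/content/training/pipeline.config \
    --model_dir=/content/training \
    --checkpoint_dir=/content/training \
    --alsologtostderr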

Performance

Results on the test set (resolution, average accuracy, and FPS):

Detection model    EfficientDet   SSD MobileNet v2   SSD MobileNet v2
Resolution         512x512        640x640            320x320
Avg. Accuracy      61.96          57.33
FPS                61.96          57.33

Example factor: lighting.

Before applying additional light:

After applying additional light:

References

  • Bochkovskiy, A., Wang, C.-Y., Liao, H.-Y. M.: YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv:2004.10934, 2020.

About

My master's thesis project repository: Intelligent vision system for product quality inspection.
