diff --git a/LICENSE b/LICENSE
index e5609a0..c1f68ce 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
MIT License
-Copyright (c) 2019 JetsonHacks
+Copyright (c) 2019-2022 JetsonHacks
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
diff --git a/README.md b/README.md
old mode 100644
new mode 100755
index 8caefcf..bb5905d
--- a/README.md
+++ b/README.md
@@ -1,9 +1,9 @@
# CSI-Camera
-Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Nano Developer Kit. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v
+Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with NVIDIA Jetson Developer Kits that have CSI camera ports, including the Jetson Nano and the Jetson Xavier NX. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v
-The camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson Nano module, the stripe faces outward.
+For the Nanos and Xavier NX, the camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson module, the tape stripe faces outward.
-The new Jetson Nano B01 developer kit has two CSI camera slots. You can use the sensor_mode attribute with nvarguscamerasrc to specify the camera. Valid values are 0 or 1 (the default is 0 if not specified), i.e.
+Some Jetson developer kits have two CSI camera slots. You can use the sensor_id attribute with the GStreamer nvarguscamerasrc element to specify which camera. Valid values are 0 or 1 (the default is 0 if not specified), i.e.
```
nvarguscamerasrc sensor_id=0
@@ -21,63 +21,67 @@ $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
# Example also shows sensor_mode parameter to nvarguscamerasrc
# See table below for example video modes of example sensor
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
- 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! \
- nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=720' ! \
- nvvidconv ! nvegltransform ! nveglglessink -e
-
-Note: The cameras appear to report differently than show below on some Jetsons. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using. As an example the same camera from below may report differently on a Jetson Nano B01:
-
-GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000
-
-You should adjust accordingly. As an example, for 3264x2464 @ 21 fps on sensor_id 1 of a Jetson Nano B01:
-$ gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! \
- 'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! \
- nvvidconv flip-method=0 ! 'video/x-raw, width=816, height=616' ! \
+ 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
+ nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
nvvidconv ! nvegltransform ! nveglglessink -e
+```
-Also, it's been noticed that the display transform is sensitive to width and height (in the above example, width=816, height=616). If you experience issues, check to see if your display width and height is the same ratio as the camera frame size selected (In the above example, 816x616 is 1/4 the size of 3264x2464).
+Note: The cameras may report differently than shown below. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using.
+```
+GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
```
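+For example, the sensor modes are printed when the camera starts, even with no display attached (press Ctrl-C to stop; the exact listing varies by sensor and L4T release):
+```
+$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! fakesink
+```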
-There are several examples:
+Also, the display transform may be sensitive to width and height (in the above example, width=960, height=540). If you experience issues, check that your display width and height are in the same ratio as the camera frame size selected (in the above example, 960x540 is 1/4 the size of 1920x1080); a quick check is sketched below.
+
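+A quick way to pick a matching display size is to scale the capture size by a single factor. This is a minimal sketch; the helper below is illustrative and not part of this repository:
+```
+# Compute a display size that keeps the camera's aspect ratio.
+def scaled_size(capture_width, capture_height, scale):
+    return (int(capture_width * scale), int(capture_height * scale))
+
+print(scaled_size(1920, 1080, 0.5))  # (960, 540) -- same 16:9 ratio
+```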
-Note: You may need to install numpy for the Python examples to work, ie $ pip3 install numpy
+## Samples
-simple_camera.py is a Python script which reads from the camera and displays to a window on the screen using OpenCV:
+### simple_camera.py
+simple_camera.py is a Python script which reads from the camera and displays the frame in a window on the screen using OpenCV:
+```
$ python simple_camera.py
+```
+### face_detect.py
face_detect.py is a Python script which reads from the camera and uses Haar Cascades to detect faces and eyes:
-
+```
$ python face_detect.py
-
+```
Haar Cascades is a machine-learning-based approach in which a cascade function is trained on a large set of positive and negative images. The trained function is then used to detect objects in other images.
See: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
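+In miniature, the OpenCV calls look like this (a sketch assuming the stock JetPack cascade location used elsewhere in this repository; the input image name is illustrative):
+```
+import cv2
+
+# Load a pretrained cascade and run it over a grayscale image
+face_cascade = cv2.CascadeClassifier(
+    "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
+)
+img = cv2.imread("people.jpg")  # illustrative input image
+gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+# scaleFactor=1.3, minNeighbors=5 -- the same tuning face_detect.py uses
+faces = face_cascade.detectMultiScale(gray, 1.3, 5)
+print("Faces found:", len(faces))
+```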
-The third example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:
+### dual_camera.py
+Note: You will need to install numpy for the Dual Camera Python example to work:
```
-$ g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
-
-$ ./simple_camera
+$ pip3 install numpy
```
-
-The final example is dual_camera.py. This example is for the newer rev B01 of the Jetson Nano board, identifiable by two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in a window. The window is 960x1080. For performance, the script uses a separate thread for reading each camera image stream. To run the script:
+This example is for the newer Jetson boards (Jetson Nano B01, Jetson Xavier NX) with two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in one window. The window is 1920x540. For performance, the script uses a separate thread for reading each camera image stream. To run the script:
```
$ python3 dual_camera.py
```
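+The side-by-side compositing is plain NumPy: two 960x540 frames stacked horizontally give one 1920x540 image. A minimal sketch:
+```
+import numpy as np
+
+left = np.zeros((540, 960, 3), dtype=np.uint8)   # stand-in for a camera frame
+right = np.zeros((540, 960, 3), dtype=np.uint8)
+print(np.hstack((left, right)).shape)  # (540, 1920, 3)
+```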
-The directory 'instrumented' contains instrumented code which can help adjust performance and frame rates.
+### simple_camera.cpp
+The last example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:
+
+```
+$ g++ -std=c++11 -Wall -I/usr/lib/opencv -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
+
+$ ./simple_camera
+```
+This program is a simple outline and does not handle all of the needed error checking. For more robust C++ code, see https://github.com/dusty-nv/jetson-utils
## Notes
### Camera Image Formats
You can use v4l2-ctl to determine the camera capabilities. v4l2-ctl is in the v4l-utils package:
-
+```
$ sudo apt-get install v4l-utils
-
-For the Raspberry Pi V2 camera, typically the output is (assuming the camera is /dev/video0):
+```
+For the Raspberry Pi V2 camera, a typical output is (assuming the camera is /dev/video0):
```
$ v4l2-ctl --list-formats-ext
@@ -130,6 +134,21 @@ If you can open the camera in GStreamer from the command line, and have issues o
## Release Notes
+v3.2 Release January, 2022
+* Add Exception handling to Python code
+* Faster GStreamer pipelines, better performance
+* Better naming of variables, simplification
+* Remove Instrumented examples
+* L4T 32.6.1 (JetPack 4.6)
+* OpenCV 4.4.1
+* Python3
+* Tested on Jetson Nano B01, Jetson Xavier NX
+* Tested with Raspberry Pi V2 cameras
+
+
+v3.11 Release April, 2020
+* Release both cameras in dual camera example (bug-fix)
+
v3.1 Release March, 2020
* L4T 32.3.1 (JetPack 4.3)
* OpenCV 4.1.1
diff --git a/dual_camera.py b/dual_camera.py
old mode 100644
new mode 100755
index 43516ba..81299ed
--- a/dual_camera.py
+++ b/dual_camera.py
@@ -1,33 +1,24 @@
# MIT License
-# Copyright (c) 2019,2020 JetsonHacks
-# See license
-# A very simple code snippet
+# Copyright (c) 2019-2022 JetsonHacks
+
+# A simple code snippet
# Using two CSI cameras (such as the Raspberry Pi Version 2) connected to a
-# NVIDIA Jetson Nano Developer Kit (Rev B01) using OpenCV
+# NVIDIA Jetson Developer Kit with two CSI ports (Jetson Nano B01, Jetson Xavier NX) via OpenCV
# Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+
# This script will open a window and place the camera stream from each camera in a window
# arranged horizontally.
# The camera streams are each read in their own thread, as when done sequentially there
# is a noticeable lag
-# For better performance, the next step would be to experiment with having the window display
-# in a separate thread
import cv2
import threading
import numpy as np
-# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Flip the image by setting the flip_method (most common values: 0 and 2)
-# display_width and display_height determine the size of each camera pane in the window on the screen
-
-left_camera = None
-right_camera = None
-
class CSI_Camera:
- def __init__ (self) :
+ def __init__(self):
# Initialize instance variables
# OpenCV video capture element
self.video_capture = None
@@ -39,20 +30,19 @@ def __init__ (self) :
self.read_lock = threading.Lock()
self.running = False
-
def open(self, gstreamer_pipeline_string):
try:
self.video_capture = cv2.VideoCapture(
gstreamer_pipeline_string, cv2.CAP_GSTREAMER
)
-
+ # Grab the first frame to start the video capturing
+ self.grabbed, self.frame = self.video_capture.read()
+
except RuntimeError:
self.video_capture = None
print("Unable to open camera")
print("Pipeline: " + gstreamer_pipeline_string)
- return
- # Grab the first frame to start the video capturing
- self.grabbed, self.frame = self.video_capture.read()
+
def start(self):
if self.running:
@@ -60,14 +50,16 @@ def start(self):
return None
# create a thread to read the camera image
if self.video_capture != None:
- self.running=True
+ self.running = True
self.read_thread = threading.Thread(target=self.updateCamera)
self.read_thread.start()
return self
def stop(self):
- self.running=False
+ self.running = False
+ # Kill the thread
self.read_thread.join()
+ self.read_thread = None
def updateCamera(self):
# This is the thread to read images from the camera
@@ -75,18 +67,17 @@ def updateCamera(self):
try:
grabbed, frame = self.video_capture.read()
with self.read_lock:
- self.grabbed=grabbed
- self.frame=frame
+ self.grabbed = grabbed
+ self.frame = frame
except RuntimeError:
print("Could not read image from camera")
# FIX ME - stop and cleanup thread
# Something bad happened
-
def read(self):
with self.read_lock:
frame = self.frame.copy()
- grabbed=self.grabbed
+ grabbed = self.grabbed
return grabbed, frame
def release(self):
@@ -98,30 +89,32 @@ def release(self):
self.read_thread.join()
-# Currently there are setting frame rate on CSI Camera on Nano through gstreamer
-# Here we directly select sensor_mode 3 (1280x720, 59.9999 fps)
+"""
+gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
+Flip the image by setting the flip_method (most common values: 0 and 2)
+display_width and display_height determine the size of each camera pane in the window on the screen
+Default is 1920x1080
+"""
+
+
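+# Example (illustrative): gstreamer_pipeline(sensor_id=1, flip_method=2)
+# builds the capture string for the second camera, rotated 180 degrees.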
def gstreamer_pipeline(
sensor_id=0,
- sensor_mode=3,
- capture_width=1280,
- capture_height=720,
- display_width=1280,
- display_height=720,
+ capture_width=1920,
+ capture_height=1080,
+ display_width=1920,
+ display_height=1080,
framerate=30,
flip_method=0,
):
return (
- "nvarguscamerasrc sensor-id=%d sensor-mode=%d ! "
- "video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
+ "nvarguscamerasrc sensor-id=%d ! "
+ "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
"video/x-raw, format=(string)BGR ! appsink"
% (
sensor_id,
- sensor_mode,
capture_width,
capture_height,
framerate,
@@ -132,15 +125,17 @@ def gstreamer_pipeline(
)
-def start_cameras():
+def run_cameras():
+ window_title = "Dual CSI Cameras"
left_camera = CSI_Camera()
left_camera.open(
gstreamer_pipeline(
sensor_id=0,
- sensor_mode=3,
+ capture_width=1920,
+ capture_height=1080,
flip_method=0,
- display_height=540,
display_width=960,
+ display_height=540,
)
)
left_camera.start()
@@ -149,45 +144,53 @@ def start_cameras():
right_camera.open(
gstreamer_pipeline(
sensor_id=1,
- sensor_mode=3,
+ capture_width=1920,
+ capture_height=1080,
flip_method=0,
- display_height=540,
display_width=960,
+ display_height=540,
)
)
right_camera.start()
- cv2.namedWindow("CSI Cameras", cv2.WINDOW_AUTOSIZE)
-
- if (
- not left_camera.video_capture.isOpened()
- or not right_camera.video_capture.isOpened()
- ):
- # Cameras did not open, or no camera attached
+ if left_camera.video_capture.isOpened() and right_camera.video_capture.isOpened():
- print("Unable to open any cameras")
- # TODO: Proper Cleanup
- SystemExit(0)
+ cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
- while cv2.getWindowProperty("CSI Cameras", 0) >= 0 :
-
- _ , left_image=left_camera.read()
- _ , right_image=right_camera.read()
- camera_images = np.hstack((left_image, right_image))
- cv2.imshow("CSI Cameras", camera_images)
-
- # This also acts as
- keyCode = cv2.waitKey(30) & 0xFF
- # Stop the program on the ESC key
- if keyCode == 27:
- break
+ try:
+ while True:
+ _, left_image = left_camera.read()
+ _, right_image = right_camera.read()
+ # Use numpy to place images next to each other
+ camera_images = np.hstack((left_image, right_image))
+ # Check to see if the user closed the window
+ # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
+ # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
+ if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
+ cv2.imshow(window_title, camera_images)
+ else:
+ break
+
+            # This also acts as a frame limiter
+ keyCode = cv2.waitKey(30) & 0xFF
+ # Stop the program on the ESC key
+ if keyCode == 27:
+ break
+ finally:
+
+ left_camera.stop()
+ left_camera.release()
+ right_camera.stop()
+ right_camera.release()
+ cv2.destroyAllWindows()
+ else:
+ print("Error: Unable to open both cameras")
+ left_camera.stop()
+ left_camera.release()
+ right_camera.stop()
+ right_camera.release()
- left_camera.stop()
- left_camera.release()
- right_camera.stop()
- right_camera.release()
- cv2.destroyAllWindows()
if __name__ == "__main__":
- start_cameras()
+ run_cameras()
diff --git a/face_detect.py b/face_detect.py
old mode 100644
new mode 100755
index e41e2a6..65f279b
--- a/face_detect.py
+++ b/face_detect.py
@@ -1,36 +1,35 @@
# MIT License
-# Copyright (c) 2019 JetsonHacks
+# Copyright (c) 2019-2022 JetsonHacks
# See LICENSE for OpenCV license and additional information
# https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
# On the Jetson Nano, OpenCV comes preinstalled
# Data files are in /usr/share/opencv4
-import numpy as np
+
import cv2
# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Defaults to 1280x720 @ 30fps
+# Defaults to 1920x1080 @ 30fps
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of the window on the screen
-
+# Note that with drop=True, the appsink element drops frames when they arrive faster than they can be processed
def gstreamer_pipeline(
- capture_width=3280,
- capture_height=2464,
- display_width=820,
- display_height=616,
- framerate=21,
+ capture_width=1920,
+ capture_height=1080,
+ display_width=960,
+ display_height=540,
+ framerate=30,
flip_method=0,
):
return (
"nvarguscamerasrc ! "
"video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
+ "width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
- "video/x-raw, format=(string)BGR ! appsink"
+ "video/x-raw, format=(string)BGR ! appsink drop=True"
% (
capture_width,
capture_height,
@@ -43,38 +42,45 @@ def gstreamer_pipeline(
def face_detect():
+ window_title = "Face Detect"
face_cascade = cv2.CascadeClassifier(
"/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
)
eye_cascade = cv2.CascadeClassifier(
"/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
)
- cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
- if cap.isOpened():
- cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
- while cv2.getWindowProperty("Face Detect", 0) >= 0:
- ret, img = cap.read()
- gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- faces = face_cascade.detectMultiScale(gray, 1.3, 5)
-
- for (x, y, w, h) in faces:
- cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
- roi_gray = gray[y : y + h, x : x + w]
- roi_color = img[y : y + h, x : x + w]
- eyes = eye_cascade.detectMultiScale(roi_gray)
- for (ex, ey, ew, eh) in eyes:
- cv2.rectangle(
- roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
- )
-
- cv2.imshow("Face Detect", img)
- keyCode = cv2.waitKey(30) & 0xFF
- # Stop the program on the ESC key
- if keyCode == 27:
- break
+ video_capture = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
+ if video_capture.isOpened():
+ try:
+ cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
+ while True:
+ ret, frame = video_capture.read()
+ gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+ faces = face_cascade.detectMultiScale(gray, 1.3, 5)
- cap.release()
- cv2.destroyAllWindows()
+ for (x, y, w, h) in faces:
+ cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
+ roi_gray = gray[y : y + h, x : x + w]
+ roi_color = frame[y : y + h, x : x + w]
+ eyes = eye_cascade.detectMultiScale(roi_gray)
+ for (ex, ey, ew, eh) in eyes:
+ cv2.rectangle(
+ roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
+ )
+ # Check to see if the user closed the window
+ # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
+ # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
+ if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
+ cv2.imshow(window_title, frame)
+ else:
+ break
+ keyCode = cv2.waitKey(10) & 0xFF
+ # Stop the program on the ESC key or 'q'
+ if keyCode == 27 or keyCode == ord('q'):
+ break
+ finally:
+ video_capture.release()
+ cv2.destroyAllWindows()
else:
print("Unable to open camera")
diff --git a/instrumented/csi_camera.py b/instrumented/csi_camera.py
deleted file mode 100644
index f4063d8..0000000
--- a/instrumented/csi_camera.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# MIT License
-# Copyright (c) 2019,2020 JetsonHacks
-# See license in root folder
-# CSI_Camera is a class which encapsulates an OpenCV VideoCapture element
-# The VideoCapture element is initialized via a GStreamer pipeline
-# The camera is read in a separate thread
-# The class also tracks how many frames are read from the camera;
-# The calling application tracks the frames_displayed
-
-# Let's use a repeating Timer for counting FPS
-import cv2
-import threading
-
-class RepeatTimer(threading.Timer):
- def run(self):
- while not self.finished.wait(self.interval):
- self.function(*self.args, **self.kwargs)
-
-class CSI_Camera:
-
- def __init__ (self) :
- # Initialize instance variables
- # OpenCV video capture element
- self.video_capture = None
- # The last captured image from the camera
- self.frame = None
- self.grabbed = False
- # The thread where the video capture runs
- self.read_thread = None
- self.read_lock = threading.Lock()
- self.running = False
- self.fps_timer=None
- self.frames_read=0
- self.frames_displayed=0
- self.last_frames_read=0
- self.last_frames_displayed=0
-
-
- def open(self, gstreamer_pipeline_string):
- try:
- self.video_capture = cv2.VideoCapture(
- gstreamer_pipeline_string, cv2.CAP_GSTREAMER
- )
-
- except RuntimeError:
- self.video_capture = None
- print("Unable to open camera")
- print("Pipeline: " + gstreamer_pipeline_string)
- return
- # Grab the first frame to start the video capturing
- self.grabbed, self.frame = self.video_capture.read()
-
- def start(self):
- if self.running:
- print('Video capturing is already running')
- return None
- # create a thread to read the camera image
- if self.video_capture != None:
- self.running=True
- self.read_thread = threading.Thread(target=self.updateCamera)
- self.read_thread.start()
- return self
-
- def stop(self):
- self.running=False
- self.read_thread.join()
-
- def updateCamera(self):
- # This is the thread to read images from the camera
- while self.running:
- try:
- grabbed, frame = self.video_capture.read()
- with self.read_lock:
- self.grabbed=grabbed
- self.frame=frame
- self.frames_read += 1
- except RuntimeError:
- print("Could not read image from camera")
- # FIX ME - stop and cleanup thread
- # Something bad happened
-
-
- def read(self):
- with self.read_lock:
- frame = self.frame.copy()
- grabbed=self.grabbed
- return grabbed, frame
-
- def release(self):
- if self.video_capture != None:
- self.video_capture.release()
- self.video_capture = None
- # Kill the timer
- self.fps_timer.cancel()
- self.fps_timer.join()
- # Now kill the thread
- if self.read_thread != None:
- self.read_thread.join()
-
- def update_fps_stats(self):
- self.last_frames_read=self.frames_read
- self.last_frames_displayed=self.frames_displayed
- # Start the next measurement cycle
- self.frames_read=0
- self.frames_displayed=0
-
- def start_counting_fps(self):
- self.fps_timer=RepeatTimer(1.0,self.update_fps_stats)
- self.fps_timer.start()
-
- @property
- def gstreamer_pipeline(self):
- return self._gstreamer_pipeline
-
- # Currently there are setting frame rate on CSI Camera on Nano through gstreamer
- # Here we directly select sensor_mode 3 (1280x720, 59.9999 fps)
- def create_gstreamer_pipeline(
- self,
- sensor_id=0,
- sensor_mode=3,
- display_width=1280,
- display_height=720,
- framerate=60,
- flip_method=0,
- ):
- self._gstreamer_pipeline = (
- "nvarguscamerasrc sensor-id=%d sensor-mode=%d ! "
- "video/x-raw(memory:NVMM), "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
- "nvvidconv flip-method=%d ! "
- "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
- "videoconvert ! "
- "video/x-raw, format=(string)BGR ! appsink"
- % (
- sensor_id,
- sensor_mode,
- framerate,
- flip_method,
- display_width,
- display_height,
- )
- )
-
-
-
diff --git a/instrumented/dual_camera_fps.py b/instrumented/dual_camera_fps.py
deleted file mode 100644
index 3639937..0000000
--- a/instrumented/dual_camera_fps.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# MIT License
-# Copyright (c) 2019,2020 JetsonHacks
-# See license
-# A very simple code snippet
-# Using two CSI cameras (such as the Raspberry Pi Version 2) connected to a
-# NVIDIA Jetson Nano Developer Kit (Rev B01) using OpenCV
-# Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+
-
-# This script will open a window and place the camera stream from each camera in a window
-# arranged horizontally.
-# The camera streams are each read in their own thread, as when done sequentially there
-# is a noticeable lag
-
-import cv2
-import numpy as np
-from csi_camera import CSI_Camera
-
-show_fps = True
-
-# Simple draw label on an image; in our case, the video frame
-def draw_label(cv_image, label_text, label_position):
- font_face = cv2.FONT_HERSHEY_SIMPLEX
- scale = 0.5
- color = (255,255,255)
- # You can get the size of the string with cv2.getTextSize here
- cv2.putText(cv_image, label_text, label_position, font_face, scale, color, 1, cv2.LINE_AA)
-
-# Read a frame from the camera, and draw the FPS on the image if desired
-# Return an image
-def read_camera(csi_camera,display_fps):
- _ , camera_image=csi_camera.read()
- if display_fps:
- draw_label(camera_image, "Frames Displayed (PS): "+str(csi_camera.last_frames_displayed),(10,20))
- draw_label(camera_image, "Frames Read (PS): "+str(csi_camera.last_frames_read),(10,40))
- return camera_image
-
-# Good for 1280x720
-# DISPLAY_WIDTH=640
-# DISPLAY_HEIGHT=360
-# For 1920x1080
-DISPLAY_WIDTH=960
-DISPLAY_HEIGHT=540
-
-# 1920x1080, 30 fps
-SENSOR_MODE_1080=2
-# 1280x720, 60 fps
-SENSOR_MODE_720=3
-
-def start_cameras():
- left_camera = CSI_Camera()
- left_camera.create_gstreamer_pipeline(
- sensor_id=0,
- sensor_mode=SENSOR_MODE_720,
- framerate=30,
- flip_method=0,
- display_height=DISPLAY_HEIGHT,
- display_width=DISPLAY_WIDTH,
- )
- left_camera.open(left_camera.gstreamer_pipeline)
- left_camera.start()
-
- right_camera = CSI_Camera()
- right_camera.create_gstreamer_pipeline(
- sensor_id=1,
- sensor_mode=SENSOR_MODE_720,
- framerate=30,
- flip_method=0,
- display_height=DISPLAY_HEIGHT,
- display_width=DISPLAY_WIDTH,
- )
- right_camera.open(right_camera.gstreamer_pipeline)
- right_camera.start()
-
- cv2.namedWindow("CSI Cameras", cv2.WINDOW_AUTOSIZE)
-
- if (
- not left_camera.video_capture.isOpened()
- or not right_camera.video_capture.isOpened()
- ):
- # Cameras did not open, or no camera attached
-
- print("Unable to open any cameras")
- # TODO: Proper Cleanup
- SystemExit(0)
- try:
- # Start counting the number of frames read and displayed
- left_camera.start_counting_fps()
- right_camera.start_counting_fps()
- while cv2.getWindowProperty("CSI Cameras", 0) >= 0 :
- left_image=read_camera(left_camera,show_fps)
- right_image=read_camera(right_camera,show_fps)
- # We place both images side by side to show in the window
- camera_images = np.hstack((left_image, right_image))
- cv2.imshow("CSI Cameras", camera_images)
- left_camera.frames_displayed += 1
- right_camera.frames_displayed += 1
- # This also acts as a frame limiter
- # Stop the program on the ESC key
- if (cv2.waitKey(20) & 0xFF) == 27:
- break
-
- finally:
- left_camera.stop()
- left_camera.release()
- right_camera.stop()
- right_camera.release()
- cv2.destroyAllWindows()
-
-
-if __name__ == "__main__":
- start_cameras()
diff --git a/instrumented/dual_camera_naive.py b/instrumented/dual_camera_naive.py
deleted file mode 100644
index 03da0ed..0000000
--- a/instrumented/dual_camera_naive.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# MIT License
-# Copyright (c) 2019 JetsonHacks
-# See license
-# Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
-# NVIDIA Jetson Nano Developer Kit using OpenCV
-# Drivers for the camera and OpenCV are included in the base image
-
-import cv2
-from timecontext import Timer
-import numpy as np
-
-# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Defaults to 1280x720 @ 60fps
-# Flip the image by setting the flip_method (most common values: 0 and 2)
-# display_width and display_height determine the size of the window on the screen
-
-
-def gstreamer_pipeline(
- sensor_id=0,
- sensor_mode=3,
- capture_width=1280,
- capture_height=720,
- display_width=1280,
- display_height=720,
- framerate=60,
- flip_method=0,
-):
- return (
- "nvarguscamerasrc sensor-id=%d sensor-mode=%d ! "
- "video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
- "nvvidconv flip-method=%d ! "
- "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
- "videoconvert ! "
- "video/x-raw, format=(string)BGR ! appsink"
- % (
- sensor_id,
- sensor_mode,
- capture_width,
- capture_height,
- framerate,
- flip_method,
- display_width,
- display_height,
- )
- )
-
-
-def show_camera():
- # To flip the image, modify the flip_method parameter (0 and 2 are the most common)
- print(gstreamer_pipeline(flip_method=0))
- left_cap = cv2.VideoCapture(
- gstreamer_pipeline(flip_method=0,display_width=960,display_height=540,framerate=30), cv2.CAP_GSTREAMER)
- right_cap = cv2.VideoCapture(gstreamer_pipeline(
- flip_method=0, sensor_id=1,display_width=960,display_height=540,framerate=30), cv2.CAP_GSTREAMER)
- if left_cap.isOpened():
- cv2.namedWindow("CSI Camera", cv2.WINDOW_AUTOSIZE)
- # Window
- while cv2.getWindowProperty("CSI Camera", 0) >= 0:
- with Timer() as context_time:
- ret_val, left_image = left_cap.read()
- ret_val, right_image = right_cap.read()
- # print(context_time.elapsed)
- # We place both images side by side to show in the window
- camera_images = np.hstack((left_image, right_image))
- cv2.imshow("CSI Cameras", camera_images)
- # cv2.imshow("CSI Camera", left_image)
- # print(context_time.elapsed)
-
- # This also acts as
- keyCode = cv2.waitKey(20) & 0xFF
- # print(context_time.elapsed)
- # print("---")
- # Stop the program on the ESC key
- if keyCode == 27:
- break
- left_cap.release()
- right_cap.release()
- cv2.destroyAllWindows()
- else:
- print("Unable to open camera")
-
-
-if __name__ == "__main__":
- show_camera()
diff --git a/instrumented/face_detect_faster.py b/instrumented/face_detect_faster.py
deleted file mode 100644
index ced1ee9..0000000
--- a/instrumented/face_detect_faster.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# MIT License
-# Copyright (c) 2019 JetsonHacks
-# See LICENSE for OpenCV license and additional information
-
-# https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
-# On the Jetson Nano, OpenCV comes preinstalled
-# Data files are in /usr/sharc/OpenCV
-
-import cv2
-import numpy as np
-from csi_camera import CSI_Camera
-
-show_fps = True
-
-# Simple draw label on an image; in our case, the video frame
-def draw_label(cv_image, label_text, label_position):
- font_face = cv2.FONT_HERSHEY_SIMPLEX
- scale = 0.5
- color = (255,255,255)
- # You can get the size of the string with cv2.getTextSize here
- cv2.putText(cv_image, label_text, label_position, font_face, scale, color, 1, cv2.LINE_AA)
-
-# Read a frame from the camera, and draw the FPS on the image if desired
-# Return an image
-def read_camera(csi_camera,display_fps):
- _ , camera_image=csi_camera.read()
- if display_fps:
- draw_label(camera_image, "Frames Displayed (PS): "+str(csi_camera.last_frames_displayed),(10,20))
- draw_label(camera_image, "Frames Read (PS): "+str(csi_camera.last_frames_read),(10,40))
- return camera_image
-
-# Good for 1280x720
-DISPLAY_WIDTH=640
-DISPLAY_HEIGHT=360
-# For 1920x1080
-# DISPLAY_WIDTH=960
-# DISPLAY_HEIGHT=540
-
-# 1920x1080, 30 fps
-SENSOR_MODE_1080=2
-# 1280x720, 60 fps
-SENSOR_MODE_720=3
-
-def face_detect():
- face_cascade = cv2.CascadeClassifier(
- "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
- )
- eye_cascade = cv2.CascadeClassifier(
- "/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
- )
- left_camera = CSI_Camera()
- left_camera.create_gstreamer_pipeline(
- sensor_id=0,
- sensor_mode=SENSOR_MODE_720,
- framerate=30,
- flip_method=0,
- display_height=DISPLAY_HEIGHT,
- display_width=DISPLAY_WIDTH,
- )
- left_camera.open(left_camera.gstreamer_pipeline)
- left_camera.start()
- cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
-
- if (
- not left_camera.video_capture.isOpened()
- ):
- # Cameras did not open, or no camera attached
-
- print("Unable to open any cameras")
- # TODO: Proper Cleanup
- SystemExit(0)
- try:
- # Start counting the number of frames read and displayed
- left_camera.start_counting_fps()
- while cv2.getWindowProperty("Face Detect", 0) >= 0 :
- img=read_camera(left_camera,False)
- gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- faces = face_cascade.detectMultiScale(gray, 1.3, 5)
-
- for (x, y, w, h) in faces:
- cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
- roi_gray = gray[y : y + h, x : x + w]
- roi_color = img[y : y + h, x : x + w]
- eyes = eye_cascade.detectMultiScale(roi_gray)
- for (ex, ey, ew, eh) in eyes:
- cv2.rectangle(
- roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
- )
- if show_fps:
- draw_label(img, "Frames Displayed (PS): "+str(left_camera.last_frames_displayed),(10,20))
- draw_label(img, "Frames Read (PS): "+str(left_camera.last_frames_read),(10,40))
- cv2.imshow("Face Detect", img)
- left_camera.frames_displayed += 1
- keyCode = cv2.waitKey(5) & 0xFF
- # Stop the program on the ESC key
- if keyCode == 27:
- break
- finally:
- left_camera.stop()
- left_camera.release()
- cv2.destroyAllWindows()
-
-
-if __name__ == "__main__":
- face_detect()
diff --git a/instrumented/face_detect_fps.py b/instrumented/face_detect_fps.py
deleted file mode 100644
index 8bcd5bf..0000000
--- a/instrumented/face_detect_fps.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# MIT License
-# Copyright (c) 2019 JetsonHacks
-# See LICENSE for OpenCV license and additional information
-
-# https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
-# On the Jetson Nano, OpenCV comes preinstalled
-# Data files are in /usr/sharc/OpenCV
-import numpy as np
-import cv2
-import threading
-from timecontext import Timer
-
-# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Defaults to 1280x720 @ 30fps
-# Flip the image by setting the flip_method (most common values: 0 and 2)
-# display_width and display_height determine the size of the window on the screen
-
-class RepeatTimer(threading.Timer):
- def run(self):
- while not self.finished.wait(self.interval):
- self.function(*self.args, **self.kwargs)
-
-frames_displayed=0
-fps_timer=None
-
-def update_fps_stats():
- global frames_displayed
- print("======")
- print("FPS: "+str(frames_displayed))
- frames_displayed=0
-
-def start_counting_fps():
- global fps_timer
- print("starting to count fps")
- fps_timer=RepeatTimer(1.0,update_fps_stats)
- fps_timer.start()
-
-def gstreamer_pipeline(
- capture_width=3280,
- capture_height=2464,
- display_width=820,
- display_height=616,
- framerate=21,
- flip_method=0,
-):
- return (
- "nvarguscamerasrc ! "
- "video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
- "nvvidconv flip-method=%d ! "
- "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
- "videoconvert ! "
- "video/x-raw, format=(string)BGR ! appsink"
- % (
- capture_width,
- capture_height,
- framerate,
- flip_method,
- display_width,
- display_height,
- )
- )
-
-
-def face_detect():
- global frames_displayed
- global fps_timer
- face_cascade = cv2.CascadeClassifier(
- "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
- )
- eye_cascade = cv2.CascadeClassifier(
- "/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
- )
- cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
- if cap.isOpened():
- try:
- cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
- # Setup our Frames per second counter
- start_counting_fps()
- while cv2.getWindowProperty("Face Detect", 0) >= 0:
- with Timer() as measure :
- ret, img = cap.read()
- print("---")
- print("Read Cam:" + str(measure.elapsed))
- before=measure.elapsed
- gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- faces = face_cascade.detectMultiScale(gray, 1.3, 5)
- print("detectMultipleScale: "+str(measure.elapsed-before))
- before=measure.elapsed
- for (x, y, w, h) in faces:
- cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
- roi_gray = gray[y : y + h, x : x + w]
- roi_color = img[y : y + h, x : x + w]
- eyes = eye_cascade.detectMultiScale(roi_gray)
- for (ex, ey, ew, eh) in eyes:
- cv2.rectangle(
- roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
- )
- print("eyeCascade: "+str(measure.elapsed-before))
- print(measure.elapsed)
- cv2.imshow("Face Detect", img)
-
- print("Elapsed time: "+str(measure.elapsed))
- frames_displayed = frames_displayed+1
- keyCode = cv2.waitKey(10) & 0xFF
- # Stop the program on the ESC key
- if keyCode == 27:
- break
- finally:
- fps_timer.cancel()
- fps_timer.join()
-
- cap.release()
- # Kill the fps timer
- cv2.destroyAllWindows()
- else:
- print("Unable to open camera")
-
-
-if __name__ == "__main__":
- face_detect()
diff --git a/instrumented/simple_camera.py b/instrumented/simple_camera.py
deleted file mode 100644
index 2241af6..0000000
--- a/instrumented/simple_camera.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# MIT License
-# Copyright (c) 2019 JetsonHacks
-# See license
-# Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
-# NVIDIA Jetson Nano Developer Kit using OpenCV
-# Drivers for the camera and OpenCV are included in the base image
-
-import cv2
-from timecontext import Timer
-
-# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Defaults to 1280x720 @ 60fps
-# Flip the image by setting the flip_method (most common values: 0 and 2)
-# display_width and display_height determine the size of the window on the screen
-
-
-def gstreamer_pipeline(
- capture_width=1280,
- capture_height=720,
- display_width=1280,
- display_height=720,
- framerate=60,
- flip_method=0,
-):
- return (
- "nvarguscamerasrc ! "
- "video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
- "nvvidconv flip-method=%d ! "
- "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
- "videoconvert ! "
- "video/x-raw, format=(string)BGR ! appsink"
- % (
- capture_width,
- capture_height,
- framerate,
- flip_method,
- display_width,
- display_height,
- )
- )
-
-
-def show_camera():
- # To flip the image, modify the flip_method parameter (0 and 2 are the most common)
- print(gstreamer_pipeline(flip_method=0))
- cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
- if cap.isOpened():
- window_handle = cv2.namedWindow("CSI Camera", cv2.WINDOW_AUTOSIZE)
- # Window
- while cv2.getWindowProperty("CSI Camera", 0) >= 0:
- with Timer() as context_time:
- ret_val, img = cap.read()
- print(context_time.elapsed)
- cv2.imshow("CSI Camera", img)
- print(context_time.elapsed)
-
- # This also acts as
- keyCode = cv2.waitKey(20) & 0xFF
- print(context_time.elapsed)
- print("---")
- # Stop the program on the ESC key
- if keyCode == 27:
- break
- cap.release()
- cv2.destroyAllWindows()
- else:
- print("Unable to open camera")
-
-
-if __name__ == "__main__":
- show_camera()
diff --git a/instrumented/timecontext.py b/instrumented/timecontext.py
deleted file mode 100644
index 9d01fe1..0000000
--- a/instrumented/timecontext.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from timeit import default_timer
-
-class Timer:
- def __init__(self):
- self.timer=default_timer
- self.end_time=None
-
- def __call__(self):
- return self.timer()
-
- def __enter__(self):
- print("Entering context")
- self.start_time=self()
- return self
-
- def __exit__(self, exc_type, exc_value, exc_traceback):
- self.end_time=self()
-
- @property
- def elapsed(self):
- if self.end_time is None:
- return self()-self.start_time
- else:
- return self.end_time-self.start_time
-
diff --git a/simple_camera.cpp b/simple_camera.cpp
old mode 100644
new mode 100755
index 7865ff5..18455a4
--- a/simple_camera.cpp
+++ b/simple_camera.cpp
@@ -1,19 +1,16 @@
// simple_camera.cpp
// MIT License
-// Copyright (c) 2019 JetsonHacks
+// Copyright (c) 2019-2022 JetsonHacks
// See LICENSE for OpenCV license and additional information
// Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
// NVIDIA Jetson Nano Developer Kit using OpenCV
// Drivers for the camera and OpenCV are included in the base image
-// #include
#include <opencv2/opencv.hpp>
-// #include
-// #include
std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" +
- std::to_string(capture_height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
+ std::to_string(capture_height) + ", framerate=(fraction)" + std::to_string(framerate) +
"/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" +
std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}
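+// With the defaults in main() below, the assembled pipeline is (illustrative, wrapped here for readability):
+//   nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=(fraction)30/1 !
+//   nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx !
+//   videoconvert ! video/x-raw, format=(string)BGR ! appsink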
@@ -24,7 +21,7 @@ int main()
int capture_height = 720 ;
int display_width = 1280 ;
int display_height = 720 ;
- int framerate = 60 ;
+ int framerate = 30 ;
int flip_method = 0 ;
std::string pipeline = gstreamer_pipeline(capture_width,
@@ -53,7 +50,7 @@ int main()
}
cv::imshow("CSI Camera",img);
- int keycode = cv::waitKey(30) & 0xff ;
+ int keycode = cv::waitKey(10) & 0xff ;
if (keycode == 27) break ;
}
diff --git a/simple_camera.py b/simple_camera.py
old mode 100644
new mode 100755
index e666482..fffc131
--- a/simple_camera.py
+++ b/simple_camera.py
@@ -1,36 +1,37 @@
# MIT License
-# Copyright (c) 2019 JetsonHacks
-# See license
+# Copyright (c) 2019-2022 JetsonHacks
+
# Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
# NVIDIA Jetson Nano Developer Kit using OpenCV
# Drivers for the camera and OpenCV are included in the base image
import cv2
-# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
-# Defaults to 1280x720 @ 60fps
-# Flip the image by setting the flip_method (most common values: 0 and 2)
-# display_width and display_height determine the size of the window on the screen
-
+"""
+gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
+Flip the image by setting the flip_method (most common values: 0 and 2)
+display_width and display_height determine the size of the window on the screen
+Default is 1920x1080, displayed in a 1/4-size (960x540) window
+"""
def gstreamer_pipeline(
- capture_width=1280,
- capture_height=720,
- display_width=1280,
- display_height=720,
- framerate=60,
+ sensor_id=0,
+ capture_width=1920,
+ capture_height=1080,
+ display_width=960,
+ display_height=540,
+ framerate=30,
flip_method=0,
):
return (
- "nvarguscamerasrc ! "
- "video/x-raw(memory:NVMM), "
- "width=(int)%d, height=(int)%d, "
- "format=(string)NV12, framerate=(fraction)%d/1 ! "
+ "nvarguscamerasrc sensor-id=%d !"
+ "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
"video/x-raw, format=(string)BGR ! appsink"
% (
+ sensor_id,
capture_width,
capture_height,
framerate,
@@ -42,24 +43,32 @@ def gstreamer_pipeline(
def show_camera():
+ window_title = "CSI Camera"
+
# To flip the image, modify the flip_method parameter (0 and 2 are the most common)
print(gstreamer_pipeline(flip_method=0))
- cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
- if cap.isOpened():
- window_handle = cv2.namedWindow("CSI Camera", cv2.WINDOW_AUTOSIZE)
- # Window
- while cv2.getWindowProperty("CSI Camera", 0) >= 0:
- ret_val, img = cap.read()
- cv2.imshow("CSI Camera", img)
- # This also acts as
- keyCode = cv2.waitKey(30) & 0xFF
- # Stop the program on the ESC key
- if keyCode == 27:
- break
- cap.release()
- cv2.destroyAllWindows()
+ video_capture = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
+ if video_capture.isOpened():
+ try:
+ window_handle = cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
+ while True:
+ ret_val, frame = video_capture.read()
+ # Check to see if the user closed the window
+ # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
+ # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
+ if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
+ cv2.imshow(window_title, frame)
+ else:
+ break
+ keyCode = cv2.waitKey(10) & 0xFF
+ # Stop the program on the ESC key or 'q'
+ if keyCode == 27 or keyCode == ord('q'):
+ break
+ finally:
+ video_capture.release()
+ cv2.destroyAllWindows()
else:
- print("Unable to open camera")
+ print("Error: Unable to open camera")
if __name__ == "__main__":