Faster, Exception handling
jetsonhacks committed Jan 25, 2022
1 parent cbec935 commit 64dd5f0
Showing 12 changed files with 178 additions and 827 deletions.
33 changes: 18 additions & 15 deletions README.md
100644 → 100755
@@ -21,48 +21,51 @@ $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
# Example also shows sensor_mode parameter to nvarguscamerasrc
# See table below for example video modes of example sensor
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! \
nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=720' ! \
'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
nvvidconv ! nvegltransform ! nveglglessink -e
Note: The cameras appear to report differently than shown below on some Jetsons. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using. As an example, the same camera from below may report differently on a Jetson Nano B01:
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
You should adjust accordingly. As an example, for 3264x2464 @ 21 fps on sensor_id 1 of a Jetson Nano B01:
$ gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! \
'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! \
nvvidconv flip-method=0 ! 'video/x-raw, width=816, height=616' ! \
nvvidconv ! nvegltransform ! nveglglessink -e
Also, it's been noticed that the display transform is sensitive to width and height (in the above example, width=816, height=616). If you experience issues, check to see if your display width and height is the same ratio as the camera frame size selected (In the above example, 816x616 is 1/4 the size of 3264x2464).
Also, it's been reported that the display transform is sensitive to width and height (in the above example, width=960, height=540). If you experience issues, check whether your display width and height have the same aspect ratio as the selected camera frame size (in the above example, 960x540 is 1/4 the size of 1920x1080).
```
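A pipeline like the ones above can also be consumed from OpenCV by swapping the display sink for an appsink, which is how the repository's scripts capture frames. A minimal sketch of building such a pipeline string in Python, mirroring the gstreamer_pipeline helper used in the scripts (it assumes OpenCV was built with GStreamer support, as in the stock JetPack images):

```python
def gstreamer_pipeline(
    sensor_id=0,
    capture_width=1920,
    capture_height=1080,
    display_width=960,
    display_height=540,
    framerate=30,
    flip_method=0,
):
    # Same elements as the gst-launch example, but terminated in an appsink
    # so OpenCV can pull BGR frames out of the pipeline.
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"framerate={framerate}/1, format=NV12 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# On the Jetson you would then open it with:
#   cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
```

Adjust the capture width, height, and framerate to one of the modes your sensor actually reports (see the GST_ARGUS output above).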

There are several examples:

Note: You may need to install numpy for the Python examples to work, i.e. $ pip3 install numpy

### simple_camera.py
simple_camera.py is a Python script which reads from the camera and displays the image in a window on the screen using OpenCV:

```
$ python simple_camera.py
```
### face_detect.py

face_detect.py is a Python script which reads from the camera and uses Haar Cascades to detect faces and eyes:

```
$ python face_detect.py

```
Haar Cascades is a machine-learning-based approach in which a cascade function is trained on many positive and negative images. The trained function is then used to detect objects in other images.

See: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
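In this two-stage detection style, the eye cascade runs inside each detected face's region of interest, so eye rectangles come back in ROI-relative coordinates and must be offset into frame coordinates before drawing. A small sketch of that bookkeeping (the helper name is illustrative, not part of the script):

```python
def eyes_to_frame_coords(face, eyes_in_roi):
    # Haar detections are (x, y, w, h) rectangles. The eye cascade is run on
    # the face ROI, so each eye rectangle is offset by the face's top-left corner.
    fx, fy, _, _ = face
    return [(fx + ex, fy + ey, ew, eh) for (ex, ey, ew, eh) in eyes_in_roi]
```

For example, an eye detected at (10, 20) inside a face whose top-left corner is at (100, 50) sits at (110, 70) in the full frame.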


The third example is a simple C++ program which reads from the camera and displays the image in a window on the screen using OpenCV:

```
$ g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
$ g++ -std=c++11 -Wall -I/usr/lib/opencv -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
$ ./simple_camera
```

The final example is dual_camera.py. This example is for the newer rev B01 of the Jetson Nano board, identifiable by two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in a window. The window is 960x1080. For performance, the script uses a separate thread for reading each camera image stream. To run the script:
### dual_camera.py
Note: You will need to install numpy for the Dual Camera Python example to work, i.e.
```
$ pip3 install numpy
```
The final example is dual_camera.py. This example is for Jetson boards with two CSI-MIPI camera ports, such as the newer rev B01 Jetson Nano and the Jetson Xavier NX. This is a simple Python program which reads both CSI cameras and displays them side by side in a single window. With each pane at 960x540, the window is 1920x540. For performance, the script uses a separate thread for reading each camera image stream. To run the script:

```
$ python3 dual_camera.py
143 changes: 73 additions & 70 deletions dual_camera.py
100644 → 100755
@@ -1,33 +1,24 @@
# MIT License
# Copyright (c) 2019,2020 JetsonHacks
# See license
# A very simple code snippet
# Copyright (c) 2019-2022 JetsonHacks

# A simple code snippet
# Using two CSI cameras (such as the Raspberry Pi Version 2) connected to a
# NVIDIA Jetson Nano Developer Kit (Rev B01) using OpenCV
# NVIDIA Jetson Nano Developer Kit with two CSI ports (Jetson Nano, Jetson Xavier NX) via OpenCV
# Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+

# This script will open a window and place the camera stream from each camera in a window
# arranged horizontally.
# The camera streams are each read in their own thread, as when done sequentially there
# is a noticeable lag
# For better performance, the next step would be to experiment with having the window display
# in a separate thread

import cv2
import threading
import numpy as np

# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of each camera pane in the window on the screen

left_camera = None
right_camera = None


class CSI_Camera:

def __init__ (self) :
def __init__(self):
# Initialize instance variables
# OpenCV video capture element
self.video_capture = None
@@ -39,54 +30,54 @@ def __init__ (self) :
self.read_lock = threading.Lock()
self.running = False


def open(self, gstreamer_pipeline_string):
try:
self.video_capture = cv2.VideoCapture(
gstreamer_pipeline_string, cv2.CAP_GSTREAMER
)

# Grab the first frame to start the video capturing
self.grabbed, self.frame = self.video_capture.read()

except RuntimeError:
self.video_capture = None
print("Unable to open camera")
print("Pipeline: " + gstreamer_pipeline_string)
return
# Grab the first frame to start the video capturing
self.grabbed, self.frame = self.video_capture.read()


def start(self):
if self.running:
print('Video capturing is already running')
return None
# create a thread to read the camera image
if self.video_capture != None:
self.running=True
self.running = True
self.read_thread = threading.Thread(target=self.updateCamera)
self.read_thread.start()
return self

def stop(self):
self.running=False
self.running = False
# Kill the thread
self.read_thread.join()
self.read_thread = None

def updateCamera(self):
# This is the thread to read images from the camera
while self.running:
try:
grabbed, frame = self.video_capture.read()
with self.read_lock:
self.grabbed=grabbed
self.frame=frame
self.grabbed = grabbed
self.frame = frame
except RuntimeError:
print("Could not read image from camera")
# FIX ME - stop and cleanup thread
# Something bad happened


def read(self):
with self.read_lock:
frame = self.frame.copy()
grabbed=self.grabbed
grabbed = self.grabbed
return grabbed, frame

def release(self):
@@ -98,30 +89,32 @@ def release(self):
self.read_thread.join()


# Currently there are setting frame rate on CSI Camera on Nano through gstreamer
# Here we directly select sensor_mode 3 (1280x720, 59.9999 fps)
"""
gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
Flip the image by setting the flip_method (most common values: 0 and 2)
display_width and display_height determine the size of each camera pane in the window on the screen
Default 1920x1080
"""


def gstreamer_pipeline(
sensor_id=0,
sensor_mode=3,
capture_width=1280,
capture_height=720,
display_width=1280,
display_height=720,
capture_width=1920,
capture_height=1080,
display_width=1920,
display_height=1080,
framerate=30,
flip_method=0,
):
return (
"nvarguscamerasrc sensor-id=%d sensor-mode=%d ! "
"video/x-raw(memory:NVMM), "
"width=(int)%d, height=(int)%d, "
"format=(string)NV12, framerate=(fraction)%d/1 ! "
"nvarguscamerasrc sensor-id=%d ! "
"video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
"video/x-raw, format=(string)BGR ! appsink"
% (
sensor_id,
sensor_mode,
capture_width,
capture_height,
framerate,
@@ -132,15 +125,17 @@ def gstreamer_pipeline(
)


def start_cameras():
def run_cameras():
window_title = "Dual CSI Cameras"
left_camera = CSI_Camera()
left_camera.open(
gstreamer_pipeline(
sensor_id=0,
sensor_mode=3,
capture_width=1920,
capture_height=1080,
flip_method=0,
display_height=540,
display_width=960,
display_height=540,
)
)
left_camera.start()
@@ -149,45 +144,53 @@ def start_cameras():
right_camera.open(
gstreamer_pipeline(
sensor_id=1,
sensor_mode=3,
capture_width=1920,
capture_height=1080,
flip_method=0,
display_height=540,
display_width=960,
display_height=540,
)
)
right_camera.start()

cv2.namedWindow("CSI Cameras", cv2.WINDOW_AUTOSIZE)

if (
not left_camera.video_capture.isOpened()
or not right_camera.video_capture.isOpened()
):
# Cameras did not open, or no camera attached
if left_camera.video_capture.isOpened() and right_camera.video_capture.isOpened():

print("Unable to open any cameras")
# TODO: Proper Cleanup
SystemExit(0)
cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)

while cv2.getWindowProperty("CSI Cameras", 0) >= 0 :

_ , left_image=left_camera.read()
_ , right_image=right_camera.read()
camera_images = np.hstack((left_image, right_image))
cv2.imshow("CSI Cameras", camera_images)

# This also acts as
keyCode = cv2.waitKey(30) & 0xFF
# Stop the program on the ESC key
if keyCode == 27:
break
try:
while True:
_, left_image = left_camera.read()
_, right_image = right_camera.read()
# Use numpy to place images next to each other
camera_images = np.hstack((left_image, right_image))
# Check to see if the user closed the window
# Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
# GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
cv2.imshow(window_title, camera_images)
else:
break

# This also acts as a frame-pacing delay; waitKey blocks for up to 30 ms
keyCode = cv2.waitKey(30) & 0xFF
# Stop the program on the ESC key
if keyCode == 27:
break
finally:

left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()
cv2.destroyAllWindows()
else:
print("Error: Unable to open both cameras")
left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()

left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()
cv2.destroyAllWindows()


if __name__ == "__main__":
start_cameras()
run_cameras()
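The CSI_Camera class above boils down to a small pattern: a background thread keeps overwriting the latest frame behind a lock, so the display loop never blocks on camera I/O. A stripped-down, source-agnostic sketch of that pattern (the class name is illustrative; `source` is any object with a `read()` method returning `(ok, frame)`, such as a cv2.VideoCapture):

```python
import threading

class ThreadedReader:
    """Keep only the most recent frame from a capture source, read on a background thread."""

    def __init__(self, source):
        self.source = source
        self.lock = threading.Lock()
        # Grab one frame up front so read() always has something to return
        self.grabbed, self.frame = source.read()
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        # Continuously overwrite the latest frame; older frames are dropped
        while self.running:
            grabbed, frame = self.source.read()
            with self.lock:
                self.grabbed, self.frame = grabbed, frame

    def read(self):
        # Return the most recent frame without blocking on the camera
        with self.lock:
            return self.grabbed, self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

Dropping stale frames rather than queueing them is the point: the display loop always shows the freshest image, which is why the two-camera script avoids the lag seen when reading the streams sequentially.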
