
Merge pull request #49 from JetsonHacksNano/develop
Faster Gstreamer pipelines and exception handling in Python
JetsonHacksNano authored Jan 25, 2022
2 parents cbec935 + 91a8d7a commit 2322935
Showing 13 changed files with 211 additions and 844 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2019 JetsonHacks
Copyright (c) 2019-2022 JetsonHacks

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
81 changes: 50 additions & 31 deletions README.md
100644 → 100755
@@ -1,9 +1,9 @@
# CSI-Camera
Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Nano Developer Kit. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v
Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kits with CSI camera ports. This includes the recent Jetson Nano and Jetson Xavier NX. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v

The camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson Nano module, the stripe faces outward.
For the Nanos and Xavier NX, the camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson module, the tape stripe faces outward.

The new Jetson Nano B01 developer kit has two CSI camera slots. You can use the sensor_mode attribute with nvarguscamerasrc to specify the camera. Valid values are 0 or 1 (the default is 0 if not specified), i.e.
Some Jetson developer kits have two CSI camera slots. You can use the sensor_mode attribute with the GStreamer nvarguscamerasrc element to specify which camera. Valid values are 0 or 1 (the default is 0 if not specified), i.e.

```
nvarguscamerasrc sensor_id=0
@@ -21,63 +21,67 @@ $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
# Example also shows sensor_mode parameter to nvarguscamerasrc
# See table below for example video modes of example sensor
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! \
nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=720' ! \
nvvidconv ! nvegltransform ! nveglglessink -e
Note: The cameras appear to report differently than shown below on some Jetsons. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using. As an example, the same camera from below may report differently on a Jetson Nano B01:
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000
You should adjust accordingly. As an example, for 3264x2464 @ 21 fps on sensor_id 1 of a Jetson Nano B01:
$ gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! \
'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! \
nvvidconv flip-method=0 ! 'video/x-raw, width=816, height=616' ! \
'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
nvvidconv ! nvegltransform ! nveglglessink -e
```

Also, it's been noticed that the display transform is sensitive to width and height (in the above example, width=816, height=616). If you experience issues, check to see if your display width and height are the same ratio as the camera frame size selected (in the above example, 816x616 is 1/4 of 3264x2464 in each dimension).
Note: The cameras may report differently than shown below. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using.
```
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
```

There are several examples:
Also, the display transform may be sensitive to width and height (in the above example, width=960, height=540). If you experience issues, check to see if your display width and height are the same ratio as the camera frame size selected (in the above example, 960x540 is half of 1920x1080 in each dimension, preserving the 16:9 aspect ratio).
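The aspect-ratio check described above can be sketched in Python (`same_aspect_ratio` is a hypothetical helper, not part of this repository):

```python
# Hypothetical helper (not part of this repository) to check that a display
# size preserves the aspect ratio of the selected camera capture size.
def same_aspect_ratio(capture_w, capture_h, display_w, display_h):
    # Cross-multiply to avoid floating-point comparisons.
    return capture_w * display_h == capture_h * display_w

print(same_aspect_ratio(1920, 1080, 960, 540))  # True: 16:9 preserved
print(same_aspect_ratio(1920, 1080, 960, 720))  # False: 4:3 vs 16:9
```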


Note: You may need to install numpy for the Python examples to work, i.e. `$ pip3 install numpy`
## Samples

simple_camera.py is a Python script which reads from the camera and displays to a window on the screen using OpenCV:

### simple_camera.py
simple_camera.py is a Python script which reads from the camera and displays the frame to a window on the screen using OpenCV:
```
$ python simple_camera.py
```
### face_detect.py

face_detect.py is a python script which reads from the camera and uses Haar Cascades to detect faces and eyes:

```
$ python face_detect.py

```
Haar Cascades is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. The function is then used to detect objects in other images.

See: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html

The third example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:

### dual_camera.py
Note: You will need to install numpy for the Dual Camera Python example to work:
```
$ g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
$ ./simple_camera
$ pip3 install numpy
```

The final example is dual_camera.py. This example is for the newer rev B01 of the Jetson Nano board, identifiable by two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in a window. The window is 960x1080. For performance, the script uses a separate thread for reading each camera image stream. To run the script:
This example is for the newer Jetson boards (Jetson Nano, Jetson Xavier NX) with two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in one window. The window is 1920x540. For performance, the script uses a separate thread for reading each camera image stream. To run the script:

```
$ python3 dual_camera.py
```
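The side-by-side composition that dual_camera.py performs can be sketched with numpy alone (synthetic frames here stand in for the live camera streams):

```python
import numpy as np

# Two synthetic 540x960 BGR frames stand in for the left and right cameras.
left_image = np.zeros((540, 960, 3), dtype=np.uint8)
right_image = np.full((540, 960, 3), 255, dtype=np.uint8)

# Place the frames next to each other, giving the 1920x540 window contents.
camera_images = np.hstack((left_image, right_image))
print(camera_images.shape)  # (540, 1920, 3)
```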

The directory 'instrumented' contains instrumented code which can help adjust performance and frame rates.
### simple_camera.cpp
The last example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:

```
$ g++ -std=c++11 -Wall -I/usr/lib/opencv -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
$ ./simple_camera
```
This program is a simple outline and does not handle the needed error checking. For more robust C++ code, see https://github.com/dusty-nv/jetson-utils

<h2>Notes</h2>

<h3>Camera Image Formats</h3>
You can use v4l2-ctl to determine the camera capabilities. v4l2-ctl is in the v4l-utils package:

```
$ sudo apt-get install v4l-utils
```

For the Raspberry Pi V2 camera, a typical output is (assuming the camera is /dev/video0):

```
$ v4l2-ctl --list-formats-ext
@@ -130,6 +134,21 @@ If you can open the camera in GStreamer from the command line, and have issues o

<h2>Release Notes</h2>

v3.2 Release January, 2022
* Add Exception handling to Python code
* Faster GStreamer pipelines, better performance
* Better naming of variables, simplification
* Remove Instrumented examples
* L4T 32.6.1 (JetPack 4.6)
* OpenCV 4.4.1
* Python3
* Tested on Jetson Nano B01, Jetson Xavier NX
* Tested with Raspberry Pi V2 cameras


v3.11 Release April, 2020
* Release both cameras in dual camera example (bug-fix)

v3.1 Release March, 2020
* L4T 32.3.1 (JetPack 4.3)
* OpenCV 4.1.1
143 changes: 73 additions & 70 deletions dual_camera.py
100644 → 100755
@@ -1,33 +1,24 @@
# MIT License
# Copyright (c) 2019,2020 JetsonHacks
# See license
# A very simple code snippet
# Copyright (c) 2019-2022 JetsonHacks

# A simple code snippet
# Using two CSI cameras (such as the Raspberry Pi Version 2) connected to a
# NVIDIA Jetson Nano Developer Kit (Rev B01) using OpenCV
# NVIDIA Jetson Nano Developer Kit with two CSI ports (Jetson Nano, Jetson Xavier NX) via OpenCV
# Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+

# This script will open a window and place the camera stream from each camera in a window
# arranged horizontally.
# The camera streams are each read in their own thread, as when done sequentially there
# is a noticeable lag
# For better performance, the next step would be to experiment with having the window display
# in a separate thread

import cv2
import threading
import numpy as np

# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of each camera pane in the window on the screen

left_camera = None
right_camera = None


class CSI_Camera:

def __init__ (self) :
def __init__(self):
# Initialize instance variables
# OpenCV video capture element
self.video_capture = None
@@ -39,54 +30,54 @@ def __init__ (self) :
self.read_lock = threading.Lock()
self.running = False


def open(self, gstreamer_pipeline_string):
try:
self.video_capture = cv2.VideoCapture(
gstreamer_pipeline_string, cv2.CAP_GSTREAMER
)

# Grab the first frame to start the video capturing
self.grabbed, self.frame = self.video_capture.read()

except RuntimeError:
self.video_capture = None
print("Unable to open camera")
print("Pipeline: " + gstreamer_pipeline_string)
return
# Grab the first frame to start the video capturing
self.grabbed, self.frame = self.video_capture.read()


def start(self):
if self.running:
print('Video capturing is already running')
return None
# create a thread to read the camera image
if self.video_capture != None:
self.running=True
self.running = True
self.read_thread = threading.Thread(target=self.updateCamera)
self.read_thread.start()
return self

def stop(self):
self.running=False
self.running = False
# Kill the thread
self.read_thread.join()
self.read_thread = None

def updateCamera(self):
# This is the thread to read images from the camera
while self.running:
try:
grabbed, frame = self.video_capture.read()
with self.read_lock:
self.grabbed=grabbed
self.frame=frame
self.grabbed = grabbed
self.frame = frame
except RuntimeError:
print("Could not read image from camera")
# FIX ME - stop and cleanup thread
# Something bad happened


def read(self):
with self.read_lock:
frame = self.frame.copy()
grabbed=self.grabbed
grabbed = self.grabbed
return grabbed, frame

def release(self):
@@ -98,30 +89,32 @@ def release(self):
self.read_thread.join()


# Currently, setting the frame rate on the CSI camera on the Nano is done through gstreamer
# Here we directly select sensor_mode 3 (1280x720, 59.9999 fps)
"""
gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
Flip the image by setting the flip_method (most common values: 0 and 2)
display_width and display_height determine the size of each camera pane in the window on the screen
Default 1920x1080
"""


def gstreamer_pipeline(
sensor_id=0,
sensor_mode=3,
capture_width=1280,
capture_height=720,
display_width=1280,
display_height=720,
capture_width=1920,
capture_height=1080,
display_width=1920,
display_height=1080,
framerate=30,
flip_method=0,
):
return (
"nvarguscamerasrc sensor-id=%d sensor-mode=%d ! "
"video/x-raw(memory:NVMM), "
"width=(int)%d, height=(int)%d, "
"format=(string)NV12, framerate=(fraction)%d/1 ! "
"nvarguscamerasrc sensor-id=%d ! "
"video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
"video/x-raw, format=(string)BGR ! appsink"
% (
sensor_id,
sensor_mode,
capture_width,
capture_height,
framerate,
@@ -132,15 +125,17 @@ def gstreamer_pipeline(
)


def start_cameras():
def run_cameras():
window_title = "Dual CSI Cameras"
left_camera = CSI_Camera()
left_camera.open(
gstreamer_pipeline(
sensor_id=0,
sensor_mode=3,
capture_width=1920,
capture_height=1080,
flip_method=0,
display_height=540,
display_width=960,
display_height=540,
)
)
left_camera.start()
@@ -149,45 +144,53 @@ def start_cameras():
right_camera.open(
gstreamer_pipeline(
sensor_id=1,
sensor_mode=3,
capture_width=1920,
capture_height=1080,
flip_method=0,
display_height=540,
display_width=960,
display_height=540,
)
)
right_camera.start()

cv2.namedWindow("CSI Cameras", cv2.WINDOW_AUTOSIZE)

if (
not left_camera.video_capture.isOpened()
or not right_camera.video_capture.isOpened()
):
# Cameras did not open, or no camera attached
if left_camera.video_capture.isOpened() and right_camera.video_capture.isOpened():

print("Unable to open any cameras")
# TODO: Proper Cleanup
SystemExit(0)
cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)

while cv2.getWindowProperty("CSI Cameras", 0) >= 0 :

_ , left_image=left_camera.read()
_ , right_image=right_camera.read()
camera_images = np.hstack((left_image, right_image))
cv2.imshow("CSI Cameras", camera_images)

# This also acts as the frame display wait (waitKey also processes GUI events)
keyCode = cv2.waitKey(30) & 0xFF
# Stop the program on the ESC key
if keyCode == 27:
break
try:
while True:
_, left_image = left_camera.read()
_, right_image = right_camera.read()
# Use numpy to place images next to each other
camera_images = np.hstack((left_image, right_image))
# Check to see if the user closed the window
# Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
# GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
cv2.imshow(window_title, camera_images)
else:
break

# This also acts as the frame display wait (waitKey also processes GUI events)
keyCode = cv2.waitKey(30) & 0xFF
# Stop the program on the ESC key
if keyCode == 27:
break
finally:

left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()
cv2.destroyAllWindows()
else:
print("Error: Unable to open both cameras")
left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()

left_camera.stop()
left_camera.release()
right_camera.stop()
right_camera.release()
cv2.destroyAllWindows()


if __name__ == "__main__":
start_cameras()
run_cameras()
