Hi @wamj45 Instead of using a separate camera with a zoom lens for barcode detection: if the D455 can see the barcode, then it may be able to detect it once the RealSense SDK's RGB Sharpness option is maximized to a value of '100', which sharpens the distant barcode image. #11246 has Python scripting for setting the Sharpness value.
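A rough sketch of that Sharpness scripting, assuming the `pyrealsense2` bindings; the `maximize_option` helper is my own naming, and the demo loop only runs if the SDK and a camera are actually present:

```python
# Sketch only: push a sensor option to the top of its advertised range.
# On D400-series RGB sensors the Sharpness range tops out at 100.
def maximize_option(sensor, option):
    """Set `option` to its advertised maximum and return that value."""
    rng = sensor.get_option_range(option)
    sensor.set_option(option, rng.max)
    return rng.max

if __name__ == "__main__":
    try:
        import pyrealsense2 as rs  # RealSense SDK Python bindings
    except ImportError:
        rs = None  # SDK not installed; the helper above still stands alone

    if rs is not None:
        # Walk every attached RealSense device and maximize RGB Sharpness.
        for dev in rs.context().query_devices():
            for sensor in dev.query_sensors():
                if sensor.supports(rs.option.sharpness):
                    value = maximize_option(sensor, rs.option.sharpness)
                    print("Sharpness set to", value)
```

Verify on your own device that the option is supported before relying on it; not every sensor exposes Sharpness.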
Glare from reflections can be greatly reduced if a thin-film linear polarization filter is placed over the front of the camera. Most polarizing filters are the linear type, so they are inexpensive to obtain. You can search stores such as Amazon with the term 'linear polarization filter sheet' to find examples. The image below, from Intel's optical filter guide for 400 Series cameras, shows an RGB image without (left) and with (right) a linear polarization filter, demonstrating how glare from glass windows can be negated by a filter. https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400

If you are aiming to align a RealSense RGB image to the barcode camera's RGB image, though, that would be difficult, as alignment usually requires a depth image as one of the two image types being aligned. Also, the barcode camera's RGB view would be offset from the RealSense RGB view, because the RGB sensors of the two cameras sit in different physical positions and so do not see the scene from the exact same origin.
There is an OpenCV / Python project at the link below for aligning full-HD images from two cameras.
Here is my setup:
What I can do today:
What I want to achieve:
What I have tried:
- I used `cv2.perspectiveTransform()` to get the points of the bounding box in RealSense coordinates. This kind of works, but if the scene changes too much then the transformation matrix no longer holds true.
- I calibrated to obtain the `camera_matrix` and `dist_coeffs` of each camera, and calculated the `rotation_vector` and `trans_vector` of both cameras. This is where I am stuck. I have tried different things here, like taking the dot product of the vectors and feeding that into `cv2.projectPoints()`. I have also tried to undistort the points and then project them. I tried to use `cv2.solvePnP()` but I do not fully understand what is meant by `objectPoints` (OpenCV solvePnP docs).

Does anyone have any advice, tips, or anything that can point me in the right direction?
Any help is appreciated, thanks!