Aborted #6
Hi,

I am getting this error:

I can share the whole output:

Hope someone has an idea!

Thanks

Comments
I am facing the same issue. It could be because of the images we feed in (the method may not find suitable features in them), but I am not sure. It would be nice if the authors could take a look into it.
A follow-up on this issue: the output folders `dense` and `sparse` were empty. This error is also confusing:
Hi @UcefMountacer and @eyildiz-ugoe, it looks like registration failed to converge during the COLMAP reconstruction step. A quick way to check whether feature matching is the bottleneck is sketched below.
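A minimal sketch of such a check: COLMAP stores its features and matches in a SQLite database, so the counts can be read directly (the `colmap_workspace/database.db` path is an assumption; point it at wherever the pipeline writes its database):

```python
import sqlite3

# Path is an assumption; adjust to the database.db the pipeline creates.
db = sqlite3.connect("colmap_workspace/database.db")

# Keypoints per image: the `rows` column of the keypoints table holds
# the number of features detected in that image.
for image_id, name in db.execute("SELECT image_id, name FROM images"):
    row = db.execute(
        "SELECT rows FROM keypoints WHERE image_id = ?", (image_id,)
    ).fetchone()
    print(f"{name}: {row[0] if row else 0} features")

# Matches per image pair: `rows` is the number of matched feature pairs.
# COLMAP packs the two image ids into pair_id as id1 * 2147483647 + id2.
for pair_id, n_matches in db.execute("SELECT pair_id, rows FROM matches"):
    id1, id2 = divmod(pair_id, 2147483647)
    print(f"pair ({id1}, {id2}): {n_matches} matches")

db.close()
```

If most images report only a handful of features, or most pairs have near-zero matches, the mapper cannot register images and aborts with an error like the one above.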
Thank you for your rapid reply @w-bonelli. Since you pointed towards adding and removing images, let me ask you one thing. Does the method assume that the images were collected from a static plant (e.g. your setup)? In my case I have the opposite: the plant is rotating and the cameras are fixed, taking images of the rotating plant. I'm feeding those images into your method, which gives the error described above. If there is such an implicit requirement for your method, please let me know.
Hi, thank you for testing our pipeline. Our method uses a 3D scanner to collect images of a static plant in a fixed illumination environment, with rotating cameras mounted in an arc. The CAD design of the 3D scanner is included in the supplementary materials. We avoided the fixed-camera, rotating-plant setup to prevent blurring and shaking of the root object. As Wes mentioned, registration failed to converge in COLMAP, which was caused by a lack of feature matching points. We would like to help with your reconstruction; would you mind sharing your whole image set with us? We can help you debug it and find the best solution.
@eyildiz-ugoe there is no requirement in principle that the cameras rotate while the plant stays fixed. One potential problem is that rotating the plant may produce blurry images. The reconstruction is sensitive to blur as well as lighting. Maybe reducing the rotation speed could help? If you can share some sample images we can help diagnose the issue. In our setup @lsx1980 has the camera arm running on a stepper motor which does not rotate continuously but stops before taking each round of images.
Dear @w-bonelli and @lsx1980, thank you for your answers. I've seen your setup on your YouTube page: you used several cameras on a rig with fixed lighting, rotating around the plant in steps. Stopping every now and then accounts for shaking and blurry input, I can see that. Here is some sample data from our runs. We have the plant rotating instead of the cameras. We have a distinctive background, a QR code, and the plant in the scene. We hoped to utilize the QR codes for feature matching (although originally they were there for plant identification). However, I think the method fails to find enough features in our images, which results in the aforementioned error. Our idea is to 3D-reconstruct the plant root with what we have. Unfortunately, we cannot repeat the experiments with a different setup; that's the downside. On the upside, we have an almost unlimited amount of videos (of the same setup, different plants). Any idea/direction towards 3D reconstruction of our plants would be appreciated. I see that your method is doing exactly what we are after, which is why I am trying to see if it would work. Any advice would be highly appreciated.
Thanks for responding to our issue.
We did indeed have some doubts that the problem comes from the setup (the plant root rotating while the camera stays fixed). But the root rotates slowly, as you can see in the videos provided by @eyildiz-ugoe.
Thanks, we now know for sure that it's a feature problem.
@eyildiz-ugoe @UcefMountacer thanks for sharing your sample data. One thing that could help is preprocessing, in order to lighten dark images, omit blurry images, and crop more closely to the root. The pipeline has preprocessing options which attempt to do this (see the gamma correction and segmentation options; a rough sketch of the idea is below). I will see if I can do a test run later today on your samples.
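This is not the pipeline's own code, just a minimal OpenCV sketch of that kind of preprocessing, assuming JPEG frames in a `frames/` folder; the gamma value and blur threshold are guesses to tune per dataset:

```python
import cv2
import numpy as np
from pathlib import Path

GAMMA = 1.8             # >1 brightens dark images; tune per dataset
BLUR_THRESHOLD = 100.0  # variance-of-Laplacian cutoff; tune per dataset

Path("preprocessed").mkdir(exist_ok=True)

# Precompute a gamma-correction lookup table for 8-bit images.
lut = np.array(
    [((i / 255.0) ** (1.0 / GAMMA)) * 255 for i in range(256)]
).astype("uint8")

for path in sorted(Path("frames").glob("*.jpg")):
    img = cv2.imread(str(path))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Variance of the Laplacian is a cheap sharpness measure:
    # low variance means few edges, i.e. a blurry frame.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"skipping blurry frame {path.name} (score {sharpness:.1f})")
        continue

    cv2.imwrite(str(Path("preprocessed") / path.name), cv2.LUT(img, lut))
```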
@w-bonelli Thanks for your reply. We have used these commands; the last output I shared was produced with these options. The gamma correction improved the visibility of the root. However, I didn't see any effect from the segmentation option; at least, the root wasn't segmented well. Maybe this has to do with poor features. I'm interested to see your test results using our data. Please let us know when you have done it. Thanks
Hi @eyildiz-ugoe @UcefMountacer, thanks for sharing your sample data. I have discussed this with Wes. We tried one of your videos, “MVI_0590.MOV”, using https://ezgif.com/video-to-jpg to extract an image sequence from the video (a local alternative using ffmpeg is sketched after this comment). The results are attached here: https://drive.google.com/drive/folders/1oWldxZKu5SmhZmf-aYM5Enj0Tx5EkQkJ?usp=sharing

We were able to reproduce the error you encountered. The main cause was "no initial good feature found at feature matching step", which means the input images did not have enough features for feature matching and the subsequent steps that compute the coordinates of the matched points and reconstruct a 3D model. We also tried segmenting your image sequence to keep only the root part, which produced a very sparse 3D model.

We checked the image sequences converted from the videos. As we mentioned before, there is serious motion blur, and the root object is very blurry even though the image resolution meets the HD standard (1920×1080). You might wonder why the root is still blurry when the video is recorded in HD. The blur is introduced when the frames are compressed into video. MOV, for example, is a video container format developed by Apple that uses MPEG-4 (https://en.wikipedia.org/wiki/MPEG-4), which compresses video by removing both temporal redundancy and spatial redundancy (https://stackoverflow.com/questions/593649/how-does-mpeg4-compression-work). For temporal redundancy, motion estimation examines the movement of objects in an image sequence to obtain vectors representing the estimated motion, and motion compensation uses that knowledge to achieve data compression. This is why you can see many square-shaped blocks in your image sequence, especially around the root, which is the only "moving" object in your scene, as in the shared image "snapshot.png".

Suggestions: if possible, capture still image sequences directly rather than extracting frames from compressed video; otherwise, try removing the blur from the extracted frames before running the reconstruction.
Thank you for testing our pipeline.
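As an aside, frames can also be extracted locally instead of via ezgif; a minimal sketch using ffmpeg through Python's `subprocess` (assumes ffmpeg is installed and on the PATH; the filenames are illustrative):

```python
import subprocess
from pathlib import Path

Path("frames").mkdir(exist_ok=True)

# -qscale:v 2 requests near-maximum JPEG quality, so the extraction
# adds as little extra compression loss as possible on top of what
# the MOV encoder has already discarded.
subprocess.run(
    [
        "ffmpeg",
        "-i", "MVI_0590.MOV",     # input video from the sample data
        "-qscale:v", "2",         # high JPEG quality
        "frames/frame_%04d.jpg",  # numbered output frames
    ],
    check=True,
)
```

Note this cannot recover detail the MPEG-4 encoder already removed; it only avoids a second round of lossy compression.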
Thanks @lsx1980 for the great analysis of our dataset; it's really full of insights. We will try to get good data, although these recordings date back to 2018, so it's difficult to ask for non-blurred data of these plant roots. The suggestion to remove the blur from the images is the most feasible one for us right now. We will try some algorithms to do this (a simple starting point is sketched below) and test again with your tool. Until then, let's keep in touch. Thank you
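A cheap first thing to try before heavier deconvolution methods is unsharp masking; a minimal OpenCV sketch (the sigma and amount are guesses to tune, and note this only sharpens edges, it cannot truly undo motion blur):

```python
import os
import cv2

def unsharp_mask(image, blur_sigma=3.0, amount=1.5):
    """Sharpen by subtracting a Gaussian-blurred copy from the original."""
    blurred = cv2.GaussianBlur(image, (0, 0), blur_sigma)
    # addWeighted computes: image * (1 + amount) - blurred * amount
    return cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)

os.makedirs("frames_sharpened", exist_ok=True)
img = cv2.imread("frames/frame_0001.jpg")  # illustrative filename
cv2.imwrite("frames_sharpened/frame_0001.jpg", unsharp_mask(img))
```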