
Errors when using Meshroom to make photogrammetry model of small (1 cm long) objects #2591

Open
megaraptor1 opened this issue Nov 3, 2024 · 25 comments


@megaraptor1

I have been having trouble getting Meshroom to compile images into a 3D model using photogrammetry. I have been taking pictures of several specimens using a Canon EOS 90D with a macro lens (so focal length and lens information are available for all photos). These specimens are fairly small; the largest is about 1.2 cm in diameter. There are roughly 150 pictures taken in several rings all around the specimen at different orientation angles (see below for the reconstructed cameras). The specimen remains in the same position in every image but is rotated on a turntable by about 8-9 degrees between shots. The specimen also sits on a background with unique, non-repeating symbols and imagery to make image matching easier. The pictures are all very crisp and details are easy to make out, so in theory it should be relatively straightforward for Meshroom to match the images.

Images of specimen are relatively crisp

Nevertheless, Meshroom has consistently been unable to produce models of these specimens. I have photographed them on two separate occasions and tried to create models for multiple specimens, but without success.

On the first attempt, despite the specimen being in focus and sharply defined in each photo, Meshroom simply failed to reconstruct cameras for about half of the total images.

Reconstructing specimen in Meshroom

This led to a lot of gaps in the model and a really distorted final product.

Resulting model

I tried taking pictures of the same specimen again from a different orientation and I did get all of the cameras (see picture below).

AMNH 28439 second attempt

In practice, the StructureFromMotion result for this attempt looked better, but when I opened the resulting mesh in MeshLab (taken from the Texturing subfolder of the MeshroomCache folder) it was nearly empty, containing only a few triangles. Additionally, I get an error saying "the following textures have not been loaded: texture_1001.exr", which suggests a texture file was not output by Meshroom.

Second attempt resulting mesh

I tried this again with a second specimen and got similar results. Again, all 164 cameras were accurately reconstructed (see image), and the StructureFromMotion result suggested the model would turn out relatively okay.

Photograph of specimen 2

But once the process finished and I opened the resulting model in MeshLab, there was nothing there but a few triangles.

Resulting mesh for specimen 2

I am unsure as to what is going wrong. I have been fairly diligent about following practices that should improve image matching and mesh creation, but it doesn't seem to work. Notably, I've been able to get Meshroom to work with photographs of large objects taken from a distance and with screenshots of a 3D model, but not with photos of these smaller specimens.

Desktop:

  • OS: Windows 10
  • Python version: 3.7.4
  • Meshroom version: 2023.2.0
@msanta

msanta commented Nov 4, 2024

Can you show some cropped-in views of the subject at full resolution? To me it looks like the photos taken from the side do not have enough depth of field, and Meshroom was unable to get enough features out of the specimen itself to work out the camera positions. In the image view there is an icon with three dots that shows the features detected in the image. Take a look to see what Meshroom detected on the specimen.
image
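A quick offline stand-in for that check, if useful: count and draw SIFT keypoints on one of the photos to see whether they land on the specimen or only on the background. This is only a sketch, assuming Python with opencv-python (4.4+) installed; the filename is a placeholder:

```python
# Rough stand-in for Meshroom's feature view: detect SIFT keypoints
# and visualize where they land on the photo.
# Assumes opencv-python >= 4.4; "IMG_0001.jpg" is a placeholder name.
import cv2

img = cv2.imread("IMG_0001.jpg", cv2.IMREAD_GRAYSCALE)
keypoints = cv2.SIFT_create().detect(img, None)
print(f"{len(keypoints)} SIFT keypoints detected")

# Draw the keypoints (with size/orientation) onto a color copy.
vis = cv2.drawKeypoints(
    cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), keypoints, None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("keypoints.jpg", vis)
```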

Are you able to get closer to the specimen so it takes up half the image and use focus stacking to get enough of it in focus?

As to why you didn't get a mesh from the second attempt, I do not know. The point cloud looks decent enough. Double-click on the Meshing node, which should show the mesh in the viewer. How many triangles is it reporting?

Are you able to share the images so I can play around with them?

@megaraptor1
Author

Are you able to get closer to the specimen so it takes up half the image and use focus stacking to get enough of it in focus?

No, I do not think I can get any closer. I was already fairly close to the minimum focusing distance, and when I tried to get closer the camera would fail to take the picture.

As to why you didn't get a mesh from the second attempt, I do not know. The point cloud looks decent enough. Double-click on the Meshing node, which should show the mesh in the viewer. How many triangles is it reporting?

When I bring up the one with the decent point cloud, it says the resulting mesh has only 99 triangles. By contrast, the one with the distorted model produces a mesh of about 326k triangles.
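For reference, the triangle count can also be checked outside of Meshroom and MeshLab. A minimal sketch, assuming Python with the trimesh package; the cache path is illustrative (the real folder name is a hash):

```python
# Count faces in the textured mesh that Meshroom wrote to its cache.
# The path below is illustrative; the actual folder name is a node hash.
import trimesh

mesh = trimesh.load(
    "MeshroomCache/Texturing/<node-uid>/texturedMesh.obj", force="mesh")
print(f"{len(mesh.faces)} faces, {len(mesh.vertices)} vertices")
```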

In the image view there is an icon with three dots that shows the features detected in the image. Take a look to see what Meshroom detected on the specimen.

This is the one that failed to work

Meshroom Points 2

This is the revised version of the first specimen that produced a near-empty model

Meshroom Points

I tried opening the other project that produced an empty model, but somehow it got overwritten and is now blank.

Can you show some cropped in views of the subject at full resolution?

Here are some cropped-in views of the subject at full resolution. These are the same images for which I showed the feature views (the icon with three dots) above.

Cropped in specimen 2

Cropped in specimen 1

Are you able to share the images so I can play around with them?

Yes. How may I best be able to send them to you?

@msanta

msanta commented Nov 7, 2024

No, I do not think I can get any closer. I was already fairly close to the minimum focusing distance, and when I tried to get closer the camera would fail to take the picture.

Would focus stacking be an option?
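Roughly, focus stacking merges several shots focused at different depths into one image that is sharp throughout. A minimal per-pixel sketch of the idea, assuming Python with opencv-python and numpy, already-aligned frames, and placeholder filenames; dedicated tools handle alignment and blending far more robustly:

```python
# Naive focus stack: for every pixel, keep the value from whichever
# frame has the strongest local Laplacian response (i.e. is sharpest).
# Assumes the frames are already aligned; real pipelines align first.
import cv2
import numpy as np

paths = ["stack_0.jpg", "stack_1.jpg", "stack_2.jpg"]  # placeholders
images = [cv2.imread(p) for p in paths]

# Per-frame sharpness map: absolute Laplacian, smoothed a little so
# the per-pixel selection doesn't flicker from pixel to pixel.
sharpness = [
    cv2.GaussianBlur(
        np.abs(cv2.Laplacian(cv2.cvtColor(i, cv2.COLOR_BGR2GRAY),
                             cv2.CV_64F)), (9, 9), 0)
    for i in images
]

best = np.argmax(np.stack(sharpness), axis=0)  # sharpest frame per pixel
stacked = np.zeros_like(images[0])
for idx, img in enumerate(images):
    stacked[best == idx] = img[best == idx]

cv2.imwrite("focus_stacked.jpg", stacked)
```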

Yes. How may I best be able to send them to you?

Could you put them on Google Drive (or Dropbox) and share the folder?

@megaraptor1
Author

Would focus stacking be an option?

I am not sure. The specimen is in focus in the current set of images, and I had the depth of field maximized (the smallest aperture) for each photo.

Could you put them on Google Drive (or Dropbox) and share the folder?

Okay, I have a folder together. Where do I need to share it?

@FlachyJoe

Maybe you could paint your object to avoid flare.

@megaraptor1
Author

megaraptor1 commented Nov 10, 2024 via email

@FlachyJoe

And what about structured lighting?

@msanta

msanta commented Nov 11, 2024

Okay, I have a folder together. Where do I need to share it?

Just paste the link here.

@megaraptor1
Author

Here is the link to the folder.

https://www.dropbox.com/scl/fo/riwqhpocigsx1cfkdy8o2/AJx9AKDJZUjcovP7iE3T35o?rlkey=6ybnfsf5xsrt6ychw96ql6av1&st=oigpm9y0&dl=0

@FlachyJoe

I am unsure what you mean by structured lighting. Do you mean a 3D surface scanner? I tried experimenting with one, but I didn't get very good results; the type of scanner I had available didn't seem able to scan such a small specimen.

@msanta

msanta commented Nov 14, 2024

For the first dataset Meshroom produced a model for me:

image

I only did the SfM step for the second dataset, but it looked fine.

Can you check the log in the Meshing node? Are there any warnings in it?

What I would try to do is to reconstruct something else using the exact same settings you used for the specimens. It doesn't have to be very detailed (30 photos will do). If that doesn't work then try again with the default settings. If it works with default settings then you could try changing settings one at a time until it no longer works.

@FlachyJoe

@megaraptor1 I'm thinking of a light pattern projected onto the model to increase the feature count. The light has to be fixed relative to the model, so mounted on the turntable.

@megaraptor1
Author

@msanta

For the first dataset Meshroom produced a model for me: I only did the SfM step for the second dataset, but it looked fine.
Can you check the log in the Meshing node? Are there any warnings in it?

If that is the case I will have to try it again and see if it works. I don't think it will necessarily produce different results, but if you got something, maybe it was just random error. Did you use any alternative settings in Meshroom I need to be aware of, or did you just use the defaults?

What I would try to do is to reconstruct something else using the exact same settings you used for the specimens. It doesn't have to be very detailed (30 photos will do). If that doesn't work then try again with the default settings. If it works with default settings then you could try changing settings one at a time until it no longer works.

I have tried doing this with photographs of some other, larger objects, and those have generally worked. I can try again with something small if this next attempt at meshing doesn't go well. I'm wondering if it has something to do with how close the object is to the camera; the feature window seems to show the tooth in yellows and reds even though it was close enough for the autofocus to focus on it properly.

@FlachyJoe

I'm thinking of a light pattern projected onto the model to increase the feature count. The light has to be fixed relative to the model, so mounted on the turntable.

I am still a little confused as to what you mean. Would that be something like using a light diffuser to prevent there from being any spots of over-illumination or flare? What kind of device would produce this structured lighting?

Some of my colleagues suggested that I use small dabs of colored Play-Doh, or print out a version of the photogrammetry guide with unique colored marks on the paper, to increase the number of unique features linking the images together. Is that like what you're suggesting?

I've also been wondering if printing out the photogrammetry backboard at a higher resolution, so the lines are much sharper, might help, though I am much less confident in this.

@msanta

msanta commented Nov 21, 2024

Did you use any alternative settings in Meshroom I need to be aware of, or did you just use the defaults?

For the specimen 2 set everything was left at default. For the specimen 1 set I changed these settings in the ImageMatching node:
image
I don't think that actually made a difference in this case (for my projects I like to increase these values to get more matches).

By the way I am using version 2023.2 on Linux.

I am still a little confused as to what you mean. Would that be something like using a light diffuser to prevent there from being any spots of over-illumination or flare? What kind of device would produce this structured lighting?

From what I understand, the idea is to project a light pattern onto the object so the feature detector can find features on an object that otherwise has very few.
image

This would be better than placing dots on the object, since you could take one photo without the light and one with it. The photos with the light projection would be used for feature extraction, image matching, SfM, and meshing; the normal photos would then be used for texturing.

In any case, feature extraction is not the issue here. The images have enough features. The problem is somewhere in the meshing process. I suggest trying again with a few photos using default settings.

If you are still having issues and need to get these objects scanned urgently, you might want to try out RealityCapture.

@megaraptor1
Author

megaraptor1 commented Nov 22, 2024

Okay, so I tried it again and got the same result. I didn't change anything in the ImageMatching node, just to keep things consistent.

What I did was take the first 70 or so photos from the Specimen 2 folder in Dropbox and try to make a mesh out of them.

The StructureFromMotion output looked relatively decent, though it had that issue where the image is mirrored on both sides, almost certainly a result of not using the entire image set. No matter; a great result is not needed here, just a replicable one.

Attempt 2a

This is what I get for Display Features.

display

I also checked the pipeline, and there were no errors anywhere that would have caused a loss of information or prematurely truncated the process.

However, once again I got a non-functional model. I ended up with only this little scrap of 151 triangles.

Attempt 2

No clue why this is turning out this way. It cannot be the photos, as Meshroom produced a model on your end. It also cannot be the computer going to sleep in the middle of the meshing process and disrupting Meshroom; I was at the computer the entire time and it never went into sleep mode.

Could it possibly be saving the mesh somewhere else?

@msanta

msanta commented Nov 22, 2024

I managed to replicate your issue (success!). When I first tried your dataset I had the 'Downscale' value in the DepthMap node set to 4 because my video card couldn't handle the default value of 2. Now that I have a better video card, I was able to run with the default downscale, and the result was a mesh with only a handful of triangles.

I also tried meshing the second dataset, and it failed to produce a mesh with a downscale value of 2 (a few triangles) or 4 (failed completely); however, it did work with a downscale value of 8. I guess I should have tried that earlier instead of assuming it would just work.

I have no idea why a lower downscale value causes the meshing step to fail in this way. Luckily, a downscale of 4 or 8 seems to produce acceptable results (although whether they are acceptable is for you to decide).
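For anyone scripting these retries, the same override can in principle be applied through the meshroom_batch executable that ships with the release. A hedged sketch driving it from Python; the flag names and override syntax should be verified against `meshroom_batch --help` for your version, and the paths are placeholders:

```python
# Re-run the default pipeline with the DepthMap downscale forced to 8.
# Assumes the meshroom_batch binary from the 2023.x release is on PATH;
# check `meshroom_batch --help` to confirm the exact flag names.
import subprocess

subprocess.run([
    "meshroom_batch",
    "--input", "specimen1_images/",             # placeholder folder
    "--output", "specimen1_output/",            # placeholder folder
    "--paramOverrides", "DepthMap:downscale=8",
], check=True)
```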

@megaraptor1
Author

I'll have to try running it again and see if it works.

@megaraptor1
Author

Okay, so I tried it again. All settings were left at default except for the downscale factor, which was set to 8 in the DepthMap node. The data used was the Specimen1 dataset in Dropbox that @msanta used to create their mesh.

The process still did not work. The StructureFromMotion output looked okay (but see below), but the final texture map was basically flat and the model looked like a splattered tomato.

Final model

I tried setting the downscale factor to 8 for the Texturing node as well, in case that was where the problem was, but it did not change the resulting model. Again, no other nodes were altered; these are out-of-the-box settings aside from the downscale factor.

Looking at the StructureFromMotion model, what's strange is that it looked inverted from what the specimen should look like, although a few images may have been reconstructed right side up (the brown points on the side facing the camera). This seems like it may be related to the problem.

Attempt 2024-11-22 2

@msanta

msanta commented Nov 26, 2024

No idea why the point cloud is inverted. Has it always been inverted? There is an issue about a mesh getting inverted (#1112) but there was no solution to it.

I tried the 2nd image set in Colmap and it also produced an inverted point cloud. Very strange.

@megaraptor1
Author

I don't think so; it's been right side up in quite a few past runs of the StructureFromMotion node. I've seen a few runs where it doesn't appear to recognize that all of the photos are of the same object and produces two copies mirrored across the flat backdrop.

What's weird is that the models are often not consistent when I try to replicate things. Sometimes the same set of photos will produce two slightly different models, or a run will hit an error and stop, but if I tell it to start again it compiles without issue.

@megaraptor1
Author

megaraptor1 commented Dec 4, 2024

So I tried running a mesh of a larger specimen, a roughly 10 cm long jaw, using a small stack of pictures taken with a phone camera in my copy of Meshroom. It produced a decent model, about as good as can be expected given the quality of the pictures. This seems to suggest the problem is related to either the camera type or possibly the size of the object being photographed. However, selecting the three dots still showed the larger specimen in red.

@megaraptor1
Author

megaraptor1 commented Jan 1, 2025

Okay, so I tried some more experimentation with the specimens. Specifically, I have tried several new things…

  • I tried shrinking down the turntable wheel to get more of it in frame with each picture. Specifically, it is now small enough that the series of letters and numbers on the outer edge of the wheel is visible in every shot. My hope is that having these unique symbols along the outer edge of the scene will make it easier for the program to link up views and, because many letters and numbers are asymmetric, will prevent the program from trying to fit images into the model by simply inverting them relative to the “floor”.
  • I also tried printing out the turntable wheel at a much higher resolution (1200 dpi) in the hopes that having sharper edges and corners would help in feature recognition.
  • I tried taking several pictures from further back, getting the entire wheel in frame. My hope is that this will let the program more easily see where things are and link everything together using the unique letters and numbers around the periphery.
  • I tried taking photos from several additional angles, including some from directly above the specimen, to minimize how different any two images are from each other. The hope is that this will prevent cameras from failing to reconstruct due to insufficient overlap.

It still is not giving me good results. I tried with an initial sample set of 72 images, but this still resulted in an inverted mirror image in the StructureFromMotion model, and when I checked the resulting model I got a mesh of 6 triangles. However, all 72 cameras were reconstructed, with none failing to be identified in the model (though given that an inverted image was produced, I doubt they are all accurate).

124 images

In fact, I suspect quite a few are wrong; looking at the cameras, there are a significant number of doubled cameras where they should be largely non-overlapping and arranged in an arc.

Camera arc

I did a TextureDownscale of 4 and it still produced a model with a triangle count of 6.

Downsampling

I also tried adjusting the VocTree:Max Descriptors and VocTree:Nb Matches settings in the ImageMatching node, just to see if that did anything, but I couldn't get them to open up or work. Clicking on them did not give me an option to change anything.

What I wonder is whether the reason Meshroom isn't producing a coherent model is related to the inverted-image problem. I've seen a photogrammetry guide suggest that using two dissimilar bases will result in the program editing out the bases and keeping only shared features. What if the reason it is only producing a few triangles is that the program is removing everything that is not shared, and, because those inverted images are being produced, the specimen is not consistent across all photos?

What’s also very strange is when I reconstruct the image using just a single rotation, it reconstructs the cameras in a weird parabola shape. I know for a fact the camera was not reconstructed in these positions, I had the specimen on a turntable and took pictures of it with the camera in a fixed position.

Image parabola

Even more interestingly, reconstructing the specimen with these 35 images from a single rotation creates a significant ghost image where there should be none.

Ghost image

Compiling these images I get weird results. Meshroom does an okay job of reconstructing the photogrammetry turntable (though the letters and numbers don't come out clean), but the actual specimen is a complete mess.

One rotation mesh

Image is a mess

Checking Display Features, a lot of the image is in red. But again, the image itself is in focus...

One rotation image

I am using all default settings for this model. @msanta, did you have any settings set differently from the installation defaults when you ran the program and produced the image you showed previously? I am unable to replicate your results.

I am still not sure what is going wrong. Given that @msanta was able to produce a decent model from the images taken previously, it almost seems like something must be set wrong on my end, but I am unsure what that would be, as I am using the program defaults straight out of the box.

@msanta, something else I am wondering: were you able to produce your model with the camera intrinsics included? I noticed that even with the small single-rotation dataset, many of the points under Display Features were red or yellow. I am wondering if the program is using built-in focal length data or something, and that is why it is failing to recognize features. Though I would assume the camera metadata is still on the photos I forwarded.

Feature map one rotation

These are the exact camera intrinsics I was using, imported directly as metadata from the images.

Camera intrinsics
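One way to confirm that the focal length and camera model really are embedded in the shared photos, independent of Meshroom: a small sketch assuming Python with Pillow 9.2+ and a placeholder filename:

```python
# Read the camera model and lens metadata straight from the EXIF tags.
# Pillow 9.2+ provides the ExifTags.Base / ExifTags.IFD enums used here.
from PIL import Image, ExifTags

exif = Image.open("IMG_0001.jpg").getexif()  # placeholder filename
print("Model:", exif.get(ExifTags.Base.Model))

sub = exif.get_ifd(ExifTags.IFD.Exif)  # lens data lives in the Exif sub-IFD
print("FocalLength:", sub.get(ExifTags.Base.FocalLength))
print("FNumber:", sub.get(ExifTags.Base.FNumber))
```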

@megaraptor1
Author

Update: I tried compiling the photos with the options "Cross Matching", "Feature Matching", and "Match from Known Camera Poses" turned on in the FeatureMatching node. The model did slightly better (there was no parabolic reconstruction of cameras), but some of the cameras were still incorrectly rotated and "doubled", and some of the points were still inverted. Additionally, no usable model was produced: I ended up with only 20 triangles.

@megaraptor1
Author

Update 2: I wondered if a lack of disk space on my computer could be causing model compilation to fail. I cleaned out a lot of my hard drive and tried again, but the model still failed to compile correctly. The specimen was being reconstructed incorrectly in the StructureFromMotion node, and the resulting model was incorrect. Strangely, the turntable was reconstructed relatively well.

I tried running the same images in a trial version of 3D Zephyr and the model compiled just fine.

@msanta

msanta commented Jan 7, 2025

I am using all default settings for this model. @msanta, did you have any settings set differently from the installation defaults when you ran the program and produced the image you showed previously? I am unable to replicate your results.

@msanta, something else I am wondering: were you able to produce your model with the camera intrinsics included? I noticed that even with the small single-rotation dataset, many of the points under Display Features were red or yellow. I am wondering if the program is using built-in focal length data or something, and that is why it is failing to recognize features. Though I would assume the camera metadata is still on the photos I forwarded.

I didn't change any settings from the defaults. Here are the camera intrinsics:
image

The only differences between your setup and mine would be the OS (Windows vs. Linux) and different hardware. I suppose I could also try this on Windows to confirm whether that makes a difference.

For what it is worth, I have also tried these images in RealityCapture and got a good result.
screenshot1

@megaraptor1
Author

@msanta Okay, thank you for letting me know. I assume it must be something about the difference between Linux and Windows. I am not sure what else could be causing such a difference with the default settings.
