
Is it feasible to annotate an object with 8 more points, even if the bounding box will not be a cuboid? #191

Open
sejmoonwei opened this issue Nov 15, 2021 · 5 comments

Comments

@sejmoonwei

sejmoonwei commented Nov 15, 2021

Thanks for the great work on DOPE that you've shared with us. I've been working with it recently and am astonished at the accuracy that can be inferred from RGB data alone.
I've looked through the issues and found no solution to this problem: my object (a nut) cannot be represented properly by a cuboid (rectangle). Representing it as a hexagonal column would be better, I think.
So the question is: is it feasible to assign 8 more points in the annotation of an object? Should I do this when creating the 3D models?
Looking forward to your reply.
[image: 微信图片_20211115114915]

[image: 微信图片_20211115114922]

@TontonTremblay
Collaborator

There are symmetries in your model, so you need to be careful how you generate your data. You can still use the cuboid representation. Read through #186; there is also a script that Martin shares that you might want to use.

@sejmoonwei
Author

> There are symmetries in your model, so you need to be careful how you generate your data. You can still use the cuboid representation. Read through #186; there is also a script that Martin shares that you might want to use.

Thanks for the quick reply. I'll check it out.

@sejmoonwei
Author

sejmoonwei commented Nov 25, 2021

> There are symmetries in your model, so you need to be careful how you generate your data. You can still use the cuboid representation. Read through #186; there is also a script that Martin shares that you might want to use.

Hi @TontonTremblay

I've made some progress. I finally chose a bolt as the object to detect, and I now have some problems.

My steps:

1. Create the 3D white uncolored model by scanning, tune the mesh, import it into UE4/NDDS, and edit the material/texture to make it look close to the real one.

  • the uncolored model in MeshLab

[image: 微信图片_20211125145138]

  • after processing

[image: 屏幕截图 2021-11-25 151608]

2. Generate data. So far I only use DR data generated by NDDS. I use 20k COCO images, chosen randomly as backgrounds, enable random rotation/movement, and get 20k training samples. As the object is symmetrical, following #176 (comment), I made this dataset with roll and pitch rotations in the range (0, 90) and yaw rotation disabled, so the poses in the training data are restricted. They look something like this:

  • far and near

[image: 微信图片_20211125161030]
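For reference, the restricted pose sampling described in step 2 can be sketched as follows. This is a hypothetical illustration only (the actual restriction is configured inside NDDS, not written by hand); the angle ranges are the ones described above, and `euler_to_matrix` is a local helper, not an NDDS function.

```python
import math
import random

def euler_to_matrix(roll, pitch, yaw):
    """Build R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in degrees."""
    r, p, y = (math.radians(a) for a in (roll, pitch, yaw))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def sample_restricted_pose():
    """Roll/pitch limited to [0, 90] degrees, yaw disabled, mirroring the
    symmetry workaround discussed in issue #176."""
    roll = random.uniform(0.0, 90.0)
    pitch = random.uniform(0.0, 90.0)
    yaw = 0.0  # disabled: the bolt is rotationally symmetric about this axis
    return (roll, pitch, yaw), euler_to_matrix(roll, pitch, yaw)
```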

3. Train. I use the command `python train.py --data /home/albert/Desktop/myp/dope/Deep_Object_Pose-master/bolt/ --batchsize 16 --pretrained False --datatest /home/albert/Desktop/myp/dope/Deep_Object_Pose-master/bolt/ --imagesize 400 --workers 0 --gpuids 0 1`, with the epoch count set to 60.
I stopped training at epoch 25 because the loss stopped decreasing; the initial loss was 0.6 and the final loss was 0.05.

Results

From reading earlier issues, the belief maps seem to be important, so I post the detection images and belief maps in pairs.

  • A bad result? The belief maps show the corner points converge, but to the wrong locations.
    [image: 微信图片_20211125163331]
    [image: 微信图片_20211125163307]

  • This one seems better. The dataset contains a lot of poses like this, but the accuracy is still not satisfactory.
    [image: 微信图片_20211125163626]
    [image: 微信图片_20211125163810]

  • And a head-down pose, which is NOT in the dataset. I'm a little confused why this happened, as I have some head-up samples in the dataset. The model seems to fail to generalize to poses beyond those given in the dataset.
    [image: 微信图片_20211125164922]
    [image: 微信图片_20211125164927]
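To sanity-check behaviour like the above, it can help to look at exactly where each belief map peaks. A minimal, hypothetical peak extractor over a 2-D belief map (plain nested lists here for illustration; a real pipeline would operate on the network's output tensors, and `belief_peaks` is not part of DOPE):

```python
def belief_peaks(belief, thresh=0.1):
    """Return (row, col, score) for every local maximum above `thresh`
    in a 2-D belief map given as a list of rows."""
    h, w = len(belief), len(belief[0])
    peaks = []
    for i in range(h):
        for j in range(w):
            v = belief[i][j]
            if v < thresh:
                continue
            # compare against the 8-connected neighbourhood
            neighbours = [belief[a][b]
                          for a in range(max(0, i - 1), min(h, i + 2))
                          for b in range(max(0, j - 1), min(w, j + 2))
                          if (a, b) != (i, j)]
            if all(v >= n for n in neighbours):
                peaks.append((i, j, v))
    return peaks
```

Plotting the extracted peaks next to the projected ground-truth cuboid corners makes it easier to see whether the network converges to a consistently wrong corner (a symmetry/labeling problem) or just scatters (an underfitting problem).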

My problems are as follows:

  1. Do I have to generate photorealistic data with NViSII and train the model on a DR + photorealistic dataset? Will that help?
  2. Am I handling the symmetric object correctly? Judging from the results, I'm probably wrong about that (sad). I noticed in "Training loss fails to converge" #176 there is a flip_symmetrical_objects.py, but the author seems not to have adopted that script, so I followed the earlier method the author provided.
  3. What should I do next? The goal is to train a robust detection model for this object.
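On question 2: for an object with continuous symmetry about one axis (like a bolt), an alternative to restricting the sampled rotations is to canonicalize each ground-truth rotation so that all visually identical poses receive the same label. A hypothetical sketch, not DOPE's actual code, assuming the symmetry axis is the object's z axis: keep only where that axis points and discard the unobservable twist around it.

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def canonicalize_about_z(R):
    """Map any rotation of a z-symmetric object to a canonical one with the
    same appearance: the shortest rotation taking (0,0,1) onto R @ (0,0,1)."""
    vx, vy, vz = R[0][2], R[1][2], R[2][2]   # third column = rotated z axis
    kx, ky = -vy, vx                          # axis (0,0,1) x v (z-component is 0)
    s, c = math.hypot(kx, ky), vz             # sin/cos of the rotation angle
    if s < 1e-12:                             # axis already aligned (or flipped)
        return ([[1, 0, 0], [0, 1, 0], [0, 0, 1]] if c > 0
                else [[1, 0, 0], [0, -1, 0], [0, 0, -1]])
    kx, ky = kx / s, ky / s
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2
    K = [[0, 0, ky], [0, 0, -kx], [-ky, kx, 0]]
    K2 = matmul(K, K)
    return [[(1 if i == j else 0) + s * K[i][j] + (1 - c) * K2[i][j]
             for j in range(3)] for i in range(3)]
```

With labels canonicalized this way, two renders that look identical (differing only by spin about the bolt's axis) no longer push the keypoint heads toward conflicting targets.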

Thanks for the time spent on my issue. I'm looking forward to being enlightened.

@TontonTremblay
Collaborator

I would randomize the colors and materials of the bolt. We did that in https://github.com/NVlabs/DREAM and it worked pretty well; in particular, learning the metallic material without NViSII is going to be hard, I think.
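The kind of per-frame material randomization being suggested can be sketched renderer-agnostically. The parameter names below mirror a generic PBR material (base color / metallic / roughness), and the sampling ranges are illustrative guesses, not values from DREAM; applying the dict is left to whichever renderer (NDDS, NViSII, BlenderProc) you use.

```python
import random

def random_bolt_material(rng=random):
    """Draw one random PBR material per training frame."""
    return {
        "base_color": tuple(rng.uniform(0.05, 0.95) for _ in range(3)),
        "metallic":  rng.uniform(0.0, 1.0),   # cover both dull and shiny finishes
        "roughness": rng.uniform(0.1, 0.9),   # avoid perfect mirrors
    }
```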

Dealing with symmetries is really not that easy. We have a loss term for it in our CenterPose work: https://pythonrepo.com/repo/NVlabs-CenterPose. It just came out and might be worth exploring, but that will mean some work on your end to incorporate the loss term. If it works, I would love a PR ;)

Also, your scene is always lit the same way; I do not see much diversity in the lighting. Have you thought of using BlenderProc? I am not sure about getting the data into the DOPE format, but that might help.
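A minimal sketch of the lighting diversity being suggested, independent of any particular renderer; the sampling ranges (distance, intensity, color jitter) are illustrative assumptions, not settings from BlenderProc.

```python
import math
import random

def random_light():
    """Sample a point light on a hemisphere above the object, with varied
    azimuth, elevation, distance, intensity, and a warm/cool color jitter."""
    theta = random.uniform(0.0, 2.0 * math.pi)   # azimuth
    phi = random.uniform(0.1, math.pi / 2.0)     # polar angle: keep the light above
    r = random.uniform(0.5, 3.0)                 # distance from the object (meters)
    position = (
        r * math.cos(theta) * math.sin(phi),
        r * math.sin(theta) * math.sin(phi),
        r * math.cos(phi),
    )
    return {
        "position": position,
        "intensity": random.uniform(50.0, 500.0),
        "color": (1.0, random.uniform(0.8, 1.0), random.uniform(0.7, 1.0)),
    }
```

Resampling one or more such lights per frame breaks the constant-illumination pattern visible in the screenshots, which otherwise lets the network overfit to one shading direction.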

@sejmoonwei
Author

> I would randomize the colors and materials of the bolt. We did that in https://github.com/NVlabs/DREAM and it worked pretty well; in particular, learning the metallic material without NViSII is going to be hard, I think.
>
> Dealing with symmetries is really not that easy. We have a loss term for it in our CenterPose work: https://pythonrepo.com/repo/NVlabs-CenterPose. It just came out and might be worth exploring, but that will mean some work on your end to incorporate the loss term. If it works, I would love a PR ;)
>
> Also, your scene is always lit the same way; I do not see much diversity in the lighting. Have you thought of using BlenderProc? I am not sure about getting the data into the DOPE format, but that might help.

Got it. I'll first try NViSII to randomize the color and material and check the results, then look into the new loss term in https://pythonrepo.com/repo/NVlabs-CenterPose. Thanks for the advice. Hope your recent experiments go well : )
