Getting access to the annotated images and JSON data on which the original models were trained #293
I am so sorry, I do not have access to the data anymore. We were not really precise with our data back then.
Ah, ok. Thanks for letting me know. Also, how did you split the photorealistic (from UE4) and non-photorealistic (NViSII-generated) datasets during training? Say, in a 100k dataset for a single object, how many were photorealistic and how many were non-photorealistic?
Part of the dataset is available online; it is the FAT dataset. From what I remember, it was 60k from FAT (selected randomly) and 60k from domain randomization, all rendered with UE4. For the HOPE objects I generated 60k with the NViSII script from this repo, not the FAT dataset.
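For anyone trying to reproduce a similar mix, here is a minimal sketch of assembling a 60k + 60k training list. The directory names (`data/fat`, `data/ue4_dr`), counts, and image/annotation layout are assumptions on my part, not the exact pipeline used for the paper:

```python
import random
from pathlib import Path

# Hypothetical locations; point these at wherever you unpacked the data.
FAT_ROOT = Path("data/fat")      # photorealistic FAT frames
DR_ROOT = Path("data/ue4_dr")    # domain-randomized UE4 (NDDS) frames
PER_SOURCE = 60_000

def list_frames(root):
    """Collect (image, annotation) pairs. Both FAT and NDDS exports keep a
    .json annotation file next to each rendered frame."""
    pairs = []
    for img in list(root.rglob("*.jpg")) + list(root.rglob("*.png")):
        ann = img.with_suffix(".json")
        if ann.exists():  # skips depth maps etc., which have no .json twin
            pairs.append((img, ann))
    return pairs

random.seed(0)
fat_frames = random.sample(list_frames(FAT_ROOT), PER_SOURCE)
dr_frames = random.sample(list_frames(DR_ROOT), PER_SOURCE)

# 60k + 60k, shuffled together so each batch sees both sources.
training_set = fat_frames + dr_frames
random.shuffle(training_set)
print(f"training on {len(training_set)} frames")
```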
Hey @TontonTremblay, what is the difference between the left and right versions of the images? They look exactly the same to me. Thanks,
For the YCB objects it was 60k from FAT (selected randomly) and 60k from domain randomization, all rendered with UE4 (NDDS). FAT was also built for stereo cameras (two RGBs); they are placed 8 cm from each other with parallel optical rays. You can ignore the right or the left, or mix them.
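In case it is useful, a small sketch of the two options mentioned above (keep one view, or mix both as independent monocular samples). The `*.left.jpg` / `*.right.jpg` naming follows the FAT release, but the root path is an assumption:

```python
import random
from pathlib import Path

FAT_ROOT = Path("data/fat")  # hypothetical path to the FAT dataset

# FAT names each stereo frame with a .left / .right suffix
# (e.g. 000000.left.jpg and 000000.right.jpg).
left = sorted(FAT_ROOT.rglob("*.left.jpg"))
right = sorted(FAT_ROOT.rglob("*.right.jpg"))

# Option 1: keep a single view; the other differs only by the 8 cm baseline.
frames = left

# Option 2: treat both views as independent monocular samples and mix them.
frames = left + right
random.shuffle(frames)
```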
Hi,
I am trying to generate or replicate the actual results shown in the paper. For that, I think you generated the dataset with different backgrounds, both photorealistic and non-photorealistic. Can I have access to the training data that you used?
Thanks,
Arghya