Dataset Annotation file format #1
Thanks for your interest in our dataset.
The oikit package provides basic usage of our dataset; you can find the code for loading the data and annotations in it. Best,
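For orientation, here is a minimal sketch of reading one of the raw `.pkl` annotation files directly, which is what a loader does under the hood. The key names (`hand_j`, `hand_v`) are assumptions for illustration, not the confirmed oikit schema; the joint and vertex counts follow the standard MANO hand model (21 joints, 778 mesh vertices):

```python
import pickle

def load_annotation(path):
    """Load a single annotation pickle from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Build a stand-in file so the sketch is self-contained
# (key names here are hypothetical, not the real schema):
anno = {
    "hand_j": [[0.0, 0.0, 0.0]] * 21,   # 21 hand joints (MANO convention)
    "hand_v": [[0.0, 0.0, 0.0]] * 778,  # 778 hand mesh vertices (MANO)
}
with open("demo_anno.pkl", "wb") as f:
    pickle.dump(anno, f)

loaded = load_annotation("demo_anno.pkl")
print(len(loaded["hand_j"]), len(loaded["hand_v"]))
```

In practice one would use oikit's own loaders rather than unpickling by hand, since they resolve the directory layout and key names for you.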
Thank you for clarifying this. I have a follow-up question regarding the OakInk-Shape annotation format. For example, I am looking at the folder:
Thank you.
Hi,
Hope these help.
Thank you, Lixin!
Is my understanding of the OakInk-Shape data format correct? If yes, I am a little confused by the directory structure: sometimes there is a single hexadecimal folder. Can you please correct me if I am wrong, or provide some clarification here? Thanks!
Hi, thank you for your interest in our work. Yes, we store the hand grasp parameters there. In some cases, since the grasp might not be suitable for transferring, or the category does not contain enough object CAD models, we only provide the grasps of real-world objects. Also, as we run a perceptual evaluation, some hand poses have been filtered out because they do not satisfy visual plausibility.
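As a rough illustration of what "hand grasp parameters" typically look like in MANO-based datasets: a 10-dim shape vector, a 48-dim axis-angle pose (16 joints × 3, including global rotation), and a 3-dim wrist translation. The field names and exact shapes below are assumptions, not the confirmed OakInk-Shape schema:

```python
import numpy as np

# Hypothetical grasp record in MANO terms; OakInk-Shape's actual
# field names and layout may differ.
grasp = {
    "hand_shape": np.zeros(10),  # MANO shape (beta) coefficients
    "hand_pose": np.zeros(48),   # axis-angle pose, incl. global rotation
    "hand_tsl": np.zeros(3),     # wrist translation
}

for key, val in grasp.items():
    print(key, val.shape)
```

Passing such parameters through a MANO layer would yield the posed hand mesh that the perceptual evaluation mentioned above would then score.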
Awesome, thanks! One follow-up question: for the real-object case
Yes. We invited 12 subjects to grasp the objects with different intents. Usually, a real-world object will have about ten different grasps with four types of intents: use, hold, lift-up and handover. |
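The bookkeeping described above (roughly ten grasps per object spread over four intent types) can be sketched by grouping grasp records per object and intent. All record names and ids below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical grasp records: (subject_id, object_id, intent)
records = [
    ("sub01", "S100014", "use"),
    ("sub02", "S100014", "hold"),
    ("sub03", "S100014", "lift-up"),
    ("sub04", "S100014", "handover"),
    ("sub05", "S100014", "use"),
]

# Count grasps per object, broken down by intent
grasps_per_object = defaultdict(lambda: defaultdict(int))
for subject, obj, intent in records:
    grasps_per_object[obj][intent] += 1

print(dict(grasps_per_object["S100014"]))
```

This kind of tally is handy for sanity-checking a download against the "about ten grasps, four intents" expectation.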
Another follow-up question regarding the image data (sorry for the many questions!). For sequences where the intent is handover, there are two actions, give and receive, which means there are two hands. Do you provide annotations for both hands? For example, I am looking at annotations in
Yes, that is correct. |
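The annotation filenames in this thread, such as `A01001__0003__0002__2021-09-26-20-02-08__0__6__1.pkl`, are double-underscore-separated. A small parser makes the fields easy to compare; note that what each field means (sequence id, timestamp, camera, frame, hand index, etc.) is an assumption here, not something confirmed in the thread:

```python
def parse_anno_name(filename):
    """Split an annotation filename into its '__'-separated fields.
    The semantics of each field are assumed, not confirmed."""
    stem = filename.rsplit(".", 1)[0]  # drop the .pkl extension
    return stem.split("__")

fields = parse_anno_name("A01001__0003__0002__2021-09-26-20-02-08__0__6__1.pkl")
print(fields)
```

The two files asked about earlier differ only in the last field (`1` vs `2`), which, per the exchange above, plausibly indexes the two hands in a handover sequence.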
Dear authors, I am trying to visualize some data based on different modes of splits, or even different data splits, using this. But it looks like these splits are not updated in the code yet: OakInk-Image loader. Can you please update this? That is, can you please provide different data splits based on subjects/objects, and also train/val/test splits? Thanks
Of course, we plan to release version_3 of the OakInk dataset around July 25th. version_3 includes:
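In the meantime, a subject-based split can be sketched generically: hold out entire subjects so that no test subject appears in training. The sequence ids and subject mapping below are toy placeholders, not the dataset's real naming:

```python
def subject_split(seq_ids, subject_of, test_subjects):
    """Split sequence ids so held-out subjects never appear in train."""
    train = [s for s in seq_ids if subject_of[s] not in test_subjects]
    test = [s for s in seq_ids if subject_of[s] in test_subjects]
    return train, test

# Toy example with hypothetical ids:
seqs = ["seq_a", "seq_b", "seq_c", "seq_d"]
subject_of = {"seq_a": "sub01", "seq_b": "sub02",
              "seq_c": "sub01", "seq_d": "sub03"}

train, test = subject_split(seqs, subject_of, test_subjects={"sub03"})
print(train, test)
```

An object-based split follows the same pattern with an object-id mapping in place of `subject_of`; the official splits would supersede any such hand-rolled version.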
Dear authors,
Congratulations on this awesome work! This is superb and solid work.
Thanks for releasing Dropbox versions of the dataset. I have some questions regarding the dataset format and annotations:

1. For a sequence named `S100014_0003_0002`, is the object used `S100014`? Also, what do `0003` and `0002` stand for? Are they intent labels or camera views?
2. `anno/hand_v` and `anno/hand_j`: what coordinate system are they in, world coordinates or camera coordinates?
3. `anno/hand_v/A01001__0003__0002__2021-09-26-20-02-08__0__6__1.pkl` and `anno/hand_v/A01001__0003__0002__2021-09-26-20-02-08__0__6__2.pkl`: what are the differences?

Can you please clarify the above questions?
Also, are you planning to release a README file explaining the annotation and file format?