Exposure-Guided Event Representation. #3
After reading the paper, I understand that events are split and accumulated by timestamp, and this method looks like a voxel-grid approach for converting an event stream into a tensor.

First, I didn't understand the operation of the code at line 38 of codes/data/utils.py. Can you explain it to help me understand it further?

Second, when I tried to visualize the resulting event tensor, the output picture was black, and most values of the tensor E were zero.
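For context, a generic voxel-grid accumulation usually looks something like the sketch below. This is a minimal illustration, not the repository's actual codes/data/utils.py; the function names, array shapes, and the normalization helper are all assumptions. The helper also shows why a raw dump of the tensor can look black: per-bin values are small signed counts, so they need rescaling before display.

```python
# A minimal, generic voxel-grid sketch; NOT the repository's actual code.
# All names and shapes here are assumptions for illustration only.
import numpy as np

def events_to_voxel_grid(ts, xs, ys, ps, num_bins, height, width):
    """Accumulate events (timestamp, x, y, polarity) into a (num_bins, H, W) tensor."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    # Map each timestamp to a temporal bin over the full time range.
    bins = ((ts - t0) / max(t1 - t0, 1e-9) * num_bins).astype(int)
    bins = np.clip(bins, 0, num_bins - 1)
    for b, x, y, p in zip(bins, xs, ys, ps):
        grid[b, y, x] += 1.0 if p > 0 else -1.0  # signed event count
    return grid

def to_image(channel):
    """Rescale one bin to [0, 255] for viewing; an unnormalized dump of the
    small signed counts looks black even when events are present."""
    lo, hi = channel.min(), channel.max()
    return ((channel - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
```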
Hi, thanks for your questions. Your understanding is correct: our EGER is designed based on the voxel-grid representation. Regarding the visualization of EGER, it is correct that most values are zeros. In EGER, we divide the events according to the exposure interval of the target latent frame and use zero padding for the regions with no events; the number of zero values helps inform the network about the blurriness of the outputs (please refer to Fig. 2a of the paper, where white blocks indicate zero values). If you just want to visualize events, I recommend using the visualization tools in this repository. Hope this helps :)
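To make the zero-padding idea concrete, here is a hedged sketch of how one EGER slice could be assembled under this description: the temporal bins span the full exposure, but only events inside the target latent frame's sub-interval are written, so the remaining bins stay zero (the white blocks of Fig. 2a). The function name, arguments, and exact binning are assumptions, not the repository's code.

```python
# Hypothetical sketch of one EGER slice; not the repository's implementation.
import numpy as np

def eger_slice(ts, xs, ys, ps, exp_start, exp_end, sub_start, sub_end,
               num_bins, height, width):
    """Bins span the FULL exposure [exp_start, exp_end]; only events inside
    the target latent frame's sub-interval [sub_start, sub_end) are
    accumulated, so the remaining bins stay zero-padded."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    keep = (ts >= sub_start) & (ts < sub_end)
    bins = ((ts[keep] - exp_start) / (exp_end - exp_start) * num_bins).astype(int)
    bins = np.clip(bins, 0, num_bins - 1)
    for b, x, y, p in zip(bins, xs[keep], ys[keep], ps[keep]):
        grid[b, y, x] += 1.0 if p > 0 else -1.0
    return grid
```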
Thank you very much for your reply and for the information about the event representation; it has cleared up my confusion.
Hello, I found a problem with the event representation when I tried to visualize the EGER result. E1 contains events in the front bins and no events in the latter bins. Some bins at the front of E3 have no events while some bins at the back do, and the two complement each other. When I checked the code in codes/data/utils.py, I found that the 16 bin split intervals are calculated over the total exposure time, not over the time range divided into three segments. I don't know whether I have misunderstood the meaning expressed in the paper, so I would like to check with you.
Hi, thanks for the question. Yes, your observation is correct, and it is consistent with Fig. 2a (case 2) of the paper. In EGER, I used the total exposure time for E1, E2, and E3 to ensure the same temporal scale for the event representation; otherwise, the event sequences might be stretched or squeezed when restoring the latent images at different timestamps. But I am not sure how important this is. I guess dividing the time range into three segments could also work if the model is retrained accordingly :)
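A small worked example of the two binning choices discussed above (the concrete numbers are made up for illustration): with 16 bins spanning the total exposure, an event inside E1's sub-interval can only land in the front bins, which matches the observation that E1's latter bins are empty; binning each segment separately would fill all 16 bins but change the temporal scale per segment.

```python
# Hypothetical numbers for illustration; not taken from the repository.
num_bins, exp_start, exp_end = 16, 0.0, 1.0          # total exposure time
seg_start, seg_end = 0.0, 1.0 / 3.0                  # E1's sub-interval
t = 0.2                                              # an event inside E1

# As implemented: bins cover the whole exposure, so E1's events can only
# land in the front bins (indices 0..5 here); the rest remain zero.
bin_total = int((t - exp_start) / (exp_end - exp_start) * num_bins)    # -> 3

# The alternative reading: bins cover only E1's own segment, so all 16
# bins would be populated, but the temporal scale differs per segment.
bin_segment = int((t - seg_start) / (seg_end - seg_start) * num_bins)  # -> 9
```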
OK, thank you very much for your timely answer!