
Exposure-Guided Event Representation. #3

Open
ShuangMa156 opened this issue Dec 11, 2023 · 5 comments

Comments

@ShuangMa156

After reading the paper, I understand that events are split and accumulated by timestamp, and this method looks like a voxel-grid approach for converting an event stream into a tensor.
First, I didn't understand the operation in the code at line 38 of codes/data/utils.py. Could you explain it to help me understand it further?
(screenshot: the code I don't understand)
Second, when I tried to visualize the resulting event tensor, I found that the output picture is black, and most values of the tensor E are zero.
(screenshot: contents of E)

@XiangZ-0
Owner

Hi, thanks for your questions. Your understanding is correct: our EGER is designed based on the voxel-grid representation. Regarding your first question, np.add.at is a commonly used NumPy function for generating event voxel grids, chosen mainly for faster computation. The basic pipeline is to first flatten the empty voxel grid E into a one-dimensional array via the ravel function, and then accumulate events into that array via np.add.at. You could try accumulating events on E directly in a loop instead of using np.add.at to compare the computation speeds.
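To illustrate the flatten-then-scatter-add pattern, here is a hedged sketch (not the exact code from codes/data/utils.py; the grid shape, timestamps, and polarities below are made up for illustration):

```python
import numpy as np

# Toy events: timestamps normalized to [0, 1], pixel coords, polarities.
num_bins, H, W = 4, 5, 5
ts = np.array([0.05, 0.30, 0.55, 0.80, 0.81])
xs = np.array([0, 1, 2, 3, 3])
ys = np.array([0, 1, 2, 3, 3])
ps = np.array([1.0, -1.0, 1.0, 1.0, 1.0])

# Empty voxel grid, kept 1-D so np.add.at can scatter-add in one call.
E = np.zeros(num_bins * H * W)

# Map each event to a flat index: bin * H*W + y * W + x.
bins = np.clip((ts * num_bins).astype(int), 0, num_bins - 1)
idx = bins * H * W + ys * W + xs

# np.add.at performs unbuffered in-place addition, so repeated indices
# (several events at the same bin/pixel) accumulate correctly, whereas
# E[idx] += ps would silently drop the duplicates.
np.add.at(E, idx, ps)

E = E.reshape(num_bins, H, W)
```

The last two events share the same bin and pixel, so their polarities add up to 2.0 in that cell; a plain fancy-indexed `+=` would have stored only 1.0 there.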

For the visualization of EGER, it is expected that most values are zero. In EGER, we divide the events according to the exposure interval of the target latent frame and use zero padding for the regions with no events, where the number of zero values helps inform the network about the blurriness of the outputs (please refer to Fig. 2a of the paper, where white blocks indicate zero values). If you just want to visualize events, I recommend using the visualization tools in this repository. Hope this helps :)
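A black output usually just means the sparse raw values were rendered directly. As a hedged sketch (hypothetical tensor values, not the repository's visualization tools), summing over the bin axis and normalizing to [0, 255] makes the few nonzero entries visible:

```python
import numpy as np

# Hypothetical EGER-style tensor: (bins, H, W), mostly zeros by design.
E = np.zeros((16, 32, 32))
E[0, 10, 10] = 3.0   # a few accumulated positive events
E[1, 20, 5] = -2.0   # and one negative accumulation

# Collapse the bin axis, then map [-vmax, vmax] to [0, 255] so zero
# maps to mid-gray instead of black.
img = E.sum(axis=0)
vmax = np.abs(img).max()
if vmax > 0:
    vis = ((img / vmax + 1.0) * 127.5).astype(np.uint8)
else:
    vis = np.full(img.shape, 128, dtype=np.uint8)
```

With this scaling, pixels with no events render as mid-gray (127) rather than black, and the positive/negative accumulations stand out as bright/dark spots.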

@ShuangMa156
Author

Thank you very much for your reply and for the information about the event representation; it has cleared up my confusion.

@ShuangMa156
Author

Hello, I found a problem with the event representation when I tried to visualize the EGER result. E1 contains events in the front bins and no events in the latter bins, while some front bins of E3 have no events and some back bins do, so the two complement each other. When I checked the code in codes/data/utils.py, I found that the 16 bin intervals are computed from the total exposure time, rather than from the time range divided into three segments. I am not sure whether I misunderstood the paper, so I would like to check with you.

@XiangZ-0
Owner

XiangZ-0 commented Jan 9, 2025

Hi, thanks for the question. Yes, your observation is correct, and it is consistent with Fig. 2a (case 2) of the paper. In EGER, I use the total exposure time for E1, E2, and E3 to keep the temporal scale of the event representation the same; otherwise, the event sequences might be stretched/squeezed when restoring the latent images at different timestamps. But I am not sure how important this is. I guess dividing the time range into three segments could also work if the model is retrained accordingly :)
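The timescale point can be sketched as follows (illustrative values only, not the repository's code): when the bin edges always span the total exposure time, events confined to an early sub-interval fill only the leading bins, and the trailing bins stay zero-padded.

```python
import numpy as np

# Hypothetical total exposure window and bin count.
t_start, t_end, num_bins = 0.0, 0.16, 16

# Events confined to the first half of the exposure (illustrative).
ts = np.array([0.015, 0.035, 0.075])

# Bins are computed over the *total* exposure time, shared by
# E1/E2/E3, so all three use the same temporal scale.
bins = ((ts - t_start) / (t_end - t_start) * num_bins).astype(int)
hist = np.bincount(bins, minlength=num_bins)

# hist has counts only in the leading half; the trailing bins
# receive no events and remain zero-padded.
```

If the bin edges were instead stretched to cover only each segment's own time range, the same events would be spread across all 16 bins, changing the effective temporal resolution per segment.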
Hope this helps!

@ShuangMa156
Author

Ok, thank you very much for your timely answer!
