This repo contains an implementation of the motion contrast 3D scanning method proposed by Matsuda et al. Check out the project page here and the project video here.
The data in the experiments folder was collected with a DAVIS346 camera, using the drivers developed by the Robotics and Perception Group.
An object is placed on a platform, and a moving light source generated by `generator.py` is projected onto it. The videos folder contains different light sources generated with this file. The gif on the left shows the experimental setup, and the gif on the right shows the events visualised by accumulating them into frames at a fixed rate.
![Experimental setup]() | ![Accumulated event frames]()
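The accumulation step shown in the right-hand gif can be sketched as follows. This is an illustrative version only: the `(t, x, y, polarity)` event layout, the frame interval, and the helper name `events_to_frames` are assumptions, not the repo's actual format.

```python
import numpy as np

def events_to_frames(events, cam_dims=(260, 346), frame_dt=0.03):
    """Accumulate (t, x, y, polarity) events into signed count frames.

    `events` is an (N, 4) array sorted by timestamp; the column layout and
    the default 260x346 DAVIS346 resolution are assumptions for illustration.
    Each frame covers `frame_dt` seconds of events.
    """
    t0, t1 = events[0, 0], events[-1, 0]
    n_frames = int(np.ceil((t1 - t0) / frame_dt))
    frames = np.zeros((n_frames, *cam_dims), dtype=np.int32)
    # Bin each event into its frame index, clamping the last timestamp.
    idx = np.minimum(((events[:, 0] - t0) / frame_dt).astype(int), n_frames - 1)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    pol = np.where(events[:, 3] > 0, 1, -1)  # ON -> +1, OFF -> -1
    np.add.at(frames, (idx, y, x), pol)      # signed event count per pixel
    return frames
```

Normalising each frame to an image range then gives the visualisation in the gif.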
The event camera, placed in a stereo configuration with the light projector, records the bending of the incident light over the object's surface. This property is used to calculate the depth of the scene, as described in more detail in the paper.
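For intuition, the depth recovery in a rectified camera-projector stereo pair reduces to triangulation from disparity: an event at camera column `x_cam` fired while projector column `x_proj` was illuminated yields depth `Z = f * B / (x_cam - x_proj)`. This is a minimal sketch under that rectified-geometry assumption; the function name and parameters are illustrative, and the paper's actual calibration is more involved.

```python
def depth_from_disparity(x_cam, x_proj, focal_px, baseline_m):
    """Depth (metres) for a rectified camera-projector pair.

    x_cam     : camera pixel column where the event fired
    x_proj    : projector column illuminated at that instant
    focal_px  : focal length in pixels
    baseline_m: camera-projector baseline in metres
    All names and values here are illustrative, not taken from the repo.
    """
    disparity = x_cam - x_proj
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or miscalibration")
    return focal_px * baseline_m / disparity
```

For example, with a 400 px focal length and a 10 cm baseline, a 20-pixel disparity corresponds to a depth of 2 m.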
To run the code, execute the following command:

```
python main.py
```
To run on a different object, change the `obj` and `offset` variables in the `main_line_scanning()` function in the `main.py` file. If you are using a camera other than the DAVIS346, the sensor dimensions can be changed through the `cam_dims` variable. The output is a `.xyz` file, which can be imported into MeshLab and meshed to obtain 3D results.
Below are some results of the implementation.
![]() | ![]()
![]() | ![]()
![]() | ![]()
To understand how events are generated in an event-based camera, please watch the video at this link.
Below are links to different types of event cameras: