Given 3 images of the same object under different lighting conditions, recover the surface normal at each pixel and the overall shape of the object. This technique is used in the GelSight touch sensor, but my approach is simpler.
Assuming a Lambertian surface that reflects equal radiance in all directions, the light direction is calibrated by finding the brightest spot on a sphere; I used Matlab to find this position automatically. The coordinate of the brightest spot gives the light vector L = (x, y, z). Dividing the vector by -z, we get:

L / (-z) = (-x/z, -y/z, -1) = (p, q, -1)

where we define the p, q values of the light L as p = -x/z and q = -y/z.
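A minimal sketch of this calibration step for one light, assuming a grayscale sphere image and a known sphere center (cx, cy) and radius r in pixels (the file name and all variable names here are hypothetical):

```matlab
% Locate the brightest pixel on the calibration sphere.
img = im2double(imread('sphere1.png'));   % hypothetical file name
[~, idx] = max(img(:));
[by, bx] = ind2sub(size(img), idx);

% Recover the 3-D point on the sphere at that pixel (z axis toward camera).
x = bx - cx;
y = by - cy;
z = sqrt(max(r^2 - x^2 - y^2, 0));

% The sphere normal at the brightest spot points at the light, so
% L is proportional to (x, y, z); dividing by -z gives (p, q, -1).
pL = -x / z;
qL = -y / z;
```

In practice the brightest spot may be a saturated blob rather than a single pixel, so taking the centroid of the top few intensities is more stable.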
Each light L is thus represented by its p and q. For a Lambertian surface, the dot product of the surface normal n and the light L is proportional to the image irradiance E:

E_i ∝ (n · L_i) / (|n| |L_i|)
Expanding with n = (p, q, -1) and L_i = (p_i, q_i, -1), we get:

E_i ∝ (1 + p*p_i + q*q_i) / (sqrt(1 + p^2 + q^2) * sqrt(1 + p_i^2 + q_i^2))

Taking the ratio of two images cancels the albedo and the sqrt(1 + p^2 + q^2) factor, leaving an equation in p and q only:

E1/E2 = ((1 + p*p1 + q*q1) * sqrt(1 + p2^2 + q2^2)) / ((1 + p*p2 + q*q2) * sqrt(1 + p1^2 + q1^2))

and similarly for E2/E3. In Matlab, I used scatteredInterpolant as the data structure to store the (E1/E2, E2/E3) -> (p, q) lookup table; p and q range from -10 to 10 with step size 0.1.
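A sketch of how such a lookup table can be built, assuming the three calibrated light directions (p1, q1), (p2, q2), (p3, q3) come from the calibration step above (the variable names are mine):

```matlab
% Candidate surface gradients on a regular grid.
[pG, qG] = meshgrid(-10:0.1:10);

% Lambertian shading under light Li = (pi, qi, -1); the albedo cancels
% in the ratios, so it is omitted here.
shade = @(pl, ql) (1 + pG.*pl + qG.*ql) ./ ...
        (sqrt(1 + pG.^2 + qG.^2) .* sqrt(1 + pl^2 + ql^2));
E1 = shade(p1, q1);
E2 = shade(p2, q2);
E3 = shade(p3, q3);

% Keep only grid points lit by all three lights, then index the table by
% the ratio pair (E1/E2, E2/E3); 'none' returns NaN outside the table.
ok  = E1 > 0 & E2 > 0 & E3 > 0;
r12 = E1 ./ E2;
r23 = E2 ./ E3;
Fp = scatteredInterpolant(r12(ok), r23(ok), pG(ok), 'linear', 'none');
Fq = scatteredInterpolant(r12(ok), r23(ok), qG(ok), 'linear', 'none');
```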
Since we have a lookup table giving the gradient (p, q) for a pair of ratios, we can easily build a gradient map the same size as the image: each pixel of the image corresponds to a (p, q) value pair. I call this map pqMap. Each (p, q) pair can also be converted to a surface normal.
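A sketch of the per-pixel lookup, assuming I1, I2, I3 are the three input images as double grayscale matrices of the same size and Fp, Fq are the interpolants built above:

```matlab
% Ratio images; max(..., eps) guards against division by zero in
% completely dark pixels (those land outside the table and return NaN).
r12 = I1 ./ max(I2, eps);
r23 = I2 ./ max(I3, eps);

pMap = Fp(r12, r23);                    % per-pixel gradient p
qMap = Fq(r12, r23);                    % per-pixel gradient q
pqMap = cat(3, pMap, qMap);
```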
I created a normal drawer function to plot the surface normals from a pqMap.
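A minimal sketch of such a drawer, assuming the pqMap from the previous step; drawing the (x, y) projection of each normal with quiver on a sparse grid is my choice, not necessarily the original implementation:

```matlab
% The normal at each pixel is proportional to (-p, -q, 1); draw its
% x-y projection every 'step' pixels so the arrows stay readable.
step = 10;
[h, w, ~] = size(pqMap);
[X, Y] = meshgrid(1:step:w, 1:step:h);
U = -pqMap(1:step:h, 1:step:w, 1);
V = -pqMap(1:step:h, 1:step:w, 2);
quiver(X, Y, U, V);
axis ij equal;                          % image-style coordinates
```

Some render results: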
I integrate the surface gradients along two directions (top-left -> bottom-right and bottom-right -> top-left) and average the two passes.
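A sketch of this two-pass integration, assuming the pMap and qMap from above hold the surface gradients dz/dx and dz/dy (NaN pixels are zeroed first so they do not poison the cumulative sums):

```matlab
pMap(~isfinite(pMap)) = 0;
qMap(~isfinite(qMap)) = 0;
[h, w] = size(pMap);

% Pass 1: top-left -> bottom-right (integrate q down the first column,
% then p along each row).
z1 = cumsum(pMap, 2) + repmat(cumsum(qMap(:, 1)), 1, w);

% Pass 2: bottom-right -> top-left. Flip both maps, negate the gradients
% (we now step in the -x and -y directions), integrate, and flip back.
pR = -flip(flip(pMap, 1), 2);
qR = -flip(flip(qMap, 1), 2);
z2 = cumsum(pR, 2) + repmat(cumsum(qR(:, 1)), 1, w);
z2 = flip(flip(z2, 1), 2);

% Zero both passes at the same corner, then average to reduce the
% path dependence of the naive integration.
Z = (z1 - z1(1, 1) + z2 - z2(1, 1)) / 2;
surf(Z, 'EdgeColor', 'none');           % render the recovered height map
```

Some results are shown below: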
The sample images were taken with a camera and the surface is not perfectly Lambertian, so specular reflection (highlights) introduces error in the calibration stage. Also, if a pixel is black in all 3 input images, the equation in Step 2 has no solution, so we get errors in some dark areas. Carefully placing the light sources and using matte surface materials gives better results.