Wednesday 25 April 2012

Geometry Rendering

The sensing techniques described above produce a collection of point samples
on the surface of objects. The point cloud is usually converted into a triangle
mesh so that it can be drawn with standard mesh rendering techniques. Laser
scanning, stereo vision, and spacetime stereo can additionally capture images
of the scene, which can be used to assign color to the point samples or to
texture-map the resulting mesh. The simplest approach is to triangulate the
points by connecting each point to its neighbors as determined by the sensor
layout. For example, a point corresponding to a pixel in the camera of a
structured light, stereo vision, or shaped light pulse system would be
connected to the points corresponding to the neighboring pixels.
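
To make the sensor-layout triangulation concrete, here is a minimal sketch in
Python. It assumes the scan is stored as an H x W depth image whose pixels
line up one-to-one with the point samples; the function name and the hole
threshold are illustrative, not from the original post.

import numpy as np

def triangulate_depth_map(depth, min_depth=0.0):
    """Connect each valid pixel to its grid neighbors, forming up to two
    triangles per 2x2 block of pixels. `depth` is an H x W array; pixels
    with depth <= min_depth are treated as holes and left unconnected."""
    h, w = depth.shape
    index = np.arange(h * w).reshape(h, w)  # one vertex id per pixel
    valid = depth > min_depth
    triangles = []
    for y in range(h - 1):
        for x in range(w - 1):
            # Corners of the 2x2 block:  a b
            #                            c d
            a, b = index[y, x], index[y, x + 1]
            c, d = index[y + 1, x], index[y + 1, x + 1]
            if valid[y, x] and valid[y + 1, x] and valid[y, x + 1]:
                triangles.append((a, c, b))
            if valid[y, x + 1] and valid[y + 1, x] and valid[y + 1, x + 1]:
                triangles.append((b, c, d))
    return np.array(triangles)

The vertex array for the mesh is simply the point cloud flattened in the same
row-major order as the pixel grid. A common refinement, not shown here, is to
also skip triangles that span a large depth jump so the mesh does not bridge
occlusion boundaries.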

More advanced procedures can align and merge multiple depth maps of the same
object and fill holes, producing a model with more complete geometry. A single
depth map only captures the geometry of one side and has holes wherever the
surface is occluded. Multiple depth maps from different viewpoints reduce
occlusion, and the merging algorithm can also fill the remaining holes with a
plausible guess at the missing surface. Unfortunately, most of these
techniques do not work with deformable objects, because aligning the depth
maps depends on the object being rigid. A recent advance in geometry
processing can extract correspondences and merge point clouds of deforming
objects, but the size of the dataset it can process is limited.
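
The post does not name the alignment step, but rigid registration of depth
maps is commonly done with the Iterated Closest Point (ICP) algorithm. Below
is a sketch of one ICP iteration, assuming both scans have already been
back-projected into NumPy arrays of 3-D points; a real implementation would
use a k-d tree instead of the brute-force nearest-neighbor search shown here.

import numpy as np

def icp_step(source, target):
    """One point-to-point ICP iteration: pair each source point with its
    nearest target point, then solve for the rigid transform (R, t) that
    minimizes the squared pairing distances via the SVD-based Kabsch method."""
    # Brute-force nearest neighbors (O(N*M) memory, fine for a sketch).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]
    # Center both point sets and solve for the rotation by SVD.
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t

Iterating this step until the pairings stop changing aligns one scan to
another, which is exactly where the rigidity assumption enters: ICP's
closest-point pairings are only meaningful if the object has not deformed
between scans.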
