A typical LiDAR scanner will provide points acquired on a uniform spherical grid. The result is a point cloud with known 4-connectivity (i.e., we know which point is above, below, left of, and right of the current point). Is it reasonable to store this data in a 2D vector image where the value of each pixel is the 3D coordinates of a point? It seems more intuitive to store it in an actual mesh data structure (itk::Mesh, possibly itk::QuadEdgeMesh?), but then it doesn't seem possible to traverse it in rows/columns the way the vector image approach allows, as well as to use existing methods for searching within a 3D radius, etc.
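To make the vector image idea concrete, here is a minimal sketch of what I have in mind, assuming an itk::Image with an itk::Vector<float,3> pixel type (the scan dimensions and point values are placeholders; a real PTX header/body would supply them):

#include "itkImage.h"
#include "itkVector.h"
#include "itkImageRegionIteratorWithIndex.h"
#include <iostream>

int main()
{
  // One 3D point per pixel; the 2D index (column, row) encodes the
  // scanner's azimuth/elevation grid position.
  using PointPixelType = itk::Vector<float, 3>;
  using ScanImageType  = itk::Image<PointPixelType, 2>;

  // Hypothetical scan size -- the PTX header would provide these.
  ScanImageType::SizeType size;
  size[0] = 1024; // columns (azimuth samples)
  size[1] = 512;  // rows (elevation samples)

  ScanImageType::RegionType region;
  region.SetSize(size);

  ScanImageType::Pointer scan = ScanImageType::New();
  scan->SetRegions(region);
  scan->Allocate();

  // Fill with dummy coordinates; in practice these would be read
  // from the PTX file in row/column order.
  itk::ImageRegionIteratorWithIndex<ScanImageType> it(scan, region);
  for (; !it.IsAtEnd(); ++it)
  {
    PointPixelType p;
    p[0] = static_cast<float>(it.GetIndex()[0]); // x
    p[1] = static_cast<float>(it.GetIndex()[1]); // y
    p[2] = 0.0f;                                 // z
    it.Set(p);
  }

  // Row/column traversal and grid-neighbor lookup come for free:
  ScanImageType::IndexType idx;
  idx[0] = 10; // column
  idx[1] = 20; // row
  PointPixelType center = scan->GetPixel(idx);

  ScanImageType::IndexType left = idx;
  left[0] -= 1;
  PointPixelType leftNeighbor = scan->GetPixel(left);

  std::cout << "center: " << center << "  left: " << leftNeighbor << std::endl;
  return 0;
}

With this layout the 4-connectivity is implicit in the 2D index, but a query like "all points within a 3D radius" would presumably still need a separate structure built from the same points, which is the heart of the trade-off I'm asking about.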
Can anyone comment on this trade-off? Or maybe someone has already written code to load a PTX file into an ITK data structure?
Thanks,

David