The problem background:

Given:
1) a set of n (n ~= 200) 3D Cartesian coordinates (x, y, z) obtained by touch-sensor probing of the surface of a human face (the subject) - the fixed point space
2) a 3D CT scan of the subject's head - the moving image space
Required:
the transformation that maps coordinates from the fixed point space to the moving image space, so that the tip of an instrument/probe can be visualized in real time within a volume rendering of the image.
A cost image has been generated by scaling the gradient magnitude of a binary segmentation of the subject's head so that large gradient-magnitude values lie on the skin surface. I am thinking of changing this to use a Laplacian filter or a medial axis transform of some kind that would assign a unique value to the surface interface in the image space.
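For concreteness, the current preprocessing is roughly along these lines (a sketch only; file names and pixel types are placeholders):

// Sketch of the current cost-image preprocessing: gradient magnitude of
// the binary head segmentation, rescaled so that the largest values sit
// on the skin surface. File names and pixel types are placeholders.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkGradientMagnitudeImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"

typedef itk::Image< unsigned char, 3 >  MaskImageType;   // binary segmentation
typedef itk::Image< float, 3 >          CostImageType;   // cost image fed to the metric

int main()
{
  typedef itk::ImageFileReader< MaskImageType > ReaderType;
  ReaderType::Pointer maskReader = ReaderType::New();
  maskReader->SetFileName( "head_segmentation.mha" );

  // Gradient magnitude is non-zero only at the mask boundary, i.e. the skin surface.
  typedef itk::GradientMagnitudeImageFilter< MaskImageType, CostImageType > GradMagType;
  GradMagType::Pointer gradMag = GradMagType::New();
  gradMag->SetInput( maskReader->GetOutput() );

  // Scale so that surface voxels carry the maximal cost value.
  typedef itk::RescaleIntensityImageFilter< CostImageType, CostImageType > RescaleType;
  RescaleType::Pointer rescale = RescaleType::New();
  rescale->SetInput( gradMag->GetOutput() );
  rescale->SetOutputMinimum( 0.0 );
  rescale->SetOutputMaximum( 100.0 );

  typedef itk::ImageFileWriter< CostImageType > WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput( rescale->GetOutput() );
  writer->SetFileName( "cost_image.mha" );
  writer->Update();

  return 0;
}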
ITK classes:
itkPointSet - filled by reading in a text file of points; a constant value is associated with each point of the point set
itkImageFileReader - reads in the cost image
itkMeanReciprocalSquareDifferencePointSetToImageMetric - the registration metric
itkLinearInterpolateImageFunction - the registration interpolator
itkVersorRigid3DTransform - the registration transform
itkVersorRigid3DTransformOptimizer - the registration optimizer
itkPointSetToImageRegistrationMethod - the registration method
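These are wired together in the usual point-set-to-image fashion, roughly as follows (a sketch; the point-file format, file names, optimizer scales and the constant point value of 100 are placeholders rather than my exact code):

// Sketch of how the registration components are wired together.
#include <fstream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkPointSet.h"
#include "itkPointSetToImageRegistrationMethod.h"
#include "itkMeanReciprocalSquareDifferencePointSetToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkVersorRigid3DTransform.h"
#include "itkVersorRigid3DTransformOptimizer.h"

typedef itk::Image< float, 3 >    CostImageType;
typedef itk::PointSet< float, 3 > PointSetType;

int main()
{
  // Fixed point set: probed (x, y, z) coordinates, one point per line,
  // with the same constant value attached to every point.
  PointSetType::Pointer fixedPoints = PointSetType::New();
  std::ifstream pointFile( "probed_points.txt" );
  PointSetType::PointType p;
  unsigned int id = 0;
  while( pointFile >> p[0] >> p[1] >> p[2] )
    {
    fixedPoints->SetPoint( id, p );
    fixedPoints->SetPointData( id, 100.0f );
    ++id;
    }

  // Moving image: the cost image.
  typedef itk::ImageFileReader< CostImageType > ReaderType;
  ReaderType::Pointer costReader = ReaderType::New();
  costReader->SetFileName( "cost_image.mha" );
  costReader->Update();

  // Registration components.
  typedef itk::VersorRigid3DTransform< double >                        TransformType;
  typedef itk::VersorRigid3DTransformOptimizer                         OptimizerType;
  typedef itk::LinearInterpolateImageFunction< CostImageType, double > InterpolatorType;
  typedef itk::MeanReciprocalSquareDifferencePointSetToImageMetric<
                   PointSetType, CostImageType >                       MetricType;
  typedef itk::PointSetToImageRegistrationMethod<
                   PointSetType, CostImageType >                       RegistrationType;

  TransformType::Pointer    transform    = TransformType::New();
  OptimizerType::Pointer    optimizer    = OptimizerType::New();
  InterpolatorType::Pointer interpolator = InterpolatorType::New();
  MetricType::Pointer       metric       = MetricType::New();
  RegistrationType::Pointer registration = RegistrationType::New();

  registration->SetMetric( metric );
  registration->SetOptimizer( optimizer );
  registration->SetTransform( transform );
  registration->SetInterpolator( interpolator );
  registration->SetFixedPointSet( fixedPoints );
  registration->SetMovingImage( costReader->GetOutput() );

  // Versor components need different step scales than the translation
  // (the 1/1000 factor is a typical guess, not a tuned value).
  OptimizerType::ScalesType scales( transform->GetNumberOfParameters() );
  scales.Fill( 1.0 );
  scales[3] = scales[4] = scales[5] = 1.0 / 1000.0;
  optimizer->SetScales( scales );

  // The transform gets its starting values from the custom initializer
  // (see the initialization discussion below) before registration starts.
  registration->SetInitialTransformParameters( transform->GetParameters() );
  registration->Update();   // StartRegistration() on older ITK versions

  return 0;
}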
I created two auxiliary classes (attached):
1) itkPointSetMomentsCalculator - given a point set, calculates the center of mass (assuming each point has unit point mass) and the principal axes in physical coordinates. The class is based on itkImageMomentsCalculator.
2) itkCustomTransformInitializer - given a fixed point set and a moving image, initializes an internal itkVersorRigid3DTransform via its SetCenter, SetTranslation and SetMatrix methods, based on the calculations performed by an internal itkPointSetMomentsCalculator and an internal itkImageMomentsCalculator.
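In essence, the point-set moments calculation boils down to the following (a sketch with illustrative names, not the attached code verbatim):

// Center of mass as the mean of the points (unit mass each) and principal
// axes as the eigenvectors of the second-order central moments.
#include "itkPointSet.h"
#include "vnl/vnl_matrix.h"
#include "vnl/algo/vnl_symmetric_eigensystem.h"

typedef itk::PointSet< float, 3 > PointSetType;

void ComputePointSetMoments( PointSetType::Pointer points,
                             double centerOfMass[3],
                             vnl_matrix< double > & principalAxes )
{
  const unsigned int n = points->GetNumberOfPoints();

  // First-order moments: center of mass = mean of the point coordinates.
  centerOfMass[0] = centerOfMass[1] = centerOfMass[2] = 0.0;
  for( unsigned int i = 0; i < n; ++i )
    {
    PointSetType::PointType p;
    points->GetPoint( i, &p );
    for( unsigned int d = 0; d < 3; ++d ) { centerOfMass[d] += p[d]; }
    }
  for( unsigned int d = 0; d < 3; ++d ) { centerOfMass[d] /= n; }

  // Second-order central moments (scatter matrix).
  vnl_matrix< double > scatter( 3, 3, 0.0 );
  for( unsigned int i = 0; i < n; ++i )
    {
    PointSetType::PointType p;
    points->GetPoint( i, &p );
    for( unsigned int r = 0; r < 3; ++r )
      {
      for( unsigned int c = 0; c < 3; ++c )
        {
        scatter( r, c ) += ( p[r] - centerOfMass[r] ) * ( p[c] - centerOfMass[c] );
        }
      }
    }

  // Principal axes as rows, mirroring itkImageMomentsCalculator. The sign
  // and ordering of the eigenvectors still need to be made consistent
  // between the point set and the image before they are usable.
  vnl_symmetric_eigensystem< double > eigen( scatter );
  principalAxes = eigen.V.transpose();
}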
The registration pipeline's transform can then be initialized by, for example:
registration_transform->SetCenter( initializer->GetTransform()->GetCenter() );
and so on.
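My current guess at what should happen inside the initializer is sketched below; the fixed-side center and axes are taken from the point-set moments calculation, and how exactly the center, translation and matrix should be combined is the part I am unsure about:

// Rough sketch of the initialization I am attempting inside
// itkCustomTransformInitializer. The row convention for the principal axes
// follows itkImageMomentsCalculator.
#include "itkImage.h"
#include "itkMatrix.h"
#include "itkImageMomentsCalculator.h"
#include "itkVersorRigid3DTransform.h"

typedef itk::Image< float, 3 >                CostImageType;
typedef itk::VersorRigid3DTransform< double > TransformType;
typedef itk::Matrix< double, 3, 3 >           AxesMatrixType;

void InitializeTransform( const double           fixedCenter[3],
                          const AxesMatrixType & fixedAxes,
                          CostImageType::Pointer movingImage,
                          TransformType::Pointer transform )
{
  // Moments of the moving image (cost image or binary head segmentation).
  typedef itk::ImageMomentsCalculator< CostImageType > ImageMomentsType;
  ImageMomentsType::Pointer imageMoments = ImageMomentsType::New();
  imageMoments->SetImage( movingImage );
  imageMoments->Compute();

  ImageMomentsType::VectorType movingCenter = imageMoments->GetCenterOfGravity();
  AxesMatrixType               movingAxes( imageMoments->GetPrincipalAxes() );

  // Center of rotation: center of mass of the fixed point set.
  TransformType::InputPointType center;
  for( unsigned int d = 0; d < 3; ++d ) { center[d] = fixedCenter[d]; }
  transform->SetCenter( center );

  // Translation: with this center the transform is y = R*(x - c) + c + t,
  // so t = movingCenter - fixedCenter maps one center of mass onto the other.
  TransformType::OutputVectorType translation;
  for( unsigned int d = 0; d < 3; ++d )
    { translation[d] = movingCenter[d] - fixedCenter[d]; }
  transform->SetTranslation( translation );

  // Rotation: carry fixed-space directions into moving-space directions by
  // going through the two principal-axes frames, R = movingAxes^T * fixedAxes
  // (eigenvector sign/ordering ambiguities still have to be resolved so that
  // the matrix is a proper rotation).
  AxesMatrixType movingAxesT( movingAxes.GetTranspose() );
  AxesMatrixType rotation = movingAxesT * fixedAxes;
  transform->SetMatrix( rotation );
}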
Can someone explain how I should set up the transformations internally within the initializer class so that the registration objective can be fulfilled? There are next to no examples or substantial documentation on using point-set-to-image registration in a 3D context.

Best regards,
Dean