[Insight-users] Speed of Mutual Information calculation

Luis Ibanez luis.ibanez at kitware.com
Thu May 18 08:47:51 EDT 2006



Hi Markus,


A claim that one Metric is faster than another is
useless unless it is supported by experimental data
and by a description of the reproducible methods used
to gather that data.


In particular, if we seriously want to compare
two implementations of the Mutual Information
metric such as the Mattes and Viola-Wells, the
comparison *MUST* include details about:


   1) the number of pixels in both images
   2) the number of histogram bins
   3) the real entropy of the images
   4) the real joint entropy of the 2 images
   5) the quality of the MI estimation



It is trivial to make one metric compute faster
by doing something like reducing the number of
samples.


Even with this information at hand, it is essential
to verify whether the settings of one metric are
equivalent to the settings of the other metric.
In other words, if you intend to compare speed, you
must first place the Metrics in settings where they
provide a similar quality of estimation of Mutual
Information.  Otherwise it is also trivial to make a
Metric run very fast by computing a poor estimation
of MI. That is like claiming that car X is faster
than car Y after making car X race to a different
finish line than car Y.
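
For example, a fair timing test would at least use matched
sampling in both metrics. The sketch below assumes a 3D short
ImageType and purely illustrative parameter values (the helper
function name is mine, not part of ITK); the actual values have
to be tuned until both metrics estimate MI with similar quality:

    #include "itkImage.h"
    #include "itkMattesMutualInformationImageToImageMetric.h"
    #include "itkMutualInformationImageToImageMetric.h"   // Viola-Wells

    typedef itk::Image< short, 3 >  ImageType;

    typedef itk::MattesMutualInformationImageToImageMetric<
                        ImageType, ImageType >   MattesMetricType;
    typedef itk::MutualInformationImageToImageMetric<
                        ImageType, ImageType >   ViolaWellsMetricType;

    void ConfigureMetricsForComparison(
           MattesMetricType     * mattes,
           ViolaWellsMetricType * viola )
    {
      // Same number of samples in both metrics, otherwise the timing
      // comparison is meaningless.
      const unsigned long numberOfSamples = 60000;
      mattes->SetNumberOfSpatialSamples( numberOfSamples );
      viola->SetNumberOfSpatialSamples( numberOfSamples );

      // The density estimators still differ: Mattes uses histogram
      // bins while Viola-Wells uses Gaussian Parzen kernels (on
      // normalized intensities), so these settings have to be tuned
      // until both provide a similar quality of MI estimation before
      // any speed comparison makes sense.
      mattes->SetNumberOfHistogramBins( 50 );
      viola->SetFixedImageStandardDeviation( 0.4 );
      viola->SetMovingImageStandardDeviation( 0.4 );
    }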


It is unfortunate that so many among us are still
suffering from "Paperitis", the mental disease of
claiming that "method A is better than method B"
without offering any experimental (or mathematical)
support for such claims.  The void practice of

            "Publish or Perish"

has made "Paperitis" an almost epidemic disease.


.....


Fortunately the cure for "Paperitis" is quite simple:


           "Ask for a proof"


Where is the data ?
Where is the experiment ?
Can I repeat it ?
Does it work only in certain conditions ?


===



About your question: the metrics that have the word
"Histogram" in their names actually compute a histogram
internally, and once they have that histogram, they use
it to compute the entropy of each image as well as the
joint entropy.

There is no particular reason why we couldn't modify
the Viola-Wells and the Mattes implementations of MI
so that they provide a Normalized Mutual Information
instead of the standard MI.


It comes down to replacing the final lines of the
GetValue() methods in order to compute:

        NMI = ( Ha + Hb ) / Hab

instead of the current:

        MI = Ha + Hb - Hab



This could be done by reorganizing the summation in
the file:

itkMattesMutualInformationImageToImageMetric.txx,

around lines 815-838.


Note that the metric here is computing Ha, Hb and Hab
in the same while loop.
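
Schematically, once Ha, Hb and Hab have been accumulated in that
loop, the modification reduces to the very last expression. This
is only a sketch of the idea, not the literal code from the .txx
file:

    // Tail of GetValue(), assuming Ha, Hb and Hab were accumulated
    // in the loop above. MeasureType is the metric's value type
    // (typically double). Keep in mind that the existing code may
    // negate the value so that the optimizer can minimize it --
    // check the surrounding lines.

    // Current final expression (plain MI):
    //
    //   const MeasureType value = Ha + Hb - Hab;

    // Normalized Mutual Information instead:
    const MeasureType value = ( Ha + Hb ) / Hab;
    return value;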


In the case of the Viola-Wells implementation, you
could convert the metric into a Normalized Mutual
Information by changing lines 270-272 in the file

   itkMutualInformationImageToImageMetric.txx



From the practical point of view, you can always
convert MI into NMI using the relationship


            NMI =  MI / Hab  + 1
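
This follows directly from the two expressions above: since

        MI = Ha + Hb - Hab

dividing both sides by Hab gives

        MI / Hab = ( Ha + Hb ) / Hab - 1 = NMI - 1

which rearranges to NMI = MI / Hab + 1.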


In practice, the main advantage of using NMI is that
it is easier to interpret: its values fall in a fixed
range (between 1 and 2 for the definition above),
while MI is measured in bits (units of information),
so its magnitude depends on the images being registered.


One of the cases where it is preferable to use the
Histogram-based metrics is when you have images with
unusual distributions of intensities, and you find it
worthwhile to understand which parts of the histogram
are driving the registration. This behavior is easier
to study with a Histogram-based metric because at every
iteration you can look at the histogram and compare it
with the one from the previous iteration of the
optimizer.  The bins that are changing in the histogram
are the ones that drive the registration.
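
For example (just a sketch: how you extract the bin frequencies
from the metric depends on the particular Histogram metric class,
and may require adding a small accessor), comparing two snapshots
of the frequencies taken at consecutive iterations could look like:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // previous[i] and current[i] are the frequencies of bin i at two
    // consecutive iterations of the optimizer (flattened histogram).
    void ReportChangingBins( const std::vector< double > & previous,
                             const std::vector< double > & current,
                             double threshold )
    {
      for( unsigned int i = 0; i < current.size(); ++i )
        {
        const double change = std::fabs( current[i] - previous[i] );
        if( change > threshold )
          {
          std::printf( "bin %u changed by %g\n", i, change );
          }
        }
    }

The bins reported by such a comparison are the ones whose
contribution to the entropies (and therefore to the metric value)
is changing, that is, the ones driving the registration at that
point.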



    Regards,



       Luis



===============================
m.weigert at fz-juelich.de wrote:
> Hi Karthik,
> 
> my impression is that Mattes is faster than the Viola-Wells
> implementation, but I wouldn't rely on it.
> 
> But another question:
> What is the difference between an image-to-image metric and a histogram
> image-to-image metric, and why is there a
> NormalizedMutualInformationHistogramImageToImageMetric but not a
> NormalizedMutualInformationImageToImageMetric in ITK?
> I suppose the histogram image-to-image metric compares the histograms in
> a certain way to evaluate the similarity between two images.
> So under which circumstances or for which problems would you prefer to
> use a HistogramImageToImageMetric instead of a common ImageToImageMetric?
> 
> 
> Regards,
> Markus
> 
> 
> 
> 
> ----- Original Message -----
> From: Karthik Krishnan <Karthik.Krishnan at kitware.com>
> Date: Tuesday, May 16, 2006 5:18 pm
> Subject: Re: [Insight-users] Speed of Mutual Information calculation
> 
> 
>>Markus Weigert wrote:
>>
>>
>>>Hi Luis,
>>>
>>>thanks for your response.
>>>I use the Viola - Wells implementation.
>>>Strangely, the Mattes implementation is much faster.
>>
>>That is strange. Is this for the same number of spatial samples in
>>both Viola-Wells and Mattes, and for the same number of iterations in
>>both cases? I would think *one iteration* of Mattes should be slower
>>than *one iteration* of Viola-Wells because of the BSpline-based Parzen
>>windowing.
>>
>>>I compiled for release with dbg. information on VC6
>>>and used GradientDescentOptimizer, not 
>>>RegularStepGradientDescentOptimizer.
>>>I plot the progress from a Command Observer and currently don't use
>>>multiresolution (only on the original resolution).
>>>
>>>Cheers,
>>>Markus
>>>
>>>
>>>----- Original Message ----- From: "Luis Ibanez" 
>>><luis.ibanez at kitware.com>
>>>To: "Markus Weigert" <m.weigert at fz-juelich.de>
>>>Cc: <insight-users at itk.org>
>>>Sent: Monday, May 15, 2006 4:40 PM
>>>Subject: Re: [Insight-users] Speed of Mutual Information calculation
>>>
>>>
>>>
>>>>Hi Markus,
>>>>
>>>>Nope, this is not the common time for this size of images.
>>>>
>>>>This type of registration should take about 2 minutes on a
>>>>modern standard computer.
>>>>
>>>>
>>>>Some questions:
>>>>
>>>>
>>>>1) Are you compiling your application for "Release" ?
>>>>
>>>>2) Are you using multi-resolution ?
>>>>
>>>>3) Are you using the GradientDescent or
>>>>   the RegularStepGradientDescent optimizer ?
>>>>
>>>>4) Are you plotting the progress of the optimizer
>>>>   from a connected Command Observer ?
>>>>
>>>>5) Which one of the 5 ITK implementations of
>>>>   Mutual Information Metric  are you using  ?
>>>>
>>>>
>>>>
>>>>It is very likely that you are letting the optimizer run
>>>>for a lot of unnecessary iterations.
>>>>
>>>>Have you measured the time needed to perform one iteration ?
>>>>This will indicate whether the problem is too many iterations
>>>>or metric evaluations that are too slow.
>>>>
>>>>
>>>>
>>>>The best way to figure out the problem is to analyze the
>>>>trace provided by the Command Observer.
>>>>
>>>>Given that you are testing with a 3D translation transform,
>>>>you are in the lucky situation where you can actually plot
>>>>the path of the optimizer in the parametric space.
>>>>
>>>>You could use a tool such as GNUplot, in order to see this
>>>>path in 3D.  Other easy options are a VTK script, or saving
>>>>the trace in a .vtk file and loading it into ParaView.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>=====================
>>>>Markus Weigert wrote:
>>>>
>>>>
>>>>>Dear insight users,
>>>>> I am currently trying to register two 3D images (CT and MR)
>>>>>using mutual information as the metric.
>>>>>The images have a size of approx. 255 * 290 * 75 voxels each
>>>>>(the MR perhaps even more).
>>>>>Although I use a very simple transformation (translation) and a
>>>>>gradient descent optimizer, one iteration of the optimizer takes
>>>>>more than 1.5 h.
>>>>> Is this a common time for images of this size???
>>>>>The metric uses 60000 spatial samples.
>>>>>I thought about using a BSpline transform with an MI metric too in
>>>>>the next step of the registration, but I think I can forget about
>>>>>that if I have to deal with 3000 parameters to be optimized!
>>>>> Regards,
>>>>>Markus
>>>>
>>>>
>>>>
>>>
>>
> 
> 
> 



