[Insight-users] MultiResMIRegistration Example

Luis Ibanez luis.ibanez@kitware.com
Thu, 11 Apr 2002 16:47:04 -0400


David,

A comment about Templates and performance:

>>David Rich wrote:
>>
>>During the registration, I observed the memory allocation for the
>>executable.  It typically jumped up and down, indicating some memory
>>cleanup.  However, it still ran between 30MB to 50MB, which raises
>>questions as to whether or not the intent of using heavily templated
>>code to improve operation efficiency is being successful.  
>>

The main advantages of using Templates (Generic Programming)
for image processing are:


1) Code reuse, which simplifies maintenance and debugging.
     For example, ITK has only one filter that takes care of all
     the binary pixel-wise operations between images.

2)  Inlining.  Because templates allow the compiler to make
      many decisions at compile time, a large number of
      operations can be successfully inlined.

      This is particularly important for images because of the
      large number of pixel-level operations.  Any attempt at
      using polymorphism at the pixel level is hopeless!
      Templating images over PixelType makes it possible to
      inline pixel-level operations that are executed millions
      of times.  Note that most of this inlining only happens
      when the compilation is done with optimization enabled.


Templates do not provide *any* advantage or disadvantage
for memory management. All the normal problems related
to memory allocation and release are *exactly* the same
when you use generic programming (Templates).

The success of heavily templated code can only be measured
in computing time. The goal in ITK is to be as close as possible
to the speed of doing image processing with (char *) pointers,
while still allowing users to freely customize pixel types and
express algorithms in a more generic way.

We are now looking into the memory consumption problem
that you have pointed out.  ITK filters are designed to work in
a data pipeline. In this approach, a lot of memory is kept
holding temporary data in order to reduce the time for
subsequent executions of the pipeline.   The basic assumption
is that users do not want to run the pipeline *only once*, but
typically run it multiple times in a row while changing
parameters on the different filters.
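
Here is a minimal sketch of that behaviour, using
itk::DiscreteGaussianImageFilter as a stand-in filter (the file
name and variance values are placeholders): after the first
Update() the reader's output stays in memory, so changing the
filter parameter and updating again re-executes only the
smoothing, not the file reading.

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkDiscreteGaussianImageFilter.h"

int main()
{
  typedef itk::Image<float, 2>                                  ImageType;
  typedef itk::ImageFileReader<ImageType>                       ReaderType;
  typedef itk::DiscreteGaussianImageFilter<ImageType, ImageType> SmootherType;

  ReaderType::Pointer   reader   = ReaderType::New();
  SmootherType::Pointer smoother = SmootherType::New();

  reader->SetFileName("input.mhd");          // placeholder file name
  smoother->SetInput(reader->GetOutput());

  smoother->SetVariance(1.0);
  smoother->Update();     // first run: reads the file and smooths

  smoother->SetVariance(4.0);
  smoother->Update();     // second run: only the smoother re-executes
  return 0;
}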

Let's consider the experiment that you are running right
now, using a file for entering parameters:  Each time you
run the experiment, a new pyramid of images is computed.
If the change you made in the parameters file is only
related to the number of iterations of the optimizer, there
was no need to recompute the pyramids. In a full-fledged
application with a GUI you would modify these parameters
in a text box and re-run the experiment, paying only the time
of running the optimizer, without having to spend time reading
files again and computing the downsampling of the images
for the pyramids. The price for reducing computing time is
paid in memory consumption.
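
The caching rationale can be sketched outside of ITK as well
(hypothetical code, not the toolkit's implementation): the
pyramid is rebuilt only when its input changes, so a second run
that only changes the number of iterations reuses the cached
levels.

#include <cstdio>
#include <vector>

struct PyramidCache
{
  std::vector<std::vector<float> > levels; // downsampled copies of the image
  bool upToDate = false;

  void Build(const std::vector<float>& image, unsigned int nLevels)
  {
    if (upToDate) { return; }              // reuse the cached pyramid
    std::printf("Building %u pyramid levels...\n", nLevels);
    levels.assign(nLevels, image);         // stand-in for real downsampling
    upToDate = true;
  }
};

int main()
{
  std::vector<float> image(256 * 256, 0.0f);
  PyramidCache pyramid;

  // First run: the pyramid must be computed.
  unsigned int iterations = 200;
  pyramid.Build(image, 3);
  std::printf("Optimizing for %u iterations\n", iterations);

  // Second run: only the iteration count changed, so the
  // cached pyramid is reused and only the optimization is paid for.
  iterations = 400;
  pyramid.Build(image, 3);
  std::printf("Optimizing for %u iterations\n", iterations);
  return 0;
}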

This latter approach is the one that has been assumed in
the design of the toolkit. It is, in fact, the one that corresponds
to real interactive applications.

The question that your tests have raised is whether an
alternative approach should be considered, in which the
pipeline is expected to be executed only once.  In that case,
every filter would be eliminated as soon as it is done with
its execution.  This approach could release memory
progressively, under the assumption that intermediate data
will not be needed anymore.
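
A hypothetical sketch of that single-shot style (plain C++, not
current ITK behaviour): each intermediate buffer is handed to the
next stage and released as soon as it has been consumed, trading
re-execution speed for a lower peak memory footprint.

#include <cstddef>
#include <utility>
#include <vector>

typedef std::vector<float> Buffer;

// Stand-in for a filter: takes ownership of its input, so the input
// buffer is freed as soon as this stage returns.
static Buffer Stage(Buffer input)
{
  Buffer output(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    output[i] = input[i] * 0.5f;   // stand-in for real per-pixel processing
  }
  return output;
}

int main()
{
  Buffer image(512 * 512, 0.0f);

  // Moving the buffer into each stage lets the previous intermediate
  // result be released immediately instead of being cached for later
  // re-runs of the pipeline.
  Buffer smoothed   = Stage(std::move(image));
  Buffer downsized  = Stage(std::move(smoothed));
  Buffer registered = Stage(std::move(downsized));

  return registered.empty() ? 1 : 0;
}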



    Luis