[Insight-users] Incremental Updates

Luis Ibanez luis.ibanez@kitware.com
Sun, 23 Mar 2003 11:35:28 -0500


Hi Ron,

The application I mentioned in my previous
message shares many characteristics with yours.

In our case, we are using ITK for supporting
Image Guided Surgery. We have images coming
from a fluoroscope C-arm. These images are
taken from a standard video output on the
C-arm and are digitized using a Matrox
grabber card.

The images are then processed with ITK in
order to segment surgical instruments present
in the fluoroscopic image.

The processing is done on demand, which means
that the application asks for a new update
of the segmented object; this triggers the update
of all the filters connected in the pipeline, up
to the first one, which is the vtkVideoSource:
http://www.vtk.org/doc/nightly/html/classvtkVideoSource.html

Note that the vtkVideoSource is not driven by
the grabber: it works in query mode rather than
interrupt mode, which means that the
image from the grabber card will only be taken
when we ask the vtkVideoSource to update.

Since a typical fluoroscopy runs at 5 to 10 frames
per second, we are still close to what could be
called 'real time'.

You probably have a faster natural refresh rate in
the case of intra-vascular ultrasound. So, you have
to do some high-level analysis of how fast the probe
is moving, how long it takes to process the images,
and how often you need to refresh your display. It is
a typical Producer/Consumer situation, in which you
don't want either of them sitting idle waiting for
the other.

Note also that you could use the grabber in Record mode.
Please take a look at the methods "Record()", "Play()",
"Stop()", "SetFrameRate()" in the vtkVideoSource class.


---


What is very particular in your case is the interest
in processing the images in groups of slices. This
doesn't fit very well in the ITK data pipeline
architecture, which means that the backbone structure
of your program will probably not be an ITK+VTK pipeline.

Instead, you may want to package groups of ITK filters
into a module that will perform the processing you want
to do for the group of incoming slices.

Your application will have to deal with the magic of:

1- Taking a group of slices from the input,
2- Forming a 3D image with them,
3- Passing this piece of data to the ITK module,
4- Setting the parameters of the module,
5- Executing the module,
6- Taking the output data from the module and
    pasting it into a larger 3D image.


Your option of grouping 50 slices to be
processed together will fit in this framework.
You will still have a lot of fun writing this high
level management in your application. ITK will
help you by relieving you from having to write
image processing and segmentation filters.

Those you can take directly from ITK and
package them in the module. You probably don't want
to write an N-D finite difference solver in order to
use Level Set methods, for example, knowing that one
is already available in ITK    :-)


If you take a look at the plug-ins we have been
writing for Volview:

    InsightApplications/VolviewPlugins

You will find a similar approach. Volview is not
aware of ITK, nor the ITK pipeline. The application
simply provides a buffer of data as 'input' data,
along with metadata like image dimensions, spacing,
and pixel type.

This is received by a module which internally contains
4 or 5 ITK filters; the FastMarchingModule, for example,
has about 5. The final output of the module is
copied into a buffer (also provided by Volview), and
after that the ITK module is destroyed.

In this configuration ITK is providing basic functionality
without taking over the architecture of your application.
It simply provides services.


----


In your application there seems to be a good deal of
correlation between one incoming frame and the next.
Since the probe is going inside the blood vessel, the
image will change little from one acquisition to the
next. That leads one to think that you could take
advantage of Level Sets as a progressive technique, in
which you use the final level set of one slice as the
initial level set for the next.

You could also consider achieving segmentation by
performing registration with a model. You could customize
a roughly circular model, initialize it at the size
of the vessel walls, and then perform deformable registration
between the model and the following slices.
With a careful setup of parameters, this can be far more
efficient than doing pure segmentation on every frame.

An example of Model-to-Image segmentation is available
in the SoftwareGuide.pdf:
http://www.itk.org/ItkSoftwareGuide.pdf



Regards,



    Luis


---------------------------

Ron Inbar wrote:
> Hi Luis,
> 
> I really want to thank you for the time and effort you put into your
> replies.
> 
> As I mentioned in previous posts, I am under considerable time pressure to
> decide whether or not to use ITK in my application.  I would mostly like to
> use the existing pipeline infrastructure *as is*, while I'm willing to write
> my own filters, if necessary.
> 
> So, having already discussed some important technical issues, I would like
> to ask you for your honest opinion on this matter: do you think I could
> benefit from using ITK (especially the pipeline infrastructure), or would it
> be better to design my own infrastructure?  From what I told you so far, do
> you think my application is too far from what you people had in mind when
> you designed ITK?
> 
> Your opinion, as well as the opinion of any other person involved in the
> design and development of ITK, is of great value to me.
> 
> I would like now to respond to some of your suggestions:
> 
> 
>>1) We are using vtkVideoSource successfully in
>>    another project in order to feed video data
>>    into ITK. (using the VTKImageToImage adaptor).
> 
> 
> I'm glad to hear that.  I'm just not sure I understood how to use
> vtkVideoSource.  Do I just have to "press" Record and, after some time, poll
> it for what it managed to grab so far?  What does "Play" do, then?
> 
> 
>>    That should work ok for you. Just keep in mind
>>    that the pipeline is driven by demand,
>>    and not by pushing data. So your
>>    application has to have a loop requesting
>>    images every X seconds (or milliseconds)
>>    in order to update the pipeline.
> 
> 
> That's fine with me; do you see any problems with this approach?
> 
> 
>>2) I would suggest to preallocate the full 3D
>>    image that you expect to fill up. That is,
>>    set the number of Z slices to the expected
>>    US images to be captured. And progressively
>>    copy data from your video source into the
>>    corresponding slice of the 3D data set.
> 
> 
> OK, no problem here, either.
> 
> 
>>3) I'm afraid that the pipeline will not be
>>    easy to use for just computing the incremental
>>    changes. Every filter in the pipeline will
>>    assume that the 3D image is an entirely new
>>    data set.
>>
>>    The only option is if your processing is
>>    done on each slice independently.... but
>>    this will put you in the old approach of
>>    doing 3D segmentation by stacking 2D
>>    segmentations.
> 
> 
> But what if I process not every frame, but every, say, 50 frames?
> I was thinking of doing something inspired by the StreamingImageFilter,
> namely:
> 1. Wait for 50 frames to arrive, then
> 2. Set the RequestedRegion on the last filter to include only those last 50
> frames, then
> 3. Update the pipeline, and finally
> 4. Append the result from the BufferedRegion to the segmentation of previous
> frames.
> The implementation could require writing some new "administrative" filter,
> but I may be able to use StreamingImageFilter as a baseline, so I think it
> shouldn't be too difficult.
> What do you think?
> 
> 
>>4) In practice it depends on what type of filters
>>    do you want to apply.
>>
>>     For example, you could manage to run a
>>     Region growing algorithm only on the region where
>>     the new slice(s) were added, and keeping previously
>>     segmented regions...
>>
>>     It will require some coding... but it should be
>>     feasible.
>>
>>
>>5)  Do you want this segmentation to work in real
>>     time ?
>>
>>     That is, as each new slice arrive, do you need
>>     to get the segmentation updated before the next
>>     video image gets there ?
> 
> 
> Not every single frame, but every 2-3 seconds.
>  
> 
>>6)  One option that comes to mind is to segment
>>     every slice using 2D level sets. For example
>>     GeodesicActiveContours.  then use the final
>>     level set of slice N as the initial level set
>>     for slice N+1.  Since level sets are robust
>>     to topological splits and merges, they would
>>     probably do the right thing when going through
>>     vascular branch points in the artery.
>>
>>     I'm assuming that you move the US probe along
>>     the axis of the artery, so each individual image
>>     shows the artery as an almost circular object....
>>     is this the case ?
> 
> 
> I attached some IVUS images so you can see for yourself.
> 
> Please let me know what you think about using ITK in this application.
> 
> Thanks again,
> 
> Ron
> 
> 
> 
> 