[Insight-users] Filter buffers and memory limits

Kevin H. Hobbs hobbsk at ohiou.edu
Fri Nov 21 12:33:05 EST 2008


On Fri, 2008-11-21 at 08:40 -0800, Urlek wrote:
> Hi,
> 
>    even when I push my big 3D images through a fairly simple
> pipeline, the memory footprint of the process is several times bigger
> than what I would expect from the task. I guess filters are buffering
> the image as it passes through the pipeline so that multiple
> (different) copies reside in memory at any given time.

Yup, that is exactly what's happening: if you have a reader, a filter,
and a writer, then each of them keeps its output in memory. This can
use a lot of memory, but it makes things faster when the pipeline
changes. For example, if you change a parameter of the last filter
several times and write the output each time, only the last filter and
the writer need to re-run.
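
Something along these lines (the file names and the median filter are
just placeholders I made up for the sketch) shows the behavior:

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkMedianImageFilter.h"

int main( int, char *[] )
{
  typedef itk::Image< short, 3 >                          ImageType;
  typedef itk::ImageFileReader< ImageType >               ReaderType;
  typedef itk::MedianImageFilter< ImageType, ImageType >  MedianType;
  typedef itk::ImageFileWriter< ImageType >               WriterType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( "input.mha" );

  MedianType::Pointer median = MedianType::New();
  median->SetInput( reader->GetOutput() );

  WriterType::Pointer writer = WriterType::New();
  writer->SetInput( median->GetOutput() );
  writer->SetFileName( "output.mha" );

  // First pass: the reader, the filter, and the writer all execute,
  // and each keeps its output image in memory.
  writer->Update();

  // Change only the filter's parameter and write again: the reader
  // does not re-execute because its output is still buffered.
  ImageType::SizeType radius;
  radius.Fill( 2 );
  median->SetRadius( radius );
  writer->Update();

  return 0;
}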

>     How do you usually go about this, when available memory is a limit? Do
> you process the image in single steps, releasing filters once they have been
> used? 

Yes, you can do this by calling ReleaseDataFlagOn() on each reader and
filter.
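
With the same hypothetical reader -> median -> writer pipeline as in
the sketch above, that just means turning the flag on before updating.
Each stage then frees its output as soon as the next stage has used
it, at the cost of re-running everything if you Update() again later:

  reader->ReleaseDataFlagOn();  // drop the raw image once it is filtered
  median->ReleaseDataFlagOn();  // drop the filtered image once it is written
  writer->Update();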

Or you can use streaming. In some situations the entire pipeline can be
"streamed": the end of the pipeline asks for a single piece, and each
upstream filter only works on the data required to produce that piece.
In this scenario a piece is read, a piece is filtered, and a piece is
written, so the entire image is never loaded into memory. I find it
convenient to use the ITK-to-VTK pipeline and the VTK image writers
here because those writers have a nice streaming interface. Some
filters cannot work correctly this way: fast marching, connectivity,
and IIR filters can all depend on the entire image.
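
Just to sketch the idea without the VTK glue -- assuming a file format
whose ImageIO can stream (MetaImage, for example), a streamable filter
like the median, and a version of itk::ImageFileWriter that supports
SetNumberOfStreamDivisions() -- the pure-ITK version looks roughly
like this (file names are again made up):

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkMedianImageFilter.h"

int main( int, char *[] )
{
  typedef itk::Image< short, 3 >                          ImageType;
  typedef itk::ImageFileReader< ImageType >               ReaderType;
  typedef itk::MedianImageFilter< ImageType, ImageType >  MedianType;
  typedef itk::ImageFileWriter< ImageType >               WriterType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( "big_input.mha" );

  MedianType::Pointer median = MedianType::New();
  median->SetInput( reader->GetOutput() );

  WriterType::Pointer writer = WriterType::New();
  writer->SetInput( median->GetOutput() );
  writer->SetFileName( "big_output.mha" );

  // Ask the writer to pull the image through the pipeline in pieces.
  // Each piece is read, filtered, and written before the next one, so
  // the whole volume is never in memory at once.
  writer->SetNumberOfStreamDivisions( 16 );
  writer->Update();

  return 0;
}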

Also you can use the ITK StreamingImageFilter, which streams its
upstream filters. You still need enough memory for at least one copy of
the whole image. Still, it can be a lifesaver if all you want as output
is a low-resolution, low-precision, scalar image while your upstream
pipeline is streamable but high-resolution, high-precision, and uses
vector pixel types.
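
Roughly like this (the gradient and shift/scale filters are just
stand-ins for whatever streamable, high-precision upstream pipeline
you actually have, and the file name and scale value are made up):

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkGradientMagnitudeImageFilter.h"
#include "itkShiftScaleImageFilter.h"
#include "itkStreamingImageFilter.h"

int main( int, char *[] )
{
  typedef itk::Image< float, 3 >          FloatImageType;
  typedef itk::Image< unsigned char, 3 >  ByteImageType;

  typedef itk::ImageFileReader< FloatImageType > ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( "big_input.mha" );

  // Expensive upstream work at float precision (streamable).
  typedef itk::GradientMagnitudeImageFilter< FloatImageType,
                                             FloatImageType > GradientType;
  GradientType::Pointer gradient = GradientType::New();
  gradient->SetInput( reader->GetOutput() );

  // Reduce to a cheap 8-bit image before buffering it whole.
  typedef itk::ShiftScaleImageFilter< FloatImageType,
                                      ByteImageType > ScaleType;
  ScaleType::Pointer scale = ScaleType::New();
  scale->SetInput( gradient->GetOutput() );
  scale->SetScale( 0.1 );

  // The streamer pulls the upstream filters piece by piece, but it
  // does allocate its complete 8-bit output -- that is the one full
  // copy of the image you still need memory for.
  typedef itk::StreamingImageFilter< ByteImageType,
                                     ByteImageType > StreamerType;
  StreamerType::Pointer streamer = StreamerType::New();
  streamer->SetInput( scale->GetOutput() );
  streamer->SetNumberOfStreamDivisions( 32 );
  streamer->Update();

  ByteImageType::Pointer result = streamer->GetOutput();

  return 0;
}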

> Where could I read more on ITK's memory management and on how to
> optimally run a processing pipeline in memory-critical cases?
> 

As always, read the Software Guide, but search the list archives too;
there have been some nice discussions.

Ask questions and tell the list what you are trying to do. 

> Thanks a lot!


More information about the Insight-users mailing list