[Insight-users] masked neighborhood operators (convolution) - want smaller output than input

Amy C mathematical.coffee at gmail.com
Mon Jan 18 05:00:03 EST 2010


Sorry, please ignore the previous stub from me; I accidentally pressed some
keys and sent the email before I was done typing it.

I've had a few more goes at this, and I think the pseudocode is something
like this:

ImageRegionIteratorWithIndex xIt( xImage, ... );
itk::NeighborhoodInnerProduct<ImageType> innerProduct;

xIt.GoToBegin();
while ( !xIt.IsAtEnd() ) {
    // convert the index from x-image coordinates to the coordinates in the
    // input image. In my case idx[i] = xIt.GetIndex()[i]*h[i]
    idx = ConvertXImageIndexToInputImage( xIt.GetIndex() );

    // get the neighbourhood centred at idx. I guess it would need to be a
    // neighbourhood iterator?
    NeighborhoodIteratorType nbhd( radius, inputImage, ???? );

    // do the inner product
    xIt.Set( innerProduct( nbhd, kernel ) );

    ++xIt;
}

And I could use a face calculator to put in boundary conditions etc.
Is there a more efficient way to do this? Do I really need to construct a
neighbourhood iterator and make sure it is centred at idx on every iteration,
even though I won't be incrementing it? (I only use a neighbourhood iterator
because that's what innerProduct needs.) Is there something in ITK to the
effect of

nbhd = Neighborhood( radius, idx ); // radius, centre pixel
xIt.Set( innerProduct( nbhd, kernel ) );

?
I'll be doing this quite a few times, inside an optimisation loop, so I
wondered if there is anything particularly fast.
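
To make the question concrete, this is roughly the shape I'm hoping for:
construct one neighbourhood iterator up front and just reposition it at each
idx, rather than constructing it inside the loop. I'm guessing at
SetLocation() as the repositioning call, and the Gaussian kernel and float
image type here are only stand-ins for my real ones:

#include "itkImage.h"
#include "itkImageRegionIteratorWithIndex.h"
#include "itkConstNeighborhoodIterator.h"
#include "itkNeighborhoodInnerProduct.h"
#include "itkGaussianOperator.h"

const unsigned int Dimension = 2;
typedef float                                        PixelType;
typedef itk::Image< PixelType, Dimension >           ImageType;
typedef itk::ConstNeighborhoodIterator< ImageType >  NeighborhoodIteratorType;

void SampleConvolution( ImageType::Pointer inputImage,
                        ImageType::Pointer xImage,
                        const ImageType::SizeType & h )
{
  // Stand-in kernel: a 1-D Gaussian along direction 0.
  itk::GaussianOperator< PixelType, Dimension > kernel;
  kernel.SetDirection( 0 );
  kernel.SetVariance( 1.0 );
  kernel.CreateDirectional();

  itk::NeighborhoodInnerProduct< ImageType > innerProduct;

  // One neighbourhood iterator, constructed once over the whole input.
  NeighborhoodIteratorType nbhd( kernel.GetRadius(), inputImage,
                                 inputImage->GetLargestPossibleRegion() );

  itk::ImageRegionIteratorWithIndex< ImageType > xIt(
    xImage, xImage->GetLargestPossibleRegion() );

  for ( xIt.GoToBegin(); !xIt.IsAtEnd(); ++xIt )
    {
    // x-image index -> input-image index: idx[i] = xIt.GetIndex()[i] * h[i]
    ImageType::IndexType idx;
    for ( unsigned int i = 0; i < Dimension; ++i )
      {
      idx[i] = xIt.GetIndex()[i]
               * static_cast< ImageType::IndexType::IndexValueType >( h[i] );
      }

    // Reposition the existing iterator instead of constructing a new one.
    nbhd.SetLocation( idx );
    xIt.Set( innerProduct( nbhd, kernel ) );
    }
}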

cheers,
Amy

-----------------------------------------------------------------------------
x.x.x.x
.......
x.x.x.x
.......
x.x.x.x

Here my input image is of dimension 5x7, and the 'x' and '.' are voxels of
the input image. (It is really N-dimensional).
Then there's an image of dimension 3x4 (call it the x-image), where
x-image(i,j) is the value of the (i,j)th 'x' in the picture above.
The idea is that I want to do a convolution, but only at the voxels marked
'x'. I know MaskNeighborhoodOperatorImageFilter does this; however, I would
like the output image to have the same dimensions as the x-image - smaller
than the input image - so that output(i,j) contains the convolution result at
the (i,j)th 'x'.
I'm using this for something done in voxel space, so it doesn't matter if the
physical space/origin aren't right.
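
To be concrete about the mapping: in the picture above the input is 5x7 and
the x-image is 3x4, so h = (2,2), and x-image index (1,2) corresponds to
input index (2,4). The ConvertXImageIndexToInputImage helper in my pseudocode
is just an element-wise multiply, something like this (with h passed in
explicitly, and the float image type only a stand-in for my real one):

#include "itkImage.h"

typedef itk::Image< float, 2 > ImageType;   // stand-in for my real image type

// x-image index -> input-image index: idx[i] = xIndex[i] * h[i]
// e.g. with h = (2,2), x-image index (1,2) maps to input index (2,4)
ImageType::IndexType
ConvertXImageIndexToInputImage( const ImageType::IndexType & xIndex,
                                const ImageType::SizeType &  h )
{
  ImageType::IndexType idx;
  for ( unsigned int i = 0; i < ImageType::ImageDimension; ++i )
    {
    idx[i] = xIndex[i]
             * static_cast< ImageType::IndexType::IndexValueType >( h[i] );
    }
  return idx;
}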

The additional bits of information are:
- I know exactly how to orient the 'x' within the input image: they are on a
regular grid starting at index (0,0) of the input image, and then placed
every h voxels (of the input image). (I allocate the x-image from this; see
the sketch after this list.)
- My kernel is separable. It can range from 3x3 to ~35x35, although I
anticipate the likely sizes will be 3x3 to ~20x20. (Mine will usually be 2-
or 3-D.)
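
For completeness, this is roughly how I allocate the x-image from the input
region and the grid step h, assuming the grid starts at index (0,0); the
function name and float image type are just placeholders:

#include "itkImage.h"

typedef itk::Image< float, 2 > ImageType;   // stand-in for my real image type

// With the 5x7 example above and h = (2,2): (5-1)/2 + 1 = 3 and
// (7-1)/2 + 1 = 4, i.e. a 3x4 x-image.
ImageType::Pointer
MakeXImage( const ImageType * inputImage, const ImageType::SizeType & h )
{
  ImageType::SizeType inputSize =
    inputImage->GetLargestPossibleRegion().GetSize();

  // One grid point at index 0, then one every h voxels.
  ImageType::SizeType xSize;
  for ( unsigned int i = 0; i < ImageType::ImageDimension; ++i )
    {
    xSize[i] = ( inputSize[i] - 1 ) / h[i] + 1;
    }

  ImageType::IndexType xStart;
  xStart.Fill( 0 );

  ImageType::Pointer xImage = ImageType::New();
  xImage->SetRegions( ImageType::RegionType( xStart, xSize ) );
  xImage->Allocate();

  return xImage;
}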