Sorry, please ignore the previous stub from me; I accidentally pressed some keys and sent the email before I was done typing it.

I've had a few more goes at this, and I think the pseudocode is something like this:
ImageRegionIteratorWithIndex xIt( xImage, ... );
itk::NeighborhoodInnerProduct<ImageType> innerProduct;

xIt.GoToBegin();
while ( !xIt.IsAtEnd() )
  {
  // Convert the index from x-image coordinates to input-image
  // coordinates. In my case idx[i] = xIt.GetIndex()[i] * h[i].
  idx = ConvertXImageIndexToInputImage( xIt.GetIndex() );

  // Get the neighbourhood centred at idx. I guess it would need to be
  // a neighbourhood iterator?
  NeighborhoodIteratorType nbhd( radius, inputImage, ???? );

  // Do the inner product.
  xIt.Set( innerProduct( nbhd, kernel ) );

  ++xIt;
  }

And I could use a face calculator to put in boundary conditions etc.

Is there a more efficient way to do this? Do I really need to construct a neighbourhood iterator and make sure it is centred at idx on every iteration, even though I won't be incrementing it? (I only use a neighbourhood iterator because that's what innerProduct needs.) Is there something in ITK to the effect of

nbhd = Neighborhood( radius, idx );  // radius, centre pixel
xIt.Set( innerProduct( nbhd, kernel ) );

?

I'll be doing this quite a few times, inside an optimisation loop, so I wondered if there was anything particularly fast.
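For concreteness, here is a plain-C++ sketch of the effect I'm after, with ITK left out entirely; the function name, the row-major std::vector image layout, and the crude replicate-boundary handling are all my own invention, not any ITK API. It evaluates the convolution only at the strided 'x' positions and writes one output pixel per 'x':

```cpp
#include <cassert>
#include <vector>

// Sketch (no ITK, made-up names): convolve 'input' (rows x cols,
// row-major) with 'kernel' (kRows x kCols, odd sizes), but only at
// voxels (i*h, j*h), writing one output pixel per 'x'. Boundaries
// are handled crudely by clamping indices to the image edge.
std::vector<double> convolveAtX( const std::vector<double>& input,
                                 int rows, int cols,
                                 const std::vector<double>& kernel,
                                 int kRows, int kCols,
                                 int h,
                                 int& outRows, int& outCols )
{
  outRows = ( rows - 1 ) / h + 1;   // number of 'x' down each column
  outCols = ( cols - 1 ) / h + 1;   // number of 'x' along each row
  std::vector<double> output( outRows * outCols, 0.0 );

  for ( int oi = 0; oi < outRows; ++oi )
    for ( int oj = 0; oj < outCols; ++oj )
      {
      const int ci = oi * h;        // neighbourhood centre, in
      const int cj = oj * h;        // input-image coordinates
      double sum = 0.0;
      for ( int ki = 0; ki < kRows; ++ki )
        for ( int kj = 0; kj < kCols; ++kj )
          {
          int ii = ci + ki - kRows / 2;
          int jj = cj + kj - kCols / 2;
          // crude replicate-boundary condition
          ii = ii < 0 ? 0 : ( ii >= rows ? rows - 1 : ii );
          jj = jj < 0 ? 0 : ( jj >= cols ? cols - 1 : jj );
          sum += input[ ii * cols + jj ] * kernel[ ki * kCols + kj ];
          }
      output[ oi * outCols + oj ] = sum;
      }
  return output;
}
```

For the 5x7 example below with h = 2 this gives a 3x4 output, i.e. exactly the x-image dimensions.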
Cheers,
Amy

-----------------------------------------------------------------------------
x.x.x.x
.......
x.x.x.x
.......
x.x.x.x

Here my input image is of dimension 5x7, and the 'x' and '.' are voxels of the input image. (It is really N-dimensional.)
Then there's an image of dimension 3x4 (call it the x-image), where
x-image(i,j) is the value of the (i,j)th 'x' in the picture above.

The idea is that I want to do a convolution, but only at the voxels marked 'x'.
I know MaskNeighborhoodOperatorImageFilter does this; however, I would like the output image to have the same dimensions as the x-image, i.e. smaller than the input image, so that output(i,j) contains the convolution at the (i,j)th 'x'.

I use this for something done in voxel space, so it doesn't matter if the physical space/origin aren't right.

The additional bits of information are:
- I know exactly how the 'x' are oriented within the input image: they lie on a regular grid starting at (0,0) of the input image, placed every h voxels (of the input image).
- My kernel is separable. It can range from 3x3 to ~35x35, though I anticipate the likely sizes will be 3x3 to ~20x20. (Mine will usually be 2- or 3-D.)
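Since the kernel is separable, the 2-D convolution can be done as two 1-D passes, dropping the per-pixel cost from k*k multiplies to 2k. A rough sketch of what I mean (again plain C++ with no ITK; function name, layout, and replicate boundaries are my own assumptions):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Separable convolution as two 1-D passes: filter every row with
// kRow, then filter every column of the intermediate result with
// kCol. For a k x k kernel this costs 2k multiplies per pixel
// instead of k*k. Boundaries are clamped (replicate).
std::vector<double> separableConvolve( const std::vector<double>& input,
                                       int rows, int cols,
                                       const std::vector<double>& kRow,
                                       const std::vector<double>& kCol )
{
  const int rH = static_cast<int>( kRow.size() ) / 2;
  const int cH = static_cast<int>( kCol.size() ) / 2;
  auto clamp = []( int v, int hi ) { return v < 0 ? 0 : ( v > hi ? hi : v ); };

  // Pass 1: 1-D filtering along each row.
  std::vector<double> tmp( rows * cols, 0.0 );
  for ( int i = 0; i < rows; ++i )
    for ( int j = 0; j < cols; ++j )
      {
      double s = 0.0;
      for ( std::size_t t = 0; t < kRow.size(); ++t )
        s += input[ i * cols + clamp( j + (int)t - rH, cols - 1 ) ] * kRow[t];
      tmp[ i * cols + j ] = s;
      }

  // Pass 2: 1-D filtering along each column.
  std::vector<double> out( rows * cols, 0.0 );
  for ( int i = 0; i < rows; ++i )
    for ( int j = 0; j < cols; ++j )
      {
      double s = 0.0;
      for ( std::size_t t = 0; t < kCol.size(); ++t )
        s += tmp[ clamp( i + (int)t - cH, rows - 1 ) * cols + j ] * kCol[t];
      out[ i * cols + j ] = s;
      }
  return out;
}
```

Presumably the second pass could also be evaluated only at the 'x' rows/columns rather than everywhere, which would cut the work down further for my strided case.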