[Insight-users] Rotation of extracted regions of images
Neal R. Harvey
harve at lanl.gov
Thu Aug 20 14:33:30 EDT 2009
I am again having problems with certain aspects of ITK that seem straightforward
according to the limited examples provided in the book, but that in practice
prove far from simple.
Basically, I am trying to extract regions of a larger image, rotate them by a
certain amount, and then save these rotated extracted regions as image files.
I want the rotated images to contain all the information that was in the
original unrotated regions - i.e. I want the vertices of the original
rectangular region to lie exactly on the edges of the rotated version. The
rotation angle can be anything from 0 to 360 degrees.
Just following the examples in the book - translating so that the center is at
the origin, rotating, and then translating back - hasn't worked so far.
I have been breaking the problem down into parts to try and figure out where I
am getting it wrong. Firstly, I split the rotation into four scenarios, based
on which quadrant the rotation angle falls in: 1) 0 - 90 degrees; 2) 90 - 180;
3) 180 - 270; 4) 270 - 360.
Here's how I calculate the image center and thus the first translation:
const double imageCenterX = ExtractedOrigin[0] +
    ((ExtractedSpacing[0] * ExtractedImageSize[0]) / 2.0);
const double imageCenterY = ExtractedOrigin[1] +
    ((ExtractedSpacing[1] * ExtractedImageSize[1]) / 2.0);
translation1[0] = -imageCenterX;
translation1[1] = -imageCenterY;
AffineTransform->Translate( translation1 );
AffineTransform->Rotate2D( orientation, false );
(pretty much exactly the same as in the textbook examples)
Then, for each of the four rotation scenarios, I calculate from the rotation
angle what the overall dimensions of the resampled image should be, so that the
vertices of the rotated box lie on the edges of the output image.
if ((orientation >= 0.0) && (orientation < (Pi / 2.0))) {
    // Scenario 1: 0 <= orientation < Pi/2
    Xstep1 = (ExtractedImageSize[0] / 2.0) * cos( orientation );
    Xstep2 = (ExtractedImageSize[1] / 2.0) * sin( orientation );
    Ystep1 = (ExtractedImageSize[0] / 2.0) * sin( orientation );
    Ystep2 = (ExtractedImageSize[1] / 2.0) * cos( orientation );
    ResampledImageSize[0] = static_cast<long unsigned int>(2.0 * (Xstep1 + Xstep2));
    ResampledImageSize[1] = static_cast<long unsigned int>(2.0 * (Ystep1 + Ystep2));
} else if ((orientation >= (Pi / 2.0)) && (orientation < Pi)) {
    // Scenario 2: Pi/2 <= orientation < Pi
    Xstep1 = (ExtractedImageSize[0] / 2.0) * cos( Pi - orientation );
    Xstep2 = (ExtractedImageSize[1] / 2.0) * sin( Pi - orientation );
    Ystep1 = (ExtractedImageSize[0] / 2.0) * sin( Pi - orientation );
    Ystep2 = (ExtractedImageSize[1] / 2.0) * cos( Pi - orientation );
    ResampledImageSize[0] = static_cast<long unsigned int>(2.0 * (Xstep1 + Xstep2));
    ResampledImageSize[1] = static_cast<long unsigned int>(2.0 * (Ystep1 + Ystep2));
} else if ((orientation >= Pi) && (orientation < (3.0 * Pi / 2.0))) {
    // Scenario 3: Pi <= orientation < 3*Pi/2
    Xstep1 = (ExtractedImageSize[0] / 2.0) * cos( orientation - Pi );
    Xstep2 = (ExtractedImageSize[1] / 2.0) * sin( orientation - Pi );
    Ystep1 = (ExtractedImageSize[0] / 2.0) * sin( orientation - Pi );
    Ystep2 = (ExtractedImageSize[1] / 2.0) * cos( orientation - Pi );
    ResampledImageSize[0] = static_cast<long unsigned int>(2.0 * (Xstep1 + Xstep2));
    ResampledImageSize[1] = static_cast<long unsigned int>(2.0 * (Ystep1 + Ystep2));
} else {
    // Scenario 4: 3*Pi/2 <= orientation < 2*Pi
    Xstep1 = (ExtractedImageSize[0] / 2.0) * cos( (2.0 * Pi) - orientation );
    Xstep2 = (ExtractedImageSize[1] / 2.0) * sin( (2.0 * Pi) - orientation );
    Ystep1 = (ExtractedImageSize[0] / 2.0) * sin( (2.0 * Pi) - orientation );
    Ystep2 = (ExtractedImageSize[1] / 2.0) * cos( (2.0 * Pi) - orientation );
    ResampledImageSize[0] = static_cast<long unsigned int>(2.0 * (Xstep1 + Xstep2));
    ResampledImageSize[1] = static_cast<long unsigned int>(2.0 * (Ystep1 + Ystep2));
}
If I then do the following, the images I get are not at all what I was hoping
for.
translation2[0] = imageCenterX;
translation2[1] = imageCenterY;
ResampleFilter->SetSize( ResampledImageSize );
AffineTransform->Translate( translation2, false );
ResampleFilter->SetTransform( AffineTransform );
ResampleFilter->Update();
writer->SetInput( ResampleFilter->GetOutput() );
I am assuming that the problem is to do with the second translation, but I am
at a loss as to how to determine what it should be. I have sat down and looked
at the trigonometry and tried numerous things based on it, but to no avail.
Could it be that I am confused about how the translations and rotations
compose - whether they are applied in the original co-ordinate space or in a
rotated co-ordinate space?
I know I post far too much on this list. I have been impressed with ITK's
capabilities for a number of things, but somewhat frustrated by the difficulty
of grasping the details of what should be some rather simple operations, and of
translating the limited examples provided in the textbook and online to the
real-world applications I am dealing with. I have been struggling with this
rotation problem for a week, and I struggled with another problem for two weeks
before getting the information necessary to do what I needed. I have invested a
fair bit of time in ITK, and I am hoping that, ultimately, it will be worth it.
Any assistance anyone can provide with my problem would be much appreciated. If
you can point me to any documents (beyond the textbook) that might enlighten
me, that would be great. And if you have dealt with this problem yourself and
found a solution, sharing your wisdom would be very much appreciated too.
Cheers
Harve