[Insight-users] [Fwd: Re: MultResMIRegistration using CT and US]
Luis Ibanez
luis.ibanez@kitware.com
Wed, 15 May 2002 09:15:33 -0400
Hi Jon,
I'm glad to hear that you have made so much
progress with OS X, we all hope to see ITK
running on Macs one day.
About your registration questions:
0) About registering CT and Ultrasound:
Yes, it is possible and very useful !
One of the great interests in doing this
is to provide guidance during surgery,
in particular for those interventions
that are minimally invasive, because in
those cases visibility is minimal.
Having an ultrasound doesn't produce much
additional information by itself, but if you use the
ultrasound to register the patient's anatomy
with a CT and then use the CT content for
displaying, let's say, a tumor, that could be
a big help for the surgeon.
1) Resampling the ultrasound to fit the CT
resolution will simplify your work of
selecting parameters but would probably not
help at all with the registration process.
The advantage for you is that when you are
selecting parameters for subsampling the
volumes you can think of them in a similar way.
I would suggest doing resampling during your
first tries with the registration method, while
you find a good combination of parameters for your
problem. (Most of the fun in registration is trying
to figure out the right parameters to fine-tune a
particular method to your application.) Then, once
you are familiar with the trade-offs of the parameters,
I would go back to the original resolution
of both data sets. That should speed up the registration
(probably a lot!).
2) The influence of large regions of black pixels
is null as far as the Metrics are concerned. (Except
for "pattern intensity" where pixel count is in some
way part of the metric, in which case you don't want
the zero pixels to participate).
The major concern, however, is that adding a lot of
black pixels could give you a big hit in performance
(not because they are zero, but because there are a
lot of them). In general, registration time is linearly
proportional to the number of pixels in the volume.
With mutual information this is not exactly true, since
there is a process for selecting a precise number of
pixels and the metric is computed only for those
pixels (but in practice the number of pixels selected
could depend on the size of the volume...).
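To make the sampling idea concrete, here is a minimal sketch in plain Python (not ITK code; the list-based "images", the mean-squares metric and the sample count are all made-up stand-ins) of a metric evaluated over a fixed number of randomly sampled pixels, so that its cost no longer grows with the volume size:

```python
# Illustrative sketch (not the ITK API): a metric evaluated on a
# fixed number of randomly sampled pixels, so its cost is
# independent of the total volume size.
import random

def sampled_mean_squares(fixed, moving, n_samples=50, seed=0):
    """Mean-squares metric over a random subset of pixel indices.

    'fixed' and 'moving' are plain lists standing in for image
    volumes; real ITK metrics work on itk::Image objects.
    """
    rng = random.Random(seed)
    indices = [rng.randrange(len(fixed)) for _ in range(n_samples)]
    return sum((fixed[i] - moving[i]) ** 2 for i in indices) / n_samples

fixed = [float(i % 7) for i in range(10_000)]
moving = [v + 1.0 for v in fixed]           # moving = fixed shifted by 1
print(sampled_mean_squares(fixed, moving))  # -> 1.0, whatever the volume size
```

Only the 50 sampled pixels are visited per metric evaluation, which is why the evaluation cost depends on the sample count rather than on the full pixel count.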
3) I think you could take advantage of the fact that
the roles of the images are not symmetrical in ITK
registration. That means that when you pass
the CT as the 'fixed' image and the Ultrasound as
the 'moving' image, the process of registration
will behave differently than if you pass CT as 'moving'
and US as 'fixed'.
The registration process basically takes pixels from
the 'fixed' image and maps their spatial locations through
the current transform into the geometric space of the
'moving' image. (You can see those details in the
MeanSquaresImageMetric, for example.) It is in your best
interest to use the US as the 'fixed' image since
that will put your registration in a more favorable
setup.
Let's say that your Ultrasound has 1 million pixels
(I'm making up the numbers here... so please forgive
me if they are not realistic, just replace them with
values that fit your images) while your CT (512x512x300?)
has ~80 megapixels.
If you attempt to use the CT as the 'fixed' image,
each evaluation of the metric will visit 8x10^7 pixels,
most of which will end up being mapped into the black
regions of the Ultrasound and not contributing at all
to the metric... but on the way they are crunching
your CPU cycles.
If, on the other hand, you use your Ultrasound as the 'fixed'
image, without *any* resampling, the computation of the
metric will visit only 10^6 pixels, and most of them
will be mapped to regions of the CT that contain some
information, so they will probably contribute to the
metric.
Another advantage of exchanging the roles of the images
is that the derivatives of the Metrics use the gradient of
the 'moving' image, and I guess that you would rather use the
gradient of the CT than the gradient of the US.
Exchanging the images between 'fixed' and 'moving'
just means that at the end you get either a Transform T
or the inverse of this same transform T.
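As a small plain-Python illustration of that last point (hypothetical numbers, not ITK code): composing a 2D rigid transform with its inverse brings a point back to where it started, which is why, geometrically, it doesn't matter which image plays which role:

```python
# Sketch (plain Python, not ITK): a 2D rigid transform T and its
# inverse. Mapping a point through T and then through T^-1 recovers
# the original point.
import math

def rigid_2d(point, angle, tx, ty):
    """Rotate by 'angle' (radians), then translate by (tx, ty)."""
    x, y = point
    return (math.cos(angle) * x - math.sin(angle) * y + tx,
            math.sin(angle) * x + math.cos(angle) * y + ty)

def rigid_2d_inverse(point, angle, tx, ty):
    """Inverse of rotate-then-translate: untranslate, then unrotate."""
    x, y = point[0] - tx, point[1] - ty
    return (math.cos(-angle) * x - math.sin(-angle) * y,
            math.sin(-angle) * x + math.cos(-angle) * y)

p = (10.0, 5.0)
q = rigid_2d(p, math.pi / 6, 2.0, -3.0)
r = rigid_2d_inverse(q, math.pi / 6, 2.0, -3.0)
print(r)  # approximately (10.0, 5.0), up to floating-point rounding
```

So if registering US-to-CT gives you T, the CT-to-US result is just T^-1, and you can invert whichever one was cheaper to compute.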
4) About the learning rate....
This is probably the hardest parameter to fit in the
process. Let me tell you first what the parameter does,
so you can set up your own rationale about how to change
the values:
The registration framework in ITK is modular. The idea
is that you compose your registration in the same way
you order from a menu: you select the appetizer, the entree,
the dessert and the drink, and each one is relatively
independent of the others. Probably the most critical
components here are the Metrics and the Optimizers.
The metric compares your two images (after a transformation)
to evaluate how well one matches the other. The optimizer
keeps changing the parameters of the transformation, looking
for a combination that will give you the best value
of the metric.
The particular example that you are using utilizes the
"GradientDescent" optimizer. It implements one of the
many variants of gradient descent. The learning rate you
are asking about is used by the gradient descent in order
to compute how long a step should be in the parametric
space of the optimization.
Let's say that you are just doing rigid 2D: The parameter
space is three-dimensional: { translation: (x,y) and rotation
angle: th }.
The optimizer starts with a particular combination (x,y,th).
It evaluates the metric for these values and the derivative
of the metric with respect to x, y and th. With this derivative
the optimizer has a 'direction' in this 3D space (x,y,th)
along which it seems good to move in order to improve
the value of the metric. Having the direction, the question
is then: how far should I move from my current (x,y,th)
position?
The rule in the "GradientDescent" optimizer is that the step
is computed by taking the Metric-Derivative vector and
multiplying it by the "Learning rate".
So: the larger the learning rate, the longer the step will
be. If you have a lot of time (or a very fast computer),
the simplest thing to do is just to stick with small values.
How small...? Well, it depends on the values of the
Metric derivative, because what counts in the end is the
product of the Metric derivative and the learning rate.
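In other words (a plain-Python sketch of the rule just described, with made-up derivative values, not the actual itk::GradientDescentOptimizer code):

```python
# Sketch of the GradientDescent update rule described above:
# step = learning_rate * metric_derivative, applied to (x, y, th).
def gradient_descent_step(params, derivative, learning_rate):
    """Advance each parameter by learning_rate times its derivative."""
    return [p + learning_rate * d for p, d in zip(params, derivative)]

params = [0.0, 0.0, 0.0]        # (x, y, th)
derivative = [2.0, -1.0, 0.5]   # made-up metric derivative
print(gradient_descent_step(params, derivative, 0.1))
# -> [0.2, -0.1, 0.05]
```

Note that the same derivative with a learning rate of 10.0 would move (x,y,th) a hundred times farther, which is exactly why the rate has to be matched to the magnitude of the derivative.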
In practice here is the 'scientific method' we use for
finding good values for the learning rate:
1) get coffee :-)
2) select an arbitrary value (let's say 1.0)
3) run a registration and track the values of the transform
from one iteration to the next. Here you need to be clear about
how the values of the transform parameters represent movements
in the image space.
4) If you notice that from one iteration to the next the
transform is moving you, let's say, 100 pixels,
that's too big a jump; you have to *divide* the
learning rate by, let's say, 10, so the step size is smaller
(about 10 pixels).
5) If, on the other hand, you find that the transform is only
moving you 0.0001 pixels between iterations, then you need
to *multiply* the learning rate by at least 10^4 so your
steps are in the range of 1 to 10 pixels per iteration.
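That recipe can be sketched in plain Python (the 1-10 pixel target range and the factors of 10 are just the rule of thumb above; none of this is ITK code):

```python
# Sketch of the tuning heuristic: look at how far the transform moved
# the image between iterations, and rescale the learning rate by
# powers of 10 until steps land in roughly the 1-10 pixel range.
def adjust_learning_rate(learning_rate, pixels_moved,
                         low=1.0, high=10.0):
    """Rescale the rate so the observed per-iteration displacement
    would fall inside [low, high] pixels."""
    while pixels_moved > high:        # steps too big: divide by 10
        learning_rate /= 10.0
        pixels_moved /= 10.0
    while pixels_moved < low:         # steps too small: multiply by 10
        learning_rate *= 10.0
        pixels_moved *= 10.0
    return learning_rate

print(adjust_learning_rate(1.0, 100.0))   # jumped 100 px -> rate / 10
print(adjust_learning_rate(1.0, 0.0001))  # moved 1e-4 px -> rate * ~10^4
```

The displacement scales linearly with the rate (step = rate x derivative), so multiplying one rescales the other by the same factor.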
As you see, the problem here is that the good learning rate
value depends on the values of the Metric derivative between
the two images... and that can only be known when you run
a couple of tries. It totally depends on the nature and content
of your images as well as the type of transform you are using.
So the only way to get a sense is through experimentation.
You could replace the GradientDescent optimizer with the
RegularStepGradientDescent optimizer, which also uses the
metric derivative to determine in which direction to move,
but applies a different strategy for computing the
step size. It just uses an independent initial value for the
step (a value that you have to provide). Let's say that you
start with 10 pixels (or the equivalent in millimeters).
This optimizer will compute the derivative of the metric
with respect to (x,y,th) and *normalize* it, so only the
direction of the derivative is used. This normal vector
is then multiplied by the step that you provided.
In this way you are sure that the first step is always
going to be 10 pixels no matter what the value of the
metric derivative happens to be.
The optimizer will continue using this step size until
it takes one step in which the metric value, instead of
improving, becomes degraded. It then realizes that it is time
to slow down and step back, and that is exactly what it
does: it multiplies the current step size by -0.5, so
it changes direction and takes only half of the step
(let's say, in this case: 5 pixels). Then it goes on,
advancing until it finds again a step in which the
metric doesn't improve, and in that case multiplies the
step size again by -0.5... and so on.
The algorithm stops when the step size reaches a minimum
value (that you also have to provide). In this way you
have a better control about how the optimizer moves in
the parametric space.
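The whole strategy above can be sketched in a few lines of plain Python (a toy quadratic metric and made-up step values, not the actual itk::RegularStepGradientDescentOptimizer):

```python
# Sketch of the strategy described above: normalize the metric
# derivative, advance by the current step size, and multiply the step
# by -0.5 whenever the metric degrades, until the step shrinks below
# the minimum step size.
import math

def regular_step_descent(metric, derivative, params,
                         step=10.0, min_step=0.01, max_iter=200):
    """derivative() returns the direction of improvement (downhill,
    since this toy metric is minimized)."""
    value = metric(params)
    for _ in range(max_iter):
        if abs(step) < min_step:
            break                  # step fell below the minimum: stop
        grad = derivative(params)
        norm = math.sqrt(sum(g * g for g in grad)) or 1.0  # avoid /0
        # Normalize the derivative, then advance by the step size.
        trial = [p + step * g / norm for p, g in zip(params, grad)]
        trial_value = metric(trial)
        if trial_value < value:    # metric improved: keep the move
            params, value = trial, trial_value
        else:                      # metric degraded: reverse and halve
            step *= -0.5
    return params

# Toy metric: squared distance to the optimum at (3, -4).
metric = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 4.0) ** 2
deriv = lambda p: [-2.0 * (p[0] - 3.0), -2.0 * (p[1] + 4.0)]
print(regular_step_descent(metric, deriv, [0.0, 0.0]))  # -> [3.0, -4.0]
```

Because the derivative is normalized, the first move is always exactly the initial step length, whatever the magnitude of the derivative, and the minimum step acts as the stopping criterion.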
Both optimizers have advantages and disadvantages, and
that's the reason why we want to keep both of them in
the toolkit. You will see that there are other, more
sophisticated optimizers (e.g. conjugate gradient)...
but it is better to start with the simpler ones while
you work out the fine-tuning of all the other parameters.
The nice thing with ITK is that once you configure
one registration method, you can go ahead and experiment with
switching any of the components to see if you can get
better results or better performance. Of course the parameters
will require new fine-tuning as a result of the different
characteristics of the new component, but (hopefully) the
new good values will not be far from the current ones.
Please let me know if this helps to answer any of
your questions, or if you have any other concerns.
Thanks
Luis
BTW:
Would you mind if I send a copy of this email
to the users list?
Your experience could be useful for other users.
=============================================
Jon Harald Kaspersen wrote:
> Hi Luis,
>
> I have been trying to compile and link ITK on Mac OS X for months now.
> Bill Hoffman and Brad King have been very helpful in this process, which
> is not at a final stage yet. The problem is the OS X linker, which does not
> support weak symbols.
>
> Anyway, while waiting for Apple to fix their OS X linker, I have
> compiled and linked ITK on an SGI Onyx station. I have started to explore
> ITK and the registration possibilities in ITK. I have concentrated my
> work around the MultResMIRegistration example. What I am trying to do is
> to register a 3D ultrasound volume to a 3D CT scan. I have resampled the
> volumes to have equal spacing, and thereby the ultrasound volume will
> contain large amounts of black pixels. How will this influence the
> registration algorithm? Do you know of others that have registered
> ultrasound to CT? Is it in fact possible to do (I mean, as these
> imaging modalities are quite different)? My last question is related
> to the parameter file: on what basis do I set the learning rate at
> each resolution level?
>
> PS! I have verified the MultResMIRegistration example by registering a
> CT volume to the same CT volume displaced by a translation.
>
> Regards
> Jon
> ========================================================
> Jon Harald Kaspersen Tel: +47 73 59 75 89
> Ph.D. Mechanical Engineering Mob: +47 93 03 65 90
> Senior Scientist Pager +47 96 84 29 94
> SINTEF Unimed - Ultralyd Fax: +47 73 59 78 73
> N-7465 Trondheim
> NORWAY e-mail: Jon.H.Kaspersen@unimed.sintef.no
> WEB: http://www.us.unimed.sintef.no/
> ========================================================