From isshaa7 at gmail.com Thu Jun 1 22:09:55 2017 From: isshaa7 at gmail.com (slvnk151) Date: Thu, 1 Jun 2017 19:09:55 -0700 (MST) Subject: [ITK-users] quickview header not found with recent itk4.11.1 Message-ID: <1496369395820-7589996.post@n2.nabble.com> Hi Everyone, I was interested in integrating vtk and itk for my laptop. I am using latest cmake 3.8.1, VTK 7.1.1 and ITK4.11.1 I am on windows 7 and using visual studio 2017 community edition. I managed to build and test vtk 7.1.1 on my laptop. I was also able to configure and generate ITK 4.11.1 with cmake 3.8.1 I turned on the module Module_ITKVtkGlue and pointed VTK_DIR towards vtk/bin (obtained with cmake generation) on my computer. I was also interested in Module_ITKMINC and Module_ITKIOMINC and Module_ITKIOTRansformMINC When I am building the ITK.sln(243 projects) in debug x64 mode using ALL_BUILD->build I am getting build errors for MINC module, which I am ignoring at the moment. Then I build INSTALL->build; which obviously fails for minc but is successful for the rest. But when I try to locate "QuickView.h" in the include folder I am unable to find it. This causes linker errors when trying to implement vtk-itk examples. I would like to know, how can I resolve the quickview.h header issue and also how can I make ITK.sln build successfully for MINC modules. Could someone please tell me what step am I missing. -- View this message in context: http://itk-insight-users.2283740.n2.nabble.com/quickview-header-not-found-with-recent-itk4-11-1-tp7589996.html Sent from the ITK Insight Users mailing list archive at Nabble.com. From matt.mccormick at kitware.com Fri Jun 2 10:20:40 2017 From: matt.mccormick at kitware.com (Matt McCormick) Date: Fri, 2 Jun 2017 10:20:40 -0400 Subject: [ITK-users] quickview header not found with recent itk4.11.1 In-Reply-To: <1496369395820-7589996.post@n2.nabble.com> References: <1496369395820-7589996.post@n2.nabble.com> Message-ID: Hi, The MINC build issue has been addressed in ITK 4.12.0, which will be released within the next week. Please try your build again with this version and see if the missing QuickView.h issue persists. Thanks, Matt On Thu, Jun 1, 2017 at 10:09 PM, slvnk151 wrote: > Hi Everyone, > > I was interested in integrating vtk and itk for my laptop. > > I am using latest cmake 3.8.1, VTK 7.1.1 and ITK4.11.1 > > I am on windows 7 and using visual studio 2017 community edition. > > I managed to build and test vtk 7.1.1 on my laptop. > > I was also able to configure and generate ITK 4.11.1 with cmake 3.8.1 > > I turned on the module Module_ITKVtkGlue and pointed VTK_DIR towards > vtk/bin > (obtained with cmake generation) on my computer. > > I was also interested in Module_ITKMINC and Module_ITKIOMINC and > Module_ITKIOTRansformMINC > > When I am building the ITK.sln(243 projects) in debug x64 mode using > ALL_BUILD->build > > I am getting build errors for MINC module, which I am ignoring at the > moment. > > Then I build INSTALL->build; which obviously fails for minc but is > successful for the rest. > > But when I try to locate "QuickView.h" in the include folder I am unable > to > find it. This causes linker errors when trying to implement vtk-itk > examples. > > I would like to know, how can I resolve the quickview.h header issue and > also how can I make ITK.sln build successfully for MINC modules. > > Could someone please tell me what step am I missing. > > > > > -- > View this message in context: http://itk-insight-users. 
> 2283740.n2.nabble.com/quickview-header-not-found-with-recent-itk4-11-1- > tp7589996.html > Sent from the ITK Insight Users mailing list archive at Nabble.com. > _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzenanz at gmail.com Fri Jun 2 10:28:28 2017 From: dzenanz at gmail.com (=?UTF-8?B?RMW+ZW5hbiBadWtpxIc=?=) Date: Fri, 2 Jun 2017 10:28:28 -0400 Subject: [ITK-users] quickview header not found with recent itk4.11.1 In-Reply-To: <1496369395820-7589996.post@n2.nabble.com> References: <1496369395820-7589996.post@n2.nabble.com> Message-ID: Hi Isshaa, thanks for a clear report on what is happening. I just tried installing recent ITK master in debug mode with VTK_Glue on, and I do get QuickView.h in C:\Program Files\ITK\Include folder. If you don't want to wait for 4.12 release, you could try one of the recent release candidates or the current git master. Regards, D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) On Thu, Jun 1, 2017 at 10:09 PM, slvnk151 wrote: > Hi Everyone, > > I was interested in integrating vtk and itk for my laptop. > > I am using latest cmake 3.8.1, VTK 7.1.1 and ITK4.11.1 > > I am on windows 7 and using visual studio 2017 community edition. > > I managed to build and test vtk 7.1.1 on my laptop. > > I was also able to configure and generate ITK 4.11.1 with cmake 3.8.1 > > I turned on the module Module_ITKVtkGlue and pointed VTK_DIR towards > vtk/bin > (obtained with cmake generation) on my computer. > > I was also interested in Module_ITKMINC and Module_ITKIOMINC and > Module_ITKIOTRansformMINC > > When I am building the ITK.sln(243 projects) in debug x64 mode using > ALL_BUILD->build > > I am getting build errors for MINC module, which I am ignoring at the > moment. > > Then I build INSTALL->build; which obviously fails for minc but is > successful for the rest. > > But when I try to locate "QuickView.h" in the include folder I am unable > to > find it. This causes linker errors when trying to implement vtk-itk > examples. > > I would like to know, how can I resolve the quickview.h header issue and > also how can I make ITK.sln build successfully for MINC modules. > > Could someone please tell me what step am I missing. > > > > > -- > View this message in context: http://itk-insight-users. > 2283740.n2.nabble.com/quickview-header-not-found-with-recent-itk4-11-1- > tp7589996.html > Sent from the ITK Insight Users mailing list archive at Nabble.com. > _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users > -------------- next part -------------- An HTML attachment was scrubbed... 
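For reference: once ITK has been configured with Module_ITKVtkGlue=ON (and VTK_DIR pointing at a compatible VTK build) and installed, QuickView.h should end up under the install's include directory (include/ITK-4.11 in a default layout), and a downstream project can request the module with find_package(ITK REQUIRED COMPONENTS ITKVtkGlue) followed by include(${ITK_USE_FILE}). A minimal usage sketch follows; the file name and pixel type are placeholders, not taken from the original report.

// Minimal QuickView round-trip (sketch; "input.png" is a placeholder).
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "QuickView.h"

int main()
{
  typedef itk::Image< unsigned char, 2 > ImageType;

  itk::ImageFileReader< ImageType >::Pointer reader =
    itk::ImageFileReader< ImageType >::New();
  reader->SetFileName( "input.png" );
  reader->Update();

  QuickView viewer;
  viewer.AddImage( reader->GetOutput() );  // call AddImage again to compare images side by side
  viewer.Visualize();                      // opens a VTK render window
  return 0;
}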
URL: From noreply at insightsoftwareconsortium.org Fri Jun 2 10:58:10 2017 From: noreply at insightsoftwareconsortium.org (Insight Journal) Date: Fri, 2 Jun 2017 10:58:10 -0400 (EDT) Subject: [ITK-users] New Submission: Isotropic and Steerable Wavelets in N Dimensions. A multiresolution analysis framework. Message-ID: <20170602145811.515E43D620D2@insight-journal.org> Hello, A new submission has been added to the Insight Journal. Title: Isotropic and Steerable Wavelets in N Dimensions. A multiresolution analysis framework. Authors: Hernandez-Cerdan P. Abstract: This document describes the implementation of the external module ITKIsotropicWavelets, a multiresolution (MRA) analysis framework using isotropic and steerable wavelets in the frequency domain. This framework provides the backbone for state of the art filters for denoising, feature detection or phase analysis in N-dimensions. It focus on reusability, and highly decoupled modules for easy extension and implementation of new filters, and it contains a filter for multiresolution phase analysis, The backbone of the multi-scale analysis is provided by an isotropic band-limited wavelet pyramid, and the detection of directional features is provided by coupling the pyramid with a generalized Riesz transform. The generalized Riesz transform of order N behaves like a smoothed version of the Nth order derivatives of the signal. Also, it is steerable: its components impulse responses can be rotated to any spatial orientation, reducing computation time when detecting directional features. This paper is accompanied with the source code, input data, parameters and output data that the author used for validating the algorithm described in this paper. This adheres to the fundamental principle that scientific publications must facilitate reproducibility of the reported results. Download and review this publication at: http://hdl.handle.net/10380/3558 Generated by the Insight Journal You are receiving this email because you asked to be informed by the Insight Journal for new submissions. To change your email preference visit http://www.insight-journal.org/ . From dr.tim.allman at gmail.com Mon Jun 5 10:06:40 2017 From: dr.tim.allman at gmail.com (Tim Allman) Date: Mon, 5 Jun 2017 10:06:40 -0400 Subject: [ITK-users] itkDICOMSeriesFileNames Message-ID: <644a5a5f-7d51-b9c5-0c06-3ef66c74ecca@gmail.com> I have been looking at reading and writing DICOM images and it didn't take long to stumble upon the class itkDICOMSeriesFileNames. It is however, almost undocumented in the Guide and is completely without Doxygen documentation (https://itk.org/Doxygen/html/itkDICOMSeriesFileNames_8h.html) but is used in the examples. I see as well that the file itkDICOMSeriesFileNames.h is located in /Modules/Compatibility/Deprecated// suggesting that I should be using something else. Is DCMTK the preferred package now? Thanks, Tim -- Tim Allman Ph.D., 35 Margaret St., Guelph, Ont., N1E 5R6 Canada 519-837-0276 -------------- next part -------------- An HTML attachment was scrubbed... 
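On the DICOM question: itkDICOMSeriesFileNames sitting under Modules/Compatibility/Deprecated does mean it is the legacy name generator. The commonly used path in ITK 4.x is itk::GDCMSeriesFileNames together with itk::ImageSeriesReader (GDCM is the default DICOM backend; DCMTK is an optional alternative IO module). A minimal sketch, with the directory passed on the command line and signed short assumed for CT-like data:

// Read one DICOM series from a directory (sketch; argv[1] is the directory).
#include "itkImage.h"
#include "itkGDCMImageIO.h"
#include "itkGDCMSeriesFileNames.h"
#include "itkImageSeriesReader.h"

int main( int argc, char * argv[] )
{
  typedef itk::Image< signed short, 3 > ImageType;

  itk::GDCMSeriesFileNames::Pointer nameGenerator = itk::GDCMSeriesFileNames::New();
  nameGenerator->SetUseSeriesDetails( true );
  nameGenerator->SetDirectory( argv[1] );

  // Pick the first series found in the directory.
  const std::vector< std::string > & seriesUIDs = nameGenerator->GetSeriesUIDs();
  const std::vector< std::string > fileNames =
    nameGenerator->GetFileNames( seriesUIDs.front() );

  typedef itk::ImageSeriesReader< ImageType > ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  itk::GDCMImageIO::Pointer dicomIO = itk::GDCMImageIO::New();
  reader->SetImageIO( dicomIO );
  reader->SetFileNames( fileNames );
  reader->Update();   // reader->GetOutput() is the assembled 3D volume

  return 0;
}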
URL: From keepdash at hotmail.com Tue Jun 6 03:57:41 2017 From: keepdash at hotmail.com (keepdash) Date: Tue, 6 Jun 2017 00:57:41 -0700 (MST) Subject: [ITK-users] ComputeMeanCurvature() in itkLevelSetFunction.hxx Message-ID: <1496735861540-7590001.post@n2.nabble.com> >From level set papers, we can find the curvature (2D) is: K = (fxx*fy*fy + fyy*fx*fx - 2*fx*fy*fxy) / (fx*fx+fy*fy)^(3/2) I compared the equation with the code in "itkLevelSetFunction.hxx", all same except the normalization, seems in ITK, the curvature (2D) is computed as: K = (fxx*fy*fy + fyy*fx*fx - 2*fx*fy*fxy) / (fx*fx+fy*fy) which can be found in function ComputeMeanCurvature(), line.179: return ( curvature_term / gd->m_GradMagSqr ); where the "m_GradMagSqr" is the fx*fx+fy*fy from line.332. Then, why ITK use this way, is it better? Thank you. -- View this message in context: http://itk-insight-users.2283740.n2.nabble.com/ITK-users-ComputeMeanCurvature-in-itkLevelSetFunction-hxx-tp7590001.html Sent from the ITK Insight Users mailing list archive at Nabble.com. From gavinb+itk at antonym.org Tue Jun 6 04:00:16 2017 From: gavinb+itk at antonym.org (Gavin Baker) Date: Tue, 06 Jun 2017 18:00:16 +1000 Subject: [ITK-users] Super-resolution resampling Message-ID: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> Hello! I have a time series of 3D data (relatively low resolution), captured in sequence, with small positional changes (eg. translation). I would like to perform a super-resolution resampling by first co-registering each volumetric dataset (using rigid registration) in order to reduce noise and improve detail. Is there a registration process that is 1:N (fixed:moving)? Or is the recommended method to pick a fixed image (ie. #0) and register each 1..N individually to it? Given a set of transforms that map each of the 1..N moving images back to the fixed image for registration, is it possible to then resample the volume at a higher spatial resolution, combining all image data? IOW super-resolution resampling? I tried searching for the above and didn't have much luck finding relevant info. Thanks - :: Gavin From dzenanz at gmail.com Tue Jun 6 09:16:27 2017 From: dzenanz at gmail.com (=?UTF-8?B?RMW+ZW5hbiBadWtpxIc=?=) Date: Tue, 6 Jun 2017 09:16:27 -0400 Subject: [ITK-users] Super-resolution resampling In-Reply-To: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> References: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> Message-ID: Hi Gavin, your plan sounds good! There is no 1:N registration, so you should proceed with N 1:1 registrations. Pick one as a reference (#0 is good), register all the other time points to it. You can initialize the k+1-st iteration by the resulting transform of k-th registration to speed things up. And yes, you can do super-resolution by resampling all these images onto a higher resolution grid, e.g. same origin and direction, 2x higher size and 2x smaller spacing. ITK has all the required classes for this process. Will you let us know how satisfactory the result was? Ideally with some images :) Regards, D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) On Tue, Jun 6, 2017 at 4:00 AM, Gavin Baker wrote: > Hello! > > I have a time series of 3D data (relatively low resolution), captured in > sequence, with small positional changes (eg. translation). 
I would like > to perform a super-resolution resampling by first co-registering each > volumetric dataset (using rigid registration) in order to reduce noise > and improve detail. > > Is there a registration process that is 1:N (fixed:moving)? > > Or is the recommended method to pick a fixed image (ie. #0) and register > each 1..N individually to it? > > Given a set of transforms that map each of the 1..N moving images back > to the fixed image for registration, is it possible to then resample the > volume at a higher spatial resolution, combining all image data? IOW > super-resolution resampling? > > I tried searching for the above and didn't have much luck finding > relevant info. > > Thanks - > > :: Gavin > _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gavinb+itk at antonym.org Tue Jun 6 20:39:49 2017 From: gavinb+itk at antonym.org (Gavin Baker) Date: Wed, 07 Jun 2017 10:39:49 +1000 Subject: [ITK-users] Super-resolution resampling In-Reply-To: References: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> Message-ID: <1496795989.695579.1001111152.7610AC76@webmail.messagingengine.com> Thanks, D?enan - I'll start with the N x 1:1 registration then. I can see how to resample the moving image, with the transform applied, as per the examples. However it is not clear how to _combine_ the N images together for the super-resolution resampling. Or would it be a two-step process, where each moving image is first resampled, and after that they are averaged together? Thanks - :: Gavin On Tue, 6 Jun 2017, at 11:16 PM, D?enan Zuki? wrote: > Hi Gavin, > > your plan sounds good! There is no 1:N registration, so you should > proceed with N 1:1 registrations. Pick one as a reference (#0 is > good), register all the other time points to it. You can initialize > the k+1-st iteration by the resulting transform of k-th registration > to speed things up.> > And yes, you can do super-resolution by resampling all these images > onto a higher resolution grid, e.g. same origin and direction, 2x > higher size and 2x smaller spacing.> > ITK has all the required classes for this process. Will you let us > know how satisfactory the result was? Ideally with some images :)> > Regards, > D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) > > On Tue, Jun 6, 2017 at 4:00 AM, Gavin Baker > wrote:>> Hello! >> >> I have a time series of 3D data (relatively low resolution), >> captured in>> sequence, with small positional changes (eg. translation). I >> would like>> to perform a super-resolution resampling by first co- >> registering each>> volumetric dataset (using rigid registration) in order to >> reduce noise>> and improve detail. >> >> Is there a registration process that is 1:N (fixed:moving)? >> >> Or is the recommended method to pick a fixed image (ie. #0) and >> register>> each 1..N individually to it? 
>> >> Given a set of transforms that map each of the 1..N moving >> images back>> to the fixed image for registration, is it possible to then >> resample the>> volume at a higher spatial resolution, combining all image data? IOW>> super-resolution resampling? >> >> I tried searching for the above and didn't have much luck finding >> relevant info. >> >> Thanks - >> >> :: Gavin >> _____________________________________ >> Powered by www.kitware.com >> >> Visit other Kitware open-source projects at >> http://www.kitware.com/opensource/opensource.html >> >> Kitware offers ITK Training Courses, for more information visit: >> http://www.kitware.com/products/protraining.php >> >> Please keep messages on-topic and check the ITK FAQ at: >> http://www.itk.org/Wiki/ITK_FAQ >> >> Follow this link to subscribe/unsubscribe: >> http://public.kitware.com/mailman/listinfo/insight-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From gavinb+itk at antonym.org Tue Jun 6 20:41:35 2017 From: gavinb+itk at antonym.org (Gavin Baker) Date: Wed, 07 Jun 2017 10:41:35 +1000 Subject: [ITK-users] [ITK] Super-resolution resampling In-Reply-To: References: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> Message-ID: <1496796095.695716.1001112368.68B6224E@webmail.messagingengine.com> Thanks, Samuel, Great point - using the N/2 sample makes a lot of sense. I'll start with that example and see how I go. Any thoughts on my followup question about the super-resolution resampling the N images together would be most appreciated. Regards - :: Gavin On Tue, 6 Jun 2017, at 11:33 PM, Samuel Gerber wrote: > Hi Gavin, > > One small addition, I would probably take the N/2 image to register > everybody else to, in order to minimize the maximal transformation > (might not matter in your case since it is only small transformations > but it could minimize errors due to resampling).> > This example has all the required classes you should need: > https://itk.org/Wiki/ITK/Examples/Registration/ImageRegistrationMethod> > You will most likely want to use a different optimizer and you can see > in the code how to set the size etc of the output image in the > resampler.> > > On Tue, Jun 6, 2017 at 9:16 AM, D?enan Zuki? > wrote:>> Hi Gavin, >> >> your plan sounds good! There is no 1:N registration, so you should >> proceed with N 1:1 registrations. Pick one as a reference (#0 is >> good), register all the other time points to it. You can initialize >> the k+1-st iteration by the resulting transform of k-th registration >> to speed things up.>> >> And yes, you can do super-resolution by resampling all these images >> onto a higher resolution grid, e.g. same origin and direction, 2x >> higher size and 2x smaller spacing.>> >> ITK has all the required classes for this process. Will you let us >> know how satisfactory the result was? Ideally with some images :)>> >> Regards, >> D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) >> >> On Tue, Jun 6, 2017 at 4:00 AM, Gavin Baker >> wrote:>>> Hello! >>> >>> I have a time series of 3D data (relatively low resolution), >>> captured in>>> sequence, with small positional changes (eg. translation). I >>> would like>>> to perform a super-resolution resampling by first co- >>> registering each>>> volumetric dataset (using rigid registration) in order to reduce >>> noise>>> and improve detail. >>> >>> Is there a registration process that is 1:N (fixed:moving)? >>> >>> Or is the recommended method to pick a fixed image (ie. 
#0) and >>> register>>> each 1..N individually to it? >>> >>> Given a set of transforms that map each of the 1..N moving >>> images back>>> to the fixed image for registration, is it possible to then >>> resample the>>> volume at a higher spatial resolution, combining all image >>> data? IOW>>> super-resolution resampling? >>> >>> I tried searching for the above and didn't have much luck finding >>> relevant info. >>> >>> Thanks - >>> >>> :: Gavin >>> _____________________________________ >>> Powered by www.kitware.com >>> >>> Visit other Kitware open-source projects at >>> http://www.kitware.com/opensource/opensource.html >>> >>> Kitware offers ITK Training Courses, for more information visit: >>> http://www.kitware.com/products/protraining.php >>> >>> Please keep messages on-topic and check the ITK FAQ at: >>> http://www.itk.org/Wiki/ITK_FAQ >>> >>> Follow this link to subscribe/unsubscribe: >>> http://public.kitware.com/mailman/listinfo/insight-users >> >> >> _____________________________________ >> Powered by www.kitware.com >> >> Visit other Kitware open-source projects at >> http://www.kitware.com/opensource/opensource.html >> >> Kitware offers ITK Training Courses, for more information visit: >> http://www.kitware.com/products/protraining.php >> >> Please keep messages on-topic and check the ITK FAQ at: >> http://www.itk.org/Wiki/ITK_FAQ >> >> Follow this link to subscribe/unsubscribe: >> http://public.kitware.com/mailman/listinfo/insight-users >> >> _______________________________________________ >> Community mailing list >> Community at itk.org >> http://public.kitware.com/mailman/listinfo/community >> > > > > -- > Samuel Gerber > R&D Engineer > Kitware, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzenanz at gmail.com Tue Jun 6 21:32:12 2017 From: dzenanz at gmail.com (=?UTF-8?B?RMW+ZW5hbiBadWtpxIc=?=) Date: Tue, 6 Jun 2017 21:32:12 -0400 Subject: [ITK-users] Super-resolution resampling In-Reply-To: <1496795989.695579.1001111152.7610AC76@webmail.messagingengine.com> References: <1496736016.3944632.1000082120.177CCA64@webmail.messagingengine.com> <1496795989.695579.1001111152.7610AC76@webmail.messagingengine.com> Message-ID: Hi Gavin, if you want to avoid keeping N resampled images, you could have a sum of resampled images which you divide by N at the end to get the average. Regards, D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) On Tue, Jun 6, 2017 at 8:39 PM, Gavin Baker wrote: > > Thanks, D?enan - > > I'll start with the N x 1:1 registration then. > > I can see how to resample the moving image, with the transform applied, as > per the examples. However it is not clear how to _combine_ the N images > together for the super-resolution resampling. Or would it be a two-step > process, where each moving image is first resampled, and after that they > are averaged together? > > Thanks - > > :: Gavin > > > On Tue, 6 Jun 2017, at 11:16 PM, D?enan Zuki? wrote: > > Hi Gavin, > > your plan sounds good! There is no 1:N registration, so you should proceed > with N 1:1 registrations. Pick one as a reference (#0 is good), register > all the other time points to it. You can initialize the k+1-st iteration by > the resulting transform of k-th registration to speed things up. > > And yes, you can do super-resolution by resampling all these images onto a > higher resolution grid, e.g. same origin and direction, 2x higher size and > 2x smaller spacing. > > ITK has all the required classes for this process. 
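A minimal sketch of the resample-and-average step described in this thread. It assumes the rigid transforms transforms[i] are the results of registering images[i] (moving) to images[0] (fixed) and that everything is held in std::vectors; the 2x grid refinement and linear interpolation are illustrative choices, not recommendations.

#include <vector>
#include <cstddef>
#include "itkImage.h"
#include "itkVersorRigid3DTransform.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkAddImageFilter.h"
#include "itkMultiplyImageFilter.h"

typedef itk::Image< float, 3 >                ImageType;
typedef itk::VersorRigid3DTransform< double > TransformType;

ImageType::Pointer SuperResolve( const std::vector< ImageType::Pointer > &     images,
                                 const std::vector< TransformType::Pointer > & transforms )
{
  const ImageType * reference = images[0];

  // Finer output grid: same origin and direction, half the spacing, double the size.
  ImageType::SpacingType spacing = reference->GetSpacing();
  ImageType::SizeType    size    = reference->GetLargestPossibleRegion().GetSize();
  for ( unsigned int d = 0; d < 3; ++d )
    {
    spacing[d] /= 2.0;
    size[d]    *= 2;
    }

  ImageType::Pointer sum; // running sum of the resampled images

  for ( std::size_t i = 0; i < images.size(); ++i )
    {
    typedef itk::ResampleImageFilter< ImageType, ImageType > ResampleType;
    ResampleType::Pointer resample = ResampleType::New();
    resample->SetInput( images[i] );
    resample->SetTransform( transforms[i] );   // identity transform for i == 0
    resample->SetInterpolator(
      itk::LinearInterpolateImageFunction< ImageType, double >::New() );
    resample->SetOutputOrigin( reference->GetOrigin() );
    resample->SetOutputDirection( reference->GetDirection() );
    resample->SetOutputSpacing( spacing );
    resample->SetSize( size );
    resample->SetDefaultPixelValue( 0 );
    resample->Update();

    if ( sum.IsNull() )
      {
      sum = resample->GetOutput();
      sum->DisconnectPipeline();
      }
    else
      {
      itk::AddImageFilter< ImageType >::Pointer add = itk::AddImageFilter< ImageType >::New();
      add->SetInput1( sum );
      add->SetInput2( resample->GetOutput() );
      add->Update();
      sum = add->GetOutput();
      sum->DisconnectPipeline();
      }
    }

  // Average: divide the running sum by the number of time points.
  itk::MultiplyImageFilter< ImageType >::Pointer divide =
    itk::MultiplyImageFilter< ImageType >::New();
  divide->SetInput( sum );
  divide->SetConstant( 1.0 / images.size() );
  divide->Update();
  return divide->GetOutput();
}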
> Will you let us know how satisfactory the result was? Ideally with some images :)
>
> Regards,
> Dženan Zukić, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.)
>
> On Tue, Jun 6, 2017 at 4:00 AM, Gavin Baker wrote:
>
> Hello!
>
> I have a time series of 3D data (relatively low resolution), captured in
> sequence, with small positional changes (eg. translation). I would like
> to perform a super-resolution resampling by first co-registering each
> volumetric dataset (using rigid registration) in order to reduce noise
> and improve detail.
>
> Is there a registration process that is 1:N (fixed:moving)?
>
> Or is the recommended method to pick a fixed image (ie. #0) and register
> each 1..N individually to it?
>
> Given a set of transforms that map each of the 1..N moving images back
> to the fixed image for registration, is it possible to then resample the
> volume at a higher spatial resolution, combining all image data? IOW
> super-resolution resampling?
>
> I tried searching for the above and didn't have much luck finding
> relevant info.
>
> Thanks -
>
> :: Gavin
> _____________________________________
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Kitware offers ITK Training Courses, for more information visit:
> http://www.kitware.com/products/protraining.php
>
> Please keep messages on-topic and check the ITK FAQ at:
> http://www.itk.org/Wiki/ITK_FAQ
>
> Follow this link to subscribe/unsubscribe:
> http://public.kitware.com/mailman/listinfo/insight-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas.courtial at univ-rennes1.fr Wed Jun 7 05:46:17 2017
From: nicolas.courtial at univ-rennes1.fr (Nicolas Courtial)
Date: Wed, 7 Jun 2017 11:46:17 +0200
Subject: [ITK-users] MultiThreading, ImageRegionIterator crash
Message-ID: <13d4dc31-8fb8-40c1-ccd8-abe4e45555ef@univ-rennes1.fr>

Hello everyone,

I'm quite new to the ITK world, and I'm currently doing a few experiments in order to learn its logic.

After a few easy exercises, I'm now at the step where I want to multithread a method. As I have done in the past, I've been reading the ITK classes to get "inspiration", and e-mails from here in case of trouble.

I'm facing an issue at the moment, and I can't figure out why, or how to solve it.

My filter works with

* One 3D InputImage (of any pixel type)
* Two "tool" 3D images, respectively of float and unsigned char pixel types

From what I read, I've understood there are two main ways of multithreading:

* The old-fashioned one using BeforeThreadedGenerateData/ThreadedGenerateData/AfterThreadedGenerateData
* The one using an itk::DomainThreader member.

As I'm a bit old school, I used the first option. My problem lies here (I changed my variables' names to make it clearer):

ThreadedComputation(const OutputImageRegionType &outputRegionForThread, ThreadIdType threadId)
{
  itk::ProgressReporter progress(this, threadId, outputRegionForThread.GetNumberOfPixels());
  typename TOutputImage::Pointer image = this->GetOutput(0);

  itk::ImageRegionIterator< TOutputImage >   imgIt(image, outputRegionForThread);
  itk::ImageRegionIterator< FloatImageType > floatIt(m_MyFloatImage, outputRegionForThread);
  ....

When creating the floatIt iterator, I have a crash. Using a try/catch block, the issue is due to an out-of-bounds region. Everything is correct except the index, which is completely off ([156245468, 0, 156245468] or something approaching that).
I've tried different options to solve this, but rather than doing witchcraft and at some point getting something working, I'd prefer to improve my understanding thanks to your expertise. Thanks all, Nicolas Courtial -------------- next part -------------- An HTML attachment was scrubbed... URL: From xieyi4650 at 126.com Wed Jun 7 07:45:00 2017 From: xieyi4650 at 126.com (XieYi) Date: Wed, 7 Jun 2017 04:45:00 -0700 (MST) Subject: [ITK-users] why jpg image that is converted from a mhd image is so weird ? Message-ID: <1496835900201-38307.post@n7.nabble.com> the mhd image in ParaView software is like this: after being converted to jpg image, it is like this here is my converting code: typedef float PixelType; const unsigned int Dimension = 3; typedef itk::Matrix MatrixType; #ifdef RTK_USE_CUDA typedef itk::CudaImage< PixelType, Dimension > ImageType; #else typedef itk::Image< PixelType, Dimension > ImageType; #endif typedef itk::RGBPixel PixelType_2D; #ifdef RTK_USE_CUDA typedef itk::CudaImage< PixelType_2D, 2 > ImageType; #else typedef itk::Image ImageType2; #endif typedef itk::JPEGImageIO ImageIOType; ImageIOType::Pointer jpegIO = ImageIOType::New(); ImageType::Pointer drr1 = [a function get a image] typedef itk::CastImageFilter ImageCastType; ImageCastType::Pointer Imagecast = ImageCastType::New(); Imagecast->SetInput(drr1); Imagecast->Update(); jpegIO->SetFileTypeToASCII(); typedef itk::ImageFileWriter WriterType2; WriterType2::Pointer writer_jpg = WriterType2::New(); writer_jpg->SetImageIO(jpegIO); writer_jpg->SetFileName("drr.jpg"); writer_jpg->SetInput(Imagecast->GetOutput()); writer_jpg->Update(); -- View this message in context: http://itk-users.7.n7.nabble.com/why-jpg-image-that-is-converted-from-a-mhd-image-is-so-weird-tp38307.html Sent from the ITK - Users mailing list archive at Nabble.com. From tevain at telecom-paristech.fr Wed Jun 7 08:01:38 2017 From: tevain at telecom-paristech.fr (Timothee Evain) Date: Wed, 7 Jun 2017 14:01:38 +0200 (CEST) Subject: [ITK-users] [ITK] why jpg image that is converted from a mhd image is so weird ? In-Reply-To: <1496835900201-38307.post@n7.nabble.com> References: <1496835900201-38307.post@n7.nabble.com> Message-ID: <1953946665.3804674.1496836898409.JavaMail.zimbra@enst.fr> Hello, You can't just "cast" a 3D image into a RGB 2D one. Casting is for type only and cannot be used for dimension, see https://itk.org/Doxygen/html/classitk_1_1CastImageFilter.html One basic solution would be to initialize an ImageType2D image, then parse the "3D" image along spatial dimensions, filling the rgb pixel of the second image with the values along the color dimension. HTH, Tim ----- Mail original ----- De: "XieYi" ?: insight-users at itk.org Envoy?: Mercredi 7 Juin 2017 13:45:00 Objet: [ITK] [ITK-users] why jpg image that is converted from a mhd image is so weird ? 
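One concrete way to follow Tim's advice for a 512x512x1 float volume is to collapse the third dimension with itk::ExtractImageFilter and rescale to 8 bits before writing. A sketch; the file names are placeholders, and PNG is used to avoid JPEG's lossy compression:

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkExtractImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"

int main()
{
  typedef itk::Image< float, 3 >         VolumeType;   // the 512x512x1 DRR
  typedef itk::Image< float, 2 >         SliceType;
  typedef itk::Image< unsigned char, 2 > OutputType;

  itk::ImageFileReader< VolumeType >::Pointer reader =
    itk::ImageFileReader< VolumeType >::New();
  reader->SetFileName( "drr.mhd" );
  reader->Update();

  // Collapse the z dimension: an extraction region whose size is 0 along z.
  VolumeType::RegionType region = reader->GetOutput()->GetLargestPossibleRegion();
  VolumeType::SizeType   size   = region.GetSize();
  size[2] = 0;
  region.SetSize( size );

  typedef itk::ExtractImageFilter< VolumeType, SliceType > ExtractType;
  ExtractType::Pointer extract = ExtractType::New();
  extract->SetInput( reader->GetOutput() );
  extract->SetExtractionRegion( region );
  extract->SetDirectionCollapseToSubmatrix();

  // Map the float range into 0..255 so an 8-bit format can represent it.
  typedef itk::RescaleIntensityImageFilter< SliceType, OutputType > RescaleType;
  RescaleType::Pointer rescale = RescaleType::New();
  rescale->SetInput( extract->GetOutput() );
  rescale->SetOutputMinimum( 0 );
  rescale->SetOutputMaximum( 255 );

  itk::ImageFileWriter< OutputType >::Pointer writer =
    itk::ImageFileWriter< OutputType >::New();
  writer->SetInput( rescale->GetOutput() );
  writer->SetFileName( "drr.png" );   // or "drr.jpg"
  writer->Update();
  return 0;
}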
the mhd image in ParaView software is like this: after being converted to jpg image, it is like this here is my converting code: typedef float PixelType; const unsigned int Dimension = 3; typedef itk::Matrix MatrixType; #ifdef RTK_USE_CUDA typedef itk::CudaImage< PixelType, Dimension > ImageType; #else typedef itk::Image< PixelType, Dimension > ImageType; #endif typedef itk::RGBPixel PixelType_2D; #ifdef RTK_USE_CUDA typedef itk::CudaImage< PixelType_2D, 2 > ImageType; #else typedef itk::Image ImageType2; #endif typedef itk::JPEGImageIO ImageIOType; ImageIOType::Pointer jpegIO = ImageIOType::New(); ImageType::Pointer drr1 = [a function get a image] typedef itk::CastImageFilter ImageCastType; ImageCastType::Pointer Imagecast = ImageCastType::New(); Imagecast->SetInput(drr1); Imagecast->Update(); jpegIO->SetFileTypeToASCII(); typedef itk::ImageFileWriter WriterType2; WriterType2::Pointer writer_jpg = WriterType2::New(); writer_jpg->SetImageIO(jpegIO); writer_jpg->SetFileName("drr.jpg"); writer_jpg->SetInput(Imagecast->GetOutput()); writer_jpg->Update(); -- View this message in context: http://itk-users.7.n7.nabble.com/why-jpg-image-that-is-converted-from-a-mhd-image-is-so-weird-tp38307.html Sent from the ITK - Users mailing list archive at Nabble.com. _____________________________________ Powered by www.kitware.com Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html Kitware offers ITK Training Courses, for more information visit: http://www.kitware.com/products/protraining.php Please keep messages on-topic and check the ITK FAQ at: http://www.itk.org/Wiki/ITK_FAQ Follow this link to subscribe/unsubscribe: http://public.kitware.com/mailman/listinfo/insight-users _______________________________________________ Community mailing list Community at itk.org http://public.kitware.com/mailman/listinfo/community From xieyi4650 at 126.com Wed Jun 7 08:57:18 2017 From: xieyi4650 at 126.com (XieYi) Date: Wed, 7 Jun 2017 05:57:18 -0700 (MST) Subject: [ITK-users] [ITK] why jpg image that is converted from a mhd image is so weird ? In-Reply-To: <1953946665.3804674.1496836898409.JavaMail.zimbra@enst.fr> References: <1496835900201-38307.post@n7.nabble.com> <1953946665.3804674.1496836898409.JavaMail.zimbra@enst.fr> Message-ID: <1496840238850-38309.post@n7.nabble.com> Thank you very much Tim There is one point not clear in my topic. The point is that the dimension of image is 3, but infact it is a slice, i.e. it is a 2D image but store in a 3D image type. its size is 512x512x1 there are two strange things: 1, some ImageType can be cast into a 2D jpg image, and have no accident. 2, when I define the density ImageType as typedef itk::RGBPixel PixelType_2D; typedef itk::CudaImage< PixelType_2D, 2 > ImageType2; typedef itk::PNGImageIO ImageIOType; the xxxxx.png look like normal. -- View this message in context: http://itk-users.7.n7.nabble.com/why-jpg-image-that-is-converted-from-a-mhd-image-is-so-weird-tp38307p38309.html Sent from the ITK - Users mailing list archive at Nabble.com. From tevain at telecom-paristech.fr Wed Jun 7 09:18:06 2017 From: tevain at telecom-paristech.fr (Timothee Evain) Date: Wed, 7 Jun 2017 15:18:06 +0200 (CEST) Subject: [ITK-users] [ITK] why jpg image that is converted from a mhd image is so weird ? 
In-Reply-To: <1496840238850-38309.post@n7.nabble.com> References: <1496835900201-38307.post@n7.nabble.com> <1953946665.3804674.1496836898409.JavaMail.zimbra@enst.fr> <1496840238850-38309.post@n7.nabble.com> Message-ID: <1512850014.3888704.1496841486846.JavaMail.zimbra@enst.fr> Ok, I don't see the advantage of storing a slice as a 3D image since it confuses the code quite a bit, but I guess you have some good reasons. Same thing for the cast, even if it works, that is really not a standard way of doing it, and could be tricky in the long term. About the png being normal: Jpeg is a compressed format with loss, that could impact the aspect. But if the image appears normal it is probably because you switched to unsigned short instead of unsigned char, and your original image intensity range overflowed the char one, giving false values. HTH, Tim ----- Mail original ----- De: "XieYi" ?: insight-users at itk.org Envoy?: Mercredi 7 Juin 2017 14:57:18 Objet: Re: [ITK] [ITK-users] why jpg image that is converted from a mhd image is so weird ? Thank you very much Tim There is one point not clear in my topic. The point is that the dimension of image is 3, but infact it is a slice, i.e. it is a 2D image but store in a 3D image type. its size is 512x512x1 there are two strange things: 1, some ImageType can be cast into a 2D jpg image, and have no accident. 2, when I define the density ImageType as typedef itk::RGBPixel PixelType_2D; typedef itk::CudaImage< PixelType_2D, 2 > ImageType2; typedef itk::PNGImageIO ImageIOType; the xxxxx.png look like normal. -- View this message in context: http://itk-users.7.n7.nabble.com/why-jpg-image-that-is-converted-from-a-mhd-image-is-so-weird-tp38307p38309.html Sent from the ITK - Users mailing list archive at Nabble.com. _____________________________________ Powered by www.kitware.com Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html Kitware offers ITK Training Courses, for more information visit: http://www.kitware.com/products/protraining.php Please keep messages on-topic and check the ITK FAQ at: http://www.itk.org/Wiki/ITK_FAQ Follow this link to subscribe/unsubscribe: http://public.kitware.com/mailman/listinfo/insight-users _______________________________________________ Community mailing list Community at itk.org http://public.kitware.com/mailman/listinfo/community From dzenanz at gmail.com Wed Jun 7 09:34:38 2017 From: dzenanz at gmail.com (=?UTF-8?B?RMW+ZW5hbiBadWtpxIc=?=) Date: Wed, 7 Jun 2017 09:34:38 -0400 Subject: [ITK-users] MultiThreading, ImageRegionIterator crash In-Reply-To: <13d4dc31-8fb8-40c1-ccd8-abe4e45555ef@univ-rennes1.fr> References: <13d4dc31-8fb8-40c1-ccd8-abe4e45555ef@univ-rennes1.fr> Message-ID: Hi Nicolas, ThreadedComputation should be called ThreadedGenerateData, otherwise the code looks OK. If you overrode AllocateOutputs(), then you might not have allocated the output (assuming index is wrong for imgIt). Can you provide a runnable example ? Regards, D?enan Zuki?, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.) On Wed, Jun 7, 2017 at 5:46 AM, Nicolas Courtial < nicolas.courtial at univ-rennes1.fr> wrote: > Hello everyone, > > I'm quite new in ITK world, and I'm currently doing few experiences in > order to learn its logic. > > After few easy exercices, I'm now at a step I want to multithread a method. > As I did in the past, I've been reading the ITK classes to get > "inspiration", and e- mails from here in case of troubles. 
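For reference, the classic threaded hooks Dženan refers to have to be declared with exactly the names and signatures the base class expects, otherwise they are silently never called. A skeleton along these lines; MyFilter and m_MyFloatImage are taken from the message above, everything else is a sketch rather than the poster's actual code:

#include "itkImage.h"
#include "itkImageToImageFilter.h"
#include "itkImageRegionIterator.h"

template< typename TInputImage, typename TOutputImage >
class MyFilter : public itk::ImageToImageFilter< TInputImage, TOutputImage >
{
public:
  typedef MyFilter                                             Self;
  typedef itk::ImageToImageFilter< TInputImage, TOutputImage > Superclass;
  typedef itk::SmartPointer< Self >                            Pointer;
  itkNewMacro( Self );

  typedef itk::Image< float, TOutputImage::ImageDimension > FloatImageType;
  typedef typename TOutputImage::RegionType                 OutputImageRegionType;

protected:
  MyFilter() {}

  void BeforeThreadedGenerateData() ITK_OVERRIDE
  {
    // Allocate the helper image over the same region as the output, so that
    // every thread's outputRegionForThread is a valid region for it as well.
    m_MyFloatImage = FloatImageType::New();
    m_MyFloatImage->CopyInformation( this->GetOutput() );
    m_MyFloatImage->SetRegions( this->GetOutput()->GetRequestedRegion() );
    m_MyFloatImage->Allocate( true );
  }

  // The override must use exactly this name and signature; a method called
  // ThreadedComputation() is never invoked by the base class.
  void ThreadedGenerateData( const OutputImageRegionType & outputRegionForThread,
                             itk::ThreadIdType threadId ) ITK_OVERRIDE
  {
    itk::ImageRegionIterator< TOutputImage >   imgIt( this->GetOutput(), outputRegionForThread );
    itk::ImageRegionIterator< FloatImageType > floatIt( m_MyFloatImage, outputRegionForThread );
    // ... per-region work, advancing both iterators together ...
    (void)threadId;
  }

private:
  typename FloatImageType::Pointer m_MyFloatImage;
};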
> > I'm facing an issue at the moment, and I can't figure why, and how to > solve it. > > My filter works with > > - One 3D InputImage (of any pixel type) > - Two "tool" 3D images, respectively of float and unsigned char pixel > types > > From what I read, I've understood there are two main ways of > multithreading: > > - The old fashioned one using BeforeThreadedGenerateData/ > ThreadedGenerateData/AfterThreadedGenerateData > - the one using itk::DomainThreader member. > > As I'm a bit old school, I used the first option. > My problem remains here: (I changed my variables' name to make it clearer) > > ThreadedComputation(const OutputImageRegionType &outputRegionForThread, > ThreadIdType threadId) { > itk::ProgressReporter progress(this, threadId, > outputRegionForThread.GetNumberOfPixels()); > typename TOutputImage::Pointer image = this->GetOutput(0); > > itk::ImageRegionIterator< TOutputImage > imgIt(image, > outputRegionForThread); > itk::ImageRegionIterator< FloatImageType> floatIt (m_MyFloatImage, > outputRegionForThread); > .... > > When creating the floatIt iterator, I have a crash. Using a try catch > block, the issue is due to out of bound region. > Everything is correct, but the index, for which it's completly crazy > ([156245468,0,156245468] or something approching). > > I've tried different options to solve this, but rather than doing > witchcraft and at some point getting something working, I'd prefer to > improve my understanding thanks to your expertise. > > Thanks all, > > Nicolas Courtial > > > > > > _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien.jomier at kitware.com Fri Jun 9 02:16:21 2017 From: julien.jomier at kitware.com (Julien Jomier) Date: Fri, 9 Jun 2017 08:16:21 +0200 Subject: [ITK-users] [ANN] CMake Training Course - October 9 Message-ID: <2d50072b-a62a-a58a-19a2-dd2747a38b39@kitware.com> Kitware will be holding a CMake training course on October 9, 2017 in Lyon, France. This one-day course will cover CMake, CTest, CPack and CDash. Please visit our website for more information and registration details: https://training.kitware.fr/browse/153 Note that the course will be taught in English. If you have any questions, please contact us at training at kitware.fr or email me directly. We are looking forward to seeing you in Lyon, Julien -- Kitware SAS 26 rue Louis Gu?rin 69100 Villeurbanne, France http://www.kitware.eu From sidharta.gupta93 at gmail.com Fri Jun 9 05:43:06 2017 From: sidharta.gupta93 at gmail.com (sidharta) Date: Fri, 9 Jun 2017 02:43:06 -0700 (MST) Subject: [ITK-users] Set different intensity values along a Spatial Object Message-ID: <1497001386137-38313.post@n7.nabble.com> Dear all, I am trying to create a blank image with a Spatial Object in it. What I want is to set different intensity values along a mask. Is this possible or do I have to group multiple spatial objects together? 
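One way to get more than one intensity value into an otherwise blank image is to rasterize each spatial object separately with its own inside value and then combine the rasters; grouping the objects first would give a single value for the whole group. A sketch, where the radii, offsets and intensity values are only illustrative:

#include "itkImage.h"
#include "itkEllipseSpatialObject.h"
#include "itkSpatialObjectToImageFilter.h"
#include "itkMaximumImageFilter.h"

typedef itk::Image< unsigned char, 3 >                             ImageType;
typedef itk::EllipseSpatialObject< 3 >                             EllipseType;
typedef itk::SpatialObjectToImageFilter< EllipseType, ImageType >  RasterType;

ImageType::Pointer MakeLabeledImage()
{
  ImageType::SizeType size;
  size.Fill( 100 );

  EllipseType::Pointer ellipse1 = EllipseType::New();
  ellipse1->SetRadius( 20.0 );

  EllipseType::Pointer ellipse2 = EllipseType::New();
  ellipse2->SetRadius( 10.0 );
  EllipseType::TransformType::OffsetType offset;
  offset.Fill( 30.0 );
  ellipse2->GetObjectToParentTransform()->SetOffset( offset );
  ellipse2->ComputeObjectToWorldTransform();

  RasterType::Pointer raster1 = RasterType::New();
  raster1->SetInput( ellipse1 );
  raster1->SetSize( size );
  raster1->SetInsideValue( 100 );   // first intensity
  raster1->SetOutsideValue( 0 );

  RasterType::Pointer raster2 = RasterType::New();
  raster2->SetInput( ellipse2 );
  raster2->SetSize( size );
  raster2->SetInsideValue( 200 );   // second intensity
  raster2->SetOutsideValue( 0 );

  // Combine; where the objects overlap, the larger value wins.
  itk::MaximumImageFilter< ImageType >::Pointer combine =
    itk::MaximumImageFilter< ImageType >::New();
  combine->SetInput1( raster1->GetOutput() );
  combine->SetInput2( raster2->GetOutput() );
  combine->Update();
  return combine->GetOutput();
}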
-- View this message in context: http://itk-users.7.n7.nabble.com/Set-different-intensity-values-along-a-Spatial-Object-tp38313.html Sent from the ITK - Users mailing list archive at Nabble.com. From coyarzunlaura at googlemail.com Mon Jun 12 12:04:14 2017 From: coyarzunlaura at googlemail.com (Cristina Oyarzun) Date: Mon, 12 Jun 2017 18:04:14 +0200 Subject: [ITK-users] Deadline extended!! MICCAI CLIP 2017 Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging Message-ID: CALL FOR PAPERS MICCAI 2017 Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging September 17, 2017 Quebec City, Canada Website:http://miccai-clip.org/ ============================== ========================================== SCOPE The outstanding proliferation of medical image applications has created a need for greater study and scrutiny of the clinical application and validation of such methods. New strategies are essential to ensure a smooth and effective translation of computational image-based techniques into the clinic. For these reasons CLIP 2017?s major focus is on translational research filling the gaps between basic science and clinical applications. A highlight of the workshop is the subject of strategies for personalized medicine to enhance diagnosis, treatment and interventions. Authors are encouraged to submit work centered on specific clinical applications, including techniques and procedures based on comprehensive clinical image data. Submissions related to applications already in use and evaluated by clinical users are particularly encouraged. The event will bring together world-class specialists to present ways to strengthen links between computer scientists and engineers, and clinicians. TOPICS *Strategies for patient-specific and anatomical modeling to support planning and interventions *Clinical studies employing advanced image-guided methods *Clinical translation and validation of image-guided systems *Current challenges and emerging techniques in image-based procedures *Identification of parameters and error analysis in image-based procedures *Multimodal image integration for modeling, planning and guidance *Clinical applications in open and minimally invasive procedures PAPER SUBMISSION Papers will be limited to ten pages following the MICCAI submission guidelines. Prospective authors should refer to the Paper Submission section on the workshop website for details on how to submit papers to be presented at the workshop. All submissions will be peer-reviewed by at least 2 members of the program committee. The selection of the papers will be based on the significance of results, novelty, technical merit, relevance and clarity of presentation. Papers will be presented in a day long single track workshop starting with plenary sessions. Accepted papers will be published as a proceedings volume in the Springer Lecture Notes in Computer Science (LNCS) series after the workshop. WORKSHOP FORMAT Accepted papers will be presented in a day long single track workshop. The final program will consist of invited speakers and original papers with time allocated for discussions. Electronic proceedings will be arranged for all of the papers presented at the workshop. IMPORTANT DATES * June 18, 2017: Paper submission due date * June 29, 2017: Notification of acceptance * July 3, 2017: Final camera-ready paper submission deadline CONTACT Inquires about the workshop should be sent to the Information Desk ( info at miccai-clip.org). 
ORGANIZERS (in alphabetical order) Klaus Drechsler (Fraunhofer IGD, Germany) Marius Erdt (Fraunhofer IDM at NTU, Singapore) Miguel Gonz?lez Ballester (ICREA - Universitat Pompeu Fabra, Spain) Marius George Linguraru (Children's National Medical Center, USA) Cristina Oyarzun Laura (Fraunhofer IGD, Germany) Raj Shekhar (Children's National Medical Center, USA) Stefan Wesarg (Fraunhofer IGD, Germany) ======================================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.boucher88 at gmail.com Tue Jun 20 12:53:51 2017 From: marc.boucher88 at gmail.com (MAB12) Date: Tue, 20 Jun 2017 09:53:51 -0700 (MST) Subject: [ITK-users] ITK MESH Message-ID: <1497977631780-38315.post@n7.nabble.com> Hi I have a binary mask, (segmentation result) and I convert it to a an itk::Mesh using BinaryMask3DMeshSource. I write the result into a byu Mesh after. The outcome is a byu Mesh file but that lost all spatial configuration and is far away in the spatial coordinate from my ground truth segmentation. The binary mask before was fine I used it to convert it manually with itk-SNAP. Any idea was is the problem or insight into how itk Mesh works? -- View this message in context: http://itk-users.7.n7.nabble.com/ITK-MESH-tp38315.html Sent from the ITK - Users mailing list archive at Nabble.com. From matt.mccormick at kitware.com Tue Jun 20 13:12:19 2017 From: matt.mccormick at kitware.com (Matt McCormick) Date: Tue, 20 Jun 2017 13:12:19 -0400 Subject: [ITK-users] ITK MESH In-Reply-To: <1497977631780-38315.post@n7.nabble.com> References: <1497977631780-38315.post@n7.nabble.com> Message-ID: Hi, Points in the itk::Mesh are in physical coordinates. So, it is important that the metadata such as the Image Spacing and Origin are preserved, etc. For more information see the Mesh section of the ITK Software Guide [1] and the Image section [2]. Hope this helps, Matt [1] https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch4.html#x38-620004.3 [2] https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch4.html#x38-470004.1 On Tue, Jun 20, 2017 at 12:53 PM, MAB12 wrote: > Hi > > I have a binary mask, (segmentation result) and I convert it to a an > itk::Mesh using BinaryMask3DMeshSource. I write the result into a byu Mesh > after. The outcome is a byu Mesh file but that lost all spatial > configuration and is far away in the spatial coordinate from my ground > truth > segmentation. The binary mask before was fine I used it to convert it > manually with itk-SNAP. > > Any idea was is the problem or insight into how itk Mesh works? > > > > -- > View this message in context: http://itk-users.7.n7.nabble. > com/ITK-MESH-tp38315.html > Sent from the ITK - Users mailing list archive at Nabble.com. > _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users > -------------- next part -------------- An HTML attachment was scrubbed... 
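A short sketch of that pipeline, assuming the mask is read from a file format that carries its spacing, origin and direction (the foreground value 255 is a guess; adjust it to the actual label in the mask):

#include "itkImage.h"
#include "itkMesh.h"
#include "itkImageFileReader.h"
#include "itkBinaryMask3DMeshSource.h"
#include "itkMeshFileWriter.h"

typedef itk::Image< unsigned char, 3 >                    MaskType;
typedef itk::Mesh< double, 3 >                            MeshType;
typedef itk::BinaryMask3DMeshSource< MaskType, MeshType > MeshSourceType;

int main()
{
  itk::ImageFileReader< MaskType >::Pointer reader =
    itk::ImageFileReader< MaskType >::New();
  reader->SetFileName( "mask.mha" );   // a file that preserves spacing/origin/direction

  MeshSourceType::Pointer meshSource = MeshSourceType::New();
  meshSource->SetInput( reader->GetOutput() );
  meshSource->SetObjectValue( 255 );   // the foreground value of the binary mask

  // The generated points are in physical (world) coordinates, i.e. they
  // already include the mask's origin and spacing.
  itk::MeshFileWriter< MeshType >::Pointer writer =
    itk::MeshFileWriter< MeshType >::New();
  writer->SetInput( meshSource->GetOutput() );
  writer->SetFileName( "surface.byu" );
  writer->Update();
  return 0;
}

If the surface still lands far from the ground truth, it may be worth checking that the mask file actually stores the same origin and spacing as the original volume, and whether the viewer interprets coordinates in RAS rather than ITK's LPS convention.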
URL: From marc.boucher88 at gmail.com Tue Jun 20 14:44:09 2017 From: marc.boucher88 at gmail.com (MAB12) Date: Tue, 20 Jun 2017 11:44:09 -0700 (MST) Subject: [ITK-users] ITK MESH In-Reply-To: References: <1497977631780-38315.post@n7.nabble.com> Message-ID: <1497984249177-38317.post@n7.nabble.com> Hi Thank for response. Since I simply use the itk::BinaryMask3DMeshSource filter to convert to a mesh and I don't see parameters to set the spatial parameters I assumed it should be done automatically... I write the mesh into a byu format, is it possible that byu format is not well handle in itk? I saw a post saying that only vtk handle byu file... -- View this message in context: http://itk-users.7.n7.nabble.com/ITK-MESH-tp38315p38317.html Sent from the ITK - Users mailing list archive at Nabble.com. From 438821082 at qq.com Wed Jun 21 04:44:17 2017 From: 438821082 at qq.com (Tendy Fang) Date: Wed, 21 Jun 2017 01:44:17 -0700 (MST) Subject: [ITK-users] vnl_numeric_traits.h :Error C2413, C2510& FATAL error C1004 Message-ID: <1498034657374-38318.post@n7.nabble.com> I'm reeditting a series of old MFC codes based on ITK in my lab which was wrote in 2005 since there is some function that is useful for our lab's present work. I download ITK 1.6.0 (released in January,2004) , Cmake2.4 and VC6.0 to reappear the compile environment at that time since there are numerous errors built in VS2010 or using latest ITK package. In fact I've tried ITK package from 1.2.0 to 2.0.0 and the results tell there are least errors( 24 errors) when I use ITK 1.6.0 . But there are the errors of vnl_numeric_traits.h as referred in topic and they are also shown in pictures above. Part codes of vnl_numeric_traits.h are shown below. Has anyone faced the errors like this before ? I need your help. Best regards Tendy -- View this message in context: http://itk-users.7.n7.nabble.com/vnl-numeric-traits-h-Error-C2413-C2510-FATAL-error-C1004-tp38318.html Sent from the ITK - Users mailing list archive at Nabble.com. From thanosxania at gmail.com Fri Jun 23 05:31:13 2017 From: thanosxania at gmail.com (Thanos) Date: Fri, 23 Jun 2017 02:31:13 -0700 (MST) Subject: [ITK-users] Conductance parameter on CurvatureAnisotropicDiffusion Filtering Message-ID: <1498210273594-38319.post@n7.nabble.com> Hello everyone, I am using the algorithm for Curvature Anisotropic Diffusion where it uses the algorithm from Whitaker MCDE. As it is also mentioned on the User's guide the conductance modified curvature term is the divergence of the normalized gradient. (I also had a look on the original paper) So, as far as I understand the conductance, which is the curvature of the level set, is defined by the level set function and therefore the image. Then why do we have to set the value of the parameter in order to run the example? Please forgive me if I understood something wrong. Looking forward for your answers! Best regards, Thanos -- View this message in context: http://itk-users.7.n7.nabble.com/Conductance-parameter-on-CurvatureAnisotropicDiffusion-Filtering-tp38319.html Sent from the ITK - Users mailing list archive at Nabble.com. From 787aditi at gmail.com Sat Jun 24 10:31:12 2017 From: 787aditi at gmail.com (mojo_jojo) Date: Sat, 24 Jun 2017 07:31:12 -0700 (MST) Subject: [ITK-users] CUDA 7.5 compatibility with ITK 4.11.0 Message-ID: <1498314672496-7590021.post@n2.nabble.com> Hello, I am trying to optimize some image processes in ITK, using CUDA. I am using ITK 4.11.0, built using gcc 4.9, and CUDA version 7.5 for the purpose. 
I ended up getting a build error. Also, I couldn't find much about direct CUDA version and ITK version compatibility. I built ITK and CUDA with the same gcc version successfully, and they work well separately. It would be great if I could get some help with this. Thanks.

--
View this message in context: http://itk-insight-users.2283740.n2.nabble.com/CUDA-7-5-compatibility-with-ITK-4-11-0-tp7590021.html
Sent from the ITK Insight Users mailing list archive at Nabble.com.

From andx_roo at live.com Mon Jun 26 12:39:02 2017
From: andx_roo at live.com (Andaharoo)
Date: Mon, 26 Jun 2017 09:39:02 -0700 (MST)
Subject: [ITK-users] Recentering Output of fast marching
Message-ID: <1498495142874-7590022.post@n2.nabble.com>

I've been using the fast marching filter to segment out a particular piece of a 3D image, and now I would like to recenter this segmented part. Sometimes the segmented part can be at the edge of the image, which makes it hard to rotate since it will only rotate around the center. I tried using fast marching's GetOutputOrigin, but that doesn't do what I had hoped it would do. If I were to write it myself, it would go something along the lines of this with a binary image:

for every pixel
    if the pixel is 1
        if the pixel's x val is greater than maxX
            maxX = the pixel's x val
        if the pixel's x val is smaller than minX
            minX = the pixel's x val
    Do the same with the y and z maxes and mins

Then after getting the extents I could just do (minX + maxX) / 2, (minY + maxY) / 2, (minZ + maxZ) / 2.

--
View this message in context: http://itk-insight-users.2283740.n2.nabble.com/Recentering-Output-of-fast-marching-tp7590022.html
Sent from the ITK Insight Users mailing list archive at Nabble.com.

From matt.mccormick at kitware.com Mon Jun 26 13:03:52 2017
From: matt.mccormick at kitware.com (Matt McCormick)
Date: Mon, 26 Jun 2017 13:03:52 -0400
Subject: [ITK-users] Recentering Output of fast marching
In-Reply-To: <1498495142874-7590022.post@n2.nabble.com>
References: <1498495142874-7590022.post@n2.nabble.com>
Message-ID:

Hi,

This example shows how to find the bounding box of a binary image:

https://itk.org/Insight/Doxygen/html/Examples_2SpatialObjects_2BoundingBoxFromImageMaskSpatialObject_8cxx-example.html

Hope this helps,
Matt

On Mon, Jun 26, 2017 at 12:39 PM, Andaharoo wrote:
> I've been using the fast marching filter to segment out a particular piece of
> a 3D image and now I would like to recenter this segmented part. Sometimes
> the segmented part can be at the edge of the image which makes it hard to
> rotate since it will only rotate around the center. I tried using fast
> marching's GetOutputOrigin but that doesn't do what I had hoped it would do.
> If I were to write it myself, it would go something along the lines of this with a
> binary image:
> for every pixel
>     if the pixel is 1
>         if the pixel's x val is greater than maxX
>             maxX = the pixel's x val
>         if the pixel's x val is smaller than minX
>             minX = the pixel's x val
>     Do the same with the y and z maxes and mins
> Then after getting the extents I could just do (minX + maxX) / 2, (minY +
> maxY) / 2, (minZ + maxZ) / 2
>
>
>
> --
> View this message in context: http://itk-insight-users.2283740.n2.nabble.com/Recentering-Output-of-fast-marching-tp7590022.html
> Sent from the ITK Insight Users mailing list archive at Nabble.com.
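Following the linked example, the center can be computed from the axis-aligned bounding box of the mask. A sketch, assuming the fast-marching output has already been thresholded into an unsigned char binary mask:

#include "itkImage.h"
#include "itkImageMaskSpatialObject.h"
#include "itkContinuousIndex.h"

typedef itk::Image< unsigned char, 3 > MaskType;

// Physical-space center of the smallest box containing all non-zero pixels.
MaskType::PointType ComputeMaskCenter( const MaskType * mask )
{
  itk::ImageMaskSpatialObject< 3 >::Pointer maskSO =
    itk::ImageMaskSpatialObject< 3 >::New();
  maskSO->SetImage( mask );

  const MaskType::RegionType bbox  = maskSO->GetAxisAlignedBoundingBoxRegion();
  const MaskType::IndexType  index = bbox.GetIndex();
  const MaskType::SizeType   size  = bbox.GetSize();

  // Center of the bounding box in continuous index space ...
  itk::ContinuousIndex< double, 3 > centerIndex;
  for ( unsigned int d = 0; d < 3; ++d )
    {
    centerIndex[d] = index[d] + ( size[d] - 1 ) / 2.0;
    }

  // ... and in physical coordinates, which is what a transform's center
  // of rotation expects.
  MaskType::PointType center;
  mask->TransformContinuousIndexToPhysicalPoint( centerIndex, center );
  return center;
}

The LabelStatisticsImageFilter suggested later in this thread returns the same bounding region via GetRegion(label), which can be more convenient when per-label statistics are needed anyway.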
> _____________________________________ > Powered by www.kitware.com > > Visit other Kitware open-source projects at > http://www.kitware.com/opensource/opensource.html > > Kitware offers ITK Training Courses, for more information visit: > http://www.kitware.com/products/protraining.php > > Please keep messages on-topic and check the ITK FAQ at: > http://www.itk.org/Wiki/ITK_FAQ > > Follow this link to subscribe/unsubscribe: > http://public.kitware.com/mailman/listinfo/insight-users From blowekamp at mail.nih.gov Mon Jun 26 13:06:13 2017 From: blowekamp at mail.nih.gov (Lowekamp, Bradley (NIH/NLM/LHC) [C]) Date: Mon, 26 Jun 2017 17:06:13 +0000 Subject: [ITK-users] Recentering Output of fast marching In-Reply-To: <1498495142874-7590022.post@n2.nabble.com> References: <1498495142874-7590022.post@n2.nabble.com> Message-ID: <251F3CA8-2958-4F1D-9340-55D90492FBD6@mail.nih.gov> Hello, Have you looked at the LabelImageStatisticsImageFilter[1]? This should compute what you describe. The ?GetRegion? methods returns a bounding ImageRegion, for a given label. In your case this would be for the label 1. Brad [1]https://itk.org/Doxygen/html/classitk_1_1LabelStatisticsImageFilter.html#aa0de894e901cf64495f5690f77b73efb On 6/26/17, 12:39 PM, "Andaharoo" wrote: I've been using the fast marching filter to segment out a particular piece of a 3d image and now I would like to recenter this segmented part. Sometimes the segmented part can be at the edge of the image which makes it hard to rotate since it will only rotate around the center. I tried using fast marchings getOutputOrigin but that doesn't do what I had hoped it would do. If I were to write it would go something along the lines of this with a binary image: for every pixel if the pixel is 1 if the pixels x val is greater than maxX maxX = the pixels x val if the pixels x val is smaller than minX minX = the pixels x val Do the same with the y and z and mins Then after getting the extents I coudl just do (minX + maxX) / 2, (minY + maxY) / 2, (minZ + maxZ) / 2 -- View this message in context: http://itk-insight-users.2283740.n2.nabble.com/Recentering-Output-of-fast-marching-tp7590022.html Sent from the ITK Insight Users mailing list archive at Nabble.com. _____________________________________ Powered by www.kitware.com Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html Kitware offers ITK Training Courses, for more information visit: http://www.kitware.com/products/protraining.php Please keep messages on-topic and check the ITK FAQ at: http://www.itk.org/Wiki/ITK_FAQ Follow this link to subscribe/unsubscribe: http://public.kitware.com/mailman/listinfo/insight-users From matt.mccormick at kitware.com Mon Jun 26 13:11:03 2017 From: matt.mccormick at kitware.com (Matt McCormick) Date: Mon, 26 Jun 2017 13:11:03 -0400 Subject: [ITK-users] Conductance parameter on CurvatureAnisotropicDiffusion Filtering In-Reply-To: <1498210273594-38319.post@n7.nabble.com> References: <1498210273594-38319.post@n7.nabble.com> Message-ID: Hello Thanos, The Conductance parameter is a global scalar that controls the sensitivity to the conductance term in the level set evolution equation. 
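In practice the conductance is one of three numbers set on the filter, together with the time step and the number of iterations; a minimal usage sketch (the values are illustrative starting points only; a lower conductance preserves edges more strongly):

#include "itkImage.h"
#include "itkCurvatureAnisotropicDiffusionImageFilter.h"

typedef itk::Image< float, 3 > ImageType;
typedef itk::CurvatureAnisotropicDiffusionImageFilter< ImageType, ImageType > SmootherType;

ImageType::Pointer Smooth( const ImageType * input )
{
  SmootherType::Pointer smoother = SmootherType::New();
  smoother->SetInput( input );
  smoother->SetNumberOfIterations( 5 );      // illustrative values only
  smoother->SetTimeStep( 0.0625 );           // must satisfy the stability limit for 3D
  smoother->SetConductanceParameter( 3.0 );  // the global scalar discussed above
  smoother->Update();
  return smoother->GetOutput();
}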
For more information, see the Doxygen pages:

- https://itk.org/Doxygen/html/classitk_1_1AnisotropicDiffusionFunction.html
- https://itk.org/Insight/Doxygen/html/classitk_1_1AnisotropicDiffusionImageFilter.html#a14a5b3acabcd97c07645750bb735e251
- https://itk.org/Insight/Doxygen/html/classitk_1_1CurvatureAnisotropicDiffusionImageFilter.html

Hope this helps,
Matt

On Fri, Jun 23, 2017 at 5:31 AM, Thanos wrote:
> Hello everyone,
>
> I am using the algorithm for Curvature Anisotropic Diffusion, which uses Whitaker's MCDE algorithm. As is also mentioned in the User's Guide, the conductance-modified curvature term is the divergence of the normalized gradient (I also had a look at the original paper). So, as far as I understand, the conductance, which is the curvature of the level set, is defined by the level set function and therefore by the image. Then why do we have to set the value of the parameter in order to run the example?
> Please forgive me if I have misunderstood something.
> Looking forward to your answers!
>
> Best regards,
> Thanos
>
> --
> View this message in context: http://itk-users.7.n7.nabble.com/Conductance-parameter-on-CurvatureAnisotropicDiffusion-Filtering-tp38319.html
> Sent from the ITK - Users mailing list archive at Nabble.com.

From matt.mccormick at kitware.com Mon Jun 26 13:49:02 2017
From: matt.mccormick at kitware.com (Matt McCormick)
Date: Mon, 26 Jun 2017 13:49:02 -0400
Subject: [ITK-users] [ANN] Binary ITK Python Packages now available on PyPI!
Message-ID:

Hi folks,

Binary Python wheels are now available on PyPI for Linux, MacOS, and Windows, for Python 2.7 and recent Python 3.X releases. These binary wheels are built to be compatible with Python distributions from Python.org, system package managers like apt and Homebrew, and Anaconda. When a binary package is not available for the current platform, an sdist is provided that will guide a researcher through the steps to build the packages from source code.

To install ITK from the command line, run:

    python -m pip install --upgrade pip
    python -m pip install itk

The itk metapackage will pull in the subpackages itk-segmentation, itk-registration, itk-numerics, itk-io, itk-filtering, and itk-core. These packages can also be installed independently if only a portion of the toolkit is desired.

Enjoy ITK!

From Gordian.Kabelitz at medma.uni-heidelberg.de Wed Jun 28 09:51:41 2017
From: Gordian.Kabelitz at medma.uni-heidelberg.de (Kabelitz, Gordian)
Date: Wed, 28 Jun 2017 13:51:41 +0000
Subject: [ITK-users] Writing from an external buffer to VectorImage
Message-ID: <81be8e27dd684f54944a7ff7b0d67c42@exch06.ad.uni-heidelberg.de>

Hello,

I computed a gradient with my own function, and as a result a pointer to an image buffer is provided. I know the size, origin, and spacing of the gradient component image. I want to copy the gradient image into an itk::VectorImage with the components for the x, y, z gradients.
The way I copied the image to the GPU is that I retrieved the buffer pointer from my input image and used the pointer to copy the image data to the GPU. I used the way proposed in [1]. The computeGradientImage method is listed at the end of this mail.

    [...]
    // get float pointer to image data
    ImageType::Pointer image = reader->GetOutput();
    image->Update();

    float* data = image->GetBufferPointer();
    // copy image data to GPU texture memory (this works)
    gpu_dev->setVoxels(dimension, voxelSize, data);
    [...]
    computeGradientImage<<<...>>>(dev_gradientImage, dimension);

    // copy resulting gradientImage to host variable
    float4* host_gradient = new float4[numberOfVoxels];
    cudaMemcpy(host_gradient, dev_gradientImage, numberOfVoxels * sizeof(float4), cudaMemcpyDeviceToHost);

    --> Pseudo Code <--
    // Now I want to reverse the copy process. I have a float4 image and want to copy this
    // into an itk::VectorImage with VariableVectorLength of 3 (skipping the magnitude value).
    [...] -> size, spacing, origin, region definition
    VectorImageType::Pointer vecImage = VectorImageType::New();
    vecImage->SetRegions(region);
    vecImage->SetVectorLength(3);
    vecImage->Allocate();

    // copy image buffer to vecImage, component by component
    auto vecBuffer = vecImage->GetBufferPointer();
    auto j = 0;
    for (i = 0; i < numberOfVoxels; i++)
    {
        vecBuffer[j] = host_gradient[i].x; j++;
        vecBuffer[j] = host_gradient[i].y; j++;
        vecBuffer[j] = host_gradient[i].z; j++;
    }

    // save vecImage as nrrd image
    [...]

I haven't found a way to achieve my idea. Are there any suggestions or examples? As far as I can see, I cannot use the itk::ImportImageFilter.

Thank you for any suggestions.
With kind regards,
Gordian

[1]: https://itk.org/CourseWare/Training/GettingStarted-V.pdf

    void computeGradientImage(float4* gradientImage, int* dimension)
    {
        // every thread computes the float4 voxel with theta, phi, magnitude from the gradient image
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        int idy = blockIdx.y * blockDim.y + threadIdx.y;
        int idz = blockIdx.z * blockDim.z + threadIdx.z;

        if (idx < dimension[0] && idy < dimension[1] && idz < dimension[2])
        {
            // define sobel filter for each direction
            [...]

            // run sobel on image in texture memory for each direction and put result into a float4 image
            gradientImage[idx + dimension[0] * (idy + idz * dimension[1])] = make_float4(sobelX, sobelY, sobelZ, magn);
        }
    }

From dzenanz at gmail.com (Dženan Zukić)
Subject: Re: [ITK-users] Writing from an external buffer to VectorImage
In-Reply-To: <81be8e27dd684f54944a7ff7b0d67c42@exch06.ad.uni-heidelberg.de>
References: <81be8e27dd684f54944a7ff7b0d67c42@exch06.ad.uni-heidelberg.de>
Message-ID:

Hi Gordian,

this approach looks like it should work. What is wrong with it?

Regards,
Dženan Zukić, PhD, Senior R&D Engineer, Kitware (Carrboro, N.C.)

On Wed, Jun 28, 2017 at 9:51 AM, Kabelitz, Gordian <Gordian.Kabelitz at medma.uni-heidelberg.de> wrote:
> [...]
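For comparison only, the same copy can also be written with an image iterator instead of raw buffer indexing; a minimal sketch, assuming the float4 host buffer, numberOfVoxels, and region from the message above (everything else is a placeholder):

    #include "itkVectorImage.h"
    #include "itkImageRegionIterator.h"
    #include "itkVariableLengthVector.h"

    using VectorImageType = itk::VectorImage<float, 3>;

    VectorImageType::Pointer vecImage = VectorImageType::New();
    vecImage->SetRegions(region);
    vecImage->SetVectorLength(3);
    vecImage->Allocate();

    itk::ImageRegionIterator<VectorImageType> it(vecImage, vecImage->GetLargestPossibleRegion());
    itk::VariableLengthVector<float> v(3);
    std::size_t i = 0;
    for (it.GoToBegin(); !it.IsAtEnd(); ++it, ++i)
    {
        v[0] = host_gradient[i].x;   // float4 is the CUDA vector type from the message above
        v[1] = host_gradient[i].y;
        v[2] = host_gradient[i].z;
        it.Set(v);                   // writes the 3 components into the interleaved buffer
    }

Iteration over the largest possible region visits voxels in the same x-fastest order as the idx + dimension[0] * (idy + idz * dimension[1]) indexing in the kernel, so the two copies should agree.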
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matt.mccormick at kitware.com Wed Jun 28 12:25:46 2017
From: matt.mccormick at kitware.com (Matt McCormick)
Date: Wed, 28 Jun 2017 12:25:46 -0400
Subject: [ITK-users] Writing from an external buffer to VectorImage
In-Reply-To: <81be8e27dd684f54944a7ff7b0d67c42@exch06.ad.uni-heidelberg.de>
References: <81be8e27dd684f54944a7ff7b0d67c42@exch06.ad.uni-heidelberg.de>
Message-ID:

Hi Gordian,

Examining or using the code in ITKGPUCommon may be helpful. The methods transfer data from CPU to GPU memory and back.

https://github.com/InsightSoftwareConsortium/ITK/tree/master/Modules/Core/GPUCommon

Hope this helps,
Matt

On Wed, Jun 28, 2017 at 9:51 AM, Kabelitz, Gordian wrote:
> [...]
From aharr8 at uwo.ca Wed Jun 28 15:14:51 2017
From: aharr8 at uwo.ca (Andrew Harris)
Date: Wed, 28 Jun 2017 15:14:51 -0400
Subject: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image
Message-ID:

Hi there,

I have been trying for a while to get this working: I want to be able to select corresponding points in a fixed and moving image, and determine how well the moving image is transformed to overlay the fixed image by using target registration error. The problem is that, using the same transform I applied to a moving image that translated it to the left, the point selected in the moving image gets translated to the right, for example.

--
AH

Andrew Harris, Honours BSc (Medical Physics)
PhD (CAMPEP) & MClSc Candidate
-----------------------------------------------------------------------------------------------
*This email and any attachments thereto may contain private, confidential, and privileged materials for the sole use of the intended recipient. Any reviewing, copying, or distribution of this email (or any attachments thereto) by other than the intended recipient is strictly prohibited. If you are not the intended recipient, please contact the sender immediately and permanently destroy this email and any attachments thereto.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zivrafael.yaniv at nih.gov Wed Jun 28 16:50:04 2017
From: zivrafael.yaniv at nih.gov (Yaniv, Ziv Rafael (NIH/NLM/LHC) [C])
Date: Wed, 28 Jun 2017 20:50:04 +0000
Subject: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image
In-Reply-To:
References:
Message-ID: <6ABFBEEB-EBE8-478C-BEEB-747A5373FBE9@mail.nih.gov>

Hello Andrew,

In ITK the result of a registration maps points from the fixed image coordinate system to the moving coordinate system, so T(p_f) = p_m and the TRE is || T(p_f) - p_m ||. I suspect you just need to use the inverse transform.

You may be interested in this SimpleITK notebook (http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/67_Registration_Semiautomatic_Homework.html), which has a linked cursor GUI (gui.RegistrationPointDataAquisition). The source code for the UI is here: https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/blob/master/Python/gui.py

hope this helps
Ziv

From: Andrew Harris
Date: Wednesday, June 28, 2017 at 3:14 PM
To: Insight-users
Subject: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image

[...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aharr8 at uwo.ca Thu Jun 29 13:21:11 2017
From: aharr8 at uwo.ca (Andrew Harris)
Date: Thu, 29 Jun 2017 13:21:11 -0400
Subject: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image
In-Reply-To: <6ABFBEEB-EBE8-478C-BEEB-747A5373FBE9@mail.nih.gov>
References: <6ABFBEEB-EBE8-478C-BEEB-747A5373FBE9@mail.nih.gov>
Message-ID:

Thanks for getting back to me. Using the inverse transform on the point selected in the moving image works to transform the point within a reasonable amount to the homologous feature selected in the fixed image when I use two identical images with a known offset of 100 voxels in each direction. However, upon testing identical images with a known rotation, the selected points again fail to line up. The transform I have been using is the Rigid3DVersorTransform, and the lines of code I'm using to set the inverse are:

    newTransform->SetCenter(oldTransform->GetCenter());
    oldTransform->GetInverse(newTransform);

Any idea why translation would work but rotation causes a problem?
--
AH

Andrew Harris, Honours BSc (Medical Physics)
PhD (CAMPEP) & MClSc Candidate
-----------------------------------------------------------------------------------------------
*This email and any attachments thereto may contain private, confidential, and privileged materials for the sole use of the intended recipient. Any reviewing, copying, or distribution of this email (or any attachments thereto) by other than the intended recipient is strictly prohibited. If you are not the intended recipient, please contact the sender immediately and permanently destroy this email and any attachments thereto.*

On Wed, Jun 28, 2017 at 4:50 PM, Yaniv, Ziv Rafael (NIH/NLM/LHC) [C] <zivrafael.yaniv at nih.gov> wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zivrafael.yaniv at nih.gov Thu Jun 29 15:04:59 2017
From: zivrafael.yaniv at nih.gov (Yaniv, Ziv Rafael (NIH/NLM/LHC) [C])
Date: Thu, 29 Jun 2017 19:04:59 +0000
Subject: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image
In-Reply-To:
References: <6ABFBEEB-EBE8-478C-BEEB-747A5373FBE9@mail.nih.gov>
Message-ID:

The code snippet looks correct. I would advise that you print the transformations to see that you are getting what you expect. The relevant entries are Matrix, Center, Translation, and Offset.

The original transform is:

    T(x) = A(x - c) + t + c

where:
    matrix      = A
    center      = c
    translation = t
    offset      = t + c - Ac

The inverse should have:
    matrix      = A^{-1}
    center      = c
    translation = -A^{-1} t
    offset      = c - A^{-1} t - A^{-1} c

hope this helps
Ziv

p.s. When working with ITK, always remember that you are dealing with physical space; distances are in mm/km..., volumes in mm^3... Don't be tempted to measure things in pixels/voxels.

From: Andrew Harris
Date: Thursday, June 29, 2017 at 1:21 PM
To: "Yaniv, Ziv Rafael (NIH/NLM/LHC) [C]"
Cc: Insight-users
Subject: Re: [ITK-users] [ITK] applying a transform to an ITK Point object results in it moving the opposite direction from the image

[...]
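A minimal sketch of that bookkeeping (illustrative only; forwardTransform is assumed to be the registration result mapping fixed-image points into the moving image space, and fixedPoint/movingPoint are assumed to be corresponding physical points picked in the two images):

    #include <iostream>
    #include "itkVersorRigid3DTransform.h"
    #include "itkPoint.h"

    using TransformType = itk::VersorRigid3DTransform<double>;
    using PointType = itk::Point<double, 3>;

    TransformType::Pointer inverseTransform = TransformType::New();
    inverseTransform->SetCenter(forwardTransform->GetCenter());
    forwardTransform->GetInverse(inverseTransform);          // maps moving -> fixed

    // Print both transforms to check matrix, center, translation and offset
    std::cout << *forwardTransform << std::endl;
    std::cout << *inverseTransform << std::endl;

    // TRE evaluated in the moving space: || T(p_f) - p_m ||, in physical units (mm)
    PointType mappedFixedPoint = forwardTransform->TransformPoint(fixedPoint);
    double tre = mappedFixedPoint.EuclideanDistanceTo(movingPoint);

    // A point picked in the moving image goes the other way:
    PointType mappedMovingPoint = inverseTransform->TransformPoint(movingPoint);

If points are picked in the moving image, they need the inverse transform to land in the fixed image, which matches the behavior described above.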
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andx_roo at live.com Thu Jun 29 22:19:36 2017
From: andx_roo at live.com (Andaharoo)
Date: Thu, 29 Jun 2017 19:19:36 -0700 (MST)
Subject: [ITK-users] Problems Using GPU Filters
Message-ID: <1498789176254-7590036.post@n2.nabble.com>

I've been looking to use some of the GPU filters provided by ITK recently, namely the GPU anisotropic diffusion filter and the binary threshold filter. I downloaded CUDA and ran CMake again with the GPU option (ITK_USE_GPU) turned on, and it found all my CUDA paths automatically. Everything appeared to build fine. I then looked up some implementation details and examples but couldn't really find any. There was one ITK PowerPoint from a long while ago that seemed to suggest I'd be able to just put a GPU filter in the pipeline like any other filter. I liked this approach, but it didn't work. So right now I have a pipeline that looks like the following, in order to load images with VTK, process with ITK, and then render with VTK:

    vtkToItkFilter -> array of ImageToImageFilters all connected -> itkToVtkFilter

I tried to stick the GPUBinaryThresholdImageFilter in the array like I would a normal filter, connecting it appropriately and calling Update, but it broke, throwing a read access violation on line 46 of the GPUGenerateData function in itkGPUUnaryFunctorImageFilter.hxx, saying otPtr.m_Pointer was nullptr.

My code goes a little something like this:

    typedef itk::Image Image;
    typedef itk::GPUBinaryThresholdImageFilter GPUBinaryThresholdFilter;
    GPUBinaryThresholdFilter::Pointer filter = GPUBinaryThresholdFilter::New();
    double* range = pipe->GetOutput()->GetScalarRange();
    filter->SetLowerThreshold(filterWidget->getDoubleFromSlider("Lower"));
    filter->SetUpperThreshold(filterWidget->getDoubleFromSlider("Upper"));
    filter->SetOutsideValue(range[0]);
    filter->SetInsideValue(range[1]);
    filters.push_back(filter);
    if (filters.size() > 0)
        filter->SetInput(filters.back()->GetOutput());
    else
        filter->SetInput(vtkToItkLoader->GetOutput());
    itkToVtkFilter->SetInput(filters.back()->GetOutput());

--
View this message in context: http://itk-insight-users.2283740.n2.nabble.com/Problems-Using-GPU-Filters-tp7590036.html
Sent from the ITK Insight Users mailing list archive at Nabble.com.

From felix.burk at gmail.com Fri Jun 30 06:58:40 2017
From: felix.burk at gmail.com (Felix Burk)
Date: Fri, 30 Jun 2017 12:58:40 +0200
Subject: [ITK-users] Combining several segmentations
Message-ID:

Hello,

I'm using ITK's ScalarChanAndVeseDenseLevelSetImageFilter to segment parts of a medical image file series. The problem is that the resolution is pretty low. So far it's working well, but I'd like to have more accurate results.

The images in the file series are similar, but change more after each time step. Some segmentations work really well, but others differ quite a bit from the desired result. My idea is to combine segmentations from different time steps, like the current and previous one. I think this might lead to better results, because the segmentations should be similar too. I'm quite new to ITK and image analysis in general; maybe that's why I couldn't find something useful for this kind of problem.
I already tried to compare them using SquaredDifferenceImageFilter and SimilarityIndexImageFilter, but I don't know if the results are of any use. Is there any way to combine segmentations from several time steps to obtain better results for a single time step?

I implemented the algorithm as a ParaView plugin, if that matters. I could provide some screenshots too if this might help.

Thanks,
Felix
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emilianoberonich at gmail.com Fri Jun 30 07:52:48 2017
From: emilianoberonich at gmail.com (Emiliano Beronich)
Date: Fri, 30 Jun 2017 08:52:48 -0300
Subject: [ITK-users] Problems Using GPU Filters
In-Reply-To: <1498789176254-7590036.post@n2.nabble.com>
References: <1498789176254-7590036.post@n2.nabble.com>
Message-ID:

Hi Andaharoo,

GPUBinaryThresholdImageFilter should be used with GPUImage. Try defining:

    typedef itk::GPUImage Image;

There is a cast in the method GPUUnaryFunctorImageFilter::GenerateData (ancestor of GPUBinaryThresholdImageFilter) which may be the cause of the null pointer:

    typename GPUOutputImage::Pointer otPtr = dynamic_cast< GPUOutputImage * >( this->ProcessObject::GetOutput(0) );

Cheers,
Emiliano

2017-06-29 23:19 GMT-03:00 Andaharoo :
> [...]
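A sketch of that typedef change (the pixel type, dimension, and the input/threshold variables are illustrative placeholders; it assumes ITK was configured with the GPU modules enabled, as described earlier in the thread):

    #include "itkGPUImage.h"
    #include "itkGPUBinaryThresholdImageFilter.h"

    using GPUImageType = itk::GPUImage<float, 3>;
    using GPUThresholdFilterType = itk::GPUBinaryThresholdImageFilter<GPUImageType, GPUImageType>;

    GPUThresholdFilterType::Pointer threshold = GPUThresholdFilterType::New();
    threshold->SetInput(gpuInputImage);     // must itself be an itk::GPUImage
    threshold->SetLowerThreshold(lower);
    threshold->SetUpperThreshold(upper);
    threshold->SetOutsideValue(0.0f);
    threshold->SetInsideValue(1.0f);
    threshold->Update();                    // dispatches to the GPU code path
    GPUImageType::Pointer result = threshold->GetOutput();

The dynamic_cast that Emiliano points to only succeeds when the pipeline's images really are GPUImage instances, which is why the typedef matters.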
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matt.mccormick at kitware.com Fri Jun 30 17:02:43 2017
From: matt.mccormick at kitware.com (Matt McCormick)
Date: Fri, 30 Jun 2017 17:02:43 -0400
Subject: [ITK-users] Combining several segmentations
In-Reply-To:
References:
Message-ID:

Hello Felix,

One approach is to use the LabelVotingImageFilter:
https://itk.org/Doxygen/html/classitk_1_1LabelVotingImageFilter.html

Hope this helps,
Matt

On Fri, Jun 30, 2017 at 6:58 AM, Felix Burk wrote:
> [...]
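A minimal sketch of feeding segmentations from several time steps into it (the label image type and the number of inputs are placeholders):

    #include "itkLabelVotingImageFilter.h"

    using LabelImageType = itk::Image<unsigned char, 3>;
    using VotingFilterType = itk::LabelVotingImageFilter<LabelImageType, LabelImageType>;

    VotingFilterType::Pointer voting = VotingFilterType::New();
    voting->SetInput(0, segmentationPrevious);   // previous time step
    voting->SetInput(1, segmentationCurrent);    // current time step
    voting->SetInput(2, segmentationNext);       // next time step, if available
    voting->Update();

    LabelImageType::Pointer combined = voting->GetOutput();

Each output voxel receives the label that most of the inputs agree on; tied voxels get a separate "undecided" label, which can be chosen with SetLabelForUndecidedPixels.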