[Insight-users] Scale Image

Mike Jackson mike.jackson at bluequartz.net
Wed May 6 15:19:48 EDT 2009


I am trying to scale an image after manually creating the itk::Image
object. For the full-scale image I have already read the data into an
unsigned char buffer, and then I create a new image using the
following code:

typedef itk::ImportImageFilter< PixelType, Dimension > ImportFilterType;

  ImportFilterType::Pointer importFilter = ImportFilterType::New();
  ImportFilterType::SizeType size;
  size[0] = mosaicPixelWidth; // size along X
  size[1] = mosaicPixelHeight; // size along Y

  ImportFilterType::IndexType start;
  start.Fill(0);
  ImportFilterType::RegionType region;
  region.SetIndex(start);
  region.SetSize(size);
  importFilter->SetRegion(region);

  double origin[Dimension];
  float mosaicOrigin[2];
  sliceInfo->getMosaicAbsoluteOrigin(mosaicOrigin[0], mosaicOrigin[1]);
  origin[0] = mosaicOrigin[0];
  origin[1] = mosaicOrigin[1];
  importFilter->SetOrigin(origin);

  double spacing[Dimension];
  float mosaicSpacing[2];
  sliceInfo->getMosaicScalingFactor(mosaicSpacing[0], mosaicSpacing[1]);
  spacing[0] = mosaicSpacing[0];
  spacing[1] = mosaicSpacing[1];
  importFilter->SetSpacing(spacing);

  const bool importImageFilterWillOwnTheBuffer = true;
  importFilter->SetImportPointer(imageData, size[0] * size[1],
                                 importImageFilterWillOwnTheBuffer);
  importFilter->Update();

// Then I go on to try and resample the image:

  typedef itk::ResampleImageFilter<ImageType, ImageType> FilterType;
  FilterType::Pointer filter = FilterType::New();
  typedef itk::AffineTransform<double, Dimension> TransformType;
  TransformType::Pointer transform = TransformType::New();

  typedef itk::NearestNeighborInterpolateImageFunction<ImageType, double>
      InterpolatorType;
  InterpolatorType::Pointer interpolator = InterpolatorType::New();
  filter->SetInterpolator(interpolator);
  filter->SetDefaultPixelValue(0);


  ImageType::SpacingType spacing = inputImage->GetSpacing();
  std::cout << logTime() << "Old Spacing: " << spacing[0] << " x "
            << spacing[1] << std::endl;
  spacing[0] = spacing[0] / scaleFactor;
  spacing[1] = spacing[1] / scaleFactor;
  filter->SetOutputSpacing(spacing);
  std::cout << logTime() << "New Spacing: " << spacing[0] << " x "
            << spacing[1] << std::endl;

  ImageType::PointType origin = inputImage->GetOrigin();
  filter->SetOutputOrigin(origin);
  std::cout << logTime() << "Origin: " << origin[0] << " x "
            << origin[1] << std::endl;

  ImageType::SizeType size;
  const ImageType::RegionType fixedRegion = inputImage->GetLargestPossibleRegion();
  const ImageType::SizeType fixedSize = fixedRegion.GetSize();

  size[0] = fixedSize[0] / scaleFactor; // number of pixels along X
  size[1] = fixedSize[1] / scaleFactor; // number of pixels along Y

  filter->SetSize(size);
  std::cout << logTime() << "Old Size: " << fixedSize[0] << " x "
            << fixedSize[1] << std::endl;
  std::cout << logTime() << "New Size: " << size[0] << " x "
            << size[1] << std::endl;

  filter->SetInput(inputImage);
  transform->Scale(6.0, false);
  filter->SetTransform(transform);
  filter->Update();


All I get in the output is an all-black image. I don't think I need to
apply any translation, since I want the same origin as the original
image. Is there something I am missing in this code?

Here is some output from the debug statements above:
[2009:05:06 15:18:50] Old Spacing: 0.207987 x 0.207987
[2009:05:06 15:18:50] New Spacing: 0.0346644 x 0.0346644
[2009:05:06 15:18:50] Origin: 46979.1 x 48347.6
[2009:05:06 15:18:50] Old Size: 7440 x 8330
[2009:05:06 15:18:50] New Size: 1240 x 1388


Thanks for any help
--
Mike Jackson

