[Insight-users] Lack of memory for segmentation

Atwood, Robert C r.atwood at imperial.ac.uk
Wed Jan 25 10:29:09 EST 2006

Peter, thanks for the advice. It is a Linux box, originally RedHat 7.2
with numerous piecemeal upgrades (but never a complete reinstall to a
later version). The kernel is 2.4.20 SMP and the GCC I am using is
3.3.3. I tested the hypothesis that only ~2 GB can be used by a program...

With my own C program that fills up memory, I can allocate about 3 GB
before the program gets killed. This is of course without other jobs
running, just the usual system stuff.

However, I don't know much about the technical differences between C
memory allocation and C++ memory allocation.

I tried continuously monitoring the swap usage during the execution of
my ITK filter pipeline, and it seems that swap was never touched at all.

Furthermore, with the current pipeline including streaming, the message
that the memory is not available appears very near the beginning of
execution, rather than after memory usage has grown to the limiting
size. So I am wondering whether, within ITK, there is some check that
compares the expected size against the physical memory size and refuses
to go on?



Free output:

             total       used       free     shared    buffers
Mem:       2069172    2063800       5372          0       3884
-/+ buffers/cache:    1930744     138428
Swap:      2048276    1126828     92

Top output:

 Mem:  2069172K av, 2064412K used,    4760K free,       0K shrd,    9516K buff
Swap: 2048276K av,  925552K used, 1122724K free                    6608K

 1025 rcatwood  17   0 2861M 1.9G   320 R    48.7 97.1   0:50 testmem
 1023 rcatwood   9   0  1156 1068   916 R     6.3  0.0   0:04 top

-----Original Message-----
From: insight-users-bounces+r.atwood=imperial.ac.uk at itk.org
[mailto:insight-users-bounces+r.atwood=imperial.ac.uk at itk.org] On Behalf
Of Peter Cech
Sent: 25 January 2006 15:00
To: insight-users at itk.org
Subject: Re: [Insight-users] Lack of memory for segmentation

On Wed, Jan 25, 2006 at 14:26:43 -0000, Atwood, Robert C wrote:
> Unfortunately I still cannot process an image of greater than 1 GB on
> a 2 GB memory node; it cannot allocate the memory for the output
> image. So it seems that the biggest image is half the memory, to
> enable the original and the final whole images to be in memory,
> despite having swap space set up? Does this make sense?

It could be an address space limitation. Application data has only a
part of the 32-bit (4 GiB) address space available (how much depends on
the configuration of the operating system). If your node is 64-bit
capable, try compiling for 64-bit instead of 32-bit.

Peter Cech
Insight-users mailing list
Insight-users at itk.org
