[ITK-users] How to make ITK-binaries NOT ignore kernel out of memory messages?
Dr. Roman Grothausmann
grothausmann.roman at mh-hannover.de
Fri Jun  6 04:23:02 EDT 2014
Dear mailing list members,
Quite often now I have run into the problem that some of my programs using ITK 
seem to ignore the kernel's out-of-memory condition. This seems to happen when an 
ITK filter dynamically increases its allocated memory. If the dataset being 
processed needs more memory than is available in RAM (swap is already turned off), 
the Linux kernel becomes unusable: within a few seconds the whole server no longer 
responds to anything (kworker and migration processes are often at 100% CPU before 
top stops responding) and needs a cold restart. I am used to other programs that 
simply exit with an "out of memory" message when the kernel cannot provide any 
more memory, and the rest of the system stays fine.
Am I missing something in the configuration of ITK, or in the actual program code, 
that would give these ITK programs such behaviour?
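For reference, this is roughly how my programs run the filters. It is only a 
simplified sketch: the image type, the reader and the MedianImageFilter are just 
stand-ins for whatever pipeline is actually used, and apart from the usual 
try/catch around Update() there is no special memory handling:

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMedianImageFilter.h"
#include <cstdlib>
#include <iostream>
#include <new>

int main(int argc, char* argv[])
{
  if (argc < 2)
    {
    std::cerr << "Usage: " << argv[0] << " inputImage" << std::endl;
    return EXIT_FAILURE;
    }

  typedef itk::Image<unsigned char, 3>                 ImageType;
  typedef itk::ImageFileReader<ImageType>              ReaderType;
  typedef itk::MedianImageFilter<ImageType, ImageType> FilterType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(argv[1]);

  FilterType::Pointer filter = FilterType::New();
  filter->SetInput(reader->GetOutput());

  try
    {
    filter->Update(); // memory gets allocated inside the pipeline update
    }
  catch (itk::ExceptionObject & err)
    {
    std::cerr << "ITK exception caught: " << err << std::endl;
    return EXIT_FAILURE;
    }
  catch (std::bad_alloc &)
    {
    // I would expect a clean exit here when memory runs out,
    // but in practice the machine freezes before this is ever reached.
    std::cerr << "out of memory" << std::endl;
    return EXIT_FAILURE;
    }

  return EXIT_SUCCESS;
}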
I also tried to limit the memory available to each program with ulimit -S -v, but 
that does not help when two or more such ITK programs run simultaneously and 
together eventually eat up all available system memory.
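In case it matters: as far as I understand, ulimit -S -v only sets a per-process 
soft limit on the address space, roughly equivalent to the following setrlimit() 
call at the start of each program (just a sketch; the 4 GB value is an arbitrary 
example). Because the limit applies per process, several processes can still 
exhaust the RAM together:

#include <sys/resource.h>
#include <cstdio>

int main()
{
  // Soft limit on the virtual address space of this process only,
  // similar to `ulimit -S -v` in the launching shell.
  struct rlimit lim;
  if (getrlimit(RLIMIT_AS, &lim) != 0)
    {
    perror("getrlimit");
    return 1;
    }
  lim.rlim_cur = 4ULL * 1024 * 1024 * 1024; // e.g. 4 GB soft limit
  if (setrlimit(RLIMIT_AS, &lim) != 0)
    {
    perror("setrlimit");
    return 1;
    }
  // ... set up and run the ITK pipeline here; allocations beyond the
  // limit should then fail (std::bad_alloc) instead of exhausting RAM.
  return 0;
}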
Any ideas what I could do to prevent our server from crashing/freezing when such 
ITK programs are run on large datasets?
Any help or hints are very much appreciated.
Roman
-- 
Dr. Roman Grothausmann
Tomographie und Digitale Bildverarbeitung
Tomography and Digital Image Analysis
Institut für Funktionelle und Angewandte Anatomie, OE 4120
Medizinische Hochschule Hannover
Carl-Neuberg-Str. 1
D-30625 Hannover
Tel. +49 511 532-9574