I noticed that when I run a big job in my Grails application (one that reads huge amounts of data from the DB), I get this log line in the console (no idea what generates it, maybe the melody plugin?): Memory usage: used=588MB free=14MB total=683MB, and soon after I receive:
Exception in thread "http-nio-8180-AsyncTimeout" Exception in thread "http-nio-8180-ClientPoller-1" Exception in thread "Thread-11" java.lang.OutOfMemoryError: GC overhead limit exceeded,
even though the heap seems to have much more memory left, as displayed in IntelliJ.
Also, when starting the application I explicitly added the JVM params: -Xmx40968m -Xms2048m.
On what type of memory do I run low?
I have a similar problem to Examining Erlang crash dumps - how to account for all memory?, my app crashed with eheap_alloc: Cannot allocate 34385784 bytes of memory (of type "old_heap") and I can't figure out which process caused it.
According to the Memory tab in the crash dump viewer, Process used is 2153MB, but when I sum up all the Memory: lines in the erl_crash.dump (which are in bytes, see guide) the result is only around 285MB. Old heap would be another 62MB, but I think that's included in Memory:. Where could the rest be coming from? Usually the app has a total memory usage of around 300MB.
Also at the top of the dump file it says Calling Thread: scheduler:0 but there is no further information about it. There are only entries for scheduler:1 and scheduler:2. Could they be involved in this or are the other scheduler processes unrelated?
What is the term for when a computer process uses too much memory, and the OS has to terminate it? I was thinking memory leak, but that implies memory is taken up that is not being used, which is not the case. I wouldn't use the term stack overflow either, because it is possible for the memory to be allocated on the heap.
when a computer process uses too much memory, and the OS has to terminate it
What you describe here doesn't happen. The behavior differs from OS to OS, but none of them behaves as you describe. On Windows, for example, a memory allocation may fail, but that does not imply the OS terminating the process. The call to allocate memory returns an error code and the process decides how it handles the situation. Linux has this crazy memory allocation scheme in which allocation succeeds without any backing, and then the actual reference to the memory may fail. In this case Linux runs the oom-killer:
It is the job of the linux 'oom killer' to sacrifice one or more processes in order to free up memory for the system when all else fails.
Note that the oom-killer kills a process chosen by the badness() function, not necessarily the process that actually touched a page that had no backing (i.e. not necessarily the process that requested the memory). In Linux it is also important to distinguish between memory being 'allocated' and memory being 'referenced' for the first time (i.e. the PTE shenanigans).
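To illustrate that allocate-vs-reference distinction, here is a minimal C sketch, assuming a 64-bit Linux box with the default overcommit policy (the 64 GiB figure is arbitrary, and the exact behavior depends on /proc/sys/vm/overcommit_memory and on how much RAM and swap the machine has):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = (size_t)64 * 1024 * 1024 * 1024; /* 64 GiB, likely more than RAM + swap */

    /* With the default overcommit policy this malloc often succeeds:
     * the kernel hands out address space without backing it with pages. */
    char *p = malloc(size);
    if (p == NULL) {
        perror("malloc");       /* strict overcommit: the failure is reported here */
        return 1;
    }
    printf("allocation of %zu bytes 'succeeded'\n", size);

    /* Only when the pages are actually referenced does the kernel have to
     * find physical memory; if it can't, the oom-killer picks a victim
     * (not necessarily this process) and the write below never returns. */
    memset(p, 0xAB, size);

    printf("touched all pages\n"); /* usually never reached for a size this large */
    free(p);
    return 0;
}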
So, strictly speaking, what you describe doesn't exist. However, the common name for a process 'running out of memory' is out of memory, commonly abbreviated as OOM. In most modern systems an OOM condition manifests itself as an exception, and it is an exception raised voluntarily by the process, not by the OS.
One situation when an OS kills a process on the spot is when an OOM occurs during a guard-page PTE miss (i.e. the OS cannot commit the virtual page). As the OS has no room to actually allocate the guard page, it has no room to write the exception record for the process and it cannot raise the exception (that would be a stack overflow exception, since we're talking about a guard page). The OS has no choice but to obliterate the process (technically it is not a kill, since a kill is a type of signal).
Neither "memory leak" nor "stack overflow" cuts it, really. A memory leak is a bug in a program that could result in running out of memory in the long run. A "stack overflow" is an exhaustion of the call stack.
Not all systems terminate processes that use up all memory. It is usually the process itself that fails to allocate more memory (within the constraints set by the system), causing an "out of memory" error to occur (in C on Unix, malloc() returns a NULL pointer and the errno variable is set to ENOMEM). This does not necessarily occur when a single process is hogging all memory, but could happen to any process when lots of memory is used by many other processes (using sloppy language; there might also be system-imposed limits on a per-user basis, etc.).
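A minimal C sketch of that failure mode, assuming a Unix-like system where the limit is actually enforced (for example via ulimit -v or strict overcommit; the 1 TiB request is just a deliberately unreasonable placeholder):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t request = (size_t)1 << 40;   /* 1 TiB: deliberately unreasonable */

    void *p = malloc(request);
    if (p == NULL) {
        /* The OS did not kill us; we simply get an error to handle. */
        fprintf(stderr, "malloc(%zu) failed: %s\n", request, strerror(errno));
        return 1;   /* the process decides: degrade, retry with less, or exit */
    }

    free(p);
    return 0;
}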
I would probably call the process that grabs a huge proportion of the memory on a system a "memory hog", but I have never seen a term describing the event of running out of memory. "Memory exhaustion" maybe, or simply "causing an out of memory situation".
"OutOfMemory exception" is my best guess.
In my process I have created 10 threads and will use those threads for as long as my application is alive. Each thread repeatedly performs some file input and output operation. The problem is that every time a thread starts executing, my process's virtual memory increases.
My analysis is that when a file input/output task is allocated to a thread, the file is loaded into the thread's address space when the thread starts to copy it, and after the copy is completed the thread's address space is not cleared, since the thread has not exited. So if I assign another task to the thread, the new file will also be loaded into the thread's address space.
Hence the main process's virtual memory address space keeps increasing. Please correct me if I am wrong, and also help me understand whether this is a problem if the process runs for a long time.
A few things here.
1) Threads do not have their own memory address space. Processes do. (However, threads do get their own thread local storage.)
2) In managed languages, objects are not cleaned up, and the heap is not compacted, until the garbage collector runs. The garbage collector does not run until it needs to (e.g. the program is close to running out of memory). As long as an object has no strong references to it (nothing running can reach it), it will get cleaned up when the program needs it to be, and you don't need to do anything else. If you want the garbage collector to run early, however, tell it to.
By the way, if resources are needed commonly amongst many different threads, you could consider having some sort of global cache for them. However, early optimization is a grievous sin, so don't go to all that effort until you've determined it solves a REAL problem.
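To make point 1 concrete, here is a rough C/pthreads sketch of the kind of worker described in the question (the file name, buffer size, and copy_task helper are made up for illustration): the buffer is allocated from the one heap shared by all threads in the process, and the process's virtual memory only keeps growing if a task forgets to free it, not because an idle thread "holds on to" an address space of its own.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-task worker: copy one file through a temporary buffer. */
static void *copy_task(void *arg)
{
    const char *path = arg;
    FILE *in = fopen(path, "rb");
    if (in == NULL)
        return NULL;

    /* This buffer lives in the single heap shared by every thread in the
     * process; there is no separate "thread address space". */
    size_t cap = 1 << 20;                 /* 1 MiB scratch buffer (arbitrary) */
    char *buf = malloc(cap);
    if (buf != NULL) {
        size_t n;
        while ((n = fread(buf, 1, cap, in)) > 0) {
            /* ... write n bytes somewhere ... */
        }
        free(buf);   /* without this free, each task leaks 1 MiB and the
                        process's virtual memory grows task after task */
    }
    fclose(in);
    return NULL;
}

int main(void)
{
    pthread_t t;
    /* "input.dat" is just a placeholder file name for the example. */
    if (pthread_create(&t, NULL, copy_task, "input.dat") == 0)
        pthread_join(t, NULL);
    return 0;
}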
While load testing my Erlang server with an increasing number (100, 200, 3000, ...) of processes using +P (which sets the maximum number of concurrent processes), as well as making 10 processes send 1 message each to the rest of the created processes, I got a message on the Erlang console:
"Crash dump was written to: erl_crash.dump. eheap_alloc: Cannot allocate 298930300 bytes of memory (of type "old_heap"). Abnormal termination".
I'm using Windows XP. There is no problem when I create the processes (that works). The crash happens after the processes start communicating (sending hi & receiving hello), and this is the only problem I have (by the way, +hms is the flag that sets the default heap size of processes).
How can I resolve this?
In case somebody finds it useful as one possible reason for such a problem (since I haven't found any specific answer anywhere):
We experienced a similar problem with a RabbitMQ server (Linux, 64-bit, persistent queue, watermarks with default config):
eheap_alloc: Cannot allocate yyy bytes of memory (of type "heap")
eheap_alloc: Cannot allocate xxx bytes of memory (of type "old_heap")
The problem was re-queueing too many messages at once. Our "monitoring" code used a "get" message with the re-queue option without limiting the number of messages to get & re-queue (in our case, all messages in the queue, which was 4K).
So when it tried to add all these messages back to the queue at once, the server failed with the above message.
Hope it will save someone a few hours.
Have a look at that erl_crash.dump file using the Crashdump Viewer:
/usr/local/lib/erlang/lib/observer-1.0/priv/bin/cdv erl_crash.dump
(Apologies for the Unix path; you should be able to find a cdv.bat in your installation on Windows.)
Look at the process list; in my experience there's fairly often a process with a really long message queue where you didn't expect it.
You ran out of memory. Try decreasing the default heap size or limit the number of processes you start.
More advanced solutions include profiling your application to see if you can save some memory there, for example better sharing of binaries or less use of lists and large messages (which will copy the data to every process it's sent to).
One of your processes tries to allocate almost 300MB of memory. You probably have a memory leak in your server. In a proper design you should not have such a big heap in one process unless it is intended.
I have deployed my Rails application on server.
It is working fine, but it crashes on the sign_up page and then the server stops.
I checked my mongrel.log file; it gives the following error:
libgomp: Thread creation failed: Cannot allocate memory
How can I resolve this error?
Thanks.
Sounds like your system was out of memory. You're probably on a VM with a limited amount of memory (and no swap). You'll need to get more memory, or use less.
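For context on where a message like that comes from: on Linux, libgomp creates its worker threads through the POSIX threads API, and every new thread needs memory for its stack, so on a memory-starved VM the creation call fails with an error code instead of the OS killing anything. A rough C sketch of that failure path (the 8 MiB stack size and the worker function are arbitrary placeholders):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void)arg;          /* placeholder worker; does nothing */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 8 * 1024 * 1024);  /* 8 MiB per thread (a typical default) */

    pthread_t tid;
    int err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0) {
        /* This is the kind of condition libgomp surfaces as
         * "Thread creation failed: Cannot allocate memory". */
        fprintf(stderr, "pthread_create failed: %s\n", strerror(err));
    } else {
        pthread_join(tid, NULL);
    }

    pthread_attr_destroy(&attr);
    return 0;
}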