I have a C program loading the libjvm dynamically with dlopen(), finding the JNI_CreateJavaVM function address with dlsym() and initializing the JVM with it.
After doing some Java stuff (which already works fine), I would like to release the JVM's resources so that the process can reuse the memory the JVM malloc'ed on the heap.
I am calling (*jvm)->DestroyJavaVM() and then allocating more memory, but the new allocations do not seem to reuse what the JVM had allocated.
Is there some other API to be called to force the JVM to free the allocated memory?
I mean, in regular usage you don't bother releasing resources at the end of a program, because the OS will release the memory used by the process. But here I want to force a cleanup so the resources can be reused by another library that consumes a lot of memory.
Does that make sense, or am I missing something?
Thanks!
Seb
The HotSpot JVM never attempts to release all of its resources at exit. In particular, it does not deallocate the Object Heap, the Code Cache, Metaspace and so on. This memory cannot be reclaimed until the process terminates.
The VM shutdown sequence is described in the comments to Threads::destroy_vm. Note that it does not say anything about releasing memory. Also, JVM does not attempt to kill or suspend daemon threads.
The documentation to DestroyJavaVM says that
Unloading of the VM is not supported.
A typical solution is to launch JVM in a separate process.
Related
I searched quite a lot for this question but was unable to find my exact query, although it seems general enough that it might have been asked and answered somewhere.
I want to know what happens after a process that caused a memory leak terminates. In my opinion it's not a big deal because of virtual memory: the physical pages can still be allocated to other/new processes after the leaking process is gone.
But I have also read that memory leaks can force you to restart your system, and I don't understand why.
Recommended reading : Operating Systems: Three Easy Pieces
On common OSes (e.g. Linux, Windows, MacOSX, Android) each process has its own virtual address space (and the heap memory, e.g. used for malloc or mmap, is inside that virtual address space), and when the process terminates, its entire virtual address space is destroyed.
So memory leaks don't survive the process itself.
There could be subtle corner cases (e.g. leaks on using shm_overview(7) or shmget(2)).
Read (for Linux) proc(5), try cat /proc/self/maps, and see also this. Learn to use valgrind and the Address Sanitizer.
Read also about Garbage Collection. It is quite relevant.
In modern operating systems the address space is divided into a user space and a system space. The system space is the same for all processes.
When you kill a process, that destroys the user space for the process. If an application has a memory leak, killing the process remedies the leak.
However, the operating system can also allocate memory in the system space. When there is a memory leak in the operating system's allocation of system space memory, killing processes does not free it up.
That is the type of memory leak that forces you to reboot the system.
Let's say that I have an OS that implements malloc by storing a list of segments that the process points to in a process control block. I grab my memory from a free list and give it to the process.
If that process dies, I simply remove the reference to the segment from the process control block, and move the segment back to my free list.
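The scheme I have in mind might look like this toy sketch in C (all names here are invented for illustration; each PCB keeps a linked list of owned segments, and cleanup splices that list back onto a global free list):

```c
#include <stddef.h>

/* One allocated segment, kept on a singly linked list. */
struct segment {
    struct segment *next;
    size_t base, len;
};

/* Toy process control block: just the list of owned segments. */
struct pcb {
    struct segment *segments;
};

static struct segment *free_list;

/* Move every segment owned by the dead process back to the free list.
 * Note the window between the two pointer writes: if the cleaner died
 * right there, a segment would briefly be on neither list -- which is
 * exactly the atomicity worry raised below. */
void reclaim(struct pcb *p) {
    while (p->segments) {
        struct segment *s = p->segments;
        p->segments = s->next;   /* unlink from the PCB ... */
        s->next = free_list;     /* ... and push onto the free list */
        free_list = s;
    }
}
```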
Is it possible to create an idempotent function that does this process cleanup? How is it possible to create a function such that it can be called again, regardless of whether it was called many times before or if previous calls died in the middle of executing the cleanup function? It seems to me that you can't execute two move commands atomically.
How do modern OS's implement the magic involved in culling memory from processes that randomly die? How do they implement it so that it's okay for even the process performing the cull to randomly die, or is this a false assumption that I made?
I'll assume your question boils down to how the OS culls a process's memory if that process crashes.
Although I'm self-educated in these matters, I'll give you two ways an OS can make sure any memory used by a process is reclaimed if the process crashes.
In a typical modern CPU and modern OS with virtual memory:
You have two layers of allocation. Whenever the process calls malloc, malloc tries to satisfy the request from already available memory pages the kernel gave the process. If not enough pages are available, malloc asks the kernel to allocate more pages.
In this case, whenever a process crashes or even if it exits normally, the kernel doesn't care what malloc did, or what memory the process forgot to release. It only needs to free all the pages it gave the process.
In a simpler OS that doesn't care much about performance, memory fragmentation or virtual memory and maybe not even about memory protection:
Malloc/free are implemented entirely on the kernel side (e.g. as system calls). Whenever a process calls malloc or free, the kernel does all the work, and therefore knows about all the memory that needs to be freed. Once the process crashes or exits, the kernel can clean up. Since the kernel is never supposed to crash and keeps a record of all the memory allocated to each process, this is trivial.
Like I said, I'm self-educated, and I didn't check how, for example, Linux or Windows implement this.
We have some problems with Dart. It seems like after some period of time the garbage collector can't clear the memory in the VM, so the application hangs. Has anyone else hit this issue? Are there any memory limits?
You should reuse your objects instead of creating new ones. Use the object pool pattern:
http://en.wikipedia.org/wiki/Object_pool_pattern
Be careful about the canvas and its proper destruction.
Some other articles on GC performance:
http://blog.tojicode.com/2012/03/javascript-memory-optimization-and.html
http://qt-project.org/doc/qt-5/qtquick-performance.html
Are there any memory limits?
Yes. Dart apparently runs with maximum heap sizes that can be configured at launch time:
How to run a dart program with big memory?
(The following applies to all garbage-collected languages ...)
If your application starts to run out of space (i.e. the heap is slowly filling with objects that the GC can't remove), then you may get into a nasty situation where the GC runs more and more frequently, and manages to reclaim less and less memory each time. Eventually you run out of memory, but before that happens the application gets really slow.
The solution is typically to do one or both of the following:
Find out what is causing the memory to run out. The cause is typically not that you are allocating too many objects, but that the unwanted objects are all still reachable, via some data structure your application has built.
Set the "quick death" tuning option for the GC, if available. For example, Java garbage collectors can be configured to measure the time spent garbage collecting (the GC overhead). When the GC overhead exceeds a preset ratio, the Java virtual machine throws an OutOfMemoryError to "pull the plug".
Is there a way to access (read or free) memory chunks outside the memory allocated to the program, without getting access-violation exceptions?
What I actually want to understand, beyond this, is how a memory cleaner (system garbage collector) works. I've always wanted to write such a program. (The language isn't an issue.)
Thanks in advance :)
No.
Any modern operating system will prevent one process from accessing memory that belongs to another process.
In fact, if you understood virtual memory, you'd understand that this is impossible: each process has its own virtual address space.
The simple answer (unless I'm mistaken) is no. Generally it's not a good idea, for two reasons. First, it causes a trust problem between your program and other programs (not to mention that us humans won't trust your application either). Second, if you were able to access another application's memory and change it without the application knowing, you would cause that application to crash (viruses also do this).
A garbage collector is called from a runtime. The runtime "owns" the memory space and allows other applications to "live" within that memory space; this is why the garbage collector can exist.
You would have to create a runtime that the OS allocates memory to, have the runtime execute the application under its authority, and run the GC under its authority as well. You would need to provide some instrumentation or API that allows the application developer to "request" memory from your runtime (not the OS), and your runtime would have to not only respond to such requests but also keep track of the memory it allocates to each application. You would probably also need a framework (a set of DLLs) that makes these calls available to applications (developers would use it to form the requests inside their applications).
You have to make sure that your garbage collector does not reclaim memory other than the memory used by the application being executed, as you may have more than one application running within your runtime at the same time.
Hope this helps.
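A runtime-owned heap with a collector, as described above, can be reduced to a minimal mark-and-sweep sketch in C (all names invented for illustration; objects live in a fixed table and each holds at most one outgoing reference):

```c
#include <stddef.h>

#define HEAP_OBJS 16

struct obj {
    int in_use;
    int marked;
    struct obj *ref;  /* at most one outgoing reference */
};

/* The "runtime-owned" heap: a fixed table of objects. */
static struct obj heap[HEAP_OBJS];

/* Hand out the first unused slot. */
struct obj *gc_alloc(void) {
    for (int i = 0; i < HEAP_OBJS; i++)
        if (!heap[i].in_use) {
            heap[i].in_use = 1;
            heap[i].marked = 0;
            heap[i].ref = NULL;
            return &heap[i];
        }
    return NULL;  /* heap full: a real runtime would collect here */
}

/* Mark everything reachable from the roots, then sweep the rest. */
void gc_collect(struct obj **roots, int nroots) {
    for (int i = 0; i < HEAP_OBJS; i++)
        heap[i].marked = 0;
    for (int r = 0; r < nroots; r++)
        for (struct obj *o = roots[r]; o && !o->marked; o = o->ref)
            o->marked = 1;    /* mark phase: follow references */
    for (int i = 0; i < HEAP_OBJS; i++)
        if (heap[i].in_use && !heap[i].marked)
            heap[i].in_use = 0;  /* sweep: unreachable, reclaim */
}
```

The key point this illustrates is the one made above: the collector can only do this because the runtime owns the whole table and knows every object in it.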
Actually the right answer is YES. There are programs that do it (and if they exist, it must be possible).
You may need to write a kernel driver to accomplish this, but it is possible.
Oh, and here is another example: a debugger's attach command. That is one program interacting with another program's memory, even though both started as different processes.
Of course, messing with another program's memory when you don't know what you're doing will probably make it crash.
What exactly are unmanaged and managed memory?
Can anybody explain them briefly?
Also, what exactly would it mean to take the managed-memory concept down to RAM, i.e. "managed RAM"? What are some of the specifics of "managed RAM" versus "unmanaged RAM"?
It is all the same physical memory. The difference is who is controlling it.
The Microsoft definition is that managed memory is cleaned up by a Garbage Collector (GC), i.e. some process that periodically determines what part of the physical memory is in use and what is not.
Unmanaged memory is cleaned up by something else e.g. your program or the operating system.
The term "unmanaged memory" is a bit like "World War 1": it wasn't called that until after World War 2. Previously, it was just memory.