We are using Dart VM version: 1.24.3 (Wed Dec 13 16:10:39 2017) on "linux_x64" and I've been using the observatory to find out how memory is allocated.
Under Allocation Profile -> Old Generation in observatory, I can see the following: 83.7MB of 498.5MB used
My questions are:
I'm assuming 83.7MB is the memory used, but then what's the 498.5MB? Amount of memory allocated by the VM?
I can see the 498.5MB increasing over time even though the memory used is not that high. Why would the VM allocate more and more memory even though the app doesn't use more than half of it?
When would Dart's VM release memory back to the system? GC doesn't lower the allocated memory much.
How else could I narrow down where the potential memory leak is?
Thanks!
Allocating memory on the heap is an expensive operation, so some programming languages avoid giving it back to the operating system even if the allocated memory is no longer being used.
But in many scenarios, for example microservices running in the cloud, you would like to have low memory usage; otherwise the bill can be quite high.
So in these cases it's really important to release memory once it is no longer being used.
What is Rust's default strategy for uncommitting and returning memory to the operating system?
How can that be changed?
By default Rust uses the system allocator.
This is based on malloc on Unix platforms and HeapAlloc on Windows, plus related functions.
Whether calling free() actually makes the memory available to other processes depends on the libc implementation and your operating system, and that question is mostly unrelated to Rust (see the links below). In any case, memory that was freed should be available for future allocations, so long-running processes don't leak memory.
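As a rough C sketch of that point (the sizes are arbitrary, and malloc_trim() is a glibc-specific call, so this only applies to glibc on Linux): after free(), the pages are typically kept by the allocator for reuse, and you have to ask explicitly if you want them handed back to the kernel.

    /* Sketch (glibc/Linux, sizes arbitrary): freed heap memory is normally kept
     * by the allocator for reuse; malloc_trim() asks glibc to return unused
     * pages to the kernel. Other allocators and platforms behave differently. */
    #include <malloc.h>   /* malloc_trim() -- glibc-specific */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        enum { N = 1000, SZ = 64 * 1024 };
        static char *blocks[N];

        for (int i = 0; i < N; i++) {      /* grow the heap by ~64 MB */
            blocks[i] = malloc(SZ);
            memset(blocks[i], 1, SZ);      /* touch the pages so they are resident */
        }
        for (int i = 0; i < N; i++)        /* free everything */
            free(blocks[i]);

        /* At this point the process may still hold the pages (compare VmRSS in
         * /proc/self/status before and after); they are simply available for
         * future malloc() calls. This explicitly hands unused pages back: */
        malloc_trim(0);

        return 0;
    }

Whether RSS actually drops after free() alone depends on the allocator's trim thresholds, which is exactly the libc-specific behaviour referred to above.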
My general experience is that resource consumption of Rust servers is very low.
See also:
Does free() unmap the memory of a process?
Why does the free() function not return memory to the operating system?
Will malloc implementations return free-ed memory back to the system?
I searched quite a lot but was unable to find my exact question, although it seems general enough that it might have been asked and answered somewhere.
I wanted to know what happens after a process causes a memory leak and then terminates. In my opinion it's not a big deal because of virtual memory: the physical pages can still be allocated to other/new processes even though the old process was leaking memory before it exited.
But I have also read somewhere that memory leaks can force you to restart your system, and I don't understand why.
Recommended reading: Operating Systems: Three Easy Pieces
On common OSes (e.g. Linux, Windows, MacOSX, Android) each process has its own virtual address space (and the heap memory, e.g. used for malloc or mmap, is inside that virtual address space), and when the process terminates, its entire virtual address space is destroyed.
So memory leaks don't survive the process itself.
There could be subtle corner cases (e.g. leaks on using shm_overview(7) or shmget(2)).
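For example, here is a minimal sketch of the shmget(2) corner case (the 1 MiB size is arbitrary): a System V shared memory segment is a kernel object rather than part of the process's private heap, so it is not cleaned up when the process exits.

    /* Sketch: a System V shared memory segment outlives the creating process
     * unless it is explicitly marked for removal. Check with `ipcs -m`. */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void) {
        int id = shmget(IPC_PRIVATE, 1024 * 1024, IPC_CREAT | 0600);
        if (id == -1) { perror("shmget"); return 1; }
        printf("created shared memory segment %d\n", id);

        /* Without the following call the segment stays behind after exit --
         * a "leak" that terminating the process does not fix:
         *     shmctl(id, IPC_RMID, NULL);
         */
        return 0;
    }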
Read (for Linux) proc(5), try cat /proc/self/maps, and see also this. Learn to use valgrind and the Address Sanitizer.
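To make that concrete, a deliberately leaky toy program (file name and sizes are just examples) that both tools will flag:

    /* leak.c -- deliberately leaks memory, for demonstrating the tools above.
     * valgrind:          cc -g leak.c -o leak && valgrind --leak-check=full ./leak
     * AddressSanitizer:  cc -g -fsanitize=address leak.c -o leak && ./leak   */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        for (int i = 0; i < 100; i++) {
            char *p = malloc(4096);
            memset(p, 0, 4096);
            /* p is never freed and the pointer is lost each iteration,
             * so valgrind reports the blocks as "definitely lost". */
        }
        return 0;
    }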
Read also about Garbage Collection. It is quite relevant.
In modern operating systems the address space is divided into a user space and a system space. The system space is the same for all processes.
When you kill a process, that destroys the user space for the process. If an application has a memory leak, killing the process remedies the leak.
However, the operating system can also allocate memory in the system space. When there is a memory leak in the operating system's allocation of system space memory, killing processes does not free it up.
That is the type of memory leak that forces you to reboot the system.
I have a service which intermittently starts gobbling up server memory over time and needs to be restarted to free it. I turned on +ust with gflags, restarted the service, and started taking scheduled UMDH snapshots. When the problem reoccurred, the resource manager reported multiple GB under working set and private bytes, but the UMDH snapshots account for only a few MB of allocations in the process's heaps.
At the top of UMDH snapshot files, it mentions "Only allocations for which the heap manager collected a stack are dumped".
How can an allocation in a process leave no trace when the +ust flag was specified?
How can I find out where/how these GBs were allocated?
UMDH is short for User Mode Dump Heap. The term Heap is a key term here: it refers to the C++ heap manager only. This means that all memory which is allocated by other means than the C++ heap manager is not tracked by UMDH.
This can be:
direct calls to VirtualAlloc()
memory used by .NET, since .NET has its own heap manager
But even for C++, there is the case that allocations larger than 512 kB are not efficiently manageable by the C++ heap manager, so it just redirects them to VirtualAlloc() and does not create a heap segment for such large allocations.
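A small Windows sketch of those two paths (the sizes are arbitrary): a normal heap allocation, which the heap manager and therefore UMDH with +ust can attach a stack trace to, next to a direct VirtualAlloc() that bypasses the heap manager entirely.

    /* Sketch (Windows): heap-manager allocation vs. direct virtual memory.
     * Only the first one shows up in UMDH snapshots when +ust is enabled. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Goes through the process heap -> stack trace recorded with +ust. */
        void *tracked = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, 64 * 1024);

        /* Committed straight from the OS -> invisible to UMDH. */
        void *untracked = VirtualAlloc(NULL, 64 * 1024 * 1024,
                                       MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

        printf("heap block: %p, VirtualAlloc block: %p\n", tracked, untracked);

        if (tracked)   HeapFree(GetProcessHeap(), 0, tracked);
        if (untracked) VirtualFree(untracked, 0, MEM_RELEASE);
        return 0;
    }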
How can I find out where/how these GBs were allocated?
For direct calls to VirtualAlloc(), the WinDbg command !address -summary may give an answer. For .NET, the SOS extension and its !dumpheap -stat command can give an answer.
I have a memory issue with Java 8 on a Jetty server. After reading about memory leaks, it is still not clear to me: will a Java memory leak only cause a JVM OutOfMemoryError, or can it also cause excessive physical memory usage (which can't be tracked by a profiler), resulting in a system crash?
A memory leak can lead to any resource being exhausted.
A common one is running out of file descriptors, e.g. by leaking open files or keeping unused sockets alive. This limit can be as low as 1024.
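As an OS-level illustration of that limit (the sketch is C rather than Java, since the limit is enforced by the kernel regardless of language):

    /* Sketch (POSIX/Linux): query the per-process file descriptor limit and
     * exhaust it the way a descriptor leak eventually would. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        getrlimit(RLIMIT_NOFILE, &rl);
        printf("fd soft limit: %llu\n", (unsigned long long)rl.rlim_cur);

        /* Open files without ever closing them, as a leaking program would. */
        int count = 0;
        while (open("/dev/null", O_RDONLY) != -1)
            count++;

        /* open() starts failing with EMFILE once the limit is reached. */
        printf("opened %d descriptors before hitting the limit (errno=%d)\n",
               count, errno);
        return 0;
    }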
It would be possible to do the same thing with GUI components or any component which proxies an external resource.
Does the JVM ever give memory back to the OS it has previously allocated for the heap?
For example, I have a JVM set to -Xmx5120M and I have actually used all of that memory, doing work that causes the heap to fill up. Let's say a full GC happens, which brings actual heap usage down significantly. Will that drop cause the total heap size to be reduced, presumably to just above actual usage levels, with the "cleared" memory returned to the OS? Or will the memory allocated to the JVM remain at the high level even though it may not be "actively" using all of it in the heap now?
Slim down vs hoard I guess.
EDIT: I'm interested in the Sun/Oracle JVM (i.e. 1.6.0_33, 1.7+ or the like)