How the OS handles memory leaks

I searched quite a lot for this but was unable to find my exact question, although it seems general enough that it might have been asked and answered somewhere.
I want to know what happens after a process that has caused a memory leak terminates. In my opinion it's not a big deal because of virtual memory: after the leaking process exits, its physical pages can still be allocated to other or new processes.
But I have also read that memory leaks can force you to restart your system, and I don't understand why.

Recommended reading: Operating Systems: Three Easy Pieces
On common OSes (e.g. Linux, Windows, macOS, Android) each process has its own virtual address space (and the heap memory, e.g. used for malloc or mmap, is inside that virtual address space), and when the process terminates, its entire virtual address space is destroyed.
So memory leaks don't survive the process itself.
There could be subtle corner cases, e.g. leaks of shared memory (see shm_overview(7) or shmget(2)), which the kernel does not automatically remove when the process exits.
Read (for Linux) proc(5) and try cat /proc/self/maps. Learn to use valgrind and AddressSanitizer.
Read also about garbage collection; it is quite relevant.
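As a concrete illustration, here is a minimal C program with a deliberate leak that valgrind or AddressSanitizer will flag (the buffer size is arbitrary); note that the leaked pages are reclaimed by the kernel the moment the process exits:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Allocate 1 MiB and never free it: a classic leak. */
        char *buf = malloc(1024 * 1024);
        if (buf == NULL)
            return 1;
        memset(buf, 42, 1024 * 1024);

        /* No free(buf) here.  valgrind reports it as "definitely lost";
           AddressSanitizer (on Linux) reports it at exit via LeakSanitizer.
           Once the process terminates, the kernel reclaims the pages anyway. */
        return 0;
    }

Compile with gcc -g leak.c and run valgrind ./a.out, or build with -fsanitize=address; on Linux both report the unfreed block, and neither changes the fact that the OS tears down the whole address space at exit.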

In modern operating systems the address space is divided into a user space and a system space. The system space is the same for all processes.
When you kill a process, that destroys its user space, so if an application has a memory leak, killing the process remedies the leak.
However, the operating system can also allocate memory in the system space. When there is a memory leak in the operating system's own system-space allocations, killing processes does not free it up.
That is the type of memory leak that forces you to reboot the system.
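On Linux you can watch for this kind of kernel-space growth by reading the slab counters in /proc/meminfo (field names as documented in proc(5)); a small sketch:

    #include <stdio.h>
    #include <string.h>

    /* Print the kernel slab counters from /proc/meminfo.  A kernel-space
       leak tends to show up as SUnreclaim growing without bound, and
       killing user processes does not bring it back down. */
    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (f == NULL)
            return 1;

        char line[256];
        while (fgets(line, sizeof line, f) != NULL) {
            if (strncmp(line, "Slab:", 5) == 0 ||
                strncmp(line, "SReclaimable:", 13) == 0 ||
                strncmp(line, "SUnreclaim:", 11) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }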

Related

Should the memory of a HashMap be freed when it goes out of scope? [duplicate]

Allocating memory on the heap is an expensive operation, so some programming languages avoid giving it back to the operating system even if the allocated memory is not being used anymore.
But for many scenarios, for example microservices running in the cloud, you would like to have low memory usage, otherwise the bill can be quite high.
So in these cases it's really important to release memory once it is no longer being used.
What is Rust's default strategy to uncommit and return memory to the operating system?
How can that be changed?
By default Rust uses the system allocator.
This is based on malloc on Unix platforms and HeapAlloc on Windows, plus related functions.
Whether calling free() actually makes the memory available to other processes depends on the libc implementation and your operating system, and that question is mostly unrelated to Rust (see the links below). In any case, memory that was freed should be available for future allocations, so long-running processes don't leak memory.
My general experience is that resource consumption of Rust servers is very low.
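To make the libc side concrete, here is a glibc-specific sketch (malloc_trim is a glibc extension, so this only illustrates one implementation):

    #include <malloc.h>   /* glibc extension: malloc_trim */
    #include <stdlib.h>

    int main(void)
    {
        /* Many small allocations are served from the heap arena; free()
           returns them to the allocator, not necessarily to the OS. */
        enum { N = 100000 };
        static void *blocks[N];
        for (int i = 0; i < N; i++)
            blocks[i] = malloc(128);
        for (int i = 0; i < N; i++)
            free(blocks[i]);

        /* Explicitly ask glibc to hand trimmed heap pages back to the
           kernel.  Large allocations (above the mmap threshold) are
           mmap'ed and go back to the OS immediately on free(). */
        malloc_trim(0);
        return 0;
    }

Whether and when trimming happens automatically is up to the allocator; the point is only that "freed" and "returned to the OS" are separate events.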
See also:
Does free() unmap the memory of a process?
Why does the free() function not return memory to the operating system?
Will malloc implementations return free-ed memory back to the system?

A term for when a computer process exceeds the memory allocated by the OS

What is the term for when a computer process uses too much memory, and the OS has to terminate it? I was thinking memory leak, but that implies there is memory not being used that is taken up, which is not the case. I wouldn't use the term stack overflow either, because it is possible for the memory to be allocated on the heap.
when a computer process uses too much memory, and the OS has to terminate it
What you describe here doesn't happen. The behavior differs from OS to OS, but none of them works the way you describe. On Windows, for example, a memory allocation may fail, but that does not imply the OS terminating the process: the call to allocate memory returns an error code and the process decides how to handle the situation. Linux has this crazy memory allocation scheme in which an allocation can succeed without any backing, and the actual reference to the memory may fail later. In that case Linux runs the oom-killer:
It is the job of the linux 'oom killer' to sacrifice one or more processes in order to free up memory for the system when all else fails.
Note that the oom-killer kills a process chosen by the badness() function, not necessarily the process that actually touched a page that had no backing (i.e. not necessarily the process that requested the memory). In Linux it is also important to distinguish between memory being 'allocated' and memory being 'referenced' for the first time (i.e. the PTE shenanigans).
So, strictly speaking, what you describe doesn't exist. However, the common name for a process 'running out of memory' is out of memory, commonly abbreviated as OOM. In most modern systems the OOM condition manifests itself as an exception, and it is an exception raised voluntarily by the process, not by the OS.
One situation in which an OS kills a process on the spot is when an OOM occurs during a guard-page PTE miss (i.e. the OS cannot commit the virtual page). Since the OS has no room to commit the guard page, it has no room to write the exception record for the process and cannot raise the exception (which would be a stack overflow exception, since we're talking about a guard page). The OS has no choice but to obliterate the process (technically it is not a kill, since a kill is a type of signal).
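A hedged sketch of that Linux behavior (the exact outcome depends on vm.overcommit_memory, available RAM and swap, and any ulimit settings, so treat it as illustrative only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Depending on the overcommit policy, this huge malloc may
           succeed even though the machine has far less RAM + swap... */
        size_t size = (size_t)64 * 1024 * 1024 * 1024;   /* 64 GiB */
        char *p = malloc(size);
        if (p == NULL) {
            puts("allocation refused up front");
            return 1;
        }

        /* ...because pages only get real backing when first touched.
           Writing to all of them forces the kernel to find memory; if it
           cannot, the oom-killer picks a victim chosen by badness(),
           which is not necessarily this process. */
        memset(p, 1, size);

        puts("survived the write");
        free(p);
        return 0;
    }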
Neither "memory leak" nor "stack overflow" cuts it, really. A memory leak is a bug in a program that could result in running out of memory in the long run. A "stack overflow" is an exhaustion of the call stack.
Not all systems terminate processes that use up all memory. It is usually the process itself that fails to allocate more memory (within the constraints set by the system), causing an "out of memory" error to occur (in C on Unix, malloc() returns a NULL pointer and the errno variable is set to ENOMEM). This does not necessarily occur because a single process is hogging all memory; it could happen to any process when lots of memory is used by many other processes (and, using sloppy language, there might be system-imposed limits on a per-user basis, etc.).
I would probably call the process that grabs a huge proportion of the memory on a system a "memory hog", but I have never seen a term describing the event of running out of memory. "Memory exhaustion" maybe, or simply "causing an out of memory situation".
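The check described above looks roughly like this in C on Unix (the oversized request is just there to provoke the failure):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Ask for far more memory than the system will grant. */
        void *p = malloc((size_t)1 << 46);   /* 64 TiB */
        if (p == NULL) {
            /* The process is not terminated: it just gets an error and
               decides for itself how to handle the OOM condition. */
            fprintf(stderr, "malloc failed: %s\n", strerror(errno));
            return 1;
        }
        free(p);
        return 0;
    }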
OutOfMemory-Exceptions is my best guess

How to respond to memory pressure notifications from GCD?

I am using GCD to get memory pressure notifications.
GCD documentation describes some constants like so:
DISPATCH_MEMORYPRESSURE_WARN
The system memory pressure condition is at the warning stage. Apps
should release memory that they do not need right now.
DISPATCH_MEMORYPRESSURE_CRITICAL
The system memory pressure condition is at the critical stage. Apps
should release as much memory as possible.
Seems logical that I should free unused memory. However, in other places (man pages and source code) I find this note related to these constants:
Elevated memory pressure is a system-wide condition that applications
registered for this source should react to by changing their future
memory use behavior, e.g. by reducing cache sizes of newly initiated
operations until memory pressure returns back to normal.
However, applications should NOT traverse and discard existing caches
for past operations when the system memory pressure enters an elevated
state, as that is likely to trigger VM operations that will further
aggravate system memory pressure.
This confuses me. So should I free memory, or should I just stop allocating new memory?
MacOS has a virtual memory (VM) system that uses a backing store: the file system. The file system is used to hold memory that is not currently in use. When the system is running low on real memory (RAM), things in memory that are not actively being used can be written to disk and loaded back into RAM later.
iOS has a virtual memory system but not a backing store. When memory runs low the system asks apps to lower their memory footprint. If that does not free up enough memory the system will start killing apps.
The guidance you are quoting from the libdispatch headers is referring to the MacOS virtual memory system, not iOS.
On iOS an application should discard objects and reduce cache sizes when handling a memory warning. Particular attention should be paid to objects that are using dirty (non-purgeable) memory. This is memory the system can not automatically reuse on its own - it must be discarded by the application first. In a typical iOS application images (pictures) use the most dirty memory.
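For reference, registering for these notifications with GCD looks roughly like this (plain C with Apple's blocks extension; the two cache-handling functions are hypothetical placeholders for your app's own logic):

    #include <dispatch/dispatch.h>
    #include <stdint.h>

    /* Hypothetical hooks - replace with your app's own cache management. */
    static void reduce_future_cache_sizes(void) { /* ... */ }
    static void release_all_discardable_memory(void) { /* ... */ }

    static dispatch_source_t memory_pressure_source;

    void install_memory_pressure_handler(void)
    {
        memory_pressure_source = dispatch_source_create(
            DISPATCH_SOURCE_TYPE_MEMORYPRESSURE,
            0,
            DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL,
            dispatch_get_main_queue());

        dispatch_source_set_event_handler(memory_pressure_source, ^{
            uintptr_t pressure = dispatch_source_get_data(memory_pressure_source);
            if (pressure & DISPATCH_MEMORYPRESSURE_CRITICAL) {
                release_all_discardable_memory();   /* iOS-style response */
            } else if (pressure & DISPATCH_MEMORYPRESSURE_WARN) {
                reduce_future_cache_sizes();        /* shrink future caches */
            }
        });

        dispatch_resume(memory_pressure_source);
    }

Whether the CRITICAL branch should aggressively discard existing caches is exactly the macOS-versus-iOS distinction described above.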

How is disk memory being used/consumed by programs?

A dummy question:
Recently my disk ran out of space:
I kept getting java.lang.OutOfMemoryError: Java heap space, and later my VirtualBox reported a "Not Enough Free Space available on disk" error.
Then it turned out that my 256 GB SSD had been almost completely used up.
So I was wondering how running programs could consume my disk space.
How does this work?
I know the basics behind this: allocating space on the heap/stack, then deallocating it after use. (Correct me if I'm wrong.)
But if that is the case, then the disk should not be used up, right? (Assuming I don't add anything else to my desktop and only use it to run a fixed set of programs.)
I really want to understand how disk space and memory are consumed by running programs.
If this question has been asked before, please relate it to that one.
I apologize for dummy question, but I believe it will be helpful to fellow programmers like me.
Thanks for making it clearer. Q1: Why do programs consume disk space? Q2: How does "java.lang.OutOfMemoryError: Java heap space" occur? Is that related to memory?
Why do programs consume disk space?
I know the basics behind this, allocating space on a heap/stack, then deallocating them after use. But if this is the case, then the disk should not be used up, right?
In fact, it can be used up. Memory allocations can consume hard-disk space if the allocation in your process's virtual memory happens to be mapped to a pagefile on disk, and your pagefile size is set to be managed by the operating system.
If you want to know more about memory mapping there's a great question here:
Understanding Virtual Address, Virtual Memory and Paging
The pagefile growth won't actually be a direct response to your allocation; it is more a response to the new commit size getting close to the reserved size. If you want to know more about this process (commit vs. reserved, stack expansion, etc.) I recommend reading Pushing the Limits of Windows: Physical Memory.
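A small Windows-specific sketch of the reserve-versus-commit distinction (the sizes are arbitrary):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Reserve 1 GiB of address space: no RAM and no pagefile space is
           used yet, only address-space bookkeeping. */
        SIZE_T size = (SIZE_T)1 << 30;
        void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
        if (reserved == NULL)
            return 1;

        /* Commit the first 64 MiB: this charges the system commit limit
           (RAM + pagefile).  Enough commits like this, across all
           processes, is what makes an OS-managed pagefile grow. */
        if (VirtualAlloc(reserved, 64 << 20, MEM_COMMIT, PAGE_READWRITE) == NULL) {
            printf("commit failed: out of commit charge\n");
            VirtualFree(reserved, 0, MEM_RELEASE);
            return 1;
        }

        VirtualFree(reserved, 0, MEM_RELEASE);
        return 0;
    }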
Why does java.OutOfMemoryError occur?
http://docs.oracle.com/javase/7/docs/api/java/lang/OutOfMemoryError.html
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
Generally this happens because your pagefile is too small or your disk is too full.
See also:
How to deal with "java.lang.OutOfMemoryError: Java heap space" error (64MB heap size)
java.lang.OutOfMemoryError: Java heap space

Use of Virtual Memory

What happens if a page is present in Virtual Memory, but not in main memory?
How is it executed?
Is the program loaded into main memory from virtual memory? If it is loaded into main memory from virtual memory, that would be an I/O operation since it is on disk. Then what is the use of virtual memory, if we have to perform an I/O operation to execute it anyway?
And when a user program generates a logical address and the MMU maps it to a physical address, if that address is not present in main memory, does the OS then check in virtual memory?
Thanks in advance
Let me start by saying that this is a very simplified explanation, not the definite guide to virtual memory;
Virtual memory basically gives your process the illusion that it's the only thing running in the memory space of the computer. When the process accesses a virtual memory page, the MMU translates it into a physical memory access. If the physical memory page does not yet exist (or isn't in physical memory), the process is suspended and the operating system is notified and can add the page to memory (for example by fetching it from disk) before resuming the process again.
One reason for virtual memory is that the process doesn't have to worry much about how much memory it uses and doesn't have to change if you, for example, expand the physical memory of the machine; it can just work as if it had all the memory it can address and let the operating system work out how the actual memory is used.
The reason it doesn't (usually) slow the computer to a crawl is that many processes don't use big parts of their memory at all times; if a memory page isn't accessed for an hour, the physical memory backing it can be put to much better use during that hour than being kept active. Of course, the more memory your processes actively use on a continuous basis, the slower they will appear to run.
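A small Linux sketch that makes the "present in virtual memory but not in main memory" idea concrete (mincore is Linux/BSD-specific, and the file path is just an example):

    #define _DEFAULT_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Map one page of a file without reading it: the page exists in
           the process's virtual address space but may not be in RAM yet.
           (If the file is already in the page cache, it can show up as
           resident even before we touch it.) */
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0)
            return 1;

        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        unsigned char resident;
        mincore(p, pagesize, &resident);
        printf("before access: resident = %d\n", resident & 1);

        /* Touching the page triggers a page fault; the kernel performs
           the I/O, loads the page into RAM, and the access is retried. */
        volatile char c = p[0];
        (void)c;

        mincore(p, pagesize, &resident);
        printf("after access:  resident = %d\n", resident & 1);

        munmap(p, pagesize);
        close(fd);
        return 0;
    }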

Resources