In my process I have created 10 threads and use them for as long as my application is alive. Each thread repeatedly performs file input and output operations. The problem is that every time a thread starts executing, my process's virtual memory increases.
My analysis is that when a file I/O task is allocated to a thread, the file is loaded into the thread's address space while the thread copies it, and after the copy completes that address space is not cleared, because the thread has not yet exited. So if I assign another task to the thread, the new file is loaded into the thread's address space as well.
Hence the main process's virtual memory address space keeps increasing. Please correct me if I am wrong, and also help me understand whether this causes a problem if the process runs for a long time.
A few things here.
1) Threads do not have their own memory address space. Processes do. (However, threads do get their own thread local storage.)
2) In managed languages, objects are not cleaned up and the heap compacted until the garbage collector runs. The garbage collector does not run until it needs to (e.g., when the program is close to running out of memory). As long as an object has no strong references to it (nothing running can reach it), it will get cleaned up when the program needs it to be cleaned up, and you don't need to do anything else. If you want the garbage collector to run early, however, tell it to.
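For example, in C# you can ask for a collection explicitly; a minimal sketch (forcing a collection is rarely useful outside of diagnostics):

    using System;

    class Program
    {
        static void Main()
        {
            // Allocate something, then drop the only reference to it.
            var buffer = new byte[10 * 1024 * 1024];
            buffer = null;

            // Ask the runtime to collect now instead of waiting for memory
            // pressure, and let any pending finalizers run to completion.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }
    }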
By the way, if resources are commonly needed among many different threads, you could consider having some sort of global cache for them (sketched below). However, premature optimization is a grievous sin, so don't go to all that effort until you've determined it solves a REAL problem.
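A minimal sketch of such a cache in C#, assuming the shared resources are immutable file contents keyed by path (FileCache is an invented name):

    using System.Collections.Concurrent;
    using System.IO;

    // One process-wide cache shared by all threads. ConcurrentDictionary
    // makes the lookup-or-load step thread-safe without explicit locks.
    static class FileCache
    {
        private static readonly ConcurrentDictionary<string, byte[]> Cache =
            new ConcurrentDictionary<string, byte[]>();

        public static byte[] Get(string path)
        {
            // Loads the file on first use; subsequent callers on any thread
            // reuse the cached bytes. (Under a race the factory may run more
            // than once, but only one result is kept.)
            return Cache.GetOrAdd(path, p => File.ReadAllBytes(p));
        }
    }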
Related
Let's say that I have an OS that implements malloc by storing a list of segments that the process points to in a process control block. I grab my memory from a free list and give it to the process.
If that process dies, I simply remove the reference to the segment from the process control block, and move the segment back to my free list.
Is it possible to create an idempotent function that does this process cleanup? How is it possible to create a function such that it can be called again, regardless of whether it was called many times before or if previous calls died in the middle of executing the cleanup function? It seems to me that you can't execute two move commands atomically.
How do modern OSes implement the magic involved in culling memory from processes that randomly die? How do they make it safe for even the process performing the cull to die at random, or is that a false assumption on my part?
I'll assume your question boils down to how the OS culls a process's memory if that process crashes.
Although I'm self-educated in these matters, I'll give you two ways an OS can make sure any memory used by a process is reclaimed if the process crashes.
In a typical modern CPU and modern OS with virtual memory:
You have two layers of allocation. Whenever the process calls malloc, malloc tries to satisfy the request from already available memory pages the kernel gave the process. If not enough pages are available, malloc asks the kernel to allocate more pages.
In this case, whenever a process crashes or even if it exits normally, the kernel doesn't care what malloc did, or what memory the process forgot to release. It only needs to free all the pages it gave the process.
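As a toy model of that page-level bookkeeping, here is a C# sketch (purely illustrative; a real kernel tracks pages in per-process page tables, not dictionaries):

    using System.Collections.Generic;
    using System.Linq;

    // Toy kernel: it records every page it hands out, per process. When a
    // process dies (cleanly or not), the kernel frees that process's pages
    // without knowing or caring what the process's malloc did with them.
    class ToyKernel
    {
        private readonly Queue<int> freePages =
            new Queue<int>(Enumerable.Range(0, 1024)); // fake page numbers

        private readonly Dictionary<int, List<int>> pagesOwnedBy =
            new Dictionary<int, List<int>>();

        // Called when a process's malloc runs out of pages.
        public int AllocatePage(int pid)
        {
            int page = freePages.Dequeue();
            if (!pagesOwnedBy.TryGetValue(pid, out var owned))
                pagesOwnedBy[pid] = owned = new List<int>();
            owned.Add(page);
            return page;
        }

        // Called when the process exits or crashes.
        public void ReclaimAll(int pid)
        {
            if (!pagesOwnedBy.TryGetValue(pid, out var owned))
                return; // nothing recorded: already reclaimed, so a repeat call is harmless
            foreach (int page in owned)
                freePages.Enqueue(page);
            pagesOwnedBy.Remove(pid);
        }
    }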
In a simpler OS that doesn't care much about performance, memory fragmentation or virtual memory and maybe not even about memory protection:
Malloc/free is implemented completely on the kernel side (e.g., as system calls). Whenever a process calls malloc/free, the kernel does all the work, and therefore knows about all the memory that needs to be freed. Once the process crashes or exits, the kernel can clean up. Since the kernel is never supposed to crash and keeps a record of all the allocated memory per process, this is trivial.
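To connect this back to the idempotency part of the question, here is a hedged C# sketch (all names invented) of a cleanup routine that is safe to call repeatedly; protection against the cleaner itself dying mid-call comes from running it in the kernel, which is assumed not to crash:

    using System.Collections.Generic;

    class Segment { /* base address, length, ... */ }

    class ProcessControlBlock
    {
        public List<Segment> Segments = new List<Segment>();
    }

    class KernelAllocator
    {
        private readonly List<Segment> freeList = new List<Segment>();
        private readonly object gate = new object();

        // Safe to call any number of times for the same PCB: once its
        // segment list is empty, further calls do nothing. The lock makes
        // each remove-and-recycle pair atomic with respect to other threads.
        public void Cleanup(ProcessControlBlock pcb)
        {
            lock (gate)
            {
                while (pcb.Segments.Count > 0)
                {
                    int last = pcb.Segments.Count - 1;
                    Segment s = pcb.Segments[last];
                    pcb.Segments.RemoveAt(last); // drop the process's reference
                    freeList.Add(s);             // then recycle the memory
                }
            }
        }
    }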
Like I said, I'm self-educated, and I didn't check how, for example, Linux or Windows implement it.
While load testing my Erlang server with an increasing number (100, 200, 3000, ...) of processes, using +P to set the maximum number of concurrent processes, and having 10 processes each send 1 message to the rest of the created processes, I got a message on the Erlang console:
"Crash dump was written to: erl_crash.dump. eheap_alloc: Cannot allocate 298930300 bytes of memory (of type "old_heap"). Abnormal termination".
I'm using Windows XP. There is no problem when I create the processes (that part works). The crash happens after the processes start communicating (sending hi and receiving hello), and this is the only problem I have (by the way, I also use +hms, which sets the default heap size of processes).
How can I resolve this?
In case somebody finds it useful as one possible cause of this problem (since I haven't found a specific answer anywhere):
We experienced a similar problem with a RabbitMQ server (Linux, 64-bit, persistent queue, watermarks at the default config):
eheap_alloc: Cannot allocate yyy bytes of memory (of type "heap")
eheap_alloc: Cannot allocate xxx bytes of memory (of type "old_heap")
The problem was re-queueing too many messages at once. Our "monitoring" code used a "get" with the re-queue option without limiting the number of messages to get and re-queue (in our case, all messages in the queue, which was about 4K messages).
So when it tried to add all those messages back to the queue at once, the server failed with the message above.
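A sketch of the capped version, here using the RabbitMQ .NET client for illustration (queue name and batch size are made up; the point is to limit how much you fetch and re-queue per pass):

    using RabbitMQ.Client;

    static class QueueMonitor
    {
        // Inspect-and-requeue with a cap, instead of draining the whole queue.
        // 'channel' is an already-open IModel; "work" is the monitored queue.
        public static void PeekSome(IModel channel, int maxBatch)
        {
            for (int i = 0; i < maxBatch; i++)
            {
                BasicGetResult result = channel.BasicGet("work", autoAck: false);
                if (result == null)
                    break; // queue is empty

                // ... inspect result.Body here ...

                // Put the message back without acknowledging it.
                channel.BasicNack(result.DeliveryTag, multiple: false, requeue: true);
            }
        }
    }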
Hope this saves someone a few hours.
Have a look at that erl_crash.dump file using the Crashdump Viewer:
/usr/local/lib/erlang/lib/observer-1.0/priv/bin/cdv erl_crash.dump
(Apologies for the Unix path; you should be able to find a cdv.bat in your installation on Windows.)
Look at the process list; in my experience there's fairly often a process with a really long message queue where you didn't expect it.
You ran out of memory. Try decreasing the default heap size or limiting the number of processes you start.
More advanced solutions include profiling your application to see where you can save memory, for example by sharing binaries better or by using fewer lists and large messages (message data is copied to every process it is sent to).
One of your processes tried to allocate almost 300 MB of memory. You probably have a memory leak in your server. In a proper design, no single process should have such a big heap unless it is intended.
I'm using NSOperationQueue to manage a phase of an iOS application which is quite long, so I would like to run it asynchronously. Inside that phase I allocate big arrays in C by calling calloc directly.
By big I mean a 1024x256 two-dimensional array of floats, and similar things.
If everything runs on the main thread, the app locks up while computing but otherwise everything goes fine. If, instead, I move the heavy part to an NSInvocationOperation, I get many strange results; under the debugger I sometimes get a strange message in the console stating
No memory available to program now: unsafe to call malloc
so I was wondering whether threads managed by an operation queue have different restrictions compared to the main thread, and if so, what is the best way to work around this issue.
There are no restrictions that I know of. However, you may be hitting the edge of available RAM. Since iOS doesn't page memory out to disk, when memory gets low it sends a warning to apps asking them to free up RAM. That may be the source of your issue.
Use Instruments to profile how much RAM you're using. If it's more than about 20 MB or so, you're probably in danger of being terminated for excessive memory usage anyway.
I understand the basic concept of the stack and the heap, but it would be great if anyone could resolve the following confusions:
Is there a single stack for the entire application process, or is a new stack created for each thread that starts?
Is there a single heap for the entire application process, or is a new heap created for each thread that starts?
If a stack is created for each thread, then how does the process manage the sequential flow of threads (and hence stacks)?
There is a separate stack for every thread. This is true not only for the CLR, and not only for Windows, but for pretty much every OS or platform out there.
There is a single heap for each application domain. A single process may run several app domains at once, and a single app domain may run several threads.
To be more precise, there are usually two heaps per domain: one regular and one for really large objects (the Large Object Heap, used for objects of roughly 85 KB and up, such as a big array).
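As a small illustration of the split, in this C# sketch each thread gets its own copy of the local (stack) variable, while both threads mutate the same heap object:

    using System;
    using System.Threading;

    class Program
    {
        // One object on the shared heap, visible to every thread.
        static readonly int[] shared = new int[1];

        static void Worker(object id)
        {
            int local = (int)id;                   // lives on this thread's own stack
            Interlocked.Add(ref shared[0], local); // the heap is shared
            Console.WriteLine($"thread {local}: local={local}");
        }

        static void Main()
        {
            var t1 = new Thread(Worker);
            var t2 = new Thread(Worker);
            t1.Start(1);
            t2.Start(2);
            t1.Join();
            t2.Join();
            Console.WriteLine($"shared total = {shared[0]}"); // prints 3
        }
    }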
I don't understand what you mean by "sequential flow of threads".
One stack for each thread, all threads share the same heaps.
There is no 'sequential flow' of threads. A thread is an operating system object that stores a copy of the processor state. The processor state includes the register values. One of them is ESP, the stack pointer. Another really important one is EIP, the instruction pointer. When the operating system switches between threads, it stores the processor state in the current thread object and reloads the state from the thread object for the thread that was selected to run next. The processor now simply continues executing where it left off previously.
Getting a thread started is perhaps now easy to understand as well. The operating system allocates a megabyte of memory for the stack, initializes the ESP register to point into that memory, and sets the EIP register to the address of the method where the thread should start executing: the value of the ThreadStart delegate in C#.
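For illustration, here is how that looks from the C# side; the second constructor argument explicitly sets the size of the stack the OS will allocate (1 MB, matching the default):

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            // The ThreadStart delegate becomes the thread's starting EIP;
            // 1 MB here is the stack that ESP will point into.
            var thread = new Thread(new ThreadStart(Work), 1024 * 1024);
            thread.Start();
            thread.Join();
        }

        static void Work()
        {
            Console.WriteLine("running on my own 1 MB stack");
        }
    }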
Each thread must have its own stack; that's where local variables and parameters are held, along with the return addresses of the calling functions.
I am using Lucene.Net-2.3.2.1 in my project. My project also supports a multithreaded environment. The Lucene indexing service runs as a Windows service. The problem is that while the service runs, its memory usage gradually increases: it starts at 13 MB, and after some hours Task Manager shows 150 MB. Using the dotTrace profiler I identified methods and objects in Lucene.Net that account for the increase; the call tree shows that Index()- and Segment()-related functions hold on to more and more memory for as long as the service runs. Eventually this will crash the system.
Please help me figure out how to fix this memory leak in my application.
Increasing memory usage doesn't necessarily imply a memory leak. Memory leaks in .NET are not that common, but there are a few things you should check (see the sketch after this list):
Events. Make sure that all event listeners are detached from the publisher as soon as they are no longer used. Failing to do so will keep the listeners alive as long as the publisher is alive.
If the code uses any disposable resource that holds handles to native code, be sure to call Dispose on these as soon as they are no longer needed.
A blocking finalizer will prevent other finalizable objects from being garbage collected, so make sure finalizers don't do any more than they have to (and in many cases they are probably not needed anyway).
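A minimal sketch of the first two points (the type names are illustrative):

    using System;

    class Publisher
    {
        public event EventHandler Tick;
        public void Raise() => Tick?.Invoke(this, EventArgs.Empty);
    }

    class Listener : IDisposable
    {
        private readonly Publisher publisher;

        public Listener(Publisher p)
        {
            publisher = p;
            publisher.Tick += OnTick; // the publisher now references us
        }

        private void OnTick(object sender, EventArgs e) { /* ... */ }

        public void Dispose()
        {
            // Detach, or the publisher keeps this listener alive for
            // as long as the publisher itself is alive.
            publisher.Tick -= OnTick;
        }
    }

    class Program
    {
        static void Main()
        {
            var pub = new Publisher();
            using (var listener = new Listener(pub))
            {
                pub.Raise(); // ... use the listener ...
            } // Dispose runs here, breaking the event reference
        }
    }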
If you want to examine which objects are being kept alive, as well as why they are not collected, I recommend using WinDbg + SOS.