I have a native Win32 application whose Working Set increases over time during load testing as an HTTP server. There are no memory leaks (confirmed by tracking Private Bytes in PerfMon and by using FastMM to monitor memory usage at runtime). Note: the load is constant, about 50 concurrent connections, so there is no significant variation.
Using Process Explorer, I've narrowed the problem down to Token handle leaks. I've also used madKernel to report on handle usage counts, which also confirms that token handles keep increasing.
To be precise, here is what I'm seeing in Process Explorer: all the Token handles it lists have the same name ('Doug-M46\Doug:ff739').
There are no security API calls (or other related calls that require security credentials) that I can see in the code, but something must be getting called that causes this; I just don't know what else to look for.
I've used AQTime to try to track down the source of the leak, but have not had any luck. At this point I'm considering hooking all the possible API calls that could cause this leak so I can track it down, but I'd prefer to avoid such an extreme measure.
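One thing I'm considering in the meantime is simply logging the process handle count at intervals so I can at least see which traffic pattern makes it jump. A minimal sketch of what I mean (GetProcessHandleCount is a kernel32 export; newer Delphi versions may already declare it in Winapi.Windows, and the logging here is just OutputDebugString):

    uses
      Winapi.Windows, System.SysUtils;

    // GetProcessHandleCount is exported by kernel32 (XP SP1 and later).
    // If your Windows unit already declares it, this local declaration
    // simply shadows the existing one.
    function GetProcessHandleCount(hProcess: THandle;
      var pdwHandleCount: DWORD): BOOL; stdcall; external kernel32;

    // Logs the total handle count of the current process; call it
    // periodically (e.g. from a timer) while the load test is running.
    procedure LogHandleCount;
    var
      Count: DWORD;
    begin
      if GetProcessHandleCount(GetCurrentProcess, Count) then
        OutputDebugString(PChar(Format('Handle count: %d', [Count])));
    end;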
My application uses the ICS HTTP Server component in a separate thread to handle HTTP requests for my application (32-bit application, Delphi XE-2, ICS V8 Gold, Windows 7 Professional Build 7601: SP1).
Any insight into the cause of these handle leaks would be very much appreciated, as I've been trying to hunt them down for quite a while now.
References:
What can cause section handle leaks?
For a project I have made several message flows in WebSphere Message Broker 7.
One of these flows is quite a complicated flow with lots of database calls and transformations. However, it performs correctly and rather quickly given what it needs to do.
The problem is that while it is active, it consumes more and more resources until the broker runs out of memory. Even with a small test case that completes before anything crashes, the resources are not released: I can confirm the output of the flow (which is fine), but operations reports that it keeps consuming memory.
So I'm guessing a memory leak. I have no idea how or where to find it. Could anyone point me in the right direction?
If additional information is necessary, just ask. I would prefer not to put the entire compute node in this thread due to its size.
The fact that you are seeing high memory consumption even after processing is done makes me think that your message flow holds some kind of state in memory, via shared or static variables.
You might be saving a lot of data in shared variables in ESQL, or static variables in Java in your flow.
Or, if you are using JavaCompute nodes, you might be leaking resources such as ResultSets (for example, by not closing them).
Or it could be a bug in the product; you should check for known and fixed leaks in the fix packs issued for V7:
http://www-01.ibm.com/support/docview.wss?&uid=swg27019145
As stated in my comment above, a DataFlowEngine never releases its resources after completion.
This is the IBM support document explaining the matter (bullet 8):
http://www-01.ibm.com/support/docview.wss?uid=swg21665926#8
Apart from that, the real issue turned out to be the use of Environment variables inside a loop, which consumed a lot of memory. Deleting those variables after use is a practice I can recommend.
According to Process Explorer / Task Manager, my application has a private working set of around 190MB even while not performing any specific task, which is way more than I would expect it to need. Using FastMM I have validated that none of this is an actual memory leak in the traditional sense.
I have also read the related discussion going on here, which suggests using FastMM's LogMemoryManagerStateToFile(). However, the generated output states "21299K Allocated, 49086K Overhead", which combined (about 70MB) is far less than Task Manager suggests.
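For reference, I'm triggering the snapshot with a call roughly like this (the file path is just an example):

    // Writes FastMM's current memory manager state report to a text file.
    // Requires the FastMM4 unit to be part of the project.
    LogMemoryManagerStateToFile('C:\temp\memstate.log');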
Is there any way I can find out what causes the huge difference, and might 190MB even be a reasonable value for an application with ~15 forms? Also, is having 70% overhead "bad", and is there any way of reducing that number?
You can use VMMap from Sysinternals to get a complete overview of the virtual memory address space your process is using. This should allow you to work out the difference you are seeing between Task Manager and FastMM.
I doubt that FastMM reports, or even can report, sections like Mapped File, Shareable and Page Table, while those sections do count towards the private working set.
DDDebug can give you insights about memory allocation by objects in your app. You can monitor changes live.
Test the trial version or check out the introductory video on the website.
Is there a way to access (read or free) memory chunks that are outside the memory allocated to the program, without getting access violation exceptions?
What I would actually like to understand, apart from this, is how a memory cleaner (a system-wide garbage collector) works. I've always wanted to write such a program. (The language isn't an issue.)
Thanks in advance :)
No.
Any modern operating system will prevent one process from accessing memory that belongs to another process.
In fact, if you understood virtual memory, you'd understand that this is impossible: each process has its own virtual address space.
The simple answer (unless I'm mistaken) is no. Generally it's not a good idea, for two reasons. First, it causes a trust problem between your program and other programs (not to mention that we humans won't trust your application either). Second, if you were able to access another application's memory and change it without that application knowing, you would cause it to crash (this is also what viruses do).
A garbage collector is called from a runtime. The runtime "owns" the memory space and allows other applications to "live" within that memory space; this is why the garbage collector can exist. You would have to create a runtime that the OS allocates memory to, have the runtime execute the application under its authority, and run the GC under its authority as well. You would also need to provide some instrumentation or API that allows the application developer to "request" memory from your runtime (not from the OS), and your runtime would have to not only respond to such requests but also keep track of the memory it has allocated to each application. You will probably need a framework (a set of DLLs) that makes these calls available to the application (the developer would use them to form the request inside their application).
You also have to be sure that your garbage collector does not reclaim memory other than the memory used by the application being executed, as you may have more than one application running within your runtime at the same time.
Hope this helps.
Actually, the right answer is yes: there are programs that do this (and if they exist, it must be possible).
You may need to write a kernel driver to accomplish it, but it is possible.
And here is another example: a debugger's attach command. That is one program interacting with another program's memory even though both were started as separate processes.
Of course, messing with another program's memory when you don't know what you're doing will probably make it crash.
On a production environment, how can one discover which ASP.NET HTTP requests, whether .aspx, .asmx or custom, are causing the most memory pressure within a w3wp.exe process?
I don't mean memory leaks here. It's a good, healthy application that disposes all its objects nicely. Microsoft's generational GC does its work fine.
Some requests however, cause the w3wp process to grow its memory footprint considerably, but only for the duration of the request.
It is simply a question of the cost-efficiency and scalability of a production environment for a SaaS app: I want to regularly report back to the development department on their most memory-hogging "pages", to return that (memory) pressure to where it belongs, so to speak.
There doesn't seem to be anything like:
HttpContext.Request.PeakPrivateBytes or .CurrentPrivateBytes
or
Session.PeakPrivateBytes
You might want to use a tool like Performance Monitor to track the "Process\Working Set" counter for the w3wp.exe process and record it to a database. You could then correlate it with the HTTP logs of the IIS server.
It helps to have both the PerfMon data and the HTTP logs written to a SQL database. Then you can use T-SQL to bring up the requested pages around the date/time of the observed memory pressure. Use the DATEPART function to round the date/time to the desired accuracy (second or minute) as needed.
Hope this helps.
Thanks,
-Glenn
If you are using InProc session state, all your session data is stored in w3wp's memory, and may be the cause of it growing.
I wouldn't worry about it.
It could be that the GC is happening during the request and the CLR is allocating memory to move things around. Or it could be some other periodic servicing task that comes along with ASP.NET.
Unless you are prepared to go spelunking with performance-counter analysis of generation 0, 1 and 2 GC events and the like, I wouldn't worry about solving this "problem".
And it doesn't sound like it's a problem anyway - just a curiosity thing.
I set up a project and ran it, and looked at it in Process Explorer, and it turns out it's using about 5x more RAM than I would have guessed, just to start up. Now if my program's going too slowly, I hook it up to a profiler and have it tell me what's using all my cycles. Is there any similar tool I can hook it up to and have it tell me what's using all my RAM?
AQTime can help with that too.
What figures are you using from Process Explorer?
"Memory Use" in Windows is not a straightforward topic. Almost every application incorporates some form of memory manager that attempts to satisfy the memory needs of the application, about which the operating system has surprisingly little knowledge - the OS knows what memory the applications memory manager is using, but that is not always the same thing as what your application is actually using.
A simple way to see this is to watch the memory use reported by Task Manager: start up a Delphi application and note its "memory use" in Task Manager. Then minimise the application to the taskbar and you should see the memory use fall. Even restoring the application won't bring the memory use back up to the previous level.
In crude terms, when you minimise the application the memory manager takes that as a cue that it should return any unnecessarily "used" memory back to the OS. That is, memory that the memory manager is using to efficiently service your application but which your application itself is not actually using.
The memory manager should also return this memory to the system if the system requires it, due to low-memory conditions for example. The minimise-to-taskbar "trick" is simply a sensible optimisation: since a minimised app is typically not actively in use, it's an opportune time to do such "housekeeping" automatically.
(This is not "a bad thing", it's just something to be aware of when considering "memory use")
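If you want to trigger the same kind of trim yourself, purely to observe the effect, a rough sketch is to ask Windows to empty the working set (nothing you'd normally ship in production code):

    uses
      Windows;

    // Asks Windows to trim this process's working set, which is roughly
    // what happens when the application is minimised. Trimmed pages are
    // simply faulted back in on demand the next time they are touched.
    // (On older Delphi versions where SIZE_T isn't declared, DWORD(-1)
    // works for a 32-bit process.)
    procedure TrimWorkingSet;
    begin
      SetProcessWorkingSetSize(GetCurrentProcess, SIZE_T(-1), SIZE_T(-1));
    end;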
To make matters worse, in addition to memory that the memory manager is using but your application is not, there is also the question of "commit charge", which won't necessarily show up as memory used by either your application OR its memory manager!
In a Delphi application (from Delphi 2006 onward) the memory manager is FastMM, and it has a built-in tool that will show you what your application's memory use looks like from "the inside" (or at least it used to have such a tool - I've not used it in a while).
IIRC, using it was simply a question of adding a unit to your project and creating a form at runtime (via some "debug only" menu item on the Help menu, or whatever mechanism you choose) that would then give you a "map" of your memory usage.
If you are using a version of Delphi earlier than 2006 you can still use FastMM - it's free and open source; just download it from SourceForge.
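If you do add it, the one thing to get right is that FastMM4 must be the very first unit in the project's uses clause so that it installs itself before anything else allocates memory; roughly like this (project, unit and form names are just placeholders):

    program MyApp;  // placeholder project name

    uses
      FastMM4,  // must come first, before any other unit allocates memory
      Forms,
      MainUnit in 'MainUnit.pas' {MainForm};  // placeholder unit/form

    begin
      Application.Initialize;
      Application.CreateForm(TMainForm, MainForm);
      Application.Run;
    end.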
AQTime has been an amazing profiling tool for us. It works amazingly well and has allowed us to pinpoint bottlenecks in places where we never thought there were any, while sometimes showing us there was no bottleneck where we were sure there was one.
It is, along with Finalbuilder, Araxis Merge, and TestComplete, an indispensable tool!
In addition to the others: before I switched to D2006+ (and started using FastMM) I used AQTime's free MemProof. It has some issues, but it is workable.