WCF Windows Service not releasing resources/memory after every call

I have created a WCF application hosted in a Windows service, installed using Windows Installer. I followed the procedure described in the following article:
http://msdn.microsoft.com/en-us/library/bb332338.aspx#msdnwcfhc_topic4
Most WCF settings are left at their defaults: the net.tcp binding, per-call instancing, and so on.
The service's memory consumption keeps increasing after every call and never decreases; eventually it throws an OutOfMemoryException.
The application returns very large string-based data. With a memory profiler I found that memory is still allocated to string objects and grows during each call.
As I understand it, strings are managed objects and should be collected once they go out of scope.
Let me know if any other configuration or coding information is needed.

There must be something keeping references to those strings in the code. Can you use your profiler to trace the references that are keeping the string objects alive?

After many unsuccessful attempts to deal with the Large Object Heap (http://msdn.microsoft.com/en-us/magazine/cc534993.aspx), which in my case was being filled by very large strings, I created a custom class to handle them.
Instead of storing the large string in a single object, I store it as a collection of small strings inside the custom object. .NET reclaims these properly, without causing the problem described above.
Another possible solution is to store the data in a file and access it through a text reader. This works well and keeps the application's footprint small. Unfortunately it was not an option for me, because the application was not allowed to access the file location.

It would be very difficult to answer this question without some code to look at. You can always call GC.Collect(GC.MaxGeneration) to force garbage collection and see whether that reduces your memory consumption; ideally this would only be temporary code to track down what is going on in the application. If forcing garbage collection does not reduce memory consumption, then references to the strings must be being retained, via static member variables or whatever. Having no conception of what the code is, any theory would be a shot in the dark.

Related

Core Data (Swift) memory issue while inserting thousands of records

My application is written in Swift (latest version) and has a fairly complex database structure.
I import the records on the app's first launch, because the app must support offline use, and it can end up with millions of records.
I am saving records into an entity that has relationships with around 14-15 other entities (one-to-one and one-to-many).
My application gets a memory warning and is terminated after around 1000 thousand records. I tried profiling for leaks, but the app looks fine there; it just takes a long time.
I have tried making the context manager a singleton class, and I have also tried creating a local context while inserting each chunk of records.
For now, I fetch 50 records at a time from the web API and save my context after updating my entities.
I have tried wrapping the work in autoreleasepool, but with no success.
Please suggest what I should do.
Thank you
Ashwin
I can advise you to watch this video. It is very inspiring and explains a lot of useful things about Core Data:
https://developer.apple.com/videos/play/wwdc2013/211/
Are you using the fetchBatchSize property?
https://developer.apple.com/reference/coredata/nsfetchrequest/1506558-fetchbatchsize
If you are processing large numbers of Core Data objects in a loop, you need to save the context periodically so that Core Data can turn modified objects back into faults instead of keeping them in memory. How often you need to save, and when, depends on your application and on the processing code, which it would be helpful to see in your question. You'll need to experiment to find a balance between speed and memory use.
Use the Allocations instrument and you will see where your memory is going. You're not leaking memory; you're just using too much of it.
Try disabling zombie objects for your project; the Zombie Objects diagnostic keeps deallocated objects alive, which inflates memory use. It can be turned off in the scheme's Diagnostics settings.

Understanding file mapping

I am trying to understand mmap and was given the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph which I don't really understand, or which doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two sentences don't add up for me; I can't make sense of them.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag "MAP_READ_ONLY"? Until a write occurs, every mapping from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two sentences?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes made by others, until the process itself modifies the page. From then on the modification is private and is only visible to the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write the page, it becomes private.
Observe that this "cheating" doesn't matter (doesn't make any difference) if all processes which are accessing the file are doing it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true "MAP_PRIVATE" at different times according to whenever each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE or else regular, non-mmap I/O operations) that it makes a difference.
I'm told that pages itself can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap, which will be a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
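To make both points concrete, here is a minimal C sketch (POSIX, error handling abbreviated; the file name example.dat and the one-page size are made-up assumptions). With MAP_PRIVATE, reads come from the shared page cache until the first store, at which point the kernel copy-on-writes that page; had the mapping been created with PROT_READ alone, that store would raise SIGSEGV instead of triggering copy-on-write.
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.dat", O_RDONLY);   /* hypothetical file, at least one page long */
    size_t len = 4096;

    /* Private mapping: PROT_WRITE is allowed even on a read-only fd,
       because writes never reach the file. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte: %c\n", p[0]); /* read only: this is still the shared page-cache page */
    p[0] = 'X';                       /* first write: the kernel copies the page, and it becomes truly private */

    /* Had we mapped with PROT_READ only, the store above would have been a segmentation fault. */

    munmap(p, len);
    close(fd);
    return 0;
}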

Creating a new object again and again is bad practice. Why?

I have heard numerous times that creating objects of the same database classes again and again is bad practice. I really don't understand why that is so. Could somebody please explain?
It's a bad idea in general, not just for database classes.
The more objects you have, the more memory is used to keep your application running.
For instance, take a look at: PHP Object Creation and Memory Usage
Hope I helped.
http://particletree.com/notebook/object-oriented-php-memory-concerns/
Have a look at that link to see how much memory is required to create an object that has only one variable. A heavier class means a bigger object. A few memory-hungry scripts and a modest user base are enough to use up all available memory in no time.
With regard to database classes, it also depends on whether the class opens a connection on initialisation, since both the web server and the database server have settings for the maximum number of connections. The more objects you create, the more connections get opened, which is not good practice; stick to one connection per database if you can. Even if your database class reuses the connection, or uses lazy initialisation to create it only when required, you will still face the memory problem mentioned above.
To put it simply, reuse your objects (and your database connection).

Moving data between processes

The reason I ask this is that Windows does not provide a good built-in method for communicating between processes, so I want to create a DLL to act as a communication point between Windows processes. A thread is owned by a process and cannot be given to another process.
Each thread has a stack of its own.
If a DLL is loaded (LoadLibrary) and a DLL function is called that asks Windows for memory, am I right to think that the thread is still owned by the same process, and that the memory is allocated in that same process?
So I'm wondering whether I can turn to assembly to hand a small memory block over to another process: create a critical section, copy the data over to another (already created) memory block, and return the original block to its original process without upsetting Windows. Has anyone done that before, or is there a better way?
Best regards,
Lex Dean.
I see other methods that might be quite fast, but I would like a very fast method with little overhead. Pipes and sockets will obviously work and are simple to implement (thanks for offering such suggestions, guys), but they are not the best option for me. I sometimes want to send quite a few 500-byte blocks at fairly regular intervals. I like WM_COPYDATA because it looks fast. The biggest question I have been searching the internet for is: using GetCurrentProcess and DuplicateHandle to get the real handle, finding the other process, and using messages to set up memory and then use WM_COPYDATA. I only need two messages: a) the pointer and size, b) confirmation that the data has been copied.
I can get my own process easily with GetCurrentProcess, except that it returns a pseudo handle, which is always $FFFFFFE. I need the real process handle, and nobody on the internet gives an example of DuplicateHandle; that's what has me stumped. Can you show me an example of DuplicateHandle?
I do not like turning to a form to get a window handle, as an application does not always have a current form.
In Delphi I have seen message sending via a TSpeedButton used to set up a simple, fast communication method between applications, which I guess costs roughly 80 instructions. So I am still thinking about DLLs. The example Mads Elvheim posted is along the same lines as what I already know.
I'm still willing to consider other options for using my own DLL,
because the applications that matter to me can simply register/unregister their own process with the DLL, rather than searching all the time to see whether a process is current.
What I have not been told is how to manage memory between processes from such a DLL.
DLLs themselves are not hard for me to implement, as I already have one of my own in operation.
The real bottom line is finding the right Windows facility to build a good solution on, and I'm very open to ideas, even assembly instructions for crossing process boundaries or a Windows call. But I do not want to get caught crashing Windows by doing something illegal.
So please show an example of what you have done that fits my needs. If it is fast, I'm interested, and I will most probably use it anyway.
I have a very fast IPC (inter-process communication) solution based on named pipes. It is very fast and very easy to use (it hides the actual implementation from you; you just work with data packets). It is also tested and proven. You can find the code and the demo here:
http://www.cromis.net/blog/downloads/cromis-ipc/
It also works across computers in the same LAN.
If your processes have message loops (with windows), you can send/receive serialized data with the WM_COPYDATA message: http://msdn.microsoft.com/en-us/library/ms649011(VS.85).aspx
Just remember that only the allocated memory for the COPYDATASTRUCT::lpData member is allowed to be read. Again, you cannot pass a structure that contains pointers; the data must be serialized instead. And the receiving side can only read this structure; it cannot write to it. Example:
/* Both are conceptual window procedures. */
/* For sending: */
{
...
TCHAR msg[] = _T("This is a test\r\n");
HWND target;
COPYDATASTRUCT cd = {0};
cd.lpData = msg;                                           // point at the string we want to send
cd.cbData = (DWORD)((_tcslen(msg) + 1) * sizeof(TCHAR));   // size in bytes, including the terminator; Windows needs to know this
target = FindWindow(..); //or EnumProcesses
SendMessage(target, WM_COPYDATA, (WPARAM)hwnd, (LPARAM)&cd); // wParam = sender's HWND, lParam = pointer to the struct
}
/* For receiving */
{
...
case WM_COPYDATA:
{
TCHAR* msg;
HWND sender;
COPYDATASTRUCT* cd = (COPYDATASTRUCT*)lParam; // the structure arrives in lParam
sender = FindWindow(..); //or EnumProcesses
//check if this message is sent from the window/process we want
if(sender == (HWND)wParam){                   // wParam carries the sender's HWND
msg = _tcsdup((TCHAR*)cd->lpData);            // copy the data out; we may only read cd->lpData
...
}
break;
}
}
Otherwise, use memory mapped files, or network sockets.
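As a rough illustration of the memory-mapped-file route, here is a Win32 sketch in C (the mapping name "Local\\LexDeanIpc" and the fixed 4 KB size are made-up assumptions, and the synchronisation a real exchange needs, e.g. a named event or mutex telling the reader when a block is ready, is omitted):
#include <windows.h>

int main(void)
{
    /* Producer: create a 4 KB shared region backed by the page file. */
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, 4096, L"Local\\LexDeanIpc");
    if (!hMap) return 1;

    char *view = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (!view) { CloseHandle(hMap); return 1; }

    const char block[] = "a 500-byte block would go here";
    CopyMemory(view, block, sizeof block);   /* write one block into the shared region */

    /* The other process opens the same region with
       OpenFileMappingW(FILE_MAP_READ, FALSE, L"Local\\LexDeanIpc"),
       maps it with MapViewOfFile, and reads the block out. */

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}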
I currently use Mailslots in Delphi to do it and it is very efficient.
"Win32 DLLs are mapped into the address space of the calling process. By default, each process using a DLL has its own instance of all the DLLs global and static variables. If your DLL needs to share data with other instances of it loaded by other applications, you can use either of the following approaches:
•Create named data sections using the data_seg pragma.
•Use memory mapped files. See the Win32 documentation about memory mapped files."
http://msdn.microsoft.com/en-us/library/h90dkhs0(VS.80).aspx
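A rough sketch of the data_seg approach from that quote, in the DLL's C source (MSVC-specific; the section name "SHARED" and the counter variable are made up, and every variable placed in the section must be initialised or the compiler will not put it there):
#include <windows.h>

/* Every process that loads this DLL sees the same g_counter. */
#pragma data_seg("SHARED")
volatile LONG g_counter = 0;   /* must be initialised to land in the SHARED section */
#pragma data_seg()

/* Mark the section Read/Write/Shared for the linker. */
#pragma comment(linker, "/SECTION:SHARED,RWS")

__declspec(dllexport) LONG BumpCounter(void)
{
    /* InterlockedIncrement keeps concurrent processes from racing on the shared value. */
    return InterlockedIncrement(&g_counter);
}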
You cannot share pointers between processes; they only make sense to the process that allocated them. You're likely to run into issues.
Win32 is no different from any other modern OS in this respect. There are plenty of IPC services at your disposal in Windows.
Try to describe the task you want to solve, not "...then I think that I need to copy that block of memory here...". That is not your task: your customer never told you "I want to transfer a thread from one process to another".

Session Management in TWebModule

I am using a TWebModule with Apache. If I understand correctly, Apache will spawn another instance of my TWebModule object if all previously created instances are busy processing requests. Is this correct?
I have created my own session object and a TStringList to store the instances. The TStringList is created in the initialization section at the bottom of the source file that holds the TWebModule. I am finding that initialization can be called multiple times (presumably when Apache has to spawn another process).
Is there a way I could have a global "Sessions" TStringList to hold all of my session objects? Or is the "safe", proper method to store session information in a database and retrieve it based on a cookie on each request?
The reason I want this is to cut down on database access and instead hold session information in memory.
Thanks.
As Stijn suggested, using a separate storage to hold the session data really is the best way to go. Even better is to try to write your application so that the web browser contains the state inherently in the design. This will greatly increase the ability to scale your application into the thousands or tens of thousands of concurrent users with much less hardware.
Intraweb is a great option, but suffers from the scale issue in the sense that more concurrent users, even IDLE users, require more hardware to support. It is far better to design from the onset a method of your server running as internally stateless as possible. Of course if you have a fixed number of users and don't expect any growth, then this is less of an issue.
That's odd. If initialization sections get called more than once, it might be because the DLL is loaded into separate process spaces. One option I can think of is to check whether the "Sessions" object already exists before you create it in initialization. If the DLL really is loaded into separate processes, this will not help, and then I suggest writing a central session-storage process and using inter-process communication from within your TWebModule (there are a few methods: messages, named pipes, COM...).
Intraweb in application mode really handles session management and database access very smoothly, and scales well. I've commented on it previously. While this doesn't directly answer the question you asked, when I faced the same issues Intraweb solved them for me.
