How to properly release BreezeJS entities for memory cleanup

I'm using BreezeJS with Angular/SQL/EF/WebAPI, based on a customized version of John Papa's HotTowel template. All is working very well, but I need assistance with memory management.
In my case, my users download "missions" to the browser. A mission is a big clump of data that I crunch locally in the browser. When the user requests a new mission, another big clump of data is downloaded. After three missions are downloaded, the browser is consuming hundreds of megabytes of memory, as shown in Windows Performance Monitor, and it eventually chokes. I believe the answer is to simply release/dispose of the previous mission's entities. What's the best way to clean up unneeded entities so they aren't consuming memory? I've tried entityAspect.setDetached(), nulling objects, etc., but the memory never seems to be released. Roughly what I've been trying looks like the sketch below (simplified; the manager and entity names stand in for my real code):
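    // Simplified sketch of my cleanup attempts; 'manager' is the
    // breeze.EntityManager whose cache holds the old mission's entities.
    function releaseMission(manager) {
        // Attempt 1: detach each cached entity individually.
        manager.getEntities().forEach(function (entity) {
            entity.entityAspect.setDetached();
        });

        // Attempt 2: drop the whole cache in one call instead.
        manager.clear();
    }

    // I also null out every reference my Angular controllers keep to the
    // old mission, yet the working set in Performance Monitor stays high.
    // (Rough before/after check in Chrome's console, non-standard API:
    // performance.memory.usedJSHeapSize.)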
Thanks
Mark

Related

Core Data refuses to clear external data references from memory

I am loading large amounts of data into Core Data on a background thread with a background NSManagedObjectContext. I frequently reset this background context after it's saved in order to clear the object graph from memory. The context is also disposed of once the operation is complete.
The problem is that no matter what I do, Core Data refuses to release the large chunks of data that are stored as external references. I've verified this in the Allocations instrument. Once the app restarts, the memory footprint stays extremely low, as these external references are only faulted in when accessed by the user. I need to be able to remove these BLOBs from memory after the initial download and import, since collectively they take up too much space. On average they are just HTML, so most are less than 1 MB.
I have tried refreshObject:mergeChanges: with the flag set to NO on pretty much everything. I've even tried resetting my main NSManagedObjectContext too. I have plenty of autorelease pools, there are no memory leaks, and zombies aren't enabled. How can I reduce my Core Data memory footprint when external references are initially created?
I've reviewed all of Apple's documentation and can't find anything about the life cycle of external BLOBS. I've also searched the many similar questions on this site with no solution: Core Data Import - Not releasing memory
Everything works fine after the app restarts, but I need this first run to be stable too. Has anyone else been able to successfully fault NSData BLOBs with Core Data?
I'm assuming "clear from memory" means "cause the objects to be deallocated" and not "return the address space to the system". The former is under your control; the latter is not.
If you can see the allocations in the Allocations instrument, have you turned on tracking of reference count events and balanced the retains and releases? There should be an indicative extra retain (or more).
If you can provide a simple example project, it would be easier to figure out what is going on.
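As a general illustration of that first point (nothing platform specific), here is a minimal Node.js sketch - run with node --expose-gc; the sizes are arbitrary - showing that deallocated objects don't necessarily shrink the process footprint:

    // heap-vs-rss.js -- run with: node --expose-gc heap-vs-rss.js
    function report(label) {
        var m = process.memoryUsage();
        console.log(label + ': rss=' + (m.rss / 1048576).toFixed(1) +
                    ' MB, heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + ' MB');
    }

    report('start');

    // Allocate on the order of 100 MB of small objects.
    var blob = [];
    for (var i = 0; i < 1000000; i++) {
        blob.push({ id: i, payload: 'x'.repeat(64) });
    }
    report('allocated');

    blob = null;    // drop the only reference to the object graph
    global.gc();    // force a full collection (requires --expose-gc)
    report('collected');

    // heapUsed falls back toward the starting value, but rss usually stays
    // elevated: the engine keeps the freed pages for reuse instead of
    // returning the address space to the OS right away.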

Massive text file, NSString and Core Data

I've scratched my head about this issue for a long, long time now, but I still haven't figured out a way to do this efficiently and without using too much memory at once (on iOS, where memory is very limited).
I essentially have very large plain text files (15MB on average), which I then need to parse and import to Core Data.
My current implementation is to have an Article entity on Core Data, that has a "many" relationship with a Page entity.
I am also using a slightly modified version of this line reader library: https://github.com/johnjohndoe/LineReader
Naturally, the more Page entities I create, the more memory overhead I create (on top of the actual NSString lines).
No matter how much I tweak the number of lines per page, or the number of characters per line, the memory usage goes absolutely crazy (~300 MB+), while just importing the whole text file as a single string quickly peaks at ~180 MB and finishes in a matter of seconds, whereas the former takes a couple of minutes.
The line reader itself might be at fault here, since I am refreshing the pages in the managed object context after they're done, which to my knowledge should release them from memory.
At any rate, does anyone have any notes, techniques or ideas on how I should go about implementing this? Ideally I'd like to have support for pages, since I need to be able to navigate the text anyway, and loading the entire text into memory later doesn't sound like much fun. The bounded-memory shape I'm going for is sketched below.
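(A Node.js sketch, just to make the idea concrete outside of iOS; the file name and the savePage helper are made up:)

    // Stream the file line by line so only one page of strings is alive
    // at a time; persist each page, then release it.
    var fs = require('fs');
    var readline = require('readline');

    var LINES_PER_PAGE = 200;
    var page = [];

    var rl = readline.createInterface({
        input: fs.createReadStream('article.txt', { encoding: 'utf8' })
    });

    rl.on('line', function (line) {
        page.push(line);
        if (page.length === LINES_PER_PAGE) {
            savePage(page.join('\n'));  // persist this page...
            page = [];                  // ...then let its strings be collected
        }
    });

    rl.on('close', function () {
        if (page.length > 0) {
            savePage(page.join('\n'));  // flush the final partial page
        }
    });

    function savePage(text) {
        // Placeholder: in my app this would create a Page entity and save,
        // so the imported text never sits in memory all at once.
    }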
Note: The Article & Page entity method works fine after the import, but the importing itself is going way overboard with memory usage.
EDIT: For some reason, Core Data is also consuming ~300MB of memory when removing an Article entity from the context. Any ideas on why that might be happening, or how that could be remedied?

Out of memory .NETCF Windows Mobile 5

We have a .NETCF 3.5 app written in C# where we use some fairly large lists and dictionaries of objects, populated with data from a SQL Server and persisted to SQLCE databases on the device.
The app was running very well until recently. The amount of data is such that we are getting Out of memory exceptions quite frequently. Using the Hibernate event, I have confirmed that the OS is indeed asking the app to free up resources (the Hibernate event gets fired constantly). The rub is that I really do not see anything substantial that I can free up - the lists and dictionaries, etc. are all being used by the application.
I know there is a hard 32 MB / app limit in Mobile 5/6 (in reality only 18-20 MB, per http://dev.fittingsites.com/bol/2008/windows-mobile-6-1-memory-management-changes).
I am a bit at a loss here. If the app needs about 25 MB to operate, how can it run on Mobile 5? Are there workarounds, like storing lists or dictionaries in Memory Mapped Files or similar that would not require a ton of work (or slow things down much)?
Which method are you using to read data from your SQLCE database? SQLCE provides two main approaches: DataSets and ResultSets. DataSets are known to consume huge amounts of memory and to reduce application performance. If you are using DataSets, I would recommend switching your application to ResultSets instead. See this page for more details.

What happens if an application is too large to be loaded into the available RAM?

There is a chance that a heavyweight application needs to be launched on a low-configuration system (especially one with very little memory).
Also, what happens when we have already opened a lot of applications on the system and keep trying to open new ones?
I have only seen applications take a long time to process, or hang for a while, when I operate them on low-configuration systems with little memory and old processors.
How is the system able to accommodate many applications when memory is low (like 128 MB or less)?
Does it involve paging or something else?
Can someone please explain the theory behind this?
"Heavyweight" is a very vague term. When the OS loads your program, the EXE is mapped in your address space, but only the code pages that run (or data pages that are referenced) are paged in as necessary.
You will likely get horrible performance if pages need to constantly be swapped as the program runs (aka many hard page faults), but it should work.
Since your commit charge is near the commit limit, and the commit limit will likely have no room to grow, you will also likely receive many malloc()/VirtualAlloc(..., MEM_COMMIT)/HeapAlloc()/{Local|Global}Alloc() failures, so you need to check the return values in your program.
Some keywords for search engines are: paging, swapping, virtual memory.
Wikipedia has an article called Paging (the Swap space entry redirects to it).
Operating systems generally use virtual memory. Virtual memory pages are mapped to physical memory when they are used. If a physical page is needed and none is available, another page is written out to disk. This is called swapping, and it explains why crowded systems get slow and why memory upgrades have a positive effect on performance.

Finding a Memory Bubble

This is either ridiculously simple, or too complex...
In our application there is a form that loads some data from the database and displays it in a grid (putting it simply). When the data is refreshed, total memory usage climbs by about 50K (depending on how much data is displayed, no doubt). That sounds like a memory leak, but FastMM is set up with ReportMemoryLeakOnShutDown := True, and it doesn't report any abnormal memory leaks when we shut down the application.
So it appears we have a memory bubble or bag. Something that is accumulating more memory each time it is run. Like a TList that keeps getting new items added to it, but the old ones never get removed. Then in the shutdown process all the items get destroyed. The rows displayed in the grid do not increase, but there are a lot of object lists behind the scenes that make this work, so it could be anywhere.
So my question is: does anyone know a good trick for finding out which parts of an application are using how much memory? I can think of lots of tedious ways of doing it (which I am in the process of doing - checking each list I can find), so I am hoping someone has a trick or technique I have not thought of.
Thanks in advance!
Update: Every refresh results in an additional 10-50K of memory being used. Users are reporting that the application eventually stops responding. It certainly acts like a memory leak, but FastMM (the memory manager) does not see anything leaking. I'll try some other memory tools...
Just F8 (step over) through the critical part and look at the process usage graph (Process Explorer from Mark Russinovich works great for that). When you find the culprit method, repeat the process, but descend into that method.
Tools like AQTime can report difference in memory/object usage between snapshots. This might help you find out what keeps growing.
It looks like some memory is allocated via custom AllocMem() calls, bypassing FastMM.
This can be Midas; Andreas has a solution for this.
Or it could be some other InitXXX WinAPI call that allocates something without freeing it, or some other third-party or Windows DLL used by the project.
Does this happen every time you refresh the data, or only the first time? If it's only the first time, it could be that the system just reserves the memory for your application, even though it's not used at this time. (Maybe at some point the old and new data existed simultaneously in memory?)
There are many tools that provide information about memory leaks; have you tried a different one?
I'm not a FastMM expert, but I suppose that after a memory manager obtains memory, and after you free the objects/components, it holds on to that memory for future use (zeroed or flagged somehow), avoiding the need to ask the OS for more memory every time - like a cache.
How about creating the same form / opening the same data N times in a row?
Will it increase by 50K each time?
Once I had the same problem. The application was certainly leaking, but I got no report on shutdown. The reason for this was that I had included ShareMem in the uses section of the project.
Have you tried the full FastMM version? I have found that tweaking its settings gives me more verbose information about memory usage.
As Lars Truijens mentioned, AQTime provides a live memory consumption graph, so at runtime you can see which objects are using more memory whenever you refresh the data.
