Out of memory .NETCF Windows Mobile 5

We have a .NETCF 3.5 app written in C# where we use some fairly large lists and dictionaries of objects, populated with data from a SQL Server and persisted to SQLCE databases on the device.
The app was running very well until recently. The amount of data is such that we are getting Out of memory exceptions quite frequently. Using the Hibernate event, I have confirmed that the OS is indeed asking the app to free up resources (the Hibernate event gets fired constantly). The rub is that I really do not see anything substantial that I can free up - the lists and dictionaries, etc. are all being used by the application.
I know there is a hard 32 MB / app limit in Mobile 5/6 (in reality only 18-20 MB, per http://dev.fittingsites.com/bol/2008/windows-mobile-6-1-memory-management-changes).
I am a bit at a loss here. If the app needs about 25 MB to operate, how can it run on Mobile 5? Are there workarounds, like storing lists or dictionaries in Memory Mapped Files or similar that would not require a ton of work (or slow things down much)?

Which method are you using to read data from your SQLCE database? SQLCE provides two main approaches: DataSets and ResultSets. DataSets are known to consume huge amounts of memory and to reduce application performance. If you are using DataSets, I would recommend switching your application to ResultSets instead. See this page for more details.
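For illustration, here is the same forward-only idea in Java/JDBC terms, since the point is language-independent (a sketch, not .NETCF code; the connection string, table and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamingRead {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; any JDBC driver works the same way.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
                 Statement stmt = conn.createStatement(
                         ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                stmt.setFetchSize(100); // stream rows in small batches
                try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM items")) {
                    while (rs.next()) {
                        // Process one row at a time; nothing else stays in memory,
                        // unlike a DataSet, which buffers the whole table.
                        process(rs.getInt("id"), rs.getString("name"));
                    }
                }
            }
        }

        private static void process(int id, String name) { /* ... */ }
    }

The forward-only, read-only cursor is the key point: it never materializes more than the current row, which is the advantage of ResultSets over DataSets on .NETCF as well.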

Related

Use log4j 2 for writing to data files or database tables

I used log4j (v. 1) in the past and was glad to know that a major refactoring was done to the project, resulting in log4j 2, which solves the issues that plagued version 1.
I was wondering if I could use log4j 2 to write to data files, not only log files.
The application I will soon be developing will need to receive many events from different sources and write them very fast, either to a data file or to a database (I haven't decided which yet).
The thread that receives the events must not be blocked by I/O while attempting to write events, so log4j2's Asynchronous Loggers, based on the LMAX Disruptor library, will definitely fit this scenario.
Moreover, my application must be able to recover from either a 'not enough space on disk' or an 'unable to reach database' condition, when writing to a data file or to a database table, respectively. In other words, when the application runs out of disk space or the database is temporarily unavailable, my application needs to store events in memory, wait for storage to become available, and then write all waiting events to disk or to the database.
Do you think I can do this with log4j?
Many thanks for your help.
Regards,
Nuno Guerreiro
Yes.
I'm aware of at least one production implementation in a similar scenario, wherein gathered events are written to disk at high throughput.
Write to a volume other than your system volume to minimize the chances of system crashes due to disk space overrun.
Upfront capacity planning helps ensure a hardware configuration with adequate resources to handle the projected average load and bursts for a reasonable period of time.
Do not let the system run out of disk space :). Keep track of disk usage, and proactively drop older data in extreme circumstances.
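A minimal sketch of the logging side in Java, assuming log4j 2 plus the disruptor jar are on the classpath and a log4j2.xml routes the (hypothetical) logger name 'events' through an async logger to a Failover appender:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class EventReceiver {
        // Hypothetical logger name; wired in log4j2.xml to a file or JDBC
        // appender, with a Failover appender as the fallback sink.
        private static final Logger EVENTS = LogManager.getLogger("events");

        public void onEvent(String payload) {
            // With all-async loggers enabled via
            // -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
            // this call enqueues the event on the disruptor ring buffer and
            // returns without blocking on file or database I/O.
            EVENTS.info(payload);
        }
    }

The receiving thread never touches the disk or the database directly; the disruptor's ring buffer is also what absorbs events in memory during a short outage (until the buffer itself fills up).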

How to properly release BreezeJS entities for memory cleanup

I'm using BreezeJS with Angular/SQL/EF/WebAPI, based on a customized version of John Papa's HotTowel template. All is working very well but I need assistance with memory management.
In my case my users download "missions" to the browser. A mission is a big clump of data that I crunch locally in the browser. When the user requests a new mission, another big clump of data is downloaded. After three missions are downloaded, the browser is consuming hundreds of megabytes of memory, as shown in the Windows performance monitor. The browser eventually chokes. I believe the answer is to simply release/dispose of the previous mission's entities. What's the best way to clean up unneeded entities so they aren't consuming memory? I've tried setDetached, nulling objects, etc., but the memory never seems to be released.
Thanks
Mark

Core Data storage limit, cache limit, RAM limit on iPhone

I am making a somewhat complex app in which every detail is important. I have some questions:
1. How much storage do we have if we plan to save big files in Core Data or the cache?
2. What's the RAM limit on the iPhone? I'm actually searching for a table that gives detailed info on this for iOS devices, because I need to handle memory warnings and prevent app crashes.
3. Is it better to save images in the cache or in Core Data, assuming you have a lot of images, approx. 200-250?
Thanks
1) I am not aware of any storage limit. Obviously, you will never get 64GB or more - since no device is larger ;-). My wife's Facebook app consumes >5GB at the moment... I suppose they did something wrong. The only important point is to fail gracefully (show a dialog, clean some space, ...) if the storage is full.
2) The RAM limit varies depending on the iPhone model and the currently running applications. Also, there are some iPods with less memory on the market. 30MB should be pretty safe. The total physical memory of the device can be retrieved as described here, while the available RAM can be derived from that question.
3) Maybe this is a good starting point. I would always write image data to the file system and just store the file name inside the database, as suggested here.

Multiple hits to an API bringing server to its knees

I am using an API (let's pretend it's Facebook) to gather data between two given dates. Because of API restrictions (like most APIs), I can only grab so many results at a time, and therefore have to page my way through the results.
Here is my issue/question though.. Is it better to
get fewer results back, and make more calls to the api
get more results back, and fewer calls to the api
I am running a 4GB instance of a cloud server.
The data I'm looking at is in XML format and contains about 20k entries. Each entry contains probably another 20 tags within it. Once completely pulled down, the data ends up being about 10MB. My problem is that when my server is hitting the API and gathering this information, CPU and memory spike to nearly 100%. I've tried retrieving 500 at a time, 1000 at a time, 5000 at a time... is this something where I need to gather 20 at a time, or is there something else I should look at?
I'm not sure what else to provide, if there is something I can provide just let me know
Updates based on answers
I host with Storm on Demand, which runs perfectly for us and seems to be great hardware - https://www.stormondemand.com/cloud-server/
I use Hpricot to parse the XML (which could probably be optimized; I'm no expert here)
I do need all of the data, this service doesn't offer an export, only API.
EDIT [to help people stumbling on this later]
I switched from Hpricot to Nokogiri; MUCH faster.
Also, I was building an XML file in memory; apparently that is extremely expensive and was a very time-consuming task. I've cut this operation down from about 10 minutes to just over 1 minute by fixing these two things.
Here's a list of things to look at:
optimize your code. Try profiling it and see if you can improve it. Most likely a better parsing model (SAX instead of DOM) is possible; see the sketch after this list.
get better hardware/hosting. 4GB is just the memory; most likely you are on shared hosting/a VM and are CPU-limited.
offload some CPU/memory-heavy operations to a faster service/application; for example, XML processing, data analysis and file I/O can be done in C/C++.
in a proper cloud environment you should be able to spawn more VMs and adjust your jobs/load accordingly. That will cost more, though, and will require some kind of job manager.
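On the parser point: the original poster's stack is Ruby (Hpricot/Nokogiri), but here is the DOM-vs-SAX idea as a Java sketch, since it works the same in any language; the 'entry' element name is a hypothetical stand-in:

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // A streaming (SAX) parse holds only the current element in memory,
    // unlike a DOM parse, which materializes the entire ~10MB tree at once.
    public class EntryHandler extends DefaultHandler {
        private int entries = 0;

        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes attrs) {
            if ("entry".equals(qName)) { // hypothetical element name
                entries++;               // process one entry here, then forget it
            }
        }

        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            EntryHandler handler = new EntryHandler();
            parser.parse(new File(args[0]), handler);
            System.out.println("Parsed " + handler.entries + " entries");
        }
    }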
The question you need to ask is: why are your CPU and memory spiking? 4GB is plenty for handling this data, so is your code optimized for this task? If not, what can you do?
Is your code already optimized? Fair enough; you can then rewrite the hot paths as C extensions.
After optimizing your code, I'd suggest processing this data 'later', as in a delayed job. This way you aren't blocking on the entire dataset, which may strain your server.
You also mentioned you are running a cloud server, so I assume you have access to more virtual machines. You can process this data in parallel to reduce the stress per machine; a minimal sketch follows.
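In Java for illustration (the fetch and parse methods are hypothetical stand-ins):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DeferredProcessing {
        // One background worker drains the queue, so the fetch loop
        // is never blocked by CPU-heavy parsing.
        private static final ExecutorService worker =
                Executors.newSingleThreadExecutor();

        public static void main(String[] args) {
            for (int page = 1; page <= 20; page++) {
                final String payload = fetchPage(page);       // network-bound
                worker.submit(() -> parseAndStore(payload));  // CPU-bound, deferred
            }
            worker.shutdown();
        }

        private static String fetchPage(int page) {
            return "<entries/>"; // hypothetical API call
        }

        private static void parseAndStore(String xml) {
            // parse (ideally with SAX, as above) and write to the database
        }
    }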

Data Storage with AWE Memory via collections / lists / other containers

Does anyone have any suggestions (products, toolsets, methods or other) for the storage and processing of custom data (Delphi collections, binary trees, DIContainers etc.) that DOES NOT restrict itself to the standard Win32 memory address space? To put that in the extreme: is there anything off the shelf that can do the equivalent of holding a 10GB TList, thereby blowing past the /3GB-switch barrier and the 4GB 'windows on windows' limit?
What we ideally need is something that is pretty transparent to the Delphi application programmer, but allows very fast access to the data held in its structures, preferably via key lookup. The equivalent of a Delphi collection container would be fine, but its memory usage needs to be via AWE. It would also need to take care of mapping and unmapping the physical space it uses into the Win32 process making use of it, i.e. that would be the transparent bit...
Moving the data into a database is not the answer - the information needs to remain memory-resident for very fast access. The in-memory databases/tables that we've tried do not make use of AWE and are also slow to access. Our current Delphi data structures are fine, but are straining the limits of the Win32 address space.
I'm going to be a complete dork, and tell you that I've made something even more advanced than what you're describing.... at work. So it's all closed source I'm afraid. Never saw anything like this anywhere. We combine VM, AWE, MMF and (soon) 32<>64 bit IPC into one big, mean data-processing machine, addressing up to 64 GB of memory, while processing hundreds of datasets, tens of GBs each.
But I can give you a few tips: AWE view-swapping is rather slow, because it forcibly pauses all running threads during the swap. Therefore, choose your window sizes wisely (the smaller, the faster the swap - but call overhead is lower with larger sizes, of course). We've settled on AWE view sizes equal to the default Windows page size (4 KB), but only because random access performs best that way. Linear data access could run faster with bigger view sizes.
Each view can map to any part of the allocated AWE memory, so one thing that can help is mapping only those pages into a view that actually need to be accessed - and trying to save on unnecessary view-swaps (a priority queue comes to mind).
Also, there should be a registration mechanism somewhere in your design that handles the linkage between a view and the AWE memory behind it. And it had better be thread-safe!
As for general usage: no, this doesn't fit in with regular Delphi classes. You should switch over to another concept altogether - and base your data structures on that.
Anyway, good luck mate! You're going to need it... ;-)
There are system calls that can do this, but it is of limited use on 32-bit client versions of Windows (in particular, 32-bit Windows XP caps physical memory at 4GB, so AWE buys you little there).
Transparency would be something of an issue, as the API could not return pointers to objects. Mapping more than 4GB of RAM into a 4GB address space means that a 32-bit pointer could be ambiguous - you could potentially map different objects into the same location.
This ambiguity means that you would have to generate proxies for the objects, which hold a handle that can be used to access the 'record'. Some SQL Server versions use this technique to store disk buffers in AWE memory. An approach like this would probably work for something like rows in a matrix where the operations are done on the whole row. Finer-grained access would be more fiddly.
In order to provide direct access to the mapped object you would have to implement a protocol where a temporary pointer to the mapped memory was made available. This would also require the object to be locked in memory while in use - again, bang goes your transparency.
Assuming you can get a 64-bit version of Delphi, you might now be better off going to a 64-bit version of Windows for customers that need more RAM.
You state that you do not want to move to a database, but what about a database that specifically uses AWE?
I've not tried it personally, but would consider using products from this company for my own projects.
[Edit]: NexusDB is Delphi-friendly: it originated from the old TurboPower FlashFiler development (but has moved on a long way since then).
The issue with AWE is that it works very much like the old, DOS-based EMS and XMS - if you ever used them. Basically, a range of addressable memory is reserved, and memory outside the addressable range is mapped into the addressable range when needed and unmapped when no longer needed, allowing other memory to be mapped at the same addresses. Thereby most non-AWE-aware data structures or containers wouldn't work in such a scenario - probably a TMemoryStream descendant is easier to build. It should be easy enough to build a TList or the like that stores data in AWE memory; it would have to keep track of where the data are really stored and recall them when needed, adjusting addresses as data are mapped into addressable memory. I am not aware of any Delphi container library using AWE, and there is another issue: desktop 32-bit operating systems can't use more than 4GB of physical RAM; a server version would be required, and the supported physical RAM depends on which version is used - see here for a complete list.
Assuming the data is loaded once in bulk and fits in available memory, NexusDB AWE will be very, very fast. The database can be created as an in-memory-only DB and will then not need any further hard drive access while manipulating data.
Sounds to me like you guys might consider dropping the current database SQL backend and going to a 100% NexusDB + AWE solution.
(Or rather, dropping the day to day access to the SQL backend, and having an export/sync function that can write out any required NexusDB reporting data to an MSSQL reporting db.)
W
Your situation sounds similar to ours: our application uses a huge data file that we store in a memory-mapped file. The files are around 750MB, and we allocate data structures from them that use up to 1.5GB of RAM.
We have found no solution to the 4GB limit other than moving some of it off to FPC/Lazarus until Delphi is 64-bit, unfortunately. AWE does not work with Vista Home versions; also, we couldn't get it to work with MMFs.
You could try memory-mapped files with a sliding window, meaning you dynamically create views of different chunks of the file depending on which part of it the application is using (see the sketch below). It sounds like that won't work for you, though, because you need the entire file in memory at once.
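For what it's worth, here is the sliding-window idea as a sketch, in Java for illustration since the thread's stack is Delphi/Win32 (the 64MB window size is an arbitrary assumption):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Maps one window of a large file at a time instead of the whole file,
    // so the mapped region stays small regardless of file size.
    public class SlidingWindowFile implements AutoCloseable {
        private static final long WINDOW_SIZE = 64L * 1024 * 1024; // 64MB, tunable

        private final FileChannel channel;
        private MappedByteBuffer window;  // the current mapped view
        private long windowStart = -1;    // file offset where the view begins

        public SlidingWindowFile(String path) throws Exception {
            channel = new RandomAccessFile(path, "r").getChannel();
        }

        // Read one byte at an absolute file offset, remapping if needed.
        public byte get(long offset) throws Exception {
            if (window == null || offset < windowStart
                    || offset >= windowStart + window.capacity()) {
                // Slide the window: drop the old view and map the chunk
                // containing the requested offset.
                windowStart = (offset / WINDOW_SIZE) * WINDOW_SIZE;
                long len = Math.min(WINDOW_SIZE, channel.size() - windowStart);
                window = channel.map(FileChannel.MapMode.READ_ONLY, windowStart, len);
            }
            return window.get((int) (offset - windowStart));
        }

        @Override
        public void close() throws Exception {
            channel.close();
        }
    }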
