I have a three.js interactive program that is loading multiple models. After the initial run, the program resets, removing all the models and clearing out all of the variables, but the memory usage does not decrease.
The .json models are taking up a lot of memory, which is causing problems on several levels. We are trying to reduce the size of the models, but that will only go so far if the memory can't be reclaimed.
From the research I've done, .deallocate() has been deprecated. I am loading with THREE.ObjectLoader(), so I'm not sure how .dispose() would work in that case. I tried:
    scene.remove(basketContents[type][i]);
    basketContents[type][i].geometry.dispose();
    basketContents[type][i].material.dispose();
    basketContents[type][i].texture.dispose();
But that gives me errors saying that .geometry.dispose(), etc., do not exist.
How can I remove the object from memory so that the memory can be used for other objects?
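For what it's worth, I've seen suggestions that the object returned by THREE.ObjectLoader() is a container whose child meshes actually own the geometry and materials (with any texture typically hanging off material.map rather than the object itself), so the cleanup has to traverse the children. Something like this untested sketch:

    // Untested sketch: assumes basketContents[type][i] is a loaded
    // container object; traverse() visits it and every descendant.
    var obj = basketContents[type][i];
    scene.remove(obj);
    obj.traverse(function (child) {
        if (child.geometry) child.geometry.dispose();
        if (child.material) {
            if (child.material.map) child.material.map.dispose(); // texture
            child.material.dispose();
        }
    });

Is that the right direction, or is there more that needs to be released?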
I am loading large amounts of data into Core Data on a background thread with a background NSManagedObjectContext. I frequently reset this background context after it's saved in order to clear the object graph from memory. The context is also disposed of once the operation is complete.
The problem is that no matter what I do, Core Data refuses to release the large chunks of data that are stored as external references. I've verified this in the Allocations instrument. Once the app restarts, the memory footprint stays extremely low, as these external references are only unfaulted when accessed by the user. I need to be able to remove these BLOBs from memory after the initial download and import, since collectively they take up too much space. On average they are just HTML, so most are less than 1 MB.
I have tried refreshObject:mergeChanges: with the flag set to NO on pretty much everything. I've even tried resetting my main NSManagedObjectContext too. I have plenty of autorelease pools, there are no memory leaks, and Zombies isn't enabled. How can I reduce my Core Data memory footprint when external references are initially created?
I've reviewed all of Apple's documentation and can't find anything about the life cycle of external BLOBs. I've also searched the many similar questions on this site with no solution: Core Data Import - Not releasing memory
Everything works fine after the app first reboots, but I need this first run to be stable too. Has anyone else been able to successfully fault NSData BLOBs with Core Data?
I'm assuming "clear from memory" means "cause the objects to be deallocated" and not "return the address space to the system". The former is under your control; the latter is not.
If you can see the allocations in the Allocations instrument, have you turned on tracking of reference count events and balanced the retains and releases? There should be an indicative extra retain (or more).
If you can provide a simple example project, it would be easier to figure out what is going on.
I am working on a browser application in which I use a UIWebView to open web pages. I ran the Instruments tool with the Memory Monitor. I am totally confused by the terms used in Instruments and why they're important. Please answer these questions, with reasons:
Is Live Bytes the important measure for checking memory optimization or memory consumption? Why?
Why would I care about Overall Bytes / Real Memory if they also include released objects?
When and why are these terms used (Live Bytes / Overall Bytes / Real Memory)?
Thanks
"Live Bytes" means "memory which has been allocated, but not yet deallocated." It's important because it's the most easily graspable measure of "how much memory your app is using."
"Overall Bytes" means "all memory which has ever been allocated including memory that has been deallocated." This is less useful, but gives you some idea of "heap churn." Churn leads to fragmentation, and heap fragmentation can be a problem (albeit a pretty obscure one these days.)
"Real Memory" is an attempt to distinguish how much physical RAM is in use (as opposed to how many bytes of address space are valid). This is different from "Live Bytes" because "Live Bytes" could include ranges of memory that correspond to memory-mapped files (or shared memory, or window backing stores, or whatever) that are not currently paged into physical RAM. Even if you don't use memory-mapped files or other exotic VM allocation methods, the system frameworks do, and you use them, so this distinction will always have some importance to every process.
EDIT: Since you're clearly concerned about memory use incurred by using UIWebView, let me see if I can shed some light on that:
There is a certain memory "price" to using UIWebView at all (i.e. global caches and the like). These include various global font caches, JavaScript JIT caches, and stuff like that. Most of these are going to behave like singletons: allocated the first time you use them (indirectly by using UIWebView) and never deallocated until the process ends. There are also some variable size global caches (like those that cache web responses; CFURL typically manages these) but those are expected to be managed by the system. The collective "weight" of these things with respect to UIWebView is, as you've seen, non-trivial.
I don't have any knowledge of UIKit or WebKit internals, but I would expect that if you had a discussion with someone who did, their response to the question of "Why is my use of UIWebView causing so much memory use?" would be two-pronged: The first prong would be "this is the price of admission for using UIWebView -- it's basically like running a whole web browser in your process." The second prong would be "system framework caches are automatically managed by the system," by which they would mean that, for instance, the CFURL caches (which are among the things that using UIWebView causes to be created) are managed by the system, so if a memory warning came in, the system frameworks would be responsible for evicting things from those caches to reduce the memory consumed by them; you have no control over those, and you just have to trust that the system frameworks will do what needs to be done. (That doesn't help you in the case where whatever the system cache managers do isn't aggressive enough for you, but you're not going to get any more control over them, so you need to attack the issue from another angle either way.) If you're wondering why the memory use doesn't go down once you deallocate your UIWebView, this is your answer. There's a bunch of stuff it's doing behind the scenes that you can't control.
The expectation that allocating, using, and then deallocating a UIWebView is a net-zero operation ignores some non-trivial, inherent and unavoidable side-effects. The existence of such side-effects is not (in and of itself) indicative of a bug in UIWebView. There are side effects like this all over the place. If you were to create a trivial application that did nothing but launch and then terminate after one spin of the run loop, and you set a breakpoint on exit(), and looked at the memory that had been allocated and never freed, there would be thousands of allocations. This is a very common pattern used throughout the system frameworks and in virtually every application.
What does this mean for you? It means that you effectively have two choices: Use UIWebView and pay the "price of admission" in memory consumption, or don't use UIWebView.
We're currently developing an iPad application using AIR for iOS and from time to time experience crashes (on iPad 1 with iOS 5 only) which seem to occur because the application is using too much memory.
How can we catch/handle such errors in the application? How can we be notified when memory is low? Trying to catch flash.errors.MemoryError doesn't seem to work. Any tips?
I've done some work in this area and here are some tips that I can give you.
Get Flash Builder 4.6 Premium.
Get it for the profiler alone, if nothing else. It has one of the best profilers available for diagnosing things like this. That said, there are other Flash profilers around with varying degrees of usefulness.
This alone will help you find and diagnose where most of your memory is going in terms of raw memory usage, but it will also help you see how many objects you are creating and destroying and how long they hang around before the garbage collector finally gets around to letting them go.
Pool smaller trivial objects
Rather than constantly creating and destroying smaller objects, create object pools. This saves you the cost of spinning up new objects constantly, and keeps you from having to wait for the garbage collector to run before the memory is released.
There are a lot of examples and patterns to look at for creating object pools in ActionScript. It would be easier if AS supported generics, but even without them it's still pretty straightforward.
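Even without generics, the shape is simple. Here is a minimal sketch, written as plain ECMAScript (AS3 is the same structure plus type annotations); all of the names are illustrative:

    // Minimal object pool: acquire() reuses a free object when it can,
    // release() hands the object back instead of leaving it for the GC.
    function Pool(factory, reset) {
        this.factory = factory; // builds a new object when the pool is empty
        this.reset = reset;     // restores an object to a clean state
        this.free = [];         // objects waiting to be reused
    }
    Pool.prototype.acquire = function () {
        return this.free.length > 0 ? this.free.pop() : this.factory();
    };
    Pool.prototype.release = function (obj) {
        this.reset(obj);
        this.free.push(obj);
    };

In steady state, acquire() skips construction entirely and release() returns objects for immediate reuse, so the collector rarely has to get involved with these objects at all.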
Eagerly dispose of huge objects
This goes directly against the advice in the previous point, but for huge objects you don't want them hanging around in memory forever. I'm referring to things like BitmapData: when you are done with them (for the foreseeable future), tear them down, null them out, and let the garbage collector clean them up.
When you need them again, rebuild them. Yes, you will take a slight performance hit, but memory on mobile devices is precious, so don't waste it by keeping around a 2 MB BitmapData object that only ever appears on the loading screen. Throw it away.
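Concretely, the dispose-and-rebuild pattern looks something like this (plain ECMAScript sketch with hypothetical names; in AS3 the heavyweight object would be a BitmapData, which also has an explicit dispose() to call before you drop the reference):

    // Hypothetical heavyweight asset that is cheap enough to rebuild on demand.
    function buildLoadingArt() {
        return new Array(512 * 512); // stand-in for a large decoded bitmap
    }
    var loadingArt = null;
    function showLoadingScreen() {
        if (loadingArt === null) {
            loadingArt = buildLoadingArt(); // slight rebuild cost, paid rarely
        }
        // ... display it ...
    }
    function hideLoadingScreen() {
        loadingArt = null; // done for now: drop the reference so the GC can reclaim it
    }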
Null out references you don't need anymore
Take some time and try to really understand what the garbage collector needs to do its work, and how it decides which objects can and cannot be thrown away. Try to avoid self-referential objects and circular references; while the GC can normally figure them out, sometimes it might need a little hand-holding.
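In practice, that hand-holding usually just means breaking links explicitly. A small sketch (plain ECMAScript, illustrative names):

    // A parent/child pair forms a cycle (children[] one way, parent the
    // other); breaking one side on removal makes the GC's job trivial.
    function Node() {
        this.parent = null;
        this.children = [];
    }
    Node.prototype.addChild = function (child) {
        child.parent = this;
        this.children.push(child);
    };
    Node.prototype.removeChild = function (child) {
        var i = this.children.indexOf(child);
        if (i >= 0) this.children.splice(i, 1);
        child.parent = null; // break the back-reference before letting go
    };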
Evaluate every time you use new [Related to 2]
Again, using a memory profiler will help with this step, but make sure that every time you instantiate a new object, you actually need a new object. It can be very easy to get lazy when developing for a PC, just throwing new objects into the pool and letting the GC sort it out. See if there are good caching strategies (object pooling, or just reference caching) when the object is small. And if it's a HUGE object that you are building up and tearing down often, it might be time to come up with a better architectural solution.
As far as I know, if you get to the point where iOS thinks the memory is low, it's already too late. Last time I checked, the framework will try to run the GC when it thinks it's running out of memory, and if it can't free up enough memory to continue, it fails out. Do your best to avoid getting to the point where the operating system thinks the only safe option is to terminate your thread.
My application keeps consuming more and more memory, as seen in the Windows Task Manager, and eventually crashes due to OutOfMemory. However, when I check for leaks using MemoryValidator (from www.softwareverify.com) no leaks are detected. Why is this happening?
Just because there is a growing amount of memory usage doesn't mean it is necessarily 'leaking'. You could simply be accumulating a large number of live objects and/or very large ones (containing lots and lots of data).
If you can provide more information about what language(s) you are using and what the application is doing I can perhaps help out with some more specific information!
UPDATE AS PER COMMENTS
Well, you'll just want to make sure garbage collection is happening correctly. Perhaps the libgc library can help with that.
http://developers.sun.com/solaris/articles/libgc.html
The only other thing I can think of as the cause is that you are unintentionally maintaining references to the objects somewhere, so they just pile up.
I was going through some of the decisions made in building Xara Xtreme, an open-source SVG graphics application. Their memory-management decision was quite intriguing to me, since I naively took it for granted that on-demand dynamic allocation is the way to write object-oriented applications.
The explanation from the documentation is:
How on earth can static allocations be efficient?

If you are used to large dynamic data structures, this may seem strange to you. Firstly, all our objects (and thus allocation size) are far smaller (on average) than each dynamic area allocation within a program such as Impression. This means that though there are likely to be many holes within memory, they are small. Also, we have far more allocated objects within memory, and thus these holes quickly get filled. Furthermore, virtual memory managers will free up any pages of memory that contain no allocations and give this memory back to the operating system so that it may be used again (either by us, or by another task).

We benefit greatly from the fact that whenever we allocate memory in this manner, we do not have to move any memory about. This proved a bottleneck in ArtWorks, which also had many small allocations being used concurrently.
In brief, the presence of plenty of small objects and the need to avoid moving memory about are the reasons given for choosing static allocation. I don't have a clear understanding of those reasons.
Though this talks about static allocation, what I see from a cursory look at the code is that a block of memory is dynamically allocated at application start and kept alive until the application ends, roughly simulating static allocation.
Could you explain in what situations static allocation fares better than on-demand dynamic allocation, enough to make it the main mode of allocation in a serious application?
It's quicker because you avoid the overhead of calling a system routine to manage your storage. malloc() maintains a heap, so every request requires a scan for an appropriately-sized block, possibly resizing the block, updating the block list to mark this block as used, etc. If you're allocating a lot of small objects, this overhead can be excessive. With static allocation you can create an allocation pool and just maintain a simple bitmap to show which areas are in use. This assumes that each object is the same size, so you commonly create one pool per object type.
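To make the bookkeeping concrete, here is a fixed-size pool in miniature (sketched in JavaScript for illustration; in C the slots would be a static array of structs and the map a bit array):

    // Fixed-size allocation pool with a simple "in use" map. Every slot
    // is the same size, so allocation is just finding a clear flag.
    function FixedPool(slotCount, makeSlot) {
        this.slots = new Array(slotCount);
        for (var i = 0; i < slotCount; i++) this.slots[i] = makeSlot();
        this.used = new Uint8Array(slotCount); // one flag per slot
    }
    FixedPool.prototype.alloc = function () {
        for (var i = 0; i < this.used.length; i++) {
            if (!this.used[i]) { this.used[i] = 1; return this.slots[i]; }
        }
        return null; // pool exhausted; a real allocator would grow or fail
    };
    FixedPool.prototype.free = function (slot) {
        var i = this.slots.indexOf(slot);
        if (i >= 0) this.used[i] = 0; // flip a flag; nothing is moved or resized
    };

Note that free() just flips a flag: nothing is scanned for size, coalesced, or moved, which is exactly the work a general-purpose malloc() cannot avoid. Keeping one pool per object type is what keeps the same-size assumption true.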
In short, there's really no such thing as static allocation other than the space allocated for your functions themselves and other read-only kinds of memory. (Do an assemble-only "gcc -S" and look for all the memory blocks, if you're interested.) If you're making and breaking objects, you're dynamically allocating. That being said, there's nothing to stop you from tightly controlling the allocation mechanism itself.
That's what functions like mallinfo() and mallopt() do for controlling how malloc() does its magic. However, that might not even be good enough for you. If you know all your chunks are going to be the same size, you can allocate and deallocate much more efficiently. And if you know you have 3 sizes of stuff, you can keep 3 arenas of memory each with their own allocator.
On top of this, there is the situation at runtime where the process doesn't have enough room and needs to ask the OS for more; that involves a system call that is more expensive than just incrementing an array index. On Unix, it's usually brk() or sbrk() or the like. And that can take valuable time.
Another, rarer situation, would be if you need to multiply-allocate things. Like 3 threads need to share information and only when all 3 release it does it get freed. That's something nonstandard and not generally covered by typical mallopt() or even pthread-specific memory or mutex/semaphore-locked chunks.
So if you have high speed optimization issues or you are running on an embedded system where you need to squeeze all you can out of the available memory, then "static allocation", or at least controlling the allocation mechanism, may be the way to go.