EGOPhotoViewer - Memory warning with many pictures on iOS

I have a problem with memory management in EGOPhotoViewer: I get a memory warning after scrolling through about 50 pictures. In total there are about 270 of them, each weighing approximately 100 KB, and each different from the others. I tried to resolve this with the suggestions at https://github.com/enormego/PhotoViewer/issues/6, but nothing helped.
Please help me, Pawel
//----- EDIT
To add: all objects are properly released. In my opinion the problem is that images are not being removed from the cache, but I don't know how to tackle it...

You're loading all of them into memory, of course, and this is bad.
You need to use something like Core Data / NSIncrementalStore (see "Explanation of NSIncrementalStore in plain English").
Another option is NSUserDefaults, but that is bad practice, as it is meant for storing user preferences only; still, it gets the job done, and it caches its data for fast retrieval.
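If the photos pile up in a cache of your own, the other half of the fix is to empty that cache when the system complains. A minimal sketch, assuming a hypothetical view controller that keeps decoded images in its own NSCache (this is not EGOPhotoViewer's actual API):

```objc
#import <UIKit/UIKit.h>

// Hypothetical controller that caches decoded photos by index.
@interface PhotoPagingController : UIViewController
@property (nonatomic, strong) NSCache *imageCache;
@end

@implementation PhotoPagingController

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Drop every cached image; off-screen pages get re-decoded lazily
    // the next time the user scrolls to them.
    [self.imageCache removeAllObjects];
}

@end
```

Using NSCache rather than an NSMutableDictionary also lets the system evict entries on its own under memory pressure, before a warning is ever delivered.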
Need more help? I'm here ;D

Related

Searching for a memory leak on Jruby/Rails/Tomcat application with YourKit

I had the misfortune of being handed the task of hunting down an unconfirmed memory leak. This is my first time using YourKit, so while I know what I should be looking for, I have no idea where to look or how.
My understanding is that memory consumption goes up over time because certain objects are not being released. That's pretty hard to do in Rails, but I guess somebody figured out how.
Here's what the memory telemetry looks like:
Ignoring the fact that the periods between GCs increase over time, it looks like Old Gen memory is going up... maybe.
Now we probably need to know what objects are piling up there and what spawns them.
Steps I've taken so far:
triggered a GC
started 'Object Allocation Recording' (each 100th object... I have a feeling it might be useful for something)
waited for a while
triggered another GC
did a memory dump
After opening the memory snapshot in YourKit I have no idea what I should be looking for.
There's a Call Tree under Allocations. Expanding the tree gives me a hint of some of the Rails code being run, but I have no idea if what I'm looking at is actually what I need.
Any Java-profiling, YourKit-wielding persons able to point me in the right direction?
Edit: Example of what I can see in the Merged Paths view:
1) Do not use object allocation recording. It is almost useless for finding memory leaks.
2) I recommend periodically advancing the object generation (it is very fast and adds no overhead). Take a look at http://www.yourkit.com/docs/java/help/generations.jsp
You will be able to split objects by "age" and understand why the heap grows.
Since you are using Tomcat, it might be helpful to take a look at "Web applications": http://www.yourkit.com/docs/java/help/web_applications.jsp If your application has a problem with class reloading/redeploying, it will be visible there.
Best regards,
Vladimir Kondratyev
YourKit, LLC

Massive text file, NSString and Core Data

I've scratched my head about this issue for a long, long time now, but I still haven't figured out a way to do this efficiently and without using too much memory at once (on iOS, where memory is very limited).
I essentially have very large plain text files (15MB on average), which I then need to parse and import to Core Data.
My current implementation is to have an Article entity on Core Data, that has a "many" relationship with a Page entity.
I am also using a slightly modified version of this line reader library: https://github.com/johnjohndoe/LineReader
Naturally, the more Page entities I create, the more memory overhead I create (on top of the actual NSString lines).
No matter how much I tweak the number of lines per page, or the number of characters per line, the memory usage goes absolutely crazy (~300 MB+), while importing the whole text file as a single string peaks at ~180 MB and finishes in a matter of seconds, with the former taking a couple of minutes.
The line reader itself might be at fault here, since I am refreshing the pages in the managed context after they're done, which to my knowledge should release them from memory.
At any rate, does anyone have any notes, techniques or ideas on how I should go about implementing this? Ideally I'd like to have support for pages, since I need to be able to navigate the text anyway, and later loading the entire text to memory doesn't sound like much fun.
Note: The Article & Page entity method works fine after the import, but the importing itself is going way overboard with memory usage.
EDIT: For some reason, Core Data is also consuming ~300MB of memory when removing an Article entity from the context. Any ideas on why that might be happening, or how that could be remedied?
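A pattern that usually keeps this kind of import flat is to save and reset the managed object context every N objects, inside an autorelease pool, so finished pages are pushed to the store and dropped from memory instead of accumulating. A hedged sketch: the Page entity comes from the question, but the batch size, the readLine call and the attribute name are assumptions:

```objc
NSUInteger batchSize = 100; // assumption: tune against Instruments
NSUInteger count = 0;
NSString *line;

while ((line = [lineReader readLine]) != nil) { // hypothetical line-reader call
    @autoreleasepool {
        NSManagedObject *page =
            [NSEntityDescription insertNewObjectForEntityForName:@"Page"
                                          inManagedObjectContext:context];
        [page setValue:line forKey:@"text"]; // hypothetical attribute name

        if (++count % batchSize == 0) {
            NSError *error = nil;
            [context save:&error]; // push the batch to the store...
            [context reset];       // ...and evict it from memory
        }
    }
}
[context save:NULL]; // persist the final partial batch
```

The reset is what actually releases the inserted pages; refreshing individual objects often isn't enough, because the context itself keeps them registered.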

iOS Memory malady madness

I recently ported a project over to ARC, as I was having trouble with crashes and with actually determining their cause, whether it was leaks, retain cycles, etc. Now that I have ported it, I have not done massive testing to see whether it still crashes, because I have not managed to get past Activity Monitor giving me the heebie-jeebies when it shows my application doing this (Activity Monitor profiler):
whereas in the Allocations instrument it looks something like this:
That real memory usage is not even the worst of it; at one point it shot up to around 90-odd MB. I am unsure how to proceed, as I am not 100 percent sure what to do with the information given here, except assume that I might be doing something very wrong. I have also run the Leaks instrument; I have a few leaks, but they are minimal, all measured in bytes.
Does anyone have an explanation, or can you at least clarify what I am looking at? What's the difference between real memory usage, live bytes and overall bytes? Also, these results were obtained by performing exactly the same actions once and showing the state at the end.
I have been trying to reduce the real memory usage, since before the ARC conversion I was having memory warnings and silent crashes frequently. I have not run into these again after converting, but I have not done any prolonged testing, as I cannot conceive of even trying while the real memory usage looks like that, which is actually a lot higher than before ARC... although the live bytes do look lower post-ARC... Madness!
Something that confused me for a while is that ARC, wonderful as it is, does not necessarily avoid the need for @autoreleasepool.
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html
I saw very large memory usage in an app until someone suggested:
@autoreleasepool {
    // lots of allocating of objects returned from methods and then discarded
} // the closing brace of the @autoreleasepool block causes their memory to be recovered here
Maybe that will help you.
A good explanation of the meaning of the various columns in the profiler is at Instruments ObjectAlloc: Explanation of Live Bytes & Overall Bytes

memory issue iPad 4.2 crashes

I am developing an application which receives 600-700 KB of XML data from the server. I have to do some manipulation of that data, so once the data is received the memory use grows from 600 KB to 2 MB. The views already occupy 4 MB of memory in the application.
So while processing the XML data I do some manipulation (pre-parsing), and the memory increases from 600 KB to 2 MB, finally decreasing back to 600 KB. Due to this increase in memory, the application gives a memory warning. On receiving the memory warning I release all the views in the navigation controller, but that frees only 1 MB of memory. Even though I release all the views, the application crashes.
Please help me out with this issue. It happens on iPad with iOS 4.2.
Thanks in advance
There's no magical answer here. You're using too much memory and you need to figure out how to use less. Without knowing more about your application it's difficult to be specific, though clearly loading in nearly 1 MB of data and playing around with it isn't helping.
Maybe you can stream the data rather than loading it all into memory? There's an open-source library that helps: StreamingXMLParser.
Also, your view sounds huge (over a megabyte!). I'm sure there's some optimisation that can be performed there. Use Instruments to see where your memory is being used.
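Apple's built-in NSXMLParser gives you the same streaming behaviour with no third-party code: it walks the document and fires delegate callbacks per element, so the whole parsed tree never sits in memory at once. A minimal sketch (the "item" element name is a placeholder for whatever your XML actually contains):

```objc
#import <Foundation/Foundation.h>

@interface FeedParser : NSObject <NSXMLParserDelegate>
@end

@implementation FeedParser

- (BOOL)parseData:(NSData *)xmlData {
    NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
    parser.delegate = self;
    return [parser parse]; // delegate callbacks fire as elements stream past
}

- (void)parser:(NSXMLParser *)parser
didStartElement:(NSString *)elementName
   namespaceURI:(NSString *)namespaceURI
  qualifiedName:(NSString *)qName
     attributes:(NSDictionary *)attributeDict {
    if ([elementName isEqualToString:@"item"]) { // placeholder element
        // Handle one record, then let it go; nothing accumulates.
    }
}

@end
```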
Maybe only 1 MB is released because of a parameter value that can be altered, or you may need to manually trigger a garbage collection during your development session, if relevant to the language in use. You could also split the XML input into sections if possible, or compact/compress the XML when stored, if you have access to the script or code in a way that allows it.

Finding a Memory Bubble

This is either ridiculously simple, or too complex...
In our application there is a form that loads some data from the database and displays it in a grid (putting it simply). When the data is refreshed, the total memory usage climbs by about 50K (depending on how much data is displayed, no doubt). Sounds like a memory leak, but when we shut down the application (FastMM is set with ReportMemoryLeakOnShutDown := True), it doesn't report any abnormal memory leaks.
So it appears we have a memory bubble or bag. Something that is accumulating more memory each time it is run. Like a TList that keeps getting new items added to it, but the old ones never get removed. Then in the shutdown process all the items get destroyed. The rows displayed in the grid do not increase, but there are a lot of object lists behind the scenes that make this work, so it could be anywhere.
So my question is whether anyone knows a good trick for finding out which parts of an application are using how much memory. I can think of lots of tedious ways of doing it (which I am in the process of doing: checking each list I can find), so I am hoping someone has a trick or technique I have not thought of.
Thanks in advance!
Update: Every refresh results in an additional 10-50K of memory being used. The users are reporting that eventually the application stops responding. It certainly acts like a memory leak, but FastMM (the memory manager) does not see anything leaking. I'll try some other memory tools . . .
Just F8 through the critical part and watch the process memory graph (Process Explorer from Mark Russinovich works great for that). When you find the culprit method, repeat the process but descend into that method.
Tools like AQTime can report difference in memory/object usage between snapshots. This might help you find out what keeps growing.
It looks like some memory is allocated via custom AllocMem() calls, bypassing FastMM.
This could be Midas; Andreas has a solution for this.
Or some other InitXXX WinAPI call that allocates something without freeing it, or some other third-party or Windows DLL used by the project.
Does this happen every time you refresh the data or only the first time? If it's only the first time it could be that the system just reserves the memory for your application, despite the fact that it's not used at this time. (Maybe at some point the old and new data existed simultaneously in memory?)
There are many tools which provide information about memory leaks; have you tried a different one?
I'm not a FastMM expert, but I suppose that once the memory manager obtains memory, then after you free the objects/components it holds onto that memory for future use (zeroed or flagged, I don't know), avoiding the need to ask the OS for more memory each time, like a cache.
How about creating the same form / opening the same data N times in a row?
Will it increase by 50K each time?
Once I had the same problem. The application was certainly leaking, but I got no report on shutdown. The reason was that I had included ShareMem in the uses section of the project.
Have you tried the full FastMM version? I have found that tweaking its settings gives me more verbose information about memory usage.
As Lars Truijens mentioned, AQTime provides a live memory-consumption graph, so at runtime you can see which objects are using more memory whenever you refresh the data.
