Values of levelsOfDetail and levelsOfDetailBias to render a PDF on CATiledLayer in iOS (iPad)

I am developing a project in which I render PDFs on CATiledLayers. I have used the CGPDF class methods to render the PDF and succeeded.
I would like to know what values to use for levelsOfDetail and levelsOfDetailBias to avoid memory issues in both normal and zoom modes.
Right now I am setting the values as below:
tiledLayer1.levelsOfDetail = 1;
tiledLayer1.levelsOfDetailBias = 30;
Am I using appropriate values, and is memory affected by them?
I am asking because I am facing memory issues when zooming the page. I have ensured there are no memory leaks and that the code is written efficiently.
My zoomScale ranges between 1.0 and 2.0.
Can anyone help me avoid the memory issue and suggest values to use for the above parameters?
Thanks in advance.

You can try reducing levelsOfDetailBias. But one thing you should keep in mind is that whatever you do, memory warnings will still appear; you just need to handle them.
For instance, a simple PDF page may not trigger a memory warning at any zoom level, whereas a PDF page with high-quality images may. Memory warnings also depend on the device and on how much memory is actually available to your application.

Related

Some GIFs are taking too much memory

I know it's a very common problem, and this link clearly explains why it happens.
It struck me that react-native-flanimatedimage exists, since it seems to have solved the problem using FLAnimatedImage, a very well-known module for native iOS.
The truth is that the module does what it promises, but in exchange for conserving memory it completely destroys the animation of the GIF, making it play in slow motion.
Many people report this same problem in the module's issues:
https://github.com/Flipboard/FLAnimatedImage/issues?utf8=%E2%9C%93&q=slow
The problem is that some solutions I have tried do not work, or are simply solutions for native iOS developers.
The GIF I am trying to load increases the device's memory usage by about 300 MB each time I render it, and in some cases, depending on the device, this causes a memory crash.
I'm not sure what the clean way to attack this problem is: release the memory somehow, or keep trying to prevent so much memory from being loaded in the first place, using FLAnimatedImage or some other tool you can recommend.

PIXI WebGL memory leak

Hi, I am using PIXI and I destroy textures whenever necessary. In the Chrome task manager I can see that the image cache and other resources consume very little memory. My only problem is that as new screens and textures get loaded, the GPU memory keeps increasing (by 5-10 MB on every activity). I am using texture.destroy(true) to destroy resources, and I am also destroying base textures. I am also logging PIXI.utils.TextureCache and PIXI.utils.BaseTextureCache, and I can see that the number of objects in them is the bare minimum required for my app.
I also make use of PIXI's AnimatedSprite. If it consumes any extra resources I should be worried about, please let me know.
I am not sure where the WebGL memory is increasing. I am using WebGL because I make extensive use of filters, which is not possible with the canvas renderer. Can anyone help with how to debug the memory usage and delete unnecessary textures from WebGL? I am running on a tight timeline, so any help is very much appreciated.

Calling C functions from Objective-C - Passing large amounts of data

I have an iOS app that processes video frames from captureOutput straight from the camera. As part of the processing I'm calling several C functions in another source file. I convert UIImages into raw data and pass these rapidly; all of the processing is done on a queue tied to the video output.
This seems to work fine, up to a point. It seems that I'm hitting a limit when the data I'm passing becomes too large, and I get seemingly random EXC_BAD_ACCESS errors during the initialisation phase of the C function.
By initialisation I mean declaring small static arrays, setting them to zero, and suchlike.
I wondered if I was hitting some kind of stack limit by passing large amounts of data, so I tried increasing the stack size using Other Linker Flags and -Wl,-stack_size, but this didn't seem to make a difference.
Is there anything else I should be aware of when calling C functions from a non-UI thread in this way?
Sorry to be a little general, but I'm unable to post specifics of the code and am looking for general advice and tips for this kind of situation.
Some further information: we had issues with releasing memory and used autorelease pools on the video-processing side in Objective-C (as recommended, since we're on a different thread). Perhaps we're hitting the same difficulty with the C code. Is there a way to increase the frequency at which releases/frees are executed in C, or am I just chasing my tail?
So, the root of your problem is memory usage. Even if you don't leak any memory and are very careful, writing a video processing app on iOS is very tricky, because there is only so much memory you can actually allocate before the OS terminates your app for excessive memory use. If you would like to read my blog post about this subject, you can find it at video_and_memory_usage_on_ios.

Some easy rules to remember: you can basically allocate and use something like 10 MB of memory for a short time, but anything more than that and you risk upsetting the OS, and your app can be terminated. With virtual memory mapped in from a file, the upper limit is a total of about 700 MB for all mapped memory at any one time.

This is not a stack issue; I am talking about heap memory. You should not be putting video memory on the stack, that is just crazy. Be careful to only pass around pointers to memory, and NEVER copy the memory from one buffer into another; just pass around references to the memory in the buffer. The iOS APIs in CoreGraphics and CoreVideo support this "allocate a buffer and pass around the pointer" type of approach.

The other rule of thumb to remember is to process only one frame at a time, and then reuse the same buffer for the next frame once the data has been written to a file or into an H.264 video via the AVAsset APIs.
After adding several @autoreleasepool blocks around key areas of the code, we identified the major cause of the memory problems we were having.
It seemed that including the following block just inside the captureOutput callback did the trick:
@autoreleasepool
{
    imageColour = [self imageFromSampleBuffer:sampleBuffer];
}
Note: imageFromSampleBuffer was taken from this question
ios capturing image using AVFramework

Core Text occasionally fails to produce results [iOS]

I’m helping out a company with an iOS project which uses Core Text. Some users of the app have reported that text is occasionally missing within the app. It seems that this is somewhat memory-related, because it’s solvable by shutting down the app along with background apps.
I wrote a few lines of code which simulate use of the app (the app “runs itself”, navigating between view controllers randomly, scrolling in text fields, and so on) to check whether this issue occurs in normal use.
I’ve found some memory leaks related to the use of Core Text, but according to Instruments the amount of memory lost is quite low. However, when the simulation has been running for about 20 minutes or so, the app is shut down by the OS because of memory warnings.
I intend to fix these memory leaks, but my problem is that I will not be able to verify that this fixes the main bug (missing text), since I cannot reproduce it myself.
So my final question is: has anyone experienced issues with missing text on iOS while using Core Text that were due to leaked memory? Does it sound plausible? If so, is it related only to specific versions of iOS?
I appreciate any answers that can help me out!
UIViewControllers may implement didReceiveMemoryWarning, which the system calls when your app is low on memory. Framework classes, such as Core Text, most likely implement similar handling and act accordingly to save memory. So it is possible that your Core Text objects, trying to help your app resolve the low-memory situation by freeing some of their resources, end up blanking their contents. Fix all the memory leaks in your app first.
On the other hand, any bug is very difficult to fix if you can't reproduce it. If you suspect the issue is due to low memory, try to simulate it yourself by allocating a huge amount of memory in your application, and hope that you can reproduce the erroneous behavior that way.

Memory issue: iPad 4.2 crashes

I am developing an application which receives 600-700 KB of XML data from the server. I have to do some manipulation on that data, so once the data is received, memory usage increases from 600 KB to 2 MB. The views already occupy 4 MB of memory in the application.
While processing the XML data I do some manipulation (pre-parsing); memory increases from 600 KB to 2 MB and finally decreases to 600 KB. Because of this increase, the application gets a memory warning. On receiving the memory warning I release all the views in the navigation controller, but that frees only 1 MB of memory. Even though I release all the views, the application crashes.
Please help me out with this issue. It happens on iPad 4.2.
Thanks in advance
There's no magical answer here. You're using too much memory and you need to figure out how to use less. Without knowing more about your application it's difficult to be specific, though clearly loading in nearly 1 MB of data and playing around with it isn't helping.
Maybe you can stream the data rather than loading it all into memory? There's an open-source library that helps: StreamingXMLParser.
Also, your view sounds huge (over a megabyte!). I'm sure there's some optimisation that can be performed there. Use Instruments to see where your memory is being used.
Maybe only 1 MB is released because of a parameter value that can be altered, or you may need to manually trigger a garbage-collection pass during your development session, if relevant to the language in use. You could break the XML input into sections if possible, or compact or compress the XML when it is stored, if you have access to the script or code in a way that allows it.