iPad Pro 3rd Gen Killing Foreground App Without Cause - ios

I have an app that has been out in the wild for many years.
This app, in order to be 100% functional while offline, needs to download hundreds of thousands of images (1 for each object) one time only (delta updates are processed as needed).
The object data itself comes down without issue.
However, recently, our app has started crashing while downloading just the images, but only on newer iPads (3rd gen iPad Pros with plenty of storage).
The image download process uses NSURLSession download tasks inside an NSOperationQueue.
We were starting to see Energy Logs stating that CPU usage was too high, so we modified our parameters to add a break between each image, as well as between each batch of images, using
[NSThread sleepForTimeInterval:someTime];
This reduced our CPU usage from well above 95% (which, fair enough) to down below 18%!
Unfortunately, the app would still crash on newer iPads after only a couple of hours. However, on our 2016 iPad Pro 1st Gen, the app does not crash at all, even after 24 hours of downloading.
When pulling crash logs from the devices, all we see is that CPU usage was over 50% for more than 3 minutes. No other crash logs come up.
These devices are all plugged in to power, and have their lock time set to never in order to allow the iPad to remain awake and with our app in the foreground.
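(For reference, the programmatic equivalent of the never-lock setting would be [UIApplication sharedApplication].idleTimerDisabled = YES; we rely on the Settings toggle, but the effect is the same.)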
In an effort to solve this issue, we turned our performance way down, waiting 30 seconds between each image and 2 full minutes between each batch of images. This worked and the crashing stopped; however, it would take days to download all of our images.
We are trying to find a happy medium where the performance is reasonable, and the app does not crash.
However, what is haunting me is that no matter the setting, and even at full-bore performance, the app never crashes on the older devices; it only crashes on the newer devices.
Conventional wisdom would suggest that should not be possible.
What am I missing here?
When I profile using Instruments, I see the app sitting at a comfortable 13% average while downloading, and there is a 20 second gap in between batches, so the iPad should have plenty of time to do any cleanup.
Anyone have any ideas? Feel free to request additional information, I'm not sure what else would be helpful.
EDIT 1: Downloader Code Below:
//Assume the following instance variables are set up:
self.operationQueue = NSOperationQueue to download the images.
self.urlSession = NSURLSession with ephemeralSessionConfiguration, 60 second timeoutIntervalForRequest
self.conditions = NSMutableArray to house the NSConditions used below.
self.countRemaining = NSUInteger which keeps track of how many images are left to be downloaded.
//Starts the downloading process by setting up the variables needed for downloading.
-(void)startDownloading
{
//If the operation queue doesn't exist, re-create it here.
if(!self.operationQueue)
{
self.operationQueue = [[NSOperationQueue alloc] init];
[self.operationQueue addObserver:self forKeyPath:KEY_PATH options:0 context:nil];
[self.operationQueue setName:QUEUE_NAME];
[self.operationQueue setMaxConcurrentOperationCount:2];
}
//If the session is nil, re-create it here.
if(!self.urlSession)
{
self.urlSession = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration ephemeralSessionConfiguration]
delegate:self
delegateQueue:nil];
}
if(self.countRemaining == 0)
{
[self performSelectorInBackground:@selector(startDownloadForNextBatch:) withObject:nil];
self.countRemaining = 1;
}
}
//Starts each batch. Called again on observance of the operation queue's task count being 0.
-(void)startDownloadForNextBatch:(id)object
{
[NSThread sleepForTimeInterval:20.0]; // 20 second gap between batches
self.countRemaining = //Go get the count remaining from the database.
if (self.countRemaining > 0)
{
NSArray *imageRecordsToDownload = //Go get the next batch of URLs for the images to download from the database.
[imageRecordsToDownload enumerateObjectsUsingBlock:^(NSDictionary *imageRecord,
NSUInteger index,
BOOL *stop)
{
NSInvocationOperation *invokeOp = [[NSInvocationOperation alloc] initWithTarget:self
selector:@selector(downloadImageForRecord:)
object:imageRecord];
[self.operationQueue addOperation:invokeOp];
}];
}
}
//Performs one image download.
-(void)downloadImageForRecord:(NSDictionary *)imageRecord
{
NSCondition *downloadCondition = [[NSCondition alloc] init];
[self.conditions addObject:downloadCondition];
NSURL *imageURL = //Get this record's image URL from imageRecord.
[[self.urlSession downloadTaskWithURL:imageURL
completionHandler:^(NSURL *location,
NSURLResponse *response,
NSError *error)
{
if(error)
{
//Record error below.
}
else
{
//Move the downloaded image to the correct directory.
NSError *moveError;
[[NSFileManager defaultManager] moveItemAtURL:location toURL:finalURL error:&moveError];
//Create a thumbnail version of the image for use in a search grid.
}
//Record the final outcome for this record by updating the database with either an error code, or the file path to where the image was saved.
//Sleep for some time to allow the CPU to rest.
[NSThread sleepForTimeInterval:0.05]; // 0.05 second gap between images.
//Finally, signal our condition.
[downloadCondition signal];
}]
resume];
[downloadCondition lock];
[downloadCondition wait];
[downloadCondition unlock];
}
//If the downloads need to be stopped, for whatever reason (i.e. the user logs out), this function is called to stop the process entirely:
-(void)stopDownloading
{
//Immediately suspend the queue.
[self.operationQueue setSuspended:YES];
//If any conditions remain, signal them, then remove them. This was added to avoid deadlock issues with the user logging out and then logging back in in rapid succession.
[self.conditions enumerateObjectsUsingBlock:^(NSCondition *condition,
NSUInteger idx,
BOOL * _Nonnull stop)
{
[condition signal];
}];
[self setConditions:nil];
[self setConditions:[NSMutableArray array]];
[self.urlSession invalidateAndCancel];
[self setCountRemaining:0];
[self.operationQueue cancelAllOperations];
[self setOperationQueue:nil];
}
EDIT 2: CPU usage screenshot from Instruments. Peaks are ~50%, valleys are ~13% CPU usage.
EDIT 3: Ran the app until failure while watching Console; observed a memory issue
Alright! Finally observed the crash on my iPhone 11 Pro after over an hour downloading images, which matches the scenario reported by my other testers.
The Console reports my app was killed specifically for using too much memory. If I am reading this report correctly, my app used over 2 GB of RAM. I'm assuming that this has more to do with the internal management of nsurlsessiond, since neither Xcode nor Instruments shows this leak during debugging.
Console reports: "kernel 232912.788 memorystatus: killing_specific_process pid 7075 [PharosSales] (per-process-limit 10) 2148353KB - memorystatus_available_pages: 38718"
Thankfully, I start receiving memory warnings around the 1 hour mark. I should be able to pause (suspend) my operation queue for some time (let's say 30 seconds) in order to let the system clear its memory.
Alternatively, I could call stopDownloading, with a GCD dispatch_after call to startDownloading again.
What do you guys think about this solution? Is there a more elegant way to respond to memory warnings?
Where do you think this memory usage is coming from?
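For concreteness, here is the shape of the pause-and-resume idea I'm considering (a sketch only; the 30-second delay is a guess, and handleMemoryWarning: is a hypothetical method on this downloader class):
//Register once, e.g. in init:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleMemoryWarning:)
                                             name:UIApplicationDidReceiveMemoryWarningNotification
                                           object:nil];
//Suspend the queue on a warning; resume after giving the system time to recover.
-(void)handleMemoryWarning:(NSNotification *)note
{
    [self.operationQueue setSuspended:YES];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(30.0 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self.operationQueue setSuspended:NO];
    });
}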

EDIT 4: Eureka!! Found internal Apple API memory leak
After digging into the 'killing_specific_process' memory-related console message, I found the following post:
Stack Overflow NSData leak discussion
Based on that discussion of using NSData writeToFile:error:, I looked around to see if I was somehow using this method.
Turns out, the logic that I was using to generate a thumbnail from the original image used this statement to write the generated thumbnail image to disk.
If I commented out this logic, the app no longer crashed at all (was able to pull down all of the images without failure!).
I had already planned on swapping this legacy Core Graphics code out for the WWDC 2018-demonstrated usage of ImageIO.
After recoding this function to use ImageIO, I am pleased to report that the app no longer crashes, and the thumbnail logic is super-optimized as well!
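For anyone who lands here later: the replacement is essentially the ImageIO downsampling pattern from that WWDC session. Here is a rough sketch of the shape (sourceURL, thumbURL, and the 200-pixel cap are placeholders, and error handling is omitted):
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>
//Sketch only: create a downsampled thumbnail straight from the file on disk.
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)sourceURL, NULL);
NSDictionary *options = @{(id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (id)kCGImageSourceCreateThumbnailWithTransform : @YES,
                          (id)kCGImageSourceThumbnailMaxPixelSize : @200};
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
//Write the thumbnail out as JPEG without ever decoding the full-size image into a CGContext.
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)thumbURL, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, thumbnail, NULL);
CGImageDestinationFinalize(destination);
CGImageRelease(thumbnail);
CFRelease(destination);
CFRelease(source);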
Thanks for all your help!

Related

Why does this for loop bleed memory?

I am using ARC for my iOS project and am using a library called SSKeychain to access/save items to the keychain. I expect my app to access keychain items once every 10 seconds or so (to access an API security token) at peak load, and as such I wanted to test this library to see how it handles being called frequently. I made this loop to simulate an insane number of calls and noticed that it bleeds a significant amount (~75 MB) of memory when run on an iPhone (not the simulator):
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
dispatch_async(dispatch_get_main_queue(), ^{
NSUInteger beginMemory = available_memory();
for (int i = 0; i < 10000; ++i) {
@autoreleasepool {
NSError * error2 = nil;
SSKeychainQuery* query2 = [[SSKeychainQuery alloc] init];
query2.service = @"Eko";
query2.account = @"loginPINForAccountID-2";
query2.password = nil;
[query2 fetch:&error2];
}
}
NSUInteger endMemory = available_memory();
NSLog(#"Started with %u, ended with %u, used %u", beginMemory, endMemory, endMemory-beginMemory);
});
return YES;
}
static NSUInteger available_memory(void) {
// Requires #import <mach/mach.h>
NSUInteger result = 0;
struct task_basic_info info;
mach_msg_type_number_t size = sizeof(info);
if (task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&info, &size) == KERN_SUCCESS) {
result = info.resident_size;
}
return result;
}
I am using SSKeychain, which can be found here. This test bleeds about ~75 MB of memory regardless of whether things are actually stored in the keychain.
Any ideas what is happening? Is my testing methodology flawed?
I ran your code under the Leaks Instrument and this is what I saw from the Allocations track -
Which is what you would expect - a lot of memory allocated during the loop and then it is released.
Looking at the detail you see -
Persistent bytes on the heap of 2.36MB - This is the memory actually used by the app 'now' (i.e. after the loop with the app 'idling')
Persistent objects of 8,646 - again, the number of objects allocated "now".
Transient objects 663,288 - The total number of objects that have been created on the heap over the application lifetime. You can see from the difference between transient and persistent that most have been released.
Total bytes of 58.70MB - This is the total amount of memory that has been allocated during execution. Not the total of memory in use, but the total of the amounts that have been allocated regardless of whether or not those allocations have been subsequently freed.
The difference between the light and dark pink bar also shows the difference between the current 'active' memory use and the total use.
You can also see from the Leak Checks track that there are no leaks detected.
So, in summary, your code uses a lot of transient memory, as you would expect from a tight loop, but you wouldn't see this memory use in the normal course of your application execution, where the keychain is accessed a few times every second or minute or whatever.
Now, I would imagine that having gone to the effort of growing the heap to support all of those objects, iOS isn't going to release that now freed heap memory back to the system straight away; it is possible that your app may need a large heap space again later, which is why your code reports that a lot of memory is in use and why you should be wary of trying to build your own instrumentation rather than using the tools available.
You should use Instruments to figure out where/what is causing a leak. It's a very good tool to know how to use.
This article is a little dated but you should get the basic gist.
Ray Wenderlich - Instruments
Going off of Paulw11's comment, I stumbled across this:
From the NSAutoreleasePool Class Reference:
The Application Kit creates an autorelease pool on the main thread at the beginning of every cycle of the event loop, and drains it at the end, thereby releasing any autoreleased objects generated while processing an event.
So when you check it with instruments make sure the event loop has had time to finish. Maybe all you need to do is let the program keep running and then pause the debugger and check instruments again.

Unable to sustain a Constant Speed while Uploading files in the background with NSURLSession

I am trying to upload some 100 images to S3 in the background with AFURLSessionManager, in small batches of 10, like what is being done here - Manage the number of active tasks in a background NSURLSession
I am using a shared NSURLSession and adding more tasks to it as earlier tasks complete. The average size of each file is about 1.6 MB, and the number of tasks guaranteed to run per task queue is 5.
Here is my method for adding the tasks:
(also available as an easier-to-read gist)
- (void) addTasksToSessionWithTaskObject:(Task*)taskObject withSessionInitialisationNeeded:(BOOL) needed{
NSString *filePath = [[NSBundle mainBundle] pathForResource:pathForResourceFile ofType:resourceFileType];
S3PutObjectRequest *putObjectRequest = [[S3PutObjectRequest alloc] initWithKey:targetFileKey
inBucket:_bucketname];
putObjectRequest.cannedACL = [S3CannedACL publicReadWrite];
putObjectRequest.filename = filePath;
putObjectRequest.contentType = [resourceFileType isEqualToString:@"MOV"] ? @"movie/mov" : @"image/jpg";
putObjectRequest.endpoint = @"http://s3.amazonaws.com";
putObjectRequest.contentLength=[[[NSFileManager defaultManager]
attributesOfItemAtPath:filePath error:nil] fileSize];
putObjectRequest.delegate = self;
[putObjectRequest configureURLRequest];
NSMutableURLRequest *request = [s3Client signS3Request:putObjectRequest];
NSMutableURLRequest *request2 = [[NSMutableURLRequest alloc]initWithURL:[NSURL URLWithString:[NSString stringWithFormat:@"http://s3.amazonaws.com/UploadTest/%@",taskObject.fileKey]]];
[request2 setHTTPMethod:request.HTTPMethod];
[request2 setAllHTTPHeaderFields:[request allHTTPHeaderFields]];
if(needed) {
sharedSession = [self backgroundSession];
}
NSURLSessionUploadTask *task = [sharedSession uploadTaskWithRequest:request2 fromFile:forFileUrl];
task.taskDescription = pathForResourceFile;
[currentlyActiveTaskIdArray addObject:@([task taskIdentifier])];
[task resume];
}
And this what I've done with the delegate
- (void)URLSession:(NSURLSession *)sessionI task:(NSURLSessionTask *)task
didCompleteWithError:(NSError *)error{
dispatch_async(dispatch_get_main_queue(), ^{
__block UIBackgroundTaskIdentifier bgTaskI = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
[[UIApplication sharedApplication] endBackgroundTask:bgTaskI];
}];
if([currentlyActiveTaskIdArray containsObject:@([task taskIdentifier])]){
[currentlyActiveTaskIdArray removeObject:@([task taskIdentifier])];
}
if(currentlyActiveTaskIdArray.count < LOWER_SLAB_FOR_TASKS + 1){
[self initiateS3UploadForSetOfTasksIsItBeginningOfUpload:NO];
}
[[UIApplication sharedApplication] endBackgroundTask:bgTaskI];
});
}
Here is the Code to add more tasks
- (void) initiateS3UploadForSetOfTasksIsItBeginningOfUpload:(BOOL)beginning{
int i=0;
for(Task *eachTaskObject in tasksArray){
if(i < numberOfTasksTobeAdded){
[self addTasksToSessionWithTaskObject:eachTaskObject withSessionInitialisationNeeded:NO];
i++;
}
}
}
I've been running tests with 100 files in Foreground mode and Background mode. In Foreground mode, it uploads the files at a consistent, steady speed: it completes 90 files in the first 3 minutes, and the remaining 10 files in 20 seconds.
When I run the app in Background mode, I would expect it to upload the first 90 files just as fast as it did in the 3 minute Foreground window, and then slow down after that... but that's not the case. In Background mode, it uploads 15 files in the first minute, then it starts slowing down... a lot. It starts uploading in 8 file batches in slower and slower intervals: 1 minute, 3 minutes, 5 minutes, 10 minutes, and now 17 minutes. We're at 65 files 46 minutes in.
Is there a way to either keep it fast for at least the first 3 minutes, or keep a consistent speed in the background?
UPDATE: Following the comments from Clay here, I've switched back to NSURLSession from AFURLSessionManager because, as he points out, using block-based callbacks is an extremely risky business with a background NSURLSession. Further, I've played with HTTPMaximumConnectionsPerHost and set it to around 10; this has given better results, but nowhere near what I would want.
From what I can tell, setTaskDidCompleteBlock: is not an Apple API, NSURLSession-associated method. It is an AFURLSessionManager method (docs). If you are using AFNetworking on this, then you need to be announcing that bold, top, front and center. That is not the same, at all, as using NSURLSession. I would guess AFNetworking's background NSURLSession-based implementation comes with its own foibles and idiosyncrasies.
For my part, whatever success I've had with sustained background NSURLSession uploads are using only the stock API.
Addressing questions, etc.
Regarding AFNetworking: we use it for general web api I/O. At the time NSURLSession came out, AFNetworking really didn't robustly support app-in-background ops, so I didn't use it. Perhaps because I went through the background NSURLSession pain & hazing, I look askance at AFNetworking backgrounding under the rubric of "Now you have two problems". But maybe they have cracked the nut by now.
I strive for one NSURLSession. I started out being cavalier about creation & destruction of sessions, but found this made for some truly gnarly problems. Experiences seem to vary on this.
I use the default HTTPMaximumConnectionsPerHost, no problems there. The Apple docs are silent on the default value, but here's what lldb tells me in the random particular device/OS I chose:
(lldb) p [config HTTPMaximumConnectionsPerHost]
(NSInteger) $0 = 4
If you are having troubles with backgrounding slowing down, I doubt tweaking this is on the right track.
FWIW, background NSURLSessions do not support the block-based interfaces; delegate only.
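To illustrate the delegate-only shape (a bare-bones sketch, not your exact setup; the identifier string and fileURL are placeholders):
//Background sessions require a delegate; completion-block task variants are rejected.
NSURLSessionConfiguration *config =
    [NSURLSessionConfiguration backgroundSessionConfigurationWithIdentifier:@"com.example.upload"];
NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                      delegate:self
                                                 delegateQueue:nil];
NSURLSessionUploadTask *task = [session uploadTaskWithRequest:request fromFile:fileURL];
[task resume];
//Results arrive here, not in a completionHandler:
- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task
didCompleteWithError:(NSError *)error
{
    //Enqueue the next task(s) from this callback.
}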

Core Data Import - Not releasing memory

My question is about Core Data and memory not being released. I am doing a sync process, importing data from a web service which returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000. But to isolate the problem, I am right now only creating the items of the first and second level, leaving the relationships out; those are 9043 objects.
I started checking the amount of memory used, because the app was crashing at the end of the process (with the full data set). The first memory check is after loading in memory the json, so that the measurement really takes only in consideration the creation, and insert of the objects into Core Data. What I use to check the memory used is this code (source)
-(void) get_free_memory {
struct task_basic_info info;
mach_msg_type_number_t size = sizeof(info);
kern_return_t kerr = task_info(mach_task_self(),
TASK_BASIC_INFO,
(task_info_t)&info,
&size);
if( kerr == KERN_SUCCESS ) {
NSLog(#"Memory in use (in bytes): %f",(float)(info.resident_size/1024.0)/1024.0 );
} else {
NSLog(#"Error with task_info(): %s", mach_error_string(kerr));
}
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType used to read (only reading) the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager is set to nil, used to import the data)
The BMC is independent to the MMC, so BMC is no child context of MMC. And they do not share any parent context. I don't need BMC to notify changes to MMC. So BMC only needs to create/update/delete the data.
Plaform:
iPad 2 and 3
iOS: I have tested with the deployment target set to 5.1 and 6.1. There is no difference.
Xcode 4.6.2
ARC
Problem:
While importing the data, the memory used doesn't stop increasing, and iOS doesn't seem able to drain it even after the process ends. As the data sample grows, this leads to memory warnings and then to the app being closed.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to have in mind when importing data to Core Data (Stackoverflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent an Apple bug report, with no response yet from Apple. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amount of data. Although he mentions:
"I can import millions of records in a stable 3MB of memory without
calling -reset."
This makes me think this might be somehow possible? (Source)
Tests:
Data Sample: creating a total of 9043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems {
[self.context performBlock:^{
for (int i=0; i < [self.downloadedRecords count];) {
@autoreleasepool
{
[self get_free_memory]; // prints current memory used
for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
{
NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
Item *item=[self createItem];
objectsCount++;
// fills in the item object with data from the record, no relationship creation is happening
[self updateItem:item WithRecord:record];
// creates the subitems, fills them in with data from record, relationship creation is turned off
[self processSubitemsWithItem:item AndRecord:record];
}
// Context save is done before draining the autoreleasepool, as specified in research 5)
[self.context save:nil];
// Faulting all the created items
for (NSManagedObject *object in [self.context registeredObjects]) {
[self.context refreshObject:object mergeChanges:NO];
}
// Double tap the previous action by reseting the context
[self.context reset];
}
}
}];
[self check_memory];// performs a repeated selector to [self get_free_memory] to view the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds keeps the memory at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
tried letting the thread wait a bit to see if the memory recovers, as in research 4)
setting context to nil after the whole process
Doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in less memory being retained, leaving it at 20 MB. But it still doesn't decrease and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check the heap growth, and this seems fine too. Also, no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate if anyone could help me with ideas of what else I could test, or maybe pointing to what I am doing wrong. Or it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used instruments to profile the memory usage with the Activity Monitor template and the result shown in "Real Memory Usage" is the same as the one that gets printed in the console with the get_free_memory and the memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled in the scheme: under Arguments they were turned off, but under Diagnostics "Enable Zombie Objects" was checked...
Turning this off maintains the memory stable.
Thanks to everyone who read through the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000 iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop. Then adjust your batch size to fine-tune.
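Concretely, something like this shape, based on your processItems (a sketch only; batchSize stays the tuning knob, and error handling is omitted):
//Sketch: one pool per batch, drained only after the context has been saved.
[self.context performBlock:^{
    NSUInteger total = [self.downloadedRecords count];
    for (NSUInteger i = 0; i < total; i += batchSize) {
        @autoreleasepool {
            NSUInteger end = MIN(i + batchSize, total);
            for (NSUInteger j = i; j < end; j++) {
                NSDictionary *record = [self.downloadedRecords objectAtIndex:j];
                Item *item = [self createItem];
                [self updateItem:item WithRecord:record];
                [self processSubitemsWithItem:item AndRecord:record];
            }
            //Save first, so the pool drains objects the context no longer needs.
            [self.context save:nil];
            [self.context reset];
        }
    }
}];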
I have made tests with more than 500,000 records on an original iPad 1. The size of the JSON string alone was close to 40MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, I could claim up to approx. 70MB of memory on an original iPad.

ios memory going up very fast

I have a pretty general question here.
What would you do in general to find who's taking your memory?
I have a video encoder with a pretty complex setup: the images live in one controller and the encoder in another, and I ask for the images and get them back through delegates that sometimes pass through many levels of controllers; I'm also using some dispatch_async calls in the process. The images are snapshots of a UIView, processed with Core Graphics; I retain the final image and release it in the other controller after use. Everything works fine and memory stays around 25 MB constantly, but after I finish the encoding the memory climbs very fast: in at most a minute it goes from 25 MB to 330 MB, and of course it crashes. I put in logs to see if it is still asking for images, but there doesn't seem to be any problem; the encoder stops as expected. The encoder is set to run in the background.
One important thing: if I try to find leaks (or allocations, because Leaks reports nothing with ARC), the app crashes sooner, but not because of memory. I suspect I messed up the dispatches somehow, and because of delays caused by Instruments something is not available at the expected time. However, I have trouble finding this too without logs. Can I see logs while I'm debugging with Instruments?
Thanks for any info that will help.
Edit: I succeeded in running Instruments with Allocations without changing anything; it seems the crash is not consistent. I saved the Instruments report so you can see how the memory goes up; there's an allocation that is causing this, and I think the question comes down to how to read it. The file is here: http://ge.tt/1PF97Pj/v/0?c
The problem here is that you're "busy-waiting" on adaptor.assetWriterInput.readyForMoreMediaData -- i.e. calling it over and over in a tight loop. This is, generally speaking, bad practice. The headers state that this property is Key-Value Observable, so you would be better off restructuring your code to listen for Key-Value change notifications in order to advance the overall process. Even worse, depending on how AVAssetWriter works (I'm not sure if it's run-loop based or not), the act of busy-waiting here may actually prevent the asset writer from doing any real work, since the run loop may be effectively deadlocked waiting for work to be done that might not happen until you let the run loop continue.
Now you may be asking yourself: How is busy-waiting causing memory pressure? It's causing memory pressure because behind the scenes, readyForMoreMediaData is causing autoreleased objects to be allocated every time you call it. Because you busy-wait on this value, checking it over and over in a tight loop, it just allocates more and more objects, and they never get released, because the run loop never has a chance to pop the autorelease pool for you. (see below for more detail about what the allocations are really for) If you wanted to continue this (ill-advised) busy-waiting, you could mitigate your memory issue by doing something like this:
BOOL ready = NO;
do {
@autoreleasepool {
ready = adaptor.assetWriterInput.readyForMoreMediaData;
}
} while (!ready);
This will cause any autoreleased objects created by readyForMoreMediaData to be released after each check. But really, you would be much better served in the long run by restructuring this code to avoid busy-waiting. If you absolutely must busy-wait, at least do something like usleep(500); on each pass of the loop, so you're not thrashing the CPU as much. But don't busy-wait.
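If you do restructure toward KVO, the shape would be roughly this (a sketch; appendNextBuffer is a hypothetical method of yours that performs a single append). Note that AVAssetWriterInput also provides requestMediaDataWhenReadyOnQueue:usingBlock:, which packages this exact pattern for you:
//Sketch: let the writer tell you when it's ready instead of polling it.
[adaptor.assetWriterInput addObserver:self
                           forKeyPath:@"readyForMoreMediaData"
                              options:NSKeyValueObservingOptionNew
                              context:NULL];
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"readyForMoreMediaData"] &&
        adaptor.assetWriterInput.readyForMoreMediaData) {
        [self appendNextBuffer]; //Hypothetical: performs one append, then returns.
    }
}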
EDIT: I also see that you wanted to understand how to figure this out from Instruments. Let me try to explain. Starting from the file you posted, here's what I did:
I clicked on the Allocations row in the top pane
Then I selected the "Created & Still Living" option (because if the things were getting destroyed, we wouldn't be seeing heap growth.)
Next, I applied a time filter by Option-dragging a small range in the big "ramp" that you see.
At this point, the window looks like this:
Here I see that we have tons of very similar 4K malloc'ed objects in the list. This is the smoking gun.
Now I select one of those, and expand the right pane of the window to show me a stack trace.
At this point, the window looks like this:
In the right panel we see the stack trace where that object is being created, and we see that it's being alloced way down in AVAssetWriterInput, but the first function below (visually above) the last frame in your code is -[AVAssetWriterInput isReadyForMoreMediaData]. The autorelease in the backtrace there is a hint that this is related to autoreleased objects, and sitting in a tight loop like that, the standard autorelease mechanism never gets a chance to run (i.e. pop the current pool).
My conclusion from this stack is that something in -[AVAssetWriterInput isReadyForMoreMediaData] (probably the _helper function in the next stack frame) does a [[foo retain] autorelease] before returning its result. The autorelease mechanism needs to keep track of all the things that have been autoreleased until the autorelease pool is popped/drained. In order to keep track of those, it needs to allocate space for its "list of things waiting to be autoreleased". That's my guess as to why these are malloc blocks and not autoreleased objects. (i.e. there aren't any objects being allocated, but rather just space to keep track of all the autorelease operations that have happened since the pool was pushed -- of which there are MANY, because you're checking this property in a tight loop.)
That's how I diagnosed the issue. Hopefully that will help you in the future.
To answer my own question: the memory issue is fixed if I remove the dispatch_async calls; however, now my UI is blocked, which is not good at all. There should be a way to combine all this so I do not block it. Here is my code:
- (void) image:(CGImageRef)cgimage atFrameTime:(CMTime)frameTime {
//NSLog(#"> ExporterController image");
NSLog(#"ExporterController image atFrameTime %lli", frameTime.value);
if (!self.isInBackground && frameTime.value % 20 == 0) {
dispatch_async(dispatch_get_main_queue(),^{
//logo.imageView.image = [UIImage imageWithCGImage:cgimage];
statusLabel.text = [NSString stringWithFormat:@"%i%%", frameCount/**100/self.videoMaximumFrames*/];
});
}
if (cgimage == nil || prepareForCancel) {
NSLog(#"FINALIZE THE VIDEO PREMATURELY cgimage == nil or prepareForCancel is YES");
[self finalizeVideo];
[logo stop];
return;
}
// Add the image to the video file
//dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0),^{
NSLog(#"ExporterController buffer");
CVPixelBufferRef buffer = [self pixelBufferFromCGImage:cgimage andSize:videoSize];
NSLog(#"ExporterController buffer ok");
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
//printf("appending framecount %d, %lld %d\n", frameCount, frameTime.value, frameTime.timescale);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if (buffer) CVBufferRelease(buffer);
while (!adaptor.assetWriterInput.readyForMoreMediaData) {}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
//[NSThread sleepForTimeInterval:0.1];
while(!adaptor.assetWriterInput.readyForMoreMediaData) {}
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n", frameCount, j);
}
NSLog(#"ExporterController cgimage alive");
CGImageRelease(cgimage);
NSLog(#"ExporterController cgimage released");
//});
frameCount++;
if (frameCount > 100) {
NSLog(#"FINALIZING VIDEO");
//dispatch_async(dispatch_get_main_queue(),^{
[self finalizeVideo];
//});
}
else {
NSLog(#"ExporterController prepare for next one");
//dispatch_async(dispatch_get_main_queue(),^{
//dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0),^{
NSLog(#"ExporterController requesting next image");
[self.slideshowDelegate requestImageForFrameTime:CMTimeMake (frameCount, (int32_t)kRecordingFPS)];
//});
}
}

Why does an asynchronous NSURLConnection make the UI sluggish on iOS?

I noticed that the framerate while scrolling a collection view on an iPhone 4 dropped significantly (at times to 5 FPS) when a download using NSURLConnection was taking place in the background. I first suspected AFNetworking to be the culprit, but it turns out that the same thing happens when I simply use a block:
- (void)startBlockDownload:(id)sender
{
NSLog(#"starting block download");
dispatch_queue_t defQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
void (^downloadBlock) (void);
downloadBlock = ^{
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:_urlString]];
NSURLResponse *response = nil;
NSError *error = nil;
NSData* result = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
NSLog(#"block request done");
};
dispatch_async(defQueue, downloadBlock);
}
What gives? Is a background download so demanding that it renders the UI extremely sluggish? Is it the slow flash memory? Is there anything that can be done to keep the UI very responsive while doing a background download?
I've created a sample project to demonstrate the issue: https://github.com/jfahrenkrug/AFNetworkingPerformanceTest
Also see this issue on AFNetworking that I have started about the topic: https://github.com/AFNetworking/AFNetworking/issues/1030#issuecomment-18563005
Any help is appreciated!
As the comments you linked to say: The iPhone 4 is still a single-core machine so no matter how much "multitasking" you do, it will still only be executing one set of code at once. What's worse is that the sendSynchronousRequest and the UI code are both blocking sets of code, so they will take up the maximum amount of time allotted to them by the OS and then the OS will perform an expensive context switch to prepare to execute the other.
You can try two things:
1) Use the async API of NSURLConnection instead of dispatching away a synch request (though I have the feeling you tried this with AFNetworking).
2) Lower the priority of the queue you dispatch to (preferably to DISPATCH_QUEUE_PRIORITY_BACKGROUND if possible). This might give the main queue more cycles to execute on, but it may not since the entire operation is just one big chunk.
If those both fail to work, then it probably simply is too much for a single-core processor to handle (scrolling is not a light operation, and downloading requires constant running time to receive data). That's my best guess anyway...
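For what it's worth, option 1 with the stock async API would look something like this (a sketch based on your snippet; the throwaway NSOperationQueue just keeps the callback off the main queue):
//Sketch: the built-in async API; no dispatch_async or synchronous call needed.
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:_urlString]];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[[NSOperationQueue alloc] init]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                           NSLog(@"block request done");
                       }];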
