Reduce memory usage of AVAssetWriter - iOS

As the title says, I am having some trouble with AVAssetWriter and memory.
Some notes about my environment/requirements:
I am NOT using ARC, but if there is a way to simply use it and get it all working I'm all for it. My attempts have not made any difference though. And the environment I will be using this in requires memory to be minimised / released ASAP.
Objective-C is a requirement
Memory usage must be as low as possible; the 300 MB it takes up now is unstable when testing on my device (iPhone X).
The code
This is the code used when taking the screenshots below https://gist.github.com/jontelang/8f01b895321d761cbb8cda9d7a5be3bd
The problem / items kept around in memory
Most of the things that take up a lot of memory throughout the processing seem to be allocated at the beginning.
So at this point it doesn't seem to me that the issue is with my code. The code I personally have control over, namely loading the images, creating the buffer, and releasing it, does not seem to be where the memory problem is. For example, if I mark in Instruments the majority of the time after that point, the memory is stable and none of it is kept around.
The only persistent memory in that range is about 5 MB, and only because it happens to be deallocated just after the marking period ends.
Now what?
I actually started writing this question with the focus being on whether my code was releasing things correctly or not, but now it seems like that is fine. So what are my options now?
Is there something I can configure within the current code to make the memory requirements smaller?
Is there simply something wrong with my setup of the writer/input?
Do I need to use a totally different way of making a video to be able to make this work?
A note on using CVPixelBufferPool
In the documentation of CVPixelBufferCreate Apple states:
If you need to create and release a number of pixel buffers, you should instead use a pixel buffer pool (see CVPixelBufferPool) for efficient reuse of pixel buffer memory.
I have tried this as well, but I saw no change in memory usage. Changing the attributes of the pool didn't seem to have any effect either, so there is a small possibility that I am not actually using it 100% properly, although from comparing against code online it seems like I am, at least. And the output file works.
The code for that, is here https://gist.github.com/jontelang/41a702d831afd9f9ceeb0f9f5365de03
And here is a slightly different version where I set up the pool in a slightly different way https://gist.github.com/jontelang/c0351337bd496a6c7e0c94293adf881f.
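For reference, the pool-based path in those gists boils down to roughly this (a minimal sketch; it assumes the adaptor's pixelBufferPool is non-nil, which is only the case once writing has started):

CVPixelBufferRef buffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     writerAdaptor.pixelBufferPool,
                                                     &buffer);
if (status == kCVReturnSuccess && buffer != NULL) {
    CVPixelBufferLockBaseAddress(buffer, 0);
    // ... draw the CGImage into the buffer's base address here ...
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    CVPixelBufferRelease(buffer); // hands the buffer back to the pool for reuse
}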
Update 1
So I looked a bit deeper into a trace, to figure out when/where the majority of the allocations are coming from. Here is an annotated image of that:
The takeaway is:
The space is not allocated "with" the AVAssetWriter
The 500 MB that is held until the end is allocated within 500 ms after the processing starts
It seems that it is done internally in AVAssetWriter
I have the .trace file uploaded here: https://www.dropbox.com/sh/f3tf0gw8gamu924/AAACrAbleYzbyeoCbC9FQLR6a?dl=0

When creating the dispatch queue, ensure you create it with an autorelease pool: replace DISPATCH_QUEUE_SERIAL with DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL, for example as sketched below.
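A minimal sketch of that queue creation (the label is just an example; the attribute requires iOS 10 or later):

// Serial queue that drains an autorelease pool after each block it executes.
dispatch_queue_t recordingQueue =
    dispatch_queue_create("com.example.recordingQueue",
                          DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL);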
Also wrap each iteration of the for loop in an @autoreleasepool, like this:
[assetWriterInput requestMediaDataWhenReadyOnQueue:recordingQueue usingBlock:^{
    for (int i = 1; i < 200; ++i) {
        @autoreleasepool {
            while (![assetWriterInput isReadyForMoreMediaData]) {
                [NSThread sleepForTimeInterval:0.01];
            }
            NSString *path = [NSString stringWithFormat:@"/Users/jontelang/Desktop/SnapperVideoDump/frames/frame_%i.jpg", i];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            CGImageRef ref = [image CGImage];
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ref pool:writerAdaptor.pixelBufferPool];
            CMTime presentTime = CMTimeAdd(CMTimeMake(i, 60), CMTimeMake(1, 60));
            [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            CVPixelBufferRelease(buffer);
        }
    }
    [assetWriterInput markAsFinished];
    [assetWriter finishWritingWithCompletionHandler:^{}];
}];

No, in my app I see it peaking at around 240 MB. It's my first time looking at these allocations, which is interesting.
I'm using AVAssetWriter to write a video file by streaming CMSampleBuffers, which come from AVCaptureVideoDataOutputSampleBufferDelegate as the camera's capture output delivers frames in real time.
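For context, that streaming path looks roughly like this (a minimal sketch, assuming the writer session has already been started and videoWriterInput is configured to accept the camera's format; the property name is illustrative):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Append the camera frame directly; if the input is not ready the frame is
    // dropped rather than buffered, which keeps memory flat at the cost of an
    // occasional skipped frame.
    if (self.videoWriterInput.readyForMoreMediaData) {
        [self.videoWriterInput appendSampleBuffer:sampleBuffer];
    }
}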

While I have not yet found the actual issue, the memory problem I described in this question was solved by simply doing it on the actual device instead of the simulator.

@Eugene_Dudnyk's answer is spot on: the autorelease pool INSIDE the for or while loop is the key. Here is how I got it working in Swift. Also, please use AVAssetWriterInputPixelBufferAdaptor for the pixel buffer pool:
videoInput.requestMediaDataWhenReady(on: videoInputQueue) { [weak self] in
    while videoInput.isReadyForMoreMediaData {
        autoreleasepool {
            guard let sample = assetReaderVideoOutput.copyNextSampleBuffer(),
                  let buffer = CMSampleBufferGetImageBuffer(sample) else {
                print("Error while processing video frames")
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    closeWriter()
                }
                return
            }
            // Process image and render back to buffer (in-place operation, where ciProcessedImage is your processed new image)
            self?.getCIContext().render(ciProcessedImage, to: buffer)
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sample)
            self?.adapter?.append(buffer, withPresentationTime: timeStamp)
        }
    }
}
My memory usage stopped rising.

Related

Why does this for loop bleed memory?

I am using ARC for my iOS project and am using a library called SSKeychain to access/save items to the keychain. I expect my app to access keychain items once every 10 seconds or so (to access an API security token) at peak load, and as such I wanted to test this library to see how it handles being called frequently. I made this loop to simulate an insane amount of calls and noticed that it bleeds a significant amount (~75 MB) of memory when run on an iPhone (not the simulator):
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    dispatch_async(dispatch_get_main_queue(), ^{
        NSUInteger beginMemory = available_memory();
        for (int i = 0; i < 10000; ++i) {
            @autoreleasepool {
                NSError *error2 = nil;
                SSKeychainQuery *query2 = [[SSKeychainQuery alloc] init];
                query2.service = @"Eko";
                query2.account = @"loginPINForAccountID-2";
                query2.password = nil;
                [query2 fetch:&error2];
            }
        }
        NSUInteger endMemory = available_memory();
        NSLog(@"Started with %u, ended with %u, used %u", beginMemory, endMemory, endMemory - beginMemory);
    });
    return YES;
}

static NSUInteger available_memory(void) {
    // Requires #import <mach/mach.h>
    NSUInteger result = 0;
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    if (task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&info, &size) == KERN_SUCCESS) {
        result = info.resident_size;
    }
    return result;
}
I am using SSKeychain, which can be found here. This test bleeds about ~75 MB of memory regardless of whether things are actually stored in the keychain.
Any ideas what is happening? Is my testing methodology flawed?
I ran your code under the Leaks Instrument and this is what I saw from the Allocations track -
Which is what you would expect - a lot of memory allocated during the loop and then it is released.
Looking at the detail you see -
Persistent bytes on the heap of 2.36MB - This is the memory actually used by the app 'now' (i.e. after the loop with the app 'idling')
Persistent objects of 8,646 - again, the number of objects allocated "now".
Transient objects 663,288 - The total number of objects that have been created on the heap over the application lifetime. You can see from the difference between transient and persistent that most have been released.
Total bytes of 58.70MB - This is the total amount of memory that has been allocated during execution. Not the total of memory in use, but the total of the amounts that have been allocated regardless of whether or not those allocations have been subsequently freed.
The difference between the light and dark pink bar also shows the difference between the current 'active' memory use and the total use.
You can also see from the Leak Checks track that there are no leaks detected.
So, in summary, your code uses a lot of transient memory, as you would expect from a tight loop, but you wouldn't see this memory use in the normal course of your application's execution, where the keychain is accessed a few times every second or minute or whatever.
Now, I would imagine that having gone to the effort of growing the heap to support all of those objects, iOS isn't going to release that now freed heap memory back to the system straight away; it is possible that your app may need a large heap space again later, which is why your code reports that a lot of memory is in use and why you should be wary of trying to build your own instrumentation rather than using the tools available.
You should use Instruments to figure out where/what is causing a leak. It's a very good tool to know how to use.
This article is a little dated but you should get the basic gist.
Ray Wenderlich - Instruments
Going off of Paulw11's comment I stumbled across this,
From NSAutoreleasePool Class Reference:
The Application Kit creates an autorelease pool on the main thread at
the beginning of every cycle of the event loop, and drains it at the
end, thereby releasing any autoreleased objects generated while
processing an event.
So when you check it with instruments make sure the event loop has had time to finish. Maybe all you need to do is let the program keep running and then pause the debugger and check instruments again.

UIImage and NSData memory leak

I have an app that needs to take screenshots and save them as files. I'm using ARC, so not releasing variables manually, and it seems my code has some serious leaks.
Here is what I'm running:
- (BOOL)saveNow:(NSString *)filePath {
    UIImage *image = [self.view getImage];
    NSData *imageData = UIImagePNGRepresentation(image);
    return [imageData writeToFile:filePath atomically:YES];
}
Where getImage is a method of a category on UIView:
- (UIImage *)getImage {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [[UIScreen mainScreen] scale]);
    [[self layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
When running this on a non-retina iPad, the creation of a UIImage object fills the memory with an additional 1 MB, NSData adds a further 4 MB, and as I run this many times, this memory is not released! On a retina iPad each call to saveNow: costs ~17 MB, which causes the device to run out of memory after a few runs.
A little extra info. I'm running this code in a loop that iterates a total of over 300 times (small changes are made to the view on each iteration and I need a screenshot of each for review purposes). If I reduce the number of iterations so that the device does not run out of memory, I can see that the memory is released once the method that contains the loop returns. However, this is not ideal, and I would expect that taking the memory-heavy code out into its own method (saveNow:) would have made an improvement, but it does not. How can I force these objects to be released as soon as they are not needed, instead of waiting for the parent method to return? Hopefully without having to disable ARC on the entire project.
Edit: I tried using @autoreleasepool like this:
@autoreleasepool {
    [self saveNow:filePath];
}
The results are better but not perfect. It releases about 4 MB of memory when the block is complete, but another 1 MB is still stuck until the container method returns. So it's an 80% improvement (yay!) but I'm aiming for 100% :) I'll read up more about @autoreleasepool as I have not used it before.
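For completeness, the containing loop has roughly this shape (a sketch; updateViewForIteration: and filePathForIteration: stand in for the real per-iteration work and are not the actual method names):

for (NSUInteger i = 0; i < 300; i++) {
    @autoreleasepool {
        [self updateViewForIteration:i];              // hypothetical: small change to the view
        [self saveNow:[self filePathForIteration:i]]; // hypothetical: path for this screenshot
    }
}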
I'll make my comment on @autoreleasepool a legitimate answer to help you out.
Apple suggests using @autoreleasepool where memory is a concern. The following paragraph is taken from the Core Data documentation, but I believe it can be applied in this situation as well:
In common with many other situations, when you use Core Data to import
a data file it is important to remember “normal rules” of Cocoa
application development apply. If you import a data file that you have
to parse in some way, it is likely you will create a large number of
temporary objects. These can take up a lot of memory and lead to
paging. Just as you would with a non-Core Data application, you can
use local autorelease pool blocks to put a bound on how many
additional objects reside in memory. For more about the interaction
between Core Data and memory management, see “Reducing Memory
Overhead.”
Basically, @autoreleasepool creates a scope; temporary (autoreleased) objects created inside it are released as soon as execution leaves that scope.
You're expecting the memory to be released completely, which might not be the case with Apple frameworks. There might be some caching going on behind the curtains (this is just an idea). That is why the remaining 1 MB may be OK. However, just to be safe, I would recommend increasing the iteration count and seeing what happens.
As you mentioned in your comment your loop is big and nested, so there might be something else going on. Try to get rid of all extra operations and see what happens.
Hope this helps, Cheers!

High peak memory usage when writing a large amount of images to a video file using AVAssetWriter

I created a function to assemble images into a video file using AVAssetWriter. There are a few threads in the forum about this implementation. I have successfully written the video using AVAssetWriter. My question is not about the implementation but about memory consumption. In my case, when I write a 4-second, 30 FPS, 1024x768 video, the peak memory usage is around 300 MB. For longer videos, 10 seconds etc., it crashes with a memory warning. The problem is that the memory usage accumulates during the loop which writes each image into the video file. After the loop, memory usage drops back to a normal level without leaking.
The following code is one iteration of the loop. It appends a new image to the AVAssetWriter:
CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[newimg CGImage]
                                                  size:CGSizeMake(self.frameOrigWidth, self.frameOrigHeight)
                                               poolRef:adaptor.pixelBufferPool];
BOOL append_ok = NO;
int j = 0;
CMTime frameTime = CMTimeMake(frameCount, (int32_t)FPS);
while (!append_ok && j < 30) // attempt a maximum of 30 times
{
    if (adaptor.assetWriterInput.readyForMoreMediaData)
    {
        if (frameCount == 0) append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
        else append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
        [NSThread sleepForTimeInterval:0.05]; // use sleep instead of the run loop because it is not on the main thread
    } else {
        [NSThread sleepForTimeInterval:0.1];
    }
    j++;
}
if (buffer) CVBufferRelease(buffer);
Even though the buffer is released on the last line, the memory is not released during the whole loop; I guess this is because the writer retains the buffer until after the loop, when finishWriting executes.
I tried wrapping this part in @autoreleasepool {}. It effectively stopped the peak memory accumulation, but it no longer writes the video file successfully, even though there is no runtime error.
The above observations are from debugging on a real device.
A possible solution I thought of is to segment the writing, or to pause a few times during the writing cycle so the buffers can really be released by the writer, but I have not found a way to do that. I would appreciate hearing from anyone who knows how to solve this peak memory problem.
I have the same issue as you. I tried using sleepForTimeInterval inside -(void)didReceiveMemoryWarning, hoping that when this event is triggered it would pause the main process and give the system a little time to flush unused memory, but I don't know whether this is effective or not.

Core Data Import - Not releasing memory

My question is about Core Data and memory not being released. I am doing a sync process that imports data from a web service which returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000. But to isolate the problem I am currently only creating the items of the first and second level, leaving the relationships out, which is 9,043 objects.
I started checking the amount of memory used because the app was crashing at the end of the process (with the full data set). The first memory check is after loading the JSON into memory, so the measurement only takes into consideration the creation and insertion of the objects into Core Data. What I use to check the memory used is this code (source):
- (void)get_free_memory {
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in bytes): %f", (float)(info.resident_size / 1024.0) / 1024.0);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType used to read (only reading) the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager is set to nil, used to import the data)
The BMC is independent of the MMC, so the BMC is not a child context of the MMC, and they do not share any parent context. I don't need the BMC to notify the MMC of changes, so the BMC only needs to create/update/delete the data.
Platform:
iPad 2 and 3
iOS, I have tested to set the deployment target to 5.1 and 6.1. There is no difference
XCode 4.6.2
ARC
Problem:
While importing the data, the memory in use doesn't stop increasing, and iOS doesn't seem able to drain it even after the process ends. If the data sample grows, this leads to memory warnings and eventually to the app being killed.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to have in mind when importing data to Core Data (Stackoverflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent an Apple bug report with no response from Apple yet. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amount of data. Although he mentions:
"I can import millions of records in a stable 3MB of memory without
calling -reset."
This makes me think this might be somehow possible? (Source)
Tests:
Data Sample: creating a total of 9043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems {
    [self.context performBlock:^{
        for (int i = 0; i < [self.downloadedRecords count];) {
            @autoreleasepool
            {
                [self get_free_memory]; // prints current memory used
                for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
                {
                    NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                    Item *item = [self createItem];
                    objectsCount++;
                    // fills in the item object with data from the record, no relationship creation is happening
                    [self updateItem:item WithRecord:record];
                    // creates the subitems, fills them in with data from record, relationship creation is turned off
                    [self processSubitemsWithItem:item AndRecord:record];
                }
                // Context save is done before draining the autoreleasepool, as specified in research 5)
                [self.context save:nil];
                // Faulting all the created items
                for (NSManagedObject *object in [self.context registeredObjects]) {
                    [self.context refreshObject:object mergeChanges:NO];
                }
                // Double tap the previous action by resetting the context
                [self.context reset];
            }
        }
    }];
    [self check_memory]; // performs a repeated selector to [self get_free_memory] to view the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds keeps the memory at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
letting the thread wait a bit to see if the memory recovers, as in example 4)
setting the context to nil after the whole process
doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in less memory being held, leaving it at 20 MB, but it still doesn't decrease and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check the heap growth, and this seems to be fine too. Also, no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate it if anyone could help me with ideas of what else I could test, or point out what I am doing wrong. Or maybe it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used instruments to profile the memory usage with the Activity Monitor template and the result shown in "Real Memory Usage" is the same as the one that gets printed in the console with the get_free_memory and the memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled in the scheme: in the Arguments tab they were turned off, but in Diagnostics "Enable Zombie Objects" was checked...
Turning this off maintains the memory stable.
Thanks to those who read through the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3 MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000
iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop, as sketched below. Then adjust your batch size to fine-tune.
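For illustration, the inner loop from your processItems would then look roughly like this (a sketch reusing your own names; batchSize is whatever you tune it to):

for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
{
    @autoreleasepool {
        // One record per pool iteration, so its temporaries are released immediately.
        NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
        Item *item = [self createItem];
        [self updateItem:item WithRecord:record];
        [self processSubitemsWithItem:item AndRecord:record];
    }
}
// After the batch: save the MOC, then let the outer @autoreleasepool drain.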
I have made tests with more than 500,000 records on an original iPad 1. The size of the JSON string alone was close to 40 MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, memory use reached approximately 70 MB on an original iPad.

iOS memory going up very fast

I have a pretty general question here.
What would you do in general to find who's taking your memory?
I have a video encoder with a pretty complex setup: the images live in one controller and the encoder in another, and I ask for the images and receive them through delegates that sometimes go through many levels of controllers; I'm also using some dispatch_async calls in the process. The images are snapshots of a UIView, processed with Core Graphics; I retain the final image and release it in the other controller after use. Everything works fine and the memory stays around 25 MB, but after I finish the encoding the memory goes up very fast: within a minute at most it goes from 25 MB to 330 MB and of course crashes. I put in logs to check whether it is still asking for images, but there doesn't seem to be any problem there; the encoder stops as expected. The encoder is set to run in the background.
One important thing is that if I try to find leaks (or allocations, because Leaks reports nothing with ARC), the app crashes sooner, but not because of memory. I suspect I messed up the dispatches somehow, and because of delays caused by Instruments something is not available at the expected time. However, I have trouble finding this too without logs. Can I see logs while I'm debugging with Instruments?
Thanks for any info that will help.
Edit: I succeeded in running Instruments with the Allocations template without doing anything special; it seems the crash is not consistent. I saved the Instruments report and you can see how the memory goes up; there is an allocation that is causing this, and I think the question comes down to how to read it. The file is here http://ge.tt/1PF97Pj/v/0?c
The problem here is that you're "busy-waiting" on adaptor.assetWriterInput.readyForMoreMediaData, i.e. calling it over and over in a tight loop. This is, generally speaking, bad practice. The headers state that this property is Key-Value Observable, so you would be better off restructuring your code to listen for Key-Value change notifications in order to advance the overall process. Even worse, depending on how AVAssetWriterInput works (I'm not sure whether it's run-loop based or not), the act of busy-waiting here may actually prevent the asset writer input from doing any real work, since the run loop may be effectively deadlocked waiting for work to be done that might not happen until you let the run loop continue.
Now you may be asking yourself: How is busy-waiting causing memory pressure? It's causing memory pressure because behind the scenes, readyForMoreMediaData is causing autoreleased objects to be allocated every time you call it. Because you busy-wait on this value, checking it over and over in a tight loop, it just allocates more and more objects, and they never get released, because the run loop never has a chance to pop the autorelease pool for you. (see below for more detail about what the allocations are really for) If you wanted to continue this (ill-advised) busy-waiting, you could mitigate your memory issue by doing something like this:
BOOL ready = NO;
do {
    @autoreleasepool {
        ready = adaptor.assetWriterInput.readyForMoreMediaData;
    }
} while (!ready);
This will cause any autoreleased objects created by readyForMoreMediaData to be released after each check. But really, you would be much better served in the long run by restructuring this code to avoid busy-waiting. If you absolutely must busy-wait, at least do something like usleep(500); on each pass of the loop, so you're not thrashing the CPU as much. But don't busy-wait.
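If restructuring is an option, a minimal sketch of the pull model that AVAssetWriterInput provides looks like this (writerQueue is a serial queue you own; nextPixelBuffer and nextFrameTime are hypothetical helpers standing in for however you produce frames):

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:writerQueue usingBlock:^{
    while (adaptor.assetWriterInput.readyForMoreMediaData) {
        @autoreleasepool {
            CVPixelBufferRef buffer = [self nextPixelBuffer]; // hypothetical helper
            if (buffer == NULL) {
                [adaptor.assetWriterInput markAsFinished];
                break;
            }
            [adaptor appendPixelBuffer:buffer withPresentationTime:[self nextFrameTime]]; // hypothetical helper
            CVPixelBufferRelease(buffer);
        }
    }
}];

With this shape the framework calls your block only when the input is ready, so there is no waiting loop in which autoreleased objects can pile up.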
EDIT: I also see that you wanted to understand how to figure this out from Instruments. Let me try to explain. Starting from the file you posted, here's what I did:
I clicked on the Allocations row in the top pane
Then I selected the "Created & Still Living" option (because if the things were getting destroyed, we wouldn't be seeing heap growth.)
Next, I applied a time filter by Option-dragging a small range in the big "ramp" that you see.
At this point, the window looks like this:
Here I see that we have tons of very similar 4K malloc'ed objects in the list. This is the smoking gun.
Now I select one of those, and expand the right pane of the window to show me a stack trace.
At this point, the window looks like this:
In the right panel we see the stack trace where that object is being created, and we see that it's being alloc'ed way down in AVAssetWriterInput, but the first function below (visually above) the last frame in your code is -[AVAssetWriterInput isReadyForMoreMediaData]. The autorelease in the backtrace there is a hint that this is related to autoreleased objects, and sitting in a tight loop like that, the standard autorelease mechanism never gets a chance to run (i.e. pop the current pool).
My conclusion from this stack is that something in -[AVAssetWriterInput isReadyForMoreMediaData] (probably the _helper function in the next stack frame) does a [[foo retain] autorelease] before returning its result. The autorelease mechanism needs to keep track of all the things that have been autoreleased until the autorelease pool is popped/drained. In order to keep track of those, it needs to allocate space for its "list of things waiting to be autoreleased". That's my guess as to why these are malloc blocks and not autoreleased objects (i.e. there aren't any objects being allocated, but rather just space to keep track of all the autorelease operations that have happened since the pool was pushed, of which there are MANY because you're checking this property in a tight loop).
That's how I diagnosed the issue. Hopefully that will help you in the future.
To answer my own question: the memory issue is fixed if I remove the dispatch_async calls; however, now my UI is blocked, which is not good at all. There should be a way to combine all this so I do not block it. Here is my code:
- (void)image:(CGImageRef)cgimage atFrameTime:(CMTime)frameTime {
    //NSLog(@"> ExporterController image");
    NSLog(@"ExporterController image atFrameTime %lli", frameTime.value);
    if (!self.isInBackground && frameTime.value % 20 == 0) {
        dispatch_async(dispatch_get_main_queue(), ^{
            //logo.imageView.image = [UIImage imageWithCGImage:cgimage];
            statusLabel.text = [NSString stringWithFormat:@"%i%%", frameCount/**100/self.videoMaximumFrames*/];
        });
    }
    if (cgimage == nil || prepareForCancel) {
        NSLog(@"FINALIZE THE VIDEO PREMATURELY cgimage == nil or prepareForCancel is YES");
        [self finalizeVideo];
        [logo stop];
        return;
    }
    // Add the image to the video file
    //dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    NSLog(@"ExporterController buffer");
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:cgimage andSize:videoSize];
    NSLog(@"ExporterController buffer ok");
    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok && j < 30) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            //printf("appending framecount %d, %lld %d\n", frameCount, frameTime.value, frameTime.timescale);
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            if (buffer) CVBufferRelease(buffer);
            while (!adaptor.assetWriterInput.readyForMoreMediaData) {}
        }
        else {
            printf("adaptor not ready %d, %d\n", frameCount, j);
            //[NSThread sleepForTimeInterval:0.1];
            while (!adaptor.assetWriterInput.readyForMoreMediaData) {}
        }
        j++;
    }
    if (!append_ok) {
        printf("error appending image %d times %d\n", frameCount, j);
    }
    NSLog(@"ExporterController cgimage alive");
    CGImageRelease(cgimage);
    NSLog(@"ExporterController cgimage released");
    //});
    frameCount++;
    if (frameCount > 100) {
        NSLog(@"FINALIZING VIDEO");
        //dispatch_async(dispatch_get_main_queue(), ^{
        [self finalizeVideo];
        //});
    }
    else {
        NSLog(@"ExporterController prepare for next one");
        //dispatch_async(dispatch_get_main_queue(), ^{
        //dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        NSLog(@"ExporterController requesting next image");
        [self.slideshowDelegate requestImageForFrameTime:CMTimeMake(frameCount, (int32_t)kRecordingFPS)];
        //});
    }
}

Resources