Changing textures using NinevehGL framework - iOS

If I try to change to another texture while the previous one is still loading, the application crashes.
Here is my code.
- (IBAction)changeTexture:(id)sender {
    self.text = [arrayEyes objectAtIndex:[sender tag]];

    NGLTexture *texture = [NGLTexture texture2DWithFile:self.text];

    NGLMaterialMulti *material = (NGLMaterialMulti *)mesh.material;
    [[material materialWithName:@"lambert16SG"] setDiffuseMap:texture];
    mesh.material = material;
    [mesh compileCoreMesh];
}

I'm going to assume that this code is hit right at the beginning of program execution. So for a bit there the model is still being loaded in a background thread.
So it's likely the NGLTexture is being assigned to the mesh's material while the mesh is still being processed on another thread. You may run into assignment issues that either throw an exception or outright crash. Try waiting for the model loader to finish before making assignments to it. Look up the NGLMeshDelegate protocol and try making the assignment in your -meshLoadingDidFinish: handler.
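For illustration, a minimal sketch of that approach (the delegate method name is the one mentioned above; the NGLParsing parameter type, the meshReady flag, and the MyViewController class name are assumptions, so check them against your NinevehGL version):

@interface MyViewController () <NGLMeshDelegate>
// Assumed flag that gates texture swaps until loading completes.
// Remember to set mesh.delegate = self when creating the mesh.
@property (nonatomic, assign) BOOL meshReady;
@end

@implementation MyViewController

// Called by NinevehGL once background parsing/loading is done.
- (void)meshLoadingDidFinish:(NGLParsing)parsing
{
    self.meshReady = YES;
}

- (IBAction)changeTexture:(id)sender
{
    if (!self.meshReady) return; // model is still loading; bail out instead of crashing

    self.text = [arrayEyes objectAtIndex:[sender tag]];
    NGLTexture *texture = [NGLTexture texture2DWithFile:self.text];

    NGLMaterialMulti *material = (NGLMaterialMulti *)mesh.material;
    [[material materialWithName:@"lambert16SG"] setDiffuseMap:texture];
    mesh.material = material;
    [mesh compileCoreMesh];
}

@end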

Related

Reduce memory usage of AVAssetWriter

As the title says, I am having some trouble with AVAssetWriter and memory.
Some notes about my environment/requirements:
I am NOT using ARC, but if there is a way to simply enable it and get everything working, I'm all for it; my attempts so far have not made any difference. The environment this will be used in requires memory to be minimised / released ASAP.
Objective-C is a requirement
Memory usage must be as low as possible; the 300 MB it takes up now is unstable when testing on my device (iPhone X).
The code
This is the code used when taking the screenshots below https://gist.github.com/jontelang/8f01b895321d761cbb8cda9d7a5be3bd
The problem / items kept around in memory
Most of the things that take up a lot of memory throughout the processing seem to be allocated at the beginning.
So at this point it doesn't seem that the issue is with my code. The parts I personally control (loading the images, creating the buffer, releasing it) do not appear to be where the memory problem lies. For example, if I mark in Instruments the majority of the time after the one above, the memory is stable and none of it is kept around.
The only reason for the persistent 5 MB is that it is deallocated just after the marking period ends.
Now what?
I actually started writing this question with the focus on whether my code was releasing things correctly or not, but now it seems like that is fine. So what are my options now?
Is there something I can configure within the current code to make the memory requirements smaller?
Is there simply something wrong with my setup of the writer/input?
Do I need to use a totally different way of making a video to be able to make this work?
A note on using CVPixelBufferPool
In the documentation of CVPixelBufferCreate Apple states:
If you need to create and release a number of pixel buffers, you should instead use a pixel buffer pool (see CVPixelBufferPool) for efficient reuse of pixel buffer memory.
I have tried this as well, but I saw no changes in memory usage. Changing the attributes for the pool didn't seem to have any effect either, so there is a small possibility that I am not actually using it 100% properly, although from comparing to code online it seems like I am, at least. And the output file works.
The code for that, is here https://gist.github.com/jontelang/41a702d831afd9f9ceeb0f9f5365de03
And here is a slightly different version where I set up the pool in a slightly different way https://gist.github.com/jontelang/c0351337bd496a6c7e0c94293adf881f.
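For reference, a minimal pool setup looks roughly like the sketch below (width, height, and pixel format here are placeholder values, and the helper names are mine, not from the gists above):

#import <CoreVideo/CoreVideo.h>

// Create the pool once, up front, with the attributes the buffers should share.
static CVPixelBufferPoolRef MakePixelBufferPool(size_t width, size_t height)
{
    NSDictionary *bufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (id)kCVPixelBufferWidthKey           : @(width),
        (id)kCVPixelBufferHeightKey          : @(height),
    };
    CVPixelBufferPoolRef pool = NULL;
    CVPixelBufferPoolCreate(kCFAllocatorDefault,
                            NULL, // no pool-level attributes
                            (__bridge CFDictionaryRef)bufferAttributes,
                            &pool);
    return pool;
}

// Per frame: borrow a buffer from the pool instead of calling CVPixelBufferCreate.
static CVPixelBufferRef CopyPooledPixelBuffer(CVPixelBufferPoolRef pool)
{
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);
    return buffer; // caller releases with CVPixelBufferRelease(), returning it to the pool
}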
Update 1
So I looked a bit deeper into a trace, to figure out when/where the majority of the allocations are coming from. Here is an annotated image of that:
The takeaway is:
The space is not allocated "with" the AVAssetWriter
The 500 MB that is held until the end is allocated within 500 ms after the processing starts
It seems that it is done internally in AVAssetWriter
I have the .trace file uploaded here: https://www.dropbox.com/sh/f3tf0gw8gamu924/AAACrAbleYzbyeoCbC9FQLR6a?dl=0
When creating the dispatch queue, make sure you create it with an autorelease pool: replace DISPATCH_QUEUE_SERIAL with DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL.
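For reference, the queue creation then looks like this (the queue label is just an example):

// Each block executed on this queue gets its own autorelease pool,
// which is drained when the block returns.
dispatch_queue_t recordingQueue =
    dispatch_queue_create("com.example.recording", DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL);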
Wrap each iteration of the for loop in an autorelease pool as well, like this:
[assetWriterInput requestMediaDataWhenReadyOnQueue:recordingQueue usingBlock:^{
    for (int i = 1; i < 200; ++i) {
        @autoreleasepool {
            while (![assetWriterInput isReadyForMoreMediaData]) {
                [NSThread sleepForTimeInterval:0.01];
            }
            NSString *path = [NSString stringWithFormat:@"/Users/jontelang/Desktop/SnapperVideoDump/frames/frame_%i.jpg", i];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            CGImageRef ref = [image CGImage];
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ref pool:writerAdaptor.pixelBufferPool];
            CMTime presentTime = CMTimeAdd(CMTimeMake(i, 60), CMTimeMake(1, 60));
            [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            CVPixelBufferRelease(buffer);
        }
    }
    [assetWriterInput markAsFinished];
    [assetWriter finishWritingWithCompletionHandler:^{}];
}];
No, I see it peaking at around 240 MB in the app. It's my first time using this allocation tracking - interesting.
I'm using AVAssetWriter to write a video file by streaming CMSampleBuffer objects received in real time from AVCaptureVideoDataOutputSampleBufferDelegate via the camera's capture output.
While I have not yet found the actual issue, the memory problem I described in this question was solved by simply doing it on the actual device instead of the simulator.
@Eugene_Dudnyk's answer is spot on: the autorelease pool INSIDE the for or while loop is the key. Here is how I got it working in Swift; also, use AVAssetWriterInputPixelBufferAdaptor for the pixel buffer pool:
videoInput.requestMediaDataWhenReady(on: videoInputQueue) { [weak self] in
    while videoInput.isReadyForMoreMediaData {
        autoreleasepool {
            guard let sample = assetReaderVideoOutput.copyNextSampleBuffer(),
                  let buffer = CMSampleBufferGetImageBuffer(sample) else {
                print("Error while processing video frames")
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    closeWriter()
                }
                return
            }
            // Process image and render back to buffer (in-place operation, where ciProcessedImage is your processed new image)
            self?.getCIContext().render(ciProcessedImage, to: buffer)
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sample)
            self?.adapter?.append(buffer, withPresentationTime: timeStamp)
        }
    }
}
My memory usage stopped rising.

iOS: Handling OpenGL code running on background threads during App Transition

I am working on an iOS application that, say on a button click, launches several threads, each executing a piece of Open GL code. These threads either have a different EAGLContext set on them, or if they use same EAGLContext, then they are synchronised (i.e. 2 threads don't set same EAGLContext in parallel).
Now suppose the app goes into background. As per Apple's documentation, we should stop all the OpenGL calls in applicationWillResignActive: callback so that by the time applicationDidEnterBackground: is called, no further GL calls are made.
I am using dispatch queues to create background threads. For example:
__block Byte *renderedData; // some memory already allocated

dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    glViewport(...);
    glBindFramebuffer(...);
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read into renderedData
});

// use renderedData for something else
My question is - how to handle applicationWillResignActive: so that any such background GL calls can be not just stopped, but also be able to resume on applicationDidBecomeActive:? Should I wait for currently running blocks to finish before returning from applicationWillResignActive:? Or should I just suspend glProcessingQueue and return?
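For concreteness, the "suspend glProcessingQueue and return" option I am considering would look roughly like this sketch (assuming the app delegate can see the queue and the context; I am not sure it is sufficient):

- (void)applicationWillResignActive:(UIApplication *)application
{
    // dispatch_sync on the serial queue waits for any block already running to finish,
    // then glFinish() makes sure the GPU has consumed everything that was submitted.
    dispatch_sync(glProcessingQueue, ^{
        [EAGLContext setCurrentContext:_eaglContext];
        glFinish();
    });
    dispatch_suspend(glProcessingQueue); // queued blocks will not start while in the background
}

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    dispatch_resume(glProcessingQueue); // pending GL blocks pick up from here
}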
I have also read that similar is the case when app is interrupted in other ways, like displaying an alert, a phone call, etc.
I can have multiple such threads at any point in time, invoked by possibly multiple view controllers, so I am looking for a scalable solution or design pattern.
The way I see it you need to either pause a thread or kill it.
If you kill it, you need to ensure all resources are released, which most likely means calling OpenGL again. In this case it might actually be better to simply wait for the block to finish executing. That means the block must not take too long to finish, which is impossible to guarantee, and since you have multiple contexts and threads this may realistically become an issue.
So pausing seems better. I am not sure there is a direct API to pause a thread, but you can make it wait. Maybe a system similar to this one can help.
The linked example seems to handle exactly what you would want; it already checks the current thread and locks that one. I guess you could pack that into some tool as a static method or a C function and wherever you are confident you can pause the thread you would simply do something like:
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];

    [ThreadManager pauseCurrentThreadIfNeeded];
    glViewport(...);
    glBindFramebuffer(...);

    [ThreadManager pauseCurrentThreadIfNeeded];
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read into renderedData

    [ThreadManager pauseCurrentThreadIfNeeded];
});
You might still have an issue with the main thread if it is used. You might want to skip pausing that one, otherwise your system may simply never wake up again (not sure though, try it).
So now you are looking at an interface for your ThreadManager that is something like:
+ (void)pause {
    __threadsPaused = YES;
}

+ (void)resume {
    __threadsPaused = NO;
}

+ (void)pauseCurrentThreadIfNeeded {
    if (__threadsPaused) {
        // TODO: insert code for locking until __threadsPaused becomes false
    }
}
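One way to fill in that TODO (just a sketch, untested) is an NSCondition that pauseCurrentThreadIfNeeded waits on and resume signals:

// Sketch: NSCondition-based pause/resume. __threadsPaused and __pauseCondition
// are file-scope statics; nothing here touches GL state.
static BOOL __threadsPaused = NO;
static NSCondition *__pauseCondition = nil;

+ (void)initialize {
    if (self == [ThreadManager class]) {
        __pauseCondition = [[NSCondition alloc] init];
    }
}

+ (void)pause {
    [__pauseCondition lock];
    __threadsPaused = YES;
    [__pauseCondition unlock];
}

+ (void)resume {
    [__pauseCondition lock];
    __threadsPaused = NO;
    [__pauseCondition broadcast]; // wake every thread parked below
    [__pauseCondition unlock];
}

+ (void)pauseCurrentThreadIfNeeded {
    [__pauseCondition lock];
    while (__threadsPaused) {
        [__pauseCondition wait]; // releases the lock while waiting
    }
    [__pauseCondition unlock];
}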
Let us know what you find out.

iOS interface freeze caused by background thread

I have an app that needs to preload a bunch of streamed videos as soon as possible so that they play instantly when the user clicks on them.
I am able to achieve this with a collection of AVPlayer objects, initialized right when the app is launched:
- (void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        NSString *streamingURL = [NSString stringWithFormat:@"https://mywebsite.com/%@.m3u8", video.fileName];
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:streamingURL] options:nil];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

        pthread_mutex_lock(&mutex_videoPlayers);
        [_videoPlayers setObject:player forKey:videoKey];
        pthread_mutex_unlock(&mutex_videoPlayers);
    }
}
The lock is defined in init as:
pthread_mutex_init(&mutex_videoPlayers, NULL);
My problem is that when I invoke this function, the app freezes for about a minute, then continues on with no problem. This is obviously because there is a lot of processing going on; according to the debug dashboard in Xcode, CPU usage spikes to about 67% during the freeze.
So I thought I could solve this by putting the operation into a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    [self preloadVideos];
});
but the app still froze briefly in exactly the same way, and CPU usage had the same pattern. I thought maybe it was because the task is too intensive and needed to be broken up into smaller tasks, so I tried serializing the loop as distinct tasks:
preloadQueue = dispatch_queue_create("preloadQueue", NULL);
...
- (void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        dispatch_async(preloadQueue, ^(void) {
            [self preloadVideo:video]; // a new function with the logic above
        });
    }
}
but that seemed to make the freeze period longer, even though max CPU usage went down to 48%.
Am I missing something with these GCD functions? Why does the AVPlayer creation block the main thread when put into background tasks?
I know it's not that there are too many AVPlayers created, because there are only 6 of them, and the app runs fine after they are created.
After adding log messages I notice that (in all implementations), the setObject call is called for every single video player before the interface's viewDidAppear method is called. Also, 5 videos load instantly, and the last - a longer one - takes a while but the freeze ends right when it completes.
Why is the app waiting for background tasks to finish before updating the views?
Update:
The app accesses videoPlayers while these tasks are running, but since I use a lock while writing, I don't lock while reading. Here is the definition:
@property (atomic, retain) NSMutableDictionary *videoPlayers;
Update: I updated preloadVideos with mutex locks and am still seeing the freezing.
Turns out the background thread was locking a resource that the main thread was accessing elsewhere. The main thread needed to wait for the resource to become freed, which caused the interface to freeze.
Your dispatch_async code should not be freezing the main thread. That should be creating the asset objects in the background. It will take time before the assets become available, but that should be ok.
What do you mean "...the app still froze briefly..." Froze how? And for how long?
How are you using the _videoPlayers array once you've loaded it? What are you doing to handle the fact that the array may only be partially loaded? (If you are looping through the _videoPlayers array when it gets written to from the background, you may crash.) At the very least you should make videoPlayers an atomic property of your class and always reference it (read and write) using property notation (self.videoPlayers or [self videoPlayers], never _videoPlayers). You will probably need better protection than that, such as using @synchronized for the code that accesses the array.
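For example, a minimal @synchronized sketch for both sides might look like this (here self is used as the lock token; pick whatever single object both sides agree on):

// Writer (background queue)
@synchronized (self) {
    [self.videoPlayers setObject:player forKey:videoKey];
}

// Reader (main thread, or wherever the players are consumed)
AVPlayer *player = nil;
@synchronized (self) {
    player = [self.videoPlayers objectForKey:videoKey];
}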

ARC with cocos2d causing unbounded heap growth and eventual memory crash?

I'm trying to track down a memory-related crash in my game. The exact error, if I happen to catch it while attached to a debugger varies. One such error message is:
Message from debugger: Terminated due to memory issue.
No crash report is generated. Here's a screenshot from the Xcode 7 Memory Report as I play on my iPhone 6; after about 10 minutes it will crash, as I enter the ~600 MB+ range:
Running generational analysis with Instruments I've found that playing through battles appears to create unbounded persistent memory growth; here you can see what happens as I play through two battles:
What is confusing is that the allocations revealed by the twirl-down are pretty much every single bit of allocated memory in the whole game. Any read of a string from a dictionary, any allocation of an array, appears in this twirl-down. Here's a representative example from drilling into an NSArray caller analysis:
At this point, it occurs to me I started this project using cocos2d-iphone v2.1 a couple of years ago, and I started an ARC project despite using a pre-ARC library. I'm wondering if I'm just now realizing I configured something horribly, horribly wrong. I set the -fno-objc-arc flag on the cocos2d files:
Either that, or I must have done something else very very stupid. But I'm at a loss. I've checked some of the usual culprits, for example:
Put an NSLog in dealloc on my CCScene subclass to make sure scenes were going away (see the snippet after this list)
Made sure to implement cleanup (to empty cached CCNodes) and call super in my subclasses
Dumped the cocos2d texture cache size, and checked it was not growing unbounded
Added low memory warning handlers, doing things like clearing the cocos2d cache
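For reference, the dealloc check in the first item is nothing more than this (the scene subclass is compiled with ARC, so there is no [super dealloc] call):

// In the CCScene subclass: if this never logs after the scene is replaced,
// something is still retaining the scene.
- (void)dealloc
{
    NSLog(@"%@ dealloc", NSStringFromClass([self class]));
}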
I've also been poring over the Apple Instruments documentation; in particular, this link explains the steps I took to create the above generational analysis.
Update
Here's another representative example, this time in tree format. You can see that I have a CCScheduler which called an update function which triggered the UI to draw a sprite. The last time you see my code, before it delves into cocos2d code, is the highlighted function, the code for which I've also pasted below.
+ (instancetype)spriteAssetSource:(NSString *)assetName {
    if (!assetName.length) {
        return nil;
    }

    BOOL hasImageSuffix = [assetName hasSuffix:EXT_IMG];
    NSString *frameName = hasImageSuffix ? [assetName substringToIndex:assetName.length - EXT_IMG.length] : assetName;
    NSString *hdFrameName = [NSString stringWithFormat:@"%@%@", frameName, EXT_HD];

    // First, hit up the sprite sheets...
    if ([[CCSpriteFrameCache sharedSpriteFrameCache] hasSpriteFrameName:hdFrameName]) {
        CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:hdFrameName];
        return [[self alloc] initWithSpriteFrame:frame];
    }
    // No sprite sheet? Try the assets.
    else {
        NSString *assetFp = hasImageSuffix ? assetName : [NSString stringWithFormat:@"%@%@", assetName, EXT_IMG];
        return [[self alloc] initWithFile:assetFp];
    }
}
What's so weird about this is that the captured memory is just the simple line to check if the file name is in the cocos2d cache:
- (BOOL)hasSpriteFrameName:(NSString *)name {
    return [_spriteFrames.allKeys containsObject:name];
}
Yet this simple function shows up all over the place in these traces...
What it feels like is that any locally scoped variable I create and pass into cocos2d gets its retain count incremented and thus never deallocates (as is the case with hdFrameName and the other variables above).
Update 2
While it's no surprise that the CCScheduler sits at the top of the abandoned objects tree, what is surprising is that some of the objects are completely unrelated to cocos2d or UI. For example, in the highlighted row, all I've done is call a function on AMLocalPlayerData that does a [NSDate date]. The entire line is:
NSTimeInterval now = [NSDate date].timeIntervalSince1970;
It seems absurd that the NSDate object could be retained somehow here, yet that seems to be what Instruments is suggesting...
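If that is really what is happening, one experiment would be to wrap the per-frame work in its own pool so autoreleased temporaries like that NSDate cannot outlive the frame. A sketch (stepBattleWithTime: is a placeholder for my actual update code):

// Scheduled update on a cocos2d-iphone 2.x node; the inner pool drains every
// frame, so autoreleased temporaries (the NSDate above, format strings, etc.)
// cannot pile up in an outer pool for the whole battle.
- (void)update:(ccTime)delta
{
    @autoreleasepool {
        NSTimeInterval now = [NSDate date].timeIntervalSince1970;
        [self stepBattleWithTime:now]; // placeholder for the real per-frame work
    }
}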
Update 3
I tried upgrading my version of cocos2d to 2.2, the last version to exist in the repository. The issue persists.

Does metal have a back buffer?

I'm currently tracking down some visual popping in my Metal app, and believe it is because I'm drawing directly to the framebuffer rather than a back buffer.
// this is when I've finished passing commands to the render buffer and issue the draw command. I believe this sends all the images directly to the framebuffer instead of using a backbuffer
[renderEncoder endEncoding];
[mtlCommandBuffer presentDrawable:frameDrawable];
[mtlCommandBuffer commit];
[mtlCommandBuffer release];
//[frameDrawable present]; // This line isn't needed (and I believe is performed by presentDrawable)
Several Google searches later, I haven't found any documentation on back buffers in Metal. I know I could roll my own, but I can't believe Metal doesn't support a back buffer.
Here is a code snippet showing how I've set up my CAMetalLayer object.
+ (id)layerClass
{
    return [CAMetalLayer class];
}

- (void)initCommon
{
    self.opaque = YES;
    self.backgroundColor = nil;
    ...
}

- (id<CAMetalDrawable>)getMetalLayer
{
    id<CAMetalDrawable> frameDrawable;
    while (!frameDrawable && !frameDrawable.texture)
    {
        frameDrawable = [self->_metalLayer nextDrawable];
    }
    return frameDrawable;
}
Can I enable a backbuffer on my CAMetalLayer object, or will I need to roll my own?
I assume by back-buffer, you mean a renderbuffer that is being rendered to, while the corresponding front-buffer is being displayed?
In Metal, the concept is provided by the drawables that you extract from CAMetalLayer. The CAMetalLayer instance maintains a small pool of drawables (generally 3), retrieves one of them from the pool each time you invoke nextDrawable, and returns it back to the pool after you've invoked presentDrawable and once rendering is complete (which may be some time later, since the GPU runs asynchronously from the CPU).
Effectively, on each frame loop, you grab a back-buffer by invoking nextDrawable, and make it eligible to become the front-buffer by invoking presentDrawable: and committing the MTLCommandBuffer.
Since there are only 3 drawables in the pool, the catch is that you have to manage this lifecycle yourself, by adding appropriate CPU resource synchronization at the time you invoke nextDrawable and in the callback you get once rendering is complete (as per the MTLCommandBuffer addCompletedHandler: callback set-up).
Typically you use a dispatch_semaphore_t for this:
_resource_semaphore = dispatch_semaphore_create(3);
then put the following just before you invoke nextDrawable:
dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);
and this in your addCompletedHandler: callback handler:
dispatch_semaphore_signal(_resource_semaphore);
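Putting those pieces together, a per-frame render method looks roughly like the sketch below (names such as _metalLayer and _commandQueue are placeholders for your own objects):

- (void)drawFrame
{
    // Wait until one of the pooled drawables is free again.
    dispatch_semaphore_wait(_resource_semaphore, DISPATCH_TIME_FOREVER);

    id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

    // ... encode your render pass(es) targeting drawable.texture here ...

    dispatch_semaphore_t semaphore = _resource_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> completed) {
        // The GPU has finished with this drawable; let the next frame reuse it.
        dispatch_semaphore_signal(semaphore);
    }];

    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];
}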
Have a look at some of the simple Metal sample apps from Apple to see this in action. There is not a lot in terms of Apple documentation on this.
