Here is the context:
I've been developing an audio-related app for a while now, and I've hit a wall and am not sure what to do next.
I recently implemented a custom class that plots an FFT display of the audio output. This class is a subclass of UIView, which means that every time I need to plot a new FFT update I have to call setNeedsDisplay on my instance of the class with new sample values.
Since I need to plot a new FFT for every frame (one frame ~= 1024 samples), the draw method of my FFT view gets called very often (1024 / 44,100 ~= 0.0232 seconds, i.e. roughly 43 times per second). The sample calculation itself runs at 44,100 samples per second. I am not really experienced with managing threading in iOS, so I read a little bit about it, and here is how I have done it.
How it has been done: I have an NSObject subclass, "AudioEngine" (AudioEngine.h/.m), that takes care of all the DSP processing in my app, and this is where I set up my FFT display. All the sample values are calculated and assigned to my FFT subclass inside a dispatch_get_global_queue block, as the values need to be updated continuously in the background. The setNeedsDisplay method is called once the sample index has reached the maximum frame number, and this is done inside a dispatch_async(dispatch_get_main_queue()) block.
In "AudioEngine.m"
for (k = 0; k < nchnls; k++) {
    buffer = (SInt32 *) ioData->mBuffers[k].mData;
    if (cdata->shouldMute == false) {
        buffer[frame] = (SInt32) lrintf(spout[nsmps++] * coef);

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            @autoreleasepool {
                // FFT display init here as a singleton
                SpectralView *specView = [SpectralView sharedInstance];
                // Pointer to the "samples" property of my subclass
                Float32 *specSamps = [specView samples];
                // Set the number of frames the FFT should take
                [specView setInNumberFrames:inNumberFrames];
                // Scale the sample values
                specSamps[frame] = (buffer[frame] * (1. / coef) * 0.5);
            }
        });
    } else {
        // If output is muted
        buffer[frame] = 0;
    }
}

// Once the number of samples has reached ksmps (vector size) we update the FFT
if (nsmps == ksmps * nchnls) {
    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        [specView prepareToDraw];
        [specView setNeedsDisplay];
    });
}
What my issue is:
I get various threading issues, especially on the main thread, such as Thread 1: EXC_BAD_ACCESS (code=1, address=0xf00000c), sometimes at app launch while viewDidLoad is being called, but also whenever I try to interact with any UI object.
The UI responsiveness becomes insanely slow, even on the FFT display.
What I think the problem is: it is definitely related to threading, as you may have guessed, but I am really inexperienced with this topic. I thought about forcing every UI display update onto the main thread to solve these issues, but again, I am not even sure how to do that properly.
Any input/insight would be a huge help.
Thanks in advance!
As written, your SpectralView* needs to be fully thread safe.
Your for() loop first shoves the frame/sample processing off to the high-priority concurrent queue. Since that dispatch is asynchronous, it returns immediately, at which point your code enqueues a request on the main thread to update the spectral view's display.
This pretty much guarantees that the spectral view is going to have to be updating the display simultaneously with the background processing code also updating the spectral view's state.
There is a second issue; your code is going to end up parallelizing the processing of all channels. In general, unmitigated concurrency is a recipe for slow performance. Also, you're going to cause an update on the main thread for each channel, regardless of whether or not the processing of that channel is completed.
The code needs to be restructured. You really should split the model layer from the view layer. The model layer could either be written to be thread safe or, during processing, you can grab a snapshot of the data to be displayed and toss that at the SpectralView. Alternatively, your model layer could have an isProcessing flag that the SpectralView could key off of to know that it shouldn't be reading data.
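For illustration only, a minimal sketch of the snapshot approach might look like this; publishSamples:count: and updateWithSamples:count: are hypothetical names, not methods that exist in the question's code:

// Minimal sketch of the snapshot approach. The method names
// publishSamples:count: and updateWithSamples:count: are hypothetical.
- (void)publishSamples:(const Float32 *)samples count:(NSUInteger)count {
    // Copy the frame on the processing queue, so the view never reads
    // memory the DSP code is still writing to.
    NSData *snapshot = [NSData dataWithBytes:samples length:count * sizeof(Float32)];
    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        // Hypothetical setter that copies the snapshot into the view's own storage
        [specView updateWithSamples:(const Float32 *)snapshot.bytes count:count];
        [specView setNeedsDisplay]; // UIKit work stays on the main thread
    });
}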
This is relevant:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
I'm trying to move lengthy OpenGL draw operations into a GCD queue so I can get other stuff done while the GPU grinds along. I would much rather do this with GCD than by adding real threading to my app. Literally all I want to do is be able to:
Not block on a glDrawArrays() call so the rest of the UI can stay responsive when GL rendering gets very slow.
Drop glDrawArrays() calls when we aren't finishing them anyway (don't build up a queue of frames that just grows and grows)
On Apple's website, the docs say:
GCD and NSOperationQueue objects can execute your tasks on a thread of their choosing. They may create a thread specifically for that task, or they may reuse an existing thread. But in either case, you cannot guarantee which thread executes the task. For an OpenGL ES app, that means:
Each task must set the context before executing any OpenGL ES commands.
Two tasks that access the same context may never execute simultaneously.
Each task should clear the thread’s context before exiting.
Sounds pretty straightforward.
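For what it's worth, a task that follows all three rules might look roughly like this (a sketch only; the queue name and self.context are placeholders):

// Sketch of a GL task that obeys Apple's three rules; names are placeholders.
dispatch_queue_t glQueue = dispatch_queue_create("com.example.gldraw", DISPATCH_QUEUE_SERIAL);

dispatch_async(glQueue, ^{
    [EAGLContext setCurrentContext:self.context];  // rule 1: set the context first

    // ... issue OpenGL ES commands here; the serial queue ensures no two
    // tasks that touch this context run at the same time (rule 2)

    glFlush();
    [EAGLContext setCurrentContext:nil];           // rule 3: clear the thread's context
});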
For the sake of simplicity in this question, I'm starting with a new, bone-stock version of the Apple template that comes up in the "New Project" dialog for an "OpenGL ES" game. When you instantiate it, compile, and run it, you should see two cubes rotating on a grey field.
To that code, I have added a GCD queue. Starting with the interface section of ViewController.m:
dispatch_queue_t openGLESDrawQueue;
Then setting it up in the ViewController's viewDidLoad:
openGLESDrawQueue = dispatch_queue_create("GLDRAWINGQUEUE", NULL);
Finally, I make these very small changes to the drawInRect method that CADisplayLink ends up triggering:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    void (^glDrawBlock)(void) = ^{
        [EAGLContext setCurrentContext:self.context];

        glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glBindVertexArrayOES(_vertexArray);

        // Render the object with GLKit
        [self.effect prepareToDraw];
        glDrawArrays(GL_TRIANGLES, 0, 36);

        // Render the object again with ES2
        glUseProgram(_program);
        glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
        glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
        glDrawArrays(GL_TRIANGLES, 0, 36);
    };

    dispatch_async(openGLESDrawQueue, glDrawBlock);
}
This does not work. The drawing goes crazy. Drawing the same block via dispatch_sync() works fine, though.
Let's double check Apple's list:
Each task must set the context before executing any OpenGL ES commands.
Ok. I'm setting the context. It's an Objective-C object pointer with a lifetime longer than the block anyway, so it should be captured fine. Also, I can check it in the debugger and it looks fine. Also, when I draw from dispatch_sync, it works. So this does not appear to be the problem.
Two tasks that access the same context may never execute simultaneously.
The only code accessing the GL context once it's set up is the code in this method, which is in turn in this block. Since this is a serial queue, only one instance of this block should ever be drawing at a time anyway. Further, if I add a @synchronized(self.context) {} block, it doesn't fix anything. Also, in other code with very slow drawing, I added a semaphore to skip adding blocks to the queue when the previous one hadn't finished, and it dropped frames fine (according to the NSLog() messages it was spitting out), but it didn't fix the drawing. HOWEVER, there is the possibility that some of the GLKit code that I can't see manipulates the context from the main thread in ways I don't understand. This is my second-highest rated theory right now, despite the fact that @synchronized() doesn't change the problem and OpenGL Profiler doesn't show any thread conflicts.
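(For reference, the frame-dropping semaphore mentioned above was essentially this pattern; a sketch only, with the semaphore created once with a count of 1:)

// Sketch of the frame-dropping idea: skip this frame if the previous block
// hasn't finished, rather than letting the queue back up.
if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0) {
    return; // previous frame still in flight; drop this one
}
dispatch_async(openGLESDrawQueue, ^{
    // ... draw ...
    dispatch_semaphore_signal(frameRenderingSemaphore);
});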
Each task should clear the thread’s context before exiting.
I'm not totally clear on what this means. The GCD thread's context? That's fine; we're not adding anything to the queue's context, so there's nothing to clean up. The EAGLContext that we're drawing to? I don't know what else we could do. Certainly not actually glClear it, since that would just erase everything. Also, there's some rendering code in Sunset Lake's Molecules that looks like this:
Code:
dispatch_async(openGLESContextQueue, ^{
    [EAGLContext setCurrentContext:context];

    GLfloat currentModelViewMatrix[9];
    [self convert3DTransform:&currentCalculatedMatrix to3x3Matrix:currentModelViewMatrix];

    CATransform3D inverseMatrix = CATransform3DInvert(currentCalculatedMatrix);
    GLfloat inverseModelViewMatrix[9];
    [self convert3DTransform:&inverseMatrix to3x3Matrix:inverseModelViewMatrix];

    GLfloat currentTranslation[3];
    currentTranslation[0] = accumulatedModelTranslation[0];
    currentTranslation[1] = accumulatedModelTranslation[1];
    currentTranslation[2] = accumulatedModelTranslation[2];

    GLfloat currentScaleFactor = currentModelScaleFactor;

    [self precalculateAOLookupTextureForInverseMatrix:inverseModelViewMatrix];
    [self renderDepthTextureForModelViewMatrix:currentModelViewMatrix translation:currentTranslation scale:currentScaleFactor];
    [self renderRaytracedSceneForModelViewMatrix:currentModelViewMatrix inverseMatrix:inverseModelViewMatrix translation:currentTranslation scale:currentScaleFactor];

    const GLenum discards[] = {GL_DEPTH_ATTACHMENT};
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);

    [self presentRenderBuffer];

    dispatch_semaphore_signal(frameRenderingSemaphore);
});
This code works, and I don't see any additional cleanup. I can't figure out what this code is doing differently than mine. One thing that's different is it looks like literally everything that touches the GL context is being done from the same GCD dispatch queue. However, when I make my code like this, it doesn't fix anything.
The last thing that's different is that this code does not appear to use GLKit. The code above (along with the code I'm actually interested in) does use GLKit.
At this point, I have three theories about this problem:
1. I am making a conceptual error about the interaction between blocks, GCD, and OpenGL ES.
2. GLKit's GLKViewController or GLKView do some drawing to or manipulation of the EAGLContext in between calls to drawInRect. While my drawInRect blocks are being worked on, this happens, messing things up.
3. The fact that I'm relying on the - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect method is ITSELF the problem. I think of this method as, "Hey, you automatically have a CADisplayLink configured, and every time it wants a frame, it'll hit this method. Do whatever the hell you want; in normal code here you just issue glDrawArrays commands. It's not like I'm passing back a framebuffer object or a CGImageRef containing what I want to end up on the screen; I'm issuing GL commands." HOWEVER, this could be wrong. Maybe you just can't defer drawing in this method in any way without causing problems. To test this theory, I moved all the draw code into a method called drawStuff and then replaced the body of the drawInRect: method with:
[NSTimer scheduledTimerWithTimeInterval:10 target:self selector:@selector(drawStuff) userInfo:nil repeats:NO];
The app comes up, displays the color the view is glClear'd to for ten seconds, and then draws like normal. So that theory doesn't look too strong either.
There is a similar question posted here that has one answer, which is upvoted and accepted:
The code in the dispatch block isn't going to work. By the time it gets executed, all of the OpenGL state for that frame will have long since been destroyed. If you were to put a call to glGetError() in that block, I'm sure it would tell you the same. You need to make sure that all your drawing code is done in that glkView method for the OpenGL state to be valid. When you do that dispatch, you're essentially shunting the execution of that drawing code out of the scope of that method.
I don't see why this should be true. But:
I'm only closing over references to things in the block that are going to outlive the block, and they're things like Objective-C pointers from the enclosing object scope.
I can check them in the debugger, they look fine.
I inserted a glGetError() call after every GL operation and it never returns anything but zero.
Drawing from a block with dispatch_sync works.
I tried a thing where, in the drawInRect method, I save the block to an ivar and then set an NSTimer to call drawStuff. In drawStuff I just invoke the block. It draws fine.
The NSTimer case draws asynchronously, but it does not involve drawing from another thread since AFAIK NSTimer invocations just get scheduled on the setting thread's runloop. So it has to do with threads.
Can anyone clue me in on what I'm missing here?
This is not working because, as borrrden says, GLKit calls presentRenderbuffer: immediately after - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect completes.
It works in your timer case because the drawStuff method is called on the main thread at the beginning of a draw cycle. Your - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect effectively does nothing but schedule the drawing to happen on the main thread again another 10 seconds later, and the previously scheduled drawing call is then rendered at the end of the drawInRect: method. This does nothing except delay the drawing by 10 seconds; everything still happens on the main thread.
If you want to go the route of rendering off the main thread, GLKit is not going to be a good match. You are going to need to set up your own thread with a runloop, hook up a CADisplayLink to this runloop, and then render from this queue. GLKViewController is configured to use the main runloop for this, and will always present the render buffer at the end of each frame, which will cause havoc with whatever you are doing on a different thread.
Depending on your GL needs you may find it simpler doing all the GL stuff on the main thread, and doing the "other stuff" off the main thread.
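As a rough sketch of that setup (method names here are illustrative, and presenting the renderbuffer is left to your own code, since GLKView no longer does it for you):

// Sketch of a dedicated render thread driven by a CADisplayLink; names are illustrative.
- (void)startRenderThread {
    NSThread *renderThread = [[NSThread alloc] initWithTarget:self
                                                     selector:@selector(renderThreadMain)
                                                       object:nil];
    [renderThread start];
}

- (void)renderThreadMain {
    @autoreleasepool {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(renderFrame:)];
        [link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        [[NSRunLoop currentRunLoop] run]; // keeps the thread alive, firing renderFrame:
    }
}

- (void)renderFrame:(CADisplayLink *)link {
    [EAGLContext setCurrentContext:self.context];
    // ... issue GL commands, then bind and present your own renderbuffer ...
}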
Is it possible to create a VBO and reuse it between calls to glDrawElements within the same rendering cycle? (I tried and got weird results.) The example below omits bindings, etc.
Init code (executed only once):
glGenBuffers(...)
glBufferData(...)
Render frame code (executed for each frame):
glMapBufferOES(...)
//... Update buffer from index 0 to X
glDrawElements(...)
//... Update buffer from index 0 to Y
glDrawElements(...)
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
You need to unmap your buffer before drawing with it. If you don't unmap, that's probably why you're seeing weird results with glDrawElements.
http://www.opengl.org/sdk/docs/man/xhtml/glMapBuffer.xml
After glDrawElements is called, you can remap your buffer and fill it in again.
You will probably get better performance by not reusing the buffer right away. Remapping right after the draw will probably block until the draw is completed.
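A sketch of that unmap-before-draw pattern, with placeholder buffer names and counts standing in for the parts the question elides:

// Sketch: unmap before each draw, remap afterwards. Buffer names, counts,
// and the index-buffer binding are placeholders assumed to be set up elsewhere.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);

void *ptr = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
// ... update buffer from index 0 to X via ptr ...
glUnmapBufferOES(GL_ARRAY_BUFFER);                 // unmap before drawing
glDrawElements(GL_TRIANGLES, countX, GL_UNSIGNED_SHORT, 0);

ptr = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
// ... update buffer from index 0 to Y ...
glUnmapBufferOES(GL_ARRAY_BUFFER);
glDrawElements(GL_TRIANGLES, countY, GL_UNSIGNED_SHORT, 0);

[context presentRenderbuffer:GL_RENDERBUFFER_OES];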
I'm trying to draw onto a CCRenderTexture in a thread. I've got:
EAGLSharegroup *sharegroup = [[[[CCDirector sharedDirector] openGLView] context] sharegroup];
EAGLContext *k_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:sharegroup];
[EAGLContext setCurrentContext:k_context];
at the beginning of the thread. Everything works except CCSprite's draw. I've tested the render texture with:
[rt beginWithClear:1 g:1 b:1 a:1];
[sprite visit];
[rt end];
Calling [sprite visit] does not draw the sprite. glGetError returns 0 at every step.
To investigate this further, I put everything on the UI thread and removed the context calls; the sprite is then drawn correctly by the same code. I've also verified that the sprite itself is correct by adding it to the scene.
And even if I don't use threads, adding the above "context setting" calls makes CCSprite's draw stop working, but only when drawing to a CCRenderTexture. Drawing to the screen works fine.
Any ideas how to solve this problem? Thanks in advance!
You can only draw on the thread on which the cocos2d OpenGL view (CCGLView) was created, which is normally the main thread. That's also why dispatching to the main queue fixes the issue, but it also prevents parallel execution of the code in question, since it now runs on the main thread.
If you want to speed things up by using multiple threads consider parallelizing other algorithms of your app, for example game logic like pathfinding, AI or other critical code paths.
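If the render-texture pass itself has to stay, the main-queue dispatch referred to above would look roughly like this, reusing the question's rt and sprite objects; a sketch only, and as noted it serializes the work onto the main thread rather than running it in parallel:

// Sketch: run the render-texture pass on the thread that owns the cocos2d GL view.
dispatch_async(dispatch_get_main_queue(), ^{
    [rt beginWithClear:1 g:1 b:1 a:1];
    [sprite visit];
    [rt end];
});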
When a GLKView is resized, there are some behind-the-scenes operations that take place on the buffers and context of that GLKView. During the time it takes to perform these behind-the-scenes operations, drawing to the GLKView does not produce correct results.
In my scenario, I have a GLKView with enableSetNeedsDisplay turned on, so that any time I need to update its contents on screen, I just call -setNeedsDisplay on that GLKView. I'm using the GLKView to draw images, so if I need to draw an image with a different size, I also need to change the size of the GLKView.
The problem: when I change the size of the GLKView and then call setNeedsDisplay on it, the result on screen is not correct. This is because the GLKView has not yet finished the behind-the-scenes operations triggered by the size change before it tries to draw the new image.
I found a workaround: calling performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:0 instead of calling setNeedsDisplay directly on the GLKView. This defers the redraw to the next pass of the main run loop, which gives the behind-the-scenes OpenGL operations time to complete before setNeedsDisplay runs. Although this works, I am wondering if there is a better solution. For example, is there an OpenGL call that makes the thread wait for all OpenGL operations to complete before continuing?
The solution was to reset the CIContext object after the GLKView has been resized.
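A minimal sketch of that fix, assuming the view's contents are drawn with Core Image; glkView, newFrame, and the ciContext property are placeholder names, not identifiers from the question:

// Sketch: after resizing the GLKView, rebuild the CIContext against the
// view's EAGLContext, then redraw.
glkView.frame = newFrame;

self.ciContext = [CIContext contextWithEAGLContext:glkView.context];
[glkView setNeedsDisplay];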