Here is the context:
I've been developing an audio-related app for a while now, and I've sort of hit a wall and am not sure what to do next.
I recently implemented a custom class in the app that plots an FFT display of the audio output. This class is a subclass of UIView, which means that every time I need to plot a new FFT update I have to call setNeedsDisplay on my instance of the class with new sample values.
As I need to plot a new FFT for every frame (frame ~= 1024 samples), the display function of my FFT view gets called a lot (every 1024 / sample rate ~= 0.0232 seconds). The samples themselves are calculated at 44,100 per second. I am not really experienced with managing threading on iOS, so I read a little about it, and here is how I have done it.
How it has been done: I have a subclass of NSObject, "AudioEngine.h", that takes care of all the DSP processing in my app, and this is where I set up the FFT display. All the sample values are calculated and assigned to my FFT subclass inside a dispatch_get_global_queue block, as the values need to be constantly updated in the background. setNeedsDisplay is called once the sample index has reached the maximum frame number, and this is done inside a dispatch_async(dispatch_get_main_queue()) block.
In "AudioEngine.m"
for (k = 0; k < nchnls; k++) {
    buffer = (SInt32 *) ioData->mBuffers[k].mData;
    if (cdata->shouldMute == false) {
        buffer[frame] = (SInt32) lrintf(spout[nsmps++] * coef);
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            @autoreleasepool {
                // FFT display initialized here as a singleton
                SpectralView *specView = [SpectralView sharedInstance];
                // Pointer to the "samples" property of my subclass
                Float32 *specSamps = [specView samples];
                // Set the number of frames the FFT should take
                [specView setInNumberFrames:inNumberFrames];
                // Scale the sample values
                specSamps[frame] = (buffer[frame] * (1. / coef) * 0.5);
            }
        });
    } else {
        // If output is muted
        buffer[frame] = 0;
    }
}

// Once the number of samples has reached ksmps (vector size) we update the FFT
if (nsmps == ksmps * nchnls) {
    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        [specView prepareToDraw];
        [specView setNeedsDisplay];
    });
}
What my issue is:
I get various threading issues, especially on the main thread, such as Thread 1: EXC_BAD_ACCESS (code=1, address=0xf00000c), sometimes at app launch while viewDidLoad is being called, but also whenever I try to interact with any UI object.
The UI responsiveness becomes insanely slow, even on the FFT display.
What I think the problem is: it is definitely related to a threading issue, as you may know, but I am really inexperienced with this topic. I thought about forcing every UI display update onto the main thread in order to solve the issues I have, but again, I am not even sure how to do that properly.
Any input/insight would be a huge help.
Thanks in advance!
As written, your SpectralView* needs to be fully thread safe.
Your for() loop first shoves the frame/sample processing off to the high-priority concurrent queue. Since this is asynchronous, it returns immediately, at which point your code enqueues a request on the main thread to update the spectral view's display.
This pretty much guarantees that the spectral view will be updating the display at the same time as the background processing code is updating the spectral view's state.
There is a second issue: your code ends up parallelizing the processing of all the channels. In general, unmitigated concurrency is a recipe for slow performance. Also, you cause an update on the main thread for each channel, regardless of whether or not the processing of that channel has completed.
The code needs to be restructured. You really should split the model layer from the view layer. The model layer could either be written to be thread safe or, during processing, you can grab a snapshot of the data to be displayed and toss that at the SpectralView. Alternatively, your model layer could have an isProcessing flag that the SpectralView could key off of to know that it shouldn't be reading data.
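For the snapshot approach, here is a minimal sketch (the publishSamples:count: method and the latestSamples property are assumptions for illustration, not part of your code): copy the finished frame's samples while still on the processing queue, then hand only that immutable copy to the main thread.

// Hypothetical helper on the audio engine; "samples" is the Float32 buffer the
// DSP code has just finished filling, "count" the number of values in it.
- (void)publishSamples:(const Float32 *)samples count:(UInt32)count
{
    // Copy while still on the processing queue, so the view never reads the
    // buffer the DSP code keeps writing into.
    NSData *snapshot = [NSData dataWithBytes:samples length:count * sizeof(Float32)];

    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        specView.latestSamples = snapshot;   // assumed NSData property on SpectralView
        [specView prepareToDraw];
        [specView setNeedsDisplay];
    });
}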
This is relevant:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
At first I wanted to use -viewWill/DidAppear, but I need to know the time for all subsequent updates in all possible view controllers, so I came up with a UIView subclass that is always the root view in every controller, has an instance variable for its parent VC, and does this in -drawRect:
- (void)drawRect:(CGRect)rect
{
    if (self.parentViewController) {
        [self.parentViewController resetTime];
        [self.parentViewController performSelector:@selector(measureTimeMethod)
                                        withObject:nil
                                        afterDelay:0.0];
    }
}
Seems to kind of work but can I be certain that this gets called on the next event cycle? I am sure there must be a better way to do it.
You could (and should) use Instruments to measure timing issues in the Core Graphics routines you use to build up your user interface.
If you have heavy draw calls in your -drawRect: method, it will certainly be slower than with fewer calls. That is why Apple suggests avoiding this method if there are no continuous changes in the graphics, and even avoiding an empty drawRect. The fewer pixels you have to draw over, and the fewer layers and elements they contain for the next drawing cycle, the faster it will be. Consider shrinking your rect to the range that actually needs drawing; this speeds things up considerably. And rethink your design pattern if drawRect is really the way to go for your needs: maybe you want to compose classes that draw once at initialization and only update size and positions afterwards.
If you want a faster UI, it is recommended to use -(void)layoutSubviews instead and only update the frames of your pre-defined sublayers/UI elements, as in the sketch below.
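A minimal sketch of that idea, assuming a _barLayers array of pre-created CALayers (the name is made up for illustration): the layers are built once at initialization, and layoutSubviews only moves them, so nothing is redrawn with Core Graphics.

- (void)layoutSubviews
{
    [super layoutSubviews];

    // _barLayers is an assumed NSArray of CALayers created once in -initWithFrame:
    CGFloat barWidth = self.bounds.size.width / _barLayers.count;
    [_barLayers enumerateObjectsUsingBlock:^(CALayer *layer, NSUInteger idx, BOOL *stop) {
        layer.frame = CGRectMake(idx * barWidth, 0.0, barWidth, self.bounds.size.height);
    }];
}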
If measuring really is your thing: one measurement will only give you a rough clue about improvements, but it will still be more precise than performing a selector on top of your object model.
Instead, use:
#include <mach/mach_time.h>
mach_absolute_time();
and cache the output in a global to calculate the time difference from it.
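A rough sketch of that, with the function and variable names being mine: cache the last reading in a static and convert the tick difference to nanoseconds with mach_timebase_info.

#import <Foundation/Foundation.h>
#include <mach/mach_time.h>

static uint64_t lastTicks = 0;

static void LogElapsedTime(void)
{
    uint64_t now = mach_absolute_time();
    if (lastTicks != 0) {
        // Convert machine ticks to nanoseconds using the timebase ratio
        mach_timebase_info_data_t info;
        mach_timebase_info(&info);
        uint64_t nanos = (now - lastTicks) * info.numer / info.denom;
        NSLog(@"elapsed: %.3f ms", nanos / 1e6);
    }
    lastTicks = now;
}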
Traversing layout code in your -drawRect: will give you wrong timing anyway.
drawRect: is for drawing. In other words, you would place your if statement in -(void)layoutSubviews or -(void)layoutSublayers instead.
Since on iOS your UI normally runs on the main thread, you can measure before and after the allocation and initialization of the subview containing what you want to examine. If you want to measure changes in drawing while the app is running, you will want to figure out which of the mentioned drawing methods is called when.
Another extremely simple solution is an NSLog(@"t"); call and watching the corresponding timestamps printed in your debugger console.
In a webrtc app there is a callback called didReceiveFrame, which is called when there is a new frame to be rendered. It passes the new frame as argument and one can extract a texture from this frame. However, if for some reason the main thread is delayed (think breakpoint, device rotation, still busy rendering, etc...) then this callback is called separately for every 'missed' frame. This effectively adds a delay between what the camera captures and what is rendered.
How can I make sure that didReceiveFrame is only called with the latest frame? Or how can I see if the passed frame is the latest (so I can skip the function)?
I never found a way to do this with the library, but I implemented my own simple check using an NSDate. I store a property in the renderer class like this:
@property (strong, nonatomic) NSDate* timeOfLastFrame;
I initialise it in my viewDidLoad like this:
self.timeOfLastFrame = [[NSDate alloc] initWithTimeIntervalSince1970:0.0];
and I use it in my render method like this:
- (void)renderFrame:(RTCI420Frame *)frame {
    if ([[NSDate date] timeIntervalSinceDate:self.timeOfLastFrame] > 1.0f / 24.0f) { // or whatever fps you want
        self.timeOfLastFrame = [NSDate date];
        // do rendering
    }
}
The 1.0f/24.0f part determines the max fps. In this case, it won't render more than 24 frames per second, which is pretty decent for a video stream and low enough to notice a performance increase. If the renderFrame method is called before 1/24th of a second has passed it simply doesn't do anything.
Note: Doing it like this means the rendering won't be at a steady 24 fps, because it doesn't skip every xth frame; instead it relies on when the render method is called and simply drops a frame if it comes too soon after the previous one. I don't notice it in my app, but it's something to keep in mind, especially when using a low max fps.
I am working on a tracking algorithm and one of the earliest steps it does is background subtraction. The algorithm gets a series of frames that represent the video with a moving object and static background. The object is in every frame.
In my first version of this process I computed a median image from all the frames and got a very good background scene approximation. Then I subtracted the resulting image from every frame in video sequence to get foreground (moving objects).
The above method worked well, but then I tried to replace it by using OpenCV's background subtractors MOG and MOG2.
What I do not understand is how these two classes perform the "precomputation of the background model"? As far as I understood from dozens of tutorials and documentations, these subtractors update the background model every time I use the apply() method and return a foreground mask.
But this means that the first result of the apply() method will be a blank mask, and the later images will contain a ghost of the object's initial position (see example below):
What am I missing? I googled a lot and seem to be the only one with this problem... Is there a way to run background precomputation that I am not aware of?
EDIT: I found a "trick" to do it: before using OpenCV's MOG or MOG2 I first compute a median background image, then I use it in the first apply() call. The following apply() calls produce the foreground mask without the initial position ghost.
But still, is this how it should be done or is there a better way?
If your moving objects are present right from the start, all updating background estimators will place them in the background initially. A solution to that is to initialize your MOG on all frames and then run MOG again with this initialization (as with your median estimate). Depending on the number of frames, you might want to adjust the update parameter of MOG (learningRate) to make sure it's fully initialized (if you have 100 frames, it probably needs to be higher, at least 0.01):
void BackgroundSubtractorMOG::operator()(InputArray image, OutputArray fgmask, double learningRate=0)
If your moving objects are not present right from the start, make sure that MOG is fully initialized when they appear by setting a high enough value for the update parameter learningRate.
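A hedged sketch of that two-pass idea, written against the OpenCV 2.4 C++ API quoted above (so it would live in a C++ or Objective-C++ file); the helper name and the 0.01 learning rate are illustrative only.

#include <opencv2/video/background_segm.hpp>
#include <vector>

// "frames" is assumed to hold the whole sequence up front.
void subtractBackground(const std::vector<cv::Mat>& frames,
                        std::vector<cv::Mat>& masks)
{
    cv::BackgroundSubtractorMOG mog;
    cv::Mat fgmask;

    // Pass 1: feed every frame once, only to build up the background model.
    for (size_t i = 0; i < frames.size(); ++i)
        mog(frames[i], fgmask, 0.01);

    // Pass 2: run again with the model frozen (learningRate = 0) so the
    // masks no longer contain the ghost of the object's initial position.
    masks.clear();
    for (size_t i = 0; i < frames.size(); ++i) {
        mog(frames[i], fgmask, 0);
        masks.push_back(fgmask.clone());
    }
}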
I'm currently developing an iPad app which uses OpenGL to draw some very simple (no more than 1000 or 2000 vertices) rotating models in multiple OpenGL views.
There are currently 6 views in a grid, each one running its own display link to update the drawing. Due to the simplicity of the models, it's by far the simplest method to do this; I don't have the time to code a full OpenGL interface.
Currently it is doing well performance-wise, but there are some annoying glitches. The first 3 OpenGL views display without problems, and the last 3 only display a few triangles (while still retaining the ability to rotate the model). Also, there are some cases where the glDrawArrays call goes straight into EXC_BAD_ACCESS (especially on the simulator), which tells me there is something wrong with the buffers.
What I checked (as well as double- and triple-checked) is:
Buffer allocation seems OK
All resources are freed on dealloc
The instruments show some warnings, but nothing that seems related
I'm thinking it's probably related to my having multiple views drawing at the same time, so is there any known thing I should have done there? Each view has its own context, but perhaps I'm doing something wrong with that...
Also, I just noticed that in the simulator, the afflicted views are flickering between the right drawing with all the vertices and the wrong drawing with only a few.
Anyway, if you have any ideas, thanks for sharing!
Okay, I'm going to answer my own question since I finally found what was going on. It was a small missing line that was causing all those problems.
Basically, to have multiple OpenGL views displayed at the same time, you need:
Either the same context for every view. Here, you have to take care not to draw from multiple threads at the same time (i.e. lock the context somehow, as explained in this answer), and you have to re-bind the frame- and render-buffers on every frame.
Or a different context for each view. Then you have to re-set the current context on each frame, because other display links could (and would, as in my case) cause your OpenGL calls to use the wrong data. Also, there is no need to re-bind the frame- and render-buffers, since your context is preserved.
Also, call glFlush() after each frame to tell the GPU to finish rendering the frame fully.
In my case (the second one), the code for rendering each frame (on iOS) looks like this:
- (void)drawFrame:(CADisplayLink *)displayLink {
    // Set current context, assuming _context
    // is the class ivar for the OpenGL context
    [EAGLContext setCurrentContext:_context];

    // Clear whatever you want
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

    // Do matrix stuff
    ...
    glUniformMatrix4fv(...);

    // Set your viewport
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);

    // Bind object buffers
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glVertexAttribPointer(_glVertexPositionSlot, 3, ...);

    // Draw elements
    glDrawArrays(GL_TRIANGLES, 0, _currentVertexCount);

    // Discard unneeded depth buffer
    const GLenum discard[] = {GL_DEPTH_ATTACHMENT};
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discard);

    // Present render buffer
    [_context presentRenderbuffer:GL_RENDERBUFFER];

    // Unbind and flush
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glFlush();
}
EDIT
I'm going to edit this answer, since I found out that running multiple CADisplayLinks can cause some issues. You have to make sure to set the frameInterval property of your CADisplayLink instance to something other than 0 or 1. Otherwise, the run loop will only have time to call the first render method, and then it will call it again, and again. In my case, that was why only one object was moving. Now it's set to 3 or 4 frames, and the run loop has time to call all the render methods.
This applies only to the application running on the device. The simulator, being very fast, doesn't care about such things.
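For reference, a small sketch of the display link setup described in this edit (_displayLink is an assumed ivar and drawFrame: is the per-frame method shown above):

- (void)startDisplayLink
{
    _displayLink = [CADisplayLink displayLinkWithTarget:self
                                               selector:@selector(drawFrame:)];
    // Every 3rd vsync is roughly 20 fps on a 60 Hz display, which leaves the
    // run loop enough time to service all six views' display links.
    _displayLink.frameInterval = 3;
    [_displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                       forMode:NSRunLoopCommonModes];
}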
It gets tricky when you want multiple UIViews that are OpenGL views;
on this site you should be able to read all about it: Using multiple openGL Views and uikit
My OpenGL ES classes and code are derived from Apple's GLES2Sample code sample. I use them to show 3D objects rotating smoothly around one axis, at what I expect to be a constant rotation speed. Currently, the app uses a frame interval of 1, and each time the OpenGL view is drawn (in the EAGLView's drawView method), I rotate the model by a certain angle.
In practice, this gives decent results, but not perfect: during the rotation, when large parts of the object go out of sight, rendering becomes faster and the rotation thus does not have constant angular velocity. My question is: how do I make it smoother?
Although I welcome all suggestions, I already have one idea: measure the rendering FPS every half-second, and adjust the rotation angle at each redraw based on that. It does not sound very good, however, so: what do you think of this, and how would you handle the issue?
I tend to use a CADisplayLink to trigger new frames and a simple time calculation within the frame request to figure out how far to advance my logic.
Suppose you have a member variable, timeOfLastDraw, of type NSTimeInterval. You want your logic to tick at, say, 60 beats per second. Then (with a whole bunch of variables to make the code more clear):
- (void)displayLinkDidTick
{
    // get the time now
    NSTimeInterval timeNow = [NSDate timeIntervalSinceReferenceDate];

    // work out how many quantums (rounded down to the nearest integer) of
    // time have elapsed since last we drew
    NSTimeInterval timeSinceLastDraw = timeNow - timeOfLastDraw;
    NSTimeInterval desiredBeatsPerSecond = 60.0;
    NSTimeInterval desiredTimeInterval = 1.0 / desiredBeatsPerSecond;
    NSUInteger numberOfTicks = (NSUInteger)(timeSinceLastDraw / desiredTimeInterval);

    if (numberOfTicks > 8)
    {
        // if we're more than 8 ticks behind then just do 8 and catch up
        // instantly to the correct time
        numberOfTicks = 8;
        timeOfLastDraw = timeNow;
    }
    else
    {
        // otherwise, advance timeOfLastDraw according to the number of quantums
        // we're about to apply. Don't update it all the way to now, or we'll lose
        // part quantums
        timeOfLastDraw += numberOfTicks * desiredTimeInterval;
    }

    // do the number of updates
    while (numberOfTicks--)
        [self updateLogic];

    // and draw
    [self draw];
}
In your case, updateLogic would apply a fixed amount of rotation. If constant rotation is really all you want then you could just multiply the rotation constant by numberOfTicks, or even skip this whole approach and do something like:
glRotatef([NSDate timeIntervalSinceReferenceDate] * rotationsPerSecond * 360.0f, 0, 0, 1);
instead of keeping your own variable. In anything but the most trivial case though, you usually want to do a whole bunch of complicated things per time quantum.
If you don't want the rendering speed to vary, and you're running open-loop (i.e. at full tilt) with a CADisplayLink or other animation timer, there are two things you can do:
1) Optimize your code so it never wanders below 60 FPS - the maximum frame rate for the device under any circumstances with your model.
2) At run time, measure the frame rate of your application through a few complete cycles and set the draw rate such that it will never exceed your lowest measured drawing performance (a rough sketch follows below).
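A rough sketch of option 2 (the _worstFrameTime and _sampleCount ivars are placeholders, not from the question): time a couple of seconds' worth of display link callbacks, then pin the frame interval to the slowest frame you observed.

- (void)calibrationTick:(CADisplayLink *)link
{
    static CFTimeInterval lastTimestamp = 0;

    if (lastTimestamp > 0) {
        _worstFrameTime = MAX(_worstFrameTime, link.timestamp - lastTimestamp);
    }
    lastTimestamp = link.timestamp;

    if (++_sampleCount >= 120) {   // roughly two seconds of samples at 60 Hz
        // One frameInterval unit is one 60 Hz vsync; round up so the draw
        // rate never exceeds the slowest measured drawing performance.
        link.frameInterval = MAX(1, (NSInteger)ceil(_worstFrameTime * 60.0));
        // Calibration done; from here on, drive the real draw method at
        // this fixed rate (switching over is not shown in this sketch).
    }
}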
I believe adjusting your rotation angle isn't the right approach to this problem, because you're now trying to keep two parameters in step with one another (draw rate and rotation rate) rather than simply pinning down a single parameter: the draw rate.
Cheers.