In a WebRTC app there is a callback called didReceiveFrame, which is called whenever there is a new frame to be rendered. It passes the new frame as an argument, and one can extract a texture from this frame. However, if for some reason the main thread is delayed (think breakpoint, device rotation, still busy rendering, etc.), then this callback is called separately for every 'missed' frame. This effectively adds a delay between what the camera captures and what is rendered.
How can I make sure that didReceiveFrame is only called with the latest frame? Or how can I see if the passed frame is the latest (so I can skip the function)?
I never found a way to do this with the library, but I implemented my own simple check using an NSDate. I store a property in the renderer class like this:
@property (strong, nonatomic) NSDate *timeOfLastFrame;
I initialise it in my viewDidLoad like this:
self.timeOfLastFrame = [[NSDate alloc] initWithTimeIntervalSince1970:0.0];
and I use it in my render method like this:
- (void)renderFrame:(RTCI420Frame *)frame {
    if ([[NSDate date] timeIntervalSinceDate:self.timeOfLastFrame] > 1.0f/24.0f) { // or whatever fps you want
        self.timeOfLastFrame = [NSDate date];
        // do rendering
    }
}
The 1.0f/24.0f part determines the max fps. In this case, it won't render more than 24 frames per second, which is pretty decent for a video stream and low enough to notice a performance increase. If the renderFrame method is called before 1/24th of a second has passed it simply doesn't do anything.
Note: Doing it like this means the rendering won't be at a steady 24 fps, because it doesn't skip every x frames; instead, it relies on when the render method happens to be called and simply drops a frame if it came too soon after the previous one. I don't notice it in my app, but it's something to keep in mind, especially when using a low max fps.
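If you prefer a monotonic clock over NSDate (whose value can jump when the wall clock changes), the same drop-if-too-soon check can be expressed with CACurrentMediaTime(). This is only a sketch of that variant; the lastFrameTime property name is illustrative:

#import <QuartzCore/QuartzCore.h>

// In the interface:
@property (nonatomic) CFTimeInterval lastFrameTime;

// In the renderer:
- (void)renderFrame:(RTCI420Frame *)frame {
    CFTimeInterval now = CACurrentMediaTime();
    if (now - self.lastFrameTime > 1.0 / 24.0) { // same max-fps idea as above
        self.lastFrameTime = now;
        // do rendering
    }
}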
At first I wanted to use -viewWillAppear:/-viewDidAppear:, but I need to know the time for all subsequent updates in every possible view controller. So I came up with a UIView subclass that is always the root view of every controller, holds an instance variable pointing to its parent view controller, and does this in -drawRect:
- (void)drawRect:(CGRect)rect
{
    if (self.parentViewController) {
        [self.parentViewController resetTime];
        [self.parentViewController performSelector:@selector(measureTimeMethod)
                                        withObject:nil
                                        afterDelay:0.0];
    }
}
Seems to kind of work but can I be certain that this gets called on the next event cycle? I am sure there must be a better way to do it.
You could (and should) use Instruments to measure the timing of the Core Graphics routines that build up your user interface.
If you have heavy draw calls in your -drawRect: method, it will certainly be slower than with fewer calls. This is why Apple suggests avoiding this method when there are no continuous changes to the graphics, and even leaving drawRect out entirely rather than implementing it empty. The fewer pixels you have to draw over, and the fewer layers and elements they contain for the next drawing cycle, the faster it will be. Consider shrinking the rect you draw to the area that actually needs updating; that alone speeds things up considerably. And rethink your design pattern: is drawRect really the way to go for your needs? Maybe you want to compose classes that draw once at initialization and afterwards only update size and position.
If you want a faster UI, it is recommended to use -(void)layoutSubviews instead and update the frames of your pre-built layout/sublayer UI elements, as sketched below.
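A minimal sketch of that idea, assuming a view that owns a pre-created sublayer (barLayer and progress are illustrative names): nothing is redrawn in -drawRect:; only the sublayer's frame is updated.

- (void)layoutSubviews
{
    [super layoutSubviews];
    // Reposition/resize the pre-built layer instead of redrawing its contents.
    self.barLayer.frame = CGRectMake(0.0,
                                     0.0,
                                     CGRectGetWidth(self.bounds) * self.progress,
                                     CGRectGetHeight(self.bounds));
}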
If measuring really is your thing: a single measurement will only give you a rough clue about improvements, but it is more precise than performing a selector on top of your object model.
Instead, use

#include <mach/mach_time.h>

mach_absolute_time();

and cache the output in a global so you can calculate a time difference against it.
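A minimal sketch of such a measurement, assuming the mach time units are converted to milliseconds via mach_timebase_info (the variable names are illustrative):

#include <mach/mach_time.h>

static uint64_t gStartTime;

// Just before the code you want to measure:
gStartTime = mach_absolute_time();

// ... the work being measured ...

// Just after it:
uint64_t elapsed = mach_absolute_time() - gStartTime;
mach_timebase_info_data_t timebase;
mach_timebase_info(&timebase);
double elapsedMs = (double)elapsed * timebase.numer / timebase.denom / 1e6;
NSLog(@"elapsed: %.3f ms", elapsedMs);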
Running layout code in your -drawRect: will give you wrong timing anyway.
-drawRect: is for drawing. In other words, you would place your if statement in -(void)layoutSubviews or -(void)layoutSublayers.
Since on iOS the UI always runs on the main thread, you can measure before and after allocating and initializing the subview containing what you want to examine. If you want to measure changes in drawing while the app is running, you will need to figure out which of the drawing methods mentioned above is called, and when.
Another extremely simple solution is an NSLog(@"t"); and watching the corresponding time prints in your debugger console.
Here is the context:
I've been developing an audio-related app for a while now and I sort of hit a wall and not sure what to do next.
I've recently implemented in the app a custom class that plots an FFT display of the audio output. This class is a subclass of UIView, meaning that every time I need to plot a new FFT update I need to call setNeedsDisplay on my instance of the class with the new sample values.
As I need to plot a new FFT for every frame (frame ≈ 1024 samples), the display function of my FFT view gets called a lot (every 1024 / sampleRate ≈ 0.0232 seconds). The samples themselves are computed at 44,100 per second. I am not really experienced with managing threading in iOS, so I read a little about it, and here is how I have done it.
How it has been done: I have an NSObject subclass, "AudioEngine.h", that takes care of all the DSP processing in my app, and this is where I set up my FFT display. All the sample values are calculated and assigned to my FFT subclass inside a dispatch_get_global_queue block, as the values need to be updated in the background constantly. setNeedsDisplay is called once the sample index has reached the maximum frame number, and this is done inside a dispatch_async(dispatch_get_main_queue()) block.
In "AudioEngine.m"
for (k = 0; k < nchnls; k++) {
    buffer = (SInt32 *) ioData->mBuffers[k].mData;
    if (cdata->shouldMute == false) {
        buffer[frame] = (SInt32) lrintf(spout[nsmps++] * coef);
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            @autoreleasepool {
                // FFT display init here as a singleton
                SpectralView *specView = [SpectralView sharedInstance];
                // Here is created a pointer to the "samples" property of my subclass
                Float32 *specSamps = [specView samples];
                // set the number of frames the FFT should take
                [specView setInNumberFrames:inNumberFrames];
                // scaling sample values
                specSamps[frame] = (buffer[frame] * (1. / coef) * 0.5);
            }
        });
    } else {
        // If output is muted
        buffer[frame] = 0;
    }
}

// once the number of samples has reached ksmps (vector size) we update the FFT
if (nsmps == ksmps * nchnls) {
    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        [specView prepareToDraw];
        [specView setNeedsDisplay];
    });
}
What my issue is:
I get various threading issues, especially on the main thread, such as Thread 1: EXC_BAD_ACCESS (code=1, address=0xf00000c), sometimes at app launch while viewDidLoad is being called, but also whenever I try to interact with any UI object.
The UI responsiveness becomes insanely slow, even on the FFT display.
What I think the problem is: it is definitely related to a threading issue, as you may have guessed, but I am really inexperienced with this topic. I thought about forcing every UI display update onto the main thread in order to solve the issues I have, but again, I am not even sure how to do that properly.
Any input/insight would be a huge help.
Thanks in advance!
As written, your SpectralView* needs to be fully thread safe.
Your for() loop first shoves frame/sample processing off to the high-priority concurrent queue. Since this is asynchronous, it returns immediately, at which point your code enqueues a request on the main thread to update the spectral view's display.
This pretty much guarantees that the spectral view is going to have to be updating the display simultaneously with the background processing code also updating the spectral view's state.
There is a second issue; your code is going to end up parallelizing the processing of all channels. In general, unmitigated concurrency is a recipe for slow performance. Also, you're going to cause an update on the main thread for each channel, regardless of whether or not the processing of that channel is completed.
The code needs to be restructured. You really should split the model layer from the view layer. The model layer could either be written to be thread safe or, during processing, you can grab a snapshot of the data to be displayed and toss that at the SpectralView. Alternatively, your model layer could have an isProcessing flag that the SpectralView could key off of to know that it shouldn't be reading data.
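A minimal sketch of the snapshot approach, under the assumption that SpectralView gains a hypothetical setSampleSnapshot: setter that keeps its own copy of one frame of samples (frameSamples is likewise illustrative): the copy is made on the processing queue, and only the immutable copy crosses over to the main thread.

// On the processing queue, once a full frame of samples is ready:
NSData *snapshot = [NSData dataWithBytes:frameSamples
                                  length:inNumberFrames * sizeof(Float32)];
dispatch_async(dispatch_get_main_queue(), ^{
    SpectralView *specView = [SpectralView sharedInstance];
    [specView setSampleSnapshot:snapshot]; // hypothetical setter; the view reads only this copy
    [specView setNeedsDisplay];
});

That way the background code never writes into memory the view is reading while it draws.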
This is relevant:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
I want to show an animation in my view, so I used CADisplayLink as a timer so that it calls the update method 60 times a second (60 FPS).
But when the table view in the same superview reloads, the CADisplayLink fires only 40-50 times a second (40-50 FPS).
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
self.displayLink.paused = YES;
[self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
How can I fix it?
From the CADisplayLink class reference:
The target can read the display link’s timestamp property to retrieve the time that the previous frame was displayed. For example, an application that displays movies might use the timestamp to calculate which video frame will be displayed next. An application that performs its own animations might use the timestamp to determine where and how displayed objects appear in the upcoming frame. The duration property provides the amount of time between frames. You can use this value in your application to calculate the frame rate of the display, the approximate time that the next frame will be displayed, and to adjust the drawing behavior so that the next frame is prepared in time to be displayed.
Meaning the frame rate is determined by the system, and you should base your animation on the display link's timestamp rather than on the number of callbacks. There is no way to set the frame rate; rather, it is determined by a combination of factors (e.g. how processor-intensive the currently active work is, how graphics-intensive the UI is, etc.).
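A minimal sketch of that approach, assuming the display link's selector is changed to update: (so the link is passed in) and hypothetical lastTimestamp, angle, and rotationsPerSecond properties: the animation advances by elapsed time, so a dropped callback produces a larger step instead of a slower animation.

- (void)update:(CADisplayLink *)link
{
    if (self.lastTimestamp == 0) {
        // First tick: just record the time.
        self.lastTimestamp = link.timestamp;
        return;
    }
    CFTimeInterval elapsed = link.timestamp - self.lastTimestamp;
    self.lastTimestamp = link.timestamp;

    // 40 callbacks/s and 60 callbacks/s now produce the same on-screen speed.
    self.angle += self.rotationsPerSecond * 2.0 * M_PI * elapsed;
    [self setNeedsDisplay];
}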
We are developing a game that has 2d elements displayed with UIViews over an OpenGL ES view (specifically, we're using GLKit's GLKView) and are having problems keeping the positions perfectly in sync.
In the parent view's layoutSubviews, we're projecting 3d positions in the world onto the screen, and using those as locations for several UIView "markers" in the game. The whole game only updates in response to the user moving the camera, and the camera tells the view setNeedsLayout each time it moves.
Everything's working fine, except that the markers seem to be roughly one frame out of sync with the 3D rendering. I say roughly because (1) it's an estimate! and (2) I'm wondering whether there's potentially a multithreading issue: doesn't GLKView sync to a special screen-refresh callback or something?
Is there some way of hooking a view's layoutSubviews so that it sync's to the 3d view update?
Update: Weirdly, calling layoutIfNeeded immediately after setNeedsLayout makes the problem worse! Possibly 2 or more frames out. Really don't understand that!
What's triggering your call to layoutSubviews?
It all depends where in the RunLoop your call is triggered vs. where your GLK update call is triggered.
In general, for what you're doing, I'd aim to do your layout as a side-effect of the GLK update - i.e. don't wait for layoutSubviews to change your position.
(if you're using OpenGL, then the whole "layout" system isn't much use to you: GLK is running in its own little world of variable frame rate, and you want to make that your reference point)
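A sketch of doing the layout as a side-effect of the GLK update, assuming a GLKViewController subclass and a hypothetical -repositionMarkers method that projects the 3D positions and sets each marker view's center:

// Called by GLKViewController once per frame, just before the GLKView draws.
- (void)update
{
    [self.camera applyPendingMovement];  // hypothetical model/camera update
    [self repositionMarkers];            // move the UIView markers in the same frame
}

Because the marker frames are set in the same per-frame update that drives the GL drawing, they can no longer trail it by a layout pass.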
This is impossible to do perfectly without drawing the frames of the video with OpenGL (drawing in the same context, so that you are always sure a rendered frame contains the same point in time of the video and the animation). Everything else you do, frame-rate compensation, lag prediction, depends on chance and will always be a little bit out of sync.
I'm not familiar with UIView, but if there is any way to let it play the audio and copy the video frames to a texture, do that. Lag in the audio is much easier to compensate for, and much less noticeable to humans, than lag in the video.
My OpenGL ES classes and code are derived from Apple's GLES2Sample code sample. I use them to show 3D objects rotating smoothly around one axis, at what I expect to be a constant rotation speed. Currently, the app uses a frame interval of 1, and each time the OpenGL view is drawn (in the EAGLView's drawView method), I rotate the model by a certain angle.
In practice, this gives decent results, but not perfect: during the rotation, when large parts of the object go out of sight, rendering becomes faster and the rotation thus does not have constant angular velocity. My question is: how do I make it smoother?
Although I welcome all suggestions, I already have one idea: measure the rendering FPS every half-second, and adjust the rotation angle at each redraw based on that. It does not sound very good, however, so: what do you think of this, and how would you handle the issue?
I tend to use a CADisplayLink to trigger new frames and a simple time calculation within the frame request to figure out how far to advance my logic.
Suppose you have a member variable, timeOfLastDraw, of type NSTimeInterval. You want your logic to tick at, say, 60 beats per second. Then (with a whole bunch of variables to make the code more clear):
- (void)displayLinkDidTick
{
    // get the time now
    NSTimeInterval timeNow = [NSDate timeIntervalSinceReferenceDate];

    // work out how many quantums (rounded down to the nearest integer) of
    // time have elapsed since last we drew
    NSTimeInterval timeSinceLastDraw = timeNow - timeOfLastDraw;
    NSTimeInterval desiredBeatsPerSecond = 60.0;
    NSTimeInterval desiredTimeInterval = 1.0 / desiredBeatsPerSecond;
    NSUInteger numberOfTicks = (NSUInteger)(timeSinceLastDraw / desiredTimeInterval);

    if (numberOfTicks > 8)
    {
        // if we're more than 8 ticks behind then just do 8 and catch up
        // instantly to the correct time
        numberOfTicks = 8;
        timeOfLastDraw = timeNow;
    }
    else
    {
        // otherwise, advance timeOfLastDraw according to the number of quantums
        // we're about to apply. Don't update it all the way to now, or we'll lose
        // part quantums
        timeOfLastDraw += numberOfTicks * desiredTimeInterval;
    }

    // do the number of updates
    while (numberOfTicks--)
        [self updateLogic];

    // and draw
    [self draw];
}
In your case, updateLogic would apply a fixed amount of rotation. If constant rotation is really all you want then you could just multiply the rotation constant by numberOfTicks, or even skip this whole approach and do something like:
glRotatef([NSDate timeIntervalSinceReferenceDate] * rotationsPerSecond, 0, 0, 1);
instead of keeping your own variable. In anything but the most trivial case though, you usually want to do a whole bunch of complicated things per time quantum.
If you don't want the rendering speed to vary, and you're running open-loop (i.e. at full tilt) with CADisplayLink or another animation timer, there are two things you can do:
1) Optimize your code so it never wanders below 60 FPS - the maximum frame rate for the device under any circumstances with your model.
2) At run time, measure the frame rate of your application through a few complete cycles and set the draw rate such that it will never exceed your lowest measured drawing performance.
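A minimal sketch of option 2, assuming the CADisplayLink from the GLES2Sample-derived EAGLView is reachable as self.displayLink and you've measured that your scene always manages at least roughly 30 FPS but not a solid 60: capping the display link gives every frame the same time budget, so the apparent rotation speed stays constant.

// On a 60 Hz display, a frameInterval of 2 caps drawing at 30 FPS.
self.displayLink.frameInterval = 2;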
I believe adjusting your rotation angle isn't the right approach for this problem because you're now trying to keep two parameters jiving with one another (draw rate and rotation rate) rather than simply pinning down a single parameter: draw rate.
Cheers.