Is there a way to program games without depending on frame rate? - ios

I'm programming an iOS game and I use the update method for a lot of things. It is called at the game's refresh rate (for the moment 60 times per second), but the problem is that if the frame rate drops (for example because of a notification, or any behavior in the game that lowers the fps a little when called), then the bugs come...
A quick example: if I have an animation of 80 pictures, 40 for the jump up and 40 for the fall, I would need 1.2 seconds to run the animation, so if the jump takes 1.2 seconds it would be OK, the animation would run. But if my fps drops to 30, then the animation would get cut off, because it would now need 2.4 seconds while the jump still takes 1.2 seconds. This is only a quick example; there are a lot of unexpected behaviors in the game if the frame rate drops. So my question is: do game developers really depend that much on the frame rate, or is there a way to avoid those fps bugs (another way to program, or any trick)?

Base your timing on the time, rather than the frame count. So, save a time stamp on each frame, and on the next frame, calculate how much time has elapsed, and based on your ideal frame rate, figure out how many frames of animation to advance. At full speed, you shouldn’t notice a difference, and when the frame rate drops, your animations may get jerky but the motion will never get more than 1 frame behind where it should be.
Edit: as uliwitness points out, be careful what time function you use, so you don’t encounter issues when, for example, the computer goes to sleep or the game pauses.
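For example, a minimal sketch of that idea in Swift (illustrative only; names like idealFPS and frameCount are assumptions, not from the answer):

import QuartzCore

final class TimeBasedAnimation {
    let idealFPS: Double = 60            // frame rate the animation was authored for
    let frameCount = 80                  // e.g. 40 jump frames + 40 fall frames
    private var startTime: CFTimeInterval?

    // Call once per rendered frame, e.g. from your CADisplayLink/update callback.
    func currentFrameIndex(now: CFTimeInterval = CACurrentMediaTime()) -> Int {
        if startTime == nil { startTime = now }
        let elapsed = now - startTime!            // seconds since the animation began
        let frame = Int(elapsed * idealFPS)       // ideal frames elapsed, regardless of actual fps
        return min(frame, frameCount - 1)         // never run past the last frame
    }
}

Even if the real frame rate halves, the index is computed from elapsed time, so the animation stays in sync with the 1.2-second jump; it just skips images instead of slowing down.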

Always use the delta value in your update method. This is platform and engine independent: multiply any speed or change value by the delta value (the time interval between the current and the previous frame).
In the case of the animation, one way to fix the issue is to advance the animation counter by delta multiplied by the expected frame rate (the inverse of the expected frame interval), and then round that value to get the correct image for the animation.
// currentFrame is a float ivar set to 0 at the beginning of the animation.
currentFrame = currentFrame + delta * 60.0;   // advance by the number of "ideal" 60 fps frames that elapsed
int imageIndex = (int)roundf(currentFrame);   // nearest animation image to display
However, with Sprite Kit there is a better way to do this kind of animation, as there is a premade SKAction dealing with sprite animation.
[SKAction animateWithTextures:theTextures timePerFrame:someInterval];
With this solution you don't have to deal with timing the images at all. The engine will do that for you.
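For reference, the Swift equivalent usage might look like this (a minimal sketch; the texture names and node are placeholders, not from the answer):

import SpriteKit

// Placeholder frames: in a real game these would be the jump/fall textures.
let theTextures = (0..<80).map { SKTexture(imageNamed: "jump_\($0)") }
let playerSprite = SKSpriteNode(texture: theTextures.first)

// SpriteKit times the frames for you, independent of the scene's frame rate.
let animation = SKAction.animate(with: theTextures, timePerFrame: 1.0 / 60.0)
playerSprite.run(animation)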

There's a great discussion about FPS-based and Time-based techniques here:
Why You Should be Using Time-based Animation and How to Implement it
In my opinion it's the best: very complete, easy to follow, and it provides JSFiddle examples. I translated those examples to C++/Qt.

Related

UIScrollView contentOffset not updating when changed

I'm seeing an odd issue around changing a scroll view's content offset property.
I have a CADisplayLink that calls a method every frame. This method calculates how much to adjust the content offset by to produce an auto scroll type effect.
@objc private func tick() {
    let fps = 1 / (displayLink.targetTimestamp - displayLink.timestamp)
    let relativeAutoscrollAmount = autoscrollAmount / CGFloat(fps)
    scrollView.contentOffset.x += relativeAutoscrollAmount
}
autoscrollAmount is a CGFloat property that represents how many pixels to move each second. On a 60Hz screen like an iPhone, this would mean a shift of 5/60 per invocation of that method, if this property is 5. However, the content offset never actually changes! Either visually or in memory, I can break and inspect it at any time and it's always 0!
Note that if I adjust it by 1 or greater each time, it works just fine. The animation is far too quick that way, though.
Any thoughts?
EDIT: Obviously you can't actually adjust by less than a pixel at a time, but when I was doing this previously with a constraint constant, the system just calculated how to deal with this. (I assume by only moving every few ticks).
I believe I have the answer, or at the very least, an explanation based on a theory backed by some pretty good evidence. Here we go...
In the question, I provided an example of 5/60, where 5 is the amount of pixels to move per second, and 60 is the refresh rate of my screen. This comes out at approximately 0.083, which, as I said, caused no updates to contentOffset to take place.
At this point, I assumed that the minimum value was 1 (as you can't make changes to half a pixel) but this is in fact not the case. I began experimenting with different decimal values, in the hope of finding the threshold at which the updates to contentOffset stop taking place.
After a lot of trial and error, I found that value. It is 0.167. In my head, this had absolutely no significance whatsoever; but there obviously had to be something so I set about manipulating it in various ways to try to observe a pattern of some kind.
It soon became clear that 0.167 * 6 == 1, which, although an interesting observation, again seemed to have little significance. That is, until you note that the refresh rate of the display on the iPhone X I was testing with is 60Hz, which is 10 times 6. At this point I was still stabbing blindly in the dark, but this was at least a lead that I could explore a bit.
From this, I speculated that the system evaluates changes in layers' positions either every 6ms or, perhaps more realistically, 10 times per display cycle. This supports the behaviour I am seeing insofar as, if the movement value passed in is too small (i.e. it cannot be represented in this 10-times-per-display-cycle theory), it is simply ignored.
This is quite a bold speculation, so I decided to see if I could gather evidence to support the theory. I fired up my iPad Pro, which has a 120Hz display (as opposed to the 60Hz display on my iPhone X), to see if there was a trend. Sure enough, there was. The minimum value needed to see movement was now half what it was on the 60Hz screen. Given the greater refresh rate (double, in fact) and the original assumption of 10 updates per screen cycle, I am now seeing 20 updates per screen cycle, still one every 6ms as before. There's definitely a relationship here.
Now I'd like to stress that this is all purely speculation, but at least I can sleep tonight having a good idea as to why this is happening! I'd love to hear others' thoughts, too.

iOS: dynamically slow down the playback of a video, with a continuous value

I have a problem with the iOS SDK. I can't find an API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video, from 50% to 150%, according to the slider value.
So far, I have only succeeded in changing the speed of the video with discrete values, and by re-encoding the video (in order to do that, I used the AVMutableComposition APIs).
Do you know if it is possible to change the speed continuously, and without re-encoding?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback, 0 for stopped, and can be set to other values but will probably round to the nearest discrete value it is capable of, such as 2:1, 3:2, 5:4 for faster speeds, and 1:2, 2:3 and 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1) even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25X (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds. They were found by experimentation. They make some sense: since the hardware displays video frames at a fixed rate (e.g. 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed. But 105% is harder, requiring you to drop 1 out of every 21 frames. Especially if this is done in hardware, you can see why they might have limited it to certain multiples.
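As an illustration of wiring a slider to that property (a minimal sketch, not from the original answer; the URL and slider range are placeholders), keeping in mind that the effective rate may still be rounded to one of the discrete values above:

import AVFoundation
import UIKit

final class PlaybackSpeedViewController: UIViewController {
    // Placeholder URL; in a real app this would be the item you are already playing.
    let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))

    // Slider configured elsewhere with minimumValue = 0.5 and maximumValue = 1.5.
    @IBAction func sliderChanged(_ sender: UISlider) {
        // Changing rate does not require re-encoding, but the player may snap
        // to the nearest discrete rate it can actually honor.
        player.rate = sender.value
    }
}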

Cocos2dx 2.1.4 Game, Continuous FPS drop and never recovers

I am creating a game using cocos2dx 2.1.4. Its FPS drops continuously and never recovers.
Please find the details below.
Background on the way I am doing things:
It's a game about scrolling some shapes down the screen; each shape is made up of square blocks. I have 7 kinds of blocks, all loaded in a sprite sheet, and using blocks from this sprite sheet I create a shape.
A level file consists of these shapes. I load two levels at the same time, one onscreen and one offscreen, to make the scrolling seamless. To load two levels at the same time I use two different CCSpriteBatchNodes:
// Load the shared sprite frame definitions once.
CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("56blackglow.plist");

// One batch node per level, so each level can be scrolled and replaced independently.
_gameBatchNode1 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode1->retain();
this->addChild(_gameBatchNode1, kForeground);

_gameBatchNode2 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode2->retain();
this->addChild(_gameBatchNode2, kForeground);
The problem I am facing is that as I keep playing the game, the frame rate drops continuously, from 60 fps down to 10 fps, and never recovers (or might recover eventually, but I watched for 20 minutes and that's too long to wait).
My observations:
1> I used the Time Profiler; it shows that the maximum time is spent in draw() calls. Also, if I play the game very fast, the peak time in the track increases. That should be fine, since I am giving it more work to do, but once a peak is reached it stays at approximately that height, even if I leave the game idle. Is that normal? It seems to me it should return to its normal level once the current work is done.
2> At some point I thought it was happening because I use two batch nodes and removing their children immediately on a user touch might be slowing things down, but even then, after the children are removed it should run normally again. To give an idea: is it OK to remove 10 children from a batch node at once? Some people say it's a very slow process. Just to check whether this was causing the problem, I did the following:
Instead of removing the children, I just set their visibility to false. But the FPS still drops and never recovers.
Please share your thoughts on this.
Though sprite batch nodes are generally quite good for drawing a lot of elements efficiently, I think they are best used for static or not-so-dynamic elements. In your case, if you have a lot of elements which have gone off screen but are still alive, the draw() function still has to make some checks for them, thus hogging your performance (even if you call setVisible(false) explicitly, each child still needs to be checked).
In your case I think it would be better to simply add new shapes outside of the screen via some time-based function, and once they scroll out of view, remove them from the scene entirely, without using batch nodes.
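(Not cocos2d-x code, but to illustrate that spawn-offscreen / remove-when-out-of-view pattern in iOS terms, here is a rough SpriteKit/Swift sketch; the spawn interval, sizes and colors are made up.)

import SpriteKit

final class ScrollingShapesScene: SKScene {
    private var lastUpdateTime: TimeInterval = 0
    private var timeSinceLastSpawn: TimeInterval = 0
    let spawnInterval: TimeInterval = 1.0      // spawn a new shape every second (placeholder)
    let scrollSpeed: CGFloat = 120             // points per second (placeholder)

    override func update(_ currentTime: TimeInterval) {
        let delta = lastUpdateTime == 0 ? 0 : currentTime - lastUpdateTime
        lastUpdateTime = currentTime
        timeSinceLastSpawn += delta

        // Time-based spawning: add a new shape just above the visible area.
        if timeSinceLastSpawn >= spawnInterval {
            timeSinceLastSpawn = 0
            let shape = SKSpriteNode(color: .red, size: CGSize(width: 56, height: 56))
            shape.position = CGPoint(x: size.width / 2, y: size.height + shape.size.height)
            shape.name = "shape"
            addChild(shape)
        }

        // Scroll shapes down and remove them as soon as they leave the screen,
        // so dead nodes never pile up in the draw pass.
        enumerateChildNodes(withName: "shape") { node, _ in
            node.position.y -= self.scrollSpeed * CGFloat(delta)
            if node.position.y < -node.frame.height {
                node.removeFromParent()
            }
        }
    }
}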
Just found that with every touch I am adding 8 new sprites to the layer, and they are added every time I touch. So over time I am giving it more and more work to do. This is the problem.
Actually, I wanted to replace the sprites at 8 places on each touch, but the way I was doing it every time was:
// Creates (and adds) a brand new sprite on every touch - this is what piles up work.
_colorBottom1 = CCSprite::createWithSpriteFrameName(png[0]);
this->addChild(_colorBottom1, kForeground);
_colorBottom1->setPosition(ccp(_colorPanelLeftPad * _blockWidth, _blockWidth));
This was causing a new sprite to be added with every touch.
But it should have been (replace the texture instead of creating the sprite again):
CCSpriteFrame *frame1 = CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(png0);
_colorBottom1->setDisplayFrame(frame1);   // reuse the existing sprite, just swap its frame

How to discover if OpenGL ES frame rate is stuttering - without Instruments?

A friend just asked me this interesting question and I had no answer to this.
He's making a game and sometimes he experiences lags in the frame rate. As if 10 or more frames get dropped.
The run loop function is called by CADisplayLink.
Is there a way to tell programmatically if the frame rate was lagging? I'd just measure the time in the run loop function and check if it's bigger than it's supposed to be; and if it is, remember that there was a lag.
Could be useful to test on various devices on the go. How would you go about tracking this without being connected to Xcode?
The proper way is with a delta (difference between lastTime and now), something like this:
CFTimeInterval time = CACurrentMediaTime();
if (lastTime == 0) {
    delta = 0.0;              // first frame: no previous timestamp yet
} else {
    delta = time - lastTime;  // seconds since the previous frame
}
lastTime = time;
But he should use this delta for a lot more stuff in the game. Frame rate is always a fickle thing, so anything time-sensitive should be scaled by the delta.
Say you want to update the position of an object which has a speed; it should be something like this:
objectXPos += delta / targetDelta * objectXSpeed;
Here targetDelta is the frame interval at the ideal frame rate; 60 fps would give you a targetDelta of 1/60 ≈ 0.0167 s.
This way the game won't be affected by poor or better performance.
You should also implement a disaster mode: say the bloody thing stopped for a second or more, you can opt not to update the game logic at all for that frame, which is better than everything jumping ahead magically for the user. How long is too long depends on the type of game; a second might be overkill, and maybe 0.033 s is already too much (half the frame rate).
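Putting the delta bookkeeping, the scaling, and that cutoff together, a minimal Swift sketch might look like this (maxDelta and the speed value are made-up numbers):

import QuartzCore

var lastTime: CFTimeInterval = 0
var objectXPos: Double = 0
let targetDelta: CFTimeInterval = 1.0 / 60.0   // ideal frame interval (60 fps)
let maxDelta: CFTimeInterval = 0.1             // "disaster" cutoff: skip huge stalls

// Called once per frame, e.g. from a CADisplayLink callback.
func tick() {
    let now = CACurrentMediaTime()
    let delta = lastTime == 0 ? 0 : now - lastTime
    lastTime = now

    // Disaster mode: if we stalled badly, don't advance the simulation this frame.
    guard delta <= maxDelta else { return }

    // Scale the per-ideal-frame speed by how much time actually elapsed.
    let objectXSpeed = 2.0                      // points per ideal frame (made-up value)
    objectXPos += objectXSpeed * (delta / targetDelta)
}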
Simple. Increment an integer frame-count variable on each frame render. Set a repeating NSTimer for every second or every ten seconds. In the timer callback, sample how much the frame counter has advanced in that 1 or 10 seconds and display the min/max/average in a hidden debug view, or append it to a log file that can be fetched using Document sharing.
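A rough sketch of that counter in Swift (the one-second interval and the print destination are placeholders; a real build would write to a debug label or a shareable log file):

import UIKit

final class FPSMonitor {
    private var frameCount = 0
    private var timer: Timer?

    func start() {
        // Count rendered frames via CADisplayLink.
        let link = CADisplayLink(target: self, selector: #selector(frameRendered))
        link.add(to: .main, forMode: .common)

        // Sample and reset the counter once per second.
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            print("frames in the last second: \(self.frameCount)")
            self.frameCount = 0
        }
    }

    @objc private func frameRendered() {
        frameCount += 1
    }
}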

How can I ensure the correct frame rate when recording an animation using DirectShow?

I am attempting to record an animation (computer graphics, not video) to a WMV file using DirectShow. The setup is:
A Push Source that uses an in-memory bitmap holding the animation frame. Each time FillBuffer() is called, the bitmap's data is copied over into the sample, and the sample is timestamped with a start time (frame number * frame length) and duration (frame length). The frame rate is set to 10 frames per second in the filter.
An ASF Writer filter. I have a custom profile file that sets the video to 10 frames per second. It's a video-only filter, so there's no audio.
The pins connect, and when the graph is run, a wmv file is created. But...
The problem is it appears DirectShow is pushing data from the Push Source at a rate greater than 10 FPS. So the resultant wmv, while playable and containing the correct animation (as well as reporting the correct FPS), plays the animation back several times too slowly because too many frames were added to the video during recording. That is, a 10 second video at 10 FPS should only have 100 frames, but about 500 are being stuffed into the video, resulting in the video being 50 seconds long.
My initial attempt at a solution was just to slow down the FillBuffer() call by adding a sleep() for 1/10th second. And that indeed does more or less work. But it seems hackish, and I question whether that would work well at higher FPS.
So I'm wondering if there's a better way to do this. Actually, I'm assuming there's a better way and I'm just missing it. Or do I just need to smarten up the manner in which FillBuffer() in the Push Source is delayed and use a better timing mechanism?
Any suggestions would be greatly appreciated!
I do this with threads. The main thread is adding bitmaps to a list and the recorder thread takes bitmaps from that list.
Main thread
Animate your graphics at time T and render bitmap
Add bitmap to renderlist. If list is full (say more than 8 frames) wait. This is so you won't use too much memory.
Advance T by the delta time corresponding to the desired frame rate
Render thread
When a frame is requested, pick and remove a bitmap from the renderlist. If list is empty wait.
You need a thread-safe structure such as TThreadList to hold the bitmaps. It's a bit tricky to get right, but your current approach is guaranteed to give you timing problems.
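The pattern itself is language-agnostic; purely as an illustration (sketched in Swift here, not DirectShow code), the bounded thread-safe frame list described above might look like this:

import Foundation

// A minimal bounded, thread-safe queue in the spirit of the renderlist above.
// The producer (animation) thread blocks when it is full; the recorder thread
// blocks when it is empty.
final class FrameQueue<Frame> {
    private var frames: [Frame] = []
    private let capacity: Int
    private let condition = NSCondition()

    init(capacity: Int = 8) { self.capacity = capacity }

    // Main/animation thread: wait while the queue is full so memory stays bounded.
    func push(_ frame: Frame) {
        condition.lock()
        while frames.count >= capacity { condition.wait() }
        frames.append(frame)
        condition.signal()
        condition.unlock()
    }

    // Recorder thread (e.g. inside FillBuffer): wait while the queue is empty.
    func pop() -> Frame {
        condition.lock()
        while frames.isEmpty { condition.wait() }
        let frame = frames.removeFirst()
        condition.signal()
        condition.unlock()
        return frame
    }
}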
I do just this for my recorder application (www.videophill.com), for purposes of testing the whole thing.
I use the Sleep() method to delay the frames, but I take great care to ensure that the timestamps of the frames are correct. Also, when Sleep()ing from frame to frame, try to use 'absolute' time differences, because Sleep(100) will sleep for about 100 ms, not exactly 100 ms.
If it won't work for you, you can always go for IReferenceClock, but I think that's overkill here.
So:
// Anchor every frame time to an absolute start time so Sleep() jitter doesn't accumulate.
DateTime start = DateTime.Now;
int frameCounter = 0;
while (wePush)
{
    FillDataBuffer(...);
    frameCounter++;

    // The next frame is due at start + frameCounter * frame length (100 ms here for 10 fps).
    DateTime nextFrameTime = start.AddMilliseconds(frameCounter * 100);
    int delay = (int)(nextFrameTime - DateTime.Now).TotalMilliseconds;
    if (delay > 0)
        Sleep(delay);
}
EDIT:
Keep in mind: IWMWriter is time-insensitive as long as you feed it SAMPLES that are properly time-stamped.
