UIScrollView contentOffset not updating when changed - ios

Seeing an odd issue surrounding changing a scroll view's content offset property.
I have a CADisplayLink that calls a method every frame. This method calculates how much to adjust the content offset by to produce an auto scroll type effect.
@objc private func tick() {
    // Frames per second, derived from the time until the next frame.
    let fps = 1 / (displayLink.targetTimestamp - displayLink.timestamp)
    // Points to move this frame so that we scroll autoscrollAmount per second.
    let relativeAutoscrollAmount = autoscrollAmount / CGFloat(fps)
    scrollView.contentOffset.x += relativeAutoscrollAmount
}
autoscrollAmount is a CGFloat property that represents how many pixels to move each second. On a 60Hz screen like an iPhone's, this would mean a shift of 5/60 per invocation of that method if this property is 5. However, the content offset never actually changes, either visually or in memory; I can break and inspect it at any time and it's always 0!
Note that if I adjust it by 1 or greater each time, it works just fine. The animation is far too quick that way, though.
Any thoughts?
EDIT: Obviously you can't actually adjust by less than a pixel at a time, but when I was doing this previously with a constraint constant, the system just calculated how to deal with this. (I assume by only moving every few ticks).

I believe I have the answer, or at the very least, an explanation based on a theory backed by some pretty good evidence. Here we go...
In the question, I provided an example of 5/60, where 5 is the amount of pixels to move per second, and 60 is the refresh rate of my screen. This comes out at approximately 0.083, which, as I said, caused no updates to contentOffset to take place.
At this point, I assumed that the minimum value was 1 (as you can't make changes to half a pixel) but this is in fact not the case. I began experimenting with different decimal values, in the hope of finding the threshold at which the updates to contentOffset stop taking place.
After a lot of trial and error, I found that value. It is 0.167. In my head, this had absolutely no significance whatsoever, but there obviously had to be something, so I set about manipulating it in various ways to try to observe a pattern of some kind.
It soon became clear that 0.167 * 6 == 1, which although an interesting observation, again seemed to have little significance. That is until you note that the refresh rate of the display on my iPhone X that I was testing with is 60Hz, 10 times 6. At this point, I'm still stabbing blindly in the dark but this was at least a lead that I could explore a bit.
From this, I speculated that the system evaluates changes in layers' positions either every 6ms or, perhaps more realistically, 10 times per display cycle. This supports the behaviour I am seeing insofar as if the movement value passed in is too small (i.e. it cannot be represented in this 10-times-per-display-cycle scheme), it is simply ignored.
This is quite a bold speculation, so I decided to see if I could gather evidence to support the theory. I fired up my iPad Pro, which has a 120Hz display (as opposed to the 60Hz display on my iPhone X), to see if there was a trend. Sure enough, there was. The minimum value to see movement was now half what it was on the 60Hz screen. Given the doubled refresh rate and the original assumption of 10 updates per screen cycle, I am now seeing 20 updates per screen cycle, every 6ms, as before. There's definitely a relationship here.
Now I'd like to stress that this is all purely speculation, but at least I can sleep tonight having a good idea as to why this is happening! I'd love to hear others' thoughts, too.
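For what it's worth, the "only moving every few ticks" idea from the question's edit can be approximated by hand: accumulate the fractional movement on each tick and only apply it to contentOffset once at least a whole point has built up. This is just a sketch and not part of the original answer; pendingOffset is a hypothetical property, while displayLink, autoscrollAmount and scrollView are the same ones used in the question.

// Hypothetical workaround sketch: carry sub-point movement across ticks and
// only apply whole points to contentOffset.
private var pendingOffset: CGFloat = 0

@objc private func tick() {
    // Seconds until the next frame; autoscrollAmount is points per second.
    let frameDuration = displayLink.targetTimestamp - displayLink.timestamp
    pendingOffset += autoscrollAmount * CGFloat(frameDuration)

    // Apply only the whole-point part and keep the remainder for later ticks.
    let wholePoints = pendingOffset.rounded(.towardZero)
    if wholePoints != 0 {
        scrollView.contentOffset.x += wholePoints
        pendingOffset -= wholePoints
    }
}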

Related

Gtkmm 3.0 draw blinking shapes and use of timeouts

In a Gtk::DrawingArea I have a pixbuf showing the layout of my house, onto which I draw the measured room temperatures. I would also like to draw the state of my shutters on it with some lines. When, and only when, a shutter changes its state, I would love to make these lines blink with an interval of 1 second. I assume I would have to use a timeout triggered every second to redraw the lines for the shutters.

I am already using a timeout every 2 minutes to fetch new data from the internet to be shown on my screen. I could set up the timeout to be called every second, and then I would have to remember when my last 2-minute fetch happened in order to trigger the next one on time. Also, since my shutters are not changing state 99.9 percent of the time, I do not need blinking most of the time. It feels over-engineered to call a method every second just to make a line blink. Is there a smarter way to do this?

I could post a lot of code here, but I think that would not help anybody understand my question. I am thankful for any hint.

SceneKit scenes lag when resuming app

In my app, I have several simple scenes (each a single 80-segment sphere with a 500px by 1000px texture, rotating once a minute) displaying at once. When I open the app, everything goes smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly at around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and very low (~3%) CPU usage.
I see this behaviour when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether it also occurs on other devices or in the simulator.
Here is a sample project I pulled together to demonstrate this issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much cpu as they need to maintain good fps?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit, iOS 10.3.1) and it lags from the very beginning, at 2-3 FPS, for about a minute. Then, after some time, it starts to spin smoothly. The same happens after going to the background and back to the foreground. This could be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen, leaving the other views behind it, and set v4.showsStatistics = true.
And here is what I got:
As you can see, "Metal flush" takes about 18.3 ms per frame, and that's for only one SCNView.
According to this answer on Stack Overflow:
"So, if my interpretation is correct, that would mean that "Metal flush" measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU."
So we might suspect that the problem lies in four different SCNViews working with the GPU simultaneously.
Let's check that. Compared with the previous point, I deleted the three SCNViews behind and moved the three planets from those views into the front one, so that a single SCNView shows all four planets at once. And here is the screenshot:
As you can see, "Metal flush" now takes up to 5 ms, it does so right from the beginning, and everything runs smoothly. You may also notice that the number of triangles (top-right counter) is four times what we saw in the first screenshot.
To sum up, try combining all the SCNNodes into one SCNView and you may well get a speed-up.
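If it helps, here is a rough Swift sketch of that suggestion. It is only an illustration under a few assumptions: the helper name, texture names, positions and sizes are all placeholders, and the point is simply that one SCNView's scene hosts all four rotating spheres instead of spreading them across four views.

import SceneKit
import UIKit

// Sketch only: one SCNView hosting all four planets; texture names are placeholders.
func makeCombinedPlanetView(frame: CGRect) -> SCNView {
    let sceneView = SCNView(frame: frame)
    let scene = SCNScene()
    sceneView.scene = scene
    sceneView.showsStatistics = true          // same statistics overlay used above

    for (index, textureName) in ["earth", "mars", "venus", "jupiter"].enumerated() {
        let sphere = SCNSphere(radius: 1.0)
        sphere.segmentCount = 80              // matches the 80-segment spheres in the question
        sphere.firstMaterial?.diffuse.contents = UIImage(named: textureName)

        let node = SCNNode(geometry: sphere)
        node.position = SCNVector3(Float(index) * 3.0 - 4.5, 0.0, -10.0)
        // One full rotation per minute, as described in the question.
        node.runAction(.repeatForever(.rotateBy(x: 0, y: 2 * .pi, z: 0, duration: 60)))
        scene.rootNode.addChildNode(node)
    }
    return sceneView
}

Adding the view returned by makeCombinedPlanetView(frame:) as a single subview would then stand in for the four separate SCNViews.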
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene as suggested by Sander's answer and set the delegate on one of the SCNViews as suggested in the second answer to this question. Maybe this used to work or it worked in a different context, but it didn't work for me.
How Sander ended up helping me was the use of the performance statistics, which I didn't know existed. I enabled them for one of my scenes, and something stood out to me about performance:
In the first few seconds of running, before the app hits the dramatic frame drops, the performance display read 240 fps. "Why is that?", I thought. Who would need 240 fps on a phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in each scene triggered a "Metal flush", meaning that across the four scenes Metal was being flushed 240 times per second. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), and eventually SceneKit needs to start clearing it out, and 240 fps across 4 views is simply too much for it to keep up with (which explains why it initially gets good performance before dropping off completely).
My solution (and this is why I said "partial solution") was to set preferredFramesPerSecond for each SCNView to 15, for a total of 60 (I can also get away with 30 on my phone, but I'm not sure whether that holds up on weaker devices). Unfortunately 15 fps is noticeably choppy, but it's way better than the terrible performance I was getting originally.
Maybe in the future Apple will enable independent refresh rates per SCNView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 across all of your SCNViews.
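A minimal sketch of that TL;DR, assuming you have references to the SCNViews somewhere (the helper name and the 60 fps budget are my own, not from the answer):

import SceneKit

// Sketch: split a total frame-rate budget evenly across several SCNViews,
// e.g. four views with a 60 fps budget end up at 15 fps each.
func capFrameRates(of sceneViews: [SCNView], totalBudget: Int = 60) {
    guard !sceneViews.isEmpty else { return }
    let perView = max(1, totalBudget / sceneViews.count)
    for sceneView in sceneViews {
        sceneView.preferredFramesPerSecond = perView
    }
}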

Is there a way to program games without depending on frame rate?

I'm programming an iOS game and I use the update method for a lot of things. It is called at the game's refresh rate (60 times per second at the moment), but the problem is that if the frame rate drops (because of a notification, or any behaviour in the game that briefly lowers the FPS), the bugs appear....
A quick example: if I have an animation of 80 pictures, 40 for the jump up and 40 for the fall, I need 1.2 seconds to run the animation, so if the jump takes 1.2 seconds it works out and the animation plays through. But if my FPS drops to 30, the animation gets cut off, because it now needs 2.4 seconds to play while the jump still takes 1.2 seconds. This is only a quick example; there are a lot of unexpected behaviours in the game if the frame rate drops. So my question is: do game developers really depend this much on the frame rate, or is there a way to avoid these FPS bugs (another way to program, or some trick)?
Base your timing on the time, rather than the frame count. So, save a time stamp on each frame, and on the next frame, calculate how much time has elapsed, and based on your ideal frame rate, figure out how many frames of animation to advance. At full speed, you shouldn’t notice a difference, and when the frame rate drops, your animations may get jerky but the motion will never get more than 1 frame behind where it should be.
Edit: as uliwitness points out, be careful what time function you use, so you don’t encounter issues when, for example, the computer goes to sleep or the game pauses.
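A minimal Swift sketch of that idea (the type and names are mine, not from the answer): derive the animation frame from the elapsed time since the animation started, so dropped frames only skip images instead of stretching the jump.

import QuartzCore

// Sketch: time-based frame selection. frameCount and framesPerSecond describe
// how the animation was authored (e.g. 80 frames for the jump in the question).
struct TimedAnimation {
    let frameCount: Int
    let framesPerSecond: Double
    var startTime: CFTimeInterval = CACurrentMediaTime()

    mutating func restart() {
        startTime = CACurrentMediaTime()
    }

    // Call once per rendered frame; stays correct even when frames are dropped.
    func currentFrameIndex(at now: CFTimeInterval = CACurrentMediaTime()) -> Int {
        let elapsed = max(0, now - startTime)
        let frame = Int(elapsed * framesPerSecond)
        return min(frame, frameCount - 1)
    }
}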
Always use the delta value in your update method. This is platform and engine independent. Multiply any speed or change value by the delta value (the time interval between the current and the last frames).
In case of the animation, one way to fix the issue could be to multiply the animation counter by delta (and an inverse of the expected interval). Then round this value to get the correct image for the animation.
// currentFrame is a float ivar set to 0 at the beginning of the animation.
currentFrame = currentFrame + delta * 60.0;
int imageIndex = roundf(currentFrame);
However, with Sprite Kit there is a better way to do this kind of animation, as there is a premade SKAction dealing with sprite animation.
[SKAction animateWithTextures:theTextures timePerFrame:someInterval];
With this solution you don't have to deal with timing the images at all. The engine will do that for you.
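For completeness, a rough Swift version of the same thing, assuming 80 jump textures named "jump_1" through "jump_80" exist in the bundle (those names and the sprite are placeholders):

import SpriteKit

// Sketch: let SpriteKit drive the frame timing. Texture names are placeholders.
let jumpTextures = (1...80).map { SKTexture(imageNamed: "jump_\($0)") }
let jumpAnimation = SKAction.animate(with: jumpTextures, timePerFrame: 1.2 / 80.0)

let player = SKSpriteNode(texture: jumpTextures.first)
player.run(jumpAnimation)   // SpriteKit picks the right texture each frame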
There's a great discussion about FPS-based and Time-based techniques here:
Why You Should be Using Time-based Animation and How to Implement it
It's the best in my opinion: very complete, easy to follow, and it provides JSFiddle examples. I translated those examples to C++/Qt.
Click here to watch a video of my app running:

Cocos2d 2.0 - 3 numbers on the bottom left?

I'm having a problem that I'm not sure how to solve.
In cocos2d 2.0, the second number on the bottom left drops to a low number like 0.002 and causes lag in my game!
The second number is the 'Frames Per Second's Milliseconds', or the amount of time it takes to go to the next frame. I got this info from a question similar to mine; here is a link to that question:
Cocos2d 2.0 - 3 numbers on the bottom left
The game's FPS milliseconds value usually runs at about 0.016 or 0.021 and there is no lag.
Shouldn't it run smoother at numbers as low as 0.002?
How can I stop this lag?
Is there anybody that knows enough about cocos2d to help me out?
When your app runs really, really slow (around 10 fps or less) the milliseconds display is no longer accurate and will display a very low number.
You need to find out what's causing the drop in framerate. If the number of draw calls is high (100+) then your problem is that you're rendering too much and/or inefficiently (use sprite batching).
If the number of draw calls is reasonably low (no more than 50) then your problem is not the rendering but your own code. Possibly some time consuming (inefficient?) algorithm or frequently loading/unloading objects and/or data (files), those are the most common cases.
How are you observing/measuring this 'lag'?
If you are reporting this based on running in the simulator, please check on a device; simulator numbers are meaningless. By the way, the number does not CAUSE lag; the number is the result (a measure) of resource consumption by the app, i.e. low FPS is caused by laggy software, not the reverse.

DirectX: Game loop order, draw first and then handle input?

I was just reading through the DirectX documentation and encountered something interesting on the page for IDirect3DDevice9::BeginScene:
"To enable maximal parallelism between the CPU and the graphics accelerator, it is advantageous to call IDirect3DDevice9::EndScene as far ahead of calling present as possible."
I've been accustomed to writing my game loop to handle input and such, then draw. Do I have it backwards? Maybe the game loop should be more like this: (semi-pseudocode, obviously)
while(running) {
    d3ddev->Clear(...);
    d3ddev->BeginScene();
    // draw things
    d3ddev->EndScene();
    // handle input
    // do any other processing
    // play sounds, etc.
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
According to that sentence of the documentation, this loop would "enable maximal parallelism".
Is this commonly done? Are there any downsides to ordering the game loop like this? I see no real problem with it after the first iteration... And I know the best way to know the actual speed increase of something like this is to actually benchmark it, but has anyone else already tried this and can you attest to any actual speed increase?
Since I always felt that it was "awkward" to draw-before-sim, I tended to push the draws until after the update but also after the "present" call. E.g.
while True:
    Simulate()
    FlipBuffers()
    Render()
While on the first frame you're flipping nothing (and you need to set things up so that the first flip does indeed flip to a known state), this always struck me as a bit nicer than putting the Render() first, even though the order of operations is the same once you're under way.
The short answer is yes, this is how it's commonly done. Take a look at the following presentation on the game loop in God of War III on the PS3:
http://www.tilander.org/aurora/comp/gdc2009_Tilander_Filippov_SPU.pdf
If you're running a double-buffered game at 30 fps, the input lag will be 1 / 30 ~= 0.033 seconds, which is way too small to be detected by a human (for comparison, any reaction time under 0.1 seconds on the 100 metres is considered a false start).
It's worth noting that on nearly all PC hardware BeginScene and EndScene do nothing. In fact the driver buffers up all the draw commands, and even when you call Present it may not have begun drawing. Drivers commonly buffer up several frames of draw commands to smooth out the frame rate. Usually the driver does things based around the Present call.
This can cause input lag when frame rate isn't particularly high.
I'd wager if you did your rendering immediately before the present you'd notice no difference to the loop you give above. Of course on some odd bits of hardware this may then cause issues so, in general, you are best off looping as you suggest above.
