iOS weight scale pointer rotates abruptly

I have been asked to build an application that displays readings from a BLE hardware scale (over Bluetooth 4.0) on a virtual scale face, with animation. The scale face is designed just like a real gauge, with a pointer that rotates to indicate the current reading. As a note, the data from the hardware arrives at irregular intervals; I am sure that is caused by the hardware's irregular sampling. My design is: push the incoming readings onto a queue, then consume them in a timer and dispatch an animation for each one.

That is the background. Currently I am stuck on getting the pointer to rotate as smoothly as a real one. I have tried several solutions, but none of them is acceptable to customers, and I believe the problem comes down to the poor user experience of the pointer animation. Please help me with the following question.
I invoke CATransform3DMakeRotation from a 50 ms timer to animate the rotation, and the pointer always trembles slightly while it rotates. I suppose this is caused by a new animation being dispatched while the previous one has not yet completed. This is supported by the fact that increasing the timer frequency makes the trembling worse, while decreasing it makes it go away. I then tried invoking the next rotation from the completion callback of the previous one, once it is done. Now the trembling really does disappear, but the rotation is no longer visually continuous: given readings of 0, 450, 500 and 600, the rotation visibly splits into three sub-animations (0 to 450, 450 to 500, and 500 to 600). That also makes for a poor experience.
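For context, here is a minimal sketch of the timer approach described above (Swift for illustration; names such as pendingAngles are simplified stand-ins, not my actual code):

import UIKit

// Readings are queued as target angles (radians) and consumed by a 50 ms
// timer; each tick replaces the pointer layer's transform, which kicks off
// an implicit rotation animation.
final class ScalePointerController {
    let pointerLayer: CALayer
    var pendingAngles: [CGFloat] = []   // queued readings, converted to angles
    private var timer: Timer?

    init(pointerLayer: CALayer) {
        self.pointerLayer = pointerLayer
    }

    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self, !self.pendingAngles.isEmpty else { return }
            let angle = self.pendingAngles.removeFirst()
            // If the previous implicit animation has not finished yet,
            // replacing the transform here is what makes the pointer tremble.
            self.pointerLayer.transform = CATransform3DMakeRotation(angle, 0, 0, 1)
        }
    }
}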
So, how can I deal with this?
Any help would be appreciated. Thanks.

Related

When exactly is drawInMTKView called?

MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder whether it waits for the last drawable to have been presented before asking your delegate to draw a new one.
From what I understand from reading this article, Core Animation can provide up to three "in flight" drawables, but I can't find whether MetalKit tries to draw to them as soon as possible or waits for something else to happen.
What would that something else be? What confuses me a little is the idea of drawing up to two frames in advance: it means the CPU must already know what it wants to render two frames into the future, and I feel that isn't always the case. For instance, if your application depends on user input, you can't know up front what actions the user will have taken between now and when the two frames you are drawing will be presented, so they may be presented with out-of-date content. Is this assumption right? In that case, it could make sense to call the delegate method only at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that this means the CPU may sometimes be inactive when it could have done some useful work.
I also have the intuition that this may not be what actually happens, since in the aforementioned article drawInMTKView seems to be called as soon as a drawable is available; the authors seem to rely on it being called that way to schedule work on resources without stalling the CPU. But since many points are unclear to me, I am not sure what happens exactly.
The MTKView documentation for the paused property says:
If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the available MTKView samples, it probably uses a combination of an internal timer and CVDisplayLink callbacks. This means it will basically choose the "right" interval to call your drawing function at the right time, usually just after the previous drawable has been shown "on glass", i.e. at v-sync points, so that your frame has the most CPU time available to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draw calls happen. There are also other ways, such as relying on back pressure from the drawable queue (basically just calling nextDrawable in a loop, because it blocks the thread until a drawable is available) or using present(_:afterMinimumDuration:). Some of these are discussed in this WWDC video.
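As a rough illustration of the CADisplayLink route (a sketch only; the class name and the draw closure are made up, and the actual drawable/encoding work is left out):

import QuartzCore

// Drives one draw per display refresh instead of relying on MTKView's
// internal pacing. The draw closure is expected to obtain a drawable,
// encode and present; nextDrawable blocking when none is free provides
// natural back pressure.
final class DisplayLinkDriver: NSObject {
    private var link: CADisplayLink?
    private let draw: () -> Void

    init(draw: @escaping () -> Void) {
        self.draw = draw
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick() {
        draw()   // called once per v-sync on the main run loop
    }
}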
I think Core Animation triple-buffers everything that gets composited by the window server, so basically it waits for you to finish drawing your frame, then composites it with the other frames and presents the result to the "glass".
As to your question about the delay: you are right, the CPU runs two or even three "frames" ahead of the GPU. I am not too familiar with this and haven't tried it, but I think it's possible to effectively "skip" the frames you drew ahead of time if you delay the presentation of your drawables until the last moment, possibly in a scheduled handler on one of your command buffers.
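For what it's worth, here is a sketch of pacing presentation with the command buffer's minimum-duration present (illustrative only; the encoding step is elided and the function name is made up):

import Metal
import QuartzCore

// Ask Metal to keep each drawable on screen for at least 1/30 s, so queued
// frames cannot race ahead of the display.
func renderOneFrame(queue: MTLCommandQueue, layer: CAMetalLayer) {
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer() else { return }
    // ... encode your render pass targeting drawable.texture here ...
    commandBuffer.present(drawable, afterMinimumDuration: 1.0 / 30.0)
    commandBuffer.commit()
}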

SceneKit scenes lag when resuming app

In my app, I display several simple scenes at once (each a single 80-segment sphere with a 500 px by 1000 px texture, rotating once a minute). When I open the app, everything goes smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with it for a while, the scenes all lag terribly at around 4 fps, even though Xcode reports 30 fps, normal memory usage, and very low (~3%) CPU usage.
I get this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether it also occurs on other devices or in the simulator.
Here is a sample project I pulled together to demonstrate the issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain a good frame rate?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit, iOS 10.3.1), and it lags from the very beginning, at 2-3 FPS, for about a minute; then it starts to spin smoothly. The same thing happens after going to the background and back to the foreground. This could be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen (the other views stayed behind it) and set v4.showsStatistics = true.
Here is what I got:
As you can see, "Metal flush" takes about 18.3 ms for one frame, and that is for only one SCNView.
According to this answer on Stack Overflow:
So, if my interpretation is correct, that would mean that "Metal flush" measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU.
So we might suspect that the problem is four different SCNViews working with the GPU simultaneously.
Let's check that. Compared to the setup in point 2, I deleted the three SCNViews behind and moved the three planets from those views into the front one, so that a single SCNView shows all four planets at once. Here is the screenshot:
As you can see, "Metal flush" now takes at most 5 ms, it is like that from the very beginning, and everything runs smoothly. You may also notice that the triangle count (top right) is four times what we saw in the first screenshot.
To sum up: try combining all the SCNNodes into one SCNView, and you may well get a speed-up.
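A rough sketch of that consolidation (names are illustrative; it assumes the host view keeps, or is given, a scene):

import SceneKit

// Move every node out of the secondary views' scenes into a single host
// view's scene, then remove the now-empty views.
func combine(views: [SCNView], into host: SCNView) {
    let scene = host.scene ?? SCNScene()
    host.scene = scene
    for view in views where view !== host {
        for node in view.scene?.rootNode.childNodes ?? [] {
            node.removeFromParentNode()
            scene.rootNode.addChildNode(node)
        }
        view.removeFromSuperview()
    }
}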
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene, as suggested by Sander's answer, and to set the delegate on one of the SCNViews, as suggested in the second answer to this question. Maybe that used to work, or it worked in a different context, but it didn't work for me.
What did end up helping me from Sander's answer was the performance statistics display, which I didn't know existed. I enabled it for one of my scenes, and something stood out:
In the first few seconds of running, before the app hits the dramatic frame drops, the performance display read 240 fps. "Why is that?", I thought. Who would need 240 fps on a mobile phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in each scene triggered a "Metal flush", meaning the scenes together were being flushed 240 times per second. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), that eventually SceneKit needs to start clearing it out, and that 240 flushes per second across 4 views is simply too much for it to keep up with (which explains why performance is initially good before dropping completely).
My solution (and this is why I said "partial solution") was to set the preferredFramesPerSecond of each SCNView to 15, for a total of 60. (I can also get away with 30 on my phone, but I'm not sure that holds up on weaker devices.) Unfortunately 15 fps is noticeably choppy, but it's far better than the terrible performance I was getting originally.
Maybe in the future Apple will allow a unique refresh rate per SCNView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 over all of your SCNViews.
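A minimal sketch of that workaround (the helper name is made up; it assumes you can enumerate your SCNViews):

import SceneKit

// Split a total frame-rate budget of 60 evenly across all views,
// e.g. four views -> 15 fps each.
func capFrameRates(of views: [SCNView], totalBudget: Int = 60) {
    guard !views.isEmpty else { return }
    let perView = max(1, totalBudget / views.count)
    for view in views {
        view.preferredFramesPerSecond = perView
    }
}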

Redraw Scenes Only When the Scene Data Changes

I read this on the Tuning Your OpenGL ES App page:
Redraw Scenes Only When the Scene Data Changes:
Your app should wait until something in the scene changes before rendering a new frame. Core Animation caches the last image presented to the user and continues to display it until a new frame is presented.
Even when your data changes, it is not necessary to render frames at the speed the hardware processes commands. A slower but fixed frame rate often appears smoother to the user than a fast but variable frame rate. A fixed frame rate of 30 frames per second is sufficient for most animation and helps reduce power consumption.
From what I understand, there is an event loop that keeps running and re-rendering the scene. We just override the onDrawFrame method and put our rendering code there; I have no control over when it gets called. How, then, can I "redraw scenes only when the scene data changes"?
In my case, the scene changes only when the user interacts (click, pinch, etc.). Ideally I would like not to render while the user isn't interacting with my scene, but this function keeps getting called continuously. I am confused.
At the lowest exposed level, there is an OpenGL-backed type of Core Animation layer, CAEAGLLayer. It can supply a colour buffer that is usable to construct a framebuffer object, to which you can draw as and when you wish, presenting as and when you wish. That is the full extent of the process for OpenGL on iOS: draw when you want, present when you want.
The layer then holds a pixel copy of the scene. Normal Core Animation rules apply, so you're never autonomously asked to redraw; full composition, transitions, Core Animation animations, etc. can all occur without any work on your part.
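A bare-bones sketch of that setup (Swift, OpenGL ES 2; simplified, with no depth buffer, error handling or resize handling):

import UIKit
import OpenGLES

// A UIView backed by a CAEAGLLayer: the layer provides storage for the
// colour renderbuffer, and you draw and present entirely on your own schedule.
final class PlainGLView: UIView {
    override class var layerClass: AnyClass { CAEAGLLayer.self }

    private var context: EAGLContext!
    private var framebuffer: GLuint = 0
    private var colorRenderbuffer: GLuint = 0

    func setUpGL() {
        context = EAGLContext(api: .openGLES2)
        EAGLContext.setCurrent(context)
        glGenFramebuffers(1, &framebuffer)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glGenRenderbuffers(1, &colorRenderbuffer)
        glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
        // The CAEAGLLayer supplies the colour buffer's storage.
        context.renderbufferStorage(Int(GL_RENDERBUFFER), from: layer as? EAGLDrawable)
        glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                                  GLenum(GL_RENDERBUFFER), colorRenderbuffer)
    }

    func drawAndPresent() {
        EAGLContext.setCurrent(context)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        // ... issue GL draw calls here, whenever the scene has changed ...
        glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
        context.presentRenderbuffer(Int(GL_RENDERBUFFER))
    }
}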
It is fairly common practice to connect up a timer, usually a CADisplayLink, either manually or by taking advantage of one of the GLKit constructs; in that case you're trying to produce one frame per timer tick. Apple is suggesting that running the timer at half the refresh rate is acceptable, and that if you wake up to perform a draw and realise you'd just be drawing exactly the same frame as last time, you shouldn't bother drawing at all, ideally stopping the timer completely if you have sufficient foresight.
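A sketch of that "draw only when dirty" pattern (illustrative names; the render closure stands in for the actual GL drawing and presenting):

import QuartzCore

// Runs a CADisplayLink only while there is something new to draw; once the
// scene stops changing, the link is invalidated and no further work happens.
final class OnDemandRenderer: NSObject {
    private var link: CADisplayLink?
    private var sceneIsDirty = false
    private let render: () -> Void

    init(render: @escaping () -> Void) {
        self.render = render
    }

    func setNeedsRender() {          // call this from your input handlers
        sceneIsDirty = true
        if link == nil {
            link = CADisplayLink(target: self, selector: #selector(tick))
            link?.add(to: .main, forMode: .common)
        }
    }

    @objc private func tick() {
        guard sceneIsDirty else {    // nothing changed since the last frame
            link?.invalidate()
            link = nil
            return
        }
        sceneIsDirty = false
        render()                     // draw and present one frame
    }
}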
As per the comment, onDrawFrame isn't worded like an Objective-C or Swift method and isn't provided by any Apple-supplied library. Whoever is posting that (presumably to look familiar to Android developers) needs to take responsibility for appropriate behaviour.

Cocos2dx 2.1.4 game: continuous FPS drop that never recovers

I am creating a game using cocos2d-x 2.1.4. Its FPS drops continuously and never recovers.
The details are as follows.
Background on the way I am doing things:
It's a game about scrolling shapes down the screen; each shape is made up of square blocks. I have 7 kinds of blocks, all loaded from a sprite sheet, and I build each shape out of these blocks.
A level file consists of these shapes. I load two levels at the same time, one on screen and another off screen, to make the scrolling seamless. To load two levels at once I use two different CCSpriteBatchNodes:
// Load the sprite frames once, then create two batch nodes that share
// the same texture, one per level.
CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("56blackglow.plist");

_gameBatchNode1 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode1->retain();
this->addChild(_gameBatchNode1, kForeground);

_gameBatchNode2 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode2->retain();
this->addChild(_gameBatchNode2, kForeground);
The problem I am facing is that as I keep playing, the frame rate drops continuously, from 60 FPS down to 10 FPS, and never recovers (or might recover eventually, but I watched for 20 minutes and that is too long to wait).
My observations:-
1> I used Time Profiler; it shows that most of the time is spent in draw() calls. Also, if I play the game very fast, the peak in the track increases, which should be fine since I am giving it more work to do; but once a peak is reached it stays at roughly that height, even if I leave the game idle. Is that normal? I would expect it to return to its normal level once the extra work is done.
2> At one point I thought it was happening because I use two batch nodes, and removing their children immediately on a user touch might be slow; but then, once the children are removed, it should run normally again. To give an idea: is it OK to remove 10 children from a batch node at once? Some people say it is a very slow process. Just to check whether this was the cause, instead of removing the children I set their visibility to false, but the FPS still drops and never recovers.
Please share your thoughts on this.
Though sprite batch nodes are generally quite good for drawing a lot of elements efficiently, I think they are best used for static or not-so-dynamic elements. In your case, if you have a lot of elements that go off screen but are still alive, the draw() function still has to check them, which hurts your performance (even if you call setVisible(false) explicitly, each child still needs to be checked).
In your case I think it would be better to simply add new shapes outside the screen from some time-based function, and when they scroll out of view remove them from the scene, without using batch nodes.
Just found that with every touch I am adding 8 new sprites to the layer, and they are added again on every touch, so over time I keep giving it more and more work to do. This is the problem.
Actually I wanted to replace the sprites at 8 places on each touch, but the way I was doing it, every time I ran:
// Wrong: creates and adds a brand-new sprite on every touch.
_colorBottom1 = CCSprite::createWithSpriteFrameName(png[0]);
this->addChild(_colorBottom1, kForeground);
_colorBottom1->setPosition(ccp(_colorPanelLeftPad * _blockWidth, _blockWidth));
This caused a new sprite to be added with every touch. It should instead have replaced the texture rather than creating the sprite again:
// Right: reuse the existing sprite and just swap its display frame.
CCSpriteFrame *frame1 = CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(png[0]);
_colorBottom1->setDisplayFrame(frame1);

Multithreading OpenGL ES on iOS for games

Currently, I have a fixed-timestep game loop running on a second thread in my game. The OpenGL context lives on the same thread, and rendering is done once per frame after any updates, so the main loop has to wait for drawing to finish each frame before it can proceed. This wasn't really a problem until I wrote my particle system: upwards of 1500 particles with a 16 ms physics step drops the frame rate just below 30, and more particles make it worse. The particle rendering can't be optimized any further without losing capability, so I decided to try moving OpenGL to a third thread. I know this is somewhat of an extreme case, but I feel it should be able to handle it.
I've thought of running two loops concurrently: one for the main stepping (fixed timestep) and one for drawing (as fast as it can go). However, the rendering calls read data that may be changed by each update, so I was concerned that locking would slow things down and negate the benefit. After implementing a test of this, I'm getting EXC_BAD_ACCESS after less than a second of runtime. I assume that's because the two threads are accessing the same data at the same time? I thought the system handled this automatically?
When I was first learning OpenGL on the iPhone, I had OpenGL set up on the main thread and would call performSelectorOnMainThread:withObject:waitUntilDone: with the rendering selector; these errors would happen any time waitUntilDone was false. If it was true, it would still happen randomly sometimes, but other times I could let it run for 30 minutes and it would be fine. I assume the same thing is happening now. I do get the first frame drawn to the screen before it crashes, so I know something is working.
How would this be handled properly, and would it even provide the speed-up I'm looking for? Or would the contended access slow it down just as much?
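To make the handoff concrete, here is a minimal sketch of what I mean by the two loops sharing data under a lock (Swift for brevity; the types and names are made up, not my actual code):

import Foundation
import simd

// Hypothetical snapshot of the particle state: the update loop publishes a
// complete copy under a lock, and the render loop copies it out before
// touching GL, so neither thread reads the data mid-mutation.
struct ParticleSnapshot {
    var positions: [SIMD2<Float>] = []
}

final class SharedFrameState {
    private let lock = NSLock()
    private var latest = ParticleSnapshot()

    // Called from the fixed-timestep update thread.
    func publish(_ snapshot: ParticleSnapshot) {
        lock.lock()
        latest = snapshot
        lock.unlock()
    }

    // Called from the render thread just before drawing a frame.
    func copyLatest() -> ParticleSnapshot {
        lock.lock()
        defer { lock.unlock() }
        return latest
    }
}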
