I used to test my apps on an iPhone 5S; now I have switched to an iPhone SE.
I am asking myself why the default form slide transitions still stutter on such a fast device - the animation should always appear smooth if the calculated positions are correct relative to the timeline.
Looking into CommonTransitions, I saw that there is a CommonTransitions.TYPE_FAST_SLIDE and wondered whether this is the key to smooth transitions - is it?
However, under FormTransitionOut in the Theme Constants of the Codename One Designer there is no fastSlide option - why is that?
I've wondered the same, although I have only a cheap device to test on.
This may only add fuel to the fire, but have you tried Display.setFramerate(int rate)? The docs say the default is an (attempted) rate of 10 redraws per second.
Maybe it would look smoother with 20?
In our AR app and App Clip built with SceneKit, we see very significant framerate drops when our 3D content appears at different steps of the experience. We have run our tests on an iPhone X (iOS 15.2.1), an iPhone 12 Pro (iOS 14), and an iPad Pro 2020 (iPadOS 14.8.1).
For now, all of our 3D objects live in our main scene. Those that are supposed to appear at some point in the experience have their opacity set to 0.01 at the beginning and then fade in with an SCNAction (we set their opacity to 0.01 at the start to make sure these objects are rendered from the very beginning of the experience).
However, if the objects all have their opacity set to 1 from the start of the experience, we do not experience any fps drop.
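For context, the fade-in described above is essentially the following (a simplified sketch; the duration and function name are placeholders, not our real code):

    import SceneKit

    // The node starts almost invisible so SceneKit still renders it from the first frame.
    func reveal(_ node: SCNNode) {
        // Set at scene-setup time in the real app; shown here for completeness.
        node.opacity = 0.01
        // Fade the node in when its step of the experience is reached.
        node.runAction(SCNAction.fadeOpacity(to: 1.0, duration: 0.5))
    }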
It is worth noting that the fps drops only happen the first time the app is opened. If I close it and re-open it, then it unfolds without any freeze.
What would be the best way to load (or pre-load) these 3D elements to avoid these freezes?
I have been told that the prepareObject:shouldAbortBlock: and prepareObjects:withCompletionHandler: methods could solve our problem.
In our case the scene view is an ARSCNView, so we can't seem to access the SCNSceneRenderer methods. We have only managed to implement the prepare method to preload our assets:
https://developer.apple.com/documentation/scenekit/scnscenerenderer/1523375-prepare
It seems to be doing something because we get a significantly longer freeze at the start of the experience, however, we still experience the same fps drops during the experience when our objects appear for the first time.
Is there any documentation on how to use the prepareObject:shouldAbortBlock: and prepareObjects:withCompletionHandler: methods with an ARSCNView?
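As far as we understand, ARSCNView inherits from SCNView and should therefore expose the SCNSceneRenderer methods, so here is roughly what we are attempting (a sketch with placeholder names for our nodes):

    import ARKit
    import SceneKit

    // sceneView is the ARSCNView driving the experience; nodesToAppear are the
    // objects that currently start with opacity 0.01.
    func preload(_ nodesToAppear: [SCNNode], in sceneView: ARSCNView) {
        // Asynchronous variant (prepareObjects:withCompletionHandler: in Objective-C):
        // SceneKit builds the GPU resources in the background and calls back when done.
        sceneView.prepare(nodesToAppear) { success in
            print("SceneKit preparation finished, success: \(success)")
        }

        // Synchronous variant (prepareObject:shouldAbortBlock: in Objective-C):
        // blocks the calling thread until the object is ready (or the block returns true).
        // let ready = sceneView.prepare(nodesToAppear[0], shouldAbortBlock: nil)
    }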
I've been developing an iOS app using Swift 4 but recently (today) decided to switch to Flutter/Dart after hearing about its capabilities.
In my iOS app, I have a moving background of waves (actual waves, like you'd picture on an ocean).
The wave .png is 1606 units wide, and I animated it so that it translates from right to left by 265 units in 1 second and then repeats. This way the waves appear to move continuously when, in reality, it's only a fraction of the entire .png repeating itself.
I needed this same background on all ViewControllers (screens) in the app. I did this by passing the last known x value through a segue right before the transition (a transition between view controllers - I believe the Flutter/Dart equivalent is a "route") and using that value as the initial position of the waves in the next ViewController. When I swipe up or down to move between view controllers, the waves move along accordingly.
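For reference, the UIKit version of that wave loop is roughly the following (numbers as described above; the view and function names are just illustrative, not my actual code):

    import UIKit

    // waveImageView holds the 1606-unit-wide wave image. Slide it 265 units to
    // the left over one second, then let the animation repeat from the start
    // position, so a fraction of the .png reads as an endless wave.
    func startWaveLoop(on waveImageView: UIImageView, from lastKnownX: CGFloat) {
        waveImageView.frame.origin.x = lastKnownX
        UIView.animate(withDuration: 1.0,
                       delay: 0,
                       options: [.repeat, .curveLinear],
                       animations: {
                           waveImageView.frame.origin.x = lastKnownX - 265
                       })
    }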
For some reason the animations were a bit choppy, though I'd say they were perfectly smooth 80% of the time. I need it to be 100% smooth before I release my app, though.
How would I go about accomplishing this type of animation in Dart?
For animations, Flare seems very promising and I’m kind of steering towards using that to accomplish my goal but I’d like to hear any advice on how I should approach this.
The animation performance depends on how it's implemented. Without a minimal repro it will be hard to pinpoint the issue. One option you might consider for animating the background is the AnimatedPositioned widget, which lets you animate the position of widgets displayed on screen.
Another option is to create the animation in Adobe After Effects and render it with the lottie plugin in your app.
In my app, I have several simple scenes displayed at once (each is a single 80-segment sphere with a 500 px by 1000 px texture, rotating once a minute). When I open the app, everything runs smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly at around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and very low (~3%) CPU usage.
I see this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether it also occurs on other devices or in the Simulator.
Here is a sample project I pulled together to demonstrate this issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain good fps?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit, iOS 10.3.1), and it lags from the very beginning at 2-3 fps for up to about a minute. Then, after some time, it starts to spin smoothly. The same happens after going background and foreground. This can probably be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen, leaving the other views behind it, and set v4.showsStatistics = true.
Here is what I got:
As you can see, Metal flush takes about 18.3 ms per frame, and that is for just one SCNView.
According to this answer on Stack Overflow: "So, if my interpretation is correct, that would mean that 'Metal flush' measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU."
So we might suspect that the problem lies in four different SCNViews working with the GPU simultaneously.
Let's check that. Compared with the setup above, I deleted the three SCNViews behind and moved their three planets into the front view, so that a single SCNView shows all four planets at once. Here is the screenshot:
As you can see, Metal flush now takes at most 5 ms, right from the beginning, and everything runs smoothly. You may also notice that the triangle count (top-right icon) is four times what we saw on the first screenshot.
To sum up, try combining all of your SCNNodes into a single SCNView; you may well get a speed-up.
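A minimal sketch of that consolidation, assuming the planets are plain textured spheres (the sizes, positions, and names here are illustrative, not taken from your project):

    import UIKit
    import SceneKit

    // Build one SCNView that hosts every planet instead of four separate views.
    func makeCombinedPlanetView() -> SCNView {
        let sceneView = SCNView(frame: UIScreen.main.bounds)
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.showsStatistics = true   // same statistics overlay shown above

        // Illustrative planet nodes; in the real app these would be the nodes
        // that previously lived in the four separate scenes.
        for (index, radius) in [1.0, 0.6, 0.8, 0.5].enumerated() {
            let planet = SCNNode(geometry: SCNSphere(radius: CGFloat(radius)))
            planet.position = SCNVector3(Float(index) * 3 - 4.5, 0, -10)
            // Rotate once per minute, as in the original scenes.
            planet.runAction(.repeatForever(.rotateBy(x: 0, y: .pi * 2, z: 0, duration: 60)))
            scene.rootNode.addChildNode(planet)
        }
        return sceneView
    }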
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene as suggested by Sander's answer and set the delegate on one of the SCNViews as suggested in the second answer to this question. Maybe this used to work or it worked in a different context, but it didn't work for me.
Where Sander's answer ended up helping me was the performance statistics display, which I didn't know existed. I enabled it for one of my scenes, and something stood out to me about performance:
In the first few seconds of running, before the app hit the dramatic frame drops, the performance display read 240 fps. "Why was this?", I thought. Who would need 240 fps on a mobile phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in a single scene triggered a "Metal flush", meaning each scene was being flushed 240 times per second. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), and eventually SceneKit needs to start clearing it out, and 240 fps across 4 views is simply too much for it to keep up with (which explains why performance is initially good before dropping off completely).
My solution (and this is why I said "partial solution") was to set preferredFramesPerSecond on each SCNView to 15, for a total of 60 (I can also get away with 30 on my phone, but I'm not sure that holds up on weaker devices). Unfortunately 15 fps is noticeably choppy, but it is way better than the terrible performance I was getting originally.
Maybe in the future Apple will enable unique refreshes per SceneView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 across all of your SCNViews.
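A minimal sketch of that workaround, assuming four SCNViews (the helper name is mine, not from the project):

    import SceneKit

    // Caps each SCNView so the combined refresh work sums to roughly totalFPS.
    // With four views and the default total of 60, each view ends up at 15 fps.
    func capFrameRates(of sceneViews: [SCNView], totalFPS: Int = 60) {
        let perView = max(totalFPS / max(sceneViews.count, 1), 1)
        for sceneView in sceneViews {
            sceneView.preferredFramesPerSecond = perView
        }
    }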
From what I've researched online, the iPhone screen refresh rate is 60 Hz (not sure if this applies to the iPhone 6 as well) - meaning it can refresh the image up to 60 times a second.
However, I have a project in which I need a very fast blinking animation - animating a view back and forth (from visible to invisible) more than 60 times a second. I thought about using CADisplayLink so I'd get called every time the screen refreshes, but unfortunately, as stated above, this is not fast enough (it only gets called 60 times a second).
Is there something I'm missing here, or is there a way to achieve a higher blinking rate? Do iOS games achieve better rates than this?
Thanks
There's no way to achieve a faster display rate than the screen refresh itself; the screen simply can't refresh any faster, so nobody would see the extra frames anyway.
Hence all iOS games are effectively vsynced at 60 fps.
That said, depending on what you're doing, you might not even be getting 60 fps. Have you profiled your app to determine the fps it's actually running at?
If it's not hitting 50+, there's probably some optimization you can do to get it as close to 60 as possible.
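If it helps, one way to both drive the blink and see the rate you are actually getting is a CADisplayLink that toggles the view and logs the frame interval - a minimal sketch, not tied to your code:

    import UIKit

    final class Blinker {
        private var displayLink: CADisplayLink?
        private let blinkView: UIView

        init(blinkView: UIView) {
            self.blinkView = blinkView
        }

        func start() {
            let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
            link.add(to: .main, forMode: .common)
            displayLink = link
        }

        func stop() {
            displayLink?.invalidate()
            displayLink = nil
        }

        @objc private func tick(_ link: CADisplayLink) {
            // Toggle visibility once per refresh; the achieved rate is bounded
            // by the display itself, so this is as fast as the blink can get.
            blinkView.isHidden.toggle()
            print("frame interval:", link.targetTimestamp - link.timestamp)
        }
    }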
I'm developing a simple kinetic menu UI component for AIR for iPad. It's basically a lightweight fill-in for a combobox that matches the style of iOS. I have a sprite containing anywhere from 2 to 60 item buttons that pops up and lets you flick/scroll through them, showing only about 7 items at any given time.
My first attempt at this used a mask over my sprite, moving my menu sprite up and down under the stationary mask. This produced lackluster results on the test device (< 20 fps).
I then tried a blitting solution, leaving the menu sprite off the display list and using BitmapData.draw() to render only the part of the list I needed visible. This produced the best results on my Windows dev platform, but this time the framerate dropped below 10 fps on the iPad. I am assuming I was incurring either taxing CPU usage or a GPU readback penalty. Originally I had hoped to run my app at 60 fps, but I've ratcheted my goal down to a more humble 30 fps.
Which brings me to my third attempt at this UI component, using the sprite's .scrollRect masking function in conjunction with .cacheAsBitmap. Again, the observed behavior differs wildly between AIR on Windows and on iOS. On Windows it only redraws the part of the menu sprite bounded by the dimensions of the scrollRect, as it should. On iOS I can touch the area of the screen above or below the visible area of the menu sprite and still drag the menu, even though my finger is over "empty" space! The performance here is decent, hovering between 19 and 25 fps, and it would almost certainly be a perfect 30 if it worked as it does on Windows.
Does anyone have any ideas either about the scrollRect feature's behavior on AIR for iOS or a better way of implementing an iOS native style gliding menu in AIR for iOS?
Note, the above methods were tried in both CPU and GPU mode, but CPU mode performed vastly better. I used AIR 2.7 installed on top of Flash Pro CS 5.5, with FlashDevelop as my IDE.
http://esdot.ca/site/2011/fast-rendering-in-air-3-0-ios-android#comment-10
A really nice guy at the above link says: "Ya, scrollRect is basically a no-go on mobile, basically forget that API even exists. Believe it or not… old school masking is the way to go. Round and round we go!"