SKView in UICollectionView terrible performance - iOS

Adding an empty SKView to a UICollectionView cell makes scrolling almost impossible on an iPhone 6 (iOS 9.x). Let's say the collection view contains 6 items, of which the first 3 are visible; scrolling horizontally to the next 3 items takes about 3 seconds and is very jerky.
Here's the relevant part of the code:
func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell
{
    let cell = collectionView.dequeueReusableCellWithReuseIdentifier(ACDFilterCollectionViewCellConstants.CellIdentifier, forIndexPath: indexPath) as! ACDFilterCollectionViewCell
    let filterType = filters[indexPath.row]
    cell.titleLabel.text = filterType.rawValue
    let skview = SKView(frame: cell.frame)
    cell.filterView.addSubview(skview)
    return cell
}
What should I do to get smooth scrolling with SKViews inside my UICollectionViewCells?
On iOS 8.x I get an exception when rendering this view:

I don't know the specific cause of the poor performance, but I can give you some pointers on how to proceed.
The first thing to do is to profile the code. Use Instruments to figure out where the app is spending the most time when you start scrolling. Is setting up the SKViews as they come onto the screen the expensive part? Or is it drawing all those sprite views over and over? Is it something in the sprite views that's taking a long time to draw? I hear that SKShapeNode can be a drag on performance. What happens if you remove all the subnodes from the SKViews?
Once you know where the bottleneck is, you can decide how to proceed. From your comment it sounds like you're only using SKView to render some paths, and maybe you don't need those to animate once they're drawn? If so, perhaps you can use a single offscreen SKView to render your path into an image, and then insert that image into the collection cell. People use collections full of images all the time, and they scroll beautifully.
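A minimal sketch of that render-to-image idea, written in current Swift syntax and assuming a single offscreen SKView kept around as a renderer; the property filterImageView and the helper pathNodeForFilter are illustrative stand-ins, not names from the question:
import SpriteKit
import UIKit

// One shared, offscreen SKView used purely as a renderer (size is illustrative).
let offscreenView = SKView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))

// Renders a node tree (e.g. your path nodes) into a plain UIImage.
func snapshotImage(of node: SKNode) -> UIImage? {
    // texture(from:) rasterizes the node using the offscreen view...
    guard let texture = offscreenView.texture(from: node) else { return nil }
    // ...and cgImage() (iOS 9+) yields something a plain UIImageView can display.
    return UIImage(cgImage: texture.cgImage())
}

// In cellForItemAtIndexPath, instead of adding an SKView to every cell:
// cell.filterImageView.image = snapshotImage(of: pathNodeForFilter(filterType))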

My initial suggestion would be simply this: Scale back your expectations of what's possible in terms of framework interactions and relationship building between them.
Look into CAShapeLayer and see if it can do enough of what you're attempting to do with SKShapeNode and shaders. CAShapeLayer has a shouldRasterize option, and you can then apply things like Core Image filters, which might get you close to the desired result without using SKView, SKShapeNode and SKShader.
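For instance, a rough sketch of a rasterized CAShapeLayer that a cell could host instead of an SKView; the path and colors are placeholders, not taken from the question:
import UIKit

// Placeholder path standing in for whatever the filter cell actually draws.
let shapeLayer = CAShapeLayer()
shapeLayer.path = UIBezierPath(ovalIn: CGRect(x: 10, y: 10, width: 80, height: 80)).cgPath
shapeLayer.fillColor = UIColor.blue.cgColor
shapeLayer.strokeColor = UIColor.white.cgColor
shapeLayer.lineWidth = 2

// Rasterize once so scrolling reuses a cached bitmap instead of re-drawing the path.
shapeLayer.shouldRasterize = true
shapeLayer.rasterizationScale = UIScreen.main.scale

// In cellForItemAtIndexPath (one layer per cell): cell.filterView.layer.addSublayer(shapeLayer)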
The problem you're up against is artificially inflated expectations of quality and performance within Apple's newer APIs and Frameworks, and their ability to interact and contain one another.
This is not your fault.
Apple has mastered the art of Showmanship and Salesmanship, long ago.
For a very long time these arts were reserved for physical product releases - the pitching and promoting of the devices people buy.
Sometime around the illness and death of the late, great Mr Jobs (may he rest in peace), a transition was underway to promote far more actively the rapid development and early release of software to the App Store.
Most of this was part of the war on Android and Google. There must have been some consensus (probably amongst the supplier-focused channels of the company that now lead it) that getting as many apps into the store as possible was a way to nullify and/or beat Android.
To this end iOS 7 was created, along with UICollectionViews, Auto Layout and all manner of other "wonderful" goodies designed to remove the need (and concern) for design and designers from most app creation.
These facilities give the impression that anything and everything can be done with APIs, and that there's little to no need to consider design, nor even technical design.
So developers plod into the frameworks, bright eyed, bushy tailed, and optimistic about their chances of realising anything they conceive... if they simply use a blend of the frameworks and APIs available to them, via the relationships they perceive to exist between frameworks from their understanding of the WWDC videos and other Apple promotional materials.
Unfortunately the aforementioned salesmanship and showmanship means that what you perceive to be possible is actually limited to the literal demonstrations you're shown. The inferred and suggested possible potential of the frameworks for interwoven relationships and success are just that... POSSIBLE POTENTIAL future functionality.
Most everything other than what is literally demonstrated is broken, or in some other way massively constrained and ill-performing. In some cases that which is demonstrated is also subsequently broken or doesn't make it to the release.
So if you're taking the time to imagine something like combining SKViews within UICollectionViews, and that this relationship should work, that assumption is your problem.
This gets far worse when you start thinking about SKShapeNodes and shaders, both of which have known issues within SpriteKit. Obviously you've found that SKViews inside UICollectionViews aren't performant. That is the first problem. You're going to have far more problems getting performance out of SKShapeNodes and your shaders later, and then the immutability of SKShapeNode is going to crop up for your animations, too.
It's not an unreasonable assumption; you've been led to believe this modularity of frameworks is a thing, and a huge feature of the massive frameworks of iOS.
However it's all early days, and that's not mentioned. SpriteKit is fundamentally broken in iOS 9; you can read more about many of its problems here:
https://forums.developer.apple.com/thread/14487
here:
https://forums.developer.apple.com/thread/20758
here:
https://forums.developer.apple.com/thread/17463
As far as I'm aware, there has been no formal communication about the cause and/or nature of the issues blighting those using SpriteKit in iOS 9, despite the noise about its many problems.
One enormous issue that's plagued many is that any use of UIKit and SKViews (in either way they can inter-relate) causes huge performance problems in iOS 9. At this point there are no apparent ways around this problem other than keeping these two frameworks separate.

One big change on iOS 9 vs. iOS 8 is the switch from OpenGL to Metal for SpriteKit and SceneKit. I suggest you try overriding that default, and switch back to OpenGL, by adding the key/value pair PrefersOpenGL/YES to your Info.plist. See Technical Q&A QA1904, Specifying the renderer for SpriteKit and SceneKit.

Related

SceneKit objects not showing up in ARSCNView about 1 out of 100 times

I can't share much code because it's proprietary, but this is a bug that's been haunting me for a while. We have SceneKit geometry added to the ARKit face node and displayed inside an ARSCNView. It works perfectly almost all of the time, but about 1 in 100 times nothing shows up at all. The ARSession is running, and none of the parent nodes are set to hidden. Furthermore, when I look at the Debug Memory Graph function in Xcode, the geometry appears to be entirely visible there (and doesn't seem to be set to hidden). I can see all the nodes attached to the face node perfectly within the ARSCNView of the memory graph, but on screen, nothing shows up. This has been an issue across multiple iOS versions, so it didn't just appear with a recent update.
Has anybody run into a similar problem, or does anybody have any ideas to look into? Is it an Apple bug, or is there a timing issue I might not be aware of? It's been really hard to debug because of how infrequent it is, and I haven't found it discussed on any other forums (but point me in the right direction if there is a previous discussion). Thanks!
This is a pretty common occurrence when AR tracking is poor for some reason.
I ran into a similar problem too. I think it's a tracking error that arises from how the AR app is used. Sometimes, if you're using a world-tracking configuration in ARKit and scan the surrounding environment offhandedly, or if you're tracking under inappropriate conditions, you get sloppy tracking data, which results in a situation where your world grid/axes may be unpredictably shifted aside and your model may fly away somewhere. If such a situation arises, look for your model somewhere nearby – maybe it's behind you.
If you're using a device with LiDAR, the aforementioned situation is almost impossible, but on a device without LiDAR you need to track your room thoroughly. There also need to be good lighting conditions and high-contrast real-world objects with distinguishable, non-repetitive textures.
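One practical debugging aid is to watch the tracking state, so you can tell whether a "missing" model coincides with degraded tracking. A minimal sketch, assuming you can attach an extra session delegate; the class name is illustrative:
import ARKit

// Logs tracking quality so a "missing" model can be ruled in or out as a tracking problem.
// Hook it up with: sceneView.session.delegate = trackingWatcher
final class TrackingWatcher: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("Tracking normal - content should render where expected")
        case .limited(let reason):
            print("Tracking limited (\(reason)) - content may drift or appear missing")
        case .notAvailable:
            print("Tracking not available")
        }
    }
}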

Fixing or avoiding memory leak in default third party library

I developed an app that includes the ability to preview the subdivision results of a 3D model on the fly. I have my own Catmull-Clark subdivision functions to permanently modify the geometry, but I use the .subdivisionLevel property of SCNGeometry to temporarily subdivide the model as a preview. In most cases previewing does not automatically mean the user will go for the permanent option.
.subdivisionLevel uses Pixar's OpenSubdiv (just as MDLMesh's subdivision does, which I tried as a workaround) to do the actual subdivision and smoothing. It works faster than my own code, but more importantly it doesn't permanently modify the vertex data I provide through an SCNGeometrySource.
The problem is, I can't get it to stop leaking memory. I first noticed this a long time ago and figured it was something in my code. It doesn't seem tied to one specific iOS version, and it happens in both Swift and Objective-C. Eventually I set up a small example, adding just one line to the SceneKit game template in Xcode to set the ship's subdivisionLevel to 1. Instruments shows that this immediately results in memory leaks:
I submitted a bug report to Apple a week ago, but I'm not sure I can expect a reply or a fix anytime soon, or at all. The screenshot is from a test with a very small model, but even with small models (hundreds to a couple of thousand vertices) it leaks a lot and fast, and will lead to the app crashing.
To reproduce, create a new project in Xcode based on the SceneKit game template and add the following lines to handleTap:
if result.node.geometry!.subdivisionLevel == 3 {
    result.node.geometry!.subdivisionLevel = 0
} else {
    result.node.geometry!.subdivisionLevel = 3
}
(Remove the ! for Objective-C.)
Tap the ship to leak megabytes; tap it some more and it quickly adds up.
OpenSubdiv is used in 3ds Max and other packages, so the problem appears to be in Apple's implementation of it. So my question is: is there a way to fix or avoid this problem without giving up on the subdivision features of SceneKit entirely, or is a response from Apple my only chance?
Going through the WWDC videos to get an idea of how committed Apple is to OpenSubdiv (and thus the chance of them fixing the leaks), I found that the subdivision can be performed on the GPU by Metal since the latest SceneKit update.
Here are the two required lines (Swift) if you want to use subdivision in SceneKit or Model I/O:
let tess = SCNGeometryTessellator()
geometry.tessellator = tess
(from WWDC 2017 "What's New in SceneKit", 23:45 into the video)
This causes the subdivision to be performed on the GPU (and thus faster, especially at higher levels), uses less memory, and most importantly, releases the memory when the subdivision level is set lower or back to zero.
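A rough sketch of combining the two in a preview helper (the function name is illustrative; the tessellator path requires a Metal-capable device and a recent SceneKit):
import SceneKit

// Enables GPU-backed subdivision for a geometry before previewing.
// Sketch only; assumes `geometry` already has its sources/elements set up.
func previewSubdivision(of geometry: SCNGeometry, level: Int) {
    // With a tessellator attached, SceneKit subdivides on the GPU via Metal
    // tessellation instead of the CPU OpenSubdiv path that leaks here.
    if geometry.tessellator == nil {
        geometry.tessellator = SCNGeometryTessellator()
    }
    geometry.subdivisionLevel = level
}

// Usage in the game-template repro: previewSubdivision(of: result.node.geometry!, level: 3)
// Setting the level back to 0 then releases the extra memory in this configuration.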

Which is a better option for displaying irregular shapes in Swift?

Let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) a button, have the appropriate body part show like this:
I can achieve this using 2 options:
Use the UIBezierPath class to draw the shapes, but this would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop out the highlighted body parts from the original image and position them over the UIImageView depending on which UIButton is selected. There would be one image per body part, but this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging, and also about how much storage it takes up. I'm not concerned about how much time either approach takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if, in option 2, you just use the alpha channels of the cutout images and make them semi-transparent tinted overlays. CPU/GPU-wise, neither approach would drop you below the maximum screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available on any iOS device released in the last 5 years, unless it's the Apple Watch.
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time a lot easier to implement.
For a quick start on tests, see here.
For performance tests, see here.
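If you go down that route, a performance test is only a few lines. A hedged sketch: the renderer size, the oval path, and the image name "torso_highlight" are placeholders for your actual body-diagram drawing code and assets:
import XCTest
import UIKit

final class HighlightPerformanceTests: XCTestCase {

    // Option 1: build and fill a UIBezierPath overlay each time.
    func testBezierPathHighlightPerformance() {
        measure {
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
            _ = renderer.image { _ in
                let path = UIBezierPath(ovalIn: CGRect(x: 50, y: 50, width: 200, height: 300))
                UIColor.red.withAlphaComponent(0.4).setFill()
                path.fill()
            }
        }
    }

    // Option 2: load a pre-cropped highlight image from the asset catalog.
    func testCutoutImageHighlightPerformance() {
        measure {
            _ = UIImage(named: "torso_highlight")
        }
    }
}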

Memory Usage of SKSpriteNodes

I'm making a tile-based adventure game in iOS. Currently my level data is stored in a 100x100 array. I'm considering two approaches for displaying my level data. The easiest approach would be to make an SKSpriteNode for each tile. However, I'm wondering if an iOS device has enough memory for 10,000 nodes. If not, I can always create and delete nodes from the level data as needed.
I know this is meant to work with Tiled, but the code in there might help you optimize what you are looking to do. I have done my best to optimize it for big maps like the one you are making. The big thing to look at is how you are creating textures; I know that has been a big performance killer in the past.
Swift
https://github.com/SpriteKitAlliance/SKATiledMap
Objective-C
https://github.com/SpriteKitAlliance/SKAToolKit
Both are designed to load in a JSON string too, so there is a chance you could still generate random maps without having to use the Tiled editor, as long as you match the expected format.
Also, you may want to consider looking at how culling works in the Objective-C version, as we found more recently that removing off-screen nodes from the parent has really improved performance on iOS 9.
Hopefully you find some of that helpful and if you have any questions feel free to email me.
Edit
Another option would be to look at object pooling. The core concept is to create only the sprites you need to display, and when you are done with them, store them in a collection of sorts. When you need a new sprite you ask the collection for one, and if it doesn't have one you create a new one.
For example, you need a grass tile, so you ask the pool for one; if it doesn't already have one created and waiting to be reused, it creates a new one. You might do this to fill a 9 x 7 grid that covers your screen. As you move, grass that scrolls off screen gets tossed back into the collection, to be used again when a new row comes in and needs grass. This works really well if all you are doing is displaying tiles; not so great if tiles have dynamic properties that need to be updated and are unique in nature. A minimal sketch of the idea follows the link below.
Here is a great link even if it is for Unity =)
https://unity3d.com/learn/tutorials/modules/beginner/live-training-archive/object-pooling
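Here is a minimal sketch of that pooling idea in SpriteKit terms; the class and method names are illustrative, not from either library above:
import SpriteKit

// A simple tile pool, assuming one texture per tile type.
final class TilePool {
    private var reusable: [String: [SKSpriteNode]] = [:]

    // Returns a recycled sprite for the tile type if one is waiting, otherwise creates a new one.
    func dequeueTile(named textureName: String) -> SKSpriteNode {
        if var pool = reusable[textureName], let tile = pool.popLast() {
            reusable[textureName] = pool
            return tile
        }
        return SKSpriteNode(texture: SKTexture(imageNamed: textureName))
    }

    // Called when a tile scrolls off screen: detach it and keep it for reuse.
    func recycle(_ tile: SKSpriteNode, as textureName: String) {
        tile.removeFromParent()
        reusable[textureName, default: []].append(tile)
    }
}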

Drag, pinch and zoom images in UIView

I am adding multiple UIImageViews to a UIView to perform operations such as dragging, pinching and zooming the images. I have added gesture recognisers to all the UIImageViews. Since I'm adding multiple images (UIImageViews), it has brought down the performance of my app. Does anyone have a better solution for this? Thanks
Adding many images should not, generally, cause enough of a problem that your app would slow down. To illustrate the point with an absurd example, I added 250 (!) image views, each with three gestures, and it works fine on an iPad 3, including animating the images into their final resting place/size/rotation.
Two observations:
Are you doing anything computationally intensive with your image views? For example:
Simply adding shadows with Quartz 2D has a huge performance impact because it's actually quite computationally expensive. In the unlikely event that you're using layer shadows, you can try using shouldRasterize, which can mitigate the problem but not solve it. There are other (kludgy) techniques for doing computationally efficient shadows if that's the problem.
Another surprisingly computationally intensive case is if your images are (for example) PNGs with transparency, or if you have reduced the alpha/opacity of your views.
What is the resolution/size of the images being loaded? If the images are very large, the image view will render them according to the contentMode, but it can be very slow if you're taking large images and scaling them down. You should use screen-resolution images if possible (a downscaling sketch follows this list).
These are just a few examples of things that seem innocuous but are really quite computationally expensive. If you're doing any Quartz embellishments on your image views, I'd suggest temporarily paring them back and seeing if you notice any changes.
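On the image-size point, here's a hedged sketch of pre-scaling a large image down to its display size before handing it to the image view (run it off the main thread if the source images are big):
import UIKit

// Returns a copy of `image` rendered at `targetSize`, so the image view displays
// a small bitmap instead of repeatedly scaling a huge one. Sketch only; caching
// and error handling are left out.
func downscaledImage(_ image: UIImage, to targetSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)  // 0 = use screen scale
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: targetSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}

// Usage: imageView.image = downscaledImage(largeImage, to: imageView.bounds.size)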
In terms of diagnosing the performance problems yourself, I'd suggest watching the following two WWDC videos:
WWDC 2012 - #211 - Building Concurrent User Interfaces on iOS includes a fairly pragmatic demonstration of Instruments to identify the source of performance problems. This video is clearly focused on one particular solution (the moving of computationally expensive processes into the background and implementing a concurrent UI), which may or may not apply in this case, but I like the Instruments demonstration.
WWDC 2012 - #235 - iOS App Performance: Responsiveness is a more focused discussion of how one measures responsiveness in apps and of techniques to address problems. I don't find its Instruments tutorial to be quite as good as the prior video's, but it does go into more detail.
Hopefully this can get you going. If you are still stumped, you should share some relevant code regarding how the views are being added/configured and what the gestures are doing. Perhaps you can also clarify the nature of the performance problem (e.g. is it in the initial rendering, is it a low frame rate while the gestures take place, etc.).
