Significant application performance difference between iOS simulator and iPhone - ios

Problem in a nutshell
I have been building an iOS application in recent weeks and have run into some trouble. The application plays an animation by manipulating and then drawing an image raster multiple times per second. The image is drawn by assigning it to a UIView's CALayer like so: self.layer.contents = (id)pimage.CGImage; The calculation and rendering are separated into two CADisplayLinks.
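For reference, the setup looks roughly like this (a simplified sketch, not my exact code; the selector names and the pimage property are illustrative):
- (void)startAnimation {
    // One display link drives the per-frame computation, a second one
    // pushes the finished raster to the layer.
    CADisplayLink *computeLink = [CADisplayLink displayLinkWithTarget:self
                                                             selector:@selector(computeFrame:)];
    CADisplayLink *renderLink = [CADisplayLink displayLinkWithTarget:self
                                                            selector:@selector(renderFrame:)];
    [computeLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    [renderLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)renderFrame:(CADisplayLink *)link {
    // pimage holds the most recently computed raster.
    self.layer.contents = (id)self.pimage.CGImage;
}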
This animation technique achieves satisfactory performance on the iPhone 6.1 simulator, but when the app is built for the physical device (an iPhone 4S running iOS 6.1.3) it slows down significantly. The slowdown is so bad that it actually makes the application unusable.
Suspected Issues
I have read, in this question Difference of memory organization between iOS device and iPhone simulator, that the simulator is allowed to use far more memory than the actual device. However, while observing my app's memory usage in Instruments, I noticed that the total memory usage never exceeds 3 MB. So I'm unsure whether that is actually the problem, but it's probably worth pointing out.
According to this question, Does the iOS-Simulator use multiple cores?, the iOS simulator runs on an Intel chip while my actual device uses an Apple A5 chip. I suspect that this may also be the cause of the slowdown.
I am considering rewriting the animation in OpenGL, but I'd first like to try to improve the existing code before taking any drastic steps.
Any help in identifying what the problem is would be greatly appreciated.
Update
Thanks to all those who offered suggestions.
I discovered while profiling that the main bottleneck was actually clearing the image raster for the next frame of the animation. I decided to rewrite the rendering of the animations in OpenGL. It didn't take as long as anticipated, and the app now achieves a pretty good level of performance and is a little bit simpler.

This is a classic problem. The simulator is using the resources of your high-powered workstation/laptop.
Unfortunately the only solution is to go back and optimize your code, especially the display code.
Typically, you want to separate the drawing time from the computation time, which it sounds like you are doing, but also make sure you don't compute on the main thread:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
    // Do the computation
});
You can use Instruments while running on the device, so the Core Graphics instrument is available to see what is using all the time and to point you to the offending code. Unfortunately, you probably already know what it is, and it's just going to come down to optimizations.

The slowdown is most likely related to blitting the images. I assume you are using a series of still images that get changed in the display link callback. I believe that if you use CALayers that get added to your primary view/layer (while removing the old one), and which already contain CGImageRefs, you can then use CGContextDrawImage() to blit the image in the layer's drawInContext: method. Set the context to use copy rather than blend, so it just replaces the old bits.
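A rough sketch of the kind of layer subclass I mean (the class name and the frameImage property are just illustrative):
#import <QuartzCore/QuartzCore.h>

@interface FrameLayer : CALayer
// An already-decoded frame, set before the layer is displayed.
@property (nonatomic) CGImageRef frameImage;
@end

@implementation FrameLayer
- (void)drawInContext:(CGContextRef)ctx {
    // Copy replaces the old bits outright instead of compositing over them.
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, self.bounds, self.frameImage);
}
@end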
You can use a dispatch queue to create CALayer subclasses containing an image on a secondary thread, then of course the drawing happens on the main queue. You can use some throttling to maintain a queue of CALayers of 10 or so, and replenishing them as they are consumed.
If this doesn't do it, then OpenGL may help, but again none of this helps with moving bits between the processor and the GPU (since you are using stacks of images, not just animating one).

Related

Fixing or avoiding memory leak in default third party library

I developed an app that includes the ability to preview the subdivision results of a 3D model on the fly. I have my own Catmull-Clark subdivision functions to permanently modify the geometry, but I use the .subdivisionLevel property of SCNGeometry to temporarily subdivide the model as a preview. In most cases previewing does not automatically mean the user will go for the permanent option.
.subdivisionLevel (just like MDLMesh's subdivision, which I tried as a workaround) uses Pixar's OpenSubdiv to do the actual subdivision and smoothing. It works faster than my own implementation, but more importantly it doesn't permanently modify the vertex data I provide through an SCNGeometry source.
The problem is, I can't get it to stop leaking memory. I first noticed this a long time ago and figured it was something in my code. I don't think it's tied to one specific iOS version, and it happens in both Swift and Objective-C. Eventually I set up a small example, adding just one line to the SceneKit game template in Xcode to set the ship's subdivisionLevel to 1. Instruments shows that this immediately results in memory leaks:
I submitted a bug report to Apple a week ago, but I'm not sure I can expect a reply or a fix anytime soon, or at all. The screenshot is from a test with a very small model, but even with small models (hundreds to a couple of thousand vertices) it leaks a lot and fast, and will lead to the app crashing.
To reproduce, create a new project in Xcode based on the SceneKit game template and add the following lines to handleTap:
if result.node.geometry!.subdivisionLevel == 3 {
    result.node.geometry!.subdivisionLevel = 0
} else {
    result.node.geometry!.subdivisionLevel = 3
}
(Remove the ! for Objective-C.)
Tap the ship to leak megabytes, tap it some more and it quickly adds up.
OpenSubdiv is obviously used in 3ds Max as well as other packages, and it appears to be what Apple's implementation uses. So my question is: is there a way to fix/avoid this problem without giving up on the subdivision features of SceneKit entirely, or is a response from Apple my only chance?
Going through the WWDC videos to get an idea of how committed Apple is to OpenSubdiv (and thus the chance of them fixing the leaks), I found that the subdivision can be performed on the GPU by Metal since the latest SceneKit update.
Here are the required two lines (Swift) if you want to use subdivision in SceneKit or Model I/O:
let tess = SCNGeometryTessellator()
geometry.tessellator = tess
(from WWDC 2017, "What's New in SceneKit", 23:45 into the video)
This will cause the subdivision to be performed on the GPU (and thus faster, especially at higher levels), use less memory, and most importantly, release the memory when the subdivision level is set lower or back to zero.

How to animate big images in iOS

I'm looking for a way to animate about 50 images on a Retina iPad, each 2048×1536 in size. I want to animate them on finger movement (changing the image in a UIImageView in sync with touchesMoved events). The images load slowly and the animation freezes. I'd appreciate any solution to this problem. Thanks.
There are a couple of issues that make this situation very hard to deal with. First, the memory usage of 50 full-screen images is very large. For some background on how much memory that actually requires, see the blog post Video and Memory usage on iOS devices. The second issue you have run into is CPU usage. A Retina iPad has multiple CPU cores, but decoding huge PNG images still takes a lot of CPU cycles, and that will prevent the animations from running smoothly. So the only way you will get this to work well is to avoid decoding the image data at runtime and also avoid holding all the decoded data in memory, because that would crash the device. The best solution is to decode all the data ahead of time and simply mmap() it; that makes it possible to blit image data into Core Graphics without actually having to copy it. If you would like to use my library that does all that, it is linked at the bottom of the blog post.
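For a rough sense of scale, the memory cost works out like this (assuming 4 bytes per pixel of decoded BGRA data):
// Back-of-the-envelope memory cost for 50 uncompressed 2048x1536 frames.
size_t bytesPerFrame = 2048 * 1536 * 4;     // 12,582,912 bytes, about 12 MB per frame
size_t allFrames = bytesPerFrame * 50;      // roughly 630 MB of decoded pixel data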

performance issues with air app on iphone 4

Currently I am working on an Air app for iOS and Android. Air 3.5 is targeted.
Performance on iPhone 4 / 4s has been acceptable overall, after a lot of optimising: gpu rendering, StageQuality.LOW, avoiding vectors as much as possible etc. I really put a lot of effort in boosting performance.
Still, every once in a while, the app becomes very slow. There is no precise point in time, action, or combination of actions after which this occurs. Sometimes it doesn't happen for days. But when it does, only killing the app and launching it again helps, because the app stays slow after that. So I am not talking about minor hiccups that go away by themselves.
The problem occurs only on (some) iPhone 4 and 4S devices, not on iPad 3 or 4, iPhone 5, or any Android device.
Has anyone had similar experiences, and any pointers as to where a solution might be found?
What happens when GPU memory fills up? Or device memory? Could this be involved?
Please don't expect an Adobe AIR app to have the same performance as a native app. I am developing apps with Adobe AIR as well.
By the sound of your development experience, I think it has to do with a memory issue, because the performance is not too bad at the beginning but gets worse over time (so you have to kill the app). I suggest looking into memory leaks.
Hopefully my experience can help you.
I had a similar problem where, sometime during gameplay, the framerate would drop from 30 fps to an unrecoverable 12 fps. At first I thought I was running out of GPU memory and it was falling back on rendering with the CPU.
Using Adobe Scout I found that when this occurred, the rendering time was ridiculously high.
Updating to AIR 3.8, I fixed the problem by limiting the number of bitmaps that were being rendered and held in memory at once. I would only create new instances of backgrounds for the appropriate levels, then flag them for garbage collection when the level ended, wait a few seconds, and then move on to the next level.
What might solve your problem is reducing the number of textures you have in memory at one time, only showing the ones you need. If you want to swap out active textures for new ones, set all the objects with that texture data to null:
testMovieClip = null;
and remove all listeners from it so that garbage collection will pick it up.
Next, you can force garbage collection with AIR:
System.gc();
Instantiate the new texture you want to render a few frames after calling gc. Monitor resources with Scout and the iOS companion app to confirm that it's working.
You could also try to detect when the framerate drops, set some objects to null, and then force garbage collection. In my case, if I moved my game to an empty frame for a few seconds with garbage collection, the framerate would recover and the game would resume rendering with the GPU.
Hope this helps!

CGContextDrawLayerAtPoint is slow on iPad 3

I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is: drawRect: takes many times longer to execute on iPad 3 than on iPad 2 (about 0.1 second on iPad 3 versus 0.003 second on iPad 2). That's about 30 times slower.
Basically, I am using some pre-created layers and drawing them in drawRect:. The last call,
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of total time in drawRect:)
What might be slowing things down so much, and how should I fix the cause?
UPDATE:
There are no threads directly involved. I do call setNeedsDisplay: in one thread and drawRect: gets called from another but that's it. The same goes for locks (there are no locks used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forgo renderInContext in favor of custom Core Graphics drawing.
I've also been having problems hitting the dreaded wait_fences errors in my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has four times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the Retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly into the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just weren't made large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This, I think, is what causes the performance problems: that extra copy is time-consuming with so many pixels and slows things down considerably.
To avoid the memcpy, I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view but only displaying part of it (subviews covering it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking it up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the methodology.
Try to keep your drawing simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to plain white (drawn in Core Graphics) and it sped up nicely. Even better would be to not redraw the background at all.
I would like to point out that my theory is conjecture. Apple doesn't really explain exactly what they do. My theory is just based on what they have said, how the iPad responds, and my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of the normal setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use the function CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
As an example, I have a separate class I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to update only the portion of the view that contains the text. This really helped my performance when scrolling.
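A bare-bones sketch of that pattern in a UIView subclass (the method that computes the changed rect is illustrative):
- (void)textDidChangeInRect:(CGRect)changedRect {
    // Invalidate only the part of the view that actually changed.
    [self setNeedsDisplayInRect:changedRect];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Clip so nothing outside the invalidated rect gets redrawn.
    CGContextClipToRect(context, rect);
    // ... draw only the content that intersects rect ...
}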
I ended up using the approach described in Kurt Revis' answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the answer mentioned above for more details about the approach.
In the end my application became even simpler than before and runs at almost identical speed on iPad 2 and iPad 3.

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap creation worker (in a separate process, or perhaps on the GPU) and returning.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply, you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context. If you are already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose this would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with lower-resolution mipmaps not arriving until later, then you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
A second guess, based on the output, would be some sort of depth buffer clearing issue.
I followed Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect changed to a fade between correct and black.
I guess that although we all know OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop runs in the foreground and my texture-loading loop runs in the background, because it is so time-consuming that it would impair interactivity. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
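For reference, the loading sequence now looks roughly like this (simplified; textureName, width, height, and pixelData stand in for my actual loading code):
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
glFinish();   // block here until the mipmap chain has actually been built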
