I noticed a performance issue when using GMSMapView as part of my view hierarchy. Important note: the map doesn't take up the whole screen; it is used as a table view header. The issue affects the behaviour of the table view itself (low FPS), which results in a bad user experience, so I'm trying to resolve it, or at least understand what GMSMapView is doing.
Using Time Profiler I found out that this is caused by the GMSMapView re-rendering on every frame (as far as I can tell), because the heaviest stack trace on the main thread is:
Which is called from:
CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION
(I guess this is just how the internals of Google Maps work: it registers a CADisplayLink with the run loop and re-renders on each frame.)
This results in heavy CPU usage (up to 99%):
Note that in the selected area nothing is really happening in the app; it just displays a static map and a table view (no user interaction, no frame changes, nothing).
If I run the same test without the GMSMapView as a subview, the CPU usage in the same place is almost 0%:
Now to the question: why is this happening, and how can I stop this behaviour?
I found a method on GMSMapView called - (void)stopRendering and tested it; the results are good, and I get the same performance as when I completely remove the map view. However, this method is marked as deprecated, and the documentation states that it will be removed in a future release of the SDK, which makes it a bad candidate for a long-term solution.
Any help, explanation or clues will be appreciated!
The problem with stopRendering is that it places the burden of managing the map state on the developer.
Instead of using stopRendering, you should limit the frame rate of the map. GMSMapView has a property called preferredFrameRate; that is what you should be using. The documentation says that by default preferredFrameRate is set to the maximum, i.e. the map re-renders on every frame.
preferredFrameRate takes a value of the enum GMSFrameRate, which has these values:
kGMSFrameRatePowerSave
kGMSFrameRateConservative
kGMSFrameRateMaximum
You should use kGMSFrameRateConservative to ensure that the map stays fluid during user interaction but doesn't render needlessly. By default, preferredFrameRate is set to kGMSFrameRateMaximum.
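For example (a minimal Swift sketch; the mapView instance and its setup are assumed):

import GoogleMaps

// Drop the map to a lower frame rate when it is effectively static,
// e.g. while it sits in a table view header with no interaction.
mapView.preferredFrameRate = .conservative   // kGMSFrameRateConservative

// Or, for the lowest CPU cost while the map is idle:
// mapView.preferredFrameRate = .powerSave   // kGMSFrameRatePowerSave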
Ever since I started working with preferences (PreferenceKey, ...), I have been receiving this message in the console:
Bound preference _ tried to update multiple times per frame.
After a lot of research, I haven't found any way to silence it. So, since there is no question specifically about this warning yet: what do you think are possible reasons?
And can this warning be ignored, or do I have to fix it?
Thank you so much!
(I have tried to put together an example, but somehow I didn't get any warnings with a simple one...)
The SwiftUI change handlers such as onPreferenceChange are called on an arbitrary thread. So if these changes impact your View, you should re-dispatch to ensure that you make those updates on the main thread:
.onPreferenceChange(MyPreferenceKey.self) { newValue in
    DispatchQueue.main.async {
        widget.value = newValue
    }
}
I think this answer from an Apple engineer describes the general issue:
It sounds like you have a cycle in your updates. For example, a GeometryReader that writes a preference, that causes the containing view to resize, which causes the GeometryReader to write the preference again. It’s important to avoid creating such cycles. Often that can be done by moving the GeometryReader higher in the view hierarchy so that its size doesn’t change and it can communicate size to its subviews instead of using a preference. I’m afraid I can’t give any more specific guidance than that without seeing your code, but hopefully, that helps you track down the issue!
https://www.bigmountainstudio.com/community/public/posts/65727-wwdc-2021-questions-answers-from-slack-the-unofficial-archive
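For illustration, here is a minimal sketch of the cycle-free pattern that quote describes (all type names are made up): the GeometryReader sits at a level whose own size is stable and passes the measured size down to its subviews, instead of reporting it back up through a preference.

import SwiftUI

struct ContentView: View {
    var body: some View {
        GeometryReader { proxy in
            // The size is read here, at a level whose own size does not
            // depend on its subviews, so no update cycle can form.
            DetailView(availableSize: proxy.size)
        }
    }
}

struct DetailView: View {
    let availableSize: CGSize

    var body: some View {
        Text("Width: \(Int(availableSize.width))")
            .frame(width: availableSize.width / 2)
    }
}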
At least it inspired me to solve the related bug in my case and make the warnings go away (almost :)).
I am building a calendar app; based on the month, it sets dateChosen as an initial value. When I was not using the main queue, it went into an infinite loop with messages like onChange(of: String) action tried to update multiple times per frame. Changing to the main queue solved my problem:
DispatchQueue.main.async {
    dateChosen = Date()
}
I have a UICollectionView whose cells each have an AVPlayerLayer, all playing a video at the same time. About 9 cells fit on the screen at a time, and it's pretty laggy, so I need ways to boost performance to achieve smooth scrolling. There are a few things I've tried already:
1) Each cell has only one instance of AVPlayer. It doesn't get re-created; instead, replaceCurrentItemWithPlayerItem: is called when the video URL changes.
2) Because I'm using ReactiveCocoa, it's trivial to skip repeated URLs to avoid playing the same video twice.
What else can I do to speed up scrolling and performance?
First, I want to say it's a bit crazy to see several players in action at the same time: rendering video is a heavy task in any case.
As far as I know, simply scaling/re-framing the video to a smaller size doesn't make it less content to render: if you have a 100x100 image and render it in a 10x10 frame, it still consumes the same amount of memory as the original image would; the same goes for videos. So try to give your video assets a resolution close to that of the frame in which you will present them.
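If your source assets are large, one option is to pre-export a downscaled rendition with AVAssetExportSession. A sketch, assuming the 640x480 preset is close enough to your cell size (preset choice and file handling are up to you):

import AVFoundation

// Export a downscaled copy so each cell decodes a small video instead
// of scaling a large one at render time.
func exportPreview(of asset: AVAsset, to outputURL: URL,
                   completion: @escaping (Bool) -> Void) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPreset640x480) else {
        completion(false)
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}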
Store each cell in an NSCache or NSMapTable, using the NSIndexPath as the key for each; then, whenever you need a cell, fetch it directly from the cache or map table.
Obviously, you'll have to create all the cells up front, but you'll get the same scrolling performance as you do with nothing in the cells at all.
Need sample code?
Here's my latest iteration of a perfectly smooth-scrolling collection view with real-time video previews (up to 16 at a time):
https://youtu.be/7QlaO7WxjGg
It even uses a cover flow custom layout and "reflection" view that mirrors the video preview perfectly. The source code is here:
http://www.mediafire.com/download/ivecygnlhqxwynr/VideoWallCollectionView.zip
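In case the links go stale, here is a small sketch in the spirit of that approach. Note that UICollectionView insists that cells returned from the data source come from dequeueReusableCell(...), so this variant caches the expensive part (the AVPlayer per index path) rather than the cell object itself; VideoCell and playerLayer are illustrative names:

import UIKit
import AVFoundation

final class PlayerCache {
    private let cache = NSCache<NSIndexPath, AVPlayer>()

    // Returns the cached player for this index path, creating it once.
    func player(for indexPath: IndexPath, url: URL) -> AVPlayer {
        if let cached = cache.object(forKey: indexPath as NSIndexPath) {
            return cached
        }
        let player = AVPlayer(url: url)
        cache.setObject(player, forKey: indexPath as NSIndexPath)
        return player
    }
}

// In cellForItemAt, attach the cached player to the dequeued cell:
// cell.playerLayer.player = playerCache.player(for: indexPath,
//                                              url: videoURLs[indexPath.item])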
Unfortunately, UIKit hits bottlenecks and drops frames when put under pressure in this use case. If you can muster refactoring your code to use Facebook's AsyncDisplayKit (now Texture), that would be your best bet.
For Objective-C, this project is a good reference:
https://github.com/TextureGroup/Texture/tree/master/examples/ASDKTube
- (ASCellNode *)tableNode:(ASTableNode *)tableNode nodeForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Each row is backed by a cell node that measures and renders its
    // video content off the main thread.
    VideoModel *videoObject = [_videoFeedData objectAtIndex:indexPath.row];
    VideoContentCell *cellNode = [[VideoContentCell alloc] initWithVideoObject:videoObject];
    return cellNode;
}
There is Swift code available if you dig deep enough in the GitHub repo.
I'm experiencing a memory leak on a static menu scene; it appears to happen on every scene, the game scene itself but also the static menu/game-over scenes. Memory appears to be deallocated correctly (it is reduced when a scene is gone).
Those static scenes don't even have an update callback defined.
Everything is set up in didMoveToView, inside which a couple of SKLabelNodes and an SKSpriteNode are allocated with spriteNodeWithImageNamed:.
I have tried using dealloc to monitor whether the scene gets deallocated correctly, and it appears that it does, so that seems not to be the source of the issue.
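(For reference, that check is just a log in dealloc; the Swift equivalent would be a deinit, e.g.:)

import SpriteKit

class MenuScene: SKScene {
    deinit {
        // If this prints when the scene is dismissed, the scene itself
        // is being released and is not the source of the leak.
        print("MenuScene deinit")
    }
}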
Browsing Google pointed me to some other threads on Stack Overflow saying that
spriteNodeWithImageNamed:
textureWithImageNamed:
may cause:
- memory leaks
- the weird error "CUICatalog: Invalid Request: requesting subtype without specifying idiom"
So I tried creating a UIImage with imageNamed: and then putting it into an SKTexture; that actually removed the CUICatalog error (which, by the way, seems like a spurious message that Apple has never removed; can anyone confirm that?).
But as for the memory leak, this didn't help at all, and since everything in that scene is created once at the beginning, I have no idea why the memory keeps growing by about 0.5 MB per second.
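For clarity, the workaround I tried looks roughly like this (Swift sketch; the asset name is illustrative):

import SpriteKit
import UIKit

// Inside the scene's didMove(to:), load through UIImage and wrap it in
// an SKTexture instead of using spriteNodeWithImageNamed: directly.
if let image = UIImage(named: "background") {   // "background" is illustrative
    let texture = SKTexture(image: image)
    let sprite = SKSpriteNode(texture: texture)
    addChild(sprite)
}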
Looking forward to any tips.
Best regards
Actually, I have found the source of the problem.
It seems that physics debugging causes a huge memory leak:
skView.showsPhysics = YES;
It's not a big problem, since it only happens while debugging with showsPhysics = YES.
But it's good to know anyway.
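A small sketch of the takeaway (Swift): keep the physics overlay behind a compile-time flag so the leaky debug path can never run in release builds.

import SpriteKit

func configureDebugOverlays(for skView: SKView) {
    #if DEBUG
    skView.showsPhysics = true   // enable only while actively debugging
    #else
    skView.showsPhysics = false
    #endif
}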
My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames but not updated after that. I am trying to debug some of the draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.
Unfortunately, I can't seem to find a way to capture the first frame. For example, this option is unavailable when stopped in the debugger (at a breakpoint set before the first frame). Xcode behaviors also don't seem to allow capturing a frame once debugging starts. There also doesn't appear to be an API for performing GPU captures, either in the Metal APIs or on CAMetalLayer.
Has anybody done this successfully?
I've come across this again, and figured it out properly now. I'll add this as a separate answer, since it's a completely different approach from my other answer.
First, some background. There are three components to capturing a GPU frame:
1) Telling Xcode that you want to capture a GPU frame. In typical documented use, you do this manually by clicking the GPU Frame Capture "camera" button in Xcode.
2) Indicating the start of the next frame to capture. Normally, this occurs at the next occurrence of MTLCommandBuffer presentDrawable:, which is invoked to present the framebuffer to the underlying view.
3) Indicating the end of the frame being captured. Normally, this occurs at the next-but-one occurrence of MTLCommandBuffer presentDrawable:.
When capturing the first frame, or activity before the first frame, only the third of these is available, so we need alternate ways to perform the first two items:
To tell Xcode to begin capturing a frame, add a breakpoint in Xcode at a line in your code somewhere before the point at which you want to start capturing a frame. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint:
To indicate the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, you can use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue. Make sure the breakpoint in item 1 will be triggered before the point this code is invoked.
To indicate the end of the captured frame, you can either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or you can add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.
Finally, the MTLCommandQueue insertDebugCaptureBoundary method does not actually cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
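Putting the pieces together, a minimal Swift sketch of this setup (the render loop itself is elided):

import Metal

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

#if DEBUG
// Marks the start of the capture range; everything submitted to the
// queue from here on is included, provided the breakpoint with the
// Capture GPU Frame action fired before this line ran.
commandQueue.insertDebugCaptureBoundary()
#endif

// ... encode and commit the first-frame rendering work here ...

#if DEBUG
// Marks the end of the captured range (alternatively, the first
// presentDrawable: acts as the end boundary).
commandQueue.insertDebugCaptureBoundary()
#endif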
Try...
[myMTLCommandEncoder insertDebugSignpost:@"com.apple.GPUTools.event.debug-frame"];
To be honest, I haven't tried it myself, but it's analogous to the similar
glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame")
documented for OpenGL ES, and there is some mention on the web of it working for Metal.
First, some context: in Metal I usually use the GPU for parallel compute, and then the GPU Capture Frame button is always greyed out. So far I have found two ways that work.
In iOS 11:
you can use [[MTLCaptureManager sharedCaptureManager] startCaptureWithDevice:m_Device] to capture a frame, so you can profile compute shader performance.
Lower than iOS 11 (MTLCaptureManager and MTLCaptureScope are new in iOS 11.0):
you can use a breakpoint and edit its action to add Capture GPU Frame.
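For reference, a minimal Swift sketch of the iOS 11+ programmatic flow (the compute work itself is elided):

import Metal

let device = MTLCreateSystemDefaultDevice()!
let captureManager = MTLCaptureManager.shared()

// Everything committed between these two calls shows up in the capture.
captureManager.startCapture(device: device)

// ... encode and commit the compute work you want to profile ...

captureManager.stopCapture()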
The app uses OpenGL ES 2 and the GLKit framework, and the render/update loop provided by GLKViewController. It used to run at a steady 60 FPS on my iPad 2 with iOS 7.1, but once I updated the iPad 2 to iOS 8.1, the exact same code now fluctuates between 56-59 FPS. (CPU utilization, however, remains at 40-60%, as before.)
Profiling reveals that the OpenGL drawing commands are using a much larger proportion of CPU time than they used to. The biggest change seems to be that calls to "GLKBaseEffect prepareToDraw" are taking much longer than they used to.
(The app uses a single GLKBaseEffect which is reconfigured at various points during the render loop, necessitating a call to prepareToDraw each time. I realise it may be possible to optimize by having multiple instances of GLKBaseEffect, and that is something I was considering for later; however, performance was solid as-is on iOS 7.1.)
I'm now examining the OpenGL ES Analyzer trace in Instruments to determine the OpenGL calls generated by "GLKBaseEffect prepareToDraw", to see if anything seems unusual, and will update the post accordingly once I've managed to figure anything out.
I'd be very grateful for any guidance on how to progress at this point - why might calls to GLKBaseEffect prepareToDraw take longer on iOS8.1?
The cause of the problem was identified by Jim Hillhouse and confirmed by Frogblast in the Apple Dev Forums thread "OpenGL Performance Drops > 50% in iOS 8 GM": setting the text property of a UITextField (or UILabel, in my case) in a view that is a subview of a GLKView causes the GLKView superview to lay out, which in turn causes framebuffers to be deallocated and reallocated. This wasn't happening in iOS 7.
Jim Hillhouse's workaround was to place the subview inside a UIViewController, and embed that in GLKView. I've done the same, using a Container View to hold the view controller, and can confirm that it works.
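For anyone doing this in code rather than a storyboard, here is a rough sketch of the same idea (all names are illustrative): host the label in a child view controller so that updating its text triggers layout only within the child's view, not on the GLKView itself.

import UIKit
import GLKit

func installOverlayLabel(on glkViewController: GLKViewController) -> UILabel {
    let overlayVC = UIViewController()
    overlayVC.view.backgroundColor = .clear
    let label = UILabel(frame: CGRect(x: 16, y: 16, width: 200, height: 24))
    overlayVC.view.addSubview(label)

    // Standard child view controller containment.
    glkViewController.addChild(overlayVC)
    overlayVC.view.frame = glkViewController.view.bounds
    glkViewController.view.addSubview(overlayVC.view)
    overlayVC.didMove(toParent: glkViewController)

    // Setting label.text now avoids re-layout of the GLKView and the
    // framebuffer churn described above.
    return label
}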