GLKViewDrawableMultisample4X is not working - ios

My iOS application stops rendering when the GLKView drawableMultisample is set to GLKViewDrawableMultisample4X. Everything works fine with GLKViewDrawableMultisampleNone, but if I set it to GLKViewDrawableMultisample4X, I get only a blank pink screen.
I've checked it on the iOS Simulator / iOS 7.0.3.
Does anybody know how to resolve this issue? Might it be related to the iOS Simulator, and work correctly on a real device?

I was having exactly this issue. It's not clear if the cause is the same, but in my case I was triggering my render from a CADisplayLink callback, without any regard for the semantics of setNeedsDisplay, or for how GLKit sets up the render buffer around the execution of drawInRect:.
I was of the mindset that since I was using a display link, I could run all my rendering directly off that trigger, and since it all worked before I tried to set up anti-aliasing, I figured it couldn't be far wrong!
The problem only manifested when I set GLKViewDrawableMultisample4X, much like the OP's problem.
The solution:
Ensure the view is created with enableSetNeedsDisplay = NO.
Have the display link trigger a function that contains nothing more than the following:
- (void)render:(CADisplayLink *)displayLink {
    // This function should *not* perform any rendering.
    // We only want to inform GLKit that we're ready to render.
    GLKView *view = [self.window.subviews objectAtIndex:0];
    // Tell GLKit that we're ready to draw; GLKit will ensure the
    // buffers are set up before calling drawInRect:.
    [view display];
}
Move all rendering into drawInRect:. GLKit will ensure the buffers are set up before drawInRect: is called.
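For reference, a minimal sketch of that setup, assuming a view controller owns the GLKView and acts as its delegate (the property names here are illustrative, not from the original post):
GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds];
glkView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
glkView.drawableMultisample = GLKViewDrawableMultisample4X;
glkView.enableSetNeedsDisplay = NO;   // we drive redraws from the display link
glkView.delegate = self;              // drawInRect: performs the actual rendering
[self.view addSubview:glkView];

CADisplayLink *displayLink =
    [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];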

Related

MapKit iOS rendererForOverlay refreshing out of control

I have a MapKit issue with MKMapView using addOverlay and rendererForOverlay. Testing and debugging are being done on a device (iPhone 7, iOS 11.1.1) with Xcode 9.1 (9B55). The overlay renderer is being refreshed repeatedly for all tiles in the map view (2500 calls per second to drawMapRect:). The calls to the renderer ignore the changed rectangle passed to setNeedsDisplayInMapRect: and are not initiated by setNeedsDisplayInMapRect:. This refreshing continues forever, even after all map updates have finished, with Xcode reporting the app is using over 160% CPU.
(Image: Xcode Debug Navigator showing CPU usage)
The MKMapView code is based on the Apple Sample code 'BreadCrumb' available from https://developer.apple.com/library/content/samplecode/Breadcrumb/Introduction/Intro.html. There are no significant structural changes to this code.
Has anyone else experienced this or have any suggestions of where to start looking for a solution?
Running the Apple Breadcrumb sample did not exhibit the same problem. After bringing the sample code back into my project and re-adding my own changes, I was finally able to isolate the problem to having inserted 'self.alpha = 0.5' into drawMapRect:. It does not matter whether the alpha property is set to 1.0 or some other value; the problem still occurs.
- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    CrumbPath *crumbs = (CrumbPath *)(self.overlay);
    self.alpha = 0.5; // <-------- THE PROBLEM
    ...
With the problem resolved, overlay renderer calls reverted to between 40 and 80 per second, with no calls occurring in the absence of map updates and setNeedsDisplayInMapRect: calls.
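If the overlay needs to be translucent, a safer sketch is to set alpha once when the renderer is created, rather than inside drawMapRect: (the CrumbPathRenderer class name follows the Breadcrumb sample; adjust to your own renderer class):
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView
            rendererForOverlay:(id<MKOverlay>)overlay
{
    CrumbPathRenderer *renderer = [[CrumbPathRenderer alloc] initWithOverlay:overlay];
    renderer.alpha = 0.5; // configure here, not inside drawMapRect:
    return renderer;
}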

'Capture GPU frame' first frame for iOS app

My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames but not updated after that. I am trying to debug some of the draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.
Unfortunately, I can't seem to find a way to capture the first frame. For example, this option is unavailable when stopped in the debugger (at a breakpoint set before the first frame). The Xcode behaviors also don't seem to allow for capturing the frame once debugging starts. There also doesn't appear to be an API for performing GPU captures, either in the Metal APIs or on CAMetalLayer.
Has anybody done this successfully?
I've come across this again, and figured it out properly now. I'll add this as a separate answer, since it's a completely different approach from my other answer.
First, some background. There are three components to capturing a GPU frame:
Telling Xcode that you want to capture a GPU frame. In typical documented use, you do this manually by clicking the GPU Frame Capture "camera" button in Xcode.
Indicating the start of the next frame to capture. Normally, this occurs at the next occurrence of MTLCommandBuffer presentDrawable:, which is invoked to present the framebuffer to the underlying view.
Indicating the end of the frame being captured. Normally, this occurs at the next-but-one occurrence of MTLCommandBuffer presentDrawable:.
In capturing the first frame, or activity before the first frame, only the third of these is available, so we need an alternate way to perform the first two items:
To tell Xcode to begin capturing a frame, add a breakpoint in Xcode at a line in your code somewhere before the point at which you want to start capturing a frame. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint.
To indicate the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, you can use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue. Make sure the breakpoint in item 1 will be triggered before the point this code is invoked.
To indicate the end of the captured frame, you can either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or you can add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.
Finally, the MTLCommandQueue insertDebugCaptureBoundary method does not actually cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
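A sketch combining item 2 with the DEBUG-wrapping suggestion above, assuming commandQueue is your MTLCommandQueue (note that insertDebugCaptureBoundary was later deprecated in favor of MTLCaptureScope):
#if DEBUG
// Mark a capture boundary as soon as the queue exists, so the
// breakpoint-triggered capture starts from the first submitted work.
[commandQueue insertDebugCaptureBoundary];
#endif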
Try...
[myMTLCommandEncoder insertDebugSignpost:@"com.apple.GPUTools.event.debug-frame"];
To be honest, I haven't tried it myself, but it's analogous to the similar
glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame")
documented for OpenGL ES, and there is some mention on the web of it working for Metal.
In my case I use Metal for parallel compute, so the GPU Capture Frame button is always greyed out. There are two approaches I have found that work:
In iOS 11
You can use [[MTLCaptureManager sharedCaptureManager] startCaptureWithDevice:m_Device]; to capture a frame, so you can profile compute shader performance.
Lower than iOS 11 (MTLCaptureManager and MTLCaptureScope are new in iOS 11.0)
You can use a breakpoint, then edit its action to Capture GPU Frame.
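A minimal sketch of the iOS 11 path, assuming device is your MTLDevice and the workload is compute-only:
#if DEBUG
// Programmatically capture everything committed between start and stop.
MTLCaptureManager *captureManager = [MTLCaptureManager sharedCaptureManager];
[captureManager startCaptureWithDevice:device];
#endif

// ... encode and commit the compute command buffers to inspect ...

#if DEBUG
[captureManager stopCapture]; // Xcode presents the capture once it completes
#endif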

OpenGL app's framerate has deteriorated following an upgrade from iOS 7.1 to iOS 8.1

The app uses OpenGL ES 2 and the GLKit framework, and the render/update loop provided by GLKViewController. It used to run at a steady 60 fps on my iPad 2 with iOS 7.1, but once I updated the iPad 2 to iOS 8.1, the exact same code now fluctuates between 56 and 59 fps. (CPU utilization, however, remains at 40-60% as before.)
Profiling reveals that the OpenGL drawing commands are using a much larger proportion of CPU time than they used to. The biggest change seems to be that calls to "GLKBaseEffect prepareToDraw" are taking much longer than they used to.
(The app uses a single GLKBaseEffect which is reconfigured at various points during the render loop, necessitating a call to prepareToDraw each time. I realise it may be possible to optimize by having multiple instances of GLKBaseEffect, and that is something I was considering for later; however, the performance as it was was solid on iOS 7.1.)
I'm now examining the OpenGL ES Analyzer trace in Instruments to determine the OpenGL calls generated by "GLKBaseEffect prepareToDraw", to see if anything seems unusual, and will update the post accordingly once I've managed to figure anything out.
I'd be very grateful for any guidance on how to progress at this point - why might calls to GLKBaseEffect prepareToDraw take longer on iOS8.1?
The cause of the problem was identified by Jim Hillhouse and confirmed by Frogblast on the Apple Dev Forums thread "OpenGL Performance Drops > 50% in iOS 8 GM": setting the text property of a UITextField (or UILabel, in my case) in a view which is a subview of GLKView causes the GLKView superview to perform layout, which in turn causes framebuffers to be deallocated and reallocated. This wasn't happening in iOS 7.
Jim Hillhouse's workaround was to place the subview inside a UIViewController, and embed that in GLKView. I've done the same, using a Container View to hold the view controller, and can confirm that it works.
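A hypothetical sketch of that workaround done programmatically (the original used a Container View in Interface Builder; glkViewController and the label are illustrative):
// Wrap the UILabel in its own view controller so that setting its text
// triggers layout in the child controller's view, not in the GLKView itself.
UIViewController *overlayVC = [[UIViewController alloc] init];
overlayVC.view.backgroundColor = [UIColor clearColor]; // let GL content show through
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(20, 20, 200, 30)];
[overlayVC.view addSubview:label];

[glkViewController addChildViewController:overlayVC];
overlayVC.view.frame = glkViewController.view.bounds;
[glkViewController.view addSubview:overlayVC.view];
[overlayVC didMoveToParentViewController:glkViewController];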

How to force SCNView to render a new frame?

Is there a way to force SCNView to render a new frame on demand if there is no animation inside the scene? If the scene is static, SCNView renders exactly once, and then only after something changes.
Usually this makes sense, but I am working together with the Vuforia augmented reality framework which requires me to render a new frame every time it processed a new video frame from the camera. I worked around this issue by creating my own UIView with a CAEAGLLayer which renders the SceneKit content using an SCNRenderer. This works great, but I am curious if there is a way to do this with SCNView so I can avoid directly touching OpenGL ES.
Update
As of iOS 11.0 and macOS 10.13 the rendersContinuously property on SCNView is the preferred way to force the view to continuously render frames.
Previous answer
You can set its playing property to YES.
You can increment the sceneTime like so:
sceneView.sceneTime += 1
It will then render a single frame.
As a UIView, you can use the same mechanism to request an update as you would for any other UIView: [_sceneView setNeedsDisplay]; However, as pointed out, this shouldn't be used as the primary way to drive updates, since it would not be synced with the display.
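Putting the options together, a sketch for driving redraws from an external source such as a camera callback (scnView is assumed to be your SCNView):
if (@available(iOS 11.0, *)) {
    // Preferred: render every frame regardless of scene changes.
    scnView.rendersContinuously = YES;
} else {
    // Older systems: keep the render loop running...
    scnView.playing = YES;
    // ...or, per processed video frame, force exactly one new frame:
    // scnView.sceneTime += 1;
}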

iOS7 dirty rect drawing weirdness

I have been developing a CAD-like drawing app for iOS, which makes careful use of dirty-rect clipping for a smooth drawing experience; it has worked quite well for months. Now, I know iOS doesn't automatically clip to dirty rects, but Core Graphics will. For reference, here's what the preamble of my main view's drawRect: function looks like (leaving out the actual drawing mechanism):
- (void)drawRect:(CGRect)dirtyRect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    if (!CGRectIsNull(dirtyRect))
    {
        CGContextClipToRect(context, dirtyRect);
    }
    CGContextConcatCTM(context, self.viewTransform);
    // ... drawing here, all done by CG* vector and bitmap functions
    CGContextRestoreGState(context);
}
Now, this code worked beautifully for a long time, iOS6, early iOS7 builds, etc. Smooth as butter. But just recently, I've been seeing a VERY odd set of behaviors, and I'm curious if anybody here has had similar experiences.
First, I trigger redraws the normal way: setNeedsDisplay when I need to redraw the whole view, and setNeedsDisplayInRect, passing a dirty rect for just the small bit that needs redrawing. Weirdly, recently when I call setNeedsDisplayInRect the rect I pass is ignored, and instead drawRect receives a rect matching the view's bounds and not the intended subrect. I only noticed this when testers reported dramatically bad drawing performance compared to previous builds - I've been using the simulator too much recently and hadn't noticed :/
I suspected this might be the result of some kind of multiple-dirty-rect union, so I overrode setNeedsDisplay and setNeedsDisplayInRect, tracing when they got called, but that led nowhere. So then I decided to track my dirty rect manually as an ivar in my rendering view class. Here's where the second oddity shows up: when I clip to the now-correct dirty rect, the contents outside the dirty rect are cleared.
They should NOT be cleared. I have explicitly set self.clearsContextBeforeDrawing = NO, so I don't understand why the behavior I'm seeing is happening, particularly when it wasn't happening in the past.
Now, I'm probably going to do a git-bisect style of debugging to see if I can find a version way back when which didn't have this bug. But I'm hoping somebody here with CoreGraphics experience can lend me some conceptual support. This has me totally baffled.
P.S. I haven't touched the drawing code for my app in a long time, since I've spent the last few months developing the app around the drawing tools. This is why I suspect a change to iOS 7's drawing pipeline may play some kind of role here.
I think you need to include this line in your code, inside the drawRect: method:
[super drawRect:dirtyRect];
For anybody who stumbles across this in the future when searching for help with iOS dirty rects: I solved this, and it was my bug, not Apple's. I spent a morning doing a git bisect to narrow down when the trouble showed up, and found that when I did some refactoring back in early October, I introduced some change notifications to my model which were indirectly causing a setNeedsDisplay that overwrote the setNeedsDisplayInRect my touch-handling code was laboriously computing.
So: not an Apple bug!
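To illustrate the failure mode with a hypothetical minimal reproduction (not the original code): UIKit unions all invalidations issued before the next display pass, so a single full-view invalidation swallows any carefully computed sub-rect.
// Touch handling invalidates just the changed region...
[self setNeedsDisplayInRect:CGRectMake(100, 100, 20, 20)];
// ...but a model-change notification elsewhere invalidates everything:
[self setNeedsDisplay];
// Result: drawRect: runs once with a rect covering the full bounds,
// and the sub-rect optimization is silently lost.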
drawRect:dirtyRect is for the Mac; for iOS you should use drawRect:rect. It should be called via setNeedsDisplay.
