Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap-creation worker (in a separate process, or perhaps on the GPU) and returning immediately.
Is this possible, or am I barking up the wrong tree?
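For context, the texture setup is roughly along these lines (an illustrative ES 2.0 sketch, not the exact loading code; width, height and pixels are placeholders):

    // Illustrative mipmapped-texture setup.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // upload the base level
    glGenerateMipmap(GL_TEXTURE_2D);                   // build the remaining levels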

Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context, i.e. to use the same context from two threads at once. If you are already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
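For reference, the second context has to be created in the same sharegroup as the first; a minimal sketch, assuming mainContext is your existing rendering context:

    // Sketch: a loader context in the same sharegroup as the main context, so
    // textures created on the background thread are visible to the renderer.
    EAGLContext *loaderContext =
        [[EAGLContext alloc] initWithAPI:[mainContext API]
                              sharegroup:mainContext.sharegroup];

    // On the background thread, before issuing any GL calls:
    [EAGLContext setCurrentContext:loaderContext];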

It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose this would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with the lower-resolution mipmap levels not arriving until later, then you'll see the troublesome areas on the ceiling blend into one another rather than showing the current black-or-correct effect.
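That is, on the texture in question (a sketch; textureName stands for your texture handle):

    // Trilinear filtering: a missing mip level now shows up as a gradual blend
    // rather than an abrupt black-or-correct flip.
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);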
A second guess, based on the output, would be some sort of depth buffer clearing issue.

I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect changed to a fade between correct and black.
I guess that although we all know OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing, but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (Btw, my draw loop is in the foreground and my texture loading loop - because it is so time-consuming and would impair interactivity - is in the background. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
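Roughly, the end of the background loading pass now looks like this (a sketch; the helper and variable names are illustrative, and the loader context is set up in the main context's sharegroup as described above):

    // All GL calls here run on the loader context on the background thread.
    [EAGLContext setCurrentContext:loaderContext];
    for (NSString *path in texturePaths) {
        UploadTextureAndGenerateMipmaps(path);   // hypothetical helper: glTexImage2D + glGenerateMipmap
    }
    // Block until the driver/GPU has actually finished building every mipmap chain,
    // so the shared textures are complete before the foreground draw loop samples them.
    glFinish();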

Related

Screen tearing and camera capture with Metal

To avoid writing to a constant buffer from both the GPU and CPU at the same time, Apple recommends using a triple-buffered system with the help of a semaphore to prevent the CPU getting too far ahead of the GPU (this is fine, and covered in at least three Metal videos at this stage).
However, when the constant resource is an MTLTexture and the AVCaptureVideoDataOutput delegate runs separately from the rendering loop (CADisplayLink), how can a similar triple-buffered system (as used in Apple's sample code MetalVideoCapture) guarantee synchronization? Screen tearing (texture tearing) can be observed if you take the MetalVideoCapture code, simply render a full-screen quad, and change the preset to AVCaptureSessionPresetHigh (at the moment the tearing is obscured by the rotating quad and the low-quality preset).
I realize that the rendering loop and the captureOutput delegate method (in this case) are both on the main thread, and that the semaphore (in the rendering loop) keeps the _constantDataBufferIndex integer in check (which indexes into the MTLTexture for creation and encoding), but screen tearing can still be observed, which is puzzling to me (it would make sense if the GPU wrote the texture not on the next frame after encoding but 2 or 3 frames later, but I don't believe this to be the case). Also, a minor point: shouldn't the rendering loop and captureOutput run at the same frame rate for a buffered texture system, so old frames aren't rendered interleaved with recent ones?
Any thoughts or clarification on this matter would be greatly appreciated. There is another example from McZonk which doesn't use the triple-buffered system, but I also observed tearing with that approach (though less of it). Obviously no tearing is observed if I use waitUntilCompleted (the equivalent of OpenGL's glFinish), but that's like playing an accordion with one arm tied behind your back!
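For reference, the semaphore-gated triple-buffering pattern from the Metal talks looks roughly like this (a sketch with illustrative names, not the MetalVideoCapture code itself; _commandQueue, _inflightSemaphore created with a count of 3, and _constantDataBufferIndex are assumed ivars of a hypothetical renderer):

    - (void)renderToDrawable:(id<CAMetalDrawable>)drawable
    {
        // Don't let the CPU get more than three frames ahead of the GPU.
        dispatch_semaphore_wait(_inflightSemaphore, DISPATCH_TIME_FOREVER);

        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

        // ... write this frame's constants/texture into slot _constantDataBufferIndex,
        //     encode passes that read from that slot ...

        dispatch_semaphore_t semaphore = _inflightSemaphore;   // avoid capturing self in the block
        [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> completed) {
            // The GPU has finished with this frame's slot; the CPU may reuse it.
            dispatch_semaphore_signal(semaphore);
        }];

        _constantDataBufferIndex = (_constantDataBufferIndex + 1) % 3;
        [commandBuffer presentDrawable:drawable];
        [commandBuffer commit];
    }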

glDrawArrays takes long on first call using OpenGL ES on iOS

I'm trying to use multiple GLSL fragment shaders with OpenGL ES on iOS 7 and upwards. The shaders themselves run fine after the first call to glDrawArrays. Nevertheless, the very first call to glDrawArrays after the shaders and their program have been compiled and linked takes ages to complete. Afterwards some pipeline or other seems to have been set up and everything runs smoothly. Any ideas what the cause of this issue is and how to prevent it?
The most likely cause is that your shaders may not be fully compiled until you use them the first time. They might have been translated to some kind of intermediate form when you call glCompileShader(), which would be enough for the driver to provide a compile status and to act as if the shaders had been compiled. But the full compilation and optimization could well be deferred until the first draw call that uses the shader program.
A commonly used technique for games is to render a few frames without actually displaying them while some kind of intro screen is still shown to the user. This prevents the user from seeing stuttering that could otherwise result from all kinds of possible deferred initialization or data loading during the first few frames.
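A minimal sketch of that warm-up, under the assumption that one throwaway draw per program (issued while the intro screen is visible) is enough to force the deferred compilation:

    // Assumes `program` uses attribute 0 and `dummyVBO` already holds at least 3 vertices.
    void WarmUpProgram(GLuint program, GLuint dummyVBO)   // hypothetical helper
    {
        glUseProgram(program);
        glBindBuffer(GL_ARRAY_BUFFER, dummyVBO);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, 3);   // the result is never presented
        glFinish();                          // force the work to happen now
    }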
You could also look into using binary shaders to reduce slowdowns from shader compilation. See glShaderBinary() in the ES 2.0 documentation.
What actually helped speed up the first draw call was the following (which is fine in my use case, since I'm rendering video and so don't need depth testing):
glDisable(GL_DEPTH_TEST);

iOS OpenGL ES - Only draw on request

I'm using OpenGL ES to write a custom UI framework on iOS. The use case is for an application, as in something that won't be updating on a per-frame basis (such as a game). From what I can see so far, it seems that the default behavior of the GLKViewController is to redraw the screen at a rate of about 30fps. It's typical for UI to only redraw itself when necessary to reduce resource usage, and I'd like to not drain extra battery power by utilizing the GPU while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which when set to YES allows the backbuffer to retain what was drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
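For reference, that flag lives in the CAEAGLLayer's drawableProperties dictionary and is set before the framebuffer storage is created, roughly like this (sketch; assumes an EAGL-backed view):

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.view.layer;
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking : @YES,
        kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
    };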
I plan to try both ways, drawing every frame and not, but I'm curious if anyone has encountered this situation. What would you recommend? Is it not as big a deal as I assume it is to redraw everything 30 times per second?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
It's not the backbuffer that retains the image, but a separate buffer, possibly one created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.
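Putting those two suggestions together, an on-demand GLKit setup might look roughly like this (a sketch in a GLKViewController subclass; I haven't verified every interaction between paused and enableSetNeedsDisplay, so treat it as a starting point):

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        self.paused = YES;                    // stop the per-frame redraw loop
        self.resumeOnDidBecomeActive = NO;    // don't restart it on foregrounding
        GLKView *view = (GLKView *)self.view;
        view.enableSetNeedsDisplay = YES;     // allow setNeedsDisplay-driven redraws
    }

    // Then, whenever the UI actually changes:
    // [self.view setNeedsDisplay];           // triggers one glkView:drawInRect: call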

Multithreading OpenGL ES on iOS for games

Currently, I have a fixed-timestep game loop running on a second thread in my game. The OpenGL context is on the same thread, and rendering is done once per frame after any updates, so the main "loop" has to wait for drawing each frame before it can proceed. This wasn't really a problem until I wrote my particle system. Upwards of 1500 particles with a physics step of 16 ms causes the framerate to drop just below 30; any more and it's worse. The particle rendering can't be optimized any further without losing capability, so I decided to try moving OpenGL to a third thread. I know this is somewhat of an extreme case, but I feel it should be able to handle it.
I've thought of running two loops concurrently, one for the main stepping (fixed timestep) and one for drawing (however fast it can go). However, the rendering calls pass in data that may be changed by each update, so I was concerned that locking would slow things down and negate the benefit. After implementing a test to do this, though, I'm just getting EXC_BAD_ACCESS after less than a second of runtime. I assume that's because the two threads are trying to access the same data at the same time? I thought the system handled this automatically?
When I was first learning OpenGL on the iPhone, I had OpenGL setup on the main thread, and would call performSelectorOnMainThread:withObject:waitUntilDone: with the rendering selector, and these errors would happen any time waitUntilDone was false. If it was true, it would happen randomly sometimes, but sometimes I could let it run for 30 mins and it would be fine. Same concept as what's happening now I assume. I am getting the first frame drawn to the screen before it crashes though, so I know something is working.
How would this be properly handled, and if so would it even provide the speed up I'm looking for? Or would multiple access slow it down just as much?
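One common way to make the two-loop idea safe, sketched here with illustrative types rather than the asker's actual engine, is to have the update thread publish a snapshot under a brief lock that the render thread copies out before drawing:

    // Sketch: shared snapshot guarded by a lock; the render thread never reads
    // live simulation data, and the update thread is only blocked for a memcpy.
    typedef struct { GLfloat x, y; } ParticleVertex;    // illustrative layout
    #define kMaxParticles 2048

    static ParticleVertex sShared[kMaxParticles];
    static int            sSharedCount = 0;
    static NSLock        *sLock;                        // created once at startup

    // Update thread, after each fixed-timestep physics step:
    void PublishParticles(const ParticleVertex *latest, int count)
    {
        [sLock lock];
        memcpy(sShared, latest, count * sizeof(ParticleVertex));
        sSharedCount = count;
        [sLock unlock];
    }

    // Render thread, once per frame (assumes the particle VBO is already bound):
    void DrawParticles(void)
    {
        static ParticleVertex frame[kMaxParticles];
        [sLock lock];
        int count = sSharedCount;
        memcpy(frame, sShared, count * sizeof(ParticleVertex));
        [sLock unlock];

        // Build GL commands from the private copy only.
        glBufferData(GL_ARRAY_BUFFER, count * sizeof(ParticleVertex), frame, GL_STREAM_DRAW);
        glDrawArrays(GL_POINTS, 0, count);
    }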

Pushing texture information to OpenGLES for iPad - Implications?

We have a project up and coming that will require us to push texture image information to the EAGLView of an iPad app. Being green to OpenGL in general: are there implications to having a surface wait for texture information? What will OpenGL do while it's waiting for the image data? Does OpenGL require constant updates to its textures, or will it retain the data until we update the texture again? We're not going to have a loop or anything in the view, but more like an observer pattern.
When you upload a texture, you hand it off to the GPU — so a copy is made, in memory you don't have direct access to. It's then available to be drawn with as many times as you want. So there's no need for constant updates.
OpenGL won't do anything else while waiting for the image data; it's a synchronous API. The call to upload the data will take as long as it takes: the texture object will have no graphic associated with it beforehand and will have whatever you uploaded associated with it afterwards.
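So a one-off upload looks roughly like this (sketch; width, height and imageData are whatever your observer delivers):

    // After glTexImage2D returns, OpenGL owns its own copy of the pixels, so the
    // source buffer can be freed and the texture drawn with indefinitely.
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    free(imageData);   // safe: the GL copy is independent of this memory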
In the general case, OpenGL objects, including texture objects, belong to a specific context and contexts belong to a specific thread. However, iOS implements share groups, which allow you to put several contexts into a share group, allowing objects to be shared between them subject to you having to be a tiny bit careful about synchronisation.
iOS provides a specific subclass of CALayer, CAEAGLLayer, that you can use to draw to from OpenGL. It's up to you when you draw and how often. So your approach is the more native one, if anything. A lot of the samples wrap drawing in a continuously firing timer or display link, but that's a convenience of the sample code rather than a requirement.
Obviously try the simplest approach of 'everything on the main thread' first. If you're not doing all that much then it'll likely be fast enough and save you code maintenance. However, uploading can cost more than you expect, since the OpenGL way of working is that you specify the data and the format it's in, leaving OpenGL to rearrange it as necessary for the particular GPU you're on. We're talking delays on the order of 0.3 of a second rather than 30 seconds, but enough that there'll be an obvious pause if the user taps a button or tries to move a slider at the same time.
So if keeping the main thread responsive proves an issue, I'd imagine that you'd want to hop onto a background thread, create a new context within the same share group as the one on the main thread, upload, then hop back to do the actual drawing. In which case it'll remain up to you how you communicate to the user that data has been received and is being processed as distinct from no data having been received yet, if the gap is large enough to justify doing so.
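A sketch of that hop, using GCD and a second context in the main context's sharegroup (names illustrative):

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        EAGLContext *uploadContext =
            [[EAGLContext alloc] initWithAPI:[mainContext API]
                                  sharegroup:mainContext.sharegroup];
        [EAGLContext setCurrentContext:uploadContext];

        GLuint texture = UploadTexture(imageData);   // hypothetical: glTexImage2D etc.
        glFlush();   // ensure the commands are submitted before other contexts use the texture

        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main thread/context: the texture name is now usable for drawing.
            [self textureDidBecomeReady:texture];    // hypothetical callback
        });
    });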
