What is the right way of dealing with a context restored event?
Do I go ahead and create a new WebGL Rendering Context? What happens to all the cached textures, programs, and shaders that I had?
When you lose the WebGL context, all your textures, programs, shaders, renderbuffers, and buffers are gone.
The webglcontextrestored event is basically just an event telling you that you have a new WebGL context: start over.
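As a rough sketch of the event wiring (shown here via Emscripten's HTML5 bindings for C/C++ WebGL apps; in plain JavaScript the same two events are webglcontextlost and webglcontextrestored on the canvas element, and recreateResources is a hypothetical stand-in for whatever rebuilds your resources):

    // Sketch: handling context loss/restoration via Emscripten's HTML5
    // bindings. "#canvas" is the default canvas selector.
    #include <emscripten/html5.h>

    static EM_BOOL onContextLost(int eventType, const void* reserved, void* userData) {
        // Returning EM_TRUE is the equivalent of event.preventDefault():
        // it tells the browser you want the context back, so the
        // webglcontextrestored event can fire later.
        return EM_TRUE;
    }

    static EM_BOOL onContextRestored(int eventType, const void* reserved, void* userData) {
        // The new context is empty: re-upload every texture, recompile
        // every shader/program, refill every buffer, then resume drawing.
        // recreateResources();  // hypothetical
        return EM_TRUE;
    }

    int main() {
        emscripten_set_webglcontextlost_callback("#canvas", nullptr, EM_TRUE, onContextLost);
        emscripten_set_webglcontextrestored_callback("#canvas", nullptr, EM_TRUE, onContextRestored);
        return 0;
    }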
Do we need to create a new RenderPassDescriptor for each draw, or can we cache it and reuse it?
If not, what about the multisample texture and the depth stencil texture? Do I also need to recreate them on each pass?
Some objects in Metal are designed to be transient and extremely lightweight, while others are more expensive and can last for a long time, perhaps for the lifetime of the app. MTLRenderPassDescriptor is a transient object. Typically, you create a MTLRenderPassDescriptor object once and reuse it each time your app renders a frame.
what about the multisample texture and the depth stencil texture? Do I also need to recreate them on each pass?
MTLTexture objects are not transient; reuse them in performance-sensitive code and avoid creating them repeatedly.
MTLRenderPassDescriptors are generally lightweight, so creating them is not a big performance hit.
MTLRenderCommandEncoders, however, are among the most costly objects to create. In my experience, if you cache an MTLRenderPassDescriptor with some default properties and only update the properties you need to modify, you can save a lot of the cost around creating encoders.
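As a rough sketch of that caching pattern (shown here with metal-cpp, Apple's C++ wrapper for Metal; the Objective-C and Swift versions are structurally identical, and encodeFrame is just an illustrative name):

    #include <Metal/Metal.hpp>

    // Created once and reused; only per-frame fields are touched below.
    static MTL::RenderPassDescriptor* cachedDescriptor = nullptr;

    void encodeFrame(MTL::CommandBuffer* commandBuffer, MTL::Texture* drawableTexture) {
        if (!cachedDescriptor) {
            cachedDescriptor = MTL::RenderPassDescriptor::alloc()->init();
            MTL::RenderPassColorAttachmentDescriptor* color0 =
                cachedDescriptor->colorAttachments()->object(0);
            color0->setLoadAction(MTL::LoadActionClear);
            color0->setStoreAction(MTL::StoreActionStore);
            color0->setClearColor(MTL::ClearColor::Make(0.0, 0.0, 0.0, 1.0));
        }
        // Only the render target changes from frame to frame.
        cachedDescriptor->colorAttachments()->object(0)->setTexture(drawableTexture);

        // The encoder is the expensive part; the descriptor is just state.
        MTL::RenderCommandEncoder* encoder =
            commandBuffer->renderCommandEncoder(cachedDescriptor);
        // ... issue draw calls ...
        encoder->endEncoding();
    }

The same logic answers the second half of the question: the multisample and depth/stencil attachments are MTLTexture objects, so create them once and recreate them only when the drawable size changes, not on every pass.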
I'm trying to render bitmap fonts in DirectX 10 at the moment, and I want to do this as efficiently as possible. I'm having a hard time getting a start on my design because of this question, though.
So should I reuse a single VertexBuffer or make multiple VertexBuffer objects?
Currently I allocate one dynamic VertexBuffer per Quad object in my program. This way I wouldn't have to map/unmap a VertexBuffer if nothing moves on my screen. For fonts I can implement a similar method where I allocate one buffer per text box, or something similar.
After searching I read about reusing a single VertexBuffer for all objects. Vertex caching came up also. What is the advantage/disadvantage of this, and is it faster than my previous method?
Lastly, is there any other method I should look into for rendering many 2D quads on the screen?
Thank you in advance.
Using a single dynamic Vertex Buffer with the proper combinations of DISCARD and NO_OVERWRITE is the best way to handle this kind of dynamic submission. The driver will perform buffer renaming with DISCARD to minimize GPU stalls.
This is the mechanism used by SpriteBatch/SpriteFont and PrimitiveBatch in the DirectX Tool Kit. You can check that source for details, and if really needed you could adapt it to Direct3D 10.x. Of course, moving to Direct3D 11 is probably the better choice.
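A rough sketch of the pattern in Direct3D 10 (the buffer size and the cursor bookkeeping here are illustrative, not taken from DirectX Tool Kit):

    #include <d3d10.h>
    #include <cstring>

    static const UINT kBufferBytes = 65536;   // room for many quads
    static ID3D10Buffer* g_vb = nullptr;
    static UINT g_cursor = 0;                 // next free byte in the buffer

    void CreateQuadBuffer(ID3D10Device* device) {
        D3D10_BUFFER_DESC desc = {};
        desc.ByteWidth = kBufferBytes;
        desc.Usage = D3D10_USAGE_DYNAMIC;
        desc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
        device->CreateBuffer(&desc, nullptr, &g_vb);
    }

    // Append one batch of quad vertices; returns the byte offset to draw from.
    UINT AppendVertices(const void* verts, UINT bytes) {
        // NO_OVERWRITE while there's room (the GPU can keep reading data
        // written earlier); DISCARD when full, so the driver "renames" the
        // buffer behind the scenes instead of stalling the GPU.
        D3D10_MAP mapType = D3D10_MAP_WRITE_NO_OVERWRITE;
        if (g_cursor + bytes > kBufferBytes) {
            mapType = D3D10_MAP_WRITE_DISCARD;
            g_cursor = 0;
        }
        void* dst = nullptr;
        g_vb->Map(mapType, 0, &dst);
        memcpy(static_cast<char*>(dst) + g_cursor, verts, bytes);
        g_vb->Unmap();
        UINT offset = g_cursor;
        g_cursor += bytes;
        return offset;
    }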
Sometimes the term Graphics Context is a little bit abstract. Are they actually system resources, specifically resources from the graphics card, just as a file handle is a system resource from the hard drive or any other permanent storage device?
Just as a file handle has some state about whether it is read-only or read/write, and the current position for the next read operation, a Graphics Context has state about the current stroke color, stroke width, and other relevant data. (Update: and in write mode, we can go to any point in a 200 MB file and change data, just as we have the canvas of the Graphics Context and can draw things on top of it.)
So Graphics Contexts are actually global, system-wide resources. They are not part of the application singleton or anything, just as a file or file handle is not (necessarily) part of the application singleton.
And if there is no powerful graphics card (or the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines operating on bitmaps, instead of letting the graphics card handle it.
Is this how a Graphics Context actually works, on iOS and most other common OSes in general?
I think it's best not to think of a Graphics Context in terms of a specific system resource. As far as I know, the graphics context doesn't correspond to any specific resource any more than any class 'object' does, besides memory of course. Really, the Graphics Context is designed to provide a 'canvas' for the Core Graphics functions to operate on. The truth is, Apple doesn't give us the specific details of how a graphics context works internally. But there are several things we do know about it:
The graphics context is basically a 'state' more than anything else. It holds information such as stroke/fill color, line width, etc. for a particular set of drawing routines.
It doesn't process on the GPU. Instead it processes (does all its drawing) on the CPU and 'passes' the resulting image (some form of bitmap) to the GPU for display/animation (actually it renders the image directly to the GPU's buffers). This is why the 'renderInContext' method doesn't work so well on the new iPad 3. renderInContext gives you the image first, which involves rendering and copying the image. If you wish to then display it, it must be passed back to Core Graphics, which then writes the image back out. On the iPad 3, this involves a lot of memory (depending on the size of the view) and can easily overflow buffers.
The graphics context given to the 'drawRect:' method of a UIView is designed to provide a context that is as efficient as possible. This is why you can't draw anything in a view outside a context, nor can you create your own context for a view to draw in. The actual drawing is handled in the run loop, which is why we use this method to flag a UIView as needing to be drawn: [view setNeedsDisplay].
The graphics contexts for UIViews are drawn on the main thread and, yes, again, processed on the CPU. This does mean overly complex drawings can tie up your main application, but nowadays, with multi-core processors, that's not so much of a problem.
You can create a graphics context, but only to draw to an image (see the sketch after this list). This is exactly the same thing as what a UIView context does, except that it's meant to be used by you rather than drawn to the screen or animated. Since iOS 4, you can process these image contexts on other threads (besides the main thread).
If you're looking to do GPU drawing, I believe the only way to do this on iOS is to use OpenGL. On Mac OS, I think you can actually enable Quartz (Core Graphics... same thing) drawing on the GPU using QuartzGL. But it may not be worth the effort; see this article: Mac QuartzGL (2D drawing on the graphics card) performance
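To make the "draw to an image" point above concrete, here is a minimal sketch using the Core Graphics C API (the size, color, and the DrawLineIntoImage name are illustrative; the same calls work from Objective-C):

    #include <CoreGraphics/CoreGraphics.h>

    CGImageRef DrawLineIntoImage(void) {
        const size_t width = 256, height = 256;
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        // Let Core Graphics allocate the backing bitmap (data == NULL,
        // bytesPerRow == 0 means "compute it for me").
        CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        // The context is mostly *state*: stroke color, line width, etc.
        CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
        CGContextSetLineWidth(ctx, 4.0);
        CGContextMoveToPoint(ctx, 0, 0);
        CGContextAddLineToPoint(ctx, width, height);
        CGContextStrokePath(ctx);

        CGImageRef image = CGBitmapContextCreateImage(ctx);  // snapshot the canvas
        CGContextRelease(ctx);
        return image;  // caller releases
    }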
Update
As you can see in the comments below, the current arrangement Apple has for Quartz drawing is probably the best, especially since views are drawn directly to the GPU buffers. There is a temptation to think that anything visual should be processed on the GPU, but the truth is, GPUs aren't designed for vector drawing. They're designed to handle massive transforms, lighting, texture mapping, etc. By using the CPU to process vector drawing and leaving everything else to the GPU, Apple has split the graphics processing appropriately. Moreover, you're not losing any efficiency in the data transfer between the CPU and GPU, since Quartz draws directly to the GPU's buffer (which avoids that onerous memcpy).
Sometimes the term Graphics Context is a little bit abstract.
Yes, intentionally so. Quartz is meant to be an abstraction, a general-purpose drawing system. It may or may not perform some optimizations with the graphics hardware, internally, but you don't get to have much visibility into that. And the kinds of optimizations it makes may change over time and with different kinds of graphics hardware.
Are they actually system resources, specifically resources from the graphics card
No, absolutely not. Quartz is a software renderer -- it works even when there is no graphics hardware present, and can draw to things like PDFs where the graphics hardware wouldn't be of any use.
Internally, Quartz (and its interfaces with the rest of the OS) may have a few "fast paths" that take advantage of the GPU in some situations. But that's by no means the common case.
Just as a file handle has some state about whether it is read-only or read/write, and the current position for the next read operation, a Graphics Context has state about the current stroke color, stroke width, and other relevant data.
This is correct.
So Graphics Contexts are actually global, system-wide resources.
No. Quartz is just a library that runs code within your app. If you make a new CGContext, only your app is affected -- exactly the same way as if your code created a new instance of one of your own classes.
And if there is no powerful graphics card (or the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines operating on bitmaps, instead of letting the graphics card handle it.
You have the two cases flipped. In general Quartz is working in software, with bitmaps. In a few cases, it may use the GPU to get those bitmaps on the screen faster, if everything is lined up exactly right.
We have a project that is up and coming that will require us to push texture image information to the EAGLView of an iPad app. Being green to OpenGL in general, are there implications to having a surface wait for texture information? What will OpenGL do while it's waiting for the image data? Does OpenGL require constant updates to its textures, or will it retain the data until we update the texture again? We're not going to be having a loop or anything in the view; it'll be more like an observer pattern.
When you upload a texture, you hand it off to the GPU — a copy is made, in memory you don't have direct access to. It's then available to be drawn with as many times as you want, so there's no need for constant updates.
OpenGL won't do anything else while waiting for the image data; it's a synchronous API. The call to upload the data will take as long as it takes; the texture object will have no graphic associated with it beforehand and will have whatever you uploaded associated with it afterwards.
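As a minimal sketch of that one-time upload (OpenGL ES 2.0's C API; the RGBA format and the UploadTexture/UpdateTexture names are illustrative):

    #include <OpenGLES/ES2/gl.h>

    GLuint UploadTexture(const void* pixels, GLsizei width, GLsizei height) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // CLAMP_TO_EDGE is required on ES 2.0 for non-power-of-two sizes.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        // Synchronous: returns once OpenGL has taken its own copy of the data,
        // and the GPU keeps that copy until you replace it.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

    // Later, when new image data arrives (observer pattern), replace the
    // contents in place rather than creating a whole new texture object:
    void UpdateTexture(GLuint tex, const void* pixels, GLsizei width, GLsizei height) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }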
In the general case, OpenGL objects, including texture objects, belong to a specific context and contexts belong to a specific thread. However, iOS implements share groups, which allow you to put several contexts into a share group, allowing objects to be shared between them subject to you having to be a tiny bit careful about synchronisation.
iOS provides a specific subclass of CALayer, CAEAGLLayer, that you can use to draw to from OpenGL. It's up to you when you draw and how often, so your approach is the more native one, if anything. A lot of the samples wrap their drawing in a timer or display-link loop, but that's just a convention, not a requirement.
Obviously, try the simplest approach of 'everything on the main thread' first. If you're not doing all that much, then it'll likely be fast enough and save you code maintenance. However, uploading can cost more than you expect, since the OpenGL way of working is that you specify the data and the format it's in, leaving OpenGL to rearrange it as necessary for the particular GPU you're on. We're talking delays of the 0.3-second variety rather than 30 seconds, but enough that there'll be an obvious pause if the user taps a button or tries to move a slider at the same time.
So if keeping the main thread responsive proves an issue, I'd imagine that you'd want to hop onto a background thread, create a new context within the same share group as the one on the main thread, upload, then hop back to do the actual drawing. In which case it'll remain up to you how you communicate to the user that data has been received and is being processed as distinct from no data having been received yet, if the gap is large enough to justify doing so.
I'm using EasyHook and SlimDX to overlay some graphics using SlimDX's Sprite and Texture classes. When I resize windows, some programs are fine, but others will crash; Winamp's MilkDrop 2, for example, gives me an ambiguous memory error.
I expect this is due to the aftermarket texture I created. The question is: which VTable function should I hook, and how/when do I dispose of and recreate the texture? Reset, perhaps?
If it isn't obvious, I don't know much about DirectX.
edit/ps: I paint the texture inside an EndScene hook, but I haven't created any other hooks yet...
You shouldn't have to recreate the texture at all if it was created in D3DPOOL_MANAGED (the D3DPOOL parameter of IDirect3DDevice9::CreateTexture).
If you absolutely have to use D3DPOOL_DEFAULT and need to kill off lost textures, then the simplest way would be to destroy all "perishable" objects before the call to IDirect3DDevice9::Reset, and restore them after the call, but only if it was successful.
You could also track functions that may return D3DERR_DEVICELOST (there are two of them), but hooking only Reset() will be easier.
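As a hedged sketch of what the hooked Reset() might look like (Reset_Orig, g_overlayTexture, and the texture size are hypothetical stand-ins for your own plumbing; EasyHook supplies the trampoline to the original function):

    #include <d3d9.h>

    typedef HRESULT (WINAPI* Reset_t)(IDirect3DDevice9*, D3DPRESENT_PARAMETERS*);
    static Reset_t Reset_Orig = nullptr;               // trampoline from EasyHook
    static IDirect3DTexture9* g_overlayTexture = nullptr;  // hypothetical overlay

    HRESULT WINAPI Reset_Hook(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS* params) {
        // 1. Kill off everything living in D3DPOOL_DEFAULT first, or the
        //    real Reset will fail.
        if (g_overlayTexture) { g_overlayTexture->Release(); g_overlayTexture = nullptr; }

        // 2. Let the application perform the real reset.
        HRESULT hr = Reset_Orig(device, params);

        // 3. Recreate the "perishable" objects only if the reset succeeded.
        if (SUCCEEDED(hr)) {
            device->CreateTexture(256, 256, 1, 0, D3DFMT_A8R8G8B8,
                                  D3DPOOL_DEFAULT, &g_overlayTexture, nullptr);
        }
        return hr;
    }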