Reading through the Apple docs for GLKView, it looks like GLES rendering is single buffered when using GLKView. That is, GLKView creates one standard FBO, plus an MSAA FBO if requested. Is that it? No double buffering when using GLKView?
Now, if this is true and GLKView is not double buffered, can I do the default FBO setup manually using CAEAGLLayer? In that case I could set up as many FBOs as I want and swap between them when blitting to the screen. Does that make sense?
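By setting things up manually I mean roughly the following (a sketch only, assuming an ES 2.0 EAGLContext named context and a view whose layer is a CAEAGLLayer; the variable names are placeholders):

// Manual framebuffer setup against a CAEAGLLayer (sketch, not drop-in code).
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
GLuint framebuffer, colorRenderbuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// The color renderbuffer gets its storage from the layer itself.
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
// ... draw ...
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];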
Is the
[context presentRenderbuffer:GL_RENDERBUFFER];
call even asynchronous?
Is double buffering for GL encouraged on mobile platforms (iOS in this case) from a performance point of view?
The questions above may seem trivial, but I can't find any answers in the official docs.
I'm trying to render bitmap fonts in DirectX 10 at the moment, and I want to do this as efficiently as possible. I'm having a hard time getting started on my design because of this question, though.
So should I reuse a single VertexBuffer or make multiple VertexBuffer objects?
Currently I allocate one dynamic VertexBuffer per Quad object in my program. This way I wouldn't have to map/unmap a VertexBuffer if nothing moves on my screen. For fonts I can implement a similar method on where I allocate one buffer per text box, or something similar.
After searching I read about reusing a single VertexBuffer for all objects. Vertex caching came up also. What is the advantage/disadvantage of this, and is it faster than my previous method?
Lastly, is there any other method I should look into for rendering many 2D quads on the screen?
Thank you in advance.
Using a single dynamic Vertex Buffer with the proper combinations of DISCARD and NO_OVERWRITE is the best way to handle this kind of dynamic submission. The driver will perform buffer renaming with DISCARD to minimize GPU stalls.
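As a rough sketch of what that submission loop can look like in Direct3D 10 (untested; the Vertex layout, the capacity/cursor bookkeeping, and the function name are placeholders of mine):

#include <d3d10.h>
#include <cstring>

struct Vertex { float x, y, z; float u, v; };  // placeholder layout

// vb must be created with D3D10_USAGE_DYNAMIC and D3D10_CPU_ACCESS_WRITE,
// sized for at least a frame's worth of quads. cursor tracks how many
// vertices have been written so far.
void AppendVertices(ID3D10Buffer* vb, const Vertex* src, UINT count,
                    UINT& cursor, UINT capacity)
{
    D3D10_MAP mapType = D3D10_MAP_WRITE_NO_OVERWRITE;   // append behind the GPU
    if (cursor + count > capacity)
    {
        cursor = 0;                                      // wrap to the start
        mapType = D3D10_MAP_WRITE_DISCARD;               // driver renames the buffer
    }

    void* data = nullptr;
    vb->Map(mapType, 0, &data);
    std::memcpy(static_cast<Vertex*>(data) + cursor, src, count * sizeof(Vertex));
    vb->Unmap();

    // Issue the draw for the range [cursor, cursor + count), then advance.
    cursor += count;
}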
This is the mechanism used by SpriteBatch/SpriteFont and PrimitiveBatch in the DirectX Tool Kit. You can check that source for details, and if really needed you could adapt it to Direct3D 10.x. Of course, moving to Direct3D 11 is probably the better choice.
At the moment I am using snapshot to do my picking. I change the render code to render out object ids, grab the snapshot, then take the value of the pixel under the user's tap. I think this is quite inefficient though, and I'm getting reports of slowness on some iPads (my mini is fine).
Is it possible to render to the backbuffer and use a call to glReadPixels to retrieve only the pixel under the user's tap, without the object ids being rendered to the screen? I am using GLKView for my rendering. I've tried glReadPixels with my current code, and it always seems to return black. I know that the documentation for GLKView recommends only using snapshot, but surely it is more efficient for picking to only retrieve a single pixel.
You are correct, a much better way is to render the object ids to the back buffer and read back a particular pixel (or block of pixels).
(If you're doing a lot of selection, you could even use a second offscreen renderbuffer and generate the object ids every frame in a single render pass.)
But you will have to write your own view code to allocate offscreen render buffers, depth buffers, and whatnot. GLKView is a convenience class, a high level wrapper, and the Apple doco specifically says not to mess with the underlying implementation.
Setting up your own GL render buffers isn't too difficult, and there's example code all over the place. I've used the example code on the Apple dev site and from the OpenGL SuperBible.
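For instance, an offscreen target for the id pass is only a handful of calls. A rough ES 2.0 sketch (width and height are the drawable size; GL_RGBA8_OES assumes the OES_rgb8_rgba8 extension, which iOS devices support):

GLuint pickFramebuffer, pickColor, pickDepth;
glGenFramebuffers(1, &pickFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, pickFramebuffer);

glGenRenderbuffers(1, &pickColor);
glBindRenderbuffer(GL_RENDERBUFFER, pickColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, pickColor);

glGenRenderbuffers(1, &pickDepth);
glBindRenderbuffer(GL_RENDERBUFFER, pickDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, pickDepth);

// Render the object ids into this framebuffer, then glReadPixels from it.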
Actually it is quite possible to read from the backbuffer, even using GLKView. The documentation states that it is not advised, but after a bit of fiddling I got it to work. The only problem was that glReadPixels can only take GL_RGBA as an argument (not GL_RGB). As long as you ensure that glClear is called after the picking, you will not get object ids rendered to the screen.
Using snapshot to do the picking on an iPad mini slowed down the app by 50%. Using glReadPixels leads to no noticeable slowdown at all. You could do this by allocating an extra framebuffer, but I don't think it is necessary.
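The read itself is a single call. A sketch of the idea (tapPoint, the contentScaleFactor handling, and the id packing are illustrative assumptions, not exactly my code):

// After rendering the object ids, before presenting or clearing:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);

// GL's origin is the lower-left corner, and the drawable is scaled on
// Retina displays, so flip and scale the tap location.
CGFloat scale = self.view.contentScaleFactor;
GLint x = (GLint)(tapPoint.x * scale);
GLint y = viewport[3] - (GLint)(tapPoint.y * scale) - 1;

GLubyte pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
GLuint objectId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);

// Clear and draw the real frame before presentRenderbuffer: so the id
// pass never reaches the screen.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);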
I want to screen capture iOS frames into an AVAssetWriter (or even just a UIImage). The traditional method of using glReadPixels works just fine, but it is very slow. I understand that since iOS 5.0 I can use a different, faster method.
I followed lots of posts, like: OpenGL ES 2d rendering into image
which mention the use of CVOpenGLESTextureCacheCreate - but can't get it to work.
Right now, right before every call to presentRenderbuffer:, I'm following Apple's glReadPixels sample (http://developer.apple.com/library/ios/#qa/qa1704/_index.html), which works. When I try to follow that post - OpenGL ES 2d rendering into image - to get the image, which basically replaces the glReadPixels call with creating a cached texture, binding it to a render target (a pixel buffer), and then reading from the pixel buffer, it seems to "steal" the images from the screen, so nothing is rendered.
Can anyone shed some light on how to do this? Also, please mention if this only works for OpenGL ES 2.0 - I am looking for a fast alternative that will also work on previous versions.
A code sample would be excellent.
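For reference, my reading of the texture-cache approach from that post is roughly the following (a sketch only, untested, and iOS 5.0+ only; the variable names, the BGRA format choice, and the captureFBO attachment step are assumptions of mine, and GL_BGRA may be spelled GL_BGRA_EXT depending on the header):

// One-time setup: an IOSurface-backed pixel buffer plus a texture that
// aliases its memory through the texture cache.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA, width, height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Per frame: render into an FBO whose color attachment is that texture,
// then hand pixelBuffer to an AVAssetWriterInputPixelBufferAdaptor.
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);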
This is my first iPhone/iPad app :S
It is a drawing app and I would like the user to be able to save the current work and then continue drawing on that same image later (updating it).
I already did something based on this and this. This uses Quartz.
So I would have to save it in some format that can be read back into memory and displayed on the screen for updating (the user draws another line or erases some of what was drawn before).
The images would be saved on a server, and I would like them to be in a format that Android devices can also read in the future (just read it, not update it).
Also, a lot of transformations are going to be applied to those images after the drawing is finished (scale, projection...). I found that OpenGL ES is great for these transformations --> Open GL ES
So the question is,
should I use Quartz for drawing since it is simple, and then somehow convert the image to OpenGL because OpenGL is good for transformations? And in which format should I save the drawing so that it can be used later for updating and so that Android devices can also read it?
For porting to Android later, Quartz won't help you; OpenGL is faster, in this case more portable, and great for transformations (and effects, even better with ES 2.0 and shaders).
However, if you haven't used OpenGL yet, it's quite a different journey from Quartz, so maybe read some tutorials on OpenGL programming first to get a feel for it.
I'm trying to do screen mirroring on the iPad with OpenGL 1.1. I've got to the point of setting up the external window and view. I'm using OpenGL on the first screen, and I've read that I can setup a shared render buffer, but since I'm somewhat of an OpenGL beginner I'm having some trouble getting something up and running that can share a render buffer.
I've got as far as setting up two separate contexts and rendering different things to both, but of course I would like to share the render buffer for the sake of efficiency. The Apple documentation explains how I would set up a sharegroup object and initialize a shared context, but I would also like to know how I would go about setting up and sharing a render buffer so that the external screen can just draw this render buffer to its framebuffer.
The eventual goal is to do the screen mirroring as efficiently as possible, so any advice on the matter would be most appreciated.
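For what it's worth, the sharegroup part of the setup is only a couple of lines (a sketch assuming ES 1.1 to match the above; resources such as textures and renderbuffers created in one context are then visible to the other):

EAGLContext *mainContext =
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];

// The second context (driving the external screen) joins the first
// context's sharegroup instead of creating its own.
EAGLContext *externalContext =
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1
                           sharegroup:mainContext.sharegroup];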
I think this topic in the cocos2d forums would be a good read for you! (Scroll down to the last posts.)
Maybe you're not using Cocos2d at all, but the information there is quite valuable, and there's some code too.
Good luck!