I want to screen capture iOS frames into an AVAssetWriter (or even just a UIImage). The traditional method of using glReadPixels works just fine, but it is very slow. I understand that since iOS 5.0 I can use a different, faster method.
I followed lots of posts, like: OpenGL ES 2d rendering into image,
which mention the use of CVOpenGLESTextureCacheCreate - but I can't get it to work.
Right now, right before every call to presentRenderbuffer:, I'm following Apple's glReadPixels sample (http://developer.apple.com/library/ios/#qa/qa1704/_index.html), which works. When I try to follow the post above - OpenGL ES 2d rendering into image - which basically replaces the glReadPixels call with creating a texture cache, binding it to a render target (a pixel buffer), and then reading from the pixel buffer, it seems to "steal" the images from the screen, so nothing is rendered.
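For reference, this is roughly what my current working-but-slow path looks like (a minimal sketch; captureFrame and the width/height parameters are just placeholder names I'm using here):
#include <OpenGLES/ES2/gl.h>
#include <stdlib.h>

// Called right before presentRenderbuffer:, with the framebuffer that owns the
// colour renderbuffer still bound. The caller owns (and must free) the buffer.
static GLubyte *captureFrame(GLint width, GLint height)
{
    GLubyte *pixels = (GLubyte *) malloc((size_t) width * height * 4);
    if (!pixels) return NULL;
    // This is the slow part: glReadPixels stalls until the GPU has finished rendering.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels; // rows come back bottom-up; flip before feeding AVAssetWriter / UIImage
}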
Can anyone shed some light on how to do this? Also, please mention whether this works only for OpenGL ES 2.0 - I'm looking for a fast alternative that will also work on earlier versions.
A code sample would be excellent.
I've been writing a little planet generator using Haxe + Away3D, and deploying to HTML5/WebGL. But I'm having a strange issue when rendering my clouds. I have the planet mesh, and then the clouds mesh slightly bigger in the same position.
I'm using a perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Now, strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright: wispy and translucent. But the inside is opaque and seems to be rendered differently (each pixel is the same color; only the alpha channel changes).
I realize this likely has something to do with how the code is ultimately compiled/generated in Haxe, but I'm hoping it's something simple, like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, though, I wouldn't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on HTML5.
You should implement it yourself, or you could use a pre-rendered image.
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array<Point> = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well, then, did you upload that image from a ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though it shouldn't on JS. This is fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access a byte array.
Away3D itself might be the cause of this issue too, because the backend code is generated from different source files depending on the target you use.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on html5, but not on native.
If Away3D is the cause of the problem, which part of the code is causing it? I'm not sure for now.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend (I think legacy OpenFL doesn't have this problem): OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge! This happened because my code and OpenFL's sprite renderer were actually using the same OpenGL context. I got rid of this problem by manually disabling OpenFL's sprite renderer.
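If disabling the renderer isn't an option, an alternative (not what I did; just a sketch in C-style GL ES 2.0 calls, and the exact states listed are only examples) is to re-assert the state your own pass depends on at the start of every frame, so it no longer matters what the other renderer changed:
#include <GLES2/gl2.h>

// Re-assert the render state this pass relies on before drawing, in case another
// renderer sharing the same GL context (e.g. a sprite renderer) has changed it.
static void resetMyRenderState(void)
{
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // undo any colorMask change
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
}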
I'm doing image processing with GLES 2.0.
The effects are written as shaders.
For acceleration on iOS, the results are drawn into textures (with a framebuffer bound) that are created by CV (Core Video) functions.
Everything is fine if I use the version without CV acceleration (OpenGL ES 2.0 functions only).
But there is a problem with the CV-accelerated version:
when the input picture is very small (such as 200*200 pixels), many unexpected lines appear after processing with several filters.
It has taken me a long time trying to solve this problem, but it's still there.
glFinish() is called before each function that needs it, so that is not the issue.
Thanks for your help!
Here is the screenshot
At the moment I am using snapshot to do my picking. I change the render code to render out object ids, grab the snapshot, then take the value of the pixel under the user's tap. I think this is quite inefficient though, and I'm getting reports of slowness on some iPads (my mini is fine).
Is it possible to render to the back buffer and use a call to glReadPixels to retrieve only the pixel under the user's tap, without the object ids being rendered to the screen? I am using GLKView for my rendering. I've tried glReadPixels with my current code, and it always seems to return black. I know that the documentation for GLKView recommends only using snapshot, but surely it is more efficient for picking to retrieve only a single pixel.
You are correct, a much better way is to render the object ids to the back buffer and read back a particular pixel (or block of pixels).
(If you're doing a lot of selection, you could even use a second offscreen renderbuffer and generate the object ids every frame in a single render pass.)
But you will have to write your own view code to allocate offscreen render buffers, depth buffers, and whatnot. GLKView is a convenience class, a high level wrapper, and the Apple doco specifically says not to mess with the underlying implementation.
Setting up your own GL render buffers isn't too difficult, and there's example code all over the place. I've used the example code on the Apple dev site and from the OpenGL SuperBible.
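A rough sketch of that offscreen setup, to give the idea (plain GL ES 2.0 calls; GL_RGBA8_OES needs the OES_rgb8_rgba8 extension, which iOS provides, and all names here are mine):
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Render object ids into this FBO, then glReadPixels from it; it never touches the screen.
GLuint pickFBO, pickColorRB, pickDepthRB;

static void setupPickingFBO(GLsizei width, GLsizei height)
{
    glGenFramebuffers(1, &pickFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, pickFBO);

    // Colour renderbuffer that will hold the object ids.
    glGenRenderbuffers(1, &pickColorRB);
    glBindRenderbuffer(GL_RENDERBUFFER, pickColorRB);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, pickColorRB);

    // Depth buffer so occlusion works the same as in the visible pass.
    glGenRenderbuffers(1, &pickDepthRB);
    glBindRenderbuffer(GL_RENDERBUFFER, pickDepthRB);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, pickDepthRB);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error (log, fall back to snapshot, etc.)
    }
}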
Actually, it is quite possible to read from the back buffer, even using GLKView. The documentation states that it is not advised, but after a bit of fiddling I got it to work. The only problem is that glReadPixels can only take GL_RGBA as the format argument (not GL_RGB). As long as you ensure that glClear is called after the picking, you will not get object ids rendered to the screen.
Using snapshot to do the picking on an iPad mini slowed the app down by 50%. Using glReadPixels leads to no noticeable slowdown at all. You could also do this by allocating an extra framebuffer, but I don't think it is necessary.
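For completeness, the read itself is tiny - something like this (tapX/tapY are the tap location converted to framebuffer pixels, with Y flipped because GL's origin is bottom-left; how you decode the id depends on how you encoded it into the colour channels):
#include <OpenGLES/ES2/gl.h>

static GLuint pickObjectID(GLint tapX, GLint tapY)
{
    GLubyte rgba[4] = {0, 0, 0, 0};

    // ... render the object ids into the currently bound framebuffer first ...
    // GL_RGBA / GL_UNSIGNED_BYTE is the combination glReadPixels always accepts on ES 2.0.
    glReadPixels(tapX, tapY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    // Clear so the id pass never reaches the screen, then draw the real frame as usual.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    return (GLuint) rgba[0] | ((GLuint) rgba[1] << 8) | ((GLuint) rgba[2] << 16);
}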
This is my first iPhone/iPad app :S
It is a drawing app and I would like the user to be able to save the current work and later continue drawing on that same image (updating it).
I already did something based on this and this. It uses Quartz.
So I would have to save it in some format that can be read back into memory and displayed on the screen for updating (the user draws another line or erases some before).
The images would be saved on a server and I would like them to be in a format that Android devices can also read in the future (just read, not update).
Also, a lot of transformations are going to be applied to those images after they are finished (scale, projection...). I found that OpenGL ES is great for these transformations --> Open GL ES
So the question is:
should I use Quartz for drawing, since it is simple, and then convert the image somehow to OpenGL, because OpenGL is good for transformations? And in which format should I save the drawing so it can be used later for updating, and so that Android devices can also read it?
To port to Android later, Quartz won't help you; OpenGL is faster, in this case more portable, and great for transformations (and effects, even better with ES 2.0 and shaders).
However, if you haven't used OpenGL yet, it's quite a different journey from Quartz, so maybe read some tutorials on programming in OpenGL first to get a feel for it.
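As for "converting the image to OpenGL": in practice that just means uploading the finished bitmap as a texture and letting OpenGL do the scaling/projection. A minimal sketch, assuming the drawing ends up as RGBA pixel data (the helper name is made up):
#include <OpenGLES/ES1/gl.h>  // ES 1.1 is enough for plain texturing and transforms

// 'pixels' is assumed to be width*height*4 bytes of RGBA data, e.g. rendered by
// Quartz into a bitmap context or decoded from the saved image file.
// Note: on ES 1.1 the texture generally needs power-of-two dimensions.
static GLuint textureFromPixels(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}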
I'm trying to do screen mirroring on the iPad with OpenGL 1.1. I've got to the point of setting up the external window and view. I'm using OpenGL on the first screen, and I've read that I can setup a shared render buffer, but since I'm somewhat of an OpenGL beginner I'm having some trouble getting something up and running that can share a render buffer.
I've got as far as setting up two separate contexts and rendering different things to both, but of course I would like to share the render buffer for the sake of efficiency. The Apple documentation explains how I would set up a share group object and initialize a shared context, but I would also like to know how I would go about setting up and sharing a render buffer so that the external screen can just draw this render buffer to its frame buffer.
The eventual goal is to do the screen mirroring as efficiently as possible, so any advice on the matter would be most appreciated.
I think this topic in the cocos2d forums would be a good read for you! (Scroll down to the last posts.)
Maybe you're not using Cocos2d at all, but the information there is quite valuable, and there's some code too.
Good luck!