glReadPixels directly to a texture - iOS

I render a scene (to the default renderbuffer).
I want to grab a rectangle from this scene and create a texture out of it.
I would like to do this without glReadPixels()ing the data down to the CPU and then uploading it back to the GPU.
Is this possible using OpenGL ES 2.0?
P.S. - I want to use a POT area of the screen, not some strange shape
Pseudocode of my already-working GPU->CPU->GPU implementation:
// Render stuff here
GLubyte *magData = (GLubyte *)malloc(MAGWIDTH * MAGHEIGHT * 3);
// glReadPixels fills a caller-provided buffer; it does not return one
glReadPixels(x, y, MAGWIDTH, MAGHEIGHT, GL_RGB, GL_UNSIGNED_BYTE, magData);
// Bind the already-generated texture object and upload the pixels
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, alias);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAGWIDTH, MAGHEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, magData);
free(magData);

You can use glCopyTexImage2D to copy from the back buffer:
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexImage2D(GL_TEXTURE_2D, level, internalFormat, x, y, width, height, border);
OpenGL ES 2.0 always copies from the back buffer (or front buffer for single-buffered configurations). Using OpenGL ES 3.0, you can specify the source for the copy with:
glReadBuffer(GL_BACK);
In light of ClayMontgomery's answer below (glCopyTexImage2D is slow), you may find that glCopyTexSubImage2D into a correctly sized and formatted texture is faster, because it writes into pre-allocated texture storage instead of allocating a new buffer each time; a sketch of that approach follows. If this is still too slow, try what he suggests and render to a framebuffer (although you'll then also need to draw a quad to the screen using the framebuffer's texture to get the same results).
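A minimal sketch of that idea (textureID, x, y, width and height are placeholders, and the texture's format is assumed to be compatible with the framebuffer's):
// One-time setup: allocate texture storage of the right size and format,
// without uploading any pixel data (NULL).
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Per frame: copy the rectangle straight into the existing storage;
// no reallocation, no round trip through the CPU.
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  // level
                    0, 0,              // destination offset in the texture
                    x, y,              // lower-left corner of the source rect
                    width, height);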

You will find that glCopyTexImage2D() is really slow. The fast way to do what you want is to render directly to the texture as an attachment to an FBO. This can be done with either OpenGL ES 2.0 or 1.1 (with extensions). This article explains the technique in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
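For reference, a minimal ES 2.0 render-to-texture setup along the lines of that article (MAGWIDTH/MAGHEIGHT reused from the question; GL_RGBA chosen as the safest renderable format on iOS):
// Create the texture that will receive the rendering.
GLuint fboTexture, fbo;
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, MAGWIDTH, MAGHEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Attach it to a framebuffer object; everything drawn while this FBO
// is bound lands directly in fboTexture.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTexture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}
// ... render the scene here ...
// Then rebind your on-screen framebuffer (on iOS that is the
// renderbuffer-backed FBO that EAGL gave you, not object 0).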

Related

Texture gets blurred when using different scale

I am using OpenGL ES 2.0 on iOS, and I found that when I add textures at different sizes, some of them get blurred and some disappear. I used mipmaps, but that doesn't fix it.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glGenerateMipmap(GL_TEXTURE_2D);
"some of them get blurred"
Some blur is inevitable.
Mipmapping is simply downscaling: each level is effectively a 4-to-1 averaging filter, which inevitably removes high-frequency information.
You're probably also using bilinear filtering. Pixels don't exactly align with texels, so the hardware grabs the 4 nearest texels and performs a weighted sum of them. Yet more blur ...
It's worth noting that built-in mipmap generation normally uses a very simple box filter for downscaling. You can often fine-tune the sharpness of the result with other downsampling algorithms, but you need to generate the mipmaps manually offline if you want to try that.
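If you do try that, the upload side is simple; a sketch assuming levelPixels[i] holds each level, pre-downscaled offline with the filter of your choice to max(w >> i, 1) x max(h >> i, 1):
// Upload every mipmap level yourself instead of calling glGenerateMipmap.
GLsizei lw = w, lh = h;
for (GLint level = 0; ; ++level) {
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, lw, lh, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, levelPixels[level]);
    if (lw == 1 && lh == 1) break;  // full chain uploaded down to 1x1
    if (lw > 1) lw /= 2;
    if (lh > 1) lh /= 2;
}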
"some disappeared."
The clipping on the edges of the faces is usually a sign that your texture is "too tight" against the edge of the geometry, so the blur is effectively running off the edge of the triangle and being clipped.
Ensure that there is a small ring of transparent texels around the edge of each icon as rendered on screen (even at the smallest mipmap level). If it still clips, make that buffer region larger.

Creating & Updating OpenGLES Texture on iOS

I want to create an OpenGL ES 2.0 texture and update it on every draw call from the CPU side, so that shaders have the correct data when they use the texture. Here is the code that creates the texture:
GLubyte *oglData;

- (GLuint)setupTexture
{
    oglData = (GLubyte *)calloc(256 * 1 * 4, sizeof(GLubyte));
    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oglData);
    return texName;
}
I plan to update oglData (256x1 RGBA pixel data) every time before glDrawArrays(). My doubts:
Does iOS support 256x1 textures? If not, what is the way out?
Is it possible to update oglData multiple times and call glTexImage2D each time to update the texture? Is this repeated copying of data from the CPU to the GPU really efficient, or are there better ways?
Am I on the correct track?
1) I haven't worked with OpenGL ES on iOS specifically, but 256 and 1 are both powers of two, so a 256x1 texture should be legal even without NPOT support; and if a format were not supported by default, some extension could add support for it.
2) For updating textures you should use glTexSubImage2D. The difference is that glTexImage2D reallocates the texture storage, while glTexSubImage2D only overwrites texels inside the existing storage. Whether the repeated CPU-to-GPU copy is efficient enough depends on the texture size and the upload frequency; for a texture as small as 256x1, re-uploading the whole thing each frame is cheap. See the sketch below.
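A sketch of the per-frame update, reusing texName and oglData from the question:
// Per draw call: refresh oglData on the CPU, then overwrite the texels
// of the existing texture; no storage reallocation takes place because
// the size and format match the original glTexImage2D call.
glBindTexture(GL_TEXTURE_2D, texName);
glTexSubImage2D(GL_TEXTURE_2D, 0,  // level
                0, 0,              // x/y offset within the texture
                256, 1,            // region size (here: the whole texture)
                GL_RGBA, GL_UNSIGNED_BYTE, oglData);
// ... then glDrawArrays(...) as usual ...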

Quad Texture weird behaviour OpenGL old GPU

I'm programming a Pascal project (using Lazarus) for school, and I experienced some weird behaviour when the project was executed on one of the school computers.
It ran completely fine on my 2011 laptop with an NVidia 650M. The school computers' drivers are outdated, so OpenGL falls back to Windows GDI, which is basically a software implementation of OpenGL 1.1. So maybe the graphics card is not the cause of the problem.
Also, it may be important to mention that this ONLY happens when using GL_NEAREST for both the mag and min filters. The problem doesn't occur when using GL_LINEAR for MAG and GL_NEAREST for MIN, for example.
This is the code that loads the texture(s):
tex := LoadTGA(filename);
if tex.iType = 2 then
begin
  glGenTextures(1, @Result.textureID);
  glBindTexture(GL_TEXTURE_2D, Result.textureID);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  if tex.bpp = 3 then glFormat := GL_BGR
  else glFormat := GL_BGRA;
  glTexImage2D(GL_TEXTURE_2D, 0, tex.bpp, tex.w, tex.h, 0, glFormat, GL_UNSIGNED_BYTE, tex.data);
  FreeMem(tex.data);
end;
This is the code that renders the quad:
glColor3ub(255, 255, 255);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, q.surface.textureID);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
glTexCoord2f(1, 1); glVertex3f(q.points[1].x, q.points[1].y, q.points[1].z);
glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
glTexCoord2f(0, 0); glVertex3f(q.points[3].x, q.points[3].y, q.points[3].z);
glEnd;
glDisable(GL_TEXTURE_2D);
The texture is 32x32 pixels.
Update:
As it turns out, colored quads exhibit a similar bug as well. In the picture with the colored quads, it looks like GDI clips first and THEN applies the color; it should be the other way around. Now we just have to find out why.
I painted the edges of the clipping triangles in the lower picture, et voila: you can see exactly where the texture is "inconsistent".
Pictures: (screenshots not included here)
It is very hard to guess. The problem is probably related to differences in how texture interpolation is implemented between the two renderers (the Nvidia driver on your laptop vs. the GDI software implementation on the school machines). With GL_LINEAR you use a different interpolation method, so the "seam" is not visible.
Some ideas:
Try filling the quad with a solid color, e.g. yellow, and check whether the separate triangles are still visible then.
Create your quad from two triangles (6 vertices) instead of GL_QUADS (see the sketch after this list).
Do you pass the vertices in counter-clockwise order?
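To make the second suggestion concrete, here is the quad split into two explicit triangles, shown in C for brevity (the GL calls are identical in Pascal; p stands in for q.points):
glBegin(GL_TRIANGLES);
// Triangle 1: corners 0, 1, 2
glTexCoord2f(0, 1); glVertex3f(p[0].x, p[0].y, p[0].z);
glTexCoord2f(1, 1); glVertex3f(p[1].x, p[1].y, p[1].z);
glTexCoord2f(1, 0); glVertex3f(p[2].x, p[2].y, p[2].z);
// Triangle 2: corners 0, 2, 3 (shares the 0-2 diagonal)
glTexCoord2f(0, 1); glVertex3f(p[0].x, p[0].y, p[0].z);
glTexCoord2f(1, 0); glVertex3f(p[2].x, p[2].y, p[2].z);
glTexCoord2f(0, 0); glVertex3f(p[3].x, p[3].y, p[3].z);
glEnd();
This makes the diagonal split explicit, so both renderers decompose the quad the same way instead of choosing their own triangulation.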

OpenGL ES 1.1 modulation and GL_RGB_SCALE

In my application I need to multiply two textures together and then multiply the result by a factor greater than 2. I am using GL_MODULATE and GL_RGB_SCALE for this, with the following code:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Input.texID);
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, Temp64.texID);
glEnable(GL_TEXTURE_2D);
// glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Temp64.wide, Temp64.high, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 4);  // glTexEnvi takes an integer, not 4.0
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
glViewport(0, 0, wide, high);
//glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
According to my understanding of the OpenGL ES 1.1 specification, the modulation between the textures is done first, then the multiplication by 4, and then the result is clamped to [0, 1].
This is what the specification says
"If the value of TEXTURE ENV MODE is COMBINE, the form of the texture func- tion depends on the values of COMBINE RGB and COMBINE ALPHA, according to table 3.17. The RGB and ALPHA results of the texture function are then multi-plied by the values of RGB SCALE and ALPHA SCALE, respectively. The results are clamped to [0, 1]."
But what I observe is different: it first multiplies texture unit 1 by 4 and clamps to [0, 1], and only then modulates with texture unit 0. I want RGB_SCALE to be applied after the modulation.
I tried writing the modulation result to an FBO and then scaling it, but that didn't work either.
What I want is to multiply one image by another and then multiply the result by a value greater than 2, with no clamping until after the second multiplication. Can somebody please help me?
In the past I've had several issues with texture combiners similar to the ones you mention. Modern hardware does not have a fixed-function pipeline, so all of this functionality is implemented through shaders, and some of those implementations have unexpected results.
What's your target platform? My suggestion is to get rid of the texture combiners entirely and move all of this code to shaders (if your target platform allows it).
Also, if I'm reading your code correctly, you need to place glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE) after binding the first texture, so that you can use it when you call glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS).
glEnable(GL_TEXTURE_2D) is also misplaced; it should be called before any other texture-related function.
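If shaders are an option, the whole combiner setup collapses into a few lines of GLSL. A sketch (uTex0, uTex1, uScale and vTexCoord are placeholder names) in which the modulation happens first and the scale is applied afterwards, with clamping only on the final write to gl_FragColor:
// Hypothetical ES 2.0 fragment shader replacing the combiner chain.
const char *fragSrc =
    "precision mediump float;                             \n"
    "uniform sampler2D uTex0;  // e.g. Input.texID        \n"
    "uniform sampler2D uTex1;  // e.g. Temp64.texID       \n"
    "uniform float uScale;     // 4.0, or any factor      \n"
    "varying vec2 vTexCoord;                              \n"
    "void main() {                                        \n"
    "    vec4 a = texture2D(uTex0, vTexCoord);            \n"
    "    vec4 b = texture2D(uTex1, vTexCoord);            \n"
    "    gl_FragColor = a * b * uScale;  // clamped once  \n"
    "}                                                    \n";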

iphone OpenGLES video texture

I know that Apple offers a sample called GLCameraRipple which uses CVOpenGLESTextureCacheCreateTextureFromImage to achieve this. But when I changed it to glTexImage2D, it displays nothing. What's wrong with my code?
if (format == kCVPixelFormatType_32BGRA) {
    CVPixelBufferRef pixelBuf = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuf, 0);
    void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuf);
    glGenTextures(1, &textureID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, baseaddress);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
}
Thank you very much for any help!
There are a couple of problems here. First, the GLCameraRipple example was built to take in YUV camera data, not BGRA. Your above code is only uploading one texture of BGRA data, rather than the separate Y and UV planes expected by the application. It uses a colorspace conversion shader to merge these planes as a first stage, and that needs the YUV data to work.
Second, you are allocating a new texture for each uploaded frame, which is a really bad idea. This is particularly bad if you don't delete that texture when done, because you will chew up resources this way. You should allocate a texture once for each plane you'll upload, then keep that texture around as you upload each video frame, deleting it only when you're done processing video.
You'll either need to rework the above to upload the separate Y and UV planes, or remove / rewrite their color processing shader. If you go the BGRA route, you'll also need to be sure that the camera is now giving you BGRA frames instead of YUV ones.
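If you take the BGRA route, a sketch of the single-texture version (assuming the capture output is configured for kCVPixelFormatType_32BGRA, that the GL_APPLE_texture_format_BGRA8888 extension is available, as on iOS devices, and that bytes-per-row equals width * 4; otherwise the rows need repacking first):
// One-time setup: allocate storage only; no pixel upload yet.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL);
// Per frame: overwrite the texels of the existing texture.
CVPixelBufferLockBaseAddress(pixelBuf, 0);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                CVPixelBufferGetBaseAddress(pixelBuf));
CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
// Call glDeleteTextures(1, &textureID) only when video processing ends.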
