Can somebody tell me whether it is possible to use full-precision (i.e. single-precision) floating-point 2D textures on the iPad 2?
By printing out the implemented OpenGL extensions on the iPad2 using
glGetString(GL_EXTENSIONS)
I figured out that both OES_texture_half_float and OES_texture_float are supported.
However, while using GL_HALF_FLOAT_OES as the texture's type works fine,
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_HALF_FLOAT_OES, NULL);
whereas using GL_FLOAT results in an incomplete framebuffer object.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
Am I doing something wrong here, or are full-precision floating-point textures just not supported?
Thank you in advance.
The OES_texture_float extension provides for 32-bit floating-point textures to be used as inputs, but that doesn't mean you can render into them. The EXT_color_buffer_half_float extension adds the capability on iOS devices (I believe A5 GPUs and higher) to render into 16-bit half-float textures, but not into 32-bit full-float ones.
I don't believe that any of the current iOS devices allow for rendering into full 32-bit float textures, just to use them as inputs when rendering a scene.
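For illustration, here is a minimal, untested sketch of the half-float render target that does work on OpenGL ES 2.0. tex, fbo, w and h are placeholder names, and it assumes both OES_texture_half_float and EXT_color_buffer_half_float are advertised by glGetString(GL_EXTENSIONS):
GLsizei w = 256, h = 256;  /* example size */
GLuint tex, fbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* NEAREST filtering: linear filtering of half-float textures needs a further
   extension (OES_texture_half_float_linear). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* Half-float storage: the widest float format that is also renderable here. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_HALF_FLOAT_OES, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* With GL_FLOAT instead of GL_HALF_FLOAT_OES, this is the check that fails. */
}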
I'm programming a Pascal project (using Lazarus) for school and I experienced some weird behaviour when the project was executed on one of the school computers.
It ran completely fine on my laptop from 2011 with an NVidia 650M. The school computers' drivers are outdated, so OpenGL falls back to Windows GDI, which is basically a software implementation of OpenGL 1.1, so the graphics card itself may not be the cause of the problem.
It may also be important to mention that this ONLY happens when GL_NEAREST is used for both the mag and min filters. The problem doesn't occur when using, for example, GL_LINEAR for MAG and GL_NEAREST for MIN.
This is the code that loads the texture(s):
tex := LoadTGA(filename);
if tex.iType = 2 then  // type 2 = uncompressed true-color TGA
begin
  glGenTextures(1, @Result.textureID);
  glBindTexture(GL_TEXTURE_2D, Result.textureID);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  if tex.bpp = 3 then glFormat := GL_BGR
  else glFormat := GL_BGRA;
  // legacy-style internal format: the component count (3 or 4)
  glTexImage2D(GL_TEXTURE_2D, 0, tex.bpp, tex.w, tex.h, 0, glFormat, GL_UNSIGNED_BYTE, tex.data);
  FreeMem(tex.data);
end;
This is the code that renders the quad:
glColor3ub(255, 255, 255);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, q.surface.textureID);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
glTexCoord2f(1, 1); glVertex3f(q.points[1].x, q.points[1].y, q.points[1].z);
glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
glTexCoord2f(0, 0); glVertex3f(q.points[3].x, q.points[3].y, q.points[3].z);
glEnd;
glDisable(GL_TEXTURE_2D);
The texture is 32x32 pixels.
Update:
As it turns out, colored quads show a similar bug as well. In the picture with the colored quads, it looks like GDI is doing the clipping first and THEN applying the color. It should be the other way around. Now we just have to find out why.
I painted the edges of the clipping triangles in the lower picture, and voilà, you can see exactly where the texture is "inconsistent".
It is very hard to guess. The problem is probably related to differences in how texture interpolation is implemented in the NVidia and ATI drivers. With GL_LINEAR you use a different interpolation method, and the "seam" is not visible.
Some ideas:
Try filling the quad with a solid color, e.g. yellow, and check whether the separate triangles are still visible.
Create your quad from two triangles (6 vertices) instead of GL_QUADS (see the sketch after these suggestions).
Do you pass the vertices in counter-clockwise order?
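For the two-triangle suggestion, here is an untested C-style sketch (the GL calls map one-to-one to the Pascal bindings used above): the same quad split along the 0-2 diagonal into two triangles, keeping the original vertex order and winding.
glBegin(GL_TRIANGLES);
  /* triangle 1: vertices 0, 1, 2 */
  glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
  glTexCoord2f(1, 1); glVertex3f(q.points[1].x, q.points[1].y, q.points[1].z);
  glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
  /* triangle 2: vertices 0, 2, 3 (shares the 0-2 diagonal) */
  glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
  glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
  glTexCoord2f(0, 0); glVertex3f(q.points[3].x, q.points[3].y, q.points[3].z);
glEnd();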
I really need to get an RGB buffer with 8 bits per channel from the GPU.
I need it to pass to a trained convolutional neural network, and it only accepts data in that format.
I can't convert it on the CPU as I'm heavily CPU bound and it's quite slow.
I currently have FBO with a renderbuffer attached, which is defined with:
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES, bufferWidth, bufferHeight);
There are no errors when I bind, define and render to the buffer.
But when I use
glReadPixels(0, 0, bufferWidth, bufferHeight, GL_RGB, GL_UNSIGNED_BYTE, rgbBufferRawName);
it gives an invalid enum error (0x0500). It works just fine when I pass GL_RED_EXT or GL_RGBA and produces correct buffers (I've checked it by uploading those buffers to a texture and rendering them, and they looked correct).
I tried setting glPixelStorei(GL_PACK_ALIGNMENT, 1); but that made no difference.
I'm on iOS 10 and an iPhone 6. I was using ES 2.0, but have now tried switching to ES 3.0 in the hope that it would solve the problem. It did not.
I would really appreciate any help in getting an RGB8 buffer, one way or another.
Thanks.
According to the OpenGL ES 3.0 specification, GL_RGB is not a valid value for format in glReadPixels.
https://www.khronos.org/opengles/sdk/docs/man3/html/glReadPixels.xhtml
You may want to either convert it to RGB after retrieving the GL_RGBA-formatted buffer, or adjust your algorithm to work with RGBA.
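As a rough, untested illustration of the first option (the function name is a placeholder, and it assumes a tightly packed RGBA buffer from glReadPixels, so GL_PACK_ALIGNMENT does not matter): strip the alpha channel on the CPU after the readback. Note that this may well be too slow if you are already CPU-bound, as mentioned above.
#include <stddef.h>

static void rgba_to_rgb(const unsigned char *rgba, unsigned char *rgb,
                        int width, int height)
{
    size_t pixels = (size_t)width * (size_t)height;
    for (size_t i = 0; i < pixels; ++i) {
        rgb[3 * i + 0] = rgba[4 * i + 0];  /* R */
        rgb[3 * i + 1] = rgba[4 * i + 1];  /* G */
        rgb[3 * i + 2] = rgba[4 * i + 2];  /* B */
    }
}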
I was hoping to do the swizzling on the GPU, as converting it afterwards seems to be very slow.
Any examples?
I am using this to record a video of my application.
I currently use
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
and it returns data in RGBA format. I need to convert it to BGRA; how can I do that efficiently?
You know there is a BGRA format, which glReadPixels can deliver:
glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);
And in fact it's recommended you use BGRA for reading pixels, since this is the ordering most GUI systems use internally, thus saving a conversion step.
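On OpenGL ES, GL_RGBA/GL_UNSIGNED_BYTE is the only combination glReadPixels is guaranteed to accept; one additional implementation-chosen combination can be queried at runtime. Here is an untested sketch of that check with a CPU fallback, using the x, y, width, height and data variables from the snippets above (GL_BGRA may be spelled GL_BGRA_EXT in the iOS ES headers):
GLint readFormat = 0, readType = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);

if (readFormat == GL_BGRA && readType == GL_UNSIGNED_BYTE) {
    /* The implementation offers BGRA readback directly. */
    glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);
} else {
    /* Fall back to RGBA and swap R and B in place. */
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    unsigned char *bytes = (unsigned char *)data;
    for (int i = 0; i < width * height * 4; i += 4) {
        unsigned char tmp = bytes[i];
        bytes[i] = bytes[i + 2];
        bytes[i + 2] = tmp;
    }
}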
Using OpenGL on iOS, is it possible to update a small texture (by setting each pixel individually) and then scale it up to fill the screen, at 60 frames per second?
You should be able to update the content of a texture using glTexImage2D.
Untested example:
GLubyte data[32 * 32 * 4]; // 32x32 texels (power of two), 4 bytes per RGBA texel
for (int i = 0; i < (int)sizeof(data); i += 4) {
    // write a red pixel (RGBA)
    data[i]     = 255;
    data[i + 1] = 0;
    data[i + 2] = 0;
    data[i + 3] = 255;
}
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// then simply render a quad with this texture.
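If the texture is re-uploaded every frame, a slightly cheaper variant (again an untested sketch, with the same 32x32 RGBA assumptions and my_texture/data as above) is to allocate the storage once and then overwrite it with glTexSubImage2D:
/* One-time allocation of empty 32x32 RGBA storage. */
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Per frame: overwrite the existing storage instead of reallocating it. */
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32, GL_RGBA, GL_UNSIGNED_BYTE, data);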
In general the answer is yes, it is possible. But it might depend on what you need to draw.
Since you don't provide more details I will describe the general approach:
Bind a texture to a framebuffer (here is a good explanation with code on how to do that; see the "Example 6.10. Initialize() for Supersampling" code example). A minimal sketch of these steps follows the list below.
Now draw what you need in the same way you would draw to the screen (transformations, modelview matrix, etc.). If you need pixel accuracy (to modify each and every pixel), consider using an orthographic projection; whether that is possible depends on what you need to draw. All this drawing is performed on your texture, which achieves the "update the texture" part.
Bind the normal framebuffer that you use, to draw on the screen. Draw a rectangle (possibly using orthographic projection again) that uses the texture from the previous step. You can scale this rectangle to fill the screen.
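Here is an untested ES-style sketch of these steps, assuming a 64x64 offscreen texture and placeholder names (smallTex, fbo, onScreenFramebuffer, screenWidth, screenHeight):
GLuint smallTex, fbo;

/* Step 1: a small texture with its own framebuffer. */
glGenTextures(1, &smallTex);
glBindTexture(GL_TEXTURE_2D, smallTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, smallTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}

/* Step 2: render into the 64x64 texture. */
glViewport(0, 0, 64, 64);
/* ... draw calls that update the texture contents ... */

/* Step 3: switch back to the framebuffer you normally draw to (on iOS this is
   typically the app-created framebuffer backed by the screen's renderbuffer)
   and draw a screen-filling quad sampled from smallTex. */
glBindFramebuffer(GL_FRAMEBUFFER, onScreenFramebuffer);
glViewport(0, 0, screenWidth, screenHeight);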
Whether the above approach can achieve 60 fps depends on your target device and the scene you need to render.
Hope that helps
I'm writing a media player framework for Apple TV, using OpenGL ES and ffmpeg.
Conversion to RGBA is required for rendering with OpenGL ES, and software conversion using swscale is unbearably slow, so based on information found on the internet I came up with two ideas: using NEON (like here) or using fragment shaders with GL_LUMINANCE and GL_LUMINANCE_ALPHA textures.
As I know almost nothing about OpenGL, the second option still doesn't work :)
Can you give me any pointers on how to proceed?
Thank you in advance.
It is most definitely worthwhile learning OpenGL ES 2.0 shaders:
You can load-balance between the GPU and CPU (e.g. decoding subsequent video frames on the CPU while the GPU renders the current frame).
Video frames need to go to the GPU in any case: using YCbCr saves you 25% bus bandwidth if your video has 4:2:0 sampled chrominance.
You get 4:2:0 chrominance up-sampling for free, with the GPU hardware interpolator. (Your shader should be configured to use the same vertex coordinates for both Y and C{b,r} textures, in effect stretching the chrominance texture out over the same area.)
On iOS 5, pushing YCbCr textures to the GPU is fast (no data copy or swizzling) with the texture cache (see the CVOpenGLESTextureCache* API functions). You will save one or two data copies compared to the NEON approach.
I am using these techniques to great effect in my super-fast iPhone camera app, SnappyCam.
You are on the right track for the implementation: use a GL_LUMINANCE texture for Y, and a GL_LUMINANCE_ALPHA texture if your CbCr is interleaved. Otherwise, if your YCbCr components are fully planar (non-interleaved), use three GL_LUMINANCE textures.
Creating two textures for 4:2:0 bi-planar YCbCr (where CbCr is interleaved) is straightforward:
glBindTexture(GL_TEXTURE_2D, texture_y);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_LUMINANCE, // Texture format (8bit)
width,
height,
0, // No border
GL_LUMINANCE, // Source format (8bit)
GL_UNSIGNED_BYTE, // Source data format
NULL
);
glBindTexture(GL_TEXTURE_2D, texture_cbcr);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_LUMINANCE_ALPHA, // Texture format (16-bit)
width / 2,
height / 2,
0, // No border
GL_LUMINANCE_ALPHA, // Source format (16-bits)
GL_UNSIGNED_BYTE, // Source data format
NULL
);
where you would then use glTexSubImage2D() or the iOS 5 texture cache to update these textures.
I'd also recommend using a 2D varying that spans the texture coordinate space (x: [0,1], y: [0,1]) so that you avoid any dependent texture reads in your fragment shader. The end result is super-fast and doesn't load the GPU at all in my experience.
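For reference, here is an untested sketch of a matching fragment shader (GLSL ES, stored as a C string; the uniform and varying names are placeholders). It samples Y from the luminance texture, Cb/Cr from the luminance-alpha texture (matching the byte order of bi-planar 4:2:0 data), and applies a full-range BT.601 conversion; video-range sources additionally need the 16..235 scaling of Y.
static const char *kYCbCrFragmentShader =
    "precision mediump float;\n"
    "varying highp vec2 v_texCoord;\n"
    "uniform sampler2D u_textureY;    /* GL_LUMINANCE */\n"
    "uniform sampler2D u_textureCbCr; /* GL_LUMINANCE_ALPHA */\n"
    "void main()\n"
    "{\n"
    "    float y  = texture2D(u_textureY,    v_texCoord).r;\n"
    "    float cb = texture2D(u_textureCbCr, v_texCoord).r - 0.5;\n"
    "    float cr = texture2D(u_textureCbCr, v_texCoord).a - 0.5;\n"
    "    gl_FragColor = vec4(y + 1.402 * cr,\n"
    "                        y - 0.344 * cb - 0.714 * cr,\n"
    "                        y + 1.772 * cb,\n"
    "                        1.0);\n"
    "}\n";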
Converting YUV to RGB using NEON is very slow. Use a shader to offload onto the GPU.