Please help me understand mipmapping.
When I use mipmaps in my program, the images show artifacts when they are scaled down. When creating the texture I use trilinear filtering:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I load a 128x128 texture and set the mipmap level count:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 10);
I use a VBO for rendering. In the fragment shader I sample the texture with texture2D. When I draw the square at 128x128 the image has no defects, but when I set the size to 85px I see deformations:
Then at 64px the image again has no defects, while at 50px it is deformed:
When I use textureLod I can get good quality for square sizes from 64px to 128px with lod = 0.
If I reduce the square to 50px, the texture has defects with lod = 0:
and with lod = 1 or 2:
I don't understand what level of detail I need in order to draw the texture correctly on a square smaller than 64px when the texture size is 128px. And I don't understand why texture2D does not draw the texture correctly on an 85px square.
I have tried fractional lod values and the bias parameter of texture2D, but I could not get a good result.
Please help if you have faced a similar problem.
Test program (7z-archive)
Test program with source code (7z-archive)
I am using OpenGL ES 2.0 on iOS, and I found that when I add textures of different sizes, some of them get blurred and some disappeared. I used mipmaps but it doesn't work.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glGenerateMipmap(GL_TEXTURE_2D);
some of them get blurred
Some blur is inevitable.
Mipmapping is simply downscaling, which is effectively a 4-to-1 averaging filter, and that will inevitably reduce high-frequency information.
You're probably also using bilinear filtering. Pixels don't exactly align with texels, so you grab the 4 nearest texels and perform a weighted sum on those. Yet more blur ...
It's worth noting that the built-in mipmap generation normally uses a very simple box filter to downscale the mipmaps. You can often fine tune the sharpness of the result with other downsampling algorithms, but you need to generate the mipmaps manually offline if you want to try that.
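As an illustration, here is a minimal sketch of uploading pre-generated levels yourself instead of calling glGenerateMipmap; the levelData array is a placeholder for images you downscaled offline with whatever higher-quality filter you prefer (e.g. Lanczos):
// Upload each mipmap level of a 128x128 RGBA texture manually.
// levelData[i] is assumed to hold the i-th level (128, 64, 32, ..., 1),
// produced offline by your own downsampling tool.
GLsizei size = 128;
for (GLint level = 0; size >= 1; ++level, size /= 2) {
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, levelData[level]);
}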
some disappeared.
The clipping on the edges of the faces is usually a sign that your texture is "too tight" to the edge of the geometry, so the blur effect is effectively hitting the edge of the triangle and clipping.
Ensure that you have a small ring of transparent texels around the edge of the on-screen rendering of each icon (even when rendered at the smallest mipmap level). If it is still clipping make that buffer region larger.
I'm doing a color lookup using a texture to apply an effect to a picture. My lookup is a gradient map using the luminance of the fragment of the first texture, then looking that up on a second texture. The 2nd texture is 256x256 with gradients going horizontally and several different gradients top to bottom. So 32 horizontal stripes each 8 pixels tall. My lookup on the x is the luminance, on the y it's a gradient and I target the center of the stripe to avoid crossover.
My fragment shader looks like this:
lowp vec4 source = texture2D(u_textureSampler, v_fragmentTexCoord0);
float luminance = 1.0 - dot(source.rgb, W);
lowp vec2 texPos;
texPos.x = clamp(luminance, 0.0, 1.0);
// the y value selects which gradient to use by supplying a T value
// this would be more efficient in the vertex shader
texPos.y = clamp(u_value4, 0.0, 1.0);
lowp vec4 newColor1 = texture2D(u_textureSampler2, texPos);
It works well, but I was getting distortion in the whitest parts of the whites and the blackest parts of the blacks. Basically it looked like it grabbed newColor1 from a completely different place on texture2, or possibly got nothing at all for those fragments. I added the clamps in the shader to try to keep it from reading outside the edge of the lookup texture, but that didn't help. Am I not using clamp correctly?
Finally I considered that it might have something to do with my source texture or the way it's loaded. I ended up fixing it by adding:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
So.. WHY?
It's a little annoying to have to clamp the textures because it means I have to write an exception in my code when I'm loading lookup tables..
If my texPos.x and .y are clamped to 0-1.. how is it pulling a sample beyond the edge?
Also.. do I have to use the above clamp call when creating the texture or can I call it when I'm about to use the texture?
This is the correct behavior of the texture sampler.
Let me explain. When you sample a texture with GL_LINEAR, the GPU returns an average of the nearest texels (that's why you don't see the pixelation you get with GL_NEAREST; texels are blended instead).
With GL_REPEAT, texture coordinates wrap around from 1 back to 0, and the sample is blended with nearby texels, so at the extreme coordinates it blends with texels from the opposite side of the texture. GL_CLAMP_TO_EDGE prevents this wrapping, so edge texels are never blended with texels from the opposite side.
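For illustration (the lookupTexture handle is hypothetical), the wrap mode is per-texture state, so you can set it once when the lookup texture is created, or at any later point while that texture is bound:
// Wrap mode is stored with the texture object, not with the draw call.
glBindTexture(GL_TEXTURE_2D, lookupTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);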
Hope my explanation is clear.
I render a scene (to the default renderbuffer)
I want to grab a rectangle from this scene and create a texture out of it
I would like to do it without glReadPixels()ing down to the CPU and then uploading the data back up to the GPU
Is this possible using OpenGL ES 2.0?
P.S. - I want to use a POT area of the screen, not some strange shape
Pseudocode of my already-working GPU->CPU->GPU implementation:
// Render stuff here
GLubyte *magData = (GLubyte *)malloc(MAGWIDTH * MAGHEIGHT * 3);
glReadPixels(0, 0, MAGWIDTH, MAGHEIGHT, GL_RGB, GL_UNSIGNED_BYTE, magData);
// Bind the already-generated texture object
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, alias);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAGWIDTH, MAGHEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, magData);
You can use glCopyTexImage2D to copy from the back buffer:
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexImage2D(GL_TEXTURE_2D, level, internalFormat, x, y, width, height, border);
OpenGL ES 2.0 always copies from the back buffer (or front buffer for single-buffered configurations). Using OpenGL ES 3.0, you can specify the source for the copy with:
glReadBuffer(GL_BACK);
In light of ClayMontgomery's answer (glCopyTexImage2D is slow), you might find that using glCopyTexSubImage2D with a correctly sized and formatted texture is faster, because it writes into the pre-allocated texture instead of allocating a new buffer each time. If this is still too slow, you should try doing as he suggests and render to a framebuffer (although you'll also need to draw a quad to the screen using the framebuffer's texture to get the same results).
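A minimal sketch of that idea, assuming textureID refers to a texture you allocate once with the matching size and format:
// One-time allocation of the destination texture (size/format must match the copy).
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Per frame: copy a rectangle from the currently bound framebuffer into the
// existing texture storage instead of re-allocating it.
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, width, height);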
You will find that glCopyTexImage2D() is really slow. The fast way to do what you want is to render directly to the texture as an attachment to an FBO. This can be done with either OpenGL ES 2.0 or 1.1 (with extensions). This article explains in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
Using OpenGL on iOS, is it possible to update a small texture (by setting each pixel individually) and then scale it up to fill the screen (60 frames per second)?
You should be able to update the content of a texture using glTexImage2D.
Untested example:
GLubyte data[32 * 32 * 4]; // 32x32 RGBA (power of two)
for (int i = 0; i < 32 * 32 * 4; i += 4) {
// write a red pixel (RGBA)
data[i] = 255;
data[i+1] = 0;
data[i+2] = 0;
data[i+3] = 255;
}
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// then simply render a quad with this texture.
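As a side note (not part of the original answer), if you update the texture every frame it is usually cheaper to allocate it once and then replace only the pixel data with glTexSubImage2D:
// Replace the pixel data of the already-allocated 32x32 texture without
// re-specifying (and re-allocating) its storage each frame.
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32, GL_RGBA, GL_UNSIGNED_BYTE, data);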
In general the answer is yes, it is possible. But it might depend on what you need to draw.
Since you don't provide more details I will describe the general approach:
Bind a texture to a framebuffer (Here is a good explanation with code on how to do that; see the "Example 6.10. Initialize() for Supersampling" code example). A minimal sketch of this step is shown after these steps.
Now draw what you need in the same way as you would draw to the screen (transformations, modelview matrix, etc.). If you need pixel accuracy (to modify each and every pixel) you might consider an orthographic projection; whether that is possible depends on what you need to draw. All of this drawing is performed on your texture, achieving the "update the texture" part.
Bind the normal framebuffer that you use to draw on the screen, and draw a rectangle (possibly using an orthographic projection again) that uses the texture from the previous step. You can scale this rectangle to fill the screen.
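A minimal sketch of the first step, using placeholder names (fbo, targetTexture, texWidth, texHeight); on iOS the on-screen framebuffer is itself an FBO created by the view, so "normal framebuffer" means whatever you normally bind before drawing:
GLuint fbo, targetTexture;
// Create the texture that will receive the rendering.
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Attach it as the color buffer of a framebuffer object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTexture, 0);
// ... draw the texture contents here (step 2) ...
// Then rebind your normal framebuffer and draw a quad with targetTexture (step 3).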
Whether the above approach can achieve 60 fps depends on your target device and the scene you need to render.
Hope that helps
I'm writing a media player framework for Apple TV, using OpenGL ES and ffmpeg.
Conversion to RGBA is required for rendering with OpenGL ES, and software conversion using swscale is unbearably slow, so based on information found online I came up with two ideas: using NEON (like here) or using fragment shaders with GL_LUMINANCE and GL_LUMINANCE_ALPHA.
As I know almost nothing about OpenGL, the second option still doesn't work :)
Can you give me any pointers how to proceed?
Thank you in advance.
It is most definitely worthwhile learning OpenGL ES 2.0 shaders:
You can load-balance between the GPU and CPU (e.g. video decoding of subsequent frames while GPU renders the current frame).
Video frames need to go to the GPU in any case: using YCbCr saves you 25% bus bandwidth if your video has 4:2:0 sampled chrominance.
You get 4:2:0 chrominance up-sampling for free, with the GPU hardware interpolator. (Your shader should be configured to use the same vertex coordinates for both Y and C{b,r} textures, in effect stretching the chrominance texture out over the same area.)
On iOS5 pushing YCbCr textures to the GPU is fast (no data-copy or swizzling) with the texture cache (see the CVOpenGLESTextureCache* API functions). You will save 1-2 data-copies compared to NEON.
I am using these techniques to great effect in my super-fast iPhone camera app, SnappyCam.
You are on the right track for implementation: use a GL_LUMINANCE texture for Y and GL_LUMINANCE_ALPHA if your CbCr is interleaved. Otherwise use three GL_LUMINANCE textures if all of your YCbCr components are noninterleaved.
Creating two textures for 4:2:0 bi-planar YCbCr (where CbCr is interleaved) is straightforward:
glBindTexture(GL_TEXTURE_2D, texture_y);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_LUMINANCE, // Texture format (8bit)
width,
height,
0, // No border
GL_LUMINANCE, // Source format (8bit)
GL_UNSIGNED_BYTE, // Source data format
NULL
);
glBindTexture(GL_TEXTURE_2D, texture_cbcr);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_LUMINANCE_ALPHA, // Texture format (16-bit)
width / 2,
height / 2,
0, // No border
GL_LUMINANCE_ALPHA, // Source format (16-bit)
GL_UNSIGNED_BYTE, // Source data format
NULL
);
where you would then use glTexSubImage2D() or the iOS5 texture cache to update these textures.
I'd also recommend using a 2D varying that spans the texture coordinate space (x: [0,1], y: [0,1]) so that you avoid any dependent texture reads in your fragment shader. The end result is super-fast and doesn't load the GPU at all in my experience.
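To make the shader side concrete, here is a sketch of the matching fragment shader written as a C string (as it would be passed to glShaderSource); the uniform and varying names are placeholders, full-range BT.601 coefficients are assumed, and the Cb/Cr channel order depends on how your data is interleaved:
// With GL_LUMINANCE_ALPHA, the first byte of each CbCr pair is returned in .r
// and the second in .a, hence the .ra swizzle below.
static const char *kFragmentShaderSrc =
    "precision mediump float;                                        \n"
    "varying vec2 v_texCoord;                                        \n"
    "uniform sampler2D u_textureY;                                   \n"
    "uniform sampler2D u_textureCbCr;                                \n"
    "void main() {                                                   \n"
    "    float y   = texture2D(u_textureY, v_texCoord).r;            \n"
    "    vec2 cbcr = texture2D(u_textureCbCr, v_texCoord).ra - 0.5;  \n"
    "    gl_FragColor = vec4(y + 1.402 * cbcr.y,                     \n"
    "                        y - 0.344 * cbcr.x - 0.714 * cbcr.y,    \n"
    "                        y + 1.772 * cbcr.x,                     \n"
    "                        1.0);                                   \n"
    "}                                                               \n";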
Converting YUV to RGB using NEON is very slow. Use a shader to offload onto the GPU.