I'm programming a Pascal project (using Lazarus) for school, and I experienced some weird behaviour when the project was executed on one of the school computers.
It ran completely fine on my laptop from 2011 with an NVidia 650M. The school computers' drivers are outdated, so OpenGL falls back to Windows GDI, which is basically a software implementation of OpenGL 1.1. So maybe the graphics card itself is not the cause of the problem.
It may also be important to mention that this ONLY happens when using GL_NEAREST for both the mag and min filters. The problem doesn't occur when using GL_LINEAR for MAG and GL_NEAREST for MIN, for example.
This is the code that loads the texture(s):
tex := LoadTGA(filename);
if tex.iType = 2 then
begin
  glGenTextures(1, @(Result.textureID));
  glBindTexture(GL_TEXTURE_2D, Result.textureID);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  // 24-bit TGA data is BGR, 32-bit is BGRA
  if tex.bpp = 3 then glFormat := GL_BGR
  else glFormat := GL_BGRA;
  // legacy GL accepts the component count (3/4) as the internal format
  glTexImage2D(GL_TEXTURE_2D, 0, tex.bpp, tex.w, tex.h, 0, glFormat, GL_UNSIGNED_BYTE, tex.data);
  FreeMem(tex.data);
end;
This is the code that renders the quad:
glColor3ub(255, 255, 255);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, q.surface.textureID);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
glTexCoord2f(1, 1); glVertex3f(q.points[1].x, q.points[1].y, q.points[1].z);
glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
glTexCoord2f(0, 0); glVertex3f(q.points[3].x, q.points[3].y, q.points[3].z);
glEnd;
glDisable(GL_TEXTURE_2D);
The texture is 32x32 pixels.
Update:
As it turns out, colored quads suffer from a similar bug as well. In the picture with the colored quads, it looks like GDI is doing the clipping first and THEN applying the color, when it should be the other way around. Now we just have to find out why.
I painted the edges of the clipping triangles in the lower picture, et voilà: you can see exactly where the texture is "inconsistent".
Pictures: (screenshots omitted)
It is very hard to guess. The problem is probably related to a difference in the texture interpolation implemented by the Nvidia and ATI drivers. With GL_LINEAR you use a different interpolation method, and the "seam" is not visible.
Some ideas:
Try filling the quad with a solid color, e.g. yellow, and check whether the separate triangles are visible then as well.
Create your quad from two triangles (6 vertices) instead of GL_QUADS (see the sketch after these ideas).
Do you pass the vertices in counter-clockwise order?
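For the two-triangle idea, here is a minimal sketch in C-style GL calls (the same sequence maps one-to-one onto the Pascal bindings; q.points follows the ordering from the question's quad code):
glBegin(GL_TRIANGLES);
    /* first triangle: vertices 0, 1, 2 */
    glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
    glTexCoord2f(1, 1); glVertex3f(q.points[1].x, q.points[1].y, q.points[1].z);
    glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
    /* second triangle: vertices 0, 2, 3 -- shares the same diagonal and winding */
    glTexCoord2f(0, 1); glVertex3f(q.points[0].x, q.points[0].y, q.points[0].z);
    glTexCoord2f(1, 0); glVertex3f(q.points[2].x, q.points[2].y, q.points[2].z);
    glTexCoord2f(0, 0); glVertex3f(q.points[3].x, q.points[3].y, q.points[3].z);
glEnd();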
Related
I am using OpenGL ES 2.0 on iOS, and I found that when I add textures of different sizes, some of them get blurred and some disappear. I used mipmaps, but it doesn't help.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glGenerateMipmap(GL_TEXTURE_2D);
some of them get blurred
Some blur is inevitable.
Mipmapping is simply downscaling, and that is effectively a 4-to-1 averaging filter, which will inevitably reduce high-frequency information.
You're probably also using bilinear filtering. Pixels don't exactly align with texels, so you grab the 4 nearest texels and compute a weighted sum of those. Yet more blur ...
It's worth noting that the built-in mipmap generation normally uses a very simple box filter to downscale the mipmaps. You can often fine-tune the sharpness of the result with other downsampling algorithms, but you need to generate the mipmaps manually offline if you want to try that.
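If you want to experiment with that, a rough sketch of uploading hand-filtered levels yourself (instead of calling glGenerateMipmap) looks like this; downscale() is a hypothetical stand-in for whatever offline filter (Lanczos, Kaiser, ...) you choose:
int level = 0;
int w = width, h = height;
unsigned char *pixels = imageData;
for (;;) {
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    if (w == 1 && h == 1) break;
    w = w > 1 ? w / 2 : 1;                 // next level's dimensions
    h = h > 1 ? h / 2 : 1;
    pixels = downscale(pixels, w, h);      // hypothetical: produce the next level's data
    level++;
}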
some disappeared.
The clipping on the edges of the faces is usually a sign that your texture is "too tight" to the edge of the geometry, so the blur effect is effectively hitting the edge of the triangle and clipping.
Ensure that you have a small ring of transparent texels around the edge of the on-screen rendering of each icon (even when rendered at the smallest mipmap level). If it is still clipping make that buffer region larger.
I render a scene (to the default renderbuffer)
I want to grab a rectangle from this scene and create a texture out of it
I would like to do it without glReadPixels()ing down to the CPU and then uploading the data back up to the GPU
Is this possible using OpenGL ES 2.0?
P.S. - I want to use a POT area of the screen, not some strange shape
Pseudocode of my already-working GPU->CPU->GPU implementation:
// Render stuff here
byte magData[MAGWIDTH * MAGHEIGHT * 3];
glReadPixels(x, y, MAGWIDTH, MAGHEIGHT, GL_RGB, GL_UNSIGNED_BYTE, magData); // x, y = origin of the grabbed rectangle
// Bind the already-generated texture object
BindTexture(GL_TEXTURE0, GL_TEXTURE_2D, alias);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAGWIDTH, MAGHEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, magData);
You can use glCopyTexImage2D to copy from the back buffer:
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexImage2D(GL_TEXTURE_2D, level, internalFormat, x, y, width, height, border);
OpenGL ES 2.0 always copies from the back buffer (or front buffer for single-buffered configurations). Using OpenGL ES 3.0, you can specify the source for the copy with:
glReadBuffer(GL_BACK);
In light of ClayMontgomery's answer (glCopyTexImage2D is slow), you might find that using glCopyTexSubImage2D with a correctly sized and formatted texture is faster, because it writes into the pre-allocated texture instead of allocating a new buffer each time. If this is still too slow, you should try doing as he suggests and render to a framebuffer (although you'll also need to draw a quad to the screen using the framebuffer's texture to get the same results).
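A hedged sketch of that variant, assuming textureID is your pre-created texture object and (x, y, width, height) is the rectangle you want to grab:
// one-time allocation at the right size and format (data pointer may be NULL)
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// every frame: copy the rectangle from the currently bound read framebuffer
// into the existing texture storage, with no reallocation
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, width, height);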
You will find that glCopyTexImage2D() is really slow. The fast way to do what you want is to render directly to the texture as an attachment to an FBO. This can be done with either OpenGL ES 2.0 or 1.1 (with extensions). This article explains in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
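For reference, a minimal ES 2.0 render-to-texture sketch along the lines that article describes (identifiers are illustrative; MAGWIDTH/MAGHEIGHT are taken from the question's pseudocode):
GLuint fbo, colorTex;
// create the texture that will receive the rendering
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAGWIDTH, MAGHEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// attach it to a framebuffer object and render into it
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    // draw the scene (or the rectangle of interest) here
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer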
In my application I need to multiply two textures and then multiply the result by a factor higher than 2. I am using GL_MODULATE and GL_RGB_SCALE for this, with the following code:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Input.texID);
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, Temp64.texID);
glEnable(GL_TEXTURE_2D);
// glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Temp64.wide, Temp64.high, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 4.0);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
glViewport(0, 0, wide, high);
//glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
According to my understanding of the OpenGL ES 1.1 specification, the modulation between the textures is done first, then the multiplication by 4, and then the result is clamped to [0, 1].
This is what the specification says:
"If the value of TEXTURE ENV MODE is COMBINE, the form of the texture function depends on the values of COMBINE RGB and COMBINE ALPHA, according to table 3.17. The RGB and ALPHA results of the texture function are then multiplied by the values of RGB SCALE and ALPHA SCALE, respectively. The results are clamped to [0, 1]."
But what I observe is different: it first multiplies texture unit 1 by 4, clamps to [0, 1], and then modulates with texture unit 0. I want the RGB_SCALE to be applied after the modulation.
I tried to write the modulation result to an FBO and then apply the scaling, but it didn't work.
What I want to do is multiply one image with another and then multiply the result by a value higher than 2. There shouldn't be any clamping until after the second multiplication. Can somebody please help me?
In the past I've had several issues with texture combiners similar to the ones you mention. Modern hardware does not have a fixed-function pipeline, so all of its functionality is implemented through shaders, and some of those implementations have unexpected results.
What's your target platform? My suggestion is to get rid of the texture combiners altogether and move all your code to shaders (if your target platform allows it).
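If shaders are an option, a minimal ES 2.0 fragment shader sketch of the modulate-then-scale idea could look like this (the uniform and varying names are illustrative, not from your code):
static const char *fragSrc =
    "precision mediump float;                                 \n"
    "uniform sampler2D uTex0;   /* e.g. Input.texID  */       \n"
    "uniform sampler2D uTex1;   /* e.g. Temp64.texID */       \n"
    "varying vec2 vTexCoord;                                  \n"
    "void main() {                                            \n"
    "  vec3 m = texture2D(uTex0, vTexCoord).rgb               \n"
    "         * texture2D(uTex1, vTexCoord).rgb;              \n"
    "  /* scale after the modulation, clamp only at the end */\n"
    "  gl_FragColor = vec4(clamp(m * 4.0, 0.0, 1.0), 1.0);    \n"
    "}                                                        \n";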
Also, if I'm reading your code correctly, you need to place glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE) after binding the first texture, so that it is in effect when you call glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS).
glEnable(GL_TEXTURE_2D) is also misplaced; it should be called before any other texture-related function.
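Putting those two corrections together, the fixed-function setup described above would look roughly like this (whether it fixes the scale ordering still depends on the driver, as noted; the texture IDs are the ones from your code):
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, Input.texID);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);   // unit 0: output its texture unchanged
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, Temp64.texID);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);   // unit 1: modulate with previous stage, then scale
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 4);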
I am working on a handwriting application on iOS. I found the sample project "GLPaint" in the iOS documentation, which is implemented with OpenGL ES, and I made some modifications to it.
I track the touch points, calculate the curves between the points, and draw particle images along the curve so that it looks like the path the finger has passed over.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); // brushData comes from a CGImage
// vertexBuffer is generated based on the calculated points, it's just a sequence of point where need to draw image.
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
What I got is a solid line, which looks quite good. But now I want to draw a semi-transparent highlight instead of a solid line, so I replaced the particle image with a 50% transparency one without changing the code.
Result of the 50% transparency particle image:
There is something wrong with the blending.
What I need:
I draw three points using the semi-transparent particle image, and the intersection areas should keep 50% transparency.
What's the solution?
I'm maybe two years late answering this question, but I hope it helps somebody who comes here looking for a solution to this problem, as happened to me.
You are going to need to assign a different z value to each circle. It doesn't matter how big or small this difference is; we only need the values to not be strictly equal.
First, disable writing to the color buffer with glColorMask(false, false, false, false), and then draw the circles normally. The Z-buffer will be updated as desired, but no circles will be drawn yet.
Then, enable writing to the color buffer (glColorMask(true, true, true, true)) and set the depth function to LEQUAL (glDepthFunc(GL_LEQUAL)). Only the nearest circle pixels will pass the depth test (setting it to LEQUAL instead of EQUAL deals with some rare but possible floating-point approximation errors). Enabling blending and drawing the circles again will produce the image you wanted, with no transparency overlap.
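A condensed sketch of that two-pass trick (drawCircles() is a hypothetical helper standing in for your existing draw calls):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // pass 1: depth only
drawCircles();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // pass 2: color
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawCircles();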
You have to change the blend function. You can play around with it using:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Maybe (GL_ONE, GL_ONE); I forget how to handle your exact case, but the solution is in that function.
http://www.opengl.org/sdk/docs/man/xhtml/glBlendFunc.xml
Late reply, but hopefully useful for others.
Another way to avoid that effect is to grab the color buffer before the transparent circles are drawn (i.e. do a grab pass) and then read it and blend with the opaque buffer manually in the fragment shader of your circles.
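Very roughly, under ES 2.0-style assumptions (grabTex is an illustrative, pre-allocated texture the same size as the framebuffer):
// copy the opaque scene into a texture before drawing the circles
glBindTexture(GL_TEXTURE_2D, grabTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, fbWidth, fbHeight);
// then, in the circles' fragment shader, sample grabTex at UVs derived from
// gl_FragCoord and mix() it with the particle color instead of relying on glBlendFunc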
I know that Apple offers a sample called GLCameraRipple which uses CVOpenGLESTextureCacheCreateTextureFromImage to achieve this. But when I changed it to glTexImage2D, it displays nothing. What's wrong with my code?
if (format == kCVPixelFormatType_32BGRA) {
    CVPixelBufferRef pixelBuf = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuf, 0);
    void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuf);

    glGenTextures(1, &textureID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, baseaddress);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
}
Thank you very much for any help!
There are a couple of problems here. First, the GLCameraRipple example was built to take in YUV camera data, not BGRA. Your above code is only uploading one texture of BGRA data, rather than the separate Y and UV planes expected by the application. It uses a colorspace conversion shader to merge these planes as a first stage, and that needs the YUV data to work.
Second, you are allocating a new texture for each uploaded frame, which is a really bad idea. This is particularly bad if you don't delete that texture when done, because you will chew up resources this way. You should allocate a texture once for each plane you'll upload, then keep that texture around as you upload each video frame, deleting it only when you're done processing video.
You'll either need to rework the above to upload the separate Y and UV planes, or remove / rewrite their color processing shader. If you go the BGRA route, you'll also need to be sure that the camera is now giving you BGRA frames instead of YUV ones.
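As a rough illustration of the "allocate once, update every frame" point, assuming you configure the camera to deliver BGRA, use a shader that samples a single texture, and have the APPLE_texture_format_BGRA8888 extension available for GL_BGRA_EXT uploads (width, height and baseaddress as in the question's code):
static GLuint cameraTexture = 0;

if (cameraTexture == 0) {
    // one-time allocation of the texture storage
    glGenTextures(1, &cameraTexture);
    glBindTexture(GL_TEXTURE_2D, cameraTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL);
} else {
    glBindTexture(GL_TEXTURE_2D, cameraTexture);
}
// per-frame update into the existing storage
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, baseaddress);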