Rounding error in texture lookup on iPad / OpenGL ES 2.0 - ios

I'm using a texture attached to a framebuffer as a custom depth buffer. In a first rendering pass, I render to the texture so it stores the depth values. In the second rendering pass, I do a lookup from this texture to decide whether to render or discard the fragment.
This works well, except that on the device (iPad 3) there are annoying artifacts that seem to come from a rounding error when the depth values are written to the texture. I tried writing a fixed value like 0.5 to the texture, but when it is read back, it is more than 0.03 higher or lower than 0.5.
I 'encode' the depth value into three RGB values (the fourth component, alpha or w, is ignored):
const highp vec4 packFactors = vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0);
const highp vec4 cutoffMask = vec4(1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0, 0.0);
void main() {
    highp float depth = ...
    ...
    highp vec4 packedVal = fract(packFactors * depth);
    packedVal.x = depth; // undo effect of fract() on x component
    gl_FragColor = packedVal - packedVal.yzww * cutoffMask;
}
This way, I get 3x8 bits of precision to store the depth (inspired by http://www.rojtberg.net/348/powervr-sgx-530-does-not-support-depth-textures)
In the other fragment shader (second rendering pass), I read from the texture like this:
highp vec4 depthBufferLookup = texture2D(depthTexture, vDepthTex);
highp float depthFromDepthBuffer = dot(depthBufferLookup, vec4(1.0) / packFactors);
using the same values for packFactors as in the first shader.
I would expect this procedure to give decent precision (24 bits should quantize to within about 6e-8, and even a single 8-bit channel stays within about 0.002), but an error of more than 0.03 at a value of 0.5 makes it pretty unusable.
Any hints?
BTW I'm using the following texture type:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
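For what it's worth, one way to isolate where the error creeps in is to render the amplified round-trip error directly. A debugging sketch, reusing the question's names (the constant 0.5 stands in for whatever pass 1 wrote, and the 32.0 amplification is arbitrary):
precision highp float;
uniform sampler2D depthTexture;
varying highp vec2 vDepthTex;
const highp vec4 packFactors = vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0);
void main() {
    // Unpack exactly as in the second pass...
    highp float stored = dot(texture2D(depthTexture, vDepthTex), vec4(1.0) / packFactors);
    // ...and compare against the constant written in the first pass.
    highp float err = abs(stored - 0.5);
    gl_FragColor = vec4(vec3(err * 32.0), 1.0); // an error of 0.03 shows up as visible gray
}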

Related

WebGL: Access buffer from shader

I need to access a buffer from my shader. The buffer is created from an array. (In the real scenario, the array has 10k+ (variable) numbers.)
var myBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, myBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array([1,2,3,4,5,6,7]), gl.STATIC_DRAW);
How do I send it so it's usable by the shader?
precision mediump float;
uniform uint[] myBuffer; // ???
void main() {
    gl_FragColor = vec4(myBuffer[0], myBuffer[1], 0, 1);
}
Normally, if it were an attribute, it'd be
gl.vertexAttribPointer(myBuffer, 2, gl.UNSIGNED_BYTE, false, 4, 0);
but I need to be able to access the whole array from any shader pixel, so it's not a vertex attribute.
Use a texture if you want random access to lots of data in a shader.
If you have 10000 values you might make a texture that's 100x100 pixels. You can then get each value from the texture with something like:
uniform sampler2D u_texture;
vec2 textureSize = vec2(100.0, 100.0);
vec4 getValueFromTexture(float index) {
    float column = mod(index, textureSize.x);
    float row = floor(index / textureSize.x);
    vec2 uv = vec2(
        (column + 0.5) / textureSize.x,
        (row + 0.5) / textureSize.y);
    return texture2D(u_texture, uv);
}
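For example, from main() (a sketch of what the question's pseudo-shader was reaching for):
void main() {
    vec4 first = getValueFromTexture(0.0);  // was myBuffer[0]
    vec4 second = getValueFromTexture(1.0); // was myBuffer[1]
    gl_FragColor = vec4(first.r, second.r, 0.0, 1.0);
}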
Make sure your texture filtering is set to gl.NEAREST.
Of course if you make textureSize a uniform you could pass in the size of the texture.
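A minimal sketch of that, with an assumed uniform name u_textureSize:
uniform sampler2D u_texture;
uniform vec2 u_textureSize; // set from JavaScript, e.g. gl.uniform2f(sizeLocation, 100, 100)
vec4 getValueFromTexture(float index) {
    float column = mod(index, u_textureSize.x);
    float row = floor(index / u_textureSize.x);
    vec2 uv = vec2(
        (column + 0.5) / u_textureSize.x,
        (row + 0.5) / u_textureSize.y);
    return texture2D(u_texture, uv);
}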
As for why the + 0.5 part, see this answer.
You can use normal gl.RGBA, gl.UNSIGNED_BYTE textures and add/multiply the channels together to get a large range of values. Or you could use floating-point textures if you don't want to mess with that; note that floating-point textures are an extension you need to enable.
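For the add/multiply idea, here is a sketch that decodes a 0..1 value with 32-bit resolution from a single gl.RGBA / gl.UNSIGNED_BYTE texel, assuming the bytes were packed base-256 (most significant byte in red) on the JavaScript side:
uniform sampler2D u_texture;
float valueFromTexel(vec2 uv) {
    vec4 texel = texture2D(u_texture, uv); // each channel is one byte mapped to 0..1
    return dot(texel, vec4(1.0, 1.0 / 256.0, 1.0 / 65536.0, 1.0 / 16777216.0));
}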

Creating an efficient texture recoloring fragment shader

I'm trying to create a fragment shader to recolor a 2D grayscale sprite but leave white and near-white fragments intact (i.e. don't recolor pure white fragments, and only slightly recolor near-white fragments). I'm not sure how to do this without using a conditional branch, which results in poor performance on certain hardware.
The existing shader in the game engine just performs a simple multiplication:
#ifdef GL_ES
precision lowp float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform sampler2D CC_Texture0;
void main()
{
    vec4 texColor = texture2D(CC_Texture0, v_texCoord);
    gl_FragColor = texColor * v_fragmentColor;
}
I think that, in order to avoid the conditional, I need some sort of continuous mathematical function that recolors fragments with RGB values above, say, (0.9, 0.9, 0.9) less than fragments below that threshold.
Any help would be great!
I would do something like this: Calculate the fully-recolored pixel, then mix with the original based on a function. Here's an idea:
vec4 texColor = texture2D(CC_Texture0, v_texCoord);
const vec4 kLumWeights = vec4(.2126, .7152, .0722, 0.0); // Rec. 709 luminance weights
float luminance = dot (texColor, kLumWeights);
vec4 recolored = texColor * v_fragmentColor;
const float kThreshold = 0.8;
float mixAmount = (luminance - kThreshold) / (1.0 - kThreshold); // Everything below kThreshold becomes 0, and from kThreshold to 1.0 becomes 0 to 1.0
mixAmount = clamp (mixAmount, 0.0, 1.0);
gl_FragColor = mix (recolored, texColor, mixAmount);
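If you want an even smoother transition, the same ramp can be written with smoothstep (a variation on the above, not a requirement):
float mixAmount = smoothstep(kThreshold, 1.0, luminance); // Hermite ramp: 0 at kThreshold, 1 at pure white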
Let me know if that works.

iOS OpenGL ES 2.0: adding textures with low opacity on device and on simulator

I have a problem with drawing textures multiple times in my program.
Blending mode is
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
The alpha value is passed into the shader from CPU code.
precision highp float;
uniform sampler2D tShape;
uniform vec4 vColor;
uniform float sOpacity;
varying vec4 texCoords;
void main() {
    float a = texture2D(tShape, texCoords.xy).x * sOpacity;
    gl_FragColor = vec4(vColor.rgb, a);
}
It's calculated previously with
O = pow(O, 1.3);
for the best visual effect.
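(For illustration only: the same correction could be applied inside the shader if the raw, uncorrected opacity were passed in instead. A sketch, not the author's code:)
float a = texture2D(tShape, texCoords.xy).x * pow(sOpacity, 1.3); // sOpacity would be the raw value here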
I draw with color (0, 0, 0) on a transparent black canvas (0, 0, 0, 0), but with very low opacity; the requested opacity maps to the shader value like this (e.g. 0.03^1.3 ≈ 0.01048):
0.03 -> 0.01048
0.06 -> 0.0258
0.09 -> 0.0437
0.12 -> 0.0635
...
I expect the point's color to reach the maximal value (0, 0, 0, 1) (opaque black) after multiple draws, as it does on the simulator:
but it isn't so on the device:
Do you have any ideas why this is?
UPDATE:
Manual blending also works incorrectly (and its result differs from the standard blending above). The blend function is set so the shader output replaces the framebuffer value:
glBlendFunc(GL_ONE, GL_ZERO);
Fragment shader code:
#extension GL_EXT_shader_framebuffer_fetch : require
precision highp float;
uniform sampler2D tShape;
uniform vec4 vColor;
uniform float sOpacity;
varying vec4 texCoords;
void main() {
    float a = texture2D(tShape, texCoords.xy).x * sOpacity;
    gl_FragColor = vec4(vColor.rgb * a, a) + (gl_LastFragData[0] * (1.0 - a));
}
Result on the simulator:
Result on the device:
I'm trying to understand your approach so I wrote down some equations:
This is how a new draw is composited (if I didn't make any mistake):
color = {pencil_shape} * sourceAlpha + {old_paint} * (1 - sourceAlpha)
alpha = {pencil_shape} + {old_paint} * (1 - sourceAlpha)
So basically your alpha gets closer to 1 with each draw, and your color is blended in each time based on the source alpha of the *pencil_shape*.
Questions:
Do you intend to use the alpha in the output image for anything?
Is your *pencil_shape* all black (0, 0, 0, 0)? (besides the corners, where I suppose there is some antialiasing)
After some experiments I've come to understand that the problem is the precision supported by the device. On an iPad Air the problem shows up less than on an iPad 4 or iPad 3.

Adding a projection matrix to an OpenGL ES point sprite particle effect vertex shader

I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. It has a particle effect that looks like an explosion. I am trying to enhance the example code by adding a mat4 projection matrix to the vertex shader. The shader compiles and works, but I am having problems positioning the effect once the projection is taken into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
    uniform float u_time;
    uniform vec3 u_centerPosition;
    uniform mat4 Projection;
    attribute float a_lifetime;
    attribute vec3 a_startPosition;
    attribute vec3 a_endPosition;
    varying float v_lifetime;
    void main()
    {
        if ( u_time <= a_lifetime )
        {
            gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
            gl_Position.xyz += u_centerPosition;
            gl_Position.w = 1.0;
        }
        else
            gl_Position = vec4( -1000, -1000, 0, 0 );
        v_lifetime = 1.0 - ( u_time / a_lifetime );
        v_lifetime = clamp ( v_lifetime, 0.0, 1.0 );
        gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
    }
);
I am able to add the projection to the following line without any errors, but unfortunately it's not really required there, as that code just places the point off-screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting the effect placed where I want it.
Am I doing something wrong? Or is there a reason the book doesn't use a projection matrix here, such as it being something you shouldn't do with point sprites?
Any help or pointers to what I should look into will be appreciated
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by the modelview-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize, for example try to divide it by gl_Position.w (after multiplication by modelview-projection), otherwise the sprites will all have the same size (is that what you are trying to fix?).
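A sketch of both suggestions applied to the shader from the question (the uniform name u_modelViewProjection is assumed, and the max() guard merely avoids a division by zero):
uniform float u_time;
uniform vec3 u_centerPosition;
uniform mat4 u_modelViewProjection;
attribute float a_lifetime;
attribute vec3 a_startPosition;
attribute vec3 a_endPosition;
varying float v_lifetime;
void main()
{
    if (u_time <= a_lifetime)
    {
        vec3 pos = a_startPosition + (u_time * a_endPosition) + u_centerPosition;
        gl_Position = u_modelViewProjection * vec4(pos, 1.0);
    }
    else
    {
        gl_Position = vec4(-1000.0, -1000.0, 0.0, 1.0); // park expired particles off-screen
    }
    v_lifetime = clamp(1.0 - (u_time / a_lifetime), 0.0, 1.0);
    gl_PointSize = (v_lifetime * v_lifetime) * 40.0 / max(gl_Position.w, 0.0001);
}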

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture however winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
    lowp vec4 color;
    color = texture2D(videoFrame, textureCoordinate);
    lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
    convertedColor += 1.164 * color.g; // Y
    convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
    convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
    gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What if anything am I missing here? Thanks!
OK. We have success here. The key was passing the Y and the UV planes as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
// GLSL mat3 constructors are column-major, but because the multiplication
// below is yuv * yuv2rgb (row vector on the left), each line written here
// acts as a row of the mathematical conversion matrix.
const mat3 yuv2rgb = mat3(
    1, 0, 1.2802,
    1, -0.214821, -0.380589,
    1, 2.127982, 0
);
void main() {
    vec3 yuv = vec3(
        1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
        texture2D(videoFrameUV, textureCoordinate).r - 0.5,
        texture2D(videoFrameUV, textureCoordinate).a - 0.5
    );
    vec3 rgb = yuv * yuv2rgb;
    gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structured something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height × bytes_per_row_1 × 1 bytes
The Cb,Cr plane: height/2 × bytes_per_row_2 × 2 bytes (2 bytes per 2x2 block).
bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
