We are running into a situation where, for the same shader, a texture renders fine on the iPad 1 but not on the iPad 2 when using GL_LUMINANCE. We've traced it to the texture2D call in the fragment shader. The vec4 returned by the texture2D call on the iPad 1 contains the intensity value of the texture, but the vec4 returned by texture2D on the iPad 2 is constant.
It looks like someone else is also seeing this problem [http://www.imgtec.com/forum/forum_posts.asp?TID=1267&PID=4307]. However, they are using GL_HALF_FLOAT_OES, while we are using GL_FLOAT.
Has anyone else seen this issue, and is there a workaround?
For a GL_LUMINANCE floating-point texture, the iPad 2 clamps luminance values to the range 0.0–1.0, so you should normalize the texture data before calling glTexImage2D. On the iPad 1, however, you can pass any floating-point texture value through to the shader.
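For illustration, here's a minimal sketch of that normalization step, assuming a single-channel float buffer (the variable names are hypothetical):

// Normalize the float luminance data to [0.0, 1.0] before upload.
float maxValue = 0.0f;
for (size_t i = 0; i < pixelCount; i++)
    if (luminanceData[i] > maxValue) maxValue = luminanceData[i];
if (maxValue > 0.0f)
    for (size_t i = 0; i < pixelCount; i++)
        luminanceData[i] /= maxValue;   // keep maxValue around if you need to rescale in the shader

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_FLOAT, luminanceData);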
I'm not sure why this inconsistency exists (a GL driver bug?), but if anyone has a good explanation, that would be great.
Related
I'm trying to create a Photo editing program using OpenGL ES2 on iOS. I want to be able to modify parts of a photo using the fragment shader. For example, if the user touches the screen that point will be sent to the fragment shader. The fragment shader will add an effect within a certain radius of the point.
What I need is for the modifications made in the fragment shader to be persisted to the next frame. I've read that the way to do this is to setup a second frame buffer object which is associated with a texture. Here's what the program does:
Is the current texture 0? If so, this is the first draw, so we draw the photo to our FBO (i.e. the texture is projected onto a 2D rectangle). Then we re-draw the rectangle to the screen, but this time using the FBO as the texture source. After that, we draw the FBO's texture back to the FBO.
i.e.
if (_currentTextureID == 0)
    _currentTextureID = _imageTexture;          // first draw: source from the photo
else
    _currentTextureID = _frameBufferTextureID;  // subsequent draws: source from the FBO's texture

glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferID);
[self drawTexture:_currentTextureID];   // draw into the FBO
[self bindDrawable];
[self drawTexture:_currentTextureID];   // draw to the screen
This kind of works, but as the draw method is called multiple times the image gets blurry. I thought it might be because you can't render a texture into its own FBO, so I tried with two FBOs, but that didn't work either. I'm fairly new to OpenGL, so any advice would be greatly appreciated!
Here's a link to the full source:
Source Code
As it turned out, the problem was in the fragment shader. Previously, the texture coordinate was declared as a lowp vec2. When I changed it to a highp vec2, the problem disappeared.
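For anyone hitting the same thing, the fix amounts to a one-line precision change in the fragment shader (the variable name here is hypothetical):

varying highp vec2 v_texCoord; // was: varying lowp vec2 v_texCoord;

lowp guarantees only about 8 bits of precision, which is not enough to address individual texels in a large texture, so neighbouring fragments end up sampling the wrong texels — hence the blur.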
I'm trying to apply a video file as a texture in OpenGL ES on iOS 5.0+ using CVOpenGLESTextureCache.
I've found Apple's RosyWriter sample code, and have been reading through it.
The question I have is:
How are the textures finally being delivered to the uniforms in the fragment shader?
In the RosyWriterPreviewView class, I follow it all the way up to
glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));
after which some texture parameters are specified.
However, I don't see the texture uniform (sampler2D videoframe) ever being explicitly referenced by the sample code. The texture-sending code I've become used to would look something like:
GLint uniform = glGetUniformLocation(program, "u_uniformName");
with a subsequent call to actually send the texture to the uniform:
glUniform1i(GLint location, GLint x);
So I know that SOMEhow RosyWriter is delivering the texture to the uniform in its fragment shader, but I can't see how and where it's happening.
In fact, the sample code includes the comment where it builds up the OpenGL program:
// we don't need to get uniform locations in this example
Any help on why this is & how the texture is getting sent over would be great.
In the RosyWriter example, I think the reason they're able to get away without ever calling glUniform1i() for the videoframe uniform is that they bind the input texture to texture unit 0.
When specifying a uniform value for a texture sampler, the value you use is the index of the texture unit that the texture is bound to. If you don't set a value for a uniform, I believe it defaults to 0, so by always binding the texture to unit 0 they never have to set a value for the videoframe uniform. It will still pull in the texture attached to unit 0.
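For comparison, making that explicit would look something like this (a sketch; 'program' and 'texture' follow the names in the question's snippets, and the glUniform1i call is the part RosyWriter omits):

glActiveTexture(GL_TEXTURE0);                       // select texture unit 0
glBindTexture(CVOpenGLESTextureGetTarget(texture),  // bind the video frame's texture to unit 0
              CVOpenGLESTextureGetName(texture));
GLint videoframeLoc = glGetUniformLocation(program, "videoframe");
glUniform1i(videoframeLoc, 0);                      // redundant when the uniform already defaults to 0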
I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
I am not sure, however, how I should do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader is applied once to each pixel in the image, rendering to a texture or something like that. But how can I achieve that, considering there's depth that may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, binding one like this: glBindFramebuffer(...)
Use this tutorial; it targets OpenGL 2.0, but most of its features are available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection matrix, because the shader doesn't use one.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up input textures and draw the quad.
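A minimal sketch of that FBO setup, assuming an 8-bit RGBA texture as the render target (names are hypothetical):

GLuint fbo, targetTexture;

// Create the texture that will receive the rendered result.
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);      // allocate storage, no initial data
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Create the FBO and attach the texture as its colour buffer.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, targetTexture, 0);

// The completeness check mentioned above.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error before drawing
}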
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;
void main() {
tc = (at_pos+vec2(1.0))*0.5; //texture coordinates
gl_Position = vec4(at_pos,0.0,1.0); //no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;
void main() {
vec4 v = texture2D(unit_in, tc);  // fetch the input pixel
gl_FragColor = do_something(v);   // placeholder for your per-pixel operation
}
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
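For reference, the glReadPixels step looks roughly like this (a sketch, assuming the offscreen RGBA FBO is currently bound and width/height are its dimensions):

// Read the processed image back from the bound FBO for CPU-side work.
GLubyte *pixels = (GLubyte *)malloc((size_t)width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... process 'pixels' on the CPU ...
free(pixels);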
You should also check out ogles_gpgpu, which even supports Android systems. An overview of this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now; check out this post for example. Apple also now has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.
I'm trying to implement the technique described in Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is less accessible than you might think in OpenGL ES; not only is glDrawPixels absent, but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values into the depth buffer the way you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
// write a value to the depth map
gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
That stores depth in the red channel. So you've partially recreated the old depth texture extension — you'll have an image with brighter red in pixels that are closer and darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
uniform highp vec2 viewportSize; // assumed extra uniform: gl_FragCoord.xy is in window pixels, not [0, 1]
void main()
{
// read a value from the depth map
lowp vec4 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy / viewportSize);
// discard the current fragment if it is less close than the stored value
if (colourFromDepthMap.r > gl_FragCoord.w) discard;
... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
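One widely used formulation of that smarter packing (a sketch of the general technique, not the exact code from that document) spreads the value across all four 8-bit channels:

// Pack a float in [0, 1) into an 8-bit-per-channel RGBA value, and back.
vec4 packFloat(float v) {
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

float unpackFloat(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}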
I have a vertex shader in which I do a texture lookup to determine gl_Position. I am using this as part of a GPU particle simulation system, where particle positions are stored in a texture.
It seems that vec4 textureValue = texture2D(dataTexture, vec2(1.0, 1.0)); behaves differently on the simulator than on the iPad device. On the simulator, the texture lookup succeeds (the value at that location is 0.5, 0.5) and my particle appears there. However, on the iPad itself the texture lookup constantly returns 0.0, 0.0.
I have tried textures in both GL_FLOAT and GL_UNSIGNED_BYTE formats.
Has anyone else experienced this? The GLSL ES spec says that texture lookups can be done in both the vertex and fragment shaders, so I don't see what the problem is.
I am using the latest GM Beta of iOS SDK 4.2
I just tested this as well. Using iOS 4.3, you can do a texture lookup in the vertex shader on both the device and the simulator. There is a bit of strangeness, though (which is maybe why it's not "official", as szu mentioned). On the actual device (I tested on the iPad 2) you have to do a lookup in the fragment shader as well as in the vertex shader. That is, even if you are not actually using the texture in the fragment shader, you still have to reference it in some way. Here's a trivial example where I'm passing in a texture and using the red channel to reposition the y value of the vertex by a little bit:
/////fragment shader
uniform sampler2D tex; //necessary even though not actually used
void main() {
vec4 notUsed = texture2D(tex, vec2(0.0,0.0)); //necessary even though not actually used
gl_FragColor = vec4(1.0,1.0,1.0,1.0);
}
/////vertex shader
attribute vec4 position;
attribute vec2 texCoord;
uniform sampler2D tex;
void main() {
float offset = texture2D(tex, texCoord).x;
offset = (offset - 0.5) * 2.0; //map 0->1 to -1 to +1
float posx = position.x;
float posy = position.y + offset/8.0;
gl_Position = vec4(posx, posy, 0.0, 1.0);
}
I have a slightly fuller write-up of this at http://www.mat.ucsb.edu/a.forbes/blog/?p=453
From the official "OpenGL ES Programming Guide for iOS", section "Platform Notes":
"You cannot use texture lookups in a vertex shader."
This isn't the only thing that behaves differently on the simulator versus the device. I'll make you the same suggestion I make everyone else: ignore the simulator when you need to test how things look on the device. Use the simulator only to test things like logic and functionality, never appearance.
I have a feeling GLES on the iPad (or iPhone) does not support texture lookups in a vertex shader, but don't quote me.
If it does support texel lookups in vertex shaders, perhaps your texture coordinates are being clamped or wrapped? Because (1.0, 1.0) is right at the edge of the texture, IIRC.
(1.0, 1.0) in wrapping mode (GL_REPEAT) should sample as (0.0, 0.0).
(1.0, 1.0) in clamping mode (GL_CLAMP_TO_EDGE) should return the last texel.
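If wrapping turns out to be the issue, forcing clamping when you create the texture makes (1.0, 1.0) sample the edge texel (standard ES 2.0 state; 'dataTexture' as in the question):

glBindTexture(GL_TEXTURE_2D, dataTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);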
I tried this out myself, and texture2D does work on the iPad (both device and simulator) under iOS 4.2 when used in the vertex shader.
My best guess is that you have mipmapping enabled, and that is the problem. I noticed that mipmapped lookups in the vertex shader using texture2D work in the simulator but not on the device. You cannot use mipmaps with texture2D in the vertex shader, because there is no way for it to select which mipmap level to use. You need to disable mipmapping for that texture, or use texture2DLod instead, which lets you specify the mip level explicitly.
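A sketch of the texture2DLod route in the vertex shader (sampling the base level explicitly; 'dataTexture' follows the question, the attribute name is hypothetical):

attribute vec2 a_texCoord;
uniform sampler2D dataTexture;
void main() {
    // texture2DLod lets the vertex shader pick the mip level itself; here, level 0.
    vec4 particleData = texture2DLod(dataTexture, a_texCoord, 0.0);
    gl_Position = vec4(particleData.xy, 0.0, 1.0);
}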