Pass Texture to Uniform with CVOpenGLESTextureCache in OpenGL ES - ios

I'm trying to apply a video file as a texture in OpenGL ES on iOS 5.0+ using CVOpenGLESTextureCache.
I've found Apple's RosyWriter sample code, and have been reading through it.
The question I have is:
How are the textures finally being delivered to the uniforms in the fragment shader?
In the RosyWriterPreviewView class, I follow it all the way up to
glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));
after which some texture parameters are specified.
However, I don't see the texture uniform (sampler2D videoframe) ever being explicitly referenced by the sample code. The texture-sending code I've become used to would look something like:
GLint uniform = glGetUniformLocation(program, "u_uniformName");
with a subsequent call to actually send the texture to the uniform:
glUniform1i(GLint location, GLint x);
So I know that SOMEhow RosyWriter is delivering the texture to the uniform in its fragment shader, but I can't see how and where it's happening.
In fact, the sample code includes the comment where it builds up the OpenGL program:
// we don't need to get uniform locations in this example
Any help on why this is & how the texture is getting sent over would be great.

In the RosyWriter example, I think the reason they're able to get away without calling glUniform1i() at any point for the videoframe uniform is that they're binding the input texture to texture unit 0.
When specifying a uniform value for a texture, the value you use is the index of the texture unit that texture is bound to. If you don't set a value for a sampler uniform, I believe it defaults to 0, so by always binding the texture to unit 0 they never have to set a value for the videoframe uniform; it will still pull in the texture attached to unit 0.
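For comparison, here is a minimal sketch of the explicit version that RosyWriter relies on implicitly; videoframe is the sampler uniform mentioned above, while the program and texture variables are placeholders rather than identifiers taken from the sample code:
// Bind the cache-backed texture to texture unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glUseProgram(program);
// Explicitly point the sampler at unit 0. Because sampler uniforms default to 0,
// these two lines can be skipped and the shader will still read from unit 0.
GLint videoFrameUniform = glGetUniformLocation(program, "videoframe");
glUniform1i(videoFrameUniform, 0);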

Related

Reading variable from vertex shader for rendering in webgl

I want to implement a collision detector between a moving object and a static object. The way I am thinking of doing this is by checking in the vertex shader, each frame, whether any vertex of the moving object intersects the position of the static object.
Doing that would give me the point of collision inside the vertex shader, but I want to use that value for rendering purposes in the JavaScript file.
Is there a way to do this?
In WebGL 1 you cannot directly read any data back from a vertex shader. The best you can do is use the vertex shader to affect the pixels rendered by the fragment shader. For example, you could set gl_Position so that nothing is rendered if the test fails and a single pixel is rendered if it passes, or set a varying that produces certain colors based on your test results. Then you can either read the pixel back with gl.readPixels, or pass the texture you rendered to into another shader in a later draw call.
In WebGL 2 you can use transform feedback to let a vertex shader write its varyings to a buffer. You can then use that buffer in other draw calls or read its contents back with gl.getBufferSubData.
In WebGL 2 you can also do occlusion queries, which means you can try to draw something and test whether it was actually drawn or whether the depth buffer prevented it from being drawn.
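WebGL 2's occlusion queries mirror the underlying OpenGL ES 3.0 query API; as a rough sketch in C-style ES 3.0 calls (the draw helper below is hypothetical), the pattern is:
GLuint query;
glGenQueries(1, &query);
// GL_ANY_SAMPLES_PASSED records whether any fragment of the geometry drawn
// inside the query survived the depth test.
glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
drawTestGeometry(); // hypothetical helper that issues the draw call
glEndQuery(GL_ANY_SAMPLES_PASSED);
// Query results arrive asynchronously; poll for availability, then read them.
GLuint available = 0, anySamplesPassed = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
if (available)
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anySamplesPassed);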

Modifying Individual Pixels with SKShader

I am attempting to write a fragment shader for the app that I am working on. I pass my uniform into the shader which works but it works on the entire object. I want to be able to modify the object pixel by pixel. So my code now is....
let shader = SKShader( fileNamed: "Shader.fsh" );
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
m_image.shader = shader;
Here the uniform "value" will be the same for all pixels. But, for example, let's say I want to change "value" to "0.0" after a certain amount of pixels are drawn. So for example....
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
// 100 pixels are drawn
shader.addUniform( SKUniform( name: "value", float: 0.0 ) );
Is this even possible with SKShader? Would this have to be done in the shader source?
One idea I was thinking of was using an array uniform but it doesn't appear that SKShader allows this.
Thanks in advance for any help.
In general, the word uniform means unchanging — something that's the same in all cases or situations. Such is the way of shader uniforms: even though the shader code runs independently (and in parallel) for each pixel in a rendered image, the value of a uniform variable input to the shader is the same across all pixels.
While you could, in theory, pass an array of values into the shader representing the colors for every pixel, that's essentially the same as passing an image (or just setting a texture image on the sprite)... at that point you're using a shader for nothing.
Instead, you typically want your GLSL(ish*) code to, if it's doing anything based on pixel location, find out the pixel coordinates it's writing to and calculate a result based on that. In a shader for SKShader, you get pixel coordinates from the vec2 v_tex_coord shader variable.
(This looks like a decent tutorial (with links to others) for getting started on SpriteKit shaders. If you follow other tutorials or shader code libraries for help doing cool stuff with pixel shaders, you'll find ideas and algorithms you can reuse, but the ways they find the current output pixel will be different. In a shader for SpriteKit, you can usually safely replace gl_FragCoord with v_tex_coord.)
* SKShader doesn't use actual GLSL per se. It actually uses a subset of GLSL that automatically translates to appropriate GPU code for the device/renderer in use.

OpenGL ES 1 multi-texturing with different uv coordinates

I need to render an object using multi-texturing, but the two textures have different UV coordinates for the same object. One is a normal map and the other is a light map.
Please provide any useful material regarding this.
In OpenGL ES 2 you use shaders anyway, so you're completely free to use whatever texture coordinates you like. Just introduce an additional attribute for the second texture coordinate pair and pass it on to the fragment shader, as usual:
...
attribute vec2 texCoord0;
attribute vec2 texCoord1;
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
    ...
    vTexCoord0 = texCoord0;
    vTexCoord1 = texCoord1;
}
And in the fragment shader use the respective coordinates to access the textures:
...
uniform sampler2D tex0;
uniform sampler2D tex1;
...
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
    ... = texture2D(tex0, vTexCoord0);
    ... = texture2D(tex1, vTexCoord1);
}
And of course you need to provide data to this new attribute (using glVertexAttribPointer). But if all this sounds very alien to you, then you should either delve a little deeper into GLSL shaders, or you are actually using OpenGL ES 1. In that case you should retag your question and I will update my answer.
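As a rough sketch of that client-side setup (the attribute and sampler names match the shaders above; the buffer and texture variables are placeholders assumed to be created elsewhere):
GLint locTexCoord0 = glGetAttribLocation(program, "texCoord0");
GLint locTexCoord1 = glGetAttribLocation(program, "texCoord1");
// Provide a separate coordinate array for each attribute.
glBindBuffer(GL_ARRAY_BUFFER, texCoordBuffer0);
glVertexAttribPointer(locTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(locTexCoord0);
glBindBuffer(GL_ARRAY_BUFFER, texCoordBuffer1);
glVertexAttribPointer(locTexCoord1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(locTexCoord1);
// Bind each texture to its own unit and tell each sampler which unit to read from.
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, normalMapTexture);
glUniform1i(glGetUniformLocation(program, "tex0"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, lightMapTexture);
glUniform1i(glGetUniformLocation(program, "tex1"), 1);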
EDIT: According to your update for OpenGL ES 1 the situation is a bit different. I assume you already know how to use a single texture and specify texture coordinates for this, otherwise you should start there before delving into multi-texturing.
With glActiveTexture(GL_TEXTUREi) you can activate the ith texture unit. All following operations related to texture state only refer to the ith texture unit (like glBindTexture, but also glTexEnv and gl(En/Dis)able(GL_TEXTURE_2D)).
For specifying the texture coordinates you still use the glTexCoordPointer function, as with single texturing, but with glClientActiveTexture(GL_TEXTUREi) you can select the texture unit to which following calls to glTexCoordPointer and glEnableClientState(GL_TEXTURE_COORD_ARRAY) refer.
So it would be something like:
//bind and enable textures
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, <second texture>);
glTexEnv(<texture environment for second texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, <first texture>);
glTexEnv(<texture environment for first texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
//set texture coordinates
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(<texCoords for second texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(<texCoords for first texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//other arrays, like glVertexPointer, ...
glDrawArrays(...)/glDrawElements(...);
//disable arrays
glClientActiveTexture(GL_TEXTURE1);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
//disable textures
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);
The reason I set the parameters for the second texture before the first texture is only so that after setting them we end up with texture unit 0 active. I think I have seen drivers cause problems when drawing while a unit other than unit 0 was active. And it's always a good idea to leave a more or less clean state at the end, which means leaving the default texture unit (GL_TEXTURE0) active, as otherwise code that doesn't care about multi-texturing could run into problems.
EDIT: If you use immediate mode (glBegin/glEnd) instead of vertex arrays, then you don't use glTexCoordPointer, of course. In this case you also don't need glClientActiveTexture. You just use glMultiTexCoord(GL_TEXTUREi, ...) with the appropriate texture unit (GL_TEXTURE0, GL_TEXTURE1, ...) instead of glTexCoord(...). But if I'm informed correctly, OpenGL ES doesn't have immediate mode anyway.

GPGPU programming with OpenGL ES 2.0

I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
I am not sure, however, how I should do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader is applied once to each pixel of the image and renders to a texture or something. But how can I achieve that, considering there's depth involved, which may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, binding one like this: glBindFramebuffer(...)
Use this tutorial; it's targeted at OpenGL 2.0, but most features are available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection, because the shader will not need any.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up input textures and draw the quad.
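A minimal sketch of the FBO part, assuming the target texture and its dimensions (targetTexture, targetWidth, targetHeight here) already exist:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Render into the target texture by attaching it as the color buffer.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTexture, 0);
// Check completeness before drawing.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}
// Match the viewport to the target so each fragment maps to one texel.
glViewport(0, 0, targetWidth, targetHeight);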
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;
void main() {
    tc = (at_pos + vec2(1.0)) * 0.5; // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;
void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = do_something();
}
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
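The glReadPixels() readback from such an offscreen FBO looks roughly like this (a sketch; the width and height variables are placeholders, and GL_RGBA/GL_UNSIGNED_BYTE is the format combination OpenGL ES 2.0 is guaranteed to support):
// With the offscreen framebuffer still bound:
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...CPU-based processing on 'pixels' goes here...
free(pixels);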
You should also check out ogles_gpgpu, which even supports Android systems. An overview about this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for example. Apple now also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.

Writing texture data onto depth buffer

I'm trying to implement the technique described at: Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
To store depth in the red channel. So you've partially recreated the old depth texture extension — you'll have an image that has a brighter red in pixels that are closer, a darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
void main()
{
    // read a value from the depth map
    // (gl_FragCoord.xy is in window coordinates here, so in practice it would need
    // to be divided by the viewport size to get [0, 1] texture coordinates)
    lowp vec4 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy);
    // discard the current fragment if it is less close than the stored value
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;
    ... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
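If you'd rather check for that extension at runtime than rely on a device list, the usual (if slightly blunt) test is a substring search of the extension string; a small sketch:
#include <string.h>
#include <OpenGLES/ES2/gl.h>

// Returns nonzero if GL_OES_depth_texture is advertised. Must be called with a
// GL context current. Note that strstr can also match longer extension names
// that share this prefix, so this is a quick check rather than a rigorous parse.
static int hasDepthTextureExtension(void)
{
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
    return extensions != NULL && strstr(extensions, "GL_OES_depth_texture") != NULL;
}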
